
Clear App Deletes Problematic Social Media History


Most of us have been there: we have posted something inappropriate on our social media feeds, whether because we were young and did not know any better, because we had PMS or were hungover, or simply because we were having a grumpy day. The internet and social media have been around for a while, but unlike the younger generation, most people did not grow up with them, and many only recently became aware that what happens on Facebook does not stay on Facebook.


One of the people who had to learn this fact the hard way was Ethan Czahor. In early 2015 he was hired as a digital advisor for Jeb Bush's campaign, only to be out of the job within days. The reason: Czahor had published a few inappropriate tweets and blog posts years earlier, and once he was hired, the opposition was quick to search for and find those comments.

Czahor rose and fell quickly, lasting only 36 hours in the role. After being publicly humiliated and fired, he put his energy into turning things around for himself, and only a few months later he rose like a phoenix from the ashes. In April 2015 he launched an app called “Clear.” The app’s self-proclaimed purpose is to protect people from having to experience the same fate as its creator.

Czahor, who insists that his comments were meant humorously and were never intended to offend anyone, developed an app that allows users to delete embarrassing or inappropriate posts on Twitter, Facebook, and Instagram. The app searches posts for keywords that could be considered politically incorrect and then offers to delete those posts completely.
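
To give a rough sense of what that kind of keyword screening looks like, here is a minimal sketch in Python. The `Post` structure and the keyword list are purely illustrative assumptions, not taken from the Clear app itself, which pulls real posts from the social networks’ APIs.

```python
from dataclasses import dataclass

# Hypothetical stand-in for a social media post; the real app would fetch
# these from Twitter, Facebook, or Instagram through their APIs.
@dataclass
class Post:
    post_id: str
    text: str

# Purely illustrative list of keywords that might warrant a second look.
FLAGGED_KEYWORDS = {"hungover", "drunk", "hate", "stupid"}

def flag_posts(posts):
    """Return the posts containing any flagged keyword, for the user to review or delete."""
    flagged = []
    for post in posts:
        words = {w.strip(".,!?").lower() for w in post.text.split()}
        if words & FLAGGED_KEYWORDS:
            flagged.append(post)
    return flagged

if __name__ == "__main__":
    history = [
        Post("1", "So hungover after last night..."),
        Post("2", "Lovely walk in the park today."),
    ]
    for post in flag_posts(history):
        print(f"Review post {post.post_id}: {post.text}")
```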

The app also uses sentiment analysis, which is possible because it relies on Watson, IBM’s supercomputer. And Watson should know, since it won Jeopardy. Through sentiment analysis, the app tries to understand what a post was supposed to mean.

As of now, the algorithm is still not perfect, and the grading scale is a bit confusing. Once the app is out of beta, the grading scale is supposed to run from zero to one hundred: the higher the score, the safer the profile.

But even with Watson’s help, the app will have to go through a learning curve in order to understand which posts really are offensive and when you have only used the keyword “black” to describe your snazzy new shoes.
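
One way to picture how sentiment could temper raw keyword matching is sketched below. This is not Clear’s actual algorithm: the scoring formula is an assumption, and the hard-coded sentiment values stand in for what a service like Watson might return (-1.0 very negative, +1.0 very positive). It only illustrates how a keyword in a cheerful post about black shoes can be scored as far less risky than the same keyword in an angry one, and how post-level risk might roll up into a zero-to-one-hundred profile score.

```python
def post_risk(text: str, sentiment: float, keywords=("black", "drunk", "hate")) -> float:
    """Return a risk value between 0 and 1 for a single post (hypothetical heuristic)."""
    has_keyword = any(k in text.lower() for k in keywords)
    if not has_keyword:
        return 0.0
    # A keyword inside a positive or neutral post ("snazzy new black shoes")
    # is treated as much lower risk than one inside a negative post.
    negativity = max(0.0, -sentiment)  # 0.0 for neutral or positive posts
    return 0.2 + 0.8 * negativity

def profile_score(posts) -> int:
    """Map the average post risk to a 0-100 profile score; 100 means nothing risky was found."""
    if not posts:
        return 100
    avg_risk = sum(post_risk(text, sentiment) for text, sentiment in posts) / len(posts)
    return round(100 * (1 - avg_risk))

if __name__ == "__main__":
    history = [
        ("Love my snazzy new black shoes!", 0.8),         # keyword hit, but positive tone
        ("I hate everyone at this stupid party", -0.9),   # keyword hit and negative tone
    ]
    print(profile_score(history))  # prints a score between 0 and 100
```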

Since Czahor is adamant that his statements were taken out of context and were by no means meant to be insulting or offensive, he is equally adamant that his app is not meant to protect people who really are using racist slurs or other comments along those lines. He insists, likely rightly so, that many millennials have posted comments at some point that will come back to haunt them years later.

A picture of a drunken night out might have seemed innocent at the time, but since more and more recruiters look at a candidate’s social media history when considering them for a job, candidates might not want their future bosses to see them chugging tequila shots. Strictly speaking, it is not legal in every country for prospective employers to look at candidates’ profiles, but that does not mean it does not happen.

Czahor states that one of the purposes of the app is to make users aware of their social media history. In some cases they might not even remember having posted that one comment ten years ago, and Czahor feels it is unfair if they are later judged by that comment alone.

Since the algorithm cannot catch everything that might be judged offensive in a specific context, it is unlikely to actually save careers, especially since there are ways to find deleted tweets, and once they have been picked up by others it becomes nearly impossible to erase them. The app could, however, make users aware of the pitfalls of social media, something that will only become more important given how much people share on their accounts and then forget about.