Can algorithms help us detect fake news?

Fake news: can algorithms expose fake news?

The researchers then asked other participants to assess which of the stories were lies. They identified them with an accuracy of 53 percent - they might as well have tossed a coin. "People are bad at recognizing lies," says Rubin. "We have a bias toward belief: we like to believe everything, and we don't look for lies." An algorithm, on the other hand, which Rubin then trained on the same true and false stories, apparently found patterns in the texts that betrayed the lies: it classified 63 percent of them correctly. That is still far from reliable, but it comes down to what serves as the benchmark for the "real certainty" demanded by "Wired". Human judgment? By that measure, the machines are already better today.
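Rubin's study does not disclose the model itself, but the basic setup she describes, training a classifier on a corpus of labeled true and false stories, can be sketched with a minimal bag-of-words Naive Bayes classifier. Everything below (the toy corpus, the labels, the function names) is an invented illustration, not the study's actual method or data:

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(docs):
    """docs: list of (text, label). Returns per-label word counts and doc totals."""
    counts = {}          # label -> Counter of word frequencies
    totals = Counter()   # label -> number of documents
    for text, label in docs:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with add-one smoothing; returns the most likely label."""
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(totals.values())
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        score = math.log(totals[label] / n_docs)        # class prior
        denom = sum(c.values()) + len(vocab)            # smoothed denominator
        for w in tokenize(text):
            score += math.log((c[w] + 1) / denom)       # smoothed likelihood
        if score > best_score:
            best, best_score = label, score
    return best

# Toy, invented training data - a stand-in for the study's story corpus.
docs = [
    ("i saw it with my own eyes believe me", "lie"),
    ("it felt amazing you would not believe it", "lie"),
    ("the meeting started at nine and ended at ten", "true"),
    ("we reviewed the report and filed it on monday", "true"),
]
counts, totals = train(docs)
print(classify("believe me i saw it", counts, totals))  # -> lie
```

On a real corpus one would hold out test stories and measure accuracy the same way the study reports it: the fraction of held-out texts assigned the correct label.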

"People are bad at recognizing lies" (Victoria Rubin)

In the following years, the researchers used various other training data, for example satirical articles, which also contain "false" claims - at least if one misses the irony and satire in them, which many people do. In 2016, an algorithm by Rubin and her team recognized satire with a reliability of 87 percent - and when the researchers added hand-coded rules, such as that the first and last sentences of a satirical article usually contain particularly clear cues, they reached an accuracy of 91 percent. "Until recently, it was said that only people can recognize irony, sarcasm and emotions," says Rubin. So why should lie detection remain a human domain?
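The extra rule described above, giving the first and last sentences more weight, can be illustrated with a toy scoring function. The cue-word list, the weights, and the function name here are invented placeholders, not the team's actual features:

```python
import re

# Hypothetical satire cue words - illustrative only, not Rubin's feature set.
SATIRE_CUES = {"reportedly", "sources", "shocking", "literally", "totally"}

def satire_score(article, edge_weight=2.0):
    """Sum satire cues per sentence, weighting the first and last
    sentences more heavily, following the heuristic described above."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", article.strip()) if s]
    score = 0.0
    for i, sent in enumerate(sentences):
        words = set(sent.lower().strip(".!?").split())
        hits = len(words & SATIRE_CUES)
        # Opening and closing sentences count double in this sketch.
        weight = edge_weight if i in (0, len(sentences) - 1) else 1.0
        score += weight * hits
    return score
```

In a real system such a score would be one feature among many fed into a classifier, with a decision threshold tuned on labeled data.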

But which patterns do the machines pick up? How do they catch people lying? The researchers have only partially deciphered that - and it depends heavily on the form the lies take. In conversation, for example, people who are fibbing express themselves more verbosely than average, they use more words related to the senses, such as "see" or "feel", and their sentences are on average less complex. Board members of large corporations, by contrast, use expressions such as "fantastic" or "excellent" more often in public speeches when they are deliberately lying.
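The surface cues mentioned here (verbosity, sense-related words, sentence complexity) are straightforward to compute. The sketch below uses an invented mini-lexicon and simple proxies; it is not the researchers' actual feature set:

```python
import re

# Illustrative cue lexicon: words tied to the senses, as mentioned above.
SENSE_WORDS = {"see", "saw", "seen", "feel", "felt", "hear", "heard", "touch"}

def deception_cues(text):
    """Extract simple surface cues: word count (verbosity), the rate of
    sense-related words, and average sentence length as a rough proxy
    for sentence complexity."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    words = re.findall(r"[a-z']+", text.lower())
    sense_rate = sum(w in SENSE_WORDS for w in words) / max(len(words), 1)
    avg_sentence_len = len(words) / max(len(sentences), 1)
    return {
        "n_words": len(words),
        "sense_rate": round(sense_rate, 3),
        "avg_sentence_len": round(avg_sentence_len, 2),
    }
```

Features like these would typically be compared against a baseline from known truthful speech, since "more verbose than average" only has meaning relative to such a reference.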

What is fake news?

But that also shows one thing: a lie is not just a lie. Evidently, the kind of untruth and the situations in question must be defined very precisely for the algorithms to have a chance. "That alone is a huge problem," former Echo developer Rao told "Wired". "We have to spend a lot of time just defining what exactly fake news is." And that is perhaps the greatest challenge: pattern-recognition algorithms need a well-defined problem.

"Of course we have to determine exactly what we are looking for," says Rubin. But fake news is not clearly defined: sometimes it is rumors, sometimes deliberate lies, sometimes misunderstood satire or irony. And sometimes simply misunderstandings. An algorithm that exposes deliberate lies fairly well fails when someone believes those lies and retells them in their own words, because the speaker then shows none of the predictors that Rubin and her machine-learning methods found in deliberate untruths. Such a person speaks as if telling the truth - because to them it is the truth. Only the context can then mark the content as true or false. And that is exactly where algorithms struggle: grasping context intuitively is a human gift that computers cannot easily acquire.

But on closer inspection, the researchers are not so far apart in the end. Even though Pomerleau is certain he will win his bet (and rightly so, if only because 90 percent accuracy is a major challenge), he believes that a combination of human and machine is the best remedy for fake news. Algorithms could help contain the problem, he hopes, by flagging potential hoaxes and then leaving it to people to judge which of the messages are actually problematic. Delip Rao, who raised Pomerleau's bet by another US$1,000, agrees: "We need humans in the system; the judgment of experts is irreplaceable."