BUSINESS

Can the AI that makes ‘fake news’ viral also help filter it?

February 15, 2018

DIGITAL specialists across the world are making rapid advances in using machine learning to manipulate video, images, and sound. The software ‘imagines’ new content from hundreds of input examples, which the computer scans to produce a new composite: a counterfeit version of reality.

Bombarded by some 300 million photo uploads on Facebook and 55 billion messages on WhatsApp every day, the average human brain is unlikely, over time, to be able to distinguish the real from the fake. The question is: Can the tables be turned, with artificial intelligence redeployed to help us tell the difference between what’s real and what isn’t?

Research and trend analysis by Gartner forecasts that by 2020, AI-driven creation of ‘counterfeit reality’, or fake content, will outpace AI’s ability to detect it, fomenting digital distrust. Most of this fake reality will be delivered straight to our smartphones via platforms such as Facebook, WhatsApp, and YouTube.

Gartner defines counterfeit reality as digital media manipulated to portray events that never occurred, or that did not occur in the manner in which they are presented. AI can already categorize images about as accurately as humans, and at far greater speed. And although AI offers the best chance of detecting and fighting counterfeit reality, it has also made that reality easier to create, making the technology a double-edged sword. “Unfortunately, the ability to detect lags behind the ability to create,” Gartner says in its report.

CHALLENGE FOR COMPANIES

By 2022, the majority of individuals in mature economies will consume more false information than true information, Gartner predicts. This raises challenges for the corporate sector as well. “With an increasing amount of fake news, companies need to closely monitor what is being said about their brand and the context in which it is being said. Brands will need to cultivate a pattern of behavior and values that will reduce the ability of others to undermine the brand,” said Daryl Plummer, vice president and Gartner Fellow.

This is relevant in the context of social platforms setting up business-only versions of their products. Neeraj Arora, the 35-year-old vice president at WhatsApp, the company that Facebook acquired for $19 billion, spoke at a session titled ‘55 Billion Messages a Day: The Story of WhatsApp’ at the World Government Summit 2018, on Feb. 11, about the launch of WhatsApp Business: “With the app, businesses can interact with customers easily by using tools to automate, sort, and quickly respond to messages.”

An AI technology known as deep learning has proved very powerful at solving problems and has been widely deployed for tasks like image captioning, voice recognition, and language translation. The same technology may eventually be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation.
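
To give a rough sense of what this looks like in practice, here is a minimal sketch, in Python, of the kind of image recognition deep learning makes routine. It uses the freely available PyTorch and torchvision libraries and a network pretrained on public data; the file name ‘photo.jpg’ is only a placeholder.

import torch
from torchvision import models, transforms
from PIL import Image

# Load a convolutional network pretrained on the ImageNet photo collection.
model = models.resnet18(pretrained=True)
model.eval()

# Standard ImageNet preprocessing: resize, crop, convert, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")   # placeholder file name
batch = preprocess(image).unsqueeze(0)           # add a batch dimension

with torch.no_grad():
    probabilities = torch.softmax(model(batch), dim=1)

confidence, class_index = probabilities.max(dim=1)
print(f"Predicted ImageNet class {class_index.item()} "
      f"with confidence {confidence.item():.2f}")

A few lines like these, plus a pretrained model, are all it takes to label a photograph; the hard work has already been done during training on millions of examples.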

MANIPULATION MARCHES ON

At the other end of the spectrum, however, the technologies that make generating fake news easier and more accessible to the layperson are advancing just as rapidly.

In 2017, we saw the release of FaceApp, a smartphone app that can automatically modify someone’s face to add a smile, add or subtract years, or swap genders. The app can also apply ‘beautifying’ effects that include smoothing out wrinkles and, more controversially, lightening the skin.

The same year, a company spun out of the University of Montreal demonstrated technology that it says can be used to impersonate another person’s voice. The company posted demonstration clips of Barack Obama, Donald Trump, and Hillary Clinton — all ‘endorsing’ the technology.

These are examples of how the most powerful AI algorithms can be used to generate content rather than simply analyze data.

Powerful graphics hardware and software, as well as new video-capture technologies, are also driving this trend. Also in 2017, researchers at Stanford University demonstrated a facial-reenactment program called Face2Face. The system can manipulate video footage so that a person’s facial expressions match those of someone else being tracked by a depth-sensing camera. The result is eerily realistic.

These applications use deep generative convolutional networks to perform their tricks. The technology has emerged in recent years as a way of getting algorithms to go beyond just learning to classify things and start generating plausible data of their own.
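
FaceApp, Face2Face, and the voice-cloning demonstrations are all proprietary, but the underlying idea of a generative network can be sketched in a few lines. The following illustrative Python/PyTorch code pairs a small generator with a discriminator in the style of a DCGAN; the layer sizes and image resolution are made up for brevity and do not reflect any particular product.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Turns a random noise vector into a 32x32 RGB image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how likely a 32x32 image is to be real (1) rather than generated (0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 1, 4, 1, 0), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(-1)

# One adversarial step: the generator invents images from noise,
# and the discriminator tries to tell them apart from real photographs.
generator, discriminator = Generator(), Discriminator()
noise = torch.randn(8, 100, 1, 1)
fake_images = generator(noise)               # shape: (8, 3, 32, 32)
realism_scores = discriminator(fake_images)  # shape: (8,)
print(fake_images.shape, realism_scores.shape)

Trained against each other on real photographs, the two networks push one another until the generator’s output becomes hard to distinguish from the genuine article, which is precisely what makes the technique so useful to fakers.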

CALLING OUT THE FAKERS

But can researchers use the same technologies in reverse, to detect ‘news’ that has been manipulated or even created from scratch?

Justus Thies, a doctoral student at Friedrich Alexander University in Germany and one of the researchers behind Face2Face, says he has started a project aimed at detecting manipulation of video. “Intermediate results look promising,” he says.

Yaroslav Goncharov, the CEO of FaceApp, says people will just have to learn to stop taking videos at face value. “If ordinary people can create such content themselves, I hope it will make people pay more attention to verifying any information they consume,” he said in an interview. “Right now, a lot of heavily modified/fake content is produced and it goes under the radar.”

Thies has argued that big tech platforms like Facebook have a duty to proactively police for fraudulent media. “Social-media companies as well as the classical media companies have the responsibility to develop and set up fraud detection systems to prevent the spreading/sharing of misinformation,” he said.

One such tool is AdVerif.ai, developed by a start-up of the same name. The AI software is built to detect phony stories, nudity, malware, and other types of problematic content. AdVerif.ai launched a beta version in November 2017 and works with content platforms and advertising networks in the United States and Europe.

According to Or Levi, the founder of AdVerif.ai, individual consumers might not worry about the veracity of each story they click on, but advertisers and content platforms have something to lose by hosting or advertising bad content. “If they make changes to their services, they can be effective in cutting off revenue streams for people who earn money creating fake news. It would be a big step in fighting this type of content,” Levi said in an interview with MIT Technology Review.

AdVerif.ai’s FakeRank software combines knowledge drawn from the Internet with deep learning and natural-language-processing techniques to understand the meaning of a news story and verify that it is supported by facts. Levi says he eventually plans to add the ability to spot manipulated images, along with a browser plug-in.
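
FakeRank’s internals are not public, so the following Python sketch only illustrates the general idea of automated fact-checking: an off-the-shelf natural-language-inference model (here, the publicly available ‘roberta-large-mnli’ from the Hugging Face transformers library) judges whether a piece of evidence supports, contradicts, or is neutral toward a claim. The claim and evidence strings are invented examples.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Publicly available natural-language-inference model (not FakeRank itself).
model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Invented example: does the evidence support the claim?
evidence = "In 2017 the city introduced a congestion charge but did not ban cars."
claim = "The city banned all private cars from its center in 2017."

inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    probabilities = torch.softmax(model(**inputs).logits, dim=1)[0]

# The model's own label map tells us which score belongs to which verdict.
for index, probability in enumerate(probabilities):
    print(f"{model.config.id2label[index]}: {probability:.2f}")

A production system along these lines would still need to retrieve trustworthy evidence automatically and combine scores across many sources; the snippet shows only the verification step.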

A PLATFORM RESPONSIBILITY

Both Google and Facebook have announced separate initiatives to fight the epidemic of fake news proliferating on their platforms. Expect to see optional plug-ins for your browsing, messaging, and sharing apps that tag potential fake news with warnings. — SG

