Why artificial intelligence is still terrible at detecting online violence

But on Friday, when a suspected terrorist in New Zealand broadcast live video of a mass murder on Facebook, the technology was of no help. The gruesome broadcast ran for at least 17 minutes before the New Zealand police reported it to the social network. Recordings of the video and related posts then spread across social media as companies struggled to keep up.

So why can't AI, which major social networks already use to help moderate status updates, photos and videos uploaded by users, simply be deployed more broadly to remove such violence as soon as it appears?

A big reason is that, whether it is hateful written posts, pornographic material or violent images and videos, artificial intelligence is still not good at identifying offensive content online. That is largely because, while humans are good at understanding the context surrounding a status update or YouTube video, context is a difficult thing for AI to grasp.

Artificial intelligence has improved dramatically in recent years and Facebook, Twitter, YouTube, Tumblr and others increasingly rely on a combination of artificial intelligence and human moderators to control user-submitted content.

Huge volume of posts

But with the enormous volume of posts appearing on these sites every day, it is hard for even this combination of people and machines to keep up. AI still has a long way to go before it can reliably detect hate speech or violence online.

Machine learning, the AI technology these companies depend on to find objectionable content, works by identifying patterns in large piles of data; it can spot offensive language, video or images in specific contexts. That is because these types of posts follow patterns on which AI can be trained. For example, if you feed a machine-learning algorithm lots of images of guns, or lots of written religious insults, it can learn to identify those things in other images and text.
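As a rough illustration of this "learn patterns from labeled examples" idea, here is a minimal naive Bayes text classifier. The training phrases and the "offensive"/"benign" labels are toy data invented for this sketch, not drawn from any real moderation system, and real platforms use far more sophisticated models:

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    # examples: list of (text, label) pairs.
    # Count how often each word appears under each label.
    counts = {"offensive": Counter(), "benign": Counter()}
    label_totals = Counter()
    for text, label in examples:
        for tok in tokenize(text):
            counts[label][tok] += 1
        label_totals[label] += 1
    return counts, label_totals

def classify(text, counts, label_totals):
    vocab = set()
    for c in counts.values():
        vocab.update(c)
    scores = {}
    for label in counts:
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(label_totals[label] / sum(label_totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for tok in tokenize(text):
            score += math.log((counts[label][tok] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

# Toy training data (hypothetical).
examples = [
    ("buy guns here", "offensive"),
    ("guns and violence", "offensive"),
    ("cute cat photos", "benign"),
    ("photos of my vacation", "benign"),
]
counts, label_totals = train(examples)
print(classify("violence with guns", counts, label_totals))   # offensive
print(classify("my vacation photos", counts, label_totals))   # benign
```

The classifier flags new text that shares word patterns with the "offensive" examples, which is exactly why it works for pattern-heavy content but has no notion of who wrote the post or its surrounding context, as the article goes on to explain.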

However, AI is not good at understanding things like who is writing or uploading an image, or what might be important in the surrounding social or cultural context.

Especially when it comes to language that incites violence, the context is "very important," said Daniel Lowd, associate professor at the University of Oregon who studies artificial intelligence and machine learning.

Comments may look violent on the surface but in fact be satire protesting violence. Or they may seem benign, yet be recognizable as dangerous to someone with knowledge of recent news or of the local culture in which they were created.

"Much of the impact of a few words depends on the cultural context," said Lowd, pointing out that even human moderators have trouble parsing that context on social networks.

Identify violence

Even when violence appears to be shown in a video, it is not always so clear-cut that a human, let alone a trained machine, can spot it or decide what best to do about it. A weapon might not be visible in a video or photo, or what looks like violence could actually be a simulation.

In addition, factors such as lighting or background imagery can throw off a computer.


It is computationally complicated to use artificial intelligence to find violence in video, in particular, said Sarah T. Roberts, an assistant professor at UCLA who is researching content moderation and social media.

"The complexity of that medium, the specifics around things like not just how many frames per second, but then adding things like deriving the meaning of what was recorded, is very difficult," she said.

It is not simply that using artificial intelligence to glean the meaning of a video is difficult, she said; it must also be done at the enormous volume of video that social networks see day after day. On YouTube, for example, users upload over 400 hours of video per minute, or more than 576,000 hours a day.
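The daily figure follows directly from the per-minute upload rate the article cites:

```python
# YouTube upload rate cited in the article.
hours_per_minute = 400

# 60 minutes per hour, 24 hours per day.
hours_per_day = hours_per_minute * 60 * 24
print(hours_per_day)  # 576000
```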

"Hundreds of thousands of hours of video are these companies' stock in trade," Roberts said. "That is what they solicit and what they want."


