Can't AI, which is already used by major social networks to help moderate the status updates, photos and videos uploaded by users, simply be deployed more broadly to remove such violence as soon as it appears?
A big reason is that artificial intelligence is not yet reliable at identifying offensive content online, whether hateful written posts, pornographic material, or violent images and videos. This is largely because, while humans are good at understanding the context surrounding a status update or YouTube video, context is a difficult thing for AI to grasp.
Huge volume of posts
But with an enormous volume of posts appearing on these sites every day, even this combination of people and machines struggles to keep up. AI still has a long way to go before it can reliably detect hate speech or violence online.
Machine learning, the AI technology companies rely on to find objectionable content, works by identifying patterns in large sets of data; it can flag offensive language, videos or images in specific contexts. That is because these kinds of posts follow patterns on which artificial intelligence can be trained. For example, if you feed a machine-learning algorithm a lot of images of guns or written religious insults, it can learn to identify those things in other images and text.
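To make the idea concrete, here is a minimal sketch of the pattern-learning approach described above, using a toy Naive Bayes text classifier written from scratch. The training phrases, labels, and function names are hypothetical illustrations, not any company's actual moderation system, which would train on millions of labeled posts and far richer features.

```python
from collections import Counter
import math

def train(samples):
    """Count word frequencies per label (a toy Naive Bayes trainer)."""
    counts = {"offensive": Counter(), "benign": Counter()}
    totals = {"offensive": 0, "benign": 0}
    for text, label in samples:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label whose learned word patterns best match the text."""
    vocab = len(set(counts["offensive"]) | set(counts["benign"]))
    scores = {}
    for label in counts:
        score = 0.0
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out a label
            score += math.log((counts[label][word] + 1) / (totals[label] + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical toy training data; real systems use millions of examples.
training = [
    ("i will hurt you", "offensive"),
    ("you deserve violence", "offensive"),
    ("have a nice day", "benign"),
    ("great photo thanks for sharing", "benign"),
]
counts, totals = train(training)
print(classify("i will hurt them", counts, totals))  # -> offensive
```

Note that this is exactly where the article's point about context bites: a classifier like this sees only word patterns, so a satirical or quoted use of the same words would be scored identically.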
However, AI is not good at understanding things like who is writing or uploading an image, or what might be important in the surrounding social or cultural context.
Especially when it comes to language that incites violence, the context is "very important," said Daniel Lowd, associate professor at the University of Oregon who studies artificial intelligence and machine learning.
Comments may seem violent on the surface but actually be satire protesting violence. Or they may seem benign yet be recognizable as dangerous to someone with knowledge of recent news or the local culture in which they were created.
"Much of the impact of a few words depends on the cultural context," said Lowd, noting that even human moderators have difficulty parsing it on social networks.
Even when violence appears to be shown in a video, it is not always so clear-cut that a human, let alone a trained machine, can spot it or decide what best to do with it. A weapon might not be visible in a video or photo, or what appears to be violence could actually be a simulation.
In addition, factors such as lighting or background imagery can throw off a computer.
Using artificial intelligence to find violence in video, in particular, is computationally demanding, said Sarah T. Roberts, an assistant professor at UCLA who researches content moderation and social media.
"The complexity of that medium, the specificities around things like not just how many frames per second there are, but then adding things like making meaning of what was recorded, is very difficult," she said.
"Hundreds of thousands of hours of video are the stock in trade of these companies," Roberts said. "That is actually what they solicit and what they want."