The complexity of social network content filtering is rising.

When social networks relied mostly on text, as Facebook and Twitter did, or on photos, as Instagram did for years, technology companies' efforts to identify and delete violent, dangerous, or misleading content on their platforms already proved insufficient. Today, when millions of users spend much of their time on these platforms watching video almost continuously, content moderation has become an extraordinary challenge for the companies themselves. Technology corporations have historically downplayed this complicated task because of the financial investment it requires: hiring moderators who are familiar with the language and context in which content is created is difficult and expensive, especially given the need to protect the mental health of employees who are exposed to potentially traumatic material.

Although video moderation still faces significant technological constraints, automated moderation tools are becoming more prevalent. Artificial intelligence in this context analyzes images by comparing them to the collection of images on which it has been “trained”: in this way it can recognize, for example, whether a person in a photo is naked or clothed, or whether they are holding a weapon. This approach is harder to apply, and insufficient on its own, when dealing with videos, which are sequences of many individual images. A video’s individual frames might not in themselves violate a platform’s rules, but combined they can build a narrative that is harmful. For instance, a TikTok video claiming that grapefruits could be used to make cocaine was widely shared: the grapefruit images in the video did not violate the platform’s rules, but in context they produced misleading content. Conversely, a system that looks for firearms in every video would flag footage of two people debating the price of an antique weapon, while, as The Atlantic notes, it would fail to recognize a video of someone being shot by a gun held out of frame. Even YouTube, which probably has the most experience with automatic video moderation, still serves millions of views every day to videos that violate its policies.
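The limitation described above can be illustrated with a toy sketch: a moderator that classifies each frame in isolation passes every frame of a video whose sequence, taken together, tells a harmful story. All names and labels here are hypothetical stand-ins; real systems run trained vision models, not pre-labeled strings.

```python
# Hypothetical labels a policy might ban; real platforms use far richer taxonomies.
BANNED_LABELS = {"weapon", "nudity"}

def labels_for_frame(frame: str) -> set[str]:
    """Stand-in for an image classifier: returns the labels seen in one frame.
    For illustration, frames are just comma-separated label strings."""
    return set(frame.split(","))

def frame_by_frame_moderation(frames: list[str]) -> bool:
    """Flags a video only if some single frame carries a banned label —
    the per-image approach the article describes."""
    return any(labels_for_frame(f) & BANNED_LABELS for f in frames)

# The grapefruit example: no individual frame violates policy,
# so the per-frame check passes the whole video.
frames = ["grapefruit", "kitchen", "grapefruit,text_overlay"]
print(frame_by_frame_moderation(frames))  # False: every frame looks benign

# A frame that directly shows a weapon, by contrast, is caught.
print(frame_by_frame_moderation(["person,weapon"]))  # True
```

The harmful meaning lives in the *sequence* and its narration, which this per-frame check never sees, mirroring the grapefruit and out-of-frame-gun examples above.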

Following TikTok’s success, platforms, led by Instagram and YouTube, have invested heavily in short videos. The shorts themselves have grown more complex, evolving into what are effectively three-dimensional memes that combine audio, visuals, and text. This complicates moderation considerably, for humans as well as machines. “Neither a human nor a machine would have an issue with footage of a barren landscape with wind-blown weeds rolling around, any more than they would with an audio clip of someone saying, ‘Look how many people adore you,’” The Atlantic observes; but combine the two to insult someone by declaring that no one loves them, and a machine would never comprehend it. Something similar happened with a widely shared TikTok video of American First Lady Jill Biden visiting cancer patients, in which whistles and cackles, added by a user in post-production, could be heard in the background. This is why only 40% of the videos deleted from social networks like TikTok are removed by AI; the other checks, numbering in the millions, are still carried out by real people, with all the consequences that entails.

Because platforms consider only the quantity of user interactions, not their quality, when recommending videos, they paradoxically tend to amplify content that is upsetting or false. If a user repeatedly watched a video or commented on it because they found it unjust, the platform will still score that engagement as “good” and risk amplifying related content in their feed.
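The ranking flaw just described can be sketched in a few lines: an engagement score that counts interactions without weighing their sentiment treats an outraged comment the same as an approving one. The field names and weights below are invented for illustration; no platform’s actual formula is shown.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    kind: str        # "view", "comment", or "share"
    sentiment: str   # "positive" or "negative" — known here, but ignored below

# Hypothetical weights: comments and shares count more than views.
WEIGHTS = {"view": 1, "comment": 3, "share": 5}

def engagement_score(interactions: list[Interaction]) -> int:
    """Quantity-only scoring: sentiment plays no role in the total."""
    return sum(WEIGHTS[i.kind] for i in interactions)

# A user who repeatedly watches and angrily comments on a video they
# find unjust still pushes its score up — and similar content follows.
outraged = [Interaction("view", "negative")] * 4 + [Interaction("comment", "negative")]
approving = [Interaction("view", "positive")] * 2

print(engagement_score(outraged))   # 7
print(engagement_score(approving))  # 2
```

Since 7 > 2, the quantity-only ranker would push the upsetting video harder than the approved one, which is exactly the paradox the paragraph above describes.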
The New York Times warns that “continued exposure to modified content might promote polarization and diminish viewers’ ability and willingness to discriminate between truth and fabrication.” The risk of misleading content, according to journalist Tiffany Hsu, who covers the internet and false information, “lies not only in individual posts, but in the way it further erodes the ability of many users to determine what is real and what is not.” It is therefore no surprise that author Clay A. Johnson, in his book The Information Diet, compares the current flood of low-quality information to junk food: appealing content with little “nutritional value” for the intellect.

