Security vs. Censorship – Monitoring Social Content

In recent months, social media platforms have come under intense criticism for their handling of content that depicts violence or promotes online harassment.

Facebook CEO Mark Zuckerberg announced earlier this month that the company will hire 3,000 additional employees over the next year to monitor and respond to reports of harm and harassment on the platform. This new wave of hires – a nearly 70 percent increase in the staff currently dedicated to the area – is largely a response to recent acts of violence committed through the Facebook Live feature, with many saying the company has a social responsibility to prevent and react to such incidents.

Twitter has faced similar allegations of online harassment within the past year. Incidents such as comedian and actress Leslie Jones being targeted with racial slurs have sparked conversation about further steps the platform should take to ensure that harassers are reprimanded for their actions, or even blocked from the social channel. Earlier this year, Twitter announced the rollout of new reporting tools intended to ensure better follow-up on harassment claims, but many say such measures are not enough.

Violence and harassment perpetrated across social media platforms are undoubtedly wrong and should be addressed. The question at hand, however, is how far platforms can go to protect users.

Some view the technology as a new line of defense against violence. Will the protocol of Facebook’s new content monitoring group simply be to remove or suspend content that promotes violence, or could it also entail contacting local police based on geolocation? The platform could then not only limit the spread of violent content but also respond to – and potentially prevent – the incidents themselves through greater security measures.

Many fear that social media platforms may turn security into a 1984-esque, dystopian form of censorship. Twitter has been accused of removing tweets critical of United Airlines in the wake of its April debacle, in which a man was physically removed from an overbooked plane. Facebook was previously accused of removing posts featuring the iconic Vietnam War photo “Napalm Girl” because the image was deemed inappropriate. To what extent should social media platforms decide what can and cannot be shared?

Regardless of one’s viewpoint on how companies should approach the issue, social channels have a civic responsibility to ensure that their users are safe and protected.
