Losing access to YouTube, Twitch, and Reddit is not trivial. It means hate-mongers cannot use these mainstream forms of social media to reach new audiences and harass their targets into silence. Because many of the men who were expelled rely on their own names to draw attention and donations, it will be difficult for them to sneak back onto these platforms and develop the same prominence they had before. Already this week, other racist and sexist communities and accounts that produce “borderline content” have been deleting old videos and increasing moderation. Do not mistake this for a “chilling effect,” though; it’s more like an acknowledgement that hateful instigation and harassment of women, people of color, and LGBTQ users will no longer be hidden under the guise of free speech.
Though Twitter and Facebook were not a part of Monday’s anti-hate measures, the purge built on weeks of smaller actions that they’d taken. On Tuesday, Facebook followed up with bans and account removals for individuals associated with the Boogaloo faction, an anti-government group that has been showing up to Black Lives Matter rallies heavily armed and dressed in Hawaiian shirts. The purge also comes amid two public health crises: Covid-19 and systemic racism. Covid-19 appears to have led all the tech companies to move more aggressively, and more concertedly, in recent months; and it laid the groundwork for their taking more action against racist content, as the nation wakes up to the urgency and ubiquity of white supremacy.
Not so long ago, before the pandemic hit, each platform would only tend to its specific user base, keeping up with a triple bottom line by balancing profits with social and environmental impact. Now, having witnessed the terrifying results of unchecked medical misinformation, the same companies understand the importance of ensuring access to timely, local, and relevant facts. After accepting that truth with regard to medical misinformation, it’s impossible to ignore that unchecked racist and misogynist content is terrifying, too, when it’s left out there for anyone to discover at any time. Sadly, we know the violence it brings in its wake. Sadly, we know that the twin crises of racism and Covid-19 are deeply intertwined.
We have seen purges before from YouTube, Twitter, and Facebook. To maximize the benefits of this kind of action, though, we need a plan for what comes next. We can’t let the gains from this great expulsion dissipate as political pressure mounts on tech companies to enforce a false neutrality about racist and misogynist content.
In April 2018, Zuckerberg addressed Congress and told lawmakers that Facebook would soon have 20,000 people working in security and content moderation. But without a strategy for how to curate content, tech companies will always be one step behind media manipulators, disinformers, and purveyors of hate and fear. Moderation is a plan to remove what is harmful, whereas curation actively seeks out what is helpful, contextual, and, most importantly, truthful.
Truth needs an advocate, and it should come in the form of an enormous flock of librarians descending on Silicon Valley to create the internet we deserve, an information ecosystem that serves the people. The blessing and curse of social media is that it must remain open so we can reap the most benefits; but openness must be tempered with the strong and consistent curation and moderation that these librarians could provide, so that everyone’s voice is protected and amplified.
It is the duty of platform companies to curate content on contentious topics so that their systems do not amplify hate or make it profitable. Tech companies that refuse to adapt for the culture will become obsolete.
WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at opinion@wired.com.