Online platforms face unavoidable Trust and Safety responsibilities, particularly in maintaining election integrity by combating disinformation and other dangers. AI further complicates these challenges, not only through the creation of deepfakes but also by lowering the barrier for malicious actors.
Amid the conflict between Hamas and Israel, a disturbing surge in antisemitic and Islamophobic hate speech has swept across social media platforms. Extremist influences, fueled by the ongoing fighting, have played a significant role in exacerbating this alarming rise in hate speech online.
The shift away from in-house trust and safety teams has created an opportunity for consultancies and startups to introduce something novel: trust and safety as a service.
When the militant group Hamas launched a devastating surprise attack on Israel on Oct. 7, some fighters breached the country’s defenses in motorized paragliders. In the following days, photos and illustrations of Hamas forces coasting by wing became highly charged, controversial symbols: an emblem of Palestinian resistance to some, a glorification of terrorism to others.
The startup ActiveFence, a trust and safety provider for online platforms, is one company sounding the alarm about how predators are abusing generative AI, and helping others in the tech industry navigate the risks posed by these models.
TikTok became the world’s window into the conflict in Israel. Clips from a music festival in southern Israel, where 260 attendees were killed and more were taken hostage according to the Israeli rescue agency Zaka, broke through the algorithm’s regularly scheduled lighthearted programming. For the most part, Noam Schwartz thinks TikTok has played a positive role in the conflict. “People would not believe the magnitude of this event without it being amplified in social media,” he said.