The startup ActiveFence, a trust and safety provider for online platforms, is one company sounding the alarm about how predators are abusing generative AI, and helping others in the tech industry navigate the risks posed by these models.
TikTok became the world’s window into the conflict in Israel. Clips from a music festival in southern Israel, where 260 attendees were killed and more taken hostage according to the Israeli rescue agency Zaka, broke through the algorithm’s regularly scheduled lighthearted programming. For the most part, Noam Schwartz thinks TikTok has played a positive role in the conflict. “People would not believe the magnitude of this event without it being amplified in social media,” he said.
ActiveFence, one of the bigger startups building tech for trust and safety teams, has acquired Spectrum Labs, another key startup in the space building AI tools to track online toxicity.
Russian propaganda is spreading into the world’s video games. Propaganda is appearing in Minecraft and other popular games and discussion groups as the Kremlin tries to win over new audiences.
The revolution in artificial intelligence has sparked an explosion of disturbingly lifelike images showing child sexual exploitation, fueling concerns among child-safety investigators that they will undermine efforts to find victims and combat real-world abuse.
Child safety experts are growing increasingly powerless to stop thousands of AI-generated child sex images from being easily and rapidly created, then shared across dark-web pedophile forums. This explosion of disturbingly realistic images could normalize child sexual exploitation, lure more children into harm's way, and make it harder for law enforcement to find actual children being harmed.