NEW YORK, January 25, 2023 – ActiveFence, the leading trust & safety technology company, today launched a new Research & Development (R&D) blog, which will share the insights and knowledge ActiveFence researchers have gained in their mission to make the internet a safer place. The blog will outline the distinct challenges researchers have faced, ActiveFence's complex development process, and the advancements made with the company's solutions.
Protecting platforms and their users is one of today’s most complex, nuanced challenges, and it requires not just the dedication of skilled data scientists, engineers, and developers, but also collaboration and knowledge-sharing among companies, academic institutions, and research organizations. By sharing information, ActiveFence hopes to inspire other teams to work together to create new opportunities and innovations, especially in support of organizations with limited in-house capabilities.
“R&D is a critical component of any Trust & Safety organization,” said Iftach Orr, CTO and Co-Founder at ActiveFence. “Our R&D team is performing groundbreaking work developing new technologies and techniques to identify and mitigate risks, and ensure user safety. Through this new blog, we hope to share our approach and have other teams engage in our mission and think creatively about the challenges to make the internet a safer place.”
The blog will launch with numerous new articles on a range of topics – from ActiveFence's AI capabilities and the metaverse to deep learning, data models, and the importance of context in AI models – and will publish monthly thereafter. Written by members of the ActiveFence R&D team, the articles will go deep on AI and other technical topics. Some of the articles publishing today include:
- The Metaverse: Helping Platforms Keep Us Safe in New Digital Territory
- Constructing and Querying a Data Model for Online Harm
- How Did We Get to a Deep Learning Model for Symbol Recognition With a Small Amount of Labeled Information?
- How to Overcome Biased Data by Generating Synthetic Samples
ActiveFence's R&D team consists of 70 people, who develop machine learning algorithms for detecting harmful content, research new methods for protecting user privacy, and help companies stay ahead of evolving threats before they reach users.
To learn more about how ActiveFence safeguards online platforms and users against online harm, please visit our website at www.activefence.com.
About ActiveFence:
ActiveFence is the leading Trust and Safety provider for online platforms, protecting over three billion users daily from malicious behavior and content. Trust and Safety teams of all sizes rely on ActiveFence to keep their users safe from the widest spectrum of online harms, including child abuse, disinformation, hate speech, terror, fraud, and more. We offer a full stack of capabilities, including deep intelligence research and an AI-driven harmful content detection and moderation platform. ActiveFence protects platforms globally, in over 100 languages, letting people interact and thrive safely online.
