Reality Defender raises $15M for text, video and image deepfake detection

Reality Defender, one of several startups developing tools to detect deepfakes and other AI-generated content, has raised $15 million in a Series A funding round led by DCVC, with participation from Comcast Ventures, Ex/ante, Parameter Ventures and Nat Friedman's AI Grant.

Co-founder and CEO Ben Colman says the funds will be used to double Reality Defender's 23-person team next year and to improve its AI content detection models.

In an email interview, Colman said: "New methods of deepfaking and content generation will consistently appear, taking the world by surprise both through spectacle and the amount of damage they can cause. By adopting a research-forward mindset, Reality Defender can stay several steps ahead of these new generation methods and models before they appear publicly, being proactive about detection instead of reacting to what appears today."

Reality Defender was founded in 2021 by Colman, a former Goldman Sachs VP, along with Ali Shahriyari and Gaurav Bharaj. Shahriyari previously worked at Originate, a digital transformation consulting firm, and at the AI Foundation, an animated chatbot startup, where Bharaj, who now leads R&D, was his colleague.

Reality Defender began as a nonprofit. Colman said the team sought outside funding after realizing the scale of the deepfake problem and the growing commercial demand for deepfake-detecting technologies.

Colman isn't exaggerating the scope of that problem. According to DeepMedia, a Reality Defender competitor that develops synthetic media detection tools, three times as many video deepfakes and eight times as many voice deepfakes have been posted online this year compared to 2022.

Deepfakes have increased due to the commoditization of generative AI tools.

Cloning a voice or creating a deepfake image or video used to cost hundreds to thousands of dollars and require data science expertise. But in recent years, platforms like the voice-synthesizing ElevenLabs and open source models like the image generator Stable Diffusion have let malicious actors launch deepfake campaigns at little cost.

This month, 4chan users used Stable Diffusion and other generative AI tools to flood the internet with racist images. Trolls have used ElevenLabs to imitate celebrities' voices in memes, erotica and hate speech. And state actors affiliated with the Chinese Communist Party have created lifelike AI avatars of news anchors that discuss topics such as U.S. gun violence.

Some generative AI platforms have introduced filters and other restrictions to prevent abuse. But as in cybersecurity, it's a cat-and-mouse game.

Colman warned that deepfaked content on social media poses an especially serious risk. Unlike the laws requiring platforms to remove child sexual abuse material and other illegal content, no legislation requires them to scan for deepfakes.

Reality Defender's API and web app analyze videos, audio, text and images for AI-driven alterations to detect deepfakes and other AI-generated media. Colman claims the company outperforms its competitors in detection accuracy thanks to "proprietary models" trained on in-house data sets "created to work in the real world and not in the lab."

“We train an ensemble of deep learning detection models, each focusing on its own methodology,” Colman said. “We learned long ago that the single-model, monomodal approach and lab-versus-real-world accuracy testing fail.”
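To make that approach concrete, here is a minimal sketch of how an ensemble of specialized detectors might combine per-model scores into a single verdict. The detector names, weights and scoring interface below are hypothetical illustrations; Reality Defender has not published its architecture.

```python
# Illustrative only: a minimal ensemble of deepfake detectors.
# Model names, weights and the scoring interface are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detector:
    name: str                         # a model focusing on one methodology
    score: Callable[[bytes], float]   # returns P(fake) in [0, 1]
    weight: float = 1.0

def ensemble_score(detectors: List[Detector], media: bytes) -> float:
    """Weighted average of each model's fake probability."""
    total_weight = sum(d.weight for d in detectors)
    return sum(d.weight * d.score(media) for d in detectors) / total_weight

# Hypothetical specialists, each keyed to a different artifact type:
detectors = [
    Detector("face_warp_artifacts", lambda m: 0.91, weight=2.0),
    Detector("frequency_domain",    lambda m: 0.74),
    Detector("compression_traces",  lambda m: 0.35),
]

if __name__ == "__main__":
    verdict = ensemble_score(detectors, media=b"...")  # placeholder bytes
    print(f"P(fake) = {verdict:.2f}")  # 0.73 with the stand-in scores above
```

The appeal of an ensemble is that a single-methodology model can be blind to generation techniques it wasn't trained on, while disagreement among specialists can itself flag uncertain cases.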

Whether any tool can reliably detect deepfakes, though, remains an open question.

OpenAI, the company behind ChatGPT, pulled its AI-generated text detection tool over its "low rate of accuracy." And one study suggests that deepfake video detectors can be fooled by deepfaked videos that have been lightly edited.

Deepfake detection models may amplify biases.

A 2021 paper from the University of Southern California found that some of the data sets used to train deepfake detection systems underrepresent certain genders and skin colors. The coauthors write that deepfake detectors can compound this bias, with some showing up to a 10.7% difference in error rate depending on race.
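For readers unfamiliar with how a figure like that 10.7% gap is measured, here is a toy sketch of per-group error rates and the disparity between them. The evaluation records below are fabricated for illustration; they are not from the USC study.

```python
# Illustrative only: measuring an error-rate gap across subgroups.
# The records are fabricated toy data, not real evaluation results.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted_label, true_label)."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

records = [
    ("A", "fake", "fake"), ("A", "real", "fake"),
    ("A", "fake", "fake"), ("A", "fake", "fake"),
    ("B", "real", "fake"), ("B", "fake", "fake"),
    ("B", "real", "fake"), ("B", "fake", "fake"),
]

rates = error_rate_by_group(records)
gap = max(rates.values()) - min(rates.values())
print(rates)               # {'A': 0.25, 'B': 0.5}
print(f"gap = {gap:.1%}")  # 25.0% on this toy data
```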

Colman stands by Reality Defender's accuracy. He says the company trains its detectors on data sets containing "a wide variety of accents, skin colors and other varied data" to reduce algorithmic bias.

“We’re always training, retraining, and improving our detector models to fit new scenarios and use cases while accurately representing the real world and not just a small subset of data or individuals,” Colman said.

Call me cynical, but I'm doubtful of those claims absent a third-party audit. Skepticism aside, Colman says Reality Defender's business is strong: the company serves governments "across several continents" as well as "top-tier" financial institutions, media companies and multinationals.

That's despite competition from startups like Truepic, Sentinel and Effectiv, as well as incumbents such as Microsoft, which offer their own deepfake detection tools.

To maintain its position in the deepfake detection software market, which HSRC estimates at $3.86 billion, Reality Defender plans to launch an "explainable AI" tool that lets customers scan a document and see color-coded paragraphs indicating likely AI-generated text. Real-time voice and video deepfake detection for call centers is also in the works.
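As a rough sketch of what such a color-coded scan might look like under the hood, the snippet below splits a document into paragraphs, scores each one and buckets the scores into colors. The scoring function and thresholds are hypothetical stand-ins, not Reality Defender's actual tool.

```python
# Illustrative only: color-coding paragraphs by P(AI-generated).
# score_fn and the thresholds are hypothetical stand-ins.
def classify_paragraphs(document: str, score_fn) -> list:
    """Split a document into paragraphs and bucket each by AI likelihood."""
    results = []
    for paragraph in document.split("\n\n"):
        p_ai = score_fn(paragraph)  # a real model call would go here
        if p_ai >= 0.8:
            color = "red"       # likely AI-generated
        elif p_ai >= 0.5:
            color = "yellow"    # uncertain
        else:
            color = "green"     # likely human-written
        results.append((color, round(p_ai, 2), paragraph[:40]))
    return results

# Stand-in scorer so the demo is deterministic:
fake_model = lambda text: 0.91 if "suspected" in text else 0.12

doc = "First paragraph, written by a person.\n\nSecond paragraph, suspected AI text."
for color, p, snippet in classify_paragraphs(doc, fake_model):
    print(color, p, snippet)
# green 0.12 First paragraph, written by a person.
# red 0.91 Second paragraph, suspected AI text.
```

Paragraph-level highlighting is a common design choice for explainability tools because a binary whole-document verdict gives users nothing to act on, whereas localized scores show where to look.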

"Reality Defender will protect a company's bottom line and reputation," Colman said, adding that the company helps the largest entities, platforms and governments determine whether media is real or manipulated, using AI to fight AI. That helps combat financial fraud, media disinformation and irreversibly damaging materials aimed at governments, to name three of what he calls hundreds of use cases.
