Yepic AI says it uses “deepfakes for good” and promises to “never reenact someone without their consent.” But the company did exactly what it said it would never do.
Yepic AI sent a TechCrunch reporter an unsolicited email pitch containing two “deepfaked” videos of the reporter, who had not consented to the use of their likeness. In the pitch email, Yepic AI said it “used a publicly available photo” of the reporter to create two deepfaked videos of them speaking different languages.
The reporter asked Yepic AI to delete the unauthorized deepfaked videos.
Deepfakes are AI-generated photos, videos, and audio that look or sound like real people. Deepfakes are not new, but generative AI systems now let almost anyone easily create convincing deepfaked content of others without their consent.
Yepic AI stated on its “Ethics” page that “Deepfakes and satirical impersonations for political and other purposes are prohibited.” The company stated in an August blog post: “We refuse to produce custom avatars of people without their express permission.”
The company wouldn’t say whether it has created deepfakes of other people without their permission.
Yepic AI CEO Aaron Jones told TechCrunch that the company is updating its ethics policy to “accommodate exceptions for AI-generated images that are created for artistic and expressive purposes.”
Jones explained the incident: “Neither I nor the Yepic team were directly involved in the videos in question. Our PR team confirmed that the video was created for the journalist to promote Yepic’s incredible technology.”
Jones said the videos, along with the image used to create the reporter’s likeness, have been deleted.
Predictably, deepfakes have been used to dupe unsuspecting victims into falling for scams and giving away their crypto or personal information, and to evade moderation systems. In one case, fraudsters used AI to impersonate a company’s CEO and defraud the business of hundreds of thousands of euros. Before fraudsters adopted the technology, deepfakes were used to create realistic-looking porn videos using women’s likenesses without their consent.