Deepfake proliferation poses risks to creator and brand safety
Creator Michel Janse discovered, while on her honeymoon, that her likeness was being used without her consent to promote erectile dysfunction pills. She found out when followers messaged her about an ad featuring her that she had never made. The ad, a deepfake, showed Janse discussing her husband's ED issues and repurposed visuals from a personal YouTube video about her divorce. The experience, made worse by the fact that the ad led to "basically pornographic" content, was deeply violating for Janse.
The rise of AI-generated deepfakes is a concern for creators and celebrities alike, including Jennifer Aniston, Taylor Swift, and Tom Hanks. The potential for damage is significant, as anyone can use someone's likeness to spread fake news or misinformation. Some states have passed laws regulating deepfakes and federal legislation has been introduced, but no federal regulation has yet been enacted. Tech companies like Google, Meta, and TikTok have implemented guidelines for labeling and limiting AI-generated deepfakes, especially in the context of elections. Still, legislation often lags behind technological advancement.
Reflecting on her experience, Janse acknowledged that she should have been more shocked or concerned but had mentally prepared for such a situation given the risks associated with the internet.
"We're living in a Black Mirror episode." As concerns over deepfakes rise, brands are exploring the use of authorized AI-generated likenesses. In 2018, Cara Delevingne collaborated with German retailer Zalando on more than 290,000 localized ads featuring her likeness. Similarly, Bollywood star Shah Rukh Khan teamed up with Cadbury Celebrations and Ogilvy India in 2021 for a campaign in which small-business owners could create ads with Khan mentioning their business by name. Queen Latifah and Lenovo ran a similar campaign last year, and Meta reportedly paid celebrities like Kendall Jenner and Tom Brady for permission to use their likenesses in its AI Personas.
As AI becomes more prevalent in influencer marketing and creative industries, the choice for many brands is between embracing it and fighting it. Some, however, have been dragged into the AI conversation against their will. Le Creuset, for instance, appeared in scam videos in which deepfakes of stars like Selena Gomez and Taylor Swift offered giveaways of the cookware. Le Creuset clarified that it was not involved in any giveaway with Taylor Swift and directed users to its official social channels for legitimate promotions.
Some companies are exploring ways to monitor social media to detect unauthorized AI-generated use of their brands. For example, General Mills partnered with verification service Zefr last year to identify instances of user-uploaded, AI-generated content involving their brand. Zefr's Chief Commercial Officer, Andrew Serby, believes that the advertising industry will eventually seek to establish a common agreement governing the use and enforcement of AI, similar to the GARM Brand Safety and Suitability Framework.
When asked for advice for brands facing deepfake videos, such as those involving Le Creuset, Serby suggested developing action plans and "structured policies" for generative AI. Otherwise, he warned, trying to keep up with the spread of AI-generated content will be like playing a game of Whac-A-Mole.
For those whose images are used without consent, there are limited options currently, according to Wasim Khaled, CEO and co-founder of Blackbird.AI, a narrative and risk intelligence service. Taking legal action against a company or individual using their likeness is one recourse.
However, Janse is not optimistic about this option. "If celebrities have struggled with this issue without much success, I hate to say that I'm pessimistic about seeing a change anytime soon, but I am," she said.
Trusting what's real
For Janse, some solace came from knowing that her followers quickly recognized the deepfake as inconsistent with her usual content.
"I have some faith in people being aware of what's happening these days," she said. "I know many people can fall for things like that, but my initial instinct was, 'It's okay, it'll be fine. People know I wouldn’t do this, people know I'm not married to a man named Michael.'"
Titus emphasized the need for more consumer education on distinguishing between fake and real content. One suggestion he proposed is for brands to mandate the declaration and disclosure of all AI content, though he acknowledged this might not happen immediately.
"It took a while before it became mainstream to have paid disclosure around influencer content," he said. "I would assume it's going to take some time for us to get there."
Titus warned that without clear labeling, brands, creators, and celebrities risk losing the trust of their audiences. Despite those risks, he noted, there are still significant benefits for these groups in using AI-generated likenesses, particularly for hyperpersonalizing messaging.
As Janse waits for more regulation and disclosure requirements around AI content, she said the incident has prompted her to reconsider whether she'll share images of her children online if she becomes a parent, and whether she'd be comfortable granting a company permission to use her likeness.
"If you had asked me a couple of months ago, I would have said, 'If I felt it was safe, and if it was a significant amount of money, I would consider it,'" she said. "Now, having seen the dark side of things, I would really, really, really need to do my due diligence."