How to tell what’s real online

With help from Derek Robertson

The video opens with a woman in a black turtleneck, who says you can’t trust your eyes.

As it turns out, she’s an AI-generated deepfake. The woman is named Nina Schick, and although it looks like a completely natural video shot in a studio, she has never said those words, that way, in real life.

Scary stuff, right? But ah — the video also tells on itself. A labeled dropdown menu in the top left helpfully informs you that the video “contains AI-generated content.” There’s a timestamp, and a credit.

The video is the latest salvo in a war between people who create or share fake visuals and the people who want to find a way to flag them — basically, alert the public to what is real. It was released on Tuesday by technology company Truepic and production studio Revel.ai to promote the idea of a relatively new transparency standard for digitally created content.

Right now, so-called “deepfakes” are getting better and better — think about the relatively benign image of the pope in a spiffy Balenciaga jacket that fooled Twitter for a hot minute there. It was made by a Chicago-based construction worker using the AI image generator Midjourney. The growing unease around convincing, mass-producible synthetic media has prompted some big players in the tech industry to pursue a shared standard for authenticating content — something that companies and publishers can agree on and which consumers could look for when they decide what to believe.

Tuesday’s video carries cryptographically signed credentials that follow a content certification standard called the C2PA. The technical-sounding name is just the acronym of the group behind it, the Coalition for Content Provenance and Authenticity. The C2PA standard is backed by a group of tech companies including Adobe, Arm, Intel, Microsoft and Truepic, all of which see a need to bolster trust in digital content in an era of generative AI that is making it easier and easier to create convincing fake content.

Ok, but will it work?

A world in which the C2PA standard becomes the solution to a growing digital trust problem requires a great deal of coordination.

Without a law — and there’s no prospect of a law anytime soon — a whole chain of players would need to adopt it. Companies that build video and photo tools — including cellphone and camera manufacturers — would need to incorporate the C2PA authentication standard at the point of capture. Users, like the Bellingcat founder who created the AI-generated images of Trump’s arrest, would need to be proactive about including content credentials in the visuals they produce. Mainstream publishers and social media companies would need to look for the credentials before displaying the image on their platforms. Viewers would need to expect a little icon with a dropdown menu before they trust an image or video.
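For the publishers and platforms in that chain, the check itself isn't exotic: open the file, look for an attached, cryptographically signed manifest, and surface what it says. Here is a minimal sketch of that step in Python, using the open-source c2pa-python bindings that have grown up around the standard; the exact function names and manifest layout have shifted between releases, so treat the specific calls and JSON keys below as assumptions rather than a settled API.

```python
# Illustrative sketch only: c2pa-python's API has changed across releases,
# so read_file() and the manifest JSON layout shown here are assumptions.
import json

import c2pa  # open-source C2PA bindings (pip install c2pa-python)


def content_credentials(path: str):
    """Return the C2PA manifest store attached to a media file, or None."""
    try:
        # Assumed signature: read_file(file_path, data_dir) -> JSON string
        manifest_json = c2pa.read_file(path, None)
    except Exception:
        # Either no credentials are attached, or the signed manifest failed
        # validation. In both cases, treat provenance as unknown.
        return None
    return json.loads(manifest_json)


store = content_credentials("upload.jpg")
if store is None:
    print("No content credentials: provenance unknown.")
else:
    active = store["manifests"][store["active_manifest"]]
    # A platform could surface these fields the way Tuesday's video does:
    # an icon, a "contains AI-generated content" label, a timestamp, a credit.
    print("Claim generator:", active.get("claim_generator"))
    print("Signed by:", active.get("signature_info", {}).get("issuer"))
```

The heavy cryptography (hashing the content, chaining edits, signing the claim) lives in the standard and in the capture and editing tools; the point of the sketch is that checking credentials at display time is cheap once they exist.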

One point of the deepfake exercise was to publicly call out companies that aren’t participating, despite having access to content authentication tools like the C2PA. Schick, the person in the video, laid out C2PA’s case in an interview, naming both the companies that build AI image-generation tools and the platforms where users post the results: “Why isn’t OpenAI doing it? Why isn’t Stability AI? Why aren’t Twitter or Facebook?”

Andrew Jenks, co-founder and chair of the C2PA project, sees the authentication standard as an important digital literacy effort whose closest parallel is the widespread adoption of the SSL padlock icon that signals an encrypted connection to a website. “We had to train users to look for the little padlock icon that you see in every browser today,” Jenks said. “That was a really hard problem and it took a really long time. But it’s exactly the same kind of problem as we’re facing with media literacy today.”

By day, Jenks is a principal program manager on Microsoft’s Azure Media Security team. “Everyone in the C2PA has a day job as well,” Jenks told me. “This is a volunteer army.”

But in the larger war against misinformation, not everyone thinks a new file standard will solve the big problems. Dr. Kalev Leetaru, a media researcher and senior fellow at George Washington University, pointed out that “fake images” are just one part of the issue. “Much of the image-based misinformation in the past was not edited imagery, but rather very real imagery shared with false context,” he said. And we already have strong tools to trace an image across the web and track it back to its origin, “but no platform deploys this in production. The problem is that social media is all about context-free resharing.”
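Leetaru’s point about recycled imagery is worth pausing on, because the tooling for catching it is old and simple. A perceptual hash, for example, can tell you that a “new” viral photo is really an old one that has been resized or recompressed, which is exactly the false-context problem he describes. A minimal sketch with the widely used Python imagehash library (the file paths and the distance threshold are illustrative):

```python
# Minimal sketch: perceptual hashing to spot recycled imagery.
# File paths are placeholders; the threshold is illustrative, not tuned.
from PIL import Image
import imagehash

archived = imagehash.phash(Image.open("archive/known_photo.jpg"))
viral = imagehash.phash(Image.open("incoming/viral_repost.jpg"))

# Subtracting two hashes gives the Hamming distance between 64-bit
# fingerprints: 0 means effectively identical, and small values usually
# mean the same image after resizing, recompression or light cropping.
distance = archived - viral
if distance <= 8:
    print("Likely the same underlying image: check its original context.")
else:
    print("Probably a different image.")
```

As Leetaru notes, the missing piece isn’t the technique; it’s that no major platform runs this kind of check in production before content gets reshared.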

And then there’s the wider world, where misinformation is, if anything, more dangerous than it is in the U.S. “We’re talking about this from the standpoint of the U.S. and the West,” Leetaru noted. “Even if this technology is rolled out on every new iPhone and Android phone that’s out there today, think about how long it’s going to take before it propagates outward across the world.”

Leetaru’s concern is that in the period before a standard is widely adopted, images or video recordings from citizen journalists that don’t carry the credentials will be mistrusted. And anonymity can also be a critical tool for dissidents living under authoritarian governments, meaning a cryptographic tool designed to trace an image’s provenance back to its point of origin can backfire on the people capturing the images. (For what it’s worth, Truepic and Microsoft announced a pilot program last week called Project Providence to authenticate images coming out of Ukraine, taken by Ukrainian users documenting the country’s cultural heritage.)

And to be clear: even its advocates don’t think the C2PA is a “silver bullet.” Jenks said the C2PA is simply “one part of what we in the security world would call defense in depth.”

Still, there’s growing support for the idea of authenticating images at their source. “I’ve done hundreds of meetings on C2PA technology at this point. I do not believe a single person has said, ‘That’s not something we need,’” Jenks told me.

the dawn of the ai scam

It was all but inevitable that “AI,” generally (and vaguely) defined, would become as powerful a tool for scams as it is for computing.

And so it’s gone in Texas, as POLITICO’s Sam Sutton reported yesterday afternoon for Pro subscribers: Regulators in three states have issued cease-and-desist orders to “YieldTrust.ai,” a trading platform that claimed to execute “70 times more trades with 25 times higher profits than any human trader could” through the power of AI, according to the Texas State Securities Board’s complaint.

Joe Rotunda, the Board’s Director of Enforcement, said the company was in reality promoting “the equivalent of nothing” — a platform that state analysts concluded could “Blacklist users and prevent them from withdrawing funds, receiving interest or receiving refunds” and “Change the used token at any time, potentially preventing users from withdrawing funds,” among a number of other risks. The publication of that report led YieldTrust.ai to announce it was shutting down, but regulators say it continued to accept investor funds.

It seems like a pretty unsophisticated scam, but Rotunda told Sam it’s likely just the beginning: “The initial scams are less sophisticated,” he said. “More sophisticated, more dangerous cases will come.” — Derek Robertson

chatgpt's european troubles

The Italian ban on ChatGPT might be just the beginning of OpenAI’s troubles in Europe.

POLITICO’s Clothilde Goujard and Gian Volpicelli reported today on how the company, which has no European office, is likely to bump up against the European Union’s more robust regulatory regime in other member states — specifically the General Data Protection Regulation under which Italy’s regulators announced their ban.

The European Consumer Organization also asked the EU and member governments to investigate ChatGPT last week, warning that the Union’s upcoming AI Act won’t take effect for some time, “leaving consumers at risk of harm from a technology which is not sufficiently regulated during this interim period and for which consumers are not prepared.”

As Gabriela Zanfir-Fortuna, of the Future of Privacy Forum think tank, told Clothilde and Gian: “Data protection regulators are slowly realizing that they are AI regulators.” — Derek Robertson

Stay in touch with the whole team: Ben Schreckinger; Derek Robertson; Mohar Chatterjee; Steve Heuser; and Benton Ives. Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.