A Manhattan Project for AI safety

In responding to the rise of artificial intelligence, Washington has turned to its usual playbook, with the White House hosting tech CEOs as lawmakers float a variety of proposals on Capitol Hill.

But the speed at which AI is developing, and the dire warnings from many of those who understand the technology best, are, to put it mildly, unusual.

That’s why one think tanker immersed in the technology believes the federal response needs to include a super-charged research project that will force tech companies to coordinate their efforts, create cordoned-off environments to test risky advances and pour resources into studying how these large language models actually work.

In other words, a Manhattan Project for AI safety, as Samuel Hammond put it in an opinion essay published this afternoon in POLITICO Magazine.

Such an initiative, he argues, could forestall the risks of AI while giving regulators and technologists a chance to understand its inner workings well enough to make sure it does not bring about catastrophe.

Hammond is a senior economist at the Foundation for American Innovation, the new name for what until recently had been the Lincoln Network, a tech-focused think tank with a libertarian bent.

In recent months, he has been wrestling with the societal implications of AI’s rapid rise. In December, Hammond published an edition of his newsletter, Second Best, presciently titled “Before the Flood,” predicting the technology would strain many existing governance structures. It’s worth the click just to be reminded of the old days of five months ago, when the outputs of AI image generators still had a rough, dreamlike quality.

Compare that to the video Hammond created last month, a faux ad for Balenciaga — the avant-garde Paris fashion house known for its edgy marketing — featuring AI renderings of the George Mason University economics faculty reimagined as fashion icons.

Hammond told me he made the whole thing from scratch in a couple of hours, using publicly available AI media tools to generate scripts, mock-ups of their voices and the video itself.

The video’s subject is absurd and entertaining, but the output, and the speed with which it was created, are uncanny.

In calling for a second Manhattan Project, Hammond brings the perspective of someone thinking professionally about AI policy as he tinkers at the outer boundary of what’s possible with the technology.

His conclusion, as he writes in today’s piece: “Making the most of AI’s tremendous upside while heading off catastrophe will require our government to stop taking a backseat role and act with a nimbleness not seen in generations.”

In arguing for this approach, Hammond rules out a moratorium on AI development in favor of maximum engagement with the technology.

Indeed, recent events suggest that a moratorium might be impossible to enforce. On Thursday night, Bloomberg reported that a Google engineer recently warned in an internal document that open-source large language models threaten to out-compete privately owned versions being developed by tech companies.

In some sense, the cats are already out of the bag, and are rapidly evolving on their own in the wild. But this poses problems for a proactive federal response.

Attempting to corral, study and domesticate these AI models is not the kind of problem our 18th century governance architecture and 20th century federal agencies were built for.

The original Manhattan Project succeeded in beating another government research program, that of Nazi Germany.

A similar Project for AI alignment would not be a race against some other government, but against the progress of a technology that is developing, in large part, independently of any government.

On the one hand, this strengthens the case for an exceptional, Manhattan Project-style effort. On the other hand, it raises the question of whether even that would be enough for this new and confounding sort of challenge.

ai act-ion

POLITICO’s Morning Tech turned its eye to Europe today, where European Union lawmakers are putting the finishing touches on the text of their sweeping AI Act.

As our colleague Mallory Culhane wrote, European Parliament committees are set to vote on the final text Thursday. From there, it goes to the full Parliament, and then to negotiations with individual EU member countries and the European Commission.

If you haven’t been paying attention, the act classifies various uses of the technology, like its inclusion in hiring practices or in medical devices, by their estimated level of risk to the end user, and then places increasingly onerous restrictions (and penalties) on the companies developing and implementing AI as that risk rises. It’s the most sweeping approach any polity worldwide has taken to the tech, and one that, perhaps predictably, the U.S. and U.K. are likely to counter by giving companies more of a free hand.

So naturally, as the action heats up, U.S. tech moguls are heading overseas to make their voices heard in the process, as Mallory also noted. OpenAI’s Sam Altman is scheduled to hit Brussels later this month, around the same time as Google CEO Sundar Pichai. — Derek Robertson

the future top 40

Ever turn on the radio, or Spotify, or (insert music streaming service here) and wish everything sounded only slightly different on the surface but was nevertheless deeply disorienting?

Well, you’re in luck, with your weirdly masochistic/novelty-seeking musical taste: The website aihits.co has collected the most listened-to AI-generated pop music. There is, of course, the (fake) collaboration between Drake and The Weeknd, “Heart On My Sleeve” (along with a slew of other Drake knockoffs that hew closely to his well-worn artistic sensibility). There are modestly reasonable facsimiles of deceased artists like Juice WRLD and XXXTentacion. There’s Kanye West covering Taylor Swift, or Carly Rae Jepsen, or Lana Del Rey.

The platform’s creator, Michael Sayman, claimed on Twitter that it drew 200,000 total listens in a recent week amid a parade of hyper-earnest cheerleading for its manufactured “hits.” A critical note, however: The strangest thing about the tunes collected on AI Hits isn’t their sometimes-spooky closeness to their source material, but how they reflect what human listeners seem to want, which is to say, more of the same. Like fans of South Park’s “memberberries,” listeners of AI-generated music are apparently clamoring for a stream of hits that remind them of already-existing hits, which they can… actually still listen to at any time, a strangely backward-looking use of such a revolutionary technology. — Derek Robertson

Tweet of the Day

the future in 5 links

  • AI could take on one of humanity’s most daunting tasks: Reaching inbox zero.
  • The tech industry might be slumping, but it’s a boom time for AI startups.
  • An NFT company spun off from the South China Morning Post is preserving history.
  • The global playing field for AI regulation remains uneven, to say the least.
  • …What does it actually mean to be “sentient,” anyway?

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.