5 questions for Russell Wald

With help from Derek Robertson

Happy Friday, folks! Here to send you off to the weekend is our weekly feature: The Future in 5 Questions. Today we have Russell Wald — managing director for policy and society at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Wald is at the forefront of the academic world’s efforts to engage with and educate U.S. policymakers on AI as they work out how to legislate it.

Read on to hear Wald’s thoughts on the need to correct the balance between corporate and academic AI research, how synthetic media like TikTok filters and deep fakes are eroding public trust, and what it’s like as an insider to suddenly see your industry at the center of dinner conversations.

Responses have been edited for length and clarity.

What’s one underrated big idea?

NAIRR — or the National AI Research Resource. The NAIRR is an idea we had in 2019. We were able to get the presidents and provosts of 22 of the top 30 computer science universities to sign on to the idea as well.

Right now, you have a slow shift of academic AI faculty moving toward industry work. One reason for this, of course, is the salaries — but salaries aren’t always the most important thing, believe it or not. The other big reasons are access to computing power and to datasets. Without those, you cannot do the type of work that AI is starting to trend toward.

Now, we have this overwhelming shift toward industry AI that focuses on shareholder value under shorter time horizons. In contrast, academia gives you longer time horizons and bigger breakthroughs. So academia is where you get GPS, CRISPR, the internet — long-term breakthroughs that are later commercialized.

But there is a threat to that general balance in the AI ecosystem because of resource access. Many major innovations that historically came out of research labs are now coming out of consumer-facing product labs under that shorter time horizon, which is potentially going to debase the technology. This trend can be ameliorated by giving academic and nonprofit researchers access to desperately needed computing resources and datasets.

The NAIRR Task Force published its final report in January, which it turned over to the president and Congress. The ultimate goal outlined in the report is to have the government subsidize access to computing resources and unlock government datasets for academic and nonprofit researchers. For example, we did a study showing that health care algorithms primarily come from three geographic areas: California, Massachusetts, and New York. That’s not representative of other states. Federal data can give you access to more than three states and ensure that underrepresented communities — like rural areas — and their health care dynamics are properly represented in these datasets. It allows us to train the next generation of researchers as well.

What’s a technology you think is overhyped?

Chatbots for mental health are overhyped. They’re not ready for primetime. Look at Kevin Roose’s experience with one of the most powerful GPT large language models. The problem here is that there’s an overwhelmingly strong need for access to mental health care. And we need to be careful not to fall into this trap of: “Well, let’s solve it with AI.”

Can I talk about a technology that’s underhyped though? Is that at all possible? There’s a little bit of a mental health aspect here as well.

What’s underhyped is synthetic media and how we look at it.

You have two zones here. On one end is the utilitarian aspect — our medical center is using Stable Diffusion to enhance radiological images. On the other end is the nefarious area — the deep fakes zone.

Often when I’m talking to policymakers about generative AI, they go right to deep fakes. And they always reference this one scenario — where an exquisite deep fake could be released by a foreign adversary 72 hours before an election to sway the population and the election.

But what that reminds me of is Leon Panetta’s idea of a cyber Pearl Harbor, well over a decade ago. In actuality, a cyber Pearl Harbor never happened. But that cyber Pearl Harbor idea creates a threshold. And if you could go back in time and tell Leon that, underneath that threshold, trillions of dollars of wealth would be lost, you might strategize differently. What I’m most concerned about, between the utilitarian zone and the nefarious zone, is the sheer ubiquity of synthetic media: from TikTok filters that make young girls look fully made up to generated images.

We are falling into this erosion of public trust — I’m very concerned that what we see today might be the last digital media that we actually have confidence in. Then you start to get into the liar’s dividend — a term that was coined by Bobby Chesney and Danielle Citron. A good example of this is the Roger Stone video, where Stone is talking in a seditious manner. And he says: that wasn’t me — that was a deep fake. Well, it was him. So we need to get much more serious about synthetic media. Is it the platforms that are amplifying this issue? Do we need to do watermarking? We need to spend a lot more time figuring out what to do here.

What book most shaped your conception of the future?

“Dopamine Nation” by Dr. Anna Lembke. It’s a fabulous book — I really recommend it. Essentially, we’re awash with dopamine right now. This instant gratification of candy and food wasn’t always the human experience. And with every click, there’s just this consistent rise in dopamine — and it’s a problem because we’re oversaturated with it. This pleasure/pain balance is kind of overwhelming.

Anna and I are working on a grant project called “Addicted by Design: An Investigation of How AI-fueled Digital Media Platforms Contribute to Addictive Consumption.” To me, if you’re really concerned about the existential risk of AI, you should be looking at how very powerful microtargeting — knowing when to hit us and how best to optimize for us — could affect us. Anna’s book really shaped my whole consideration of the human experience alongside AI.

What could government be doing regarding tech that it isn’t?

Educate themselves. I’m not saying they need to go learn how to use TensorFlow. They need a basic understanding of what a technology’s impact is, what it can do and what it can’t. So we’ve spent an enormous amount of time working on educational programs. I’ve designed an AI boot camp for congressional staffers. I’m doing something for members of Congress themselves. We’ve worked with the GSA AI Community of Practice to create a virtual program for them to learn various aspects of AI.

We’re trying really hard to answer this call. But we are one organization and cannot educate the entire federal government. Decision-makers can’t be naive to the subject anymore. They need to do everything they can to become as informed as possible. They can reach out to organizations like ours and we can try to tailor things for them. Or we can direct them to a good online program or YouTube videos, or they can sit in on seminars. AI technology will affect us — it’s going to be integrated into our society. And if there’s naiveté here, it’s not going to work for anyone.

What has surprised you most this year?

The public awakening to AI. We keep hearing about how this is the ChatGPT moment — I disagree with that. Because what’s happened is that things have actually come out of labs and they’re now at people’s fingertips. Society is feeling it now. You didn’t know when your Uber driver made a left or a right that it was AI-powered — you just sat there, probably on your phone, and didn’t even pay attention. I have a colleague who just told me how their 76-year-old dad tried ChatGPT. “South Park” had an episode. So did “60 Minutes.” It’s everywhere. I have requests from the White House to Salt Lake City’s city council.

It reminded me that sometimes when you are entrenched in a subject yourself, you think everyone is with you on the train. And now with all these model releases, it’s like, whoa… AI is this thing that’s discussed at the dinner table. And now my family is like, “Oh, you really do something here.”

meta's european defense

Meta came out swinging yesterday in an ongoing debate with European telecom companies, arguing the growth of the metaverse won’t drive up costs as the telcos fear.

“We know that some European telecom operators have justified network fee proposals by speculating about capacity constraints caused by metaverse adoption – but this is nonsense,” Meta’s Kevin Salvadori and Bruno Cendon Martin write. “The development of the metaverse will not require telecom operators to grow capital expenditures for greater network investment… because metaverse adoption for the foreseeable future will continue to be driven predominantly through Virtual Reality (VR).”

Salvadori and Cendon Martin make the case that because almost all VR traffic goes through Wi-Fi, the European Commission’s proposal that big tech firms should help subsidize telecom infrastructure makes no sense. The Commission is currently holding a 12-week policy consultation to determine its path forward on expanding mobile bandwidth within the EU. — Derek Robertson

is gpt coming for your job?

OpenAI researchers uploaded a preprint yesterday that looks at the potential impact of GPT-4 (and its heirs) on the labor market.

I’ll let them take it away: “Our findings indicate that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs, while around 19% of workers may see at least 50% of their tasks impacted,” they write. “The influence spans all wage levels, with higher-income jobs potentially facing greater exposure. Notably, the impact is not limited to industries with higher recent productivity growth.”

Another fun tidbit: “The model labeled 86 occupations as ‘fully exposed’” to GPT-4’s labor-shifting effects. But if you dig into the data and find you’re in one of them — as, uh, we are — don’t fret: the types of labor data the researchers analyze to reach their conclusions are notoriously tricky. Like the personal computer before it, GPT-4 will surely change the workplace — the operative word being “change.” — Derek Robertson


Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.