5 questions for Microsoft’s Natasha Crampton

With help from Derek Robertson

Welcome back to the Future in Five Questions, where we’re trying something a little different this Friday. Our guest today is Natasha Crampton, Microsoft’s chief responsible AI officer. Previously lead counsel on the Aether Committee (Microsoft’s earlier advisory committee for senior leadership on all things AI), Crampton stepped into her current role when Microsoft founded its Office of Responsible AI in 2019. She helped architect the day-to-day principles steering Microsoft’s formidable bet on AI. Back in her days as a lawyer in Australia and New Zealand, Crampton specialized in copyright, privacy, and internet safety and security law.

Read on to hear her thoughts about teasing apart AI use cases, government as a braintrust and keeping up with technological change.

And if you want the full, behind-the-scenes conversation with a tech policy power player about a rapidly shifting AI landscape (complete with gnarly follow-ups and fun tangents), you can find that here.

(Spoiler: There’s a tiny scoop about GPT-4’s internal release date at the end.)

Responses below have been edited for length and clarity.

What’s one underrated big idea?

That responsible AI is a practice — not just a slogan or a set of principles. One of the things I’m most proud of during my time at Microsoft is operationalizing responsible AI across the company. We’ve engaged the very best to develop an actionable Responsible AI Standard.

This is our internal playbook for how we develop and deploy AI systems. We’re actually on the second version of the Responsible AI Standard, because our product teams were thirsty for more concrete guidance and processes. We’ve come to this AI moment with more than six years of work building the infrastructure for Responsible AI. We’ve developed a practice that puts us in a good position to look ahead to these exciting, transformative use cases of the future.

Some of the learnings from our program, I think, are very helpful for feeding into a public policy conversation where we agree that we’ve got the same objective.

Building AI systems is not like building Word or Excel. Having multidisciplinary groups is critically important. And Microsoft cannot do this alone. In fact, Microsoft benefits from outside insights, initiatives and research. You need to make sure there’s a two-way exchange with the world.

What’s a technology you think is overhyped?

Thinking about general-purpose technologies like AI in these monolithic or abstract terms. When we lump a broad range of technologies into a single category, we sometimes end up with an all-or-nothing approach, or a one-size-fits-all solution. In reality, there are countless different ways that a diverse set of AI technologies can be applied.

Teasing apart those scenarios just leads to a more productive path forward. So we’re thinking about large language models today mostly in terms of chatbots. But in fact, there are exciting new applications, like helping security operations centers around the world get ahead of their adversaries.

We need to stop thinking about things in the abstract. We need to focus more on what we are trying to achieve, what we are trying to avoid, and to try and calibrate those guardrails appropriately.

What book most shaped your conception of the future?

Azeem Azhar’s “The Exponential Age” has some great insights for the particular AI moment we’re in right now. He digs in on four general-purpose technologies — computing, biology, renewable energy and manufacturing — and he exposes this exponential gap between the advances powered by those technologies and the ability of our societal institutions to respond.

He had me interested from the very first chapter where he described his own first encounters with personal computing. I remember my dad bringing home our first Amiga 500 computer. His book does a great job connecting the dots between social, political, economic and technological trends. Ultimately, I think he’s right to conclude that technology is something that we can control. And humans are ingenious at forging the world that we want in response to technological change.

As it happens, he also has a great Substack.

What could government be doing regarding tech that it isn’t?

Two things. First, I think it’s helpful to spend time engaging with technology companies and academics to better understand the technology. There is more to AI than just the models that we often talk about. There are AI supercomputers. There are clever applications that sit on top of these models. It will help to design better regulations for high-risk uses if our policy stakeholders have a better understanding of the technology and the policy intervention points.

The second thing is to bring together civil society, academia, technology companies and government leaders to help chart a path forward. Serving as a convener during this important period is essential.

What has surprised you most this year?

Well, I had a bit more of a sneak peek than other people. What surprised me is how quickly people are adopting this next wave of AI personally and professionally. We see more and more businesses taking up this technology — from Fortune 500 companies to startups and scale-ups.

They’re taking this core technology and building exciting new scenarios: more efficient customer service, helping citizens fill out government forms, and reducing paperwork for doctors so that they can spend more time with their patients. These large language models and multimodal models, they’re quickly becoming a new computing paradigm. We’re all going to have the benefit of a much more natural interface with computing.

getting past the (AI) pause

The former chief of staff for the White House’s Office of Science and Technology Policy has some reservations about a recent call to hit pause on AI development. Marc Aidinoff, a visiting lecturer at the University of Mississippi, said the “pause AI” stance “buys into this notion that we built a way of thinking and doing computer science that tolerates a lack of explainability that we don’t accept in any other domain.” That notion, Aidinoff said, “undermines the arguments that the existing regulatory agencies can and should use their powers” to regulate AI technologies.

Aidinoff was responding to the Future of Life Institute, which released a report on Wednesday laying out seven policy recommendations spanning some of the most controversial yet consequential areas of the AI debate, including third-party audits for general-purpose AI systems, standards for AI-generated content and access to computing power. The institute is the same organization that penned a high-profile open letter in March calling for an “AI pause,” though Mark Brakel, FLI’s policy director, said the latest report aims to push past the “pause vs. no pause” debate.

Aidinoff believes FLI’s policy recommendations are sound, but not all can or will be prioritized. To get a sense of what’s first in the queue, Aidinoff pointed to an older open letter from July 2021 urging the White House OSTP to prioritize equitable outcomes from the automated decision-making systems used for lending, housing and hiring. — Mohar Chatterjee

bots and humans living together, mass hysteria

How are actual engineers thinking about the future of human-robot interaction?

At The Conversation this week, University of Maryland, Baltimore County computer science and electrical engineering professor Ramana Vinjamuri wrote about a few possible ways to help us work better with the machines.

A couple of somewhat freaky possibilities: “Emotional intelligence perception,” where physical therapy robots could, for example, use facial and body language to detect whether a patient is dissatisfied with a rehab activity and suggest another one. Or brain-computer interfacing, about which Vinjamuri speculates: “By accessing an individual’s brain signals and providing targeted feedback, this technology can potentially improve recovery time in stroke rehabilitation.”

And don’t fret if that sounds potentially problematic. As Vinjamuri points out near the end of his survey, “Scientists and engineers studying the ‘dark side’ of human-robot interaction are developing guidelines to identify and prevent negative outcomes.” — Derek Robertson

tweet of the day

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.