5 questions for SmartNews’ Rich Jaroslovsky

With help from Sam Sutton

Welcome back to our weekly feature, The Future in 5 Questions. This week I spoke with Rich Jaroslovsky, a veteran of digital media and former tech columnist who is now vice president for content at SmartNews, an app that algorithmically evaluates and curates the news for its users. We spoke about the hype cycle around generative AI, why it will make media literacy even more crucial, and why some problems are ones technology just can’t solve. The following has been condensed and edited for clarity and length:

What’s one underrated big idea?

The fact that technology can be used to overcome social divisions rather than just exacerbate them.

I’d love to see more work done on how to optimize technology and information to expose people more to differing viewpoints and puncture filter bubbles, which is something we’ve been trying to do at SmartNews for quite a while. What that takes is not necessarily further technological advancement, but the intent to wield that technology responsibly.

Technology is just a tool, like a hammer. The hammer doesn’t know whether it’s being used to drive a nail to build a Habitat for Humanity house or to hit someone over the head.

What’s a technology you think is overhyped?

Generative AI, at least at this moment.

There tends to be a pattern with the introduction of major new technologies, whether it’s the internet, blockchain or now AI: a period of irrational exuberance in which people expect the technology to solve all the world’s ills, until, inevitably, the bubble deflates. The pendulum sometimes swings too far in the other direction before it settles, and that’s when people figure out what the technology is actually best for. The internet really was revolutionary, but it took a while to figure out what it was best for.

What book most shaped your conception of the future?

“The Soul of a New Machine” by Tracy Kidder. One of the lessons I took from that book was that the course technology takes isn’t just a function of scientific achievement and cold logic; it’s created and deployed by humans, with all our passions, frailties, foibles, conflicts and possibilities. The fundamental lesson I took from the book is that technology is at least as powerful a reflector of society as it is a shaper of it.

What could government be doing regarding tech that it isn’t?

It’s absolutely imperative that the government does more at every level to promote information literacy and critical thinking.

We have to do more, starting at a very early age, to educate kids and give them the skills and tools they need to judge the credibility of the information they’ll encounter online. Government could do more to incentivize, and in some cases carefully regulate, the use of technology to at least maximize the chances that it’s being used responsibly. Note that I didn’t say to make sure it’s being used responsibly, because the nature of technological progress is that it will almost always outstrip our ability to come to terms with its implications.

Still, when it comes to issues like data privacy, the U.S. is lagging far behind. We should be at the forefront of figuring out the implications of generative AI and what its responsible use requires, such as disclosure when it’s being used and watermarking of AI-generated information. We need to be more active than we are in addressing the implications of this technology.

What has surprised you most this year?

The sudden explosion of public awareness of and interest in generative AI. The underlying work has gone on for years, but we’re clearly in a moment when the broader public is waking up to the technology and its implications.

For somebody who’s been around as long as I have, this feels an awful lot like the mid-’90s, when the web began to penetrate public consciousness. I teach a course on the history of online news at the University of California, and one of the things I always emphasize to my students is how swiftly the web went from mystery to ubiquity. I show them a clip from the Today Show where the hosts are sitting around on camera going, “What is this internet thing, anyway? What is that ‘at’ symbol, and what does it mean?”

Within three years there are commercials touting the supposed technological superiority of DSL over cable in getting broadband to your home. The web went from totally obscure to completely ubiquitous, and it feels like we are in that moment right now with AI.

ai's got talent

WEST HOLLYWOOD — One of Hollywood’s top talent agencies isn’t making AI a top lobbying priority — at least for now.

At this stage it’s about staying “on the balls of your feet, and recognizing that this is a new space, a new policy segment,” said Ty Bland, former head of government affairs at the Creative Artists Agency. Bland now lobbies on behalf of the talent agency as an outside lobbyist through his firm, Porter Tellus.

Lawmakers across the nation are racing to write rules for a technology that could reshape everything from labor markets to medical research. AI’s emergence as a powerful tool for creating digital content has become a flashpoint in the ongoing Hollywood writers strike, where the Writers Guild of America is insisting that management offer assurances that members’ scripts won’t be used to train generative programs that could eventually put them out of work, our Nick Niedzwiadek and Olivia Olander reported.

Of course, any AI applications that threaten the intellectual property and livelihoods of CAA or its clients will ultimately be of interest. And if lawmakers aren’t taking that into account, they’re likely to start fielding a lot more calls from the 213 area code.

“I love the policymakers and the lawmakers that create legislation all across the country, but they’re not experts on a lot of these things,” Bland said. — Sam Sutton

not-so-'emergent' abilities

A group of Stanford researchers is arguing that one of AI’s most jarringly novel and futuristic qualities might not be all it seems.

In a paper titled “Are Emergent Abilities of Large Language Models a Mirage?,” they explore just what causes those abilities: in short, capabilities that a model’s developers did not expect or program in, and that it gains as it grows more complex. The researchers write that they find “strong supporting evidence that emergent abilities may not be a fundamental property of scaling AI models.”

According to their findings, “emergent abilities” in AI might be less a novel trait of the models themselves than an artifact of how we perceive and measure them. The authors told Vice News in an email that “When we reconsidered the metrics we use to evaluate these tools, we found that they increased their capabilities gradually, and in predictable ways,” and that “researchers need to comprehend the implications of their chosen metric and should not be taken aback when their choices lead to predictable outcomes.”
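
The intuition behind that claim is easy to demonstrate with a toy calculation. Here’s a short sketch (ours, not the researchers’ code; the model sizes and the smooth accuracy curve are invented for illustration): if a model’s per-token accuracy climbs gradually with scale, an all-or-nothing metric like exact match on a long answer multiplies those per-token probabilities together, so the score sits near zero for a long stretch and then appears to leap upward, exactly the shape that gets labeled “emergent.”

```python
import math

# Toy numbers, invented for illustration -- not the paper's data.
scales = [1e8, 1e9, 1e10, 1e11, 1e12]  # hypothetical model sizes (parameters)
answer_len = 30                        # tokens in a hypothetical target answer

for n in scales:
    # Assume per-token accuracy improves smoothly with log model size:
    # a stand-in for the gradual, predictable gains the authors describe.
    per_token = min(0.30 + 0.06 * math.log10(n), 0.99)
    # Exact match requires every token to be right, so the score is
    # per_token ** answer_len: near zero for a long time, then a leap.
    exact_match = per_token ** answer_len
    print(f"{n:.0e} params | per-token {per_token:.2f} | exact match {exact_match:.4f}")
```

Run as written, the per-token column climbs in even steps (0.78, 0.84, 0.90, 0.96, 0.99) while the exact-match column stays near zero before jumping to roughly 0.29 and then 0.74 at the two largest scales, a “jump” produced entirely by the choice of metric.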

Why does this matter? Well, when an AI model starts doing things you didn’t tell it to, it’s kind of… freaky. “It suggests that you might have one model that is well behaved and trustworthy, but if you train the next model with more data or with more parameters, the next model might (unpredictably) become toxic or deceptive or malicious,” the authors wrote. The reality might be much more banal — that is to say, human-driven. — Derek Robertson

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.