How (not) to future-proof the law

With help from Derek Robertson

On the last day of 2022, researchers released a study with an eye-opening finding: if you want to know what’s happening inside a room, it’s possible to use WiFi signals the way a ship uses sonar, to sketch out a picture of where people are standing and how they’re posing.

While this kind of imaging has been possible for years with purpose-built sensing technology like radar, advancements in AI make it possible to use common tools like WiFi antennas in the same way, opening up new possibilities for tracking people’s movements.
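The researchers’ full pipeline is far more sophisticated, but for the technically curious, a minimal and purely hypothetical sketch of the idea — written here in Python with PyTorch, with made-up shapes and a toy model, not the paper’s actual architecture — looks something like this: channel readings from WiFi antennas go in, and a learned model turns the reflection patterns into a coarse map of where bodies are in a room.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for channel state information (CSI) captured by a
# commodity WiFi receiver: (batch, antennas, subcarriers, time samples).
# A real system would read these values from the radio hardware.
csi = torch.randn(1, 3, 30, 100)

# A toy model standing in for the paper's much larger networks: convolutions
# learn to translate signal-reflection patterns into a spatial grid.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
    nn.AdaptiveAvgPool2d((16, 16)),  # coarse 16x16 "occupancy" heatmap
    nn.Sigmoid(),                    # values near 1 = likely human presence
)

heatmap = model(csi)
print(heatmap.shape)  # torch.Size([1, 1, 16, 16])
```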

It’ll likely be a long time before something like this can happen in your home — the tracking requires access to both the transmitter and receiver antennas to pick up those signals. But it does introduce a new question for regulators right now: If and when this starts happening, how are they supposed to think about this data?

Does an outline of you standing in a room, created and enhanced by AI, without any of your identifying features, belong to you? Does this data count as biometrics, which lawmakers treat as a category of sensitive data? And how would you realistically opt out of something like this if you don’t even know it’s being done in a place like a mall or a parking lot?

None of this is a live issue yet. But it shows why future-proofing tech regulations is so difficult — technology often moves so quickly that by the time regulators can pass laws or make rules, its capabilities may have advanced beyond what the legislation was intended to cover.

It’s one reason privacy advocates are so concerned about the American Data Privacy and Protection Act, which Congress failed to pass in 2022 but is likely to be reintroduced this year.

The main worry stems from the bill’s compromise on the issue of preemption, which would prevent states from passing any privacy laws that the federal standard already touches on — and would invalidate any existing laws that fall under that umbrella.

Proponents of state-based privacy laws argue that states are where the “future-proofing” is most likely to happen. They argue that Congress moves too slowly to deal with tech advances — and that tech companies often move quickly to dodge violations once major federal laws are set in stone, finding workarounds or developing new tracking methods outside the laws’ coverage. States, meanwhile, can respond more quickly to new kinds of data privacy violations — and any federal law that preempts state law would foreclose those quicker responses.

Tech industry groups like the idea of preemption because it creates a simpler national landscape for data privacy. They argue that the patchwork of state privacy laws is too confusing to follow, and that the more laws get added, the more complex and expensive compliance becomes. TechNet, a group that includes Apple, Amazon, Google and Meta, argues that a single federal standard would give businesses certainty about their privacy obligations.

The downside, say privacy advocates, is that it also means the legal regime around tech can just get stuck.

When it comes to technology leapfrogging Congress, there’s plenty of history. The Health Insurance Portability and Accountability Act, for example, was passed in 1996 and sets privacy requirements for health information held by medical providers. But it failed to foresee the rise of health apps and websites, where people provide medical information outside of those formal channels — information that can be shared and sold without violating the law.

Even when Congress does recognize the need to update older legislation, passing new regulations can be difficult. Last year, Sen. Ed Markey (D-Mass.) called for an update to the Children’s Online Privacy Protection Act he introduced in 1998, citing growing concerns about tech’s effect on kids’ mental health. While the update passed out of committee, it never made it to a floor vote.

Meanwhile, the states have moved on the issue: California passed stricter rules on children’s online privacy last September, and Illinois has landmark legislation on how people’s biometric information can be used.

Not every supporter of privacy laws sees the states as quite so crucial. During the debate over preemption, Rep. Frank Pallone (D-N.J.), the former chair of the House Energy and Commerce Committee and one of the co-sponsors behind ADPPA, noted that while states can pass privacy legislation, most haven’t taken action — and that a federal bill is the best shot at covering the entire country.

The bill was drafted with the help of some key privacy experts, and they acknowledge that states are likely to be quicker to pay attention to tech advances. The proposed law allows for that, to an extent: Even with the preemption clause, states could still pass privacy laws on issues the national standard doesn’t cover.

“If new issues come up three years, five years from now, and it’s something that is not covered by ADPPA, then it’s not going to get preempted and states will have latitude to innovate there,” said David Brody, who advised on ADPPA’s drafting and is the managing attorney of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights.

the (ai-generated) sound of music

First, images and text — can AI master music, too?

Late last week, Google published a research paper showing off its newest generative AI model: MusicLM, which generates quasi-professional-sounding music from text prompts in much the same manner as tools like ChatGPT or Stable Diffusion.

You’re going to want to click through on this one: The paper includes full examples of the prompts in question, from “The main soundtrack of an arcade game. It is fast-paced and upbeat, with a catchy electric guitar riff. The music is repetitive and easy to remember, but with unexpected sounds, like cymbal crashes or drum rolls” (remarkably accurate, and indeed pretty catchy) to “Epic soundtrack using orchestral instruments. The piece builds tension, creates a sense of urgency. An a cappella chorus sing in unison, it creates a sense of power and strength” (a little more work to do on the fidelity of this one, but impressively ominous nonetheless).

Much like MusicLM’s counterparts in other mediums, the tool isn’t remotely strong enough to put its human competitors out of business yet. But also like those counterparts, the impressive early results show how it could become a powerful tool for creative types with limited resources. — Derek Robertson

more clarity on when, where, and whether robots are allowed to kill you

Last week DFD tackled the evergreen question: “Should a robot be allowed to kill you?”

And late yesterday afternoon, the author of that dispatch, POLITICO’s Matt Berg, teamed up with his colleague Alexander Ward in National Security Daily for another report on the U.S. and autonomous weapons on the battlefield.

The Department of Defense issued guidance yesterday clarifying exactly when, where, and how autonomous weapons are authorized, shedding light on a policy gray area between the emerging worlds of drone warfare and AI. The authors describe a slew of exemptions in the new DoD policy that make autonomous weapon use “much easier,” according to Zak Kallenborn, a policy fellow at George Mason University.

Kallenborn then brings a wonky policy debate down to earth by invoking man’s best friend — namely, a scenario in which, as the authors write, “A robotic dog carrying supplies… could carry a weapon to defend itself without approval.”

“If Spot happens to wander near an enemy tank formation, Spot could fight. So long as Spot doesn’t target humans,” Kallenborn said. “Of course, clear offensive uses like turning Spot into a robo-suicide bomber would require approval, but there’s a lot of vagueness there.” — Derek Robertson

tweet of the day

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.