Something That Both Candidates Secretly Agree On

Harris and Trump’s records on AI are weirdly in sync.

If the presidential election has provided relief from anything, it has been from the generative-AI boom. Neither Kamala Harris nor Donald Trump has made much of the technology in their public messaging, and neither has articulated a particularly detailed AI platform. Bots do not seem to rank alongside the economy, immigration, abortion rights, and other issues that can make or break campaigns.

But don’t be fooled. Americans are very invested in, and very worried about, the future of artificial intelligence. Polling consistently shows that a majority of adults from both major parties support government regulation of AI, and that demand for regulation might even be growing. Efforts to curb AI-enabled disinformation, fraud, and privacy violations, as well as to support private-sector innovation, are under way at the state and federal levels. Widespread AI policy is coming, and the next president may well steer its direction for years to come.

On the surface, the two candidates couldn’t be further apart on AI. When AI has come up on the campaign trail, the focus has not been on substantive issues, but instead on the technology’s place in a supposed culture war. At a rally last winter, Trump railed against the Biden administration’s purported “use of AI to censor the speech of American citizens” (a contorted reference, perhaps, to an interview that week in which Secretary of Homeland Security Alejandro Mayorkas denounced the “politicization” of public education around the dangers of AI, including misinformation). Trump also said he would overturn Joe Biden’s executive order on AI—a sprawling document aiming to preserve consumer and civil rights while also spurring innovation—“on day one.” Then, over the summer, the GOP platform lambasted the “dangerous” executive order as slowing innovation and imposing “Radical Leftwing ideas” on the technology, perhaps referring to the order’s stated “dedication to advancing equity.” Elon Musk, now the most powerful Trump surrogate in the world, recently invited his followers to “imagine an all-powerful woke AI.” Harris, for her part, hasn’t discussed AI much as a candidate, but she is leading many of Biden’s AI efforts as vice president, and her economic platform mentions furthering “the commitments set forth in the 2023 AI Executive Order.”

[Read: The real AI threat starts when the polls close]

Such rhetoric is par for the course this election cycle: Trump in particular has never been known for nuance or gravity, and tearing down Biden is obviously his default position. What no one seems to remember, though, is that Biden’s “dangerous” executive order echoes not one but two executive orders on AI that Trump himself signed. Many of the policies around AI that President Biden and Vice President Harris have supported extend principles and initiatives from Trump’s term—such as efforts to establish federal funding for AI research, prepare American workers for a changing economy, and set safety standards for the technology. The two most recent presidential administrations even agreed on ensuring that federal AI use is nondiscriminatory. Trump’s approach to the technology, in turn, built on foundations laid during Barack Obama’s presidency.

In other words, despite how AI has been approached by their campaigns (that is, barely, or only in the shallowest terms), both candidates have real track records on AI, and those records are largely aligned. The technology appeared to be a rare issue driven for years by substance rather than partisanship, perhaps because prior to the launch of ChatGPT, it wasn’t on many Americans’ minds. With AI now assuming national importance, Trump has promised to tear that consensus down.

Still, there’s a good chance he won’t be able to—that reason and precedent will prevail in the end, if only because there’s already so much momentum behind what began during his own administration. “To the extent that the Trump administration worked on issues of science and technology policy, it worked on AI,” Alondra Nelson, a professor at the Institute for Advanced Study who previously served as the acting director of Biden’s Office of Science and Technology Policy, told me. And in doing so, it was inheriting priorities set under a man Trump has called “the most ignorant president in our history.” Near the end of his second term, Obama directed several federal agencies to study and plan for the growing importance of “big data” and AI, which culminated at the end of 2016 with the publication of a report on the “future of artificial intelligence,” as well as a national strategic plan for AI research and development. Those included broad suggestions to grow the federal government’s AI expertise, support private-sector innovation, establish standards for the technology’s safety and reliability, lead international conversations on AI, and prepare the American workforce for potential automation.

A few years later, Trump began to deliver on those recommendations through his executive orders on AI, a 2019 update to that strategic plan, and his White House’s guidance to federal agencies on using AI. “The Trump administration made AI a national technology priority,” Michael Kratsios, who served as the country’s chief technology officer under Trump and helped design his AI strategy, told Congress last October. In that testimony, Kratsios, who is currently the managing director of the start-up Scale AI, lauded much of Obama’s previous and Biden’s current work on AI—even criticizing Biden for not doing enough to implement existing policies—and noted the continued importance of supporting “high-quality testing and evaluation” of AI products.

Biden and Harris have since taken the baton. Trump’s first executive order in particular did “have a lot of the ingredients that got much more developed in Biden’s EO,” Ellen Goodman, a professor at Rutgers Law School who has advised the National Telecommunications and Information Administration on the fair and responsible use of algorithms, told me. “So when Trump says he’s going to repeal it with a day-one action, one wonders, what is it exactly that’s so offensive?” Even specific policies and programs at the center of Biden and Harris’s work on AI, such as establishing national AI-research institutes and the National AI Initiative Office, were set in motion by the Trump administration. The National Artificial Intelligence Research Resource, which Harris’s economic plan touts by name, originated with AI legislation that passed near the end of Trump’s term. Promoting innovation, supporting American workers, and beating China are goals Harris and Trump share. Bluster aside, the candidates’ records suggest “a lot of similarities when you get down to the brass tacks of priorities,” Alexandra Givens, the president of the Center for Democracy & Technology, a nonprofit that advocates for digital privacy and civil rights, told me.

[Read: The EV culture wars aren’t what they seem]

To be clear, Harris and Trump will have substantive disputes on AI, as any pair of Democratic and Republican presidential candidates would on most issues. Even with broad agreement on priorities and government programs, implementation will vary. Kratsios has emphasized a “light touch” approach to regulation. Some big names in Silicon Valley have come out against the Biden administration’s AI regulations, arguing that they put undue burdens on tech start-ups. Much of the Republican Party’s broader message involves dismantling the federal government’s regulatory authority, Goodman said, which would affect its ability to regulate AI in any domain.

And there is the “Radical Leftwing” rhetoric. The Biden-Harris administration made sure the “first piece of work the public would see would be the Blueprint for an AI Bill of Rights,” Nelson said, which outlines various privacy and civil-rights protections that anyone building or deploying AI systems should prioritize. Republicans seem particularly resistant to these interventions, which are oriented around concepts such as “algorithmic discrimination,” or the idea that AI can perpetuate and worsen inequities based on race, gender, or other identifying characteristics.

But even here, the groundwork was actually laid by Trump. His first executive order emphasized “safety, security, privacy, and confidentiality protections,” and his second “protects privacy, civil rights, [and] civil liberties.” During his presidency, the National Institute of Standards and Technology issued a federal plan for developing AI standards that mentioned “minimizing bias” and ensuring “non-discriminatory” AI—the very reasons why the GOP platform lashed out against Biden’s executive order and why Senator Ted Cruz recently called its proposed safety standards “woke.” The reason that Trump and his opponents have in the past agreed on these issues, despite recent rhetoric suggesting otherwise, is that these initiatives are simply about making sure the technology actually functions consistently, with equal outcomes for users. “The ‘woke’ conversation can be misleading,” Givens said, “because really, what we’re talking about is AI systems that work and have reliable outputs … Of course these systems should actually work in a predictable way and treat users fairly, and that should be a nonpartisan, commonsense approach.”

In other words, the question is ultimately whether Trump will do a heel turn simply because the political winds have shifted. (The former president has been inconsistent even on major issues such as abortion and gun control in the past, so anything is possible.) The vitriol from Trump and other Republicans suggests they may simply oppose “anything that the Biden administration has put together” on AI, said Suresh Venkatasubramanian, a computer scientist at Brown University who previously advised the Biden White House on science and technology policy and co-authored the Blueprint for an AI Bill of Rights. Which, of course, means opposing much of what Trump’s own administration put together on AI.

But he may find more resistance than he expects. AI has become a household topic and common concern in the less than two years since ChatGPT was released. Perhaps the parties could tacitly agree on broad principles in the past because the technology was less advanced and didn’t matter much to the electorate. Now everybody is watching.

Americans broadly support Biden’s executive order. There is bipartisan momentum behind laws to regulate deepfake disinformation, combat nonconsensual AI sexual imagery, promote innovation that adheres to federal safety standards, protect consumer privacy, prevent the use of AI for fraud, and more. A number of the initiatives in Biden’s executive order have already been implemented. An AI bill of rights similar to the Biden-Harris blueprint passed Oklahoma’s House of Representatives, which has a Republican supermajority, earlier this year (the legislative session ended before the bill could make it out of committee in the state Senate). There is broad “industry support and civil-society support” for federal safety standards and research funding, Givens said. And every major AI company has entered into voluntary agreements with the federal government and advised it on AI regulation. “There’s going to be a different expectation of accountability from any administration around these issues and powerful tools,” Nelson said.

When Obama, Trump, and Biden were elected, few people could have predicted anything like the release of ChatGPT. The technology’s trajectory could shift even before the inauguration, and almost certainly will before 2028. The nation’s political divides might just be too old, and too calcified, to keep pace—which, for once, might be to the benefit of the American people.
