Takeaway | Fall 2023 Issue

Closing Note: A Time to Act

Artificial intelligence may do great things or great harm, depending on whether Congress gets it

By Jim Newton

The work featured in this issue of Blueprint makes two conclusions inescapably clear: Generative artificial intelligence holds great potential to address an array of human problems, and turning AI loose on those problems comes at significant risk. That is true whether the issue is a college term paper or climate change.

The consensus of some of the best minds in this field is that humanity would miss out on a historic opportunity if it attempted to bottle up this technology. Generative AI has the capacity to churn through immense volumes of data and produce heretofore unimagined works, stretching the human mind, exploring new solutions to deep problems. But plunging ahead without caution carries commensurate danger. AI may produce solutions that humanity abhors; its advice may be welcome, but conceding to it the power to make changes may surrender essential aspects of human agency and morality.

Sometimes the consequences may be small: Lawyers who rely on AI to produce a brief or a letter may discover — some already have — that chatbots, for reasons that remain a little mysterious, like to make stuff up. Students who turn to AI to write their papers may soon discover that it represents a novel form of plagiarism. As bad as that may prove to be for the exposed student or embarrassed lawyer, humanity will survive.

But as AI expands, its reach will make its idiosyncrasies more consequential. Professor Safiya Noble has documented the disturbing tendency of algorithms to reflect and perpetuate bias. If those algorithms are employed to make decisions about home loans or criminal sentences or in any number of areas where race and gender bias already work their mischief, then the effect could be to deepen the pernicious influences of racism and sexism.

And then there is the question of using AI as a tool of national defense. Congressman Ted Lieu has proposed legislation to block the use of AI in launching a nuclear war. It may be this century’s leading understatement to say that this seems like the least Congress might do.

But the task before Congress is alarmingly monumental. Noble, Sarah T. Roberts, John Villasenor and Tao Gao, all featured in this issue, agree that some form of oversight is needed to corral this technology, to acknowledge its great potential without allowing it such broad power that it wreaks havoc. But it is no small task to imagine the regulatory scheme that could achieve that.

Congress might attempt to bar racial discrimination in AI. But bias, by itself, is not something the Constitution forbids. If someone wants to build an AI model that identifies White people as smart or Black people as beautiful or Latino/as as industrious, users might well be offended by the stereotypes reflected in those assumptions, but the First Amendment would almost certainly prevent Congress from outlawing them. If, on the other hand, those assumptions translated into discriminatory application of benefits or punishments, that might fall under Congress’ authority to regulate.

It also bears noting that Congress these days does not demonstrate much deep thinking. Few members have technical backgrounds, and the body seems more preoccupied with assessing Hunter Biden’s application for a gun permit than with crafting complex regulations to protect the world from disaster.

Finally, there is the geopolitical reality. Assuming that Congress has the intelligence and will to act — two big assumptions — its reach ends at America’s borders. Even a thorough regulatory regime for AI inside the United States would confront the reality that this technology is available, and growing fast, across the world.

That last point brings home the other important aspect of this discussion. AI is loose upon the world already, and it is growing at speeds that defy human perception. In the time it takes to read this note, AI bots have learned more than any human has ever processed over any lifetime.

AI is part of our world. It is growing, and it is growing fast. Congress may lack the expertise and coherence to act perfectly, but there is no time to wait for it to develop either. The moment is upon us now.

Jim Newton

Jim Newton is a veteran author, teacher and journalist who spent 25 years as a reporter, editor, bureau chief, editorial page editor and columnist at the Los Angeles Times. He is the author of four critically acclaimed books of biography and history, including "Man of Tomorrow: The Relentless Life of Jerry Brown." He teaches in Communication Studies and Public Policy at UCLA.
