Table Talk | Fall 2023 Issue

Voices of Hope and Alarm

Leading experts consider the implications of AI

By Jim Newton

The sudden emergence of artificial intelligence has given scientists, policymakers, and others plenty to consider. As they wrestle with this rapidly developing technology, some are sounding the alarm while others see boundless potential. Their views run the gamut, and they offer a reminder that AI presents dizzying possibility along with genuine cause for concern, a balance that suggests the need for thoughtful regulation while also underscoring the difficulty of crafting it.

To sample that wide-ranging conversation, Blueprint here presents excerpts from important statements by, and interviews with, leading figures in the field.

 

On March 22, 2023, concerned scientists and others released a public letter warning that new forms of artificial intelligence are being developed and deployed rapidly, with uncertain implications for society:

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

“Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects.”

The letter specifically advised AI developers to pause: “Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

No such pause has been enacted, nor has any moratorium been instituted.

 

In May, hundreds of scientists, executives, and policymakers — from Sam Altman, the CEO of OpenAI, to Bill Gates and Congressman Ted Lieu — released a public statement warning of AI’s profound implications for humanity. Posted on the website of the Center for AI Safety, it bluntly equated AI with the best-known threats to human existence:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Also in May, NPR’s Bobby Allyn interviewed Geoffrey Hinton, the British-born computer scientist who has produced pioneering work on artificial intelligence for decades. In their conversation, Hinton explained why he did not sign the March letter calling for a pause or a government moratorium. An excerpt from their conversation:

“HINTON: These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening.

“ALLYN: He came to this position recently after two things happened — first, when he was testing out a chatbot at Google and it appeared to understand a joke he told it, that unsettled him; secondly, when he realized AI that can outperform humans is actually way closer than he previously thought.

“HINTON: I thought for a long time that we were, like, 30 to 50 years away from that. So I call that far away from something that’s got greater general intelligence than a person. Now, I think we may be much closer, maybe only five years away from that.

“ALLYN: Last month, more than 30,000 AI researchers and other academics signed a letter calling for a pause on AI research until the risks to society are better understood. Hinton refused to sign the letter because it didn’t make sense to him.

“HINTON: The research will happen in China if it doesn’t happen here because there’s so many benefits of these things, such huge increases in productivity.

“ALLYN: Now, what do those controls look like? How exactly should AI be regulated? Those are tricky questions that even Hinton doesn’t have answers to. But he thinks politicians need to give equal time and money into developing guardrails. Some of his warnings do sound a little bit like doomsday for mankind.

“HINTON: There’s a serious danger that we’ll get things smarter than us fairly soon and that these things might get bad motives and take control.”

 

Artificial intelligence tends to provoke extreme reactions. Some see it as salvation; others foresee catastrophe. One who takes a more balanced view is Eric Schmidt, the former CEO of Google. Here, an excerpt from a recent piece he wrote for MIT Technology Review:

“With the advent of AI, science is about to become much more exciting — and in some ways unrecognizable. The reverberations of this shift will be felt far outside the lab; they will affect us all.

“If we play our cards right, with sensible regulation and proper support for innovative uses of AI to address science’s most pressing issues, AI can rewrite the scientific process. We can build a future where AI-powered tools will both save us from mindless and time-consuming labor and also lead us to creative inventions and discoveries, encouraging breakthroughs that would otherwise take decades.

“AI in recent months has become almost synonymous with large language models, or LLMs, but in science there are a multitude of different model architectures that may have even bigger impacts. In the past decade, most progress in science has come through smaller, ‘classical’ models focused on specific questions. These models have already brought about profound advances. More recently, larger deep-learning models that are beginning to incorporate cross-domain knowledge and generative AI have expanded what is possible.

“Scientists at McMaster and MIT, for example, used an AI model to identify an antibiotic to combat a pathogen that the World Health Organization labeled one of the world’s most dangerous antibiotic-resistant bacteria for hospital patients. A Google DeepMind model can control plasma in nuclear fusion reactions, bringing us closer to a clean-energy revolution. Within health care, the U.S. Food and Drug Administration has already cleared 523 devices that use AI — 75% of them for use in radiology. …

“AI tools have incredible potential, but we must recognize where the human touch is still important and avoid running before we can walk. For example, successfully melding AI and robotics through self-driving labs will not be easy. There is a lot of tacit knowledge that scientists learn in labs that is difficult to pass to AI-powered robotics. Similarly, we should be cognizant of the limitations—and even hallucinations—of current LLMs before we offload much of our paperwork, research, and analysis to them.

“Companies like OpenAI and DeepMind are still leading the way in new breakthroughs, models, and research papers, but the current dominance of industry won’t last forever. DeepMind has so far excelled by focusing on well-defined problems with clear objectives and metrics. One of its most famous successes came at the Critical Assessment of Structure Prediction, a biennial competition where research teams predict a protein’s exact shape from the order of its amino acids.”

 

In September, the U.S. Senate convened a closed-door session with tech leaders to discuss AI and possible regulatory responses. Attending were such notables as Schmidt, Gates, Musk, and Mark Zuckerberg.

Some criticized the gathering for over-representing tech leaders at the expense of those who are being affected by artificial intelligence without having any say in its rollout. Among them was Caitlin Seeley George, managing director of the digital rights group Fight for the Future, who expressed her misgivings to the Guardian:

“People who are actually impacted by AI must have a seat at this table, including the vulnerable groups already being harmed by discriminatory use of AI right now. Tech companies have been running the AI game long enough and we know where that takes us — biased algorithms that discriminate against Black and brown folks, immigrants, people with disabilities and other marginalized groups in banking, the job market, surveillance and policing.”

 

Responding to growing concerns about AI, the White House has proposed five principles intended to protect the public while encouraging the technology’s positive potential. The principles, laid out in its Blueprint for an AI Bill of Rights, include “Safe and Effective Systems,” “Algorithmic Discrimination Protections,” “Data Privacy,” “Notice and Explanation,” and “Human Alternatives, Consideration, and Fallback.”

Here, an excerpt from the statement introducing the Bill of Rights:

“In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.

“These outcomes are deeply harmful—but they are not inevitable. Automated systems have brought about extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases in patients. These tools now drive important decisions across sectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone.

“This important progress must not come at the price of civil rights or democratic values, foundational American principles that President Biden has affirmed as a cornerstone of his Administration. …

“The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats — and uses technologies in ways that reinforce our highest values. … These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.”

Jim Newton

Jim Newton is a veteran author, teacher and journalist who spent 25 years as a reporter, editor, bureau chief, editorial page editor and columnist at the Los Angeles Times. He is the author of four critically acclaimed books of biography and history, including “Man of Tomorrow: The Relentless Life of Jerry Brown.” He teaches in Communication Studies and Public Policy at UCLA.
