Editor’s Note | Fall 2023 Issue

Have we reached "The Terminator" moment?

By Jim Newton

Generative artificial intelligence is not just another computer program. It’s not a piece of software or a novel algorithm or even traditional AI. Generative AI does not merely mimic or predict human behavior, anticipating which book you might like to read or braking to avoid crashing into the car in front of you. It creates. It draws from a vast range of sources to produce new works, ones never touched by a human mind.

This became bracingly clear to many people in early 2023, when San Francisco-based OpenAI introduced ChatGPT to the consumer market. ChatGPT quickly demonstrated that an AI bot, given the simplest of prompts, could produce a piece of writing that comes fairly close to what a human might produce — if that human were in about 10th grade and not very gifted.

Launched in November of 2022, ChatGPT had 100 million users by January and 173 million by April.

Other applications soon followed, ones that produce art, fake photographs, screenplays — all manner of work that, until just a few months ago, seemed exclusively the product of human creativity.

Have we then arrived at “The Terminator” moment, the pivot when computers become sentient, armed not only with prodigious knowledge but also with human-like characteristics – desires to improve, to care, to love, to defend themselves? Not exactly, but this still feels like a threshold in history, and a risky one, too.

The answer, of course, is less simple than in the movies. There is a serious argument that computers have now reached a form of sentience. They are aware. They can marshal facts and imitate humans, sometimes convincingly enough to fool us. They may not “feel” in the sense that we are accustomed to feeling, but they can express themselves in emotional terms. And even a modicum of humility should allow humans to acknowledge that we may feel differently from other living beings. We have different emotional structures from dogs, for instance, but no one who has ever loved a dog would deny that the dog was capable of being loved and of loving in return. Sentience is not exclusively human.

As computers approach and attain something that resembles sentience, the natural next question is what that may mean. That is the question at the heart of this issue of Blueprint.

The implications of generative AI are profound, and profoundly mixed. Generative AI offers, for instance, hope on the problem of climate change. Arresting the world’s slide into heat presents perhaps the greatest challenge ever to confront humanity, with dizzying technical and political obstacles. Artificial intelligence is unlikely to resolve the geopolitics of confronting climate change, but if it could help to generate technical solutions, it might lead the world to political consensus around those solutions.

At the same time, the prospect of turning over immense systems – national defense, the power grid and the delivery of government services, to name a few – to the control of technical overseers with objectives of their own is the stuff of the scariest science fiction. What if, to conjure just one scenario, AI concluded that the solution to climate change was the elimination of a billion humans? Given the power to act, what might it do with that conclusion?

The other salient fact about generative AI is that it not only learns, it learns very, very quickly. It has access to the internet – something close to the sum of all human knowledge – and it never stops iterating, so while you take a moment to look up a fact, it has done so a million times over and moved well beyond where you could go. ChatGPT may write like a so-so teenager today, but it will be better tomorrow, and better still the day after that. What, then, will humans be left to do?

These are among the gravest and most exciting questions that confront humanity today. With this issue, Blueprint hopes to pose and frame them — answering them is still a ways off — as well as to introduce some of the researchers and policymakers who are grappling with their dimensions. It is only through the collaboration of smart research and committed policy that we might find that balance where AI contributes its gifts without wrecking what matters.

Jim Newton

Jim Newton is a veteran author, teacher and journalist who spent 25 years as a reporter, editor, bureau chief, editorial page editor and columnist at the Los Angeles Times. He is the author of four critically acclaimed books of biography and history, including "Man of Tomorrow: The Relentless Life of Jerry Brown." He teaches in Communication Studies and Public Policy at UCLA.
