Research | Fall 2023 Issue

The “Lie Factory”

Generative artificial intelligence is changing — and improving — life for humans. Can humans control it?

By Lisa Fung

SARAH T. ROBERTS, ASSOCIATE PROFESSOR of information studies at UCLA, has spent the bulk of her career observing, evaluating, and studying online life and how it impacts the well-being of society. That has often led her to consider the need for U.S. government oversight to ensure ethical innovation by Silicon Valley companies, which may face different scrutiny overseas.

“My first and foremost concern is always about human beings, which strangely tends to be something that gets lost in the focus on technology and the debates around technology,” said Roberts, who is also co-founder of UCLA’s Center for Critical Internet Inquiry. “We don’t talk about how technology might actually work or how humans may actually be impacted.”

Her concerns have come to the forefront amid recent innovations in AI.

Though many may not be aware of it, AI touches nearly all aspects of everyday life. Websites use AI to offer product recommendations, create music playlists, or suggest streaming content. Banks use AI to detect fraudulent charges on your credit card. Traffic signals run using AI technology. Cars employ AI in GPS apps, voice-recognition features, and self-driving systems. If you use Alexa or Siri, you’re using AI.

Much focus, of late, has turned to generative AI. Unlike regular machine learning, which uses data and algorithms to predict results in order to perform a task — such as recognizing an image or a voice — generative AI uses the data it collects to create original material, such as text, images, audio, or video. It relies on deep-learning models that can analyze large sets of raw data, such as the entire works of Shakespeare or all of Wikipedia, and then generate new information based on what it has been programmed to “learn.”
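
To make the distinction concrete, here is a deliberately tiny, hypothetical sketch in Python of the generative idea: a program that first “learns” statistical patterns from raw text and then produces new text from those patterns. It is not how systems like ChatGPT actually work; they rely on deep neural networks trained on enormous datasets, but the learn-then-generate loop is the same in spirit.

```python
import random
from collections import defaultdict

# Toy character-level "generative model" for illustration only: count which
# character tends to follow each two-character context in the training text,
# then sample new text from those counts. Real generative AI uses deep neural
# networks trained on vastly larger datasets, but the basic loop is the same:
# learn patterns from raw data, then generate new material from them.

training_text = "to be or not to be that is the question "

# "Training": record every observed next character for each two-character context.
model = defaultdict(list)
for i in range(len(training_text) - 2):
    context = training_text[i:i + 2]
    model[context].append(training_text[i + 2])

# "Generation": start from a seed and repeatedly sample a plausible next character.
def generate(seed="to", length=40):
    out = seed
    for _ in range(length):
        choices = model.get(out[-2:])
        if not choices:  # unseen context: stop early
            break
        out += random.choice(choices)
    return out

print(generate())  # new text that mimics the statistics of the training text
```

Scaled up from a forty-character string to a large slice of the internet, and from simple character counts to billions of learned parameters, this learn-then-generate pattern is, in rough outline, what sits behind the chatbots and image generators described above.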

The possibilities of technology that creates rather than merely analyzes or recognizes are boundless — ranging from the creative arts to medical research — and also worrisome. The same capacity that could guide responses to climate change also might slip away from human control, with uncertain and unsettling potential.

And there is big money at stake. Tech companies see potentially huge payoffs in generative AI, so Google, Microsoft, Meta, and others are racing to develop chatbots and related tools, such as the text-to-image generators Midjourney and DALL-E 2 and the AI voice generator Speechify. Perhaps the best known is OpenAI’s chatbot ChatGPT, which was released to the public in November 2022 and has been making headlines ever since.

“It seems like we’re in a particularly evolutionary moment,” Roberts said. “Oftentimes, we lose sight of how much human decisionmaking and other human traits, like hubris or greed, go into creating tools that might not be in everyone’s best interest.”

“Thinking Machines”

AI HAS BEEN AROUND FOR DECADES. In his seminal 1950 paper, “Computing Machinery and Intelligence,” Alan Turing, the father of modern computer science, posed the question: Can machines think? AI was recognized as a field of study in 1956, when John McCarthy, a professor at Dartmouth College, held a summer workshop to study “thinking machines.”

In popular culture, AI has regularly been portrayed in the form of villainous machines with minds of their own, like HAL 9000 in “2001: A Space Odyssey,” the murderous doll in “M3GAN,” or the machines that run the simulated world of “The Matrix.” Real-life applications of AI are less nefarious: analyzing large amounts of data based on trained scenarios; generating art or music; translating text; and facilitating drug development or patient treatments.

But AI’s rollout into different walks of life has created new conflicts: Privacy breaches, identity theft, and workforce disruption are among the first; broader applications may encroach on other fields in ways that are hard to predict. Already, several class-action lawsuits have been filed, challenging the use of copyrighted material scraped from the internet and used in AI creations or as data to train computers. Concerns that actors would be replaced with digital replicas or that AI would be used to generate scripts were among the key areas of contention in the SAG-AFTRA and WGA strikes in Hollywood.

“All of these issues are really intertwined,” Roberts said. “There have been a lot of claims over the years about what computers can do and can’t do, and usually it’s oversold. That’s been the case with AI for some time.”

Although some worry that AI will destroy industries or render certain job types irrelevant, Roberts believes it’s more likely the technology will devalue work because machines can do a passable job of the things humans do well.

“What companies always want to do is lower labor costs, and one way is to show that, hey, we’ve got machines that can pretty much replace you,” she said. “Oh, they generate bogus citations. Well, we can live with that. Oh, the writing style is pretty crap, and there’s no creativity. We can live with that.”

Human input remains a necessary ingredient of the technology. And the technology is only as good as it is trained to be — by humans. Engineers who write the AI algorithms may use data that reflects their personal biases, or their data could be flawed. For example, the use of AI for predictive policing or facial recognition technology has been shown to disproportionately target Black and Latino people. And because AI is generative, those algorithms are only a launch point; the problems may grow worse as bots learn and grow from the internet, itself home to bias, lies, and the full multitude of other human failings.

“There’s an adage in computer science and in software engineering: ‘Garbage in, garbage out.’ I think of that a lot with regard to AI,” Roberts said. “How are we dealing with some of these systemic problems that this tech will not only mirror because it is being built on data that exists in the world already — that are likely biased, flawed, incorrect, etc. — because that’s what the internet is made up of. It may also not only mirror that but may amplify that or put new garbage into the world.”

Machines are good at certain tasks, such as spam detection. But “a completely computational solution is a bad idea, for a lot of reasons,” she said. The more complex, challenging, and difficult material is best evaluated by humans.

It’s rare that situations are straightforward. For example, Roberts said, imagine that a computer has been asked to prevent the dissemination of images that are harmful to children. If it discovers a video that shows a child in distress, who’s been harmed and is bleeding, it could detect those properties — Child. Bleeding. Distress. — and delete the video. But, she said, what if the video is from a war zone, and the people who posted the images did so to call attention to atrocities? A machine wouldn’t have a moral compass to weigh the meaning or context of those images.
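
Her example can be caricatured in a few lines of code. The sketch below is hypothetical; the labels, videos, and rule are invented for illustration and are not drawn from any real moderation system. A filter that looks only at detected properties removes both videos, because nothing in the rule can represent why the footage was posted.

```python
# Hypothetical, deliberately naive moderation rule for illustration only:
# it sees detected labels, never the intent or context behind the footage.
FLAGGED_LABELS = {"child", "bleeding", "distress"}

def should_remove(detected_labels):
    """Remove any video whose detected properties include all flagged labels."""
    return FLAGGED_LABELS.issubset(detected_labels)

abuse_video = {"child", "bleeding", "distress"}
war_zone_documentation = {"child", "bleeding", "distress", "war_zone"}

print(should_remove(abuse_video))             # True: removed, as intended
print(should_remove(war_zone_documentation))  # True: also removed; context is invisible to the rule
```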

“Hollywood maybe is the first industry to respond in this way to AI,” Roberts said, “but they won’t be the last — and they mustn’t be the last.”

From chatbot to lies

AFTER SPENDING 30 YEARS ON THE INTERNET and more than a decade studying the secretive world of commercial content moderation, including a stint working at Twitter, Roberts has become an internationally recognized expert on internet safety. Her research and her book, “Behind the Screen: Content Moderation in the Shadows of Social Media,” helped to expose harmful labor practices by mainstream social media companies that hire low-wage workers to screen and evaluate posts and remove offensive material. She documents the emotional and psychological toll this job takes on employees, many of them contract workers, whose mere existence was — and still is — largely denied by the social media companies.

Her interest in AI was piqued when chatbots were touted as a potential solution or auxiliary tool for content moderation. So, before ChatGPT became so widely used, Roberts began casually experimenting with it.

What she discovered, she said, was “a big lie factory.”

“If you’ve ever used ChatGPT, the first impression you have is, ‘Oh my God, it’s already doing something,’ which is an interesting design feature to kind of instill confidence,” she said of the application’s immediate response to her natural language queries about important academic works on content moderation. At first, it listed a book by a colleague. Then came other citations, many new to her. She realized something was amiss when she didn’t recognize about 90% of those citations in her area of expertise.

“One of the authors it kept naming in the citations had the last name of Roberts but a different first initial — it was like it was pulling from me, kind of, but remixing it,” she said. “The citations looked completely legitimate. They were using real people’s names; they were citing actual journals in the field of internet studies that are legitimate. Real publishers were mentioned for the books. It was just weird.”

Roberts tried to verify one of the listed journal articles. “It gives the volume, it gives the issue, it gives the page numbers, and I thought, well, let me go look,” she recalled. “And it was totally ginned up and bogus.”

Today the prevalence of fabricated sources and faulty data is well documented. AI’s propensity to make up information, a phenomenon called “hallucination,” can arise for a number of reasons, such as training datasets that are incomplete, inaccurate, or biased. Because they lack human reasoning, AI tools can’t filter out these inconsistencies, and the resulting output may amplify the misinformation. Developers can address the lies and misstatements by adding new guardrails — but first the errors must be detected. That task often falls to the public, Roberts said, and everyday users typically have no way of knowing how accurate the information is.

“The insidious part is that they present themselves as value-neutral, which, of course, they’re not,” she said. “Presenting themselves as value-neutral is dangerous because it gives the public a false sense about their veracity.”

The world as beta

PART OF THE REASON FOR CONCERN, Roberts said, is because “there’s just no one who seems to be able to reasonably forestall any of this. The result is that the entire world becomes the beta tester for something that maybe should have a longer period of break-in before it’s unleashed.”

The alarm has come from some of those who seem well-positioned to be worried.

The breakneck speed at which AI is developing led top AI researchers, engineers, and other notables earlier this year to release a 22-word statement on the “risk of extinction” that should be “a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Among those who signed that statement were Geoffrey Hinton, the so-called godfather of AI, Bill Gates, OpenAI CEO Sam Altman, and Anthropic CEO Dario Amodei.

The statement came on the heels of an open letter signed by Apple Computer co-founder Steve Wozniak, Tesla CEO Elon Musk, and others in the technology field, calling on all AI labs to pause the training of AI systems more powerful than GPT-4 for six months.

A Pew Research Center canvass of 305 technology innovators and developers, business and policy leaders, researchers, and activists found that many anticipate great advancements in both healthcare and education between now and 2035, thanks to AI. Many of the experts, however, expressed concerns, ranging from the speed at which the technology is developing, and how it will be used, to fears echoing those raised in the AI researchers’ statement.

Even the World Health Organization issued an advisory calling for rigorous oversight to ensure “safe and ethical AI for health.”

In June, U.S. Rep. Ted Lieu (D-Calif.) introduced bipartisan legislation that would create a national commission to make recommendations on the best ways to move forward on AI regulation. (Lieu is profiled elsewhere in this issue of Blueprint.)

One month later, the Washington Post reported that the Federal Trade Commission had opened an expansive investigation into whether OpenAI “engaged in unfair or deceptive privacy or data security practices.”

Shortly after that, leaders of seven major AI companies — Microsoft, Google, Amazon, Meta, Anthropic, Inflection, and OpenAI — met with President Joe Biden at the White House and agreed to voluntary safeguards for AI.

“Americans are seeing how advanced artificial intelligence and the pace of innovation have the power to disrupt jobs and industries,” Biden said at a news conference. “These commitments are a promising step, but we have a lot more work to do. Realizing the promise of AI by managing the risk is going to require some new laws, regulations, and oversight.”

Balancing promise and alarm

“Let’s not be foolish — of course there are positive applications of those technologies that are very appropriate and that can push the needle positively as a social good,” Roberts said. “I welcome advances in medicine — cancer research and discovery. Those are good things. But that doesn’t mean that we should fully release this extraordinary computational power without any type of guardrails or any kinds of admonishments about what could go wrong. It seems like there might be a middle ground that we could find.”

A model for those safeguards is already used by agencies such as the U.S. Food and Drug Administration in its regulation of pharmaceuticals, the U.S. Department of Agriculture with its standards for food protection, the Federal Communications Commission with legacy broadcasters, and the U.S. Environmental Protection Agency with its environmental standards. These agencies put the onus on the industry or developer to prove it has lived up to safety and other kinds of regulatory guidelines.

“There are ways to introduce friction into these processes that could give the time and space needed to do more evaluation,” Roberts said. “As of today, that just doesn’t exist. Tech gets to do whatever it wants to do and release it and unleash it on the public. They have really gotten away with very little intervention in a way that other industries don’t. And the public has very little recourse.”

Roberts said she remains cautiously optimistic about the recent efforts by the U.S. government, and she hopes they spark more public conversations. But much work is needed, she said. The European Union has been more assertive than the U.S. in trying to regulate technology.

“Many people argue that they get it wrong in the EU. That may be. But they’re trying. The answer is not do nothing because no one can get it right. That’s never been a solution,” she said. “Even though we’re late, it shouldn’t be an excuse not to do anything.”


Lisa Fung is a Los Angeles-based writer and editor who has held senior editorial positions at the Los Angeles Times and TheWrap.com.
