Research | Fall 2023 Issue

The Human Impact of Tech

Keeping humans in mind as technology marches ahead

By Lauren Munro

JOHANNES GUTENBERG INVENTED THE printing press in the mid-1400s. The first computers were developed more than 100 years ago. The public has had access to the World Wide Web for 30 years. Phones could unlock with a fingerprint, then a facial scan. Now, artificial intelligence creates music, generates art, writes essays, and engages in conversation with its users. Technological innovation has demonstrated exponential growth, and it’s time to evaluate where this growth is headed.

Tao Gao doesn’t subscribe to streaming services. He doesn’t like the aggregation of his personal data. His objection isn’t to the technology itself but to how streaming companies use it to collect his personal information and capitalize on it.

Gao, jointly appointed to the departments of statistics, communication, and psychology at UCLA, is an assistant professor and researcher with an academic background in psychology. His involvement with artificial intelligence began with a desire to replicate human behavior through machines. He received his Ph.D. in cognitive psychology from Yale, where he took an interest in cognitive modeling. “We have some idea of how the human mind works,” Gao said in an interview, “[and] we want to build an engineering system so that we can mimic the human mind.”

He described the process of building these models as “reverse engineering.” After creating a model of the human mind, he gave a human and the model the same tasks. If they succeeded or failed in similar ways, he said, “The model really captured how the human mind works.”

Even more than creating cognitive models, Gao is passionate about ensuring their transparency. “If [someone] makes a mistake, but they tell us why they are making that mistake, as long as they can explain transparently why they are making a certain kind of error, you can still build some kind of trust on top of that,” he said. “These days, we have trouble with machines as they get more and more powerful. We have no idea why they are being so powerful.” By refining the cognitive development of these models, Gao aims to ensure a transparent relationship between humans and machines.

ChatGPT changes conversation

AFTER EARNING HIS PH.D., GAO JOINED MIT’S Center for Brains, Minds and Machines as a postdoctoral fellow. There, he continued to pair research of human intelligence with that of machines. Before moving to UCLA, he was also a research scientist in the Computer Vision and Machine Learning labs at GE Global Research.

At UCLA, Gao conducts AI-related research under a grant from the U.S. Department of Defense, where he offers his cognitive science perspective. He also teaches courses in the UCLA departments of communication and statistics. Since March 2022, Gao has taught a course titled AI and Society, which examines the ethical, legal, and economic implications of artificial intelligence. From self-driving cars and video surveillance to job displacement and voting manipulation, the course covers a wide range of AI-related topics. Although the course is relatively young, he said, “I’ve had to dramatically change at least one-third of my material [since March of 2022]. [And] things have gotten really crazy since then.”

Gao was referring to the release of the large language model-based chatbot ChatGPT at the end of 2022. “GPT shows up, and now this is a completely different game,” he said. “Scientists working on this topic are still reckoning with it. I don’t know to what degree the public actually experienced that.”

GPT, Gao said, is “the first time Google is being seriously challenged by something new.”

As he discussed the applications of AI software, including ChatGPT, Gao shared his excitement about how he benefits personally and professionally. Born in China and not a native English speaker, he noted GPT’s ability to diminish language barriers.

He said GPT also speeds up his writing process. “I’m in academia. I need to write a lot of papers,” he said. “I always find it painful to turn an outline of an idea into polished writing. But now this part completely disappears. I just need to think. I can focus on the most lucrative parts.” He applauds GPT for making his research more enjoyable.

“But I can’t just let it write my papers, and what it writes would not pass the peer review process,” he said. That’s because large language models are built from big data: they look for statistical patterns in the language they are fed. “GPT writing … won’t be new or fresh or sharp because it’s reusing the most common language. It’s anti-innovative.”

Possibilities and risks

GPT CAN BE A POWERFUL TOOL FOR RESEARCH outside of academia as well, Gao said. Medical research, for example, publishes findings beyond what any one person could read in a lifetime. He sees great potential for large language models to efficiently consume and connect these findings, leading to new discoveries. For the self-employed and small-business owners, Gao said, GPT offers a “golden age” for quick, cheap legal or financial advice.

As artificial intelligence demonstrates its emerging capabilities, however, concern arises about its socioeconomic impact. “Instead of an AI tool that can do your chores and take out your trash,” Gao said, “we have a tool that can write poems and replace your job. That’s what’s unexpected. …

“This is a moment for us to think,” he said. “Do we really need something more powerful than GPT-4? What happens after GPT-4?” If AI becomes capable of creating and innovating, what jobs will be left? “If you are the one making the decisions — making the call, asking the questions — I don’t think GPT is going to hurt your job. It might make your job easier. If you are the one summarizing or searching, then [your] job could be easily replaced by GPT.”

Job displacement by automation isn’t new, but it will be amplified by the development of artificial intelligence. Is regulation to protect workers feasible? Gao thinks this discussion has been needed for a while, and he expresses urgency for lawmakers to pay more attention.

“Only a few players can train large language models,” Gao said. He worries that AI technologies are being monopolized by a small number of private companies because of high development costs. He advocates equal access to technology and its innovation.

Nationalizing AI research and innovation could be hazardous, he said. “We don’t want to end up in an arms race of AI against different countries.” If nations begin rushing the development of AI, there’s risk in the unintentional prioritization of capability over safety. “It might not be as dangerous as a nuclear weapon, but it would be much more difficult to control.”

If AI innovation continues to be fueled by private funding instead of public research, Gao is concerned about what might be going on behind the curtain. The few players with resources to develop artificial intelligence, he said, “care about profits.” And as long as there’s profit, he doubts that regulation will emerge to reserve jobs strictly for humans, free of AI interference.

For Gao, one distinction is crucial: What is human and what isn’t?

“We are social creatures. We enjoy talking to other people. We enjoy sharing our humanity. What’s the point in replacing that?” he asked. He worries that the technologies being created will generate a scarcity of human social interaction or, as he put it, “blur the lines of humanity.”

“Let’s make it very clear: What is AI, and what is human. And do not mix them.”

To his dismay, Gao said, companies with a stated mission to “connect people” have a real mission to increase user screen time. “We’re not the user — we’re generating revenue for them,” he said. One sensible form of regulation Gao wishes for would bar companies from using AI to hook users on their products.

Meanwhile, he will physically rent the movies and other shows he wants to watch.

Lauren Munro

Lauren Munro is a recent UCLA graduate interested in the ethical implications of technology and advertising.
