Research | Fall 2023 Issue

The Law and the Machines

As technology leaps ahead, the law struggles to keep pace

By Jon Regardie

On a Wednesday afternoon in June, the United States Senate's Judiciary Committee took up a question unwieldy even by its standards, with a hearing titled “Artificial Intelligence and Intellectual Property — Part 1: Patents, Innovation and Competition.” Over 94 minutes, elected officials heard testimony from and posed questions to five industry experts, a panel that included multiple Californians, a fact that Sen. Alex Padilla noted with pride.

Key to the discussion was whether “ownership” of any advance made with AI should belong to those using the system or to the creator of the underlying AI model.

“I think this is one of an endless number of hearings that we’re going to have to have to make sure we get it right,” remarked Sen. Thom Tillis (R-N.C.), who separated himself from many legislators by mentioning that, in the mid-1980s, he worked on AI when the focus was on character and voice recognition. He added, in a comment that was simultaneously right on the mark and an extraordinary understatement, “As the tools continue to explode, the challenges are going to be great.”

The Washington, D.C., hearing was the type of detailed examination that could cause eyes to glaze over. But when it comes to AI and legal impacts, patents and copyright are just the beginning. Talk to any attorney or legal scholar and they’ll describe the nascent efforts to anticipate how the technology will impact the field, and where benefits and pitfalls lie. Everyone agrees that change is coming — the real question is, will the apple cart be upset, or will it be immolated and then replaced, perhaps by something designed by a machine?

Many think the best point of comparison for this technology is earlier technologies. Will AI’s impact on the legal sector be like when Google allowed the world to search for, er, everything, instantly? Is it as seismic as the internet itself? Or even larger?

That was the idea proffered when I broached the subject with Paul Rohrer, deputy chair of the real estate practice in the Los Angeles office of the prominent law firm Loeb & Loeb. “One of my partners said, ‘This is like when computers first started to come into use. First all we did was play with them. Then we got faster with them,’” Rohrer remarked.

“That’s where we are at this moment,” he added. “Attorneys in their spare time are playing with it and working it into their routine where they can, but it isn’t quite ready to be used as predominantly or as effectively as we think it will be within six months to a year.”

Where technology and law meet

The Senate, of course, is not the only body seeking to answer questions that people are and will be asking, including queries that would have been unthinkable even a few years ago. There’s also the UCLA Institute for Technology, Law & Policy (ITLP), which was formed in 2020 and brings together experts and practitioners from the UCLA School of Law and the Samueli School of Engineering.

“So many of the really pressing policy questions today,” said John Villasenor, the faculty co-director for the ITLP, “have a nexus where you have both technology and law.”

Villasenor, who also is a professor of electrical engineering, law, public policy, and management at UCLA, was on the panel that testified before the Senate. In a recent interview, he expounded on why the seemingly disparate entities need to be working closely together.

“If you think about all the debates about artificial intelligence policy and digital privacy and cybersecurity, driverless cars, these are all areas where you have both the technology angle and the law and policy angle,” he said. “So having a formal UCLA entity that’s housed jointly in Law and Engineering that engages some of these issues, it’s important.”

If you are not a lawyer but have thought at all about AI and the legal field, chances are it’s because of a well-publicized gaffe that occurred this spring in Manhattan. U.S. District Judge P. Kevin Castel grew angry when attorneys in a lawsuit against Avianca Airlines filed a legal document that cited cases that didn’t exist. It turned out that ChatGPT, which the attorneys had used for research, invented the cases, and the lawyers never thoroughly checked the facts. A judicial dressing-down and public ridicule followed.

While that opened the door to a sort of Murphy’s Law view of generative AI, and added to the worries of those fretting about a Skynet-style machine takeover, Villasenor urged tapping the brakes.

“I think there is a lot of doomsaying about AI and a lot of fearmongering about it,” he stated. “I don’t think it’s going to be the end of civilization as we know it. I think it’s going to be largely a positive technology. Like any technology there will be instances where it’s used for malicious or otherwise problematic purposes. But I think on balance it’s going to be positive.”

That doesn’t mean the road will be without potholes. One topic in legal circles is what happens, and who is ultimately responsible, when a journalist or someone else spreads a falsehood generated by an AI query. Then there is another aspect of “ownership.” In July, comedian Sarah Silverman made headlines by joining class-action lawsuits against OpenAI (the maker of ChatGPT) and Meta over copyright infringement. Other suits have followed.

One widely cited paper, “Talkin’ ’Bout AI Generation,” notes that AI bots confound traditional views of copyright by borrowing so widely and instantly from unknown sources that they “break out of existing legal categories.” The authors attempt to address those issues by mapping the generative-AI “supply chain” and identifying who is responsible at each stage, a starting point for thinking about how to confront legal liability.

Beyond the familiar terrain of copyright and defamation — staples of communications law — lie more distant and uncertain territories in the law. Who will be held responsible when a robot police dog kills a suspect? Will AI be employed to consider evidence, represent defendants, or impose sentences in criminal proceedings? What will become of judges? These questions, yet to present themselves in cases, haunt the dreams of AI theorists.

Gains and hurdles

One thing is certain: the law is far behind technology. In March, on the Brookings Institution website, Villasenor penned an article titled “How AI will revolutionize the practice of law.” It detailed some of the changes coming, both opportunities and challenges. Chief among the benefits is improved efficiency — Villasenor discussed the task of pulling salient information out of huge sets of documents during the discovery phase. “AI will vastly accelerate this process, doing work in seconds that without AI might take weeks,” he wrote.

A knock-on benefit of those freed-up hours could be lower costs and thus a broadening of access to legal services, including for clients who might now be locked out. At the same time, the article noted that attorneys will have to learn a new suite of skills, and that humans will still be needed to verify that the machine’s work is reliable, a lesson the New York episode made plain.

Rohrer points out that in his practice, if he needs to draft a certain kind of legal letter, he could potentially give AI the parameters and point to past examples, and the work would, again, be completed in seconds. Before the technology, he might have handed the task to a junior attorney, who could spend several hours on it. And, as the attorneys learned the hard way in the Avianca Airlines case, the resulting letter would need to be reviewed by a human to check AI’s tendency to make up facts and cases.

That exemplifies the crossroads of gains and hurdles — Rohrer frees up time and potentially reduces costs for a client. But the tool could remove a valuable learning opportunity and create room for errors.

“The areas that are being replaced are the sort of routine things you would do when you are junior learning how to be senior,” Rohrer explained. “My concern is for the generation coming after me: How do you know how to manage the machine if you don’t know how to do what the machine is doing?”

Another facet of AI and the legal field involves how services will be deployed. National or global firms with extensive resources may hire engineers and tech wizards to develop their own proprietary in-house models. Small or midsize firms may, at least in the early stage, contract with one of a batch of companies providing off-the-shelf AI legal services.

In April the start-up Harvey announced that it had raised $21 million from investors, with the aim “to redefine professional services, starting with legal.” Then there’s Casetext, a 10-year-old business that in March debuted CoCounsel, which it dubs “the world’s first reliable AI legal assistant.”

This barely scratches the surface of future generative AI issues. There will be questions as to whether government legal divisions, which have a reputation for moving slower than their private-sector counterparts, get on board with AI quickly, or fall behind.

Could AI, for instance, make decisions about who is entitled to benefits from certain government programs? It could almost certainly speed up notoriously slow practices, but it also has been shown to engage in racial and gender discrimination, and infecting government systems with those types of biases raises whole new areas of concern.

There is also the matter of training the next generation. Villasenor is teaching a course called “Digital Technologies and the Constitution,” whose description includes an examination of elements of AI.

For all the advances and uncertainties that loom in the future, there is something else that experts seem to agree on — no matter how good the machine, there is still not only a place but a requirement for skilled humans in the legal field. AI can’t help ease the concerns of an antsy client or supply the strategic wisdom of an experienced counselor. As Villasenor noted in his Brookings article, artificial intelligence lacks the power to make a convincing argument to a jury.

In other words, the future of the legal field may be increasingly technical, but humans matter.

Said Villasenor, “We’ll still need good, competent attorneys to engage with the many challenging issues where attorney services are so important.”

Jon Regardie

Jon Regardie spent 15 years as editor of the Los Angeles Downtown News. He is now a freelance writer contributing to Los Angeles Magazine and other publications. jregardie@gmail.com
