Opinion | Does Section 230 protect ChatGPT? Congress should say so.


The early days of OpenAI’s ChatGPT have been something of a replay of Internet history, starting with excitement about the invention and ending in trepidation about the damage it could do. ChatGPT and other “large language models” – artificially intelligent systems trained on large amounts of text – can turn out to be liars, or racists, or terrorist accomplices explaining how to build dirty bombs. The question is: When that happens, who is responsible?

Section 230 of the Communications Decency Act says that services – from Facebook and Google to movie-review aggregators and mommy blogs with comment sections – generally are not liable for most material posted by third parties. In those cases, it is easy enough to distinguish between the platform and the person posting. Not so with chatbots and AI assistants. Few have grappled with whether Section 230 protects them.

Consider ChatGPT. Enter a question and it will provide an answer. It doesn’t simply display existing content, such as a tweet, video or website contributed by someone else; it composes its own contribution in real time. The law says that a person or entity loses immunity if they “develop” content even “in part.” And doesn’t transforming, say, a list of search results into a summary qualify as development? Furthermore, the contours of each AI contribution are shaped significantly by the AI’s creators, who set the rules for their systems and mold their output by reinforcing the behaviors they like and discouraging those they don’t.

But at the same time, every response from ChatGPT is, as one analyst put it, a “remix” of third-party material. The tool generates its answers by predicting which word should come next in a sentence, based on which words come next in sentences across the web. And as much as the creators behind a machine inform its output, so do the users who ask it questions or engage it in conversation. All of this suggests that the degree of protection afforded to AI models may vary according to how much a given product creates versus synthesizes, as well as how deliberately a user has prodded a model into producing a given response.

So far, there is no legal clarity. Supreme Court Justice Neil M. Gorsuch suggested during oral argument in a recent case involving Section 230 that today’s artificial intelligence generates content that “goes beyond picking, choosing, analyzing, or digesting content” – and offered the hypothesis that such material “is not protected.” Last week, the authors of the provision agreed with his analysis. But the companies building this next frontier deserve a firmer answer from lawmakers. And to figure out what that answer should be, it’s worth revisiting the history of the Internet.

Researchers credit Section 230 with the web’s mighty growth in its formative years. Otherwise, endless lawsuits would have prevented any fledgling service from becoming a network as indispensable as a Google or a Facebook. That’s why many call Section 230 the “26 words that created the Internet.” The problem is that many now believe, in retrospect, that this lack of consequences encouraged the Internet not only to grow, but to grow out of control. With artificial intelligence, the country has a chance to act on the lesson it has learned.

That lesson should not be to preemptively strip Section 230 immunity from large language models. After all, it was a good thing that the Internet was able to grow, even as its ills grew with it. Just as websites could not have hoped to expand without the protection of Section 230, these products can’t hope to offer a wide range of answers on a wide range of topics, in a wide range of applications – which is what we should want them to do – without some legal shield. Yet the United States also cannot afford to repeat its biggest mistake in Internet governance, which was hardly governing at all.

Lawmakers should give the new AI models the temporary refuge of Section 230 while they watch what happens as this industry begins to boom. They should sort through the conundrums these tools create, such as who is liable in a defamation case if the developer is not. They should study the complaints that arise, including in legal proceedings, and assess whether changes to the immunity regime could have averted them. In short, they should let the Internet of the future grow like the Internet of the past. But this time, they have to pay attention.

The Post’s View | About the editors

Editorials represent the views of The Post as an institution, as determined through debate among members of the editorial board, based in the Opinions section and separate from the newsroom.

Members of the editorial board and focus areas: Opinion Editor David Shipley; Deputy Opinion Editor Karen Tumulty; Associate Opinion Editor Stephen Stromberg (national politics and policy, legal affairs, energy, environment, health care); Lee Hockstader (European affairs, based in Paris); David E. Hoffman (global public health); James Hohmann (domestic and electoral politics, including the White House, Congress and governors); Charles Lane (foreign affairs, national security, international economics); Heather Long (economics); Associate Editor Ruth Marcus; and Molly Roberts (technology and society).
