The Guardian

A lawyer may have got ChatGPT to do his research, but he isn’t even AI’s biggest fool

John Naughton

It all began in August 2019, when Roberto Mata was a passenger on Avianca flight 670 from El Salvador to New York and a metal food and drink trolley allegedly injured his knee. As is the American way, Mata duly sued Avianca and the airline responded by asking that the case be dismissed because “the statute of limitations had expired”. Mata’s lawyers argued on 25 April that the lawsuit should be continued, appending a list of more than half a dozen previous court cases that apparently set precedents supporting their argument.

Avianca’s lawyers and Judge P Kevin Castel then dutifully embarked on an examination of these “precedents”, only to find that none of the decisions or the legal quotations cited and summarised in the brief existed.

Why? Because ChatGPT had made them up. Whereupon, as the New York Times report puts it, “the lawyer who created the brief, Steven A Schwartz of the firm Levidow, Levidow & Oberman, threw himself on the mercy of the court… saying in an affidavit that he had used the artificial intelligence program to do his legal research – ‘a source that has revealed itself to be unreliable’.”

This Schwartz, by the way, was no rookie straight out of law school. He has practised law in the snakepit that is New York for three decades. But he had, apparently, never used ChatGPT before, and “therefore was unaware of the possibility that its content could be false”. He had even asked the program to verify that the cases were real, and it had said “yes”. Aw, shucks.

One is reminded of that old story of the chap who, having shot his father and mother, then throws himself on the mercy of the court on the grounds that he is now an orphan. But the Mata case is just another illustration of the madness about AI that currently reigns. I’ve lost count of the number of apparently sentient humans who have emerged bewitched from conversations with “chatbots” – the polite term for “stochastic parrots” that do nothing except make statistical predictions of the most likely word to be appended to the sentence they are at that moment engaged in composing.

But if you think the spectacle of ostensibly intelligent humans being taken in by robotic parrots is weird, then take a moment to ponder the positively surreal goings-on in other parts of the AI forest.

Last week, for example, a large number of tech luminaries signed a declaration that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. Many of these folks are eminent researchers in the field of machine learning, including quite a few who are employees of large tech companies. Some time before the release, three of the signatories – Sam Altman of OpenAI, Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic (a company formed by former OpenAI employees) – went to the White House to share with the president and vice-president their fears about the dangers of AI, after which Altman made his pitch to the US Senate, saying that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models”.

Take a step back from this for a moment. Here we have senior representatives of a powerful and unconscionably rich industry – plus their supporters and colleagues in elite research labs across the world – who are on the one hand mesmerised by the technical challenges of building a technology that they believe might be an existential threat to humanity, while at the same time calling for governments to regulate it. But the thought that never seems to enter what might be called their minds is the question that any child would ask: if it is so dangerous, why do you continue to build it? Why not stop and do something else? Or at the very least, stop releasing these products into the wild?

The blank stares one gets from the tech crowd when these simple questions are asked reveal the awkward truth about this stuff. None of them – no matter how senior they happen to be – can stop it, because they are all servants of AIs that are even more powerful than the technology: the corporations for which they work. These are the genuinely superintelligent machines under whose dominance we all now live, work and have our being. Like Nick Bostrom’s demonic paperclip-making AI, such superintelligences exist to achieve only one objective: the maximisation of shareholder value; if pettifogging humanistic scruples get in the way of that objective, then so much the worse for humanity. Truly, you couldn’t make it up. ChatGPT could, though.

What I’m reading

John Naughton’s recommendations

Keeping it lo-tech

Tim Harford has written a characteristically thoughtful column for the Financial Times on what neo-luddites get right – and wrong – about big tech.

Stay woke

Margaret Wertheim’s Substack features a very perceptive blogpost on AI as symptom and dream.

Much missed

Martin Amis on Jane Austen over on the Literary Hub site is a nice reminder (from 1996) of the novelist as critic.
