From Benedict Evans:
In the late 1990s, the UK Post Office deployed a new point-of-sale computer system, built for it by Fujitsu. Almost immediately, post-masters, who are self-employed and generally small independent retailers, started reporting that it was showing shortfalls of cash; the Post Office reacted by launching prosecutions for theft. Close to a thousand people were convicted over the next 15 years, many more were falsely accused and forced to settle, and there were at least four suicides.
This is so incredibly sad.
But we don’t look at this scandal and say that we need Database Ethics, or that the solution is a SQL Regulator. This was an institutional failure inside Fujitsu and inside the Post Office, and in a court system failing to test the evidence properly.
Yes, a thousand times this, and the same logic applies to copyright concerns about ChatGPT and the like. The NYTimes suit shows how a properly prompted GPT can spit out verbatim, plagiarized prose. So what? Properly prompted, I can do that as well, but that doesn’t make me illegal; it makes doing it illegal.
ChatGPT isn’t prompting itself. For ChatGPT to regurgitate a copyrighted work, someone has to ask it to, and if they do so in private (or if the prompter is the Times and already owns the copyright), then it’s not a crime — just like making a copy of a newspaper and never showing anyone isn’t a crime. Now, if I repeat the Times experiment, and I republish a copyrighted work without appropriate attribution and in violation of fair use, then that’s obviously illegal — regardless of whether I did it with a generative pre-trained transformer or my ⌘, C, and V keys. In other words, the publisher is the plagiarist and the criminal, not the pencil or the software. Similarly, if you’re an artist and you think someone used an AI to rip off your art, the AI part is immaterial. Use the existing laws, convince a court that it’s an infringing derivative work, and get your justice.
I honestly don’t understand what all the fuss is about.