The Guardian's block on ChatGPT using its content is bad news

A response to: The Guardian blocks ChatGPT owner OpenAI from trawling its content

The Guardian has joined other news outlets—including CNN, Reuters, the Washington Post, Bloomberg, and the New York Times—in blocking OpenAI from using its content to power artificial intelligence products such as ChatGPT. Protecting intellectual property is important, but there's a problem: what will happen to the quality of information left for training AI? If honest and respected sources are blocked, that leaves a void ready to be filled by purveyors of fake news.
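For anyone wondering what "blocking" means in practice: the usual mechanism, and the opt-out route OpenAI itself documents, is a robots.txt rule disallowing its GPTBot crawler. The sketch below assumes that mechanism and uses illustrative site URLs; it checks a site's live robots.txt with Python's standard-library robotparser.

```python
# Minimal sketch, assuming the block is expressed as a robots.txt rule
# for the "GPTBot" user agent (OpenAI's documented crawler opt-out).
# The site URLs are illustrative examples, not a claim about any outlet's policy.
from urllib.robotparser import RobotFileParser

def allows_gptbot(site: str) -> bool:
    """Return True if the site's robots.txt permits GPTBot to fetch its front page."""
    rp = RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()  # fetch and parse the live robots.txt
    return rp.can_fetch("GPTBot", site)

if __name__ == "__main__":
    for site in ["https://www.theguardian.com/", "https://www.nytimes.com/"]:
        print(site, "allows GPTBot" if allows_gptbot(site) else "blocks GPTBot")
```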
I say, starve the beast if we can. Allow it to become more and more unreliable to the point where it becomes unusable. OR, how about we get going on some legislation around the world already? The politicians sure seem to be taking their sweet time. Probably has to do with how rich business owners and shareholders are enjoying the increased profits reaped from replacing human workers with generative AI.
I couldn't care less about the quality of AI. There are other ways, as there always have been, to find accurate information. If anything, people will just have to realize that AI really isn't there to do their work for them. I'm glad to see these news organizations blocking AI. It's a step in the right direction, IMO.
Expecting politicians to fix a problem is a mistake. Look at the last 30 years with the internet. Laws are often written by lobbyists, not the politicians who sponsor them. The politicians themselves are often clueless about an issue until it gets to the point of impacting their election chances.
I wouldn't say this is a step in the right direction, just that it's the next step. It's hard to say what's best for this technology because there seem to be two camps: 1) those who hate it and want it to die, and 2) those who want it to grow as big and as fast as possible (ethics be damned). Those of us who want this technology to develop, but in an ethical and responsible way, are a minority (or a too-silent majority), and we have to sit around and say "let's see what happens next." Well, this is what happens next, and then something happens after that. I don't know what is best for the technology, but I understand this move, and pray it leads developers to consider making proper changes... though past experience suggests that won't happen.
@West Angel, you raise a valid point there. But authors of speculative fiction have been debating this in their work as far back as Asimov and Clarke. Asimov's three laws, while addressing several concerns, were not thought through as well as they could have been. To clarify my point, the First Law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. That law would have robots wrapping people in bubble wrap, figuratively speaking. They wouldn't allow humans to do any of the potentially dangerous things we do for sport: I can't let you go skiing, you might break your leg, and so on.

Then there are the examples of HAL 9000 in 2001 and Skynet from the Terminator series. These were all warnings about the potential of what we humans could create. And on the other side of the coin, we have had authors who showed how AI could be a benefit to humans. So this is not a new debate. It has just increased in relevance now that we have the ability to make it a reality. We are at the stage where we need to establish both ethical rules and a way to enforce those rules. The medical profession might be a good model for this. The legal profession, not so much, as we have seen questionable ethics from that area many times without much corrective action being taken.
Does that mean that if a writer learns to write by reading their favourite author, and takes those lessons to heart, they are committing copyright theft?
This means that a writer shouldn't use AI (or, more specifically, the LLMs we are referring to here) to write a story.
I wonder if this will lead to licensing 'deals' where ChatGPT pays a certain amount of money for the right to scrape certain news sites, or whether it will simply scrape the ones that essentially copy stories from places like the NYT anyway.