For sure, but that's mostly because I hate when people steal my lines. Those posts are gone now, so no need to reference them further. I'd considered editing to remove that line from my response. As far as keeping a discipline in how you use AI, there is the notion of the thin edge of the wedge. I received a snotty email at work in response to one of mine that was measured and the very pinnacle of restraint. I figured my respondent either hadn't properly read my email or got AI to write the response. I settled on the latter when part of what I wrote converted "offence" to "offense" when (mis)quoting my text. It was annoying as hell. People are stupid and AI will make them stupider. If that sounds arrogant, bear in mind that I'm people too.
I'm damaged by it. I once tried to write like the AI wanted me to, and I didn't like the result at all, and that taught me to avoid its writing suggestions like the plague.
I’ve put most of the teddies back in the play pen … if we could keep them there, that’d be good. By and large, if we can avoid calling each other names, that’d be good, and likewise avoid getting personal in other ways. (One member has lost the right to post in this thread.)
Undead pirate pineapple. *LOL* Or a dragon clown, with big orange polka dots and webbed feet. Fair enough. Oh - so, is ChatGPT more verbose? Then yes, I agree. Verbosity doesn't make a sentence clearer, simplicity and brevity are best. (Just look at the Sir Humphrey speeches from Yes Minister). Verbosity can make a sentence funnier, I suppose. (Again, Sir Humphrey). I'm not sure why, by the way, but after the latest Windows Update, it decided to install its AI "helper" into Notepad. (Not sure how many people here use Windows; Notepad is like Word and other word processors, except that you can only take notes and do bolds/italics/etc., not do anything complicated like tables and so on). This AI "helper" (called CoPilot, I think?) is a pain. I use Notepad sometimes to write HTML code, and it told me that the <BODY> tag was offensive (and should be changed to <MAN> or <WOMAN> ... what?), and that the closing </BODY> tag was unnecessary. No further comments, m'lud ... Yikes, I must have missed that show. *shrug* No worries, Moose. I don't tend to indulge in ad hominem attacks.
Copious drug use fuelled Philip K Dick's imagination. What's the difference between pharmacological and technological enhancement of creativity?
If I'm to be honest, about the same as having a few drinks to relax on a first date and paying for a prostitute over the internet. But that's my opinion, based on what I value in reading and writing. You who are comfortable with using AI in whatever way have no more to gain trying to change my opinion than I have trying to change yours.
Yes - but I can ask you to try and understand what I'm doing with it. "Using AI" can mean different things to different people.
Because the drugs didn't take ideas from other authors - even whacked off his box, the ideas were still his.
OK, so here’s what I’m seeing. For some folks, AI is a tool they use to brainstorm ideas and talk about their characters with. For others, it’s basically anathema to all things creative, and it slowly replaces your ability to create for yourself as you become dependent on the bot to tell you what it thinks your characters should do. My other question: you say ChatGPT doesn’t do anything other than make connections between certain words, and that it won’t borrow entire scenes for itself. My question is, how do you know that? Even if you turn off the option to allow it to share your data with the wider model, how do you know it isn’t secretly doing it and lying to you? Let's say you upload files, like snippets of your writing, or your personal journal if you wanted to use it as a therapy/venting bot. How do you know it isn't secretly sending it to OpenAI? Even if someone isn't at the computer reading a sappy vampire romcom novel (nothing wrong with writing one; anyone reading this who is writing one -- you compose your little heart out), what if it took what you upload, broke it down, dispersed it far and wide, and suddenly other users have access to a character name that you created, or a concept that you invented? EDIT: Let’s use my example from earlier, with the urban fantasy set in the Deep South. Let’s say I upload scene snippets and character bios into ChatGPT. How would I know no one else could see it? That I’m writing a fantasy set in Mobile, Alabama about an Ork rebellion and a human girl’s fight to support them? That they won’t see snippets like ‘ORK’ ‘MOBILE’ ‘WIZARD’ and write a fantasy about that?
There's enough research about LLMs out there to explain how they work. I'm not a crazy conspiracy theorist who sees secrets and lies in every shadow. Given that LLMs have been trained by exposing them to billions and billions of web pages and other text out there on the internet, if one stored everything it had been exposed to, its database would have to be... the size of the internet. If it was secretly doing any of that, we would know by now. Two can share a secret if one of them is dead, and a secret of this size would have leaked. It can't even reproduce the text of, or get the details of, a book by a famous author correct; the chances of it using my text are nil.
To your credit, though, ideas mean nothing. There will always be urban fantasy stories. Hell, there may already be dozens of urban fantasy stories set in the Deep South, and we just don't know about 'em because they (sadly) haven't reached a wider audience. Even if the AI doesn't steal your ideas, or share them with the wider world, there is a concern that a writer would not know how to check themselves. They'd use it as a crutch, asking the AI 'what about this' or 'what about that' and forgetting that it's... an AI, not a real person. It tells the user what the user programmed it to say. Like, I could tell it Alkiba (the ork girl from my fantasy) is a loyal, steadfast, kind soul, then ask it: "Is it in her character to comfort Heather?" The AI would say 'yes'. If I asked it, "Is it in her character to slap Heather?" more than likely the AI would say 'It would go against her character'. How would it know this? Because I had to teach it who Alkiba was. Even if, in the story, she does it in a moment of rage and there's a good in-plot reason for it, the AI doesn't always get it right, or it may fail to see the context. All it'd see is 'Alkiba, the kind soul, slapping Heather? No! Does not compute!' However, it could, depending on its intelligence, offer feedback like potential scenarios as to why Alkiba would resort to violence. The fear, from what I'm seeing, is that it risks causing the writer to lose their own ability to create, because all they'd have to do is plug in basic stuff and then ask the AI 'what should they do next?' or 'Is it in their character to do xyz?' Instead of me trying to figure out why Alkiba would break character and slap Heather, the AI would tell me. THAT'S what's scary. The AI would be writing the story, and I'd merely be the scribe. Recall the scenarios I mentioned earlier: I wouldn't have come up with those. It'd be the AI. It makes the whole point of creation rather moot and fake, y'know?
EDIT: Of course, one counterargument could be that it's no different than sitting down with a person who gives you the scenarios. Like I could make a thread right now where I ask y'all to help me figure out why a kind Ork girl would suddenly slap the bejeesus out of Heather -- after I gave y'all the TL;DR of the work and the two characters in question. Y'all would likely give me the run-down of a few scenarios and discuss what would prompt her to do that. I can hear the pro-AI folks saying, 'Does that make you less of a writer because you didn't think of it on your own? Someone else had to?'
Which is a legitimate concern. There will be people who use AI as a crutch, and I'm not sure what you can do about that. People can use AI in various ways, and some people are more comfortable with using it more heavily than others. I'm comfortable enough with my writing to know what I care to use it for and what I find it useful for, and that comes from experience. But millions of people use Grammarly because they aren't comfortable enough with their own writing. Personally, I don't think a novice writer should be using AI. You need a certain level of experience to know when to take shortcuts - just like you need it to know when telling is acceptable, and why you should not obsessively go through your work and remove adverbs.
Also, the latter is legal. The former is ... murky - it depends on which drugs the author is using and where he/she is using them. I agree that novice writers shouldn't use AI. But I wonder about the last point here (i.e. not removing adverbs). Stephen King famously advised prospective authors not to use adverbs, because they're a "tell" instead of a "show". That seems like sound advice. Is there a counter-argument to that? Yes, writing isn't mathematics, and there isn't one right answer. I'm simply curious as to why adverbs might be considered a good thing.
If your work is focused on the dialogue itself. When I write Vance, I use them extensively, because the characters and their reactions are characterised almost entirely through the dialogue, not necessarily their physical reactions. If it's done correctly, you don't even notice it (unless you're looking for it). Vance did this most of the time, or omitted physical reactions entirely and let the dialogue do the work. It is an older style of writing, but it still works. When I read that someone has gone through their work and removed every single -ly word, I always think they're taking some advice much too literally.
I do get what you're saying. I'd liken it to a kid kicking a ball against a wall after his friends have gone home and the wall is still there. Practising inside foot, outside, trap, spin, force and reaction and a bunch of skills that may have some use on the football field but won't have Real Madrid calling any time soon. I'd also agree that AI is unlikely to plagiarize anything any of us write here, unless it goes viral and gets "hit" by lots of internet users. It's still contributing though, the AI is still picking up on popular narratives and storing them with all the other inputs. As well as that, the AI isn't just learning from what you might write. It's also learning from how you use the tool, how you phrase, refine and steer prompts. It's still feeding the machine and we're gonna need a bigger boat! Sorry, wrong movie. It's like the wall has learned to return the ball more efficiently to the kid's foot so they don't have the bother of trapping it or fetching it from under the hedge, and will do the same for the next kid that shows up. Physics may be different to Thought, but the comparison still holds true for me.
Yes, it is, but I don't think that's necessarily a bad thing. Anything that improves a tool is good. I'll be pulling it in one direction, while others pull it in another. My influence on it, though, is probably about the same as my gravitational influence on the earth. However, you can create your own GPT and train it specifically on your work and style, if you want it to be more personalised.
I did ask ChatGPT, and the TL;DR is that, according to it, it only stores files for technical operational purposes, and if you delete the chats and uploaded documents, they’re deleted from the system. And turning “improve model for everyone” off ensures privacy. That’s according to it. Look, at the end of the day, people gonna plagiarize — we’ve been doing that since ancient times when stories were oral. There’s a similar chance someone here could steal an idea from another member. People do shitty things. MLK, Jr. famously yoinked from Gandhi, for example. AI just makes it easier. If you REALLY wanna make sure no one ever yoinks your idea: never put it on the internet, because once it’s there, it’s there forever.
Oh, trust me, you don't need ChatGPT for ideas to converge, and for accusations of plagiarism to be bandied around. I got accused of plagiarism in a competition on another site. I had apparently posted a story with the same idea as his a day after he did, so he started crying about being plagiarised, and no AI was involved (and I didn't plagiarise him; I hadn't even read his story).
So basically, if I wrote the story I’ve toyed with for almost two decades about a blind detective in Colonial America, someone’s gonna yell at me because it’s been done before.
Well, detectives in colonial America have been done, and blind detectives have been done, so who knows, you might get some people moaning.
After discussing this with ChatGPT:

———————————-

TL;DR: OpenAI’s systems are secure, and with "Improve model for everyone" turned off, your data won’t be used for training. However, it may be temporarily retained for operational purposes. Delete the conversation if possible, and contact OpenAI support for reassurance. Future uploads should be anonymized, broken into smaller sections, or handled in more controlled environments. You're likely fine, but these steps can give you extra peace of mind.

Key Takeaway: For operational purposes, data is retained temporarily to monitor, secure, and troubleshoot the service—but not for training or other long-term uses unless explicitly allowed. If privacy is a concern, focus on anonymizing sensitive content before uploading, or review OpenAI's specific policies.

Summary: The story details you shared are handled securely and are only temporarily retained for operational purposes. OpenAI doesn’t actively analyze or "care" about your content beyond ensuring the platform works properly. By deleting the conversation, you’ve ensured that your story is no longer stored or accessible. You're in good shape!

———————

Legit?
Perfectly legit. I can go back and read or continue a conversation from months ago if I wish, so obviously it must be stored somewhere. By contrast, almost anything you post here is cached and stored by Google. But then, I've seen people refuse to post their story idea onto a forum because they're afraid someone will steal it. Dude (or dudette), no one's going to steal your idea. And even if they do, so what? Their execution will be different from yours. Hunger Games is not Battle Royale, even if the premise is essentially the same.