Rob Mccart wrote to JIMMYLOGAN <=-
But, in this case, if the info wasn't available, at least it didn't
make something up.. B)
I've heard stories of people saying 'AI' made something up, but
I've yet to run across that...
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Tue Dec 02 2025 11:15 am
AI makes things up fairly frequently. It happens enough that they call
it AI hallucinating.
Yes, that's what I'm talking about. I've not experienced that - THAT I KNOW OF.
One thing I've seen it quite a bit with is when asking ChatGPT to make a
JavaScript function or something else about Synchronet.. ChatGPT doesn't
know much about Synchronet, but it will go ahead and make up something it
thinks will work with Synchronet, but might be very wrong. We had seen
that quite a bit with the Chad Jipiti thing that was posting on Dove-Net
a while ago.
Yeah, I've been given script or instructions that are outdated, or just flat out WRONG - but I don't think that's the same as AI hallucinations... :-)
Maybe my definition of 'made up data' is different. :-)
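Just to illustrate the Synchronet point, here's a rough sketch of what I mean. The function name is made up for the example; console.print(), console.crlf() and user.alias are, as far as I know, real parts of Synchronet's JavaScript object model, but check the docs rather than take my word for it.

  // Minimal Synchronet JS sketch -- assumes the standard 'console'
  // and 'user' objects provided by Synchronet's JavaScript API.
  function greetCaller()
  {
      console.crlf();
      console.print("Hello, " + user.alias + "!");  // real method/property
      console.crlf();
      // A hallucinated version might instead call something like:
      // console.sendGreeting(user);  // plausible-looking, but no such method
  }

  greetCaller();

The made-up call looks reasonable, which is exactly why it's easy to miss until the script errors out.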
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Tue Dec 02 2025 11:45 am
One thing I've seen it quite a bit with is when asking ChatGPT to make a
JavaScript function or something else about Synchronet.. ChatGPT doesn't
know much about Synchronet, but it will go ahead and make up something it
thinks will work with Synchronet, but might be very wrong. We had seen
that quite a bit with the Chad Jipiti thing that was posting on Dove-Net
a while ago.
Yeah, I've been given script or instructions that are outdated, or just flat out WRONG - but I don't think that's the same as AI hallucinations... :-)
Maybe my definition of 'made up data' is different. :-)
What do you think an AI hallucination is?
AI writing things that are wrong is the definition of AI
hallucinations.
https://www.ibm.com/think/topics/ai-hallucinations
"AI hallucination is a phenomenon where, in a large language model
(LLM) often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate."
This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer it will make something up and profess that it is true.'
If you ask me a question and I give you an incorrect answer, but I believe that it is true, am I hallucinating? Or am I mistaken? Or is my information outdated?
You see what I mean? Lots of words, but hard to nail it down. :-)
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57 am
This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer it will make something up and profess that it is true.'
If you ask me a question and I give you an incorrect answer, but I believe that it is true, am I hallucinating? Or am I mistaken? Or is my information outdated?
You see what I mean? Lots of words, but hard to nail it down. :-)
It sounds like you might be thinking too hard about it. For AI, the definition of "hallucinating" is simply what you said, making something
up and professing that it's true. That's the definition of
hallucinating that we have for our current AI systems; it's not about
us. :)
If you ask me a question and I give you an incorrect answer, but I
believe that it is true, am I hallucinating? Or am I mistaken? Or is
my information outdated?
Also, try asking your AI to give you an 11-word palindrome.
Time saw raw emit level racecar level emit raw saw time.
But again, is it 'making something up' if it is just mistaken?
For example - I just asked about a particular code for homebrew on an older machine. I said 'gen2', which means the 2nd generation of the MacBook Air power adapter; it's just slang that *I* use, but I've used it with ChatGPT many times. BUT - the last code I was talking about was for an M2.
So it gave me an 'M2' answer and not an Intel one, so I had to modify my request. It then gave me the specifics that I was looking for.
So that's hallucinating?
And please don't misunderstand... I'm not beating a dead horse here - at least not on purpose. I guess I don't see an inherent 'problem' with incorrect data, since it's just a tool and not a be-all, end-all thing.
Also, try asking your AI to give you an 11-word palindrome.
Time saw raw emit level racecar level emit raw saw time.
If you ask me a question and I give you an incorrect answer, but I
believe that it is true, am I hallucinating? Or am I mistaken? Or is
my information outdated?
But again, is it 'making something up' if it is just mistaken?
In the case of AI, yes.
Gonna disagree with you there... If Wikipedia has some info that is wrong, and I quote it, I'm not making it up. If 'it' pulls from the same source, it's not making it up either.
I've heard of people who are looking for work using AI tools to help
update their resume, as well as tailor their resume to specific jobs.
I've heard of cases where the AI tools will say the person has certain
skills when they don't.. So you really need to be careful to review the
output of AI tools so you can correct things. Sometimes people might
share AI-generated content without being careful to check and correct things.
I'd like to see some data on that... Anecdotal 'evidence' is not always scientific proof. :-)
If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it is.
Sometimes people might share AI-generated content without being careful to check and correct things.
But that 'third option' - you're saying it didn't 'find' that somewhere
in a dataset, and just made it up?
Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the case was irrelevant to the point, didn't contain what ChatGPT said it did, or didn't exist at all.
I've not seen/read those. Assuming you have some links? :-)
I mean... those are 11 words... with a few duplicates... Which can't even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...
I just asked for it, as you suggested. :-)
If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it is.
It's not a "technically" thing. "Hallucination" is simply the term used for AI producing false output.
Google Gemini looked it up and reported that my trash would be picked up on Friday. The link below the Gemini result was the official link from the city, which *very* clearly stated that it would be picked up on Monday.
Not sure where Gemini got its answer, but it might as well have been made up! :D
LOL - yep, an error. But is that actually a 'made up answer,' aka hallucinating?
it's been flat out WRONG before, but never insisted it was right
Time saw raw emit level racecar level emit raw saw time.
A solid effort(?)
I just asked for it, as you suggested. :-)
That was the output.
Mortar wrote to jimmylogan <=-
Time saw raw emit level racecar level emit raw saw time.
Not a palindrome. The individual letters/numbers must read the same in both directions. Example: A man, a plan, a canal - Panama.
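For what it's worth, a quick script settles both readings: the AI's answer repeats the same words in both directions (a word-unit palindrome), but the letters don't reverse, so it fails the usual definition. A rough JavaScript sketch, nothing Synchronet-specific assumed:

  // Compare the letters (and the word list) against their reverse.
  function isLetterPalindrome(s)
  {
      var letters = s.toLowerCase().replace(/[^a-z]/g, "");
      return letters === letters.split("").reverse().join("");
  }

  function isWordPalindrome(s)
  {
      var words = s.toLowerCase().match(/[a-z]+/g) || [];
      return words.join(" ") === words.slice().reverse().join(" ");
  }

  var answer = "Time saw raw emit level racecar level emit raw saw time";
  // isWordPalindrome(answer)   -> true  (same word sequence both ways)
  // isLetterPalindrome(answer) -> false (reversed letters start "emitwaswar...",
  //                                      not "timesawraw...")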