• Re: ChatGPT Writing

    From jimmylogan@VERT/DIGDIST to Rob Mccart on Tuesday, November 25, 2025 20:37:23
    Rob Mccart wrote to JIMMYLOGAN <=-


    But, in this case, if the info wasn't available, at least it didn't
    make something up.. B)


    I've heard stories of people saying 'AI' made something up, but
    I've yet to run across that...



    ... Xerox Alto was the thing. Anything we use after is just a mere copy.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From phigan@VERT/TACOPRON to jimmylogan on Wednesday, November 26, 2025 06:00:54
    Re: Re: ChatGPT Writing
    By: jimmylogan to Rob Mccart on Tue Nov 25 2025 08:37 pm

    I've heard stories of people saying 'AI' made something up, but
    I've yet to run across that...

    How much do you use AI? And, are you sure you haven't and just didn't notice?

    I don't use it aside from maybe sometimes reading what the search result AI thing says, and when I search for technical stuff I get bad info in that AI box at least half the time!

    Also, try asking your AI to give you an 11-word palindrome.

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From jimmylogan@VERT/DIGDIST to Nightfox on Tuesday, December 02, 2025 11:45:50
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Tue Dec 02 2025 11:15 am

    AI makes things up fairly frequently. It happens enough that they call
    it AI hallucinating.

    Yes, that's what I'm talking about. I've not experienced that - THAT I KNOW OF.

    One thing I've seen it quite a bit with is when asking ChatGPT to make
    a JavaScript function or something else about Synchronet.. ChatGPT doesn't know much about Synchronet, but it will go ahead and make up something it thinks will work with Synchronet, but might be very wrong.
    We had seen that quite a bit with the Chad Jipiti thing that was
    posting on Dove-Net a while ago.

    Yeah, I've been given scripts or instructions that are outdated, or just
    flat out WRONG - but I don't think that's the same as AI hallucinations...
    :-)

    Maybe my definition of 'made up data' is different. :-)



    ... "Road work ahead" ... I sure hope it does!
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Tuesday, December 02, 2025 12:49:42
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Tue Dec 02 2025 11:45 am

    One thing I've seen it quite a bit with is when asking ChatGPT to make a
    JavaScript function or something else about Synchronet.. ChatGPT doesn't
    know much about Synchronet, but it will go ahead and make up something it
    thinks will work with Synchronet, but might be very wrong. We had seen
    that quite a bit with the Chad Jipiti thing that was posting on Dove-Net
    a while ago.

    Yeah, I've been given scripts or instructions that are outdated, or just flat out WRONG - but I don't think that's the same as AI hallucinations... :-)

    Maybe my definition of 'made up data' is different. :-)

    What do you think an AI hallucination is?
    AI writing things that are wrong is the definition of AI hallucinations.

    https://www.ibm.com/think/topics/ai-hallucinations

    "AI hallucination is a phenomenon where, in a large language model (LLM) often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate."

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Nightfox on Wednesday, December 03, 2025 07:57:33
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Tue Dec 02 2025 11:45 am

    One thing I've seen it quite a bit with is when asking ChatGPT to make a
    JavaScript function or something else about Synchronet.. ChatGPT doesn't
    know much about Synchronet, but it will go ahead and make up something it
    thinks will work with Synchronet, but might be very wrong. We had seen
    that quite a bit with the Chad Jipiti thing that was posting on Dove-Net
    a while ago.

    Yeah, I've been given scripts or instructions that are outdated, or just flat out WRONG - but I don't think that's the same as AI hallucinations... :-)

    Maybe my definition of 'made up data' is different. :-)

    What do you think an AI hallucination is?
    AI writing things that are wrong is the definition of AI
    hallucinations.

    https://www.ibm.com/think/topics/ai-hallucinations

    "AI hallucination is a phenomenon where, in a large language model
    (LLM) often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate."

    Outdated code or instructions aren't 'nonsensical', just wrong.

    This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer
    it will make something up and profess that it is true.'

    If you ask me a question and I give you an incorrect answer, but I
    believe that it is true, am I hallucinating? Or am I mistaken? Or is
    my information outdated?

    You see what I mean? Lots of words, but hard to nail it down. :-)



    ... Spilled spot remover on my dog. :(
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Wednesday, December 03, 2025 08:54:25
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57 am

    This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer it will make something up and profess that it is true.'

    If you ask me a question and I give you an incorrect answer, but I believe that it is true, am I hallucinating? Or am I mistaken? Or is my information outdated?

    You see what I mean? Lots of words, but hard to nail it down. :-)

    It sounds like you might be thinking too hard about it. For AI, the definition of "hallucinating" is simply what you said, making something up and professing that it's true. That's the definition of hallucinating that we have for our current AI systems; it's not about us. :)

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Nightfox on Wednesday, December 03, 2025 09:02:12
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57 am

    This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer it will make something up and profess that it is true.'

    If you ask me a question and I give you an incorrect answer, but I believe that it is true, am I hallucinating? Or am I mistaken? Or is my information outdated?

    You see what I mean? Lots of words, but hard to nail it down. :-)

    It sounds like you might be thinking too hard about it. For AI, the definition of "hallucinating" is simply what you said, making something
    up and professing that it's true. That's the definition of
    hallucinating that we have for our current AI systems; it's not about
    us. :)

    But again, is it 'making something up' if it is just mistaken?

    For example - I just asked about a particular code for homebrew on an
    older machine. I said 'gen2' and that means the 2nd generation of MacBook Air power adapter. It's just slang that *I* use, but I've used it with ChatGPT many times. BUT - the last code I was talking about was for an M2.

    So it gave me an 'M2' answer and not Intel, so I had to modify my request.
    It then gave me the specifics that I was looking for.

    So that's hallucinating?

    And please don't misunderstand... I'm not beating a dead horse here - at
    least not on purpose. I guess I don't see a 'problem' inherent in incorrect data, since it's just a tool and not a be-all-end-all thing.



    ... Basic programmers never die, they gosub and don't return
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Bob Worm@VERT/MAGNUMUK to jimmylogan on Wednesday, December 03, 2025 17:47:11
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57:33

    Hi, jimmylogan.

    If you ask me a question and I give you an incorrect answer, but I
    believe that it is true, am I hallucinating? Or am I mistaken? Or is
    my information outdated?

    If a human is unsure, they would say "I'm not sure", or "it's something like..." or "I think it's..." - possibly "I don't know".

    Our memories aren't perfect, but it's unusual for us to assert with 100% confidence that something is correct when it's not. Apparently today's AI more or less always confidently asserts that (correct and incorrect) things are fact, because during the training phase confident answers get scored higher than wishy-washy ones. Show me the incentives and I'll show you the outcome.

    A colleague of mine asked ChatGPT to answer some technical questions so he could fill in basic parts of an RFI document before taking it to the technical teams for completion. He asked it what OS ran on a particular piece of kit - there are actually two correct options for that; it offered neither and instead confidently asserted that it was a third, totally incorrect, option. It's not about getting outdated code / config (even a human could do that if not "in the know") - but when it just makes up syntax or entire non-existent libraries, that's a different story.

    Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the case was irrelevant to the point, didn't contain what ChatGPT said it did or didn't exist at all.

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Bob Worm@VERT/MAGNUMUK to jimmylogan on Wednesday, December 03, 2025 17:55:56
    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15:44

    Also, try asking your AI to give you an 11-word palindrome.

    Time saw raw emit level racecar level emit raw saw time.

    I mean... those are 11 words... with a few duplicates... Which can't even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...

    A solid effort(?)

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Nightfox@VERT/DIGDIST to jimmylogan on Wednesday, December 03, 2025 10:03:52
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 09:02 am

    But again, is it 'making something up' if it is just mistaken?

    In the case of AI, yes.

    For example - I just asked about a particular code for homebrew on an older machine. I said 'gen2' and that means the 2nd generation of MacBook Air power adapter. It's just slang that *I* use, but I've used it with ChatGPT many times. BUT - the last code I was talking about was for an M2.

    So it gave me an 'M2' answer and not Intel, so I had to modify my request. It then gave me the specifics that I was looking for.

    So that's hallucinating?

    Yes, in the case of AI.

    And please don't misunderstand... I'm not beating a dead horse here - at least not on purpose. I guess I don't see a 'problem' inherent in incorrect data, since it's just a tool and not a be-all-end-all thing.

    You don't see a problem with incorrect data? I've heard of people who are looking for work who are using AI tools to help update their resume, as well as tailor their resume to specific jobs. I've heard of cases where the AI tools will say the person has certain skills when they don't.. So you really need to be careful to review the output of AI tools so you can correct things. Sometimes people might share AI-generated content without being careful to check and correct things.

    So yes, it's a problem. People are using AI tools to generate content, and sometimes the content it generates is wrong. And whether or not it's "simply mistaken", "hallucination" is the definition given to AI doing that. It's as simple as that. I'm surprised you don't seem to see the issue with it.

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Night_Spider@VERT to jimmylogan on Thursday, December 04, 2025 08:50:06
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 09:02 am

    At a guess I'd say that's because of how the AI determines facts, and how it handles a paradox. Us humans come across a paradox and accept it or ignore it, but never try to solve it, unlike an AI, which usually isn't programmed to dismiss paradoxes instead of solving them - but the very existence of a paradox kinda breaks the framework.

    ---
    þ Synchronet þ Vertrauen þ Home of Synchronet þ [vert/cvs/bbs].synchro.net
  • From Mortar@VERT/EOTLBBS to jimmylogan on Thursday, December 04, 2025 11:57:46
    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15:44

    Also, try asking your AI to give you an 11-word palindrome.

    Time saw raw emit level racecar level emit raw saw time.

    Not a palindrome. The individual letters/numbers must read the same in both directions. Example: A man, a plan, a canal - Panama.
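
    That letters-only rule is easy to test mechanically. Here's a minimal sketch in JavaScript (the language already under discussion in this thread; the isPalindrome name is made up for illustration): strip everything except letters and digits, lowercase what's left, and compare it to its own reverse.

        // Letter-level palindrome test: ignore case, spaces and punctuation,
        // then compare the cleaned-up string with its reverse.
        function isPalindrome(s) {
            var t = s.toLowerCase().replace(/[^a-z0-9]/g, "");
            return t === t.split("").reverse().join("");
        }

        isPalindrome("A man, a plan, a canal - Panama");
        // -> true
        isPalindrome("Time saw raw emit level racecar level emit raw saw time");
        // -> false: backwards it reads "emitwaswar...", as noted above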

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Mortar@VERT/EOTLBBS to jimmylogan on Thursday, December 04, 2025 12:07:09
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57:33

    If you ask me a question and I give you an incorrect answer, but I
    believe that it is true, am I hallucinating? Or am I mistaken? Or is
    my information outdated?

    If you are the receiver of the information, then no. It'd be like me telling you about a dream I had - does that mean you experienced the dream?

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Thursday, December 04, 2025 11:04:46
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 08:58 pm

    But again, is it 'making something up' if it is just mistaken?

    In the case of AI, yes.

    Gonna disagree with you there... If Wikipedia has some info that is wrong, and I quote it, I'm not making it up. If 'it' pulls from the same source, it's not making it up either.

    For AI, "hallucination" is the term used for AI providing false information and sometimes making things up - as in the link I provided earlier about this. It's not really up for debate. :)

    I've heard of people who
    are looking for work who are using AI tools to help update their resume,
    as well as tailor their resume to specific jobs. I've heard of cases
    where the AI tools will say the person has certain skills when they
    don't.. So you really need to be careful to review the output of AI
    tools so you can correct things. Sometimes people might share
    AI-generated content without being careful to check and correct things.

    I'd like to see some data on that... Anecdotal 'evidence' is not always scientific proof. :-)

    That seems like a strange thing to say.. I've heard about that from job seekers using AI tools, so of course it's anecdotal. I don't know what scientific proof you need to see that AI produces incorrect resumes for job seekers; we know that from job seekers who've said so. And you've said yourself that you've seen AI tools produce incorrect output.

    The job search thing isn't really scientific.. I'm currently looking for work, and I go to a weekly job search networking group meeting, and AI tools have come up there recently. Specifically, there was someone there talking about his use of AI tools to help customize his resume for different jobs & such, and he talked about needing to check the results of what AI produces, because sometimes AI tools will put skills & things on your resume that you don't have, so you have to make edits.

    If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it

    It's not a "technically" thing. "Hallucination" is simply the term used for AI producing false output.

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Mortar@VERT/EOTLBBS to Nightfox on Thursday, December 04, 2025 13:57:04
    Re: Re: ChatGPT Writing
    By: Nightfox to jimmylogan on Wed Dec 03 2025 10:03:52

    Sometimes people might share AI-generated content without being careful to check and correct things.

    That's how AIds gets started.

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Bob Worm@VERT/MAGNUMUK to jimmylogan on Thursday, December 04, 2025 22:14:55
    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 20:58:51

    Hi, jimmylogan.

    But that 'third option' - you're saying it didn't 'find' that somewhere
    in a dataset, and just made it up?

    The third option was software that ran on a completely different product set. A reasonable analogy would be saying that an iPhone runs macOS.

    Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the case was irrelevant to the point, didn't contain what ChatGPT said it did or didn't exist at all.

    I've not seen/read those. Assuming you have some links? :-)

    I guess you should be able to read this outside the UK: https://www.bbc.co.uk/news/world-us-canada-65735769

    Some others: https://www.legalcheek.com/2025/02/another-lawyer-faces-chatgpt-trouble/

    https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/

    https://www.theregister.com/2024/02/24/chatgpt_cuddy_legal_fees/

    It's enough of a problem that the London High Court ruled earlier this year that lawyers caught citing non-existent cases could face criminal charges. So I'm probably not hallucinating it :)

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Bob Worm@VERT/MAGNUMUK to jimmylogan on Thursday, December 04, 2025 22:23:25
    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 20:58:51

    Hi, jimmylogan.

    I mean... those are 11 words... with a few duplicates... Which can't even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...

    I just asked for it, as you suggested. :-)

    I think it was Phigan who asked but yeah, I guessed that came from an LLM rather than a human :)

    Not that I use LLMs myself - if I ever want the experience of giving very clear instructions but getting a comically bad outcome I can always ask my teenage son to do something around the house :D

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Bob Worm@VERT/MAGNUMUK to Nightfox on Thursday, December 04, 2025 22:35:32
    Re: Re: ChatGPT Writing
    By: Nightfox to jimmylogan on Thu Dec 04 2025 11:04:46

    Hi, Nightfox.

    If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it

    It's not a "technically" thing. "Hallucination" is simply the term used for AI producing false output.

    "Hallucination" sounds much less dramatic than "answering incorrectly" or "bullshitting", though.

    Ironically, there was an article on The Register last year saying that savvy blackhat types had caught onto the fact that AI kept hallucinating non-existent libraries in the code it generated, so they created some of them, with a sprinkle of malware added in, naturally:

    https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/

    Scary stat from that article: "With GPT-4, 24.2 percent of question responses produced hallucinated packages". Wow.
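
    One sanity check against that trick - a minimal sketch of ours, not something from the article - is to ask the registry when a suggested package name was first published. It assumes Node 18+ for the built-in fetch, and the packageCreated name is made up; a name that doesn't exist at all was probably hallucinated, and a brand-new one that happens to match an LLM suggestion deserves suspicion.

        // Query npm's public registry for a package's metadata. A 404 means
        // the name doesn't exist; otherwise report when it first appeared.
        async function packageCreated(name) {
            const res = await fetch("https://registry.npmjs.org/" + encodeURIComponent(name));
            if (res.status !== 200) return null;   // no such package
            const meta = await res.json();
            return new Date(meta.time.created);    // first publish date
        }

        packageCreated("left-pad").then(function (created) {
            if (created === null) console.log("no such package - likely hallucinated");
            else console.log("first published " + created.toISOString());
        });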

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Dumas Walker@VERT/CAPCITY2 to jimmylogan on Friday, December 05, 2025 09:44:04
    Re: Re: ChatGPT Writing
    By: jimmylogan to Dumas Walker on Tue Dec 02 2025 11:15:44

    Google Gemini looked it up and reported that my trash would be picked up on Friday. The link below the Gemini result was the official link from the city, which *very* clearly stated that it would be picked up on Monday.

    Not sure where Gemini got its answer, but it might as well have been made up! :D

    LOL - yep, an error. But is that actually a 'made up answer,' aka hallucinating?

    Well, it didn't get it from any proper source so, as far as I know, it made it up! :D
    ---
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From phigan@VERT/TACOPRON to jimmylogan on Friday, December 05, 2025 17:52:18
    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15 am

    it's been flat out WRONG before, but never insisted it was

    You were saying you'd never seen it make stuff up :). You certainly have.
    Just today I asked Gemini, in two different instances, how to do the same exact thing in some software. One time it gave instructions for one method, and the second time it said the first method wasn't possible with that software and a workaround was necessary.

    Time saw raw emit level racecar level emit raw saw time.

    Exactly, there it is again saying something is a palindrome when it isn't.

    Example of a palindrome:
    able was I ere I saw elba

    Not a palindrome:
    I palindrome I

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From phigan@VERT/TACOPRON to jimmylogan on Friday, December 05, 2025 18:02:25
    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 08:58 pm

    A solid effort(?)

    I just asked for it, as you suggested. :-)

    That was the output.

    I was the one who suggested it, insinuating that your AI couldn't do it, and you proved me right :).

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From poindexter FORTRAN@VERT/REALITY to Mortar on Friday, December 05, 2025 17:28:19
    Mortar wrote to jimmylogan <=-

    Time saw raw emit level racecar level emit raw saw time.

    Not a palindrome. The individual letters/numbers must read the same in both directions. Example: A man, a plan, a canal - Panama.

    An auto-palindrome?

    There was an XKCD comic about Wikipedia editors that illustrated a page
    for the word "Malamanteau" - https://xkcd.com/739/

    "A malamanteau is a neologism for a portmanteau created by incorrectly combining a malapropism with a neologism. It is itself a portmanteau
    of..."



    --- MultiMail/Win v0.52
    þ Synchronet þ .: realitycheckbbs.org :: scientia potentia est :.