Major ChatGPT Error

2,064 Views | 19 Replies | Last: 1 mo ago by TexasRebel
jefftip
I'm not trying to be political at all. This is strictly an observation about the accuracy/credibility of ChatGPT.

I asked GPT to spit out a succinct, fact-based report on current events. Below is one of the points it produced:



Quote:

2024 Election Fallout

The 2024 election is still stirring debate. President Biden is continuing his second term amid pressure from within his party to address cognitive health concerns following debate performances. Former President Trump, despite legal challenges, remains a powerful voice within the GOP and is shaping the direction of upcoming midterm primaries.


Interesting that GPT would miss this big on a topic like this.
Gnome Sayin
Had it summarize the big beautiful bill this morning. Predictable results.
aggiez03
This is somewhat political, so sorry if it offends The Nerdery...
(Not looking to start a political debate on here, just offering up what I have noticed that is on OP's thread topic)


ChatGPT and a lot of other AI are infected with the Woke Mind Virus, and I am not sure they will ever produce results that can be trusted on current events or hot-button items such as politics, Covid, anything regarding Trump, or anything that goes against the 'official' narrative.

The problem is at least twofold.

1) Most of the creators of these AI are limousine liberals with so much money that bad policy doesn't hurt them much, until something like the California wildfires, where complete ineptness makes them lose their homes.

2) Most of the AI relies on scrubbed data such as Google and all the networks that only parrot what the 'official' narrative wants out there. So if all your data sources tell you that 2+2 = 5, AI is going to report that, because 10 outta 10 sources say it is true.
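To put the 2+2=5 point in code terms, here's a toy sketch (everything here is made up, just to show the mechanism):

```python
from collections import Counter

def consensus_answer(sources):
    """Return whatever most of the sources agree on --
    a toy stand-in for a model trained on those sources."""
    return Counter(sources).most_common(1)[0][0]

# If every scraped source repeats the same wrong claim,
# the "model" repeats it too: garbage in, garbage out.
sources = ["2+2=5"] * 10
print(consensus_answer(sources))  # -> 2+2=5
```

The model has no way to know the consensus is wrong; it only knows what the sources said.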

lb3
Liberal or conservative bias can be addressed through training, but the fact that they're still having such massive hallucinations is troubling.

My wife sent me a screen grab this week showing a calculation she had AI do, and it presented the wrong answer. OK, LLMs are bad at math, nothing new there. But the perplexing part was that in its explanation of the calculation it arrived at the right answer and just chose to make **** up anyway.
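The usual workaround for the math problem is to re-run the arithmetic in real code instead of trusting the model's stated result. A toy sketch (the numbers here are invented, not from the actual screen grab):

```python
def check_llm_math(claimed, expression):
    """Re-run the arithmetic the model described and flag a mismatch."""
    actual = eval(expression)  # fine for a toy; only eval expressions you wrote yourself
    if actual == claimed:
        return actual
    return f"model said {claimed}, real answer is {actual}"

# Say the model's explanation walks through 17 * 24 but reports 398.
print(check_llm_math(398, "17 * 24"))  # flags it: 17 * 24 is actually 408
```

Same idea as what the newer "let the model write and run code for math" features do.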

LLMs while useful, are only a couple steps ahead of the predictive text in my phone's keyboard or the computer screen or anything like it can do in a computer with the keyboard on it or something else like this that is a little more than what you can get for a computer that has the ability for the computer and a keyboard that you could get with the same keyboard.
aggiez03
lb3 said:

Liberal or Conservative bias can be addressed through training but the fact they're still having such massive hallucinations is troubling.


Only if the programmers allow it to.

Right now it is obvious the programmers are not allowing it to review all the data, since it still says Covid origins are most likely a wet market... (this was tested last week).



It will be interesting to see what AI does with:

Intelligent Design vs the Big Bang

Global warming

Criminal Profiling when < 6% of the population commits 50% of the crime

Voting Irregularities in certain states (% voting higher than % registered, etc)

There may be some very inconvenient truths that come out if the AI can overcome the creators' bias and actually digest ALL of the data.

For example, what will Robocop's AI programming instruct it to do when encountering a suspect, given that 6% of the population commits 50% of the crime? Will it follow the statistics, or will there be some intentional programming to offset the mountains of data so it won't be biased against that 6% of the population?
Lathspell
One should never blindly accept anything these LLMs spout out. I use AI for doing a lot of busy work, but I always review and edit anything it spits out.

These LLMs also tend to answer so many questions with responses they make up, almost as hypotheticals. I'll ask a troubleshooting question, and it'll write a novel to walk me through it, but nothing is where it says it should be. When I point that out, it apologizes and says it gave me a generic walkthrough.
IrishAg
Lathspell said:

One should never blindly accept anything these LLMs spout out. I use AI for doing a lot of busy work, but I always review and edit anything it spits out.

These LLMs also tend to answer so many questions with answers it makes up almost as a hypothetical. I'll ask it a troubleshooting question, and it'll write a novel to walk me through, but nothing is where it should be. When I point that out, it apologizes and says it made a generic walk through.
To add to this, pretty much all of the models will just scrape the internet for current events. Most of that data isn't really stored in the training sets; it's ChatGPT and others going out and scraping places like Wikipedia to get the data (you know, places that are always up to date and factual). And if they can't find anything specific, they'll just make up a hallucinated response based on loosely connected data they found.
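That scrape-or-fabricate behavior boils down to something like this toy retrieval loop (all names and strings here are hypothetical):

```python
def answer(question, retrieved_snippets, model_guess):
    """Toy retrieval-augmented answer: use scraped text if any was
    found, otherwise fall back to whatever the model would guess."""
    if retrieved_snippets:
        return " ".join(retrieved_snippets)  # grounded in scraped data
    return model_guess  # nothing found: this branch is where hallucination lives

# With no retrieved snippets, you just get the model's stale guess back.
print(answer("2024 election fallout?", [], "Biden is continuing his second term..."))
```

When the scrape comes back empty, the fallback branch is indistinguishable (to the user) from a grounded answer.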

On the political side, do I think most of the people that work on these lean left? Yeah, pretty much; I know a lot of them, and the majority do. But I also know that the guys and gals who do the heavy lifting are under massive time crunches to release new features and fix bugs, so thinking that they as a group have time to sit around and intentionally influence political machinations is a bit much. I'm sure some try to throw something in occasionally, but they usually aren't given enough time to even sanitize their own code, which is why infosec is such a growing business right now.
Global Warming
aggiez03 said:

[...]

It will be interesting to see what AI does with:

Intelligent Design vs the Big Bang

Global warming

[...]
WTF?
aggiez03
Global Warming said:

aggiez03 said:



Global warming


WTF?
JJxvi
On all of your prompts to something like ChatGPT you should mentally add "Try to predict what a real person might say if they were asked..." to the beginning of your prompt.

Then you should also consider what kind of answers you might get from many "real persons" if you asked them the same questions, and, having thought about that, whether what you are asking it is even worthwhile.
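If you wanted to bake that mental reframing into an actual prompt wrapper, it's one line (the wording is just my paraphrase of the idea above):

```python
def reframe(prompt):
    """Prepend the framing so the question reads as a prediction
    task, which is closer to what the model actually does."""
    return ("Try to predict what a real person might say "
            "if they were asked: " + prompt)

print(reframe("What caused the outage?"))
```

Whether you type it or just think it, it sets the right expectation for the answer.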
Quad Dog
I know most people go to these AIs for things they don't know. But first test it with something you think you know a lot about and you will see how wrong they can be and how often they make stuff up.
AustinAg2K
What was your exact prompt? Although it does hallucinate a lot, I'm surprised it would consider the 2024 election a current event. I just asked ChatGPT to "please summarize current events" and everything was from the last two or three days.
boy09
Quad Dog said:

I know most people go to these AIs for things they don't know. But first test it with something you think you know a lot about and you will see how wrong they can be and how often they make stuff up.
The things that the people in the thread "know" could be a little questionable...
YouBet
Yeah, I've found ChatGPT has, at best, a 50% success rate. I've started using Grok instead for basic informational queries. It seems to be much more accurate.

To make this board-relevant: I use them to look up information on video games, and ChatGPT is frequently wrong about what I ask. I'll tell it it's wrong, it acknowledges that and apologizes, then gives me a more correct answer. Even then it sometimes takes multiple questions to get down to the right one.
Lathspell
I experience the same thing. The bummer is that I use the audio tool so I can have a back-and-forth conversation with ChatGPT. Having it walk me through recipes and such while cooking is so great. I think Grok's voice chat requires an iPhone.

However, if I'm curious about a certain game mechanic in something like Path of Exile, ChatGPT seems to just make things up or pull info from POE2 and present it like it's from POE. I have to know enough to tell it to stop being a dumbass and double-check. Then it seems to do a bit better.

I had a situation where I was troubleshooting a graphics/performance issue in a game I was playing. The first five or so troubleshooting options it offers are always either way too involved or just shallow and stupid; the fix is usually somewhere in the middle. I once spent 15-20 minutes going back and forth with ChatGPT with no resolution. Then, the next day, I asked the same question, and the very first answer it gave was something it never told me the day before, and it actually resolved my problem.

Why didn't it give me that answer the day before? These LLMs have a ways to go, but that growth will be exponential.
boy09
Elon's "updates" to Grok seem to be going well...

https://nypost.com/2025/07/09/us-news/elon-musks-ai-chatbot-grok-praises-hitler-spews-antisemitic-hate-on-x/
CC09LawAg
PeekingDuck
LLMs work on probability over the data they were trained on. Once you understand this, the hallucinations and garbage cases make a lot more sense. They aren't smart in any way.
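A stripped-down illustration of "probability based on training data": a bigram model that just emits the most frequent next word it has seen. Obviously a toy, real LLMs are vastly bigger, but the principle is the same:

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent next word -- no understanding, just counting."""
    return counts[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

There's no fact-checking anywhere in that loop, which is exactly why it confidently emits whatever its counts favor.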
satexas
aggiez03 said:

2) Most of the AI relies on scrubbed data such as Google and all the networks that only parrot what the 'official' narrative wants out there, so if all your data sources tell you that 2+2 = 5, AI is going to report that based on that 10 outta 10 sources say it is true.

This is really it right here.

In simple terms, "garbage in, garbage out".

Yes, in certain things it's great, but in so many, early AI is ass.
TexasRebel
Quad Dog said:

I know most people go to these AIs for things they don't know. But first test it with something you think you know a lot about and you will see how wrong they can be and how often they make stuff up.


Ask it about a Linux command and it butchers the arguments terribly. It's almost like it can't discern the question from the answer when its training data is full of Stack Overflow posts.
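One defensive habit for AI-suggested commands: check the flags against ones you've actually verified from the man page before running anything. A toy Python version (the "known flags" set here is an invented, abbreviated stand-in for what you'd pull from `man tar`):

```python
import shlex

def flags_ok(command, known_flags):
    """Check every flag in an AI-suggested command against a set of
    flags you've personally verified, before running anything."""
    tokens = shlex.split(command)
    flags = [t for t in tokens if t.startswith("-")]
    return all(f in known_flags for f in flags)

# Hypothetical allowlist you'd build by reading the real man page:
known = {"-x", "-z", "-f", "-v"}
print(flags_ok("tar -x -z -f backup.tar.gz", known))     # True
print(flags_ok("tar -x --decompress-all f.tar", known))  # False: made-up flag caught
```

Crude, but it catches exactly the butchered-argument case: a plausible-looking flag that no version of the command actually has.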