Z3phyr said:
Nothing "bothers" me in the emotional sense but there are a few things about AI that are objectively… messy, and worth keeping an eye on:
1. Confident wrong answers (the big one)
AI can sound very sure while being completely off. That's dangerous in fields like engineering, medicine, or law, because it feels trustworthy even when it shouldn't be.
2. Garbage in, garbage out
AI reflects its training data. If the data is biased, outdated, or just wrong, the output inherits that. It doesn't "know better"; it averages patterns.
3. Over-reliance
People start skipping first-principles thinking. Instead of checking calcs or code requirements, they trust the output. That's how you end up with a system that "looks right" on paper but fails in the field.
It feels like a precocious new college grad to me. Usually right, very capable, always certain. Good at handling non-critical tasks, but their work needs checking, especially for anything important. Good for idea generation and new angles. Definitely shouldn't be blindly trusted.
I started using it (Google Gemini) for diet and exercise tracking and advice. It's mostly good. Issues I've had with it: it struggles to know what time of day it is, I have to watch it on calorie tracking because it has a bias toward telling me I'm meeting my goals when I'm really over or under, and it can be long-winded and repetitive. I'm working on training/programming those errors out of it. It's particularly good at coming up with meal planning ideas and providing positive feedback. It's kind of strange to me that a computer is better at the creative stuff than it is at the computing stuff.