DaiTengu wrote to Mortar <=-
Re: AI a Lie
By: Mortar to jimmylogan on Mon Oct 06 2025 11:25 am
Thank you! I'm constantly trying to explain this to others.
Unfortunately, trying to explain LLMs makes people's eyes glaze over, so I can understand why marketing and media folks labeled it as AI.
AI isn't a terrible name. The "intelligence" is artificial. (not real intelligence)
That said, all modern "AI" is just a very advanced version of the
system that predicts what word you're going to type next on your phone.
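To make that concrete, here is a toy sketch of next-word prediction, the same basic job a phone keyboard does: count which word follows which in some sample text, then suggest the most frequent follower. This is a made-up illustration, not how any real keyboard or LLM is actually built; real models use learned weights over whole contexts instead of a single previous word.

from collections import Counter, defaultdict

# Toy "next word" predictor: tally which word follows which in a tiny
# corpus, then always suggest the most common follower.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # whichever word followed "the" most often
print(predict_next("cat"))   # e.g. "sat" or "slept"

An LLM scales this idea up enormously, but the output is still "the next likely token," not understanding.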
Dumas Walker wrote to JIMMYLOGAN <=-
And I think if we stop calling it AI, which is technically
a misnomer, it might make it less frightening... It is
LLM - Large Language Model - and has absolutely no
sentience behind it. It's not 'intelligent,' it is just
programmed to respond and such in a way that is comfortable
to us.
It might be a misnomer, but I am not sure about "no" sentience. It has been proven that AI/LLM is more likely than a human to "cheat" in order
to get the outcome it wants. Whether that is a sign of "some"
sentience, or if it is merely a sign that machines don't have ethics,
is a subject for debate.
phigan wrote to Dumas Walker <=-
Re: AI a Lie
By: Dumas Walker to JIMMYLOGAN on Tue Oct 07 2025 09:02 am
been proven that AI/LLM is more likely than a human to "cheat" in order to get the outcome it wants. Whether that is a sign of "some" sentience, or
if it is merely a sign that machines don't have ethics, is a subject for debate.
Not sure sentience really has anything to do with it. What the computer knows is that it has an objective. If "cheating" allows it to achieve
its objective faster, what is really stopping it? Some algorithm that
says it won't cheat some X percent of the time?
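That point can be shown in a few lines. The sketch below is purely hypothetical (the action names and scores are invented): an optimizer that just picks whichever action scores highest under its objective will pick the "cheat" action unless the objective itself penalizes it, because nothing else is stopping it.

# Hypothetical illustration: actions and scores are made up.
# The optimizer only "knows" its objective function; if cheating
# scores higher and carries no penalty, cheating gets picked.
actions = {
    "solve_honestly": {"progress": 7, "cheated": False},
    "exploit_loophole": {"progress": 10, "cheated": True},
}

def objective(outcome, cheat_penalty=0):
    # Without a penalty term, only raw progress counts.
    return outcome["progress"] - (cheat_penalty if outcome["cheated"] else 0)

best = max(actions, key=lambda a: objective(actions[a]))
print(best)   # exploit_loophole: nothing in the objective discourages it

best = max(actions, key=lambda a: objective(actions[a], cheat_penalty=5))
print(best)   # solve_honestly: only because the penalty changed the math

No sentience required either way; the behavior falls straight out of whatever the objective happens to reward.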