2 points | by slipknotfan 5 hours ago
9 comments
It's not actually "artificial intelligence" but more like "fake intelligence".
Assembling a plausible-sounding sentence doesn't mean you know what you're talking about.
The number of people who fail to grasp this is mind-boggling.
And how do you explain that they can do advanced mathematics?
How do you explain when they can't? If they *really* understand math, why is the error rate so high?
According to a 2025 Stanford HAI report, large language models fail basic multi-step arithmetic up to 40% of the time without external tools.
https://medium.com/@dojolabs.main/why-does-ai-get-math-wrong...
2025... have you checked the latest models? And anyway, this debate is irrelevant when we know this exact topic will be solved in a few months/years.
> we know this exact topic will be solved in a few months/years
You may know this somehow, but I don't. Without a fundamental redesign, the basic problem will remain.
I don't believe it is possible to apply statistics to predict answers without significant errors.
Yes, but most humans (also without tools, for a fair comparison) make significant errors, WAY more than Opus 4.7 and GPT-5.4 xhigh.
> most humans (also without tools, for a fair comparison) make significant errors
Humans adopted the use of computers because they provided accurate answers at low cost.
At least until recently. Now, LLMs provide questionable answers at high cost.
But LLMs + tools (computers) right now beat the large majority of humans (and we know this gap will keep widening), so how does that make them "not intelligent"?
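The "LLMs + tools" point can be made concrete: when a model delegates arithmetic to a deterministic calculator tool instead of predicting digits token by token, that step stops being probabilistic at all. A minimal sketch in Python, where the `calc` helper is a hypothetical stand-in for whatever tool-calling interface a real model exposes:

```python
import ast
import operator

# A tiny "calculator tool" of the kind an LLM can delegate to.
# (Hypothetical sketch; real tool-use APIs vary by provider.)
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calc(expr: str) -> float:
    """Evaluate an arithmetic expression exactly, by walking its AST."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported syntax: {node!r}")
    return walk(ast.parse(expr, mode="eval").body)

# Multi-step arithmetic that models often fumble is deterministic
# once routed through the tool:
print(calc("(12345 * 6789) / 3"))  # 27936735.0
```

Walking the AST (rather than calling `eval`) keeps the tool restricted to arithmetic, so it can't be coaxed into running arbitrary code.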
> appearing to be human while having no actual intelligence?
Isn't a P-zombie about consciousness, not intelligence?