Even the article acknowledges that the model used to do this test is flawed.
Google's AI Overview is incredible. It's an instant correct answer for self-verifying questions, and it's right most of the time for reasonably complex questions. If the first page of results would contain the answer to your question, and your question can be answered with only one prompt, it's right almost every time.
> If the first page of results would contain the answer to your question
You can find complaints on this site, not too long ago, that Google was failing to have good results anymore. I don't feel the ranking has particularly improved since then.
I don't think that "right almost every time" is enough. It's different when you're searching for an answer yourself and you expect to dig for the right one among others that are not so right. But when you get one answer from an LLM, you either trust it or you don't. If you do, you're bound to be lied to from time to time and face the consequences. If you don't, you're back to searching manually anyway.
I find questioning LLMs to be a healthy habit these days.
Considering the accuracy of journalists in general (a contentious subject, but the studies I've seen seem to confirm the Gell-Mann amnesia effect to some degree), I'd say AI Overview isn't bad, at least.
I recently searched for information on a potential pet poisoning. The Google overview had the decimal point in the wrong place, confusing a lethal amount for a trivial amount. My pet was fine, but had it actually eaten more, and had I used the Google answer as my yardstick, it might not have been.
This is the point I should have made. You get more intelligence out of it than the average human professional in the same role would give you.
Worst piece of enshittification in my daily life.