Until the logs are released it is going to be impossible to say whether the AI simply provided factual information to reasonable queries. E.g. did the shooter ask "When is X location the busiest?", or did they ask "What is the best time to kill the most people at X location?".
Until more details like these come to light, I'm going to treat this as clickbait.
I'm pretty confident Big AI has robust filtering to prevent answering these questions. You don't have to spell it out.
The problem is bad actors (i.e., power-hungry sociopaths) have convinced the public that it's reasonable to assert liability claims against you simply because you have some intangible association with someone who committed a crime. This shows up in things like KYC laws making it impossible for certain kinds of legal businesses to use the banking system. It also shows up when states use the courts to sue gun manufacturers for crimes committed with legally manufactured items.
We should expect to see companies pursuing legal action against Big AI for their own security blunders. Presumably, at some future point we will see the capabilities of Mythos become commonplace (otherwise they've tacitly admitted to intractable scaling limitations). It will be easy for lawyers to make the same argument that Big AI is just as liable as a bank or gun manufacturer for the actions of its customers.
"When is X location the busiest?" - Google maps already tells you this information.
LLMs don't work in a predictably deterministic way, which makes it hard to reliably filter out these kinds of responses.
It's gotten better, but it's still typically pretty easy to bypass the protections that are currently in place.
I think the parent's point is asking why those protections need to be there at all.