The painful irony of bragging about/lamenting your new model's cybersecurity capabilities within and in response to leaking all the information about it due to poor cybersecurity.
Can't wait for the "$LAST_MODEL was amazing but this is the one that will change everything."
> The AI lab left the material, including what appeared to be a draft blog post announcing a new model, in an unsecured, public data lake
My tinfoil theory is that it was left by them to be discovered by the public.
I believe a lot of 'leaks' you see these days are, at least somewhat, intentional.
That model name does not look good in French: https://fr.wikipedia.org/wiki/Mythomanie ( https://en.wikipedia.org/wiki/Pathological_lying ).
it's greek you peasants
Yeah I'm confused by the comments here.
"a pattern of beliefs expressing often symbolically the characteristic or prevalent attitudes in a group or culture" (Merriam-Webster)
This doesn't seem obscure to me. It's what a model encodes in its weights. Am I falling into the xkcd 2501 trap?
The English word for it is similar too (Mythomaniac)
absolutely. Also doesn't help that the slang for Mythomanie is the short form "mytho".
https://archive.ph/ZMsZ1
The "every new model is THE one" cycle is getting a bit old but the Capybara tier thing is actually worth paying attention to.
TFA states "Capybara and Mythos appear to refer to the same underlying model"
would be hilarious if they get declared a SCR and end up with the best model
Particularly if a lax CMS is what leaked the internal memos claiming their new model automatically exploits vulnerabilities in CMSes. And that's what gets them attention as too risky.
This pattern of AI companies describing their own products as so spectacularly effective that they're dangerous really is a remarkable piece of propaganda engineering.
What is happening here would be easily understood and obvious to everybody if the head of marketing for a food company was on TV talking about how pretty soon everybody will be eating their food, and how it's so unbelievably tasty that it might cause people to leave their families and abandon all other hobbies in pursuit of their delicious product, utterly destroying society as we know it.
I mean maybe it'll happen eventually. Maybe we'll all end up with a wire stuck in the back of our heads, floating in a vat of nutritious goo. But what we've seen so far has been an excellent, highly useful, and certainly groundbreaking industrial automation product.
Maybe they could just write a blog post telling us what the thing does and how much it costs and when we can try it.
Big if true. Any alleged "step change" over 4.6 is worth paying attention to.
Also, Hegseth did a big fucky wucky if true - since they went scorched earth with Anthropic, this could mean denying the US frontier model capabilities.
It would be pretty amusing if Anthropic emerges as the clear winner (for some period of time) and other governments can use them but the US government cannot.
Current admin is certainly too prideful to walk something like this back. After all, admitting mistakes is unmasculine, and we have very manly men in charge of war round these parts!
This is exactly the outcome you would expect authoritarians to generate. Not interested in facts, more interested in subservience. The good news is that surrounding yourself with incompetence is how authoritarian regimes fall. The bad news is, well, the country where this is playing out.