Yup! It's so scary good we're not gonna sell it - change our minds... by investing gazillions of dollars. Well, one thing is for sure: if it exists, Palantir will be using it!
They are selling it, just not to script kiddies. Or do you think Apple, the Linux Foundation, etc are just participating in a massive conspiracy/rugpull?
So, suddenly, you see, open-source projects, which thousands of hackers have been trying to hack for decades, are gonna get hacked by an LLM. I need to see at least one such mitigation by Mythos, then we can assess the situation.
>Many of the "thousands" of bugs and vulnerabilities it found are in older software, or are impossible to exploit.
So?
Modern software is designed with a defense in depth model, so it often requires chaining multiple vulnerabilities to get a successful exploit. But individual vulnerabilities still need finding and fixing because people might find vulnerabilities in the other isolation layers later.
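As a toy illustration of the chaining point (entirely hypothetical, not taken from any project mentioned in this thread), defense in depth means an exploit only lands if it slips past every layer, so each individual bug is still worth fixing even while the other layers currently stop the full attack:

```python
# Hypothetical two-layer defense: an exploit chain must defeat BOTH layers.
# Function and layer names are made up for illustration.

def input_validation_ok(payload: bytes) -> bool:
    # Layer 1: a parser-level check. A bug that lets an oversized
    # payload through here is one vulnerability.
    return len(payload) <= 256

def sandbox_allows(action: str) -> bool:
    # Layer 2: a sandbox policy. Escaping it is a *second*,
    # independent vulnerability.
    return action in {"read", "write"}

def exploit_chain_succeeds(payload: bytes, action: str) -> bool:
    # The attacker has to get through every layer in sequence;
    # patching any single layer breaks the whole chain.
    return input_validation_ok(payload) and sandbox_allows(action)

# A small payload passes layer 1, but the forbidden action dies at layer 2:
print(exploit_chain_succeeds(b"A" * 100, "execve"))   # False
# An oversized payload is rejected at layer 1 before the sandbox matters:
print(exploit_chain_succeeds(b"A" * 1000, "read"))    # False
```

The point of the sketch is that a vulnerability in `input_validation_ok` alone looks "impossible to exploit" today, but becomes exploitable the day someone finds a hole in the sandbox layer, which is why it still needs fixing now.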
I swear every time an LLM does something useful, the usual band of skeptics bends over backwards trying to invent reasons to dismiss it.
The argument is that it is older software in the sense that it's unmaintained because better alternatives exist.
Also, I don't believe it is fair to dismiss skeptics as inventing reasons. If anything, "believers" are bending over backwards to praise Anthropic even though they didn't actually release anything.
I swear every time an LLM does something stupid, the usual band of AI hype pushers bends over backwards trying to invent reasons that it's actually good.
Exactly. If I had a nickel for every mention of "just wait 'til the next release!!" as some sort of justification for whatever's going on right now, I'd be a rich man.
I'm still waiting for a mostly LLM-assisted project that is not a 'pet project' and that wows me. Why is it taking so long, I wonder? Perhaps all this 'intelligence' is neat, but it is not what pushes humanity forward, which ultimately is what matters. That's the whole point of expending resources...
Quite possible.
And this attitude is how you get compromised in the near future.
No reason to expect capabilities of models are going to stop.
The whole announcement is full of FUD and fearmongering. They say they can't talk about 99% of the findings, they claim one of their experiments went "public" and posted to some very obscure websites no one has heard of, but didn't disclose which websites were these...
They are saying "trust me, bro, I have a superhacker model" and proceed to show 0 evidence that it is what they are hyping.
I think we are learning much of what goes on in the world as we go along.
E.g. even though LLMs can generate code and we have agents, the profession of software engineering is not being destroyed. The demand for software engineers in the labour market is still strong.
Also, a thing that wasn't spoken about loudly (and for good reason) is that code is not perfect, which means bugs and vulnerabilities are there. And the reality is, it was optimal to ship anyway: if it were not done, the release of software and the movement of resources toward other projects would slow. Aka slowing down economic activity.
I read that it is 5x as costly. Could that be the reason for the marketing?
In other news I hear "OpenAI backs Illinois bill that would limit when AI labs can be held liable" and people are losing their minds over it.
https://news.ycombinator.com/item?id=47717587
And here we learn that Mythos is not a big deal. Are there people who believe both?
One day I hear it is all a marketing pitch, another day I hear it can literally end earth so it should be regulated.
How do I reconcile this?
When these models come out and/or the responsible disclosure period elapses, I wonder if the naysayers will admit they were wrong, or if they'll just continue naysaying the latest bit of news about AI.
In my view the naysayers have always simply been moving the goalposts, and never admit when they were wrong. "AI just produces slop" -> "AI can't write useful code" -> "AI can't take SWE jobs" -> [we are here]
Oh we are way-ay-ay past this line of argumentation. The AI skeptic community has been far more right than wrong over several years now, whereas the hype-all-the-AI-things crowd has been proven laughably wrong on a fairly regular basis.
And there were people like you claiming that SWEs' jobs would start imploding the moment LLMs could generate code that could be considered production-grade.
You, just like 'them', are no better. It would be better if everyone on the extreme ends could be muted; the truth is closer to the middle.
Nearly all big tech code is AI written today. What about that isn't "production-grade?"
Didn't Oracle just lay off 30k people? Meta plans on laying off thousands more this year, and Microsoft and Amazon have already done layoffs in the name of AI.
So at what point should naysayers update their priors? How many times must people be proven wrong to think "maybe I have the wrong perspective here after all?"
Full disclosure, I used to be a skeptic myself in the early days, but I think being a skeptic today is pure stubbornness, not rational.
> Nearly all big tech code is AI written today. What about that isn't "production-grade?"
I do not think the recent reliability and quality of major services would be considered acceptable if it weren't for a domestic tech oligopoly systematically lowering standards.
The U.S. auto industry also shows that oligopolies can shut out competition for quite some time and steer their captive market into accepting mediocrity, but innovation continues anyway. It's quite the contrast seeing domestic automakers turn their backs on EVs at the same time the Chinese are advertising 10 minute flash charging.
With all due respect, you sound like a bozo; I've seen you edit your post about 5 times.
What matters is the aggregate activity in the labour market, and there the demand for software engineers is healthy.