Interesting that Amodei is the only major tech executive I can think of at the moment with a spine or any semblance of a moral compass. OpenAI/Google et al. will gleefully comply with any such requests, no matter how dangerous or unethical. The "problem" the US govt faces here is that they are tacitly admitting Anthropic has the most powerful models right now; otherwise they would just cancel all the contracts and go to Gemini/OpenAI. It feels like a bluff, so they are trying to bully them into compliance.
> The Pentagon is also considering severing its contract with Anthropic and declaring the company a supply chain risk, which would require a plethora of other companies that work with the Pentagon to certify that Claude isn't used in their workflows.
If Anthropic believes they are in a position to become the main player in the "AGI" space, they should just say "ok then" and let this happen. Their growth strategy looks realistic and sustainable, and it doesn't necessarily rely on sleazy defense contracts (aka making the taxpayer subsidize their growth, as is so common lately). It would probably earn them a lot of goodwill with consumers too.
However, I've yet to see in the last 10-15 years a major tech company make the "right" choice so I am probably just wishcasting.
"tech executive", "spine", "moral compass" -- it's illegal to these words together in the same sentence... except for this one.
Yeah this standoff is worth at least 10 Super Bowl ads in good publicity. The Pentagon is saying "Claude is the best so we need to use it but you need to stop acting ethically". I'm almost wondering if someone in the administration has a stake in Anthropic because this is such a boost.
Their threat to label it a supply chain risk also feels toothless because they've basically admitted that using Claude is a benefit, so by their own logic they'd be shooting themselves in the foot by banning contractors from using it.
Yes, I agree, and this is a moment to prove they aren’t full of it. It also seems like a very good move when the rest of the world is increasingly wary of tech that even whiffs of US govt involvement.
I am not at all a skeptic anymore on this stuff, and the science is well beyond me, but from what I think I know about alignment issues, and Anthropic’s intense focus on solving them, it would not surprise me at all if we learn that catering to US whims on AI safety results in the model actually getting worse, or causes intense 2nd- and 3rd-order unintended consequences. I’m not saying I believe a Terminator sequence of events is happening, but if I did believe that, the headlines right now are exactly what it would look like.
Alignment is the biggest issue for me, in terms of getting these things to actually behave in an environment where it is absolutely necessary that they behave. If I had to guess, that’s probably why the military prefers to use it. Claude tooling is the only thing I have used in this hype cycle that I can actually get to behave how I want and that obeys (arguably, often to a fault).
However, I also believe we’re in the worst possible timeline, so the moment we get a taste of something that works as promised, it’ll be ripped away because the govt decides to do something stupid, or builds a moat around its use in a way that makes it less useful and favors other, more “compliant” competitors.
Either way I bet there are some wild board room discussions going on at Anthropic right now.
>Interesting that Amodei is the only major tech executive I can think of at the moment with a spine or any semblance of a moral compass. OpenAI/Google et al
It doesn't strike me as interesting at all; Anthropic was literally founded on the whole concept of 'a less evil and morally aligned LLM' when he broke from OpenAI. Google and OpenAI don't stand to uproot their entire raison d'être when they participate in nefarious shit.
I wonder what kind of morally aligned and ethical work Amodei was doing for Baidu & Google, before he had leverage to appear moral and ethical in dealings with the US govt, you know -- two companies that are famously ethical and moral.
Google famously had “don’t be evil” as their core mantra, and Facebook used to actually be in the business of connecting friends with one another. In this day and age I genuinely cannot understand the position that you should trust what companies say over how they act (or will act in the future).
Let's say Anthropic refuses to do this. What actually happens next?
Or let's say they refuse and the government comes down hard on them in some way, and Anthropic still really doesn't want to do it, so they just dissolve the entire company. Is that a potential way out, at least?
I mean, I realise they'd be losing billions by doing that and putting thousands out of work, but given that unaligned military AI could destroy the world...
Seems like the two main threats are Defense Production Act and Supply Chain Risk. I'd assume Anthropic would sue if either were invoked. I could imagine Supply Chain Risk being easier to push back on because it's pretty clearly being used punitively rather than because of an actual risk. DPA might be a bit harder to push back on if the banned functionality (i.e. mass surveillance and autonomous weapons) exists in the LLM itself and it's just a matter of disabling external checks. If the banned functionality is baked into the training data/weights directly they could probably push back on the DPA by saying the functionality isn't something they can reasonably create.
Only other precedent I can think of, in the case where pushback fails, is Lavabit with Edward Snowden's email, but I feel like Anthropic is too big to "fail" in the way Lavabit did to avoid complying. The penalty for refusing to comply with the Defense Production Act is $10k and/or a year in prison, but I think if the government actually pursued that they would burn a bunch of bridges and Amodei would become a folk hero.
Archive: https://archive.is/20260224182829/https://www.axios.com/2026...
Military dependence on AI was a key point in the AI 2027 scenario.
"The President is troubled. Like all politicians, he’s used to people sucking up to him only to betray him later. He’s worried now that the AIs could be doing something similar. Are we sure the AIs are entirely on our side? Is it completely safe to integrate them into military command-and-control networks?69 How does this “alignment” thing work, anyway? OpenBrain reassures the President that their systems have been extensively tested and are fully obedient. Even the awkward hallucinations and jailbreaks typical of earlier models have been hammered out."
This is fascism. What law are they breaking? What authority does Hegseth have? This feels a lot like the threat of state violence to silence dissent.
Basically yes. The Trump regime is made up of the absolute worst kind of people. They seem utterly incapable of comprehending real solutions to any of the problems that we face. The only thing they know how to do is bark orders to do whatever simple thing can fit inside their own heads, and then resort to bullying if the target does not comply. It is prudent to avoid giving such people any more capabilities, which will inevitably be abused to harm our society. And the really sad part is how many otherwise-intelligent people they fooled (and continue to fool) with their hollow chest-thumping.
Kakistocracy.
How is any of this okay? What mental model of the world makes sense of this?
Fox News host Whiskey Pete can fuck right off.