This is not answering the question... and HN isn't US-only.
You can say the same for any other country... What if Japanese employees refuse, but Americans want it anyway? What if Chinese employees refuse, but Russian employees build it anyway?
The implications are still the same: social, cultural, jurisdictional, national, and corporate interests don't share the same boundaries and don't align on their priorities.
Is there any reason to think that autonomous weapons are a critical strategic capability? It's hard to see what an unpiloted drone can do that a remotely piloted drone can't, other than perhaps human rights violations.
The simple version: weapons systems are quickly advancing to the point where many of them can navigate and operate independent of human control. The obvious question is at what point we give these platforms release authority for lethal weapons. It becomes impractical to require (or even imagine, really) a human "pilot" operating every single drone when you have hundreds or thousands of them operating in theater. That's really what this is about.
Think of it this way: mines installed in the seabed in wars past were "dumb", in that a passing ship had to happen into them. Imagine systems deployed underwater that were mobile, contained multiple torpedoes, and could strike warships with little to no warning given their small acoustic signature. It's the same principle as a mine (you leave it in one spot and hope an enemy ship comes by), but the capabilities are far more advanced. If the system is not at least semi-autonomous, then it might as well be a dumb mine again.
Remotely piloted drones can't operate at long ranges in a conflict against a near-peer adversary such as China. All of the high-bandwidth communications links will be degraded by a combination of jamming, cyber attacks, and anti-satellite weapons. Remote piloting will only be reliable using fiber optic cables (very short range) or direct line-of-sight transmission. So hardly practical in the Pacific theater of operations.
In an existential conflict no one cares about human rights. That's something for the winners to worry about after the shooting stops.
There is only self regulation, ultimately, at the top. I think it's still progress to see these groups specifically call out their moral hesitations, even if it doesn't go anywhere - it gives people ground to realize that others share their concerns. All movements, all progress starts from people putting their stance out there and getting a conversation going around the topic; that builds mindshare and eventually a demand for change.
There's always this comment, saying that it's useless to even try to govern or resist the advancement, development, or use of weapons capable of indiscriminate killing.
If the world actually worked like they believe it does, if restraint were just not possible, the world would have been destroyed at least 3 documented times over.
We've banned the other account for repeatedly breaking the site guidelines, but - WTF? I don't want to ban you because it doesn't look like you have a habit of posting like this, but please don't do this kind of thing here again.
Why is there any controversy about defending one's nation being "good" or "bad"?
I can not believe what I am reading here, and how the single comment supporting defending one's country is so heavily downvoted. Qatar has poisoned Western online communities such that all defence of the United States is considered taboo? I don't even live in the US and I am frightened by what I see here.
The controversy isn't about defending one's country, it's about you and the parent comment author assuming what this is all about without reading the article.
The core of the issue is autonomous use of AI in mass surveillance of Americans and autonomous use of AI in automated weapons that make kill decisions. Anthropic is perfectly fine with working with the War Department and "defending one's nation".
But they are not okay with their AI being used to make a mockery of the 4th amendment and making automated kill/no-kill decisions about actual human lives.
Oh I believe it’s important to defend the country, but not because it’s a popular opinion. I dislike any statement that believes truth is based on consensus.
The resistance goes out the window the first time an American is gunned down by an autonomous system. They should do whatever possible to prevent that outcome.
Have we already forgotten about this? [0] Where was the open letter then?
Both companies (Google, OpenAI [0]) have defense contracts. At this point, the best course of action is to leave Google and OpenAI if you disagree with that (they won't).
Piggybacking on deeply integrated information and connectivity within society that was marketed and adopted under the guise of trust and an ethos of not being evil is pathetic.
Build, train, develop, and maintain an AI for the military if needed. When a government is scared of individuals, it has clearly lost its edge.
Surveillance is information gathering, and synthesizing insights that were previously undiscovered. It makes more information, albeit not shared information.
I suppose it decreases the percentage of information that’s free. But it increases the amount of total information.
As for free speech suppression, I don't see any of that happening. If anything, there is too much being said from across the political spectrum. I'd be happy if people said less, especially in hyperbole. I can't see where free speech has been infringed at all.
This gets a giant eye roll from me. Are you really so naive that you thought you could work on AI for a giant tech company, creating software capable of finding deep patterns in massive amounts of data... and it wasn't going to be used by the defense and intelligence industry? If you are so against the US government, and you are working for ANY big tech company, you are aiding the intelligence and defense industry. Government uses AWS and Azure. Intelligence agencies use the data and tools of Meta / Google / Apple / etc.
Am I the only one who remembers Google's prime directive? Much easier to understand than 'organizing the world's information', etc. It was simpler.
It is when "defense" means invasion and subjugation of other countries. All countries pose their military operations as "defense." Inquiring minds should ask whether a country bordered by two oceans, with two pacified neighbors, has any real threats, or merely opportunities for cheap labor, market access, and mineral rights abroad.
This has been going on for a very long time (read what Smedley Butler said in "War is a Racket"), but after the Iraq War, the credibility of the US should be somewhere in hell.
I remember they successfully got Google out of a military contract during the first administration (and were briefly vilified by the right for it). That's not going to work now. Workers have a lot less power, and the CEO is buddies with Trump.
As the article says, the workers didn't petition the CEO, they petitioned the head of Google AI who's already expressed solidarity with Anthropic. If they can convince Jeff Dean, I don't think Sundar necessarily gets a say; it's a lot easier to stick your head in the sand and ignore things than to fire one of your most widely respected engineers because he won't help the Pentagon build Terminator robots.
My one concern in this whole thing is that if these slightly-less-benevolent companies that still have some morality don't engage, we'll be left with companies like OAI and xAI engaging, and you just know that's not going to make things better for anyone.
Which companies are these? Google and Facebook already bribed Trump under the cover of “settling” a frivolous case he personally brought forward. Tim Cook personally donated to his inauguration fund and gave him an expensive trinket. The Netflix CEO is now kissing up to him trying to get the WB acquisition approved. Even companies that are hurt by the tariffs won’t say anything bad about him. The only CEO that has spoken out against any of his policies is Chase’s CEO.
> it's a lot easier to stick your head in the sand and ignore things than to fire one of your most widely respected engineers because he won't help the Pentagon build Terminator robots.
Wouldn't it be more like he would leave on his own and the company would keep moving along? Why would they fire him?
I mean, right. Why would they fire him? The Pentagon isn't demanding some concrete technical action that Jeff Dean has to personally perform or could personally obstruct, so it wouldn't make any sense. That's why I don't think Google executives can realistically stop him from announcing a similar policy if he wants to.
Google employees must think this is pre-2024. The employer has the power and doesn't mind laying off people who don't toe the company line, and all of the CEOs bend over and bribe the President, i.e., "settling" frivolous lawsuits brought by Trump himself over "censorship" when he was out of office.
I think a lot of software companies are going to learn just how much employee power remains tomorrow, in the very likely event that the Pentagon issues an order purporting to ban all defense contractors from using Claude.
I understand the vision, but how does this work on a global scale? E.g., American employees refuse to build this, but China's don't.
Edit: I originally ended with "What would have happened if Germany had a nuclear bomb and America didn't?", but I think it distracted from the point I was trying to make, so I'm moving it to an edit. I'm not trying to ask "is the US the bad guy". I'm trying to ask how to balance personal anti-war sentiments with the realities of the world (specifically, in this case, keeping up in an arms race).
>American employees refuse to build this, but China's don't.
How about you articulate the threat from an AI powered China to people outside of AI powered China and discuss potential methods to counter that, instead of insisting capabilities be developed just in case.
>is the US the bad guy
Yes
>I'm trying to ask how to balance personal anti war sentiments with the realities of the world
Insist on open information, never surrender consent willingly and demand justification for everything. As always.
The threat seems straightforward to me: information warfare.
Then the follow-up question: how do you combat that? Not likely through developing similar technology.
The US is a major exporter of that. Including Google itself via the YouTube recommendations algorithm.
How has the threat model changed?
Keep your shit patched. You don't need LLM targeting of drone-based weapons to patch your servers.
I think you've missed the point.
PRC isn't going to do any of the things you are asking for, and no one expects them to. The threat of an AI powered China is really obvious to me, but apparently the idea of "IP theft and industrial sabotage, but at scale with AI agents instead of human meat sacks" is hard to clearly articulate.
One method, beyond AI powered kill chains, to counter an AI powered China is of course strategic weapons.
Well, game theory aside, the reality is that if the PRC weaponizes AI, there's a chance they may use it in the future. If the US weaponizes AI, they'll definitely be using it to kill people within the calendar year. Employees have to factor that in: for the PRC worker, the killing is hypothetical; for the US worker, it's inevitable.
Not to worry, xAI would do it even if Google didn't.
Also, Anthropic didn't actually refuse to work on all military stuff. They have some conditions, which isn't the same thing.
Anthropic has conditions, the Pentagon has billions of carrots and millions of sticks.
Do the same thing we did with the nuclear arms race: Treaties to limit and control it.
Obviously, we would have had more political leverage if our leaders had started working on a treaty before they crossed enough moral red lines to start a tech revolt, but we did not elect the sort of leaders that would do that.
The obvious countries sign those treaties for political reasons. There are countries officially pretending they don't have any nuclear warheads, but it is well known by everyone that they have plenty, and the capacity to deliver them. They sandbag for political, but also for religious, reasons, and that's scary.
Treaties worked because both sides had large quantities of bombs. In this case, certain people do not want the US to have the AI bomb, while China and others will have it.
The reason it works is that fewer participants in an effort means slower progress in that endeavor. Brilliant employees pushing their entire org not to support the development of bad things prevents less brilliant employees from doing those bad things.
It is sort of like computers are amazing but can also be a privacy nightmare. Software engineers don’t help or coordinate with black hat hackers. So black hat hackers have a harder time refining their systems.
Well, then military use of some US commercial AI systems will be subject to minimal restrictions while Chinese AI might not be.
Thus some people avoid having to see their work used for killing people or in mass surveillance, so that they're actually able to contribute to AI development instead of leaving the field.
That's exactly why I think the principled position is naive in the tragedy-of-the-commons situation we're in. It isn't a sci-fi story with a happy ending; it's the Manhattan Project, and 70+ years ago Nazi and Japanese data centers doing foundation model training would have been bombed to smithereens at any cost.
Are we comparing the PRC to early 40s Germany/Japan?
I'm going to give a shout out here to an episode of the excellent podcast Hardcore History, specifically Episode 59: The Destroyer of Worlds [1].
The development of the atomic bomb created a debate in American policy circles about how the US should react. Within a few years, the same debate occurred over developing thermonuclear weapons. The same question kept coming up: what if the enemy has these weapons and we don't?
Dan Carlin's position, which I happen to agree with, is that America chose wrong. It became both belligerent and paranoid to a degree that just wasn't the case before WW2. If you look up the history of regime changes at the hands of the US [2] then you can see it went into overdrive after 1945.
Part of the problem here I think is projection, the psychological phenomenon. It's also a cultural phenomenon. So, for example, when you have a historically oppressed people who are being potentially freed, the oppressors will fret that the formerly oppressed will rise up and kill them. This is projection.
We saw this exact thing play out with Emancipation. There was no mass revenge violence by the former slaves. If anything, there was more violence by the former oppressors against freed slaves, and a system that excused the violence (e.g. the Colfax massacre [3]).
I think nations can be guilty of this too. The US sees any other global power as a potential hegemonic, imperialist power that will dominate and exploit everyone around them because, well, that's what we do.
We also see this in how we view AI as a resource. We see it as something to be owned and gatekept such that some US company will become insanely wealthy further extracting every last dollar from every person on Earth.
So your comment betrays a common fear that China will displace us as a global hegemonic, imperialist power despite there being zero evidence that China behaves in that fashion. American propaganda runs deep and the projection is strong, so this will immediately cause some to say "but Tibet" or "but Taiwan" without really knowing anything about any of those situations.
As just one example, the One China policy is the official policy of the US, the EU, and almost every nation on Earth. "They might invade," I preemptively hear. They won't, partly because they can't, but really because they don't need to. If the world already has the One China policy, why do anything? And I said they can't because they don't have that military capability. If you think they do, you don't know anything about war. Crossing 100 miles of ocean to invade an island defended by an army of over 500,000 is simply not possible.
Let me put it this way: the 17 or so miles of the English Channel stopped the German war machine, despite its millions of soldiers.
Anyway, back to the point: this whole argument of "what if China does military AI?" is (IMHO) projection. If anything, China has shown that they won't allow a US tech company to control and gatekeep AI (e.g. by releasing DeepSeek). And if China gets AI, they're more than likely to use it to further raise people out of poverty and automate away more menial jobs without making those displaced workers homeless.
[1]: https://www.dancarlin.com/product/hardcore-history-59-the-de...
[2]: https://en.wikipedia.org/wiki/United_States_involvement_in_r...
[3]: https://en.wikipedia.org/wiki/Colfax_massacre
You seem to be laboring under the naive belief that mainland China is a rational actor which will refrain from attacking Taiwan over fear of heavy losses and possible defeat. You might have been correct at some point, but that situation no longer obtains. Xi Jinping has successfully purged all potential rivals and personally taken over centralized control of all important decisions. We have no visibility into his thinking, so we have to assume the worst. If he orders the PLA to go then they'll go, regardless of consequences. Part of preparing for the eventuality involves building more effective autonomous weapons. There is no realistic alternative.
So I follow a number of China scholars and experts and I've yet to see any consensus about what these military purges actually mean.
It could be about corruption. You see this in the Russian military where paid-for tanks didn't exist because the generals had pocketed the money. It could be to have an expansionist policy. It could well be to not have an expansionist policy. The point is that nobody really knows yet.
But the string I really wanted to pull at was this idea that China isn't a "rational actor". It's lazy and really a thought-terminating cliche. It's certainly no basis for analysis or policy-making. It's kind of the final boss of justification. "Putin/Saddam/Xi/Castro/Maduro is crazy". That really just means you don't understand what's going on or want to ignore the facts.
We now have 50+ years (since really the end of the Cultural Revolution) of China acting in a very rational, very intentional, and very long-term way. Xi's own history here is pretty interesting. He went from privileged child (his father was one of Mao's lieutenants), to banishment, to working his way up through the party's ranks over decades.
It's a mistake (IMHO) to view Xi as a singular actor, let alone as an irrational autocrat. While the PRC and the CCP might be relatively new, the systems and political structures can probably be traced back thousands of years. I'm thinking particularly of the bureaucratic reforms of the Qin Dynasty some ~2300 years ago.
What cannot be ignored is that a billion Chinese have seen a massive improvement in their living conditions during their lifetimes. Almost all of the people pulled out of extreme poverty in the 20th century were because of China (~800M). So although China is authoritarian, the government is extremely popular because of that increase in living conditions. It's something that we in the West have a hard time fathoming because our living conditions have been in decline since at least the 1970s.
> And if China gets AI, they're more than likely to use it to further raise people out of poverty and automate away more menial jobs without making those displaced workers homeless.
Your comment is very optimistic. But the quoted part reminded me of something I heard (again) about China using slave labor in their lithium mines:
https://www.state.gov/forced-labor-in-chinas-xinjiang-region...
HN really can't handle comments like these huh.
> The US sees any other global power as a potential hegemonic, imperialist power that will dominate and exploit everyone around them because, well, that's what we do.
In the Cold War, this was the correct approach, the USSR was that.
You seem to really like history. Maybe you're ready to graduate from podcasts to reading books and primary sources. Fair warning: you might end up with a picture of history that is less cartoonish and motivated.
What primary sources are you referring to? Come with receipts next time instead of just vitriol.
And maybe you can read a book about adding to the conversation instead of navel gazing, oh superior intelligent one who has read so many books but can't add a comment or reference a book to point to a concept that could help add to the shared pool of meaning.
The good books, unlike the good podcasts, can rarely be reduced to a single forum comment. You don't read them to cite them as a zinger in an online back-and-forth. You read lots of them, and you cross-reference them with the world around you, to slowly build up a view of the world that's irreducibly complex. You read them to escape yourself and your times -- the exact opposite of "navel gazing", in a sense.
Most books add to "the shared pool [of] meaning", as you say. Pick any one; I didn't have a specific one in mind. The commenter to whom I was responding is in a state where pretty much any well-written book about history would help them out a lot. Something written before 1980 might be especially illuminating.
It might take many books, if they want their comprehension of history to actually be "hardcore".
> American employees refuse to build this, but China's don't.
It's not American employees vs. Chinese employees. No need to villainize China at every opportunity. Most Chinese employees are more similar to American employees than you think.
It's {top candidates who have their pick of employers} have the luxury to refuse to build this.
A mid-tier dev who can't land a job at any of the top AI companies, can code with Cursor, and is trying to pay their rent or medical bills will absolutely build AI for the military in return for having their rent paid.
This is regardless of whether it is in the US or China.
With current leadership, I think we're closer to Germany in this analogy.
This is not answering the question... and HN isn't US-only.
You can say the same for any other country... What if Japanese employees refuse, but Americans want it anyway? What if Chinese employees refuse, but Russian employees want it anyway?
The implications are still the same -- social, cultural, jurisdictional, national, and company interests don't share the same boundaries and don't align on their priorities.
I don’t think they’re refusing all military involvement. Autonomous decision-making is the problematic part.
The US military has deployed fully autonomous weapons systems since 1979. If you're worried about that then you're a little late.
It seems weird to equate the capabilities of "ai" in 1979 to what we have now; clearly it is on a different level.
Yes, but we both know this is not the same kind of “autonomous weapons”.
my brother in Christ, what do you think 1940s America was like?
This kind of inflammatory nonsense serves no purpose other than to be insulting and provocative.
I think it signals to other people that they should not feel alone in that "this is all **ed up". To that, I appreciated the comment.
Is there any reason to think that autonomous weapons are a critical strategic capability? It's hard to see what an unpiloted drone can do that a remotely piloted drone can't, other than perhaps human rights violations.
The simple version: Weapons systems are quickly advancing to the point where many of them can navigate and operate independent of human control. The obvious question here is at which point do we give these platforms release authority for lethal weapons. It becomes impractical to require (or even imagine, really) there to be a human "pilot" operating every single drone when you have hundreds or thousands of them operating in theater. That's really what this is about.
Think of it this way: mines installed in the seabed in wars past were "dumb", in that a passing ship had to happen into them. Imagine systems deployed underwater that were mobile, contained multiple torpedoes, and could strike warships with little to no warning given their small acoustic signature. It's the same principle as a mine (you leave it in one spot, hope an enemy ship comes by), but the capabilities are far more advanced. If the system is not at least semi-autonomous then it might as well be a dumb mine again.
Remotely piloted drones can't operate at long ranges in a conflict against a near-peer adversary such as China. All of the high-bandwidth communications links will be degraded by a combination of jamming, cyber attacks, and anti-satellite weapons. Remote piloting will only be reliable using fiber optic cables (very short range) or direct line-of-sight transmission. So hardly practical in the Pacific theater of operations.
In an existential conflict no one cares about human rights. That's something for the winners to worry about after the shooting stops.
You don't need modern ai for that, it's been done decades ago.
Modern tools lend themselves more to information warfare and deobfuscation.
Faster decisions, less fatigue, etc.
It can turn around and bomb you.
If we're going to have to rely on self-regulation for this, we're already doomed.
There is only self regulation, ultimately, at the top. I think it's still progress to see these groups specifically call out their moral hesitations, even if it doesn't go anywhere - it gives people ground to realize that others share their concerns. All movements, all progress starts from people putting their stance out there and getting a conversation going around the topic; that builds mindshare and eventually a demand for change.
Sure, but we’re currently so fucked that even self-regulation is clearly superior to kneeling to the Mad King and his drunkard Secretary of War.
As much as I applaud the intention, the genie has been out of the bottle on this one for many years already.
There's always this comment, saying that it's useless to even try to govern or resist the advancement, development, or use of weapons capable of indiscriminate killing.
If the world actually worked like they believe it does, if restraint were just not possible, the world would have been destroyed at least 3 documented times over.
Don't listen to them.
I think they're referencing the previous Google employee protests of the company working with the US Military.
I think they're referencing the Google AI already being used to make killing people more efficient.
United States 1945
Soviet Union 1949
United Kingdom 1952
France 1960
China 1964
Israel 1966
India 1974
South Africa 1979
Pakistan 1998
North Korea 2006
also https://en.wikipedia.org/wiki/Nuclear_latency
also, see what Ukraine and Iran got for their "restraint".
What are you trying to say exactly?
that development and advancement of nuclear weapons had not, in fact, been contained.
How do you know that the list wouldn't have been longer otherwise?
Sweden would have had its own nuclear bombs if not for the political opposition. That's at least one more that would have been on the list.
The point is that the list isn't longer because the United States and USSR chose to not make it longer.
Will an AI non-proliferation treaty make any difference when twenty countries possess weaponized superintelligence?
Arguably it has always been there, considering the US military sponsored so many computing projects.
The line should be "no" not "limited domestic use".
100 google employees wow
And they'll be terminated by Jan 2027. Anything too scandalous will be done in secrecy thanks to code&project silos.
Next victims of "AI productivity gains"
200
every change starts with a few people, and then it grows
Google is grandfathered into a few preexisting defense contracts. Any red lines you draw may have already been crossed.
Literally yesterday Google changed how secrets work. It's very possible to introduce change.
Do you have any references for this? I'd like to know more.
> every change starts with a few people, and then it grows
your opinion is defense contracts are bad
my opinion is defense contracts are good
who is correct? probably me since 99.9% of Googlers won’t leave over this
[flagged]
He’s certainly mostly right about
> me since 99.9% of Googlers won’t leave over this
Of course maybe not 99.9% but almost certainly >= 95%
We've banned the other account for repeatedly breaking the site guidelines, but - WTF? I don't want to ban you because it doesn't look like you have a habit of posting like this, but please don't do this kind of thing here again.
https://news.ycombinator.com/newsguidelines.html
sorry about that, I appreciate the reminder. Sometimes it's hard to engage in conversations with obvious bots/bad actors in good faith.
Why is there any controversy about defending one's nation being "good" or "bad"?
I can not believe what I am reading here, and how the single comment supporting defending one's country is so heavily downvoted. Qatar has poisoned Western online communities such that all defence of the United States is considered taboo? I don't even live in the US and I am frightened by what I see here.
The controversy isn't about defending one's country, it's about you and the parent comment author assuming what this is all about without reading the article.
The core of the issue is the autonomous use of AI in mass surveillance of Americans and the autonomous use of AI in weapons that make kill decisions. Anthropic is perfectly fine with working with the War Department and "defending one's nation".
But they are not okay with their AI being used to make a mockery of the 4th amendment and making automated kill/no-kill decisions about actual human lives.
Oh I believe it’s important to defend the country, but not because it’s a popular opinion. I dislike any statement that believes truth is based on consensus.
"Defending one's nation" and "capitulating to the people in charge like Hegseth" are very much not the same thing.
Very much doubt Google will take principled stance
Given Jeff Dean’s political activity on X, I’m guessing he’s aligned to the resistance too. Not sure the rest of management is interested in caving.
The resistance goes out the window the first time an American is gunned down by an autonomous system. They should do whatever possible to prevent that outcome.
https://www.nytimes.com/2025/12/31/magazine/ukraine-ai-drone...
Have we already forgotten about this? [0] Where was the open letter then?
Both companies (Google, OpenAI [0]) have defense contracts. At this point, the best course of action is to leave Google and OpenAI if you disagree with that (they won't).
[0] https://www.theguardian.com/technology/2025/jun/17/openai-mi...
I say stay, and do a subtly bad job there.
Sabotage? You are openly advocating the internal sabotage of US defense capability?
If the US government is treating its constitution hardly any better than toilet paper (and it clearly is) that is not an unreasonable thing to do.
If the current trajectory continues those “defensive” capabilities will obviously be used against the citizens paying for them..
Piggybacking on deeply integrated information and connectivity within society that was marketed and adopted under the guise of trust and an ethos of not being evil is pathetic.
Build, train, develop and maintain an AI for military if needed. When a government is scared of individuals they've clearly lost their edge.
Your manager and colleagues are not idiots.
US leaders have realized how much power the CCP has over China's citizens. They want that power too. Same with the EU.
What makes the US the US is its transparent, integrated markets and the ability for information, people, and goods to move freely.
By using AI for mass surveillance, information can’t move that freely. Free speech gets suppressed. Can’t call the emperor naked.
The Emperor is naked with a poopy diaper. But we can’t say that aloud.
That seems counter intuitive to me.
Surveillance is information gathering, and synthesizing insights that were previously undiscovered. It makes more information, albeit not shared information.
I suppose it decreases the percentage of information that’s free. But it increases the amount of total information.
About free speech suppression, I don’t see any of that happening. If anything, there is too much being said from across the political spectrum. I’d be happy if people would say less, especially in hyperbole. I can’t see where free speech has been infringed at all.
This gets a giant eye roll from me. Are you really so naive that you thought you could work on AI for a giant tech company, creating software that is capable of finding deep patterns in massive amounts of data, and it wasn't going to be used by the Defense / Intelligence industry? If you are so against the US government, and you are working for ANY big tech company, you are aiding the Intelligence and Defense industry. Government uses AWS and Azure. Intelligence agencies use the data and tools of Meta / Google / Apple / etc.
The letter: https://notdivided.org/ (https://news.ycombinator.com/item?id=47174964)
Am I the only one who remembers the prime directive of Google? Much easier to understand than 'organizing the world's information' etc. It was simpler.
Don't be evil.
Google has been evil for at least a decade, if not longer than that.
This is just pigslop masquerading as a moral stand.
What happened to the OG Google that cared about users, prioritized honest search, fast performance, and didn't murder pages with ads?
> evil
They never removed "don't be evil", they just changed where it is in the document.
Honestly, I want tech companies to make our military strong, just not under this guy. He’s going to turn around and use it directly on Americans.
"Don't do evil"
Oh, wait...
Defending one's own country is not evil, no matter how much money Qatar pours into Western social media influencers.
> Defending one's own country is not evil
What if your country’s government is the biggest threat to it, though?
I can't remember the US ever being in a position of defense, no matter how much AIPAC handlers blackmail Western politicians with Epsteins.
It is when "defense" means invasion and subjugation of other countries. All countries pose their military operations as "defense." Inquiring minds should ask if a country bordered by two oceans, with two pacified neighbors, has any real threats, or merely opportunities for cheap labor, market access, and mineral rights abroad.
This has been going on for a very long time (read what Smedley Butler said in "War is a Racket"), but after the Iraq War, the credibility of the US should be somewhere in hell.
The US did rename the Department of Defense to the Department of War, so not sure how much posing is left...
It's not black and white. There is an entire spectrum of completely justifiable and extremely questionable uses of military power by the US.
Aiding your nation is not evil; in fact, it's the opposite: it's good.
Aiding Hegseth / The Heritage Foundation is not aiding the US. If anything, it's the exact opposite.
They need to unionize quickly to protect their employment and include this as part of their bargaining
I remember they successfully got Google out of a military contract in the first admin (and briefly vilified by the right for that). that's not going to work now. Workers have a lot less power and the CEO is buddies with Trump
As the article says, the workers didn't petition the CEO, they petitioned the head of Google AI who's already expressed solidarity with Anthropic. If they can convince Jeff Dean, I don't think Sundar necessarily gets a say; it's a lot easier to stick your head in the sand and ignore things than to fire one of your most widely respected engineers because he won't help the Pentagon build Terminator robots.
> If they can convince Jeff Dean, I don't think Sundar necessarily gets a say
It's Demis they need to convince, not Jeff Dean.
My one concern in this whole thing is that if these slightly-less-benevolent but still somewhat moral companies don't engage, we'll be left with companies like OAI and xAI engaging, and you just know that's not going to make things better for anyone.
Which companies are these? Google and Facebook already bribed Trump under the cover of “settling” a frivolous case he personally brought forward. Tim Cook personally donated to his inauguration fund and gave him an expensive trinket. The Netflix CEO is now kissing up to him trying to get the WB acquisition approved. Even companies that are hurt by the tariffs won’t say anything bad about him. The only CEO that has spoken out against any of his policies is Chase’s CEO.
It’s not a great situation, no doubt, but after Kristi Noem’s luxury jet I’m willing to hope that their capacity for grift outweighs their competence.
> it's a lot easier to stick your head in the sand and ignore things than to fire one of your most widely respected engineers because he won't help the Pentagon build Terminator robots.
Wouldn't it be more like he would leave on his own and the company would keep moving along? Why would they fire him?
I mean, right. Why would they fire him? The Pentagon isn't demanding some concrete technical action that Jeff Dean has to personally perform or could personally obstruct, so it wouldn't make any sense. That's why I don't think Google executives can realistically stop him from announcing a similar policy if he wants to.
Google employees must think this is pre-2024. The employer has the power and doesn’t mind laying off people who don’t toe the company line, and all of the CEOs bend over and bribe the President - i.e. “settling” frivolous lawsuits brought by Trump himself over “censorship” when he was out of office.
I think a lot of software companies are going to learn just how much employee power remains tomorrow, in the very likely event that the Pentagon issues an order purporting to ban all defense contractors from using Claude.
The kind of government contracts that Claude might lose pale in comparison with contracts that Google/GCP go after
"Google employees make demands of US Military from nap pods, ball pit."
I assume by red lines they are referring to a life-sized tic-tac-toe game board painted in a hallway.