Probably should just replace their software engineers too. I don't mean to throw shade, but automotive software is so bad I'd bet my life that Claude could do better.
Nevermind that the update cycle seems to be 6-10 months for changes like "You can now reset your radio presets directly from the radio settings menu", while bugs like temperature control resetting to max cool every start-up never get fixed.
I kept reading about how bad Android Auto was for years, but we finally bought a more modern used car and I can't believe they would ship that experience to customers. I had a week where I just had to unpair and re-pair every time I got in the car.
I would love to read about why that stuff is the way it is from the engineers, hmm that might be a good spelunking. I really must be missing something that makes it harder than I think it really should be.
Maybe they prioritized it low, assigned it to a group of 3 devs, and it functioned the first time they demoed it to management, so it shipped. All other devs would’ve been working on their in-house software.
US car software, yeah, I've never seen such trash in my life. That said, I don't really have any complaints about my Hyundai software. It works, doesn't crash, and does what I want it to.
My - admittedly 8-year-old - Mitsubishi has atrocious UX. State resets itself at odd times (e.g. cruise control disables EV mode and resets regenerative braking settings), preferences are forgotten, if I want to listen to the radio I have to cycle through AM before FM every time, the touchscreen is slow to respond, the WiFi password for connecting to the app can only be obtained from a dealer, buttons do different things depending on context, etc.
CarPlay works great though.
It’s not just American cars
UX in my Audi Q5 (2024) is terrible. With two phones in the car you never know which one is connected and whose Google Maps is currently being displayed. And then there are the buttons, designed with contempt for the driver. I recently had to change a flat tire, which is a story in itself. German engineering is soooo different these days.
I really wish that cars were legislated to have documented APIs / CAN bus. It would be great to be able to load an app that set my car up the way I like it, instead of having to change a bunch of settings every time I start it (EV mode on above 10% battery, eco mode accelerator mapping, single-pedal driving all the way off. Every. Single. Time.)
Re Germans: I’m not sure it’s a new thing. I can remember trying to uninstall a seat in a 90s BMW and wondering how they had managed to make something that could be accomplished with 4 bolts into something so complex.
We have a 2017 and a 2020 Ioniq hybrid and Android Auto has been flawless since we got them. Except for updates from Google on the phone temporarily breaking things, that is.
Well, if all the training data for auto software was trash anyway, then you know what they say:
Carbage in, carbage out.
AI is a godsend for automotive coding.
It's really annoying how at some ASIL levels you need 100% unit-test code coverage. With AI, all you have to do is get your agent to generate the tests! Likewise with all the MISRA C requirements. Need your cyclomatic complexity to be less than 10? It's just one prompt away! Now your spaghetti code can easily satisfy the safety requirements with much less effort.
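To make the sarcasm concrete, here's a minimal hypothetical C sketch (names and thresholds invented) of how a "cyclomatic complexity < 10" rule gets gamed: mechanically split one branchy function into helpers, and every function passes the checker while the overall logic stays exactly as tangled.

    /* Hypothetical sketch: gaming a per-function complexity limit by
     * mechanical extraction. Each function is now trivially "simple";
     * nothing about the design has actually improved. */
    static int fault_from_low(int temp_c)  { return (temp_c < -20) ? 1 : 0; }
    static int fault_from_high(int temp_c) { return (temp_c > 85)  ? 1 : 0; }

    int cabin_temp_fault(int temp_c)
    {
        if (fault_from_low(temp_c)) {
            return 1;
        }
        return fault_from_high(temp_c);
    }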
As someone who's worked on this kind of stuff at GM, I don't really get the exuberance in this particular space (not just the comment I'm responding to).
If you want 100% coverage, you just autogenerate the test cases. LLMs can't properly check MISRA requirements, so they're really just a layer on top of existing automated checkers. Same for complexity metrics: it doesn't get merged if it violates them (or it's a vendor dependency you won't touch anyway).
If you care about the spirit of the rules, they don't make that big a difference. If you don't care, there are already ways to do this. In either case it's an incremental change, not what I'd call a godsend.
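To illustrate what "just autogenerate the test cases" means in practice, here's a minimal hypothetical C sketch (function and values invented): the test exercises every branch, so line and branch coverage read 100%, but the expected values are taken from whatever the code currently does rather than from the requirements, which is exactly why coverage alone proves so little.

    /* Hypothetical coverage-padding "unit test": drives every branch of
     * the invented function below, so coverage reads 100%, but the
     * expectations mirror the current implementation, not the spec. */
    #include <assert.h>

    static int clamp_duty_cycle(int pct)
    {
        if (pct < 0)   { return 0; }
        if (pct > 100) { return 100; }
        return pct;
    }

    void test_clamp_duty_cycle(void)
    {
        assert(clamp_duty_cycle(-5)  == 0);   /* low branch   */
        assert(clamp_duty_cycle(250) == 100); /* high branch  */
        assert(clamp_duty_cycle(40)  == 40);  /* pass-through */
    }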
I can't tell if this is sarcastic or not, but it seems insane to let the AI write the tests.
AI can't be held accountable, it shouldn't be writing the tests that determine whether car systems function correctly.
If you wouldn’t let AI run your nuclear power plant, you need to drink more of the AI kool-aid.
All these luddites out here. The cats out of the bag. Get with the program. Give the AI nukes already.
You’re absolutely right! I shouldn’t have launched the nukes. Would you like to learn more about nuclear safety?
>AI can't be held accountable
I hear this all the time. Why does it matter? Punishing a human for making a mistake does not prevent mistakes, nor does it undo the harm of the mistake. A human saying "my bad, I messed up" and an AI saying "my bad, I messed up" are equally worthless, in a functional sense.
"Punishing a human for making a mistake does not prevent mistakes" This statement suggests you don't believe in some combination of neuroplasticity as a concept or the arrow of time.
Tell the families of the people who died in the 737 MAX disasters: "Don't worry - everything's going to be okay! The engineers learned from their mistakes - accountability works, you have nothing to be sad about!"
Tell the family of the person killed by a semi truck driver who showed up to work drunk or high: "Don't worry - the driver went to jail! Accountability prevented anything bad from happening!"
Accountability alone fails to prevent deadly mistakes millions of times a day; millions of mistakes are avoided daily through process, redundancy, independent review, and formal methods.
"Accountability prevents mistakes" is a comforting delusion. In reality, accountability is only marginally related to whether or not mistakes are made.
What are you even on about mate? Sure accountability doesn’t prevent all mistakes. Guess what, nothing prevents all mistakes. Accountability can help prevent some mistakes some of the time. It sounds like you’re suggesting getting rid of the concept of accountability because it doesn’t prevent ALL mistakes. Way to throw the baby out with the bath water.
"Accountability alone fails to prevent deadly mistakes millions of times a day"
...in his desperation to finally win an argument online our hero advanced, grimly ignoring the concept of Engineering.
"millions of mistakes are avoided daily through process, redundancy, independent review, and formal methods."
Ahh, spoke too soon, Engineering has finally joined the chat. So what mechanism do you propose led to the establishment of process, redundancy, independent review, and formal methods?
Punishing humans does, in fact, prevent mistakes. Or rather, the threat of punishment causes people to be careful to avoid mistakes, and that prevents mistakes. Sure, this doesn't work 100% of the time, but it does work and has throughout human history. Meanwhile, there's no equivalent paradigm for LLMs.
Even if you could threaten an LLM with punishment for making mistakes, you might get longer CoTs, but that wouldn't prevent mistakes in LLMs. The lack of accountability isn't the reason that LLMs make mistakes - adding accountability wouldn't change anything.
If a human messes up enough, eventually they will get fired, fined, or jailed. An AI will not.
A human also knows they might get punished if they mess up badly enough, which might cause them to think twice before doing something bad. For an AI there is a reward, but there is no risk.
So while both might lie, only the human will be worried that it will be found out. That makes a difference.
There is a human in the loop that either prompted the agent or approved the code. So it doesn't matter if the AI is accountable or not.
I hear you, but isn't the human in the loop precisely the one who should be putting their foot down and saying "no, the AI shouldn't be writing the tests to begin with", which would bring us full circle?
You say that like all humans are alike: that they all care about getting fired, fined, or jailed; that they're even considering punishment when they're making their decisions; that risk factors into decision making.
What you are describing is a hypothetical "rational person". In real life, even the most rational people you know do completely irrational things routinely.
The Therac-25 engineers were accountable. The 737 MAX engineers were accountable. Accountability is doing much less work in the safety story than you seem to think.
The real work is done by process, redundancy, independent review, formal methods. None of these inherently require someone to be penalized for making mistakes, and penalizing people for making mistakes is a demonstrably, empirically unreliable mechanism for preventing mistakes.
Irony detected, but it's 2026, so nobody can be sure.
> agent development, model engineering, AI-native workflows -- point directly at where large-enterprise demand is heading.
I don't understand these words. Does "AI-native workflow" mean vibe coding?
I am now seeing a lot of roles asking for "AI-enabled engineers". And I am not sure what that means either. I am sort of afraid to ask because the answer will probably confuse me even more. Maybe it's my understanding of what LLMs are and how they work that makes these words mean very little to me.
You can just check any of the vacancies on their website to see what they're looking for.
Example: AI Agent Engineer (https://search-careers.gm.com/en/jobs/jr-202606937/ai-agent-...)
> We are seeking an AI Agent Engineer to design, build, and operationalize AI-powered agents that enhance employee productivity and decision-making in a complex enterprise environment. The ideal candidate combines strong AI/ML foundations, hands-on experience with agent frameworks, and a pragmatic approach to delivering business value in partnership with cross-functional teams.
Does not look like pure vibe-coding to me. More like developing wrappers over LLMs.
Based on that description, we still have no idea what they are looking for, since the only meaningful words in the first sentence are "design", "build", and "AI-powered agents". "Operationalize" isn't even a word, and "enhance employee productivity and decision-making" communicates nothing, except maybe that this is tools- or HR-oriented and not product-oriented. "Complex enterprise environment" can be omitted.
I would definitely pass on this job.
I didn't want to paste all the job requirements from that link here, but fair point.
Yeah, it's vibe/agentic coding. That's what about half of the current jobs are right now. It's really sad. Saw a vibe coder job today: 20 bucks an hour.
Interesting. I saw contractor rates dropping here too. The ones I saw recently were at pre-pandemic levels, so from 6 years ago. The large increase in cost of living since then makes it even worse. The funny part is that the skills and experience being asked for are at a senior level. So I guess that would be a senior-level vibe coder at junior rates?
Management literally doesn't realize that you can't exactly vibe code a serious project? Maybe they just don't care and it's an experiment. Super low risk I guess, depending on the city they might pay the floor cleaner more than that.
FWIW, I interpreted the article as saying they're not looking for vibe coding, but AI model development per se:
"...In practical terms, GM is looking for people who know how to build with AI from the ground up — designing the systems, training the models, and engineering the pipelines — not just use AI as a productivity tool."
I can guarantee you GM is not training any of their own models.
I mean, agentic coding is a thing, but anyone could learn it in a day or a week. So the idea of throwing away the people who could be the most productive with AI of course makes no sense. But it's big corporations; not everything has to make sense. It's most likely pure cost-cutting dressed up in 2026 lingo.
I was going to say that sounds like short term gain for long term pain. But I'm guessing if there are any issues in the future, the government would just bail them out.
Yup thats exactly what they want
Cheaper younger people who don't think vibe coding is bad
Is this a good idea - probably not
It's a mistake. What you really want is senior engineers vibe coding.
It would be like hiring a junior to lead a team. They're the worst choice for that role.
What you really want is nobody vibe coding, because then your stuff will actually work. But that doesn't get the stock price to go up by announcing your "AI focus".
Vibe coded self driving cars sounds very interesting.
AI-enabled engineers == any engineer without sufficient life experience to tell inherently that delegating huge swathes of implementation to a stochastic parrot is a dangerous and fundamentally unsafe idea. It's spun as "knowing how to use the tools", but somehow the part about "knowing when not to use them and refusing to use them in those contexts" is conveniently written out of the definition of AI-enabled. Personally, I sed 's_AI-enabled engineers_engineers with unreasonable risk tolerance_g'.
When a business field turns you into a pariah because you refuse to bow to the uninitiated's desires, that tends to be my red line for an exit. I will not do reasonably foreseeable net-harmful work for an industry that won't take no for an answer. My definition of "reasonably foreseeable" sets the minimum bar for analysis of consequences at enumerating third- and fourth-degree consequences.
They don't want engineers. They want Operators for whom thinking isn't in the job description.
"We laid off engineers to hire engineers with stronger AI skills" is what 2026 sounds like when a company means "we laid off engineers to pay less." The AI wrapper doesn't change the trade, it just makes the wage cut quotable in the press release.
It's crazy that this still works for the companies. Anyone with any experience here knows what's happening. But the reporters can still just parrot whatever these companies say. Who cares if it's more lies, right?
They couldn't be bothered to train their staff?
Their tech positions were really the only common path to continued raises and promotions starting from the bottom at GM for the past few decades. Most other positions only ever got filled from outside the company, because internal hires expected raises, so to me this just looks like GM putting its tech department under the same self-destructive hiring policies as the rest of the company.
It really is time for them to die as a company.
They hire anew to lower salaries, while also gaining staff with "stronger AI skills."
Then they hire people with even higher salaries than the original devs to clean up the God awful mess.
That's the neat part, they don't. Maybe they'll contract some group at some point, maybe not. Then later they'll ask the .gov for a big Ole bailout.
I haven’t heard of a single software job in the last decade that offers anything resembling “training”. You sink or swim based on your previous knowledge, what you can glean from the codebase and coworkers, and how skilled you are at self-teaching.
I found that sort of thinking is no longer a part of corporate culture(in AU at least). As in, investing in your staff and planning for the future. Resources are meant to be used and abused for benefit and that's it, consequences be damned.
A lot of people are un-trainable. I've worked with people who got angry when the instructor showed up and wouldn't just give them the completed exercises, while most of the others sat on their laptops checking email, ignoring the training altogether. I've worked with people who want training on how to use Claude because they can't figure out "just type what you want in the box, and it does it for you." I've worked with people who were amazed when I googled for answers and said "I should do that too!" (but didn't).
People will demand training, ignore it, and continue to be a drain on the company. There will always be people out there who have one very, very specific skill and that's all they want to do: "I remove people from Active Directory whose names start with A, B, or C" or "I run this Ansible playbook someone else wrote, and that's my entire job."
Because "AI workers" are cheaper than the staff they laid off. I saw a job posting for a vibe coder: 20 bucks an hour.
I don’t believe anyone trying to hire 20 bucks an hour for vibe coders is a serious employer. You’d make more money being a barista in some cities or a substitute teacher.
Sure, but they probably aren't living in those cities. In-person $20/hr in expensive city vs remote $20/hr in cheap cost-of-living area.
It's out there. Maybe it's not serious but it is kind of a weird thing to post on multiple job boards.
Perhaps they tried that and the staff wasn't willing to be trained. Have you ever worked at a company like that, of that size?
Best excuse they've got to fire well-paid, experienced professionals who have worked for them for decades and replace them with low-paid new hires. But GM isn't in a great spot to be weathering any negative downstream effects. And after seeing how they've treated employees for decades, I'll have zero sympathy for them when things go downhill.
You won't be asked, your tax dollars will just be used to bail them out, either directly or through tariffs. Because democracy.
You see, we socialize the losses and privatize the profits. Works great for everyone.
> And after seeing how they treat employees for decades
To say nothing of their cars.
Shameless investor signalling.
Get a low-paid entry-level job at a company. Do a good job over many years, or decades. Raises, promotions, etc. bring your salary up to a decent level. The company can't have that. They use AI to cut you and start over with another low-paid noob.
Man, the only advice I can give people is do not sacrifice time with your loved ones for a company that doesn’t give a shit. Your kid is only going to graduate once. Those family vacations are priceless in the long run. Hell, I take time off to hang out with my dogs now and then. The job can wait.
That's the neat part: many don't need to worry about missing their kid's graduation, because they can't afford either kids or the security to have them (home ownership, etc.).
It really seems like that doesn't it?
I've been down quite a few rabbit holes like that which made me think that a lot of major 'issues' appear to be meticulously engineered to protect a certain set of interests at the expense of others.
It's like; "Damm, houses are expensive, I'm going to live in a caravan" then you realize you can't park it on your own land without council approval... Then you find out that council will never approve due to it "negatively impacting the charm of the area."
Then you become homeless and realize that you can't legally put your tent anywhere and all the camping sites in the wilderness which you used to go to as a child now charge you fees to stay there and have rangers patrolling constantly (paid for by your own tax money you used to pay). Also, you can't get a job without an address and it's a literal catch-22... Then if you lose hope and start doing drugs, bad actors (possibly sponsored by foreign states) put fentanyl in the drug supply to finish you off. Then the media fully covers it up by distracting people with slop.
People are dying and it is covered up in the most targeted, effective way imaginable... They are not only killed, they are blamed for what is a systemic failure on the way out. "Should have gotten a job," or "Shouldn't have done drugs." And the people doing the most blaming and defending the system are passive-income shareholders who have a lot of time on their hands, sit at home all day, and further rig the politics in their favour. It's cooked all the way down.
It's like the dystopian book "Brave New World" is looking pretty good by comparison to where we're heading. At least in BNW, the "savages" had a designated reserve they could escape to.
GM needs to focus on making better, safer, cheaper cars.
"Cheaper" is indeed what you get when you reduce payroll.
Can't wait to see the JDs asking for 10 years of LLM experience.
"not all" permanent headcount reductions ...
Tells you everything you need to know.
I would like one vibecoded airbag please
Opens 50% of the time, every time
Unless they plan on scooping up AI researchers, I legitimately don't understand what "stronger AI skills" is even supposed to mean.
People who can use Claude and its peers effectively. AI skills = prompting and slop troubleshooting.
No AI No Life...
I would say: Great, let them crash and burn. But when (not if) their cars do the same, it will cost a toll of human victims.
Seems like there will be a small elite that wants to go up to a space station and shit on top of all the citizens below :D.
This will never stop being the stupid approach. Destroy morale, lose institutional knowledge, waste months or years getting new folks up to speed, all for skills that could be developed in house and with targeted hiring by a functioning organization.
This is only stupid if you value quality of outcome. If your sole metric is make stock price go up long enough to cash out this shit is brilliant.
Ya, we burned the planet to the ground, but for a few short moments we created a lot of shareholder value.
Well, the US auto execs visited China and realized that their days are numbered. So why not extract as much profit as possible?
Firing people with institutional knowledge? So what? It's going to improve profits short-term.
Why can’t US auto makers automate like China did?
Isn't it obvious? They aren't using AI to automate their business. That's why GM is doing this.