The fun of software development has always been the building and the sense of accomplishment that comes with it. Hard agree that the fun part is being taken away from us. We are being forced into becoming administrators and middle managers. And worst of all, managers of robots, not even of humans.
Of course there's a lot of value to be created by talking to humans and then formulating the task of actually building the thing for our robot minions. But dammit, it does feel like someone took the fun away at the same time!
Someone on the internet said: I wanted robots to do my dishes and laundry so that I could focus on art and music - not the other way around! Coding feels the same way. I wanted robots to do the boring managerial tasks so that I could focus on the fun puzzles - not the other way around!
This resonates a lot with how my role has evolved with these changes in tooling. What I’ve found helps is drawing a distinct line between the work where I really care about design and code quality, and behaviour-first work where things can be “good enough”.
For example, I love setting AI loose on building out support tools. I recently threw together something to improve dead-letter queue management and replayability during an active issue, where the underlying code quality mattered far less than its correctness. On the other hand, when I’m building a new feature in an internal SDK, it’s a problem I want to think deeply about. Every line of code, and how it shapes the developer experience, matters much more to me; it’s something I want to do myself rather than offload to an agent (which, even with the best AGENTS.md, will probably diverge in structure, style, etc.).
I’ve found I’m able to (for now, at least) carve out the tasks that give me that joy of programming by qualifying them on how much they need that personal touch.