I disagree. Unless you’re focused purely on right now, in which case… maybe? Depends on scale.
I have a few scattered thoughts here, but I think you’re caught up in how things are done now.
A human expert in a field is the customer.
Do you think, say, GPT-5 Pro can’t talk to them about a problem and what’s reasonable to try to build in software?
It can build a thing, with tests, run it, and return the result to the user.
It can take feedback (talking to people is the key thing LLMs have solved).
They can iterate (see: Codex), they can deploy, and they can absolutely write copy.
What in this list do you really think they can’t do?
For simplicity, reduce it to a relatively basic CRUD app. We know that they can build these over several steps. We know they can manage the UI pretty well, do incremental work, etc. What’s missing?
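To be concrete about the scale of thing I mean by a “relatively basic CRUD app”, here’s a rough sketch of one endpoint set (a hypothetical in-memory notes service using Flask; the names and routes are made up for illustration, not anything a model produced):

    # Hypothetical minimal CRUD service: an in-memory "notes" API using Flask.
    # Illustrative only; no persistence, auth, or validation.
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    notes = {}      # note_id -> {"title": ..., "body": ...}
    next_id = 1

    @app.post("/notes")
    def create_note():
        global next_id
        data = request.get_json(force=True)
        new_id = next_id
        notes[new_id] = {"title": data.get("title", ""), "body": data.get("body", "")}
        next_id += 1
        return jsonify(id=new_id), 201

    @app.get("/notes/<int:note_id>")
    def read_note(note_id):
        if note_id not in notes:
            abort(404)
        return jsonify(notes[note_id])

    @app.put("/notes/<int:note_id>")
    def update_note(note_id):
        if note_id not in notes:
            abort(404)
        notes[note_id].update(request.get_json(force=True))
        return jsonify(notes[note_id])

    @app.delete("/notes/<int:note_id>")
    def delete_note(note_id):
        if notes.pop(note_id, None) is None:
            abort(404)
        return "", 204

There’s nothing in that logic beyond what models already handle over several steps, which is the point.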
I think something huge here is that some of the software engineering and management roles become exceptionally fast and cheap. That means you don’t need nearly as many users for it to be worth writing code to solve a problem. Entirely personal software becomes economically viable. I don’t need to communicate the value of the problem my app has solved, because it’s solved it for me.
Frankly, most of the “AI can’t ever do my thing” comments come across the same as the “nobody can estimate my tasks, they’re so unique” refrain we see every time planning comes up. Most business-relevant SE isn’t logically complex, interestingly unique, or frankly hard. It’s just a different language to speak.
Disclaimer: a client of mine is working on making software simpler to build and I’m looking at the AI side, but I have these views regardless.
I expect that customers who have those needs would much rather hire somebody to act as the intermediary, with the LLM writing the code, than take on that role themselves.
You'll get the occasional high-agency, non-technical customer who decides to learn how to get these things done with LLMs, but they'll be a pretty rare breed.
This may be a timeframe issue, but I sincerely doubt anyone wants to hire someone to be an intermediary. They just want the thing done.
I know that right now few want to sit in front of Claude Code, but it's just not that big a leap to move this up a layer. Workflows do this even without the models getting better.