
True for now because models are mainly used to implement features / build small MVPs, which they’re quite good at.

The next step would be to have a model running continuously on a project with inputs from monitoring services, test coverage, product analytics, etc. Such an agent, powered by a sufficient model, could be considered an effective software engineer.
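As a rough illustration of what "inputs from monitoring services, test coverage, product analytics" could look like, here is a minimal hypothetical sketch of the triage step such an agent might run; all names and thresholds are illustrative assumptions, not a real API:

```python
# Hypothetical sketch: turn project signals (monitoring, coverage,
# analytics) into prioritized work items for a coding agent.
# Every name and threshold here is an illustrative assumption.
from dataclasses import dataclass, field


@dataclass
class ProjectSignals:
    error_rate: float      # e.g. from a monitoring service
    test_coverage: float   # fraction of lines covered by tests
    feature_usage: dict = field(default_factory=dict)  # product analytics


def triage(signals: ProjectSignals) -> list[str]:
    """Convert raw signals into a list of tasks for a coding agent."""
    tasks = []
    if signals.error_rate > 0.01:
        tasks.append("investigate elevated error rate")
    if signals.test_coverage < 0.8:
        tasks.append("add tests for uncovered code paths")
    for feature, usage in signals.feature_usage.items():
        if usage == 0:
            tasks.append(f"flag unused feature for removal: {feature}")
    return tasks


signals = ProjectSignals(error_rate=0.02, test_coverage=0.65,
                         feature_usage={"export_csv": 0, "dashboard": 1200})
print(triage(signals))
```

The hard part, of course, is not this triage loop but having a model that can then execute the tasks reliably.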

We’re not there today, but it doesn’t seem that far off.





> We’re not there today, but it doesn’t seem that far off.

What time frame counts as "not that far off" to you?

If you tried to bet me that the market for talented software engineers would collapse within the next 10 years, I'd take it no question. 25 years, I think my odds are still better than yours. 50 years, I might not take the bet.


Great question. It depends on the product. For niche SaaS products, I’d say in the next few years. For something like Amazon.com, on the order of decades.

If the niche SaaS product never required a talented engineer in the first place, I'd be inclined to agree with you. But even a niche SaaS product requires a decent amount of engineering skill to maintain well.

Agreed.

I've played around with agent-only code bases (where I don't code at all), and had an agent hooked up to server logs: it would create an issue when it encountered errors, and then an agent would fix the tickets, push to prod, check deployment statuses, etc. It worked well enough to see that this could easily become the future. (I also had Claude/Codex code that whole setup.)
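The log-to-ticket loop described above could be sketched roughly like this; this is a toy reconstruction under my own assumptions (the error format, the function names, and the dedup logic are all made up for illustration):

```python
# Hypothetical sketch of a log-to-ticket loop: watch server logs,
# collect distinct error signatures, and file one issue per new
# signature for a coding agent to pick up. All names are placeholders.
import re


def extract_error_signatures(log_lines: list[str]) -> set[str]:
    """Collect distinct error types (e.g. 'TimeoutError') from log lines."""
    seen = set()
    for line in log_lines:
        match = re.search(r"ERROR\s+(\w+):", line)
        if match:
            seen.add(match.group(1))
    return seen


def file_issues(signatures: set[str], already_filed: set[str]) -> list[str]:
    """Return issue titles only for signatures without an open ticket."""
    return [f"Fix recurring error: {sig}"
            for sig in sorted(signatures - already_filed)]


logs = [
    "2024-05-01 INFO request served",
    "2024-05-01 ERROR TimeoutError: upstream call exceeded 30s",
    "2024-05-01 ERROR TimeoutError: upstream call exceeded 30s",
    "2024-05-02 ERROR KeyError: missing 'user_id' in payload",
]
print(file_issues(extract_error_signatures(logs),
                  already_filed={"KeyError"}))
# → ["Fix recurring error: TimeoutError"]
```

In practice the "fix the ticket and push to prod" half is where the agent does the real work; this loop is just the plumbing that feeds it.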

Just for semantic nitpicking: I've zero-shotted heaps of small "software" projects that I use and then throw away. That doesn't count as a SaaS product, but I would still call it software.


The article "AI can code, but it can't build software"

An inevitable comment: "But I've seen AI code! So it must be able to build software"


> The next step would be to have a model running continuously on a project with inputs from monitoring services, test coverage, product analytics, etc. Such an agent, powered by a sufficient model, could be considered an effective software engineer.

An automated system that determines whether a system is correct (whatever that means) is harder to build than the coding agents themselves.


I agree that tooling is maturing towards that end.

I wonder whether the same non-technical person who built the MVP with GenAI and requires (human) technical assistance today will still need it tomorrow. Will the tooling mature enough, and lower the barrier enough, for anyone to have a complete understanding of software engineering (monitoring services, test coverage, product analytics)?


> I agree that tooling is maturing towards that end.

That's what every no-programming-needed hyped tool has said. Yet here we are, still hiring programmers.


I’ve heard “we’re not there today, but it doesn’t seem that far off” since the beginning of the AI infatuation. What if it is far off?

It's telling to me that nobody who actually works in AI research thinks that it's "not that far off".


