One of the main problems with arguing with LLMs is that your complaint becomes part of the prompt. Practically all LLMs will take "don't do X" and do X, because "don't do X" contains "do X," and LLMs have no fundamental understanding of negation.
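The usual workaround is to phrase constraints positively so the forbidden phrase never enters the context window at all. A minimal sketch of that idea (the lookup table and helper name here are made up for illustration, not any real library):

    # Sketch: rewrite negative instructions as positive ones so the
    # string "use recursion" never appears in the prompt at all.
    # NEGATIVE_TO_POSITIVE and rephrase() are hypothetical examples.
    NEGATIVE_TO_POSITIVE = {
        "don't use recursion": "use iteration only",
        "don't add comments": "write the code without comments",
    }

    def rephrase(instruction: str) -> str:
        # Fall back to the original text when no positive form is known.
        return NEGATIVE_TO_POSITIVE.get(instruction.lower(), instruction)

    print(rephrase("Don't use recursion"))  # -> use iteration only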


That depends entirely on how well trained a given LLM is.

Gemini is notoriously bad at multi-turn instruction following, so this holds strongly for it. Less so for Claude Opus 4 or GPT-5.


Not really true these days. Claude Code follows my instructions correctly when I tell it not to use certain patterns.
