The context window is the silent killer. I run multi-step LLM pipelines where each step produces artifacts the next step needs, and the single biggest improvement was externalizing state to files rather than keeping everything in the conversation. By step 7 or 8, the model has either forgotten your constraints from step 1 or started contradicting itself. Write intermediate outputs to disk, start fresh contexts for each phase, and pass in only what's needed. Treat LLM sessions like stateless functions, not like pair programming sessions.
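
A minimal sketch of what that looks like in practice. `call_llm`, the `artifacts/` directory, and the step names here are all placeholders for whatever client and layout you actually use; the point is that each step's entire input comes from named files on disk, never from accumulated chat history:

```python
from pathlib import Path

ARTIFACTS = Path("artifacts")
ARTIFACTS.mkdir(exist_ok=True)

def call_llm(prompt: str) -> str:
    # Stand-in for your actual client (OpenAI, Anthropic, a local model).
    # The important property: every call is a fresh context with zero
    # conversation history; the prompt is the entire state the model sees.
    return f"[model output for a {len(prompt)}-char prompt]"

def run_step(name: str, prompt_template: str, deps: list[str]) -> Path:
    """Run one pipeline step as a stateless function: read only the
    artifacts this step depends on, call the model once in a fresh
    context, and write the result to disk for later steps."""
    context = "\n\n".join(
        (ARTIFACTS / f"{dep}.txt").read_text() for dep in deps
    )
    out_path = ARTIFACTS / f"{name}.txt"
    out_path.write_text(call_llm(prompt_template.format(context=context)))
    return out_path

# Constraints from step 1 live in a file, so a later step sees them
# verbatim instead of relying on the model recalling them across a
# long conversation.
run_step("constraints", "List the hard constraints for this task.\n{context}", [])
run_step("plan", "Given these constraints, draft a plan:\n{context}", ["constraints"])
run_step("review", "Check this plan against the constraints:\n{context}", ["constraints", "plan"])
```

A side benefit of making each step's inputs explicit file names is that you can rerun or debug any single step in isolation, which a long-lived conversation never lets you do.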