I really wish there were a product that tags the LLM's thinking steps to the code it's generating or modifying. Comments aren't sufficient because they're hard to track across edits. Think something along the lines of comments in Google Docs: "I modified this block to prevent a race condition."
That would make it far easier to understand what the model is doing.
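A minimal sketch of what that anchoring could look like, assuming plain substring matching to re-locate a note after an edit (the `Annotation` class and `relocate` function are hypothetical names, not any existing product's API):

```python
from dataclasses import dataclass

@dataclass(eq=False)
class Annotation:
    snippet: str          # exact code text the note is anchored to
    note: str             # the model's explanation for this edit
    stale: bool = False   # set when the anchored code disappears

def relocate(annotations, new_source):
    """Re-anchor each note after an edit by searching for its snippet.

    Returns a list of (annotation, line_number) pairs for notes whose
    code still exists; notes whose snippet vanished are flagged stale
    instead of being silently dropped.
    """
    lines = new_source.splitlines()
    found = []
    for ann in annotations:
        for i, line in enumerate(lines, start=1):
            if ann.snippet in line:
                found.append((ann, i))
                break
        else:
            ann.stale = True  # the code this note explained is gone
    return found

# After an edit that moves the guarded line, the note follows it:
src = "import threading\nlock = threading.Lock()\nwith lock:\n    counter += 1\n"
notes = [Annotation("counter += 1", "guarded by lock to prevent a race condition")]
print(relocate(notes, src))
```

A real tool would need fuzzier matching (diff-based anchor tracking, like how editors move breakpoints), but even this degree of bookkeeping, with stale notes surfaced rather than dropped, would cover the Google Docs-style use case.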
Saving the compute on NotebookLM audio: https://cdn.discordapp.com/attachments/1028194515577741322/1...