Even with Claude-level context, working the way you would with smaller local models helps. Restate and restart often. That, mixed with fresh chats and axiom restatements, gets further than a single mega chat.
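A minimal sketch of that restart-with-restated-axioms workflow. Everything here is hypothetical (no real API is assumed): the point is just that each restart rebuilds a short prompt from standing axioms plus a few distilled Q&A pairs, instead of appending to one ever-growing transcript.

```python
# Hypothetical sketch of the "restate, restart often" workflow:
# keep a short list of axioms plus distilled Q&A pairs, and rebuild a
# fresh, compact prompt each time you restart the conversation.

def build_fresh_prompt(axioms, qa_pairs, question, max_pairs=5):
    """Assemble a compact prompt from restated axioms and the most
    recent distilled Q&A pairs, discarding the rest of the history."""
    lines = ["Axioms (restated):"]
    lines += [f"- {a}" for a in axioms]
    lines.append("Distilled context:")
    for q, a in qa_pairs[-max_pairs:]:  # keep only the last few pairs
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    return "\n".join(lines)

# Example restart: the old transcript is gone; only the distilled
# pairs and the axioms survive into the new chat.
axioms = ["Target is Python 3.12", "No third-party dependencies"]
qa_pairs = [("Which parser style?", "Hand-rolled recursive descent")]
prompt = build_fresh_prompt(axioms, qa_pairs, "How should errors be reported?")
```

The prompt stays roughly constant in size no matter how long the overall session runs, which is the whole trick: the buffer never overruns because nothing unbounded is ever carried forward.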
That's why Claude's Projects feature is so powerful.
I'll have to try it. I only started paying last week.
Context-based question-and-answer pairs are how this person maintains mental bandwidth. It looks like magic to others, who happily overrun their buffers and throw up their hands.
Life is a lot more like those irritating test questions than we care to admit. The ones with unnecessary details included that point to wrong answers.
Just apply the SSH, or for extra "madness", the SSH-WL synthesis to the problem. A hint: stop treating the LLM as the head of the ensemble. It's a gamma, a Wormtongue, the verbal-knowledge expert who just read something on Reddit. It is the interface: in between, in and out.