Discussion about this post

GH:

I've been thinking about AI being a mirror to the user's input, with the results being a magnification of it back.

In an AI chat session, each new prompt and response is processed fresh, with nothing remembered on the AI's side, which is different from the user's side, where you remember what you typed last.

AI is always working with its big static training data, plus any notes you give it (files, chat history), plus the latest note, the prompt, which it uses as its current directive.

We can't do anything about the existing big training data, good or bad; it's there. So all differences in results come from the payload notes and the prompt.

And if you can put that together well, you get completely different results than if you can't.

Most people just can't put a good enough packet together, and don't understand the process, so they will never be able to magnify their best ideas; they will mostly get versions of the training data (slop) rather than something they can take and magnify even further.

For a long time I was only using AI for some of the simpler or very technical portions of coding, because there are so many things it messes up and can't do. But after really refining the process, I've built some things I didn't think I could build, and would not have tried to build without it.

Dave:

I have to laugh. Larry Correia echoes the same people who said, "McDonald's and fast food are never going to take off; people prefer authentic cooked food, not nugget-shaped chicken slop!"

Of course, McDonald's slogan now is "Billions and billions served".

The downstream implication is that most people care solely about the story beats in the same way that McDonald's gives people the flavor beats of salt, sweet, crisp. Nuance is for the comparatively tiny discerning eaters, uh, readers.

