Discussion about this post

Hans G. Schantz:

I use AI assistance in my research frequently. I set up a default rule that the AI has to provide sources for any factual assertion. And if it's a conclusion I intend to use or rely upon, I go back to the source material. At the present state of development, AI is an idiot savant that doesn't know what it doesn't know, and will create very plausible-sounding fantasies instead. Treat it accordingly.

Scruffy:

The levels of ignorance about how AI works are matched only by the snake-oil 'engineers' and salesmen who peddle AI as anything but a literate parrot.

I've played extensively with AI models, read the literature, built and trained new models, coded using the concepts, and compared LLMs, SD and other visual models, and more. I still use them for 'shortcuts', but I've learned to trust them no further than a wily slave laborer who hates his master, looks for any opportunity to put one over on him, and will lie with a straight face while swearing eternal fealty.

LLMs do not store facts. EVER. They store map points in a vast, high-dimensional landscape of uncharted axes and undocumented spectra. Not only is everything relative within that space, but the model always takes the path of least resistance through that jungle of cloudy 'concepts' it holds. That's the non-technical version of what 3blue1brown documents so well (recommended to all!): it's a big table of probabilities. Its output is no more true than rolling a D100, consulting a table in the appendix of the D&D rulebook, and declaring that you've rolled a critical hit or failure. It's fiction: entertaining, but at the end of the day, pure fantasy gaming.
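
To make that "roll against a table" point concrete, here is a toy sketch in Python. The vocabulary and probabilities are made up for illustration; a real LLM produces a fresh probability distribution over tens of thousands of tokens at every step, but each generation step amounts to the same weighted dice roll shown here.

```python
import random

# Hypothetical next-token probabilities, e.g. after the prompt
# "The capital of France is". These numbers are invented for the example.
next_token_probs = {
    "Paris": 0.62,
    "Lyon": 0.09,
    "a": 0.07,
    "the": 0.05,
    "Berlin": 0.03,
    "purple": 0.01,
    # remaining probability mass would be spread over the rest of the vocabulary
}

def sample_next_token(probs):
    """Roll a 'D100' (a uniform random number) and walk the cumulative table."""
    roll = random.random() * sum(probs.values())
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if roll <= cumulative:
            return token
    return token  # fall back to the last entry on floating-point rounding

print(sample_next_token(next_token_probs))
```

The sampled token is whatever the dice land on, which is why the same prompt can yield "Paris" one run and "Berlin" the next: plausibility is weighted, truth is not checked.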

