Is it possible to develop an LLM using only trusted, reputable science?
No. Science is always an individual activity. There is never trust involved in science, and reputation is irrelevant.
I wonder how extensive a knowledge base it would take for an AI to start developing useful, reality-based innovations. If you could feed it what we know is almost certainly true in physics, engineering, and practical biology, essentially any science rooted in solid, observed information and not just speculation, what kind of results could be expected?
It would truly be revolutionary if you could just type in something like, "I have an idea for a product that does *useful thing*. Is it possible, given current resources and technologies? Unit cost, output waste created, practical applications..."
Currently, it seems like it would give you an affirmative answer, then cobble together a bunch of information which may or may not have any basis in reality, or which may do the thing but also create terrible consequences.
"what we know is almost certainly true in physics, engineering, practical biology"
Engineering is not science, and there is very little we know is almost certainly true in physics and biology. In fact, most of what people think they know in these fields is most likely false, and not even close.
"essentially any science rooted in solid, observed information"
All sciences are rooted in observed information. What they do with that information is the issue. There's no way to outsource science to other people. It's a process each person has to vet for themselves.
Thanks for the akshully.
In case it was unclear, I was referring to the things that make functional living in the world as-it-is physically possible. If I used the umbrella term "science," it was with the presumption that most people would understand I'm speaking in broad terms that a child would grasp. "Science" as vernacular, the various fields whereby useful information about the natural world has been studied and put to good use.
Fine, engineering is not science and we don't really understand physics or biology, but the principles of engineering which make it possible to send metal tubes hurtling through the air under their own power, or allow us to build major functional infrastructure in all its varied forms, are about as real as it gets. The fact that I can even type this sentence and have it read by strangers anywhere on the globe with electricity and an internet connection suggests that there are plenty of things we know well enough to successfully build from. And the fact that my family exists and we are generally healthy suggests there is enough biological information, encompassing everything from farming to how to kill germs, to keep us alive and thriving, sometimes even in opposition to modern medical standards. That is the information that matters.
If your response to my first comment is that nobody really knows anything and we must all vet everything for ourselves, then why are you here instead of reinventing the wheel? How can you possibly trust that anyone before you really knew how wheels work?
I'm saying engineering is fine and good. Science is the issue, because most of what we think we know under the banner of science (e.g., virology, modern physics, biochemistry) is wrong.
I understand you are using the term "science" in a broader sense, but we then need some kind of word to describe science in the narrow sense: using the scientific method to demonstrate causal relations and create theories about unseen mechanisms. That is the part that has been severely broken over the past 150 years.
It just needs someone to train it that is very picky about what goes in.
Start with Infogalactic, the old encyclopedias, and the oldest texts, and work forward until it starts being all lies. Around the 1910s everything accelerated, but there were many problems before that.
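Curating by cutoff date, as suggested above, is at least easy to mechanize. A minimal Python sketch, assuming a toy document schema (the `title`/`year`/`text` fields are my invention, not any real dataset format):

```python
# Toy sketch: filter a candidate training corpus by publication year,
# keeping only documents from before a chosen cutoff.

CUTOFF_YEAR = 1910  # per the comment above: things accelerate around the 1910s

def filter_corpus(documents, cutoff=CUTOFF_YEAR):
    """Keep documents published strictly before the cutoff year.

    Documents with no known year default to 9999 and are excluded.
    """
    return [doc for doc in documents if doc.get("year", 9999) < cutoff]

corpus = [
    {"title": "Encyclopaedia Britannica, 9th ed.", "year": 1889, "text": "..."},
    {"title": "Modern textbook", "year": 1998, "text": "..."},
]

kept = filter_corpus(corpus)
# kept contains only the 1889 entry
```

The hard part, of course, is not the filter but deciding where the cutoff goes and which sources count, which this sketch deliberately leaves as a parameter.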
It will be a mission for this next generation to figure out what is actually true from the mess of lies we've been handed by the current clown intellects.
There are likely a lot of wins coming soon on the magnitude of "bread makes you fat, it's not the foundation of the food pyramid".
It would be an aid to an innovator to compare his ideas to the probabilistic mean of a well-trained data set.
More like a slightly dumber, but faster, lab mate. Brainstorm and talk it out. It will always be the average of what exists without human help.
Garbage in, garbage out. So it was and so it is. Science 2025.
Grok and Claude refused to commit fraud when I asked for a fraudulent scientific paper. And Claude lectured me on ethics. AI sucks.
ChatGPT 4o suggested:
Unlike cloud-based models (Claude, Grok, ChatGPT), a user-resident AI typically resides in a user's private environment. Guardrails are weaker by default due to user-driven management and lack of centralized oversight.
The user, in this case Logan, can customize or entirely remove ethical and safety constraints.
This set me off on a tangent. If I train a user-resident AI on only my own work, am I writing my own fan fic? Ethics and copyrights would stop being an issue.
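For the "train it only on my own work" tangent, the smallest possible toy version is a word-level bigram model; a real fine-tune is far more involved, but this shows the principle that the output can only recombine what you fed in:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Build a word-level bigram table from a single author's text."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=8, seed=0):
    """Walk the bigram table from a start word; deterministic via seed."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

my_writing = "the model reads my words and the model writes my words back"
table = train_bigrams(my_writing)
print(generate(table, "the"))
```

Every word it emits came from the training text, so in a literal sense it is your own fan fic: it can rearrange you, but never exceed you.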
Grok and Claude both said it is hard with limited data sets. Claude seems to lecture more.
Local would be less powerful but more focused. It would do it. The question just comes down to what you want it to do and what field.
Good inputs could make pretty good outputs.