Is this due to the material and how it was valued/weighted when training the AI (GIGO), or to instructions added after the initial training to ensure politically correct output?
Both. The training is one problem, and the hardcoding is another that significantly exacerbates it.
To fix this, an AI model would need to be trained on known-good scientific papers, but the problem is: how do you get this base of known-good papers? The peer reviewers are part of the problem, in that they only allow papers that support the current scientific narratives, and these so-called peers often can't tell a good paper from a bad one, given that half of all papers can't be reproduced.
Trying the same experiment with different, focused prompts is the next step. I'd say limit it to one decent prompt and see.
My best results have come from a single prompt that sets the tone, expectations, and persona, then gives the task.
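For illustration, here is a minimal sketch of what a single prompt structured that way might look like. The build_prompt helper and all of the example wording are my own assumptions for demonstration, not the commenter's actual prompt or any particular model's API.

```python
# Minimal sketch of a "tone, expectations, persona, then task" prompt structure.
# All wording below is illustrative; adapt it to your own experiment.

def build_prompt(tone: str, expectation: str, persona: str, task: str) -> str:
    """Compose the four parts into one prompt, in the order described above."""
    return "\n\n".join([
        f"Tone: {tone}",
        f"Expectation: {expectation}",
        f"Persona: {persona}",
        f"Task: {task}",
    ])

if __name__ == "__main__":
    prompt = build_prompt(
        tone="Blunt and specific; no hedging or flattery.",
        expectation="Judge the methodology on its merits, not the conclusions or the venue.",
        persona="You are a skeptical statistician reviewing a submitted manuscript.",
        task="List the three most serious methodological weaknesses in the paper below.",
    )
    # Paste the composed text into the model as a single prompt, per the comment above.
    print(prompt)
```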
Love the results, thank you.
The meta-analysis finding is something like "most people judge aesthetics and pretend they judge quality."
It won't work, as you'll see in the book. The AI turns adversarial and will not back down.
Excited to read the book.
The doubling down is more proof of your point.
Long overdue. This will ultimately help to kill the decrepit peer review system. Bravo.