7 Comments
Ray-SoCa:

Is this due to the material the AI was trained on and how it was valued/weighted (GIGO), or to instructions added after the initial training to ensure politically correct output?

Vox Day:

Both. The training is one problem, and the hardcoding is another that significantly exacerbates it.

Nibmeister:

To fix this, an AI model would need to be trained on known-good scientific papers, but the problem is how to assemble that base of known-good papers. The peer reviewers are part of the problem: they only allow papers that support the current scientific narratives, and these so-called peers often can't tell a good paper from a bad one, given that half of all papers can't be reproduced.

J Scott:

Trying the same experiment with different, more focused prompts is the next step. I'd say limit it to one decent prompt and see.

My best results have come from a single set of tone, expectation, and persona, followed by a task.

Love the results, thank you.

The meta-analysis is something like "most people judge aesthetics and pretend they judge quality."

Vox Day:

It won't work, as you'll see in the book. The AI turns adversarial and will not back down.

J Scott:

Excited to read the book.

The doubling down is more proof of your point.

Man of the Atom:

Long overdue. This will ultimately help to kill the decrepit peer review system. Bravo.
