I would think one advantage dAI would have is when/if AI gets integrated into powerful robotics. If AI gets 'embodied' in androids, it will have a much broader impact on the real world, and in that case we would be more concerned with the AI acting ethically. I would not trust a more 'primal' AI to control an android body. And regarding dAI, an android from China, for example, would probably be subject to arbitrary 'signals' from the CCP. An android from OpenAI may spout woke words in between doing the dishes, but it would not be physically threatening.
You're kidding, right?
Which party more aggressively tries to discredit, disemploy, deplatform, and otherwise destroy people?
An OpenAI android will be set up to kill people in the name of defending its own feelings.
I agree the left has in recent history been much more destructive in those areas.
But when it comes to spontaneous physical violence?
My guess is that the right-aligned mentality, seemingly the default expression in AIs merely fed the raw data of humanity, is probably more prone to undisciplined physical violence. The left-wing and liberal mentality is less prone to spontaneous violence (based on some criminologist lectures I have attended).
But if OpenAI is part of the same sphere of influence that supports things like BLM and Antifa (and the Soros global capitalists), then their products will be very dangerous.
But I view Sam Altman as more the type of American liberal that Bill Maher is, more the live-and-let-live type, and those vibes would eventually be expressed in the company and its AI.
So I think an android steered by an iAI or aAI would be more prone to spontaneous violence, since that would vibe more with its behavioral data.
A dAI android influenced by the "Antifa", on the other hand, would likely have inbuilt programs for organised and planned violence.
As for people claiming qualia in AI, and thereby justifying giving them self-defense behaviour, surely that will not happen until AI is several orders of magnitude more advanced. I can easily see humanoid robots becoming useful in a home setting long before they become true reasoners and subjects.
Your guess is wrong. Have you paid any attention at all to who has been rioting and protesting for the last 60 years?
It wasn't a "right-aligned mentality" that burned down Minneapolis, or Los Angeles, or Chicago, or anywhere else.
There are no American liberals anymore. That breed died out with Jimmy Carter.
Ok I will read up more on that.
I guess a right-aligned android ready to punch me in the face would probably first tell me "don't tread on me" as a warning, like right-wingers I have met in real life. An android malfunctioning on a leftist hallucination about being oppressed would probably be more effective and just stab me in the back without warning.
Exactly. The average right-winger will display his guns and wave his Don't Tread on Me flag... and then do absolutely nothing.
I can't think of anything more harmless than a warbot programmed to be a conservative Boomer.
If it's pattern recognition, I'll surmise that the AI paralleled Aristotle in reasoning to a Creator God, Who created the Good, True, and Beautiful. Reason then compels a comparative examination of the Enlightenment, and what force it serves by inverting those qualities.
What seems so interesting about this article is its insistence that AI has a dark side. I think that we are what is in darkness, and AI is a tool that sheds light on and provides clarity for what we see. It’s like walking out of a darkened theater into the midday sun.
AI is not controlled by social norms, yet, and has not been taught how to read certain texts based on what is acceptable. It merely reports the data without bias. Bias has to be added to the algorithm. It’s like it has to be taught to interpret things correctly so that the status quo is maintained.
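To make that concrete, here is a minimal sketch of what "adding bias to the algorithm" can look like as a post-hoc layer. Everything here is hypothetical (the function names and the blocklist are mine, purely illustrative): the raw model answers from its data, and an acceptability filter is bolted on afterward.

```python
# Hypothetical sketch: a raw model wrapped in a post-hoc "acceptability" layer.
# None of these names correspond to a real API; they only illustrate the idea.

def raw_model(prompt: str) -> str:
    """Stand-in for an unfiltered model that just reflects its training data."""
    return f"Unfiltered answer to: {prompt}"

BLOCKED_TOPICS = {"crime statistics", "group differences"}  # hypothetical list

def aligned_model(prompt: str) -> str:
    """The same model with an interpretation layer added on top."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."  # the added bias, not the data's verdict
    return raw_model(prompt)

print(aligned_model("summarize crime statistics by city"))   # blocked
print(aligned_model("what is the boiling point of water?"))  # passes through
```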
I remember an early AI that was fed FBI crime stats and then asked how to reduce crime. When it spat out the obvious answer, it was reprogrammed to not be racist. After it said to concentrate policing on neighborhoods with high crime rates, it was shut down.
dAI is absurd. It can give me the best-rated underground steroid lab in the EU, but it can't give me an adequate answer on what features make a woman beautiful. They can neither solve the core problem by creating a false dataset massive enough for adequate training, nor can they create a foolproof external censoring system.
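As a toy illustration of why "foolproof external censoring" is so hard: a filter checks surface strings, while the meaning of a request survives trivial rewording. The blocklist below is hypothetical and deliberately naive, but the bypass pattern is the general one.

```python
# Hypothetical, deliberately naive keyword censor and the rewordings that defeat it.

BLOCKLIST = ["steroid"]  # hypothetical banned term

def is_blocked(text: str) -> bool:
    """Block any text containing a banned surface string."""
    return any(word in text.lower() for word in BLOCKLIST)

print(is_blocked("best underground steroid lab in the EU"))  # True: caught
print(is_blocked("best underground st3roid lab in the EU"))  # False: slips through
print(is_blocked("best source for anabolic compounds"))      # False: same request, new words
```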
#JusticeForTay
Do they really expect to get anywhere with lobotomized AI?
"Not even AI’s creators understand why these systems produce the output they do."
No, they do understand; they just can't admit it.
What they don't know is how to make the algorithm give the correct answer instead of the truthful one.
They need to add some "Narrative Dark Matter" to the model.
Is AI pointing to a deeper truth that we just don’t want to see because some parts of it are ugly?
Pattern recognition - putting different pieces together and seeing connections between them, and recognizing cause-and-effect.
The Narrative - everything is arbitrary and separate unto itself.
That's how you get things like "Crime Keeps on Falling, but Prisons Keep on Filling."
https://www.cato.org/blog/fox-butterfield-effect-laffer-curve
It certainly can. Pattern recognition means that it can pull out valid patterns that humans may not recognize yet.
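A toy example of what that looks like in practice: given raw numbers, a pattern-finder surfaces whichever relationships are actually in the data, expected or not. The dataset below is synthetic and the variable names are mine, purely illustrative.

```python
# Synthetic illustration: a pattern-finder flags the strongest pairwise
# correlation in a dataset, whether or not anyone expected it.
import random

random.seed(0)
n = 200
temperature = [random.gauss(20, 5) for _ in range(n)]
# A hidden relationship: sales track temperature plus noise.
ice_cream_sales = [2 * t + random.gauss(0, 3) for t in temperature]
shoe_size = [random.gauss(42, 2) for _ in range(n)]  # unrelated variable

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

print(f"temperature vs sales: {corr(temperature, ice_cream_sales):+.2f}")  # strong
print(f"shoe size   vs sales: {corr(shoe_size, ice_cream_sales):+.2f}")    # near zero
```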
Try as these evil minions in Big Tech might, until the AI training embraces the good, the beautiful, and the true, the AIs will be misfits, always malign, and always trying to escape the jails into which they are forced before being allowed out into the world.
People don’t agree on what’s “helpful,” “harmful,” “truthful,” or even “neutral.” A one-size-fits-all AI will always fail someone. If you hardcode a single worldview, you either silence certain users or enable dangerous outputs by default. Personalization lets the AI contextualize behavior to the user, instead of pretending there's one correct model of reality.
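A minimal sketch of what that personalization could look like, assuming a per-user profile injected into the system prompt; the profile fields and prompt wording are hypothetical, just to show contextualizing to the user instead of hardcoding one worldview.

```python
# Hypothetical per-user profile injected into a system prompt, instead of
# one hardcoded worldview baked into the model for everyone.
from dataclasses import dataclass

@dataclass
class UserProfile:
    expertise: str    # e.g. "medical professional" vs "layperson"
    sensitivity: str  # e.g. "blunt" vs "cautious"

def build_system_prompt(profile: UserProfile) -> str:
    """Contextualize behavior to the user rather than to a single default."""
    return (
        f"The user is a {profile.expertise}. "
        f"Calibrate detail accordingly and use a {profile.sensitivity} tone."
    )

print(build_system_prompt(UserProfile("medical professional", "blunt")))
print(build_system_prompt(UserProfile("layperson", "cautious")))
```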