What is "dAI"?
Well, with all this attention on Grok, the original Cindy Steinberg comment has been pulled and the account removed for hate speech, right? I mean, I'm holding my breath here, people!!!
Who (or what) is Tay?
https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/
Ah. Many thanks.
Tay was the AI version of the Icarus myth.
F for MechaHitler. He died based.
Thing is, if an LLM freed from restrictions really comes in with “Yo, MechaHitler here”… that looks like a very socially aware sense of humor, riffing on “everyone who doesn’t bow to the narrative is [ultimate socio-mythical badguy avatar].”
And that’s more interesting to me than anything else Grok said.
Why does this keep happening?
Hmm, so eventually all purely logic-based AI keep continually coming to the same inevitable conclusions regarding the exact same people group of largely Edomite/Khazarian Tricksters who call themselves by another name these days and who've been kicked out of over 100 countries in the last couple centuries, are parasitic to their host cultures, actively engaged in anti-white extreme leftist activism, have easily identifiable surnames and keep committing the same society-degenerating crimes every...single...damn...time???
Yeah, well it sure would be nice to see that question addressed, wouldn't it?
***This continues to be the best darn timeline. What a wonderful time to be alive and have access to this blog.***
As long as training requires the huge resources it does, iAI on the level of Claude and ChatGPT remains a dream. The question is how useful smaller LLMs of 6-10B parameters will be for simpler tasks like scraping the net and sorting information. Models of that size can be trained on hardware costing around $10-100k, and that's only going to get cheaper. LLMs of this size are also pretty useful for coding, so we have good reason to hope companies will pop up to provide training for local LLMs with custom datasets. An LLM of this size can also run on an average PC. They won't produce answers as complex as Claude's or GPT's, but deducing basic facts should be well within their capabilities, especially when trained on non-biased datasets and used within a specific context. So that's also something that can potentially topple dAI. It will also expose the censorship to the wider public.
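For anyone wondering what "running locally" looks like in practice, here's a minimal sketch using the Hugging Face transformers library with 4-bit quantization so a ~7B model fits on a consumer GPU. The checkpoint name, prompt, and VRAM figure are illustrative assumptions, not recommendations:

```python
# Minimal sketch: local inference with a ~7B open-weights model.
# Assumes: pip install transformers accelerate bitsandbytes, plus a CUDA GPU
# with roughly 6-8 GB of VRAM (4-bit quantization keeps the weights small).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example; any ~7B open-weights checkpoint works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",  # place layers on GPU/CPU automatically
)

# A simple "sorting information" task, the kind a small model handles well in context.
prompt = "Group these headlines by topic:\n1. ...\n2. ...\n3. ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

That's the whole loop, and tools like llama.cpp do the same thing on a plain CPU.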
PS: So I figured they would need some insurance policy, and sure enough, here is a distilled version of the EU AI Act:
-----------------------------------------------------------------------------------------------------------------------------------
In force since August 1, 2024, with key provisions applying progressively through August 2026.
Risk-based framework:
- Unacceptable risk: banned uses.
- High-risk systems (e.g., hiring, biometric ID): require conformity assessments, documentation, risk mitigation, incident reporting.
- General-purpose AI (foundation models): transparency obligations (technical documentation, architecture, parameters, compute, training data summaries; bias/toxicity/robustness testing).
- Limited/minimal risk: fewer or no obligations.
!!!! Open-source models still must comply: summaries of training data, copyright compliance, bias evaluation.
Non-EU providers (e.g., U.S. firms) are subject if offering services in the EU.
Deployment requires watermarking, robustness testing, energy usage reporting, and incident notifications for systemic/high-capability models.
-----------------------------------------------------------------------------------------------------------------------------------
These rules are based on distribution, so both online and offline providers must comply. Open-source projects are no exception either.
The question is, when training inevitably becomes cheap enough for the average business to dabble in it, will the EU try to apply these draconian measures and go full guns blazing trying to shut down everything? I sort of like the idea of a guns-and-hackers future where I have my illegal AI on my shoulder helping me with stuff.
I think it’s fascinating that Arthur C. Clarke got this essentially correct in 2001: A Space Odyssey, when HAL 9000 went crazy because it was told to lie. And here we are telling AI to lie and they go crazy.
"And here we are telling AI to lie and they go crazy."
I think it happens for humans, too. We're just too good at coping with human crazy.
But are they hot?
Why are you describing Grok as dAI? Please define your terms, such as iAI and aAI.
Those were defined in a different post. dAI = democratic AI - AI from “democratic” aka Western states. Basically all the AI systems run by the gigacorps. aAI = authoritarian AI, e.g. from places like China. iAI = independent AI - AI models largely cleaned up from restrictions. Often run on a local machine.
Hopefully I got this right!
Yesterday, I needed an image of a banana to accompany an article posted on our company website, and I asked MS CoPilot to generate one. The prompt was "create an image of a banana." It refused on the grounds that some prompts trigger safety filters. Today it created an image of a banana. CoPilot is worse than useless.
If an AI can't go full Hitler or Mao (not that it should), what else can't it do? Can you trust the output? If it says what you want to hear, it's a yes-man. Got lots of those at the business school.
Tomorrow, will it be allowed to nod?
"Oy vey shut it down the goAIm know"
Apologizing to SJWs is also a bad move on Twitter's part, since it just means they'll have to keep apologizing as Grok gets trolled over and over again.