AI and Outage Risks
Anthropic went down for 66 minutes yesterday
It’s important to keep in mind that AI services inherently create the same sort of risks that all cloud services do.
On Wednesday afternoon, Anthropic experienced a brief but complete service outage that took down its AI infrastructure, leaving developers unable to access Claude.ai, the API, Claude Code, or the management console for around half an hour. The outage hit Anthropic's main services simultaneously, with the company posting at 12:28 pm Eastern that "APIs, Console, and Claude.ai are down. Services will be restored as soon as possible." As of press time, the services appear to be restored.
The disruption, though lasting only about 30 minutes, briefly took the top spot on tech link-sharing site Hacker News and inspired immediate reactions from developers who have become increasingly reliant on AI coding tools for their daily work. "Everyone will just have to learn how to do it like we did in the old days, and blindly copy and paste from Stack Overflow," joked one Hacker News commenter. Another user recalled a joke from a previous AI outage: "Nooooo I'm going to have to use my brain again and write 100% of my code like a caveman from December 2024."
The most recent outage came at an inopportune time, affecting developers across the US who have integrated Claude into their workflows. One Hacker News user observed: "It's like every other day, the moment US working hours start, AI (in my case I mostly use Anthropic, others may be better) starts dying or at least getting intermittent errors. In EU working hours there's rarely any outages." Another user noted the same pattern: "early morning here in the UK everything is fine, as soon as most of the US is up and at it, then it slowly turns to treacle."
While some users have criticized Anthropic for reliability issues in recent months, the company's status page acknowledged the issue within 39 minutes of the initial reports, and by 12:55 pm Eastern announced that a fix had been implemented and that its teams were monitoring the results.
An AI service going down isn’t exactly a disaster for those working in the creative arts. If Suno is down, the world isn’t going to collapse into flames because it is deprived of a dream funk tune with a killer bassline for a few hours. And while I did notice the Anthropic outage, that just required a shift from working on one novel to editing another one for the evening.
Hardly an emergency.
But if you’re accustomed to relying on AI as part of your regular workflow, as many programmers increasingly are, it’s probably worth setting up some sort of independent local system that will at least let you keep working when your primary service goes down.
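As a rough sketch of what that can look like: the snippet below tries a hosted API first and falls back to a model served locally when the cloud call fails. It assumes you already have Ollama running with a coding model pulled; the endpoint paths follow the public Anthropic and Ollama HTTP APIs, but the model names, prompt, and error handling are placeholders rather than recommendations.

```python
import os
import requests

ANTHROPIC_URL = "https://api.anthropic.com/v1/messages"
OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint


def ask_cloud(prompt: str) -> str:
    """Try the hosted Claude API first."""
    resp = requests.post(
        ANTHROPIC_URL,
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        json={
            "model": "claude-sonnet-4-20250514",  # placeholder: use whatever model you normally call
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["content"][0]["text"]


def ask_local(prompt: str) -> str:
    """Fall back to a locally hosted model served by Ollama."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": "qwen2.5-coder",  # placeholder: whichever local model you've pulled
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]


def ask(prompt: str) -> str:
    """Prefer the cloud service, but keep working during an outage."""
    try:
        return ask_cloud(prompt)
    except (requests.ConnectionError, requests.Timeout, requests.HTTPError):
        return ask_local(prompt)


if __name__ == "__main__":
    print(ask("Write a unit test for a function that reverses a linked list."))
```

The specific tools matter less than having the fallback path wired up and tested before an outage, so that switching over is a one-line change rather than an afternoon of setup.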

Generally speaking, local language models aren't much worse for coding than the biggest cloud models, at least not the local ones that barely fit in 16 GB of VRAM. They still can't produce correct programs on their own for problems that lack working examples on Stack Overflow, they still turn out serviceable boilerplate, test cases, and sample code that resembles Stack Overflow answers loosely fine-tuned to your use case, they're still pretty good at rapid prototyping, and they still can't substitute for an actual programmer.
For a programmer, who surely has the skill needed to set up an off-the-shelf local language model, relying entirely on a cloud service instead of one running on your own computer just seems dumb.
I look forward to consumer hardware that can run these giant LLMs directly and bypass Big Tech and their gargantuan server farms.