NikolaNovak 1 day ago [-]
I work in IT, have all my life, but these stories still have a sense of bizarre unreality to me, a dream sequence that isn't of the real world.
I understand that some companies and people find it extremely empowering, accelerating, and convenient to plug AI into prod, but I come from the diametrically opposite culture of the old-school DBA / sysadmin mentality, rather than the "move fast and break things" modern dev mentality.
Once it was explained to me, authoritatively, that hallucinations are mathematically impossible to eliminate, there's just no way I'm not "air / human gapping" any kind of LLM from any kind of prod.
I get that these headlines are sensationalist and these cases may or may not be extreme or unusual/unrepresentative, but it's stunning to me how many people go through mandatory AI 101 training, are basically made to acknowledge that an LLM will make things up confidently, and promptly forget that. I have executives sending me market research that's fully made up, techies saying software is dead because AI can make a payroll system in 5 minutes, and everybody wanting to plug an LLM into everything. And I'm not saying LLMs are useless like some people do - I use them multiple times a day for various things - I just cannot imagine giving one root / sysadmin access to a prod system and database :-/
(Even the "unhinged apologies" - unless I'm mistaken, that too is basically fancy autocomplete, correct? It's not that the AI "acknowledges" or "understands" or "fesses up" when things go wrong, even though technical media presents it that way. It's just what the training material / RLHF built as the statistical response to a mistake.)
Corrado 15 hours ago [-]
I think that an eventual consequence of using AI is that more and more of these things will happen. Like you, I'm experienced enough to know that you separate environments completely; nothing shared. This is wisdom and something AI will never obtain. The up-and-coming programmers and engineers using AI for everything will never learn these lessons because they won't be exposed to the problems or really forced to think about these outcomes.
rafaelmn 1 day ago [-]
> Once it was explained to me, authoritatovely, that hallucinations are mathematically impossible to eliminate
That's a weak criterion - hallucinations are mathematically impossible to eliminate in humans, too.
_aavaa_ 1 days ago [-]
Humans can be held responsible; what are you gonna do to the AI? Wipe the context?
oceansky 22 hours ago [-]
I was going to say that at least the human brain is deterministic, but a Google search says there's no scientific consensus on that.
simplyluke 1 day ago [-]
The current sentiment within basically all of silicon valley is to remove every possible guardrail and accelerate AI adoption as fast as possible, consequences be damned.
The uptime of major websites recently should be a tell of how well that's going.
standardly 1 day ago [-]
I've noticed a general decline in performance across several major applications within the past year or so. Not making any accusations yet, because it could be placebo, or coincidence, or selection bias... but I have my suspicions.
pando85 22 hours ago [-]
It’s not AI’s fault. It’s like leaving an inexperienced intern alone with all the production passwords and encouraging them to experiment.
Blaming the AI or the cloud provider is like deploying an unverified tool you “found somewhere”, or running a forum script meant for a different version or a “similar enough” environment.
That’s what staging environments are for.
SaucyWrong 20 hours ago [-]
Reckless engineering team deletes their own production DB. Blames everyone else. Old news.
cheald 1 day ago [-]
If you wouldn't give it to an enthusiastic junior dev, don't give it to AI, period.
JimsonYang 1 day ago [-]
Can someone more technical explain the cause of this?
No separate production and development keys and builds? Seems like a careless mistake rather than the sensational story the media is trying to make it.
kioleanu 13 hours ago [-]
The AI identified a problem, namely a credential mismatch, and decided it needed to delete a volume in order to fix it. It then searched the whole codebase for a token that would allow it to do that, found a production token meant for something else, and issued the deletion request with said production token.
On the other end, the cloud company had a feature, _by design_, that also deletes any backups if you delete the volume.
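The failure mode described above - an agent finding a broadly scoped production token and using it for a destructive call - is exactly what an environment-scoped authorization check would block. A minimal sketch, assuming a hypothetical harness (none of the names here come from the article or any real SDK):

```python
# Hypothetical sketch: refuse destructive calls when a credential's
# environment scope does not match the environment being touched.
from dataclasses import dataclass


@dataclass(frozen=True)
class Credential:
    token: str
    environment: str  # e.g. "staging" or "production"


DESTRUCTIVE_ACTIONS = {"delete_volume", "drop_database", "delete_backup"}


def authorize(action: str, cred: Credential, target_env: str) -> None:
    """Raise unless the credential is scoped to the environment it touches."""
    if cred.environment != target_env:
        raise PermissionError(
            f"{action}: credential scoped to {cred.environment!r}, "
            f"target is {target_env!r}"
        )
    if action in DESTRUCTIVE_ACTIONS and target_env == "production":
        # Destructive prod actions always need out-of-band approval,
        # which an autonomous agent session cannot supply by itself.
        raise PermissionError(f"{action}: requires human approval in production")


staging_cred = Credential(token="st-123", environment="staging")
prod_cred = Credential(token="pr-456", environment="production")

authorize("delete_volume", staging_cred, "staging")  # allowed
try:
    authorize("delete_volume", prod_cred, "production")
except PermissionError as e:
    print(e)
```

Under this sketch, the token the agent dug out of the codebase would have been useless for the deletion: either its environment tag wouldn't match, or the destructive-action rule would demand a human sign-off.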
tommy29tmar 14 hours ago [-]
I’d read it that way. The issue is not that Claude can delete things; it’s that one session apparently had enough access to touch prod, run destructive commands, and also wreck the rollback path. Staging/prod separation helps, but backups should be on a different set of credentials too. Otherwise “restore it” is just another action the agent can damage.