
Fear Not The AI, But The Automation
16.04.2025
It’s astonishing how the tempo of technology has accelerated. Tasks that once took weeks or months now happen in seconds. In the past, a critical message might travel by ship, or by telegram that still relied on human relays. Today, entire systems can be launched or brought down at the push of a button. Information moves at light speed, and so do errors. A single click can send a million emails or initiate a financial trade across the globe. The same speed that enables instant communication also means a mistake or malicious act can propagate faster than any human can intervene. In short, modern systems operate on a hair trigger, and sometimes that trigger is all too easy to pull.
This isn’t a theoretical concern. Consider that a leaked cloud credential posted online can be exploited within minutes by automated attackers [1]. When things go wrong now, they go wrong quickly. The stage is set: unprecedented speed, unprecedented scale. But what truly keeps cybersecurity folks up at night isn’t an AI suddenly achieving sentience – it’s the very real, present-day risk from the automation and integration we’ve woven into every aspect of our lives.
Over the last decade, we have eagerly handed over countless decisions to algorithms, scripts, and bots. From DevOps pipelines to trading desks, automation has replaced humans more extensively than might appear on the surface. Code deploys itself via CI/CD (Continuous Integration/Continuous Deployment) pipelines, servers auto-scale based on traffic, and bots handle tasks from customer support chats to financial transactions. In theory, this reduces human error and speeds up innovation. In practice, it sometimes means mistakes happen at machine speed – blindingly fast.
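To make “machine speed” concrete, here is a deliberately naive auto-scaler sketch. Every name and threshold is invented, and a real system would call a cloud provider’s API; the point is what this loop lacks - bounds, cooldowns, and any moment where a human could step in.

```python
import random
import time

def get_cpu_load() -> float:
    # Stand-in for a real metrics query; here, just simulated noise.
    return random.random()

def set_server_count(n: int) -> None:
    # Stand-in for a cloud API call that resizes the fleet.
    print(f"scaling fleet to {n} servers")

servers = 10
for _ in range(10):                      # in production this would be `while True`
    load = get_cpu_load()
    if load > 0.80:
        servers = servers * 2            # double the fleet: no upper bound
    elif load < 0.20:
        servers = max(1, servers // 2)   # halve it: no cooldown between moves
    set_server_count(servers)
    time.sleep(0.1)                      # a fleet-sizing decision every tick, forever
```

If the metric feed misbehaves, so does the fleet - and nothing in the loop ever pauses to ask a human.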
We’ve seen dramatic examples of automation gone awry. One infamous case is Knight Capital’s $440 million glitch. In August 2012, Knight Capital (a major trading firm) deployed new software that accidentally activated an old, dormant bit of code on their servers. Within 45 minutes, their trading algorithms executed around 4 million unintended stock orders, buying high and selling low repeatedly. By the time engineers scrambled to pull the plug, Knight had lost about $440 million – roughly three times its annual profit – and was teetering on bankruptcy. The company described it as a “trading glitch,” but it was essentially an automated system running wild without adequate safeguards [2].
There are even more recent examples. Just a few days ago, we got another reminder of how fragile our systems have become under the weight of automation. The U.S. Treasury market saw a sudden spike in yields after a sell-off was triggered not by humans, but by automated trading systems. When the yield on the 10-year bond hit a certain threshold, algorithmic trades began liquidating massive positions, amplifying volatility without any direct human oversight. This was reportedly linked to the unwinding of leveraged “basis trades” used by hedge funds — a strategy that became unprofitable as prices shifted and margin calls kicked in automatically. In minutes, automated processes turned a market fluctuation into a mini-crisis. No malice, no mistake, just automation doing precisely what it was told to do, with no one watching closely enough [3].
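The mechanics are easy to caricature. Below is a toy simulation - invented numbers, nothing like a real trading system - of how a hard threshold plus automatic margin calls turns one small shock into a self-feeding cascade:

```python
THRESHOLD = 4.50      # yield (%) at which automated liquidation kicks in
IMPACT = 0.02         # yield move caused by each forced sale (invented)

yield_pct = 4.49      # calm market, just under the trigger
positions = 50        # leveraged positions still open

yield_pct += 0.02     # one small external shock crosses the threshold...

step = 0
while positions > 0 and yield_pct >= THRESHOLD:
    positions -= 1            # margin call fires, the position is dumped
    yield_pct += IMPACT       # the forced sale itself pushes yields higher
    step += 1
    print(f"step {step}: yield {yield_pct:.2f}%, positions left {positions}")
# In real markets each iteration takes microseconds; the cascade is
# finished long before a human can react.
```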
Automation errors aren’t confined to finance. Even our development tools and processes can backfire spectacularly. In 2017, for example, a GitLab engineer was troubleshooting a database issue and inadvertently ran a script on the production database instead of a replica. The result? It deleted 300 GB of live data – issues, merge requests, comments – in an instant. Five different backup/replication systems failed to recover it immediately. GitLab’s team famously live-tweeted the frantic recovery efforts, but the incident underscores a sobering point: a single stray command in an automated environment can cause catastrophic loss. There have been numerous such incidents across industries - from misconfigured scripts that email every customer by mistake to CI/CD deployments that take down critical services. The power of automation means a typo or logic bug isn’t just a small oops; it can become a massive, fast-moving disaster before anyone blinks [4].
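One cheap mitigation the GitLab incident suggests is making the tooling itself check where it is running before doing anything destructive. A minimal sketch, with hypothetical hostnames:

```python
import socket
import sys

PRODUCTION_HOSTS = {"db1.example.com", "db2.example.com"}  # hypothetical names

def guard_destructive(action: str) -> None:
    """Refuse a destructive action on production unless explicitly confirmed."""
    host = socket.getfqdn()
    if host in PRODUCTION_HOSTS:
        answer = input(f"About to run '{action}' on PRODUCTION host {host}.\n"
                       f"Type the hostname to confirm: ")
        if answer != host:
            sys.exit("Aborted: confirmation did not match.")

guard_destructive("wipe PostgreSQL data directory")
print("...proceeding")
```

A two-line check like this would have turned an instant catastrophe into a prompt.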
So, while DevOps gurus preach “automate all the things” (with good reason, for efficiency), each script and API we put in place is also a potential disaster in the making. And it’s not only internal mistakes we worry about - these automated systems are increasingly exposed to the wider world, which introduces our next challenge.
In earlier eras, critical industrial systems - power grids, water treatment plants, factory control systems - were isolated from the internet, whether by technical limitation or by design. Sabotage meant getting in physically. Not anymore. Today, operational technology (OT) systems like ICS/SCADA are being hooked up to corporate networks and even the internet for convenience and efficiency. This convergence of IT and OT means you can monitor or control a factory or a city’s water supply from across the country – or, unfortunately, from across the world.
Consider the Oldsmar, Florida water plant incident in 2021. A system once safely isolated was now remotely accessible - and someone tried to increase sodium hydroxide levels in the city’s water. The incident was initially reported as a hack, but later investigation suggested it may have been the operator himself, mistakenly triggering changes while navigating the system. Regardless of intent, remote access was active where it shouldn’t have been - and a single misstep nearly poisoned a city [5].
Oldsmar was a wake-up call, but it’s far from an isolated case. Security researchers have been warning for years that many critical systems are just one Shodan search away from discovery (Shodan is a search engine for internet-connected devices, often used to find unprotected webcams, routers, and yes, industrial control systems). Recent studies by Censys and others found over 145,000 industrial control devices worldwide directly exposed to the internet. Many are trivially easy to connect to, often protected only by default passwords or outdated software. It’s the equivalent of leaving the factory backdoor not just unlocked, but propped wide open [6].
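That “one Shodan search away” is not a figure of speech. Shodan exposes a public API (Python package shodan); a query along these lines - the query is illustrative, and you would need your own API key - counts devices exposing Modbus, a common ICS protocol, directly to the internet:

```python
import shodan  # pip install shodan

api = shodan.Shodan("YOUR_API_KEY")
# Port 502 is Modbus, an ICS protocol with no business facing the public internet.
result = api.count("port:502")
print(f"Internet-exposed Modbus endpoints: {result['total']}")
```

Anyone - researcher or attacker - can run the same three lines.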
Why are these systems online? Sometimes for convenience: vendor maintenance access, remote monitoring by engineers, or integration with modern data systems. But whatever the reason, once online, they inherit all the risks of the internet. We’ve seen attackers manipulate power grids (as in the 2015 Ukraine blackout) and cripple oil pipelines (the Colonial Pipeline ransomware in 2021) by targeting the digital control layers of physical infrastructure. Even when attackers aren’t deliberately targeting ICS (Industrial Control Systems), automated scans and malware can accidentally hit them, with unpredictable, possibly dangerous results. For instance, a piece of generic ransomware doesn’t intend to shut off a factory’s valves, but if it infects the factory’s HMI (Human-Machine Interface), the outcome could be more than a data breach - it could be a physical accident.
In summary, we’ve networked everything from traffic lights to thermostats to power plants, often faster than we’ve secured them. The result is that a misconfigured water pump or an unpatched PLC (Programmable Logic Controller) can be discovered by a bot and potentially manipulated by anyone, anywhere. The threat isn’t a Terminator-like AI deciding to poison a city’s water; it’s the very human decision to connect that water system to the internet without proper safeguards. We have, in effect, automated and exposed critical infrastructure, creating a new class of vulnerabilities that didn’t exist when a water plant was a bunch of analog valves and dials behind a locked door.
Figure: The growth of vulnerabilities, exploits, and zero days since 1993. Also included is a timeline of major events involving vulnerabilities since 1993. Source: X-Force Threat Intelligence Index 2024
Fraud and Cybercrime at Scale
Automation isn’t only the domain of well-meaning engineers and control systems; it’s also the favorite tool of cybercriminals. The same scalability and speed that benefit businesses are a boon for attackers. Why should a fraudster manually send phishing emails or try passwords one by one when they can script it or use a botnet to do it thousands of times per second? Modern fraud and cybercrime operate at a staggering scale thanks to automation.
Take phishing, the age-old art of the scam email, supercharged by automation. By some estimates, 1.2% of all emails sent are malicious – that sounds small until you realize it equates to about 3.4 billion phishing emails every single day. These phishing campaigns aren’t a guy in his basement hitting “send” over and over; they’re orchestrated by botnets and kits that personalize each email, rotate through dummy sending accounts, and bypass basic filters. The result is a global blast of bait that can hook even a tiny fraction of recipients and still yield plenty of stolen passwords or malware infections. In fact, IBM’s security team noted that phishing remains the leading initial attack vector, responsible for 41% of incidents they analyzed. It’s practically “cybercrime as a service”, with automation doing the heavy lifting [7].
Another example is credential stuffing: hackers take large dumps of stolen usernames and passwords from past breaches and automatically try them on other websites (betting that people reuse passwords - which they do). This is highly automated, and a single bot can attempt millions of logins across hundreds of sites. According to IBM X-Force research, there was a 71% year-over-year increase in attacks using valid credentials (i.e., logging in with stolen creds rather than hacking in) [8]. That surge reflects how attackers find it easier to let automation test logins at scale, essentially walking in through the front door without setting off alarms. It’s like trying every key until one fits, but with a robot that can try billions of keys per minute, overnight. No sophisticated AI needed - a simple script and a list of passwords will do. And the impact is huge: one successful credential-stuffing attack can compromise thousands of accounts in an hour, enabling fraud that ranges from draining bank accounts to filing fake tax returns en masse to sending a “well-crafted” email from a CEO, CFO, or General Manager to the finance department (a.k.a. Business Email Compromise fraud).
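The attacker’s script is trivial - that is precisely the point - so here is a sketch of the defensive counterpart instead: a crude detector for the telltale pattern of one source failing logins across many distinct accounts. All thresholds are invented for illustration.

```python
from collections import defaultdict

FAIL_LIMIT = 20                      # invented threshold
failed_accounts = defaultdict(set)   # source IP -> usernames it failed against

def looks_like_stuffing(ip: str, username: str) -> bool:
    """Record a failed login; return True once one IP has failed
    against suspiciously many *distinct* accounts."""
    failed_accounts[ip].add(username)
    return len(failed_accounts[ip]) > FAIL_LIMIT

# A human mistyping hits one or two accounts; a stuffing bot hits thousands.
for i in range(25):
    flagged = looks_like_stuffing("203.0.113.7", f"user{i}")
print("block this source?", flagged)  # -> True
```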
Figure: Timeline of FBI-reported losses due to BEC attacks and the adoption of Microsoft 365. Source: X-Force Threat Intelligence Index 2024
We’re also seeing automation fuel the rise of synthetic identities in fraud. This is where criminals programmatically create fake people, combining real stolen data (e.g., a real Social Security number or passport number) with fabricated info to create a “person” that exists only on paper. They can then use these fake identities to apply for loans, credit cards, unemployment benefits, you name it. It’s assembly-line fraud. A recent TransUnion report flagged synthetic identity fraud as the fastest-growing type of financial crime in 2024, warning that emerging tools (like AI) make it “much easier and faster to create completely realistic-looking, fabricated identities.” Imagine a bot that can spin up a thousand fake personas, complete with credit histories and social media profiles, and start mass-applying for credit. That’s where we are headed (if not already there). Organized crime rings are leveraging these techniques to defraud government aid programs and banks at a scale that would be impossible to achieve manually [9].
Even classic cyberattacks like malware distribution and network intrusion have been turbocharged. Botnets (networks of infected computers under an attacker’s control) can automatically scan for vulnerable systems across the internet and launch attacks without direct human involvement at each step. One moment of human configuration - and then the script runs on its own, perhaps millions of times, seeking any unlocked door on the internet. The 2021 “Elektra” campaign is a good example: it autonomously scanned GitHub for leaked cloud credentials and spun up crypto-mining servers within minutes of a leak. That’s cybercrime at machine speed and global scale.
All of this is to say: criminals don’t need a superhuman AI to wreak havoc. Well-governed AI or not, poorly governed automation is already doing the job. Phishing emails by the billion, fake identities by the thousand, constant probing of every online system - it’s happening right now, every day. The threat to the average organization or person is less likely to be some genius AI plotting against them, and far more likely to be a dumb, brute-force script flooding their inbox or testing their passwords. Quantity has a quality all its own.
Now, let's paint a very plausible scenario - one that might keep senior managers and executives awake at night. Picture a large, complex organization, say a power utility or a cloud services company. It’s late afternoon, and a junior IT engineer is finishing up some changes to an automation script – perhaps something mundane like a script to sync data or restart a set of servers for maintenance. They’ve tested it in a staging environment, and it all looks good. Feeling the pressure to get things done quickly (everything is fast, remember?), they decide to deploy it to production systems before clocking out, confident it will “just work.” They run the script... and in an instant, it begins doing something terribly wrong.
Maybe a parameter was off, or perhaps the script wasn’t supposed to run on all the servers at once, but due to a tiny oversight, it does. Suddenly, critical systems start shutting down. In a power utility scenario, maybe that script was pointed at the live grid control network by mistake, and now it’s sending a shutdown signal to dozens of substations. City lights flicker and go dark. In a cloud service scenario, perhaps the script is erroneously deleting active storage instead of old logs and within seconds, customer data across the globe is vanishing. Alarms start ringing (literally and figuratively). Panicked engineers try to stop the process, but the automation is faster than human intervention. By the time someone pulls the plug or kills the script, the damage is done - a massive outage or data loss affecting millions of users.
Sound far-fetched? Unfortunately, it’s not. We’ve seen smaller-scale versions of this in real life. Recall the Amazon Web Services (AWS) S3 outage in 2017: an engineer ran a command to debug a billing issue and accidentally took much more server capacity offline than intended, due to a typo in the automation input. That one slip took down a huge swath of the internet for four hours – popular websites and services simply vanished, all because a script did exactly (the wrong thing) what it was told. AWS later admitted the tooling allowed too much to happen from a single command [10].
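AWS’s own takeaway was to limit how much any single command can do. Here is a sketch of that kind of blast-radius cap; the numbers are invented for illustration, not AWS’s actual safeguards.

```python
def remove_capacity(requested: int, fleet_size: int,
                    max_fraction: float = 0.05) -> int:
    """Cap how many servers one command may take offline.
    The 5% ceiling is an invented example figure."""
    ceiling = max(1, int(fleet_size * max_fraction))
    if requested > ceiling:
        print(f"Requested {requested}, capping at {ceiling} "
              f"({max_fraction:.0%} of the fleet). Re-run to remove more.")
        return ceiling
    return requested

# The fat-fingered command asks for far more than intended:
taken_offline = remove_capacity(requested=800, fleet_size=1000)
```

A typo still happens; it just can no longer take down a fifth of the internet in one keystroke.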
Or think of the time a major telecom’s automated update knocked out 911 emergency call service for several states – no malicious hacker, just an internal error propagated at light speed. These are unintentional, human-triggered catastrophes amplified by automation.
In our hypothetical power grid incident, it wasn’t an evil AI deciding humans are pesky and need to be plunged into darkness. It was a well-meaning (if maybe inexperienced) human, using an automated tool without fully understanding the potential impact or without sufficient checks in place. The risk here is not a malevolent intelligence; it’s the lethal combination of speed + connectivity + lack of oversight. A script that can do great things in milliseconds can also do awful things in milliseconds if mis-aimed. And when our systems are tightly coupled (business networks connected to control systems, everything connected to everything), a local mistake can cascade into a much bigger catastrophe.
What could prevent these scenarios? Checks and balances: peer review and two-person approval for production changes, staged rollouts and canary deployments that touch a few machines before all of them, rate limits and circuit breakers that cap how much a single command can change, least-privilege access so a script can only reach what it genuinely needs, and dry-run modes as the default. (A minimal sketch of two of these guardrails follows below.)
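For illustration, a minimal sketch of two of those guardrails - dry-run by default, plus an explicit confirmation before touching production. All names are hypothetical.

```python
import argparse

def main() -> None:
    parser = argparse.ArgumentParser(description="maintenance with guardrails")
    parser.add_argument("--env", choices=["staging", "production"],
                        default="staging")
    parser.add_argument("--execute", action="store_true",
                        help="actually make changes; the default is a dry run")
    args = parser.parse_args()

    targets = ["server-01", "server-02", "server-03"]  # hypothetical fleet

    if not args.execute:
        print(f"[dry run] would restart {len(targets)} servers in {args.env}")
        return
    if args.env == "production":
        if input("Type 'production' to confirm: ") != "production":
            raise SystemExit("Aborted.")
    for host in targets:
        print(f"restarting {host} in {args.env}")  # the real API call goes here

if __name__ == "__main__":
    main()
```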
But implementing those checks sometimes takes second place to moving fast. It’s a cultural issue as much as a technical one - a blind confidence that “nothing could go wrong,” until it does. The irony is thick: we fear an AI uprising, yet we’ve created a world where a summer intern with a keyboard has, in some ways, more power to disrupt society than a rogue AI (at least for now). The intern doesn’t need to be evil, just unlucky or under-supervised.
"We gave everyone the power of gods — and forgot to teach them mythology.”
The quote above encapsulates our predicament. Our modern systems would have been seen as marvels of technology just a few decades ago. We can deploy global software updates, move billions of dollars, or shut down a factory with a few keystrokes.
In cybersecurity and fraud prevention circles, there’s a saying: “Amateurs hack systems, professionals hack people.” Perhaps we should add: “and professionals increasingly hack automation.” Because why fight an intelligent enemy when you can simply ride the express train of someone’s automated processes? The defenses we build must reckon with this reality. We need to govern our automation, put safety nets under our continuous deployments, add circuit-breakers to our trading bots, and double-check which network that script is really connected to.
The real threat to humanity today isn’t a scheming artificial general intelligence plotting our downfall. It’s the far more mundane and immediate danger of automated systems running unchecked. It’s the combination of blind automation + exposed systems + fallible humans that creates a powder keg. Whether it’s a trading algorithm gone wild, a water plant team that didn’t secure TeamViewer, or an overzealous continuous deployment pipeline, the pattern is the same. We’re living in an era where accidentally causing a catastrophe is easier than ever, simply because everything is so interconnected and sped up.
Disclaimer:
This isn’t to say that AI won’t eventually turn against us - who knows what version of "SkyNet-as-a-Service" might bring. But the sobering truth is: we might not make it that far. The real danger today isn’t malevolent intelligence - it’s blind automation, exposed systems, and well-meaning humans with just enough access to be catastrophic.
References:
[2] www.cio.com
[3] www.reuters.com
[4] bytesizeddesign.substack.com
[7] keepnetlabs.com
[9] statescoop.com
[10] www.theverge.com