The Promise vs. The Reality
I’ve been thinking a lot about the promise of AI versus its current, messy reality. We are being sold a vision of streamlined productivity, of effortless efficiency. Instead, we’re witnessing the slow “enshittification” of our digital spaces, and it’s seeping through the corporate firewall. The most visible version of this is what people call “AI Slop”—the endless stream of bizarre, low-quality, legally dubious, AI-generated content flooding the internet. I’m looking at you, Sora 2 🙂
A More Dangerous Internal Version
But there’s a more dangerous, internal version of this I'm seeing more and more: Workslop. Think of it as the enshittification of work. Oh, and if you’re unfamiliar with the term: enshittification was coined by writer Cory Doctorow to describe the gradual decline in quality and user experience that happens as digital platforms (or systems in general) shift from serving their users to extracting value from them. The American Dialect Society selected it as its 2023 Word of the Year, and it’s included in most online dictionaries.
AI Slop vs. Workslop
First, let's get the terms straight. AI slop is the stuff you see online. It’s the bizarre images, the generic blog posts that say nothing, and the audio clips that feel just a little off. This content is designed for volume, not value, aiming to game algorithms and grab fleeting attention. It’s annoying, and it erodes our trust in the digital world.
Workslop, on the other hand, is the corporate cousin to AI slop. It’s specific to the professional world, and it’s far more insidious. This is the AI-generated report, presentation, or memo that looks polished and complete on the surface. But underneath the slick formatting, it’s full of holes. The work is incomplete, it lacks critical context, or it’s just plain wrong. It masquerades as progress but is actually a productivity bomb waiting to go off.
The key difference is where the burden falls. AI slop is a problem for the internet and your feed. Workslop is a problem for your team. It shifts the labor to the next person in line, forcing them to spend their time interpreting, correcting, or simply redoing the subpar work.
AI Slop: low-quality, high-volume AI-generated content made for the internet
Workslop: AI-generated work that looks complete but isn't
A Real-World Example
An example pulled from recent headlines is Deloitte’s high-profile refund to the Australian government. Australia contracted with Deloitte to conduct an independent review of its compliance framework, and the final report contained several AI-generated errors. Deloitte will be refunding at least part of its $440,000 fee, but the damage to its brand and to client trust is far more significant.
The Hidden Costs of a Shortcut
If you think this is a niche problem, the data says otherwise. In a recent joint study by BetterUp Labs and the Stanford Social Media Lab, a staggering 40% of US workers reported receiving workslop in the last month alone. I suspect the true figure is considerably higher.
What’s the real damage? Researchers found that nearly two hours of productivity are lost for every single instance of workslop. When you scale that across an organization, it translates into millions of dollars in hidden costs. This might explain why, according to the same research, 95% of organizations report seeing no actual return on their AI investments. The tools meant to eliminate work are quietly creating more of it.
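To make "millions" concrete, here's a rough back-of-envelope sketch of that scaling. Only the 40% incidence and the roughly two hours lost per instance come from the study; the headcount, the fully loaded hourly labor cost, and the one-instance-per-month frequency are my own illustrative assumptions.

```python
# Back-of-envelope estimate of workslop's hidden cost.
employees = 10_000          # assumption: organization headcount
incidence = 0.40            # from the study: share of workers receiving workslop monthly
instances_per_month = 1     # assumption: one instance per affected worker per month
hours_lost = 2              # from the study: productivity lost per instance
hourly_cost = 60            # assumption: fully loaded hourly labor cost, USD

monthly = employees * incidence * instances_per_month * hours_lost * hourly_cost
annual = monthly * 12
print(f"${monthly:,.0f} per month, ${annual:,.0f} per year")
# → $480,000 per month, $5,760,000 per year
```

Even with these deliberately conservative inputs, a mid-sized organization lands in the millions per year, which is why the cost stays invisible on any single team's budget yet shows up at the company level.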
A Case Study: When AI QA Needs QA
I saw this firsthand with a client recently. They were evaluating AI tools for quality assurance in their contact center and decided to build a solution internally instead of buying from a top vendor. The initial pilot looked fantastic. The system reviewed 64% of all customer interactions, generated sentiment analysis, and produced beautiful dashboards tracking customer issues. The team was thrilled.
Then the business users—the actual QA team—got their hands on it.
Once they started digging into the details, they realized the AI’s findings were frequently inaccurate or missed the core intent of the customer’s call. Trust evaporated almost immediately. The QA team couldn’t take any of the tool's outputs at face value. They ended up performing QA on their own QA tool, completely defeating its purpose. The system was running, burning through compute resources, but adoption stalled because the workslop it produced created more effort, not less.
The Trust Tax
This is the real, lasting damage of workslop. It’s not just about lost hours; it’s about the erosion of trust. Half of the employees in the BetterUp survey said receiving low-effort, AI-generated work makes them view their colleagues as less reliable, creative, or competent. Many admitted they would actively avoid working with those people in the future.
This creates a dangerous dynamic. Your most experienced people, skeptical but willing to experiment, will feel compelled to double-check every AI-assisted output, duplicating effort. Meanwhile, your more junior people may rely on the tools uncritically, propagating flawed work throughout the organization. This tax on user trust slows everything down.
So, what do we do?
Banning AI isn't the answer; the genie is out of the bottle. We also have to recognize that one bad experience can sour an employee on AI entirely. With the constant narrative about AI taking jobs, people are already looking for reasons to distrust it, and a flood of workslop gives them plenty of ammunition.
The responsibility falls on leaders to set clear and intelligent boundaries and frame the narrative correctly. We need to establish that AI needs humans, and vice versa, and that we'll work through the issues together. This means creating firm guardrails for how, when, and why AI tools are used, defining what "good" looks like, and creating norms where AI is treated as a first-draft assistant, not a final-product generator.
AI isn’t the enemy; lazy application and a lack of oversight are. The temptation to automate everything is strong, but the goal isn’t to replace thinking; it’s to amplify it. Without that guidance, we’re just inviting the enshittification of the internet right into our own house.