A new term is gaining traction in workplaces grappling with the rapid adoption of generative AI: “workslop.”
The phrase describes low-effort, AI-generated work that appears polished but ultimately shifts the cognitive burden onto colleagues who must decipher, verify or redo it. As generative AI tools proliferate and companies push employees to use them, the phenomenon is becoming increasingly common, according to research highlighted by Harvard Business Review.
For the person on the receiving end, the experience can be frustrating, the report notes. Workslop often looks professional on the surface but contains inaccuracies, vague language or incomplete thinking that requires additional time and effort to correct.
Researchers Kate Niederhoffer, Alexi Robichaux and Jeffrey T. Hancock found the consequences can extend beyond lost productivity. In interviews and surveys, employees reported that workslop can erode trust and strain workplace relationships.
Survey data cited in the research suggests the issue is widespread. In a Harvard Business Review survey of 1,150 U.S. desk workers, 41% said they had received workslop that affected their work, while more than half admitted to sending it to colleagues. One in 10 respondents said at least half of the AI-generated material they sent co-workers was low quality or unhelpful.
In the Harvard Business Review report, the researchers argue the problem is less about individual laziness than organizational pressure: leaders mandate AI use without clear guidance, while employees are already stretched thin by heavier workloads and tightening budgets. In that environment, some workers use AI in performative ways to demonstrate compliance with leadership mandates, producing content quickly even when it shifts more work onto colleagues.
“Our research points to an uncomfortable answer: The proliferation of workslop is a management failure,” the authors write. “Specifically, it is the result of unclear AI mandates and overwhelmed teams.”
Read the full report from Harvard Business Review.
