AI and Hidden Costs
Recently, I’ve been reading a fabulous book by Matthew Stewart called An Emancipation of the Mind. I was struck by a similarity between the economy of the antebellum South and our current push toward automating knowledge work. Stewart recounts how Theodore Parker and other abolitionists conducted what were, for the time, rigorous studies of the real cost of slavery. Parker’s analysis went beyond slavery’s moral reprehensibility: he argued that it was also economically detrimental to the South. Not only were costs hidden in unpaid labor and in the societal damage of extreme wealth disparity, but the South was also robbing its free population of the chance to acquire marketable skills. If all of the work is done by unpaid labor, there is effectively no on-ramp for a working class to enter employment, and those effects linger to this day.
I do not intend to draw a moral equivalence between slavery and AI. I do, however, want to highlight a structural analogy in cost externalization and long-term skill suppression. If we were to fully account for the underlying costs (compute infrastructure, energy and natural resource consumption, data acquisition and curation, model training and retraining cycles, and the ongoing human labor required to supervise and maintain these systems), the true cost of many AI services would be substantially higher than today’s subscription prices suggest. Those prices are viable largely because costs are deferred, subsidized, or externalized during this early phase of adoption. That dynamic may appear manageable in the short term. The longer-term costs, however, are likely to surface only after AI deployment becomes widespread enough to displace large portions of entry-level knowledge work, narrowing the pathways through which early-career workers traditionally acquire skills, judgment, and professional context. By the time those effects are visible, the opportunity to correct course may already have passed. We’ve seen similar dynamics in manufacturing automation, where productivity gains preceded a decades-long erosion of apprenticeship pathways, an erosion whose ramifications are disrupting our political systems today.
Let me be clear—I am an AI evangelist. Even if I weren’t, the genie is out of the bottle, and I’m not going to be the one to put it back. In the near term, sophisticated agentic systems can substitute for interns and entry-level FTEs in ways that look, on paper, like decisive wins: faster output, lower marginal cost, and fewer human dependencies. Yet once we account for the hidden and deferred costs embedded in these systems—and the long-term consequences of collapsing the entry points through which knowledge workers traditionally develop judgment, context, and expertise—the calculus becomes far less obvious. Therefore, those of us who work with AI have a responsibility to remain conscious of the mistakes of the past as we act in the present, for the sake of the future.
I’m always interested in discussing these types of issues. If you are as well, let’s have a conversation.

