The Rearrangement of Organizational Value Coordinates | Reading the Anthropic Economic Index Report
Lately I keep returning to one report: the Anthropic Economic Index, released by Anthropic in 2025 and continuously updated since.
What draws me to it isn’t the novelty of its conclusions, but the way it shifts how we look at the problem.
Most discussions about AI habitually ask one question: Will this job disappear?
This report deliberately avoids that framing. Instead, it does something more grounded, and closer to reality: it breaks jobs down into tasks.
In its analytical framework, there are no overarching concepts like “programmer,” “analyst,” or “editor.” There are only specific work units: writing a piece of code, organizing a dataset, generating a first draft, proofreading a logical flow. AI’s penetration happens at this task level, not as a blanket effect on entire roles.
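The task-level framing above can be made concrete with a toy sketch. Everything here is hypothetical: the role names, task lists, and penetration numbers are illustrative, not figures from the report, which measures usage against a real task taxonomy.

```python
# Hypothetical illustration of task-level analysis: AI penetration is
# measured per task, then aggregated per role. All names and numbers
# below are made up for illustration.

# Each role is a list of (task, ai_penetration) pairs, where penetration
# is the assumed fraction of that task's volume handled with AI help.
roles = {
    "analyst": [
        ("organize a dataset", 0.7),
        ("generate a first draft", 0.6),
        ("defend conclusions to stakeholders", 0.1),
    ],
    "editor": [
        ("proofread a logical flow", 0.5),
        ("generate a first draft", 0.8),
        ("set editorial direction", 0.05),
    ],
}

def compressed_share(tasks):
    """Average AI penetration across a role's tasks (equal weights)."""
    return sum(p for _, p in tasks) / len(tasks)

for role, tasks in roles.items():
    print(f"{role}: {compressed_share(tasks):.0%} of task mix compressed")
```

The point of the sketch is the shape of the calculation, not the numbers: no role hits 0% or 100%, because penetration varies sharply across the tasks inside it. That is the "re-division" the text describes.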
This perspective is crucial. Once you accept that “jobs won’t disappear wholesale, but their internal structure will be rearranged,” many subsequent phenomena become explainable.
Based on real usage data, AI’s fastest coverage is in tasks involving text processing, logical reasoning, and well-defined rules—areas that rarely require physical manipulation, complex emotional judgment, or highly contextual collaboration.
This means most roles aren’t being replaced but rather re-divided: basic, standardized tasks are rapidly compressed, while the remaining work increasingly concentrates on judgment, integration, and accountability.
This is also why the common complaint that extra effort yields no return doesn’t quite hold. Returns haven’t disappeared; they’re simply no longer distributed evenly across “doing a little more.”
Once foundational tasks are absorbed by AI, organizations start paying for something else: making decisions under uncertainty, covering gaps in the system, and taking responsibility before risks materialize. These things matter, but they have one problem—they’re hard to evaluate in a standardized way.
Thus, a familiar tension emerges: on one hand, organizations still reward high performance; on the other, many feel this reward is increasingly “opaque.”
This isn’t entirely a management failure—it’s that value recognition mechanisms lag behind changes in value structure.
When contributions can’t be fully identified through processes, metrics, or assessments, they get recognized through informal channels like trust, delegation, and tacit understanding. So we see phenomena simplistically attributed to “connections,” “inner circles,” or “taking sides.” But structurally, this looks more like: the right to make judgments has started to become valuable, but it hasn’t yet been formally priced.
Anthropic’s subsequent data further confirms this. In enterprise settings, AI usage leans toward automation, concentrated in stable processes with clear objectives. For truly high-value, direction-setting decisions, AI can only assist. This leads to one outcome: the number of “people within systematic workflows” decreases, while “people making judgments outside the system” become more valuable.
From a decision-maker’s perspective, the real challenge isn’t “whether to use AI,” but two more practical questions:
First, is the organization willing to pay the structural cost for judgment-based value—gradually bringing contributions that rely on personal relationships and implicit trust back into formal systems?
Second, if not, does it accept the fact that informal structures will persist and increasingly influence resource allocation?
The same logic applies to individuals. What’s truly being compressed isn’t capability itself, but the replaceable part of it. What can sustain bargaining power is the judgment and responsibility that remain outside task decomposition.
So, AI’s impact on the labor market isn’t fundamentally about changes in job quantity, but about the rearrangement of organizational value coordinates.
Who can be absorbed by the system, and who can only be recognized through relationships—this dividing line is becoming increasingly clear.
That’s why this wave feels more like a stress test. It tests not “who will be eliminated,” but whose value no longer fits the existing organizational pricing model.
Originally written in Chinese, translated by AI. Some nuances may differ from the original.
