Anthropic’s Economic Index offers a snapshot of how organisations and individuals are using large language models, based on the company’s analysis of 1 million consumer interactions on Claude.ai and 1 million enterprise API calls, both drawn from November 2025. The report says its findings are based on observed usage patterns rather than surveys or interviews with decision-makers.
A small set of use cases dominates
Anthropic found that usage clusters around a relatively small set of tasks. The 10 most frequently performed tasks accounted for nearly a quarter of consumer interactions and almost a third of enterprise API traffic. Much of that activity focused on code creation and modification.
The report says this concentration has remained relatively constant over time, suggesting Claude’s value is strongest in proven task areas rather than in broad, general deployments. It also indicates that organisations may see better results from targeted AI rollouts tied to specific use cases where large language models are effective.
Augmentation outperforms automation
On consumer platforms, collaborative use was more common, with users iterating over multiple turns in a conversation. In enterprise API usage, businesses more often pursued automation to generate efficiency gains.
However, Anthropic observed that while Claude performed well on shorter tasks, quality declined as tasks became more complex or required longer “thinking time.” Tasks estimated to take humans several hours showed lower completion rates than shorter ones. The report notes that longer tasks were more successful when users broke them into smaller steps and corrected outputs through iteration.
White-collar work dominates, with different task splits
Anthropic said most queries were associated with white-collar roles. It also observed geographic differences, noting that usage in poorer countries skewed more towards academic settings than usage in the United States.
The report suggested that AI may shift work unevenly within roles. Travel agents, for example, could lose complex planning tasks to the model while retaining more transactional work, whereas in other roles, such as property management, routine administrative tasks may be handled by AI while higher-judgment work remains with humans.
Reliability limits reduce productivity expectations
Anthropic said claims that AI could boost annual labour productivity by 1.8% over a decade may need to be revised downward to about 1% to 1.2% once the additional labour and costs are taken into account. The report attributes this reduction to the work required around AI systems, including validation, error handling and rework.
The report also said outcomes depend on whether AI complements human work or substitutes for it, with substitution success tied to the complexity of tasks.
Anthropic noted a near-perfect correlation between the sophistication of prompts and successful outcomes, highlighting that results are strongly shaped by how users interact with the technology.
Key takeaways for leaders
The report said AI delivers value fastest in specific, well-defined areas and that AI-human collaboration outperforms full automation for complex work. It also warned that reliability limits and additional overhead reduce predicted productivity gains, and that workforce impacts will depend on task mix and complexity rather than job titles alone.