
Insights & Analysis
Jun 12, 2025
AI Agents at Work: Why Alignment with Human Preferences Matters as Much as Capability
Technical possibility without human preference risks wasted investment, resistance, and diminished impact.
1. What Most AI Strategies Miss: Preference vs Capability
AI adoption strategies often prioritise technical feasibility (i.e. what models can do) over what people actually want them to do. But a recent Stanford study challenges this mindset.
In their WORKBank research, involving 1,500 U.S. workers and 52 AI experts, Shao et al. (2025) mapped task-level automation desire versus AI capability across occupations. The findings expose a critical blind spot in enterprise AI strategy:
2. The Four Zones of AI Adoption
The study categorises tasks into four distinct zones:
| Zone | Meaning |
|---|---|
| Green Light | High desire, high capability. The sweet spot for AI adoption. |
| Red Light | Low desire, high capability. Technically feasible but culturally rejected. |
| Opportunity | High desire, low capability. Innovation whitespace for R&D investment. |
| Low Priority | Low desire, low capability. Best left untouched. |
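The zone logic above is just a two-by-two grid over desire and capability, which makes it easy to operationalise in an internal audit. The sketch below shows one minimal way to do that; the 0–1 scores, the 0.5 threshold, and the example tasks are all invented for illustration, not taken from the study.

```python
# Minimal sketch of the WORKBank-style zone mapping.
# Scores, threshold, and example tasks are hypothetical.

def classify_zone(desire: float, capability: float, threshold: float = 0.5) -> str:
    """Map a task's automation-desire and AI-capability scores (0-1) to a zone."""
    high_desire = desire >= threshold
    high_capability = capability >= threshold
    if high_desire and high_capability:
        return "Green Light"
    if high_capability:          # high capability, low desire
        return "Red Light"
    if high_desire:              # high desire, low capability
        return "Opportunity"
    return "Low Priority"        # low on both axes

# Illustrative task scores (invented for demonstration only)
tasks = {
    "Scheduling meetings": (0.9, 0.8),
    "Writing performance reviews": (0.2, 0.7),
    "Summarising niche case law": (0.8, 0.3),
    "Ad-hoc hallway conversations": (0.1, 0.1),
}

for name, (desire, capability) in tasks.items():
    print(f"{name}: {classify_zone(desire, capability)}")
```

Even a rough mapping like this makes the misallocation finding concrete: any investment landing in the Red Light or Low Priority rows deserves a second look before funding.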
Key findings include:
46% of tasks had positive worker desire for automation, with the top motivation being time saved for higher-value work.
Arts, media, and education showed the strongest resistance to AI automation, driven by the intrinsic value of human creativity and expertise.
Worryingly, 41% of AI investments currently target Red Light or Low Priority zones, suggesting significant misallocation of resources.
Source: Shao et al., 2025
3. Designing AI Agents for Human Alignment
The research reveals a strategic imperative:
AI success depends as much on adoption desire as it does on model capability.
Here’s how leaders can act:
✅ Audit Task-Level Preferences
Go beyond capability mapping. Survey teams to identify which tasks they want automated versus supported. Alignment data is as critical as technical assessments.
✅ Invest in Opportunity Zones
Tasks with high desire but low capability represent R&D opportunities with strong built-in adoption demand. Prioritise these for next-stage pilots.
✅ Design for Augmentation First
Workers consistently prefer AI agents that collaborate rather than replace. Agents should enhance decision-making, creativity, and confidence, not diminish them.
✅ Rethink Skills and Structures
As AI takes over repetitive tasks, demand will shift towards interpersonal, organisational, and integrative skills. Training strategies must reflect this pivot.
The Bottom Line
The future of AI agents in work isn’t determined solely by what’s technically possible. It’s shaped by what people value, trust, and choose to use.
Otherwise, we risk building powerful systems no one wants: a costly failure in strategy, not technology.