The Quiet Crisis of AI Culture
- Janine Dormiendo
- Aug 25
- 3 min read
How Company Culture Can Unintentionally Sabotage AI Adoption

A UK Example: The Alan Turing Institute
In late 2024 and into 2025, the Alan Turing Institute, the UK’s national centre for data science and AI, found itself in a cultural crisis while pursuing strategic transformation. What began as a restructuring and vision realignment triggered widespread unrest. Staff raised concerns over leadership transparency, a shift in focus away from inclusive, public-interest research toward defence projects, and decisions that felt disconnected from the institute’s collaborative values.
By December 2024, nearly 100 employees had signed a letter expressing no confidence in leadership—pointing to a decline in morale, growing burnout, and a lack of trust. The response from leadership, emphasising commitment to integrity and honesty, felt rehearsed to many, reinforcing a sense of cultural distance.
This wasn’t a failure of technological capability—it was a fracture in the human fabric that holds innovation together: when cultural alignment falters, even institutions fuelled by intellectual capital can stall.
Culture Speaks Louder Than Strategy
Culture isn’t confined to strategy documents. It’s lived in everyday decisions, patterns of trust, and unspoken norms and rituals. At the Turing Institute, staff felt innovation was being redefined by governance shifts rather than grounding values. When those shifts didn’t resonate with the people charged with carrying them forward, AI progress slowed—not because of code, but because of connection.
What Often Stalls AI, Quietly
1. Disconnection Breeds Withdrawal. When vision no longer matches collective values, people disengage. At the institute, staff withdrew from active participation, and AI initiatives, even with strong funding, found themselves on shaky ground.
2. Vision Without Engagement Creates Resistance. Strategic change was met not with enthusiasm but with scepticism. Without inclusive dialogue, staff felt sidelined rather than engaged.
3. Lack of Trust Stops Progress. Leadership claims of transparency felt performative, and when that perception gap went unacknowledged, trust eroded. AI-driven services, even in research settings, depend on trust among collaborators and in the mission itself.
How Culture Can Welcome AI In, Gently
Integrating AI effectively isn’t just a technology rollout—it’s a cultural one. These practices can help ensure people feel aligned, valued, and motivated to engage:
Create Inclusive Spaces for Dialogue. Involve people early. At every level, invite input on how new technology or strategy reshapes purpose. Make people feel valued, and deal with fears and resistance head-on.
Prioritise Transparency Over Reassurance. When change is uncomfortable, honest communication—even about what’s uncertain—is more grounding than perfect messaging. People pick up on authenticity.
Narrate the Journey, Not Just the Vision. Shared experiences—about where the process wavered or where unexpected insight emerged—build collective ownership and adaptability. Include your people on that journey; make it their hero’s journey too.
Preparing People for an AI-Shaped Future
AI adoption isn’t only about systems—it’s about the people asked to work differently, learn differently, and lead differently. The organisations that navigate this well are those that invest not just in technology, but in the skills, confidence, and human capabilities that allow people to thrive alongside it.
To support this shift, we’ve created a free resource:
A practical framework to help you future-proof your workforce by building the blend of AI fluency and essential human skills—like critical thinking, collaboration, and emotional intelligence—that tomorrow’s work demands.
Because sustainable AI adoption is never just about what machine intelligence can do. It’s about what people are empowered to become.