
Psychological Safety and the Workplace AI Evolution

I co-created and co-hosted the Amplify AI: Fundamentals for People Leaders event on 28 March 2025 in London. It was a success for a combination of reasons: the attendees were engaged, the speakers and panellists shared their knowledge openly and without filter, and we set a very specific tone at the beginning of the event.


We started with a panel on Psychological Safety & Overcoming Resistance to AI, because that panel embodied the ethos of the event. We wanted every attendee to believe this was a space for curiosity over caution, a place where no one would feel embarrassed asking questions, exploring ideas, or admitting what they didn’t yet understand about AI.

We encouraged everyone to arrive with a ‘beginner’s mindset’: an openness to learn, share, and absorb the knowledge on offer. This is in contrast to the AI hype cycle, where self-proclaimed experts wish to hold us hostage to their knowledge.

“If your mindset is unprejudiced…it is open to everything. In the beginner’s mind, there are many possibilities, but in the expert’s mind, there are few.” - Shunryu Suzuki

Psychological Safety + AI

That belief - that it was safe to take interpersonal risks - is the kind of foundation we need more of in the evolving AI landscape. And once you create that kind of environment, people open up. You hear honest reflections, confessions of uncertainty, and shared excitement. You see clearly that everyone is in a different place on their AI journey. I spoke to a number of wonderful humans during the Amplify AI event, whose companies were at different stages of AI implementation, and what I observed was a wide range of human emotions and reactions to this current state.

Why? One of the reasons is that AI-induced job insecurity is very real, and being aware of the behaviours and outcomes that might result from it within your workforce will be key to delivering successful workplace and job-role change.


AI-induced job insecurity

AI-induced job insecurity is unlike traditional job insecurity, which results from economic uncertainty and organisational restructuring. Instead, it is the perceived threat that artificial intelligence tools and systems could:

  • Replace your role,

  • Reduce your influence or decision-making authority,

  • Automate key areas of your knowledge or responsibility.

A caveat here is that this isn’t always tied to actual job loss - it’s often the anticipation or feeling that AI will diminish an individual’s relevance or value at work.

Building on AI-induced job insecurity, three areas were helpful to me as I explored the wider context of its impact:


Social Exchange Theory

Social Exchange Theory (SET) is a lens for understanding how employees respond to the implementation of AI in organisations. SET suggests that relationships - whether between individuals or between employees and the organisation - are built on reciprocal exchanges. If employees perceive the AI implementation as beneficial (e.g., reducing mundane tasks, improving decision-making), they may reciprocate through positive behaviours like knowledge sharing. However, if AI threatens their status, autonomy, or job security, they may withhold cooperation or engage in protective behaviours.

In short, SET helps explain why some employees respond positively to AI tools while others resist, depending on the perceived costs and benefits in the exchange relationship with the organisation.


Knowledge Hiding Behaviour (KHB)

A growing body of research has identified a link between AI system implementation and knowledge hiding behaviours. These behaviours include:

  • Evasive hiding – Providing misleading information or promising help that never materialises.

  • Playing dumb – Claiming not to have the requested knowledge.

  • Rationalised hiding – Justifying withholding knowledge (e.g., confidential or not yet ready to share).

These behaviours are amplified when employees feel that AI systems monitor their work closely, automate knowledge-based tasks, or replace decision-making roles traditionally held by humans. The perceived threat from AI can lead to protective knowledge behaviours, where employees deliberately restrict access to their expertise to maintain control or relevance.

In short, the presence of AI tools can create a fear of knowledge exploitation, i.e., the concern that one's expertise will be captured, codified, and used by AI in ways that reduce personal value or job security.


Impact on Psychological Safety

As AI tools and systems are implemented, they can create a growing sense of threat that undermines psychological safety, especially when the rollout is not accompanied by clear communication and trust-building measures. As we all know, psychological safety is crucial for collaboration and innovation.

With this in mind, as People Leaders, we need to be aware of the following:

  • Decreased psychological safety when AI systems are perceived as opaque (“black box”) or used to surveil and judge employees.

  • Lower willingness to share knowledge in environments where AI is associated with redundancy risk or performance monitoring.

  • The need for managerial actions that explicitly support psychological safety, such as involving employees in AI design decisions, ensuring transparency, and reassuring them about the continued value of human input.


3 x States of AI implementation

Based on my research and the conversations I had with folks at Amplify AI, I started to think about the conditions and/or environment that would influence the degree to which psychological safety was present.


The result was a loose categorisation of 3 x States of AI implementation:


Analysis Paralysis

  • State description: The tech is there, the curiosity is bubbling, but IT compliance and risk teams are still in a holding pattern. So the tools sit, shiny and unused.

  • Tool(s) adoption: AI tools are installed, but they remain off-limits to employees.

  • Leadership visibility: Leadership is aware of the tools, but unclear on how - or when - they’ll be safely deployed.


Forming-to-Norming

  • State description: There’s a buzz of exploration, but also a lack of coordination. Strategy? Not quite. Guardrails? Still vague. But you can feel a culture trying to find its rhythm.

  • Tool(s) adoption: Teams are experimenting, some sprinting, others stumbling - all in silos.

  • Leadership visibility: Leadership sees movement, but often lacks a clear picture of what’s working, where it’s happening, or who’s leading the charge.


Shadow State (aka. Chaos Mode)

  • State description: Innovation is happening in the shadows, with little oversight and a whole lot of risk. Leadership has no visibility.

  • Tool(s) adoption: AI tools are everywhere, but no one really knows where or how they’re being used.

  • Leadership visibility: Leadership might assume adoption is low - or controlled - when in fact it’s neither.


Continuous learning journey

Amplify AI was a signpost on my own personal AI learning journey. The more I read, the more conversations I have, and the more time I invest in using the AI tools and systems available - both experimentally and intentionally - the more confident I am that my AI literacy will increase.

I intend to focus my continuous learning on three areas:

  1. Evolving the 3 x States of AI Implementation Assessment Framework: Through user research, deeper analysis, and collaboration with the People & Talent community to test its usefulness and refine its structure.

  2. Exploring the intersection of AI and psychological safety in the workplace: Examining how AI impacts trust, voice, and risk-taking at work, and identifying where opportunities or harms might emerge. This includes ongoing conversations with credible sources such as organisational psychologists, mental health researchers, and workplace wellbeing experts, including those with academic credentials like PhDs in Psychology.

  3. Creating a shared learning space for People & Talent Leaders: A trusted, safe community to ask questions, test ideas, and navigate the emerging complexities of AI together.
