AI for a radically respectful workplace

The role of AI in the workplace can expand to address biases and prejudices in the system.

The role of AI in the workplace is often limited to productivity and efficiency. The bigger opportunity is in behavioral training for systemic justice.

An interview with Kim Scott. Edited by Sadhana Balaji.

As artificial intelligence (AI) becomes mainstream, its application is focused primarily on productivity and efficiency: doing things faster, doing more things, doing things around the clock. While this is important to the growth of any organization, it overlooks a fundamentally transformative opportunity: AI’s ability to create a just organization, one that empowers people to do more impactful work.

Study after study has shown that when employees are not engaged, they don’t do their best work. And the foundation of such engagement is fairness and justice in the workplace.

The foundation of employee engagement — and employee success — is fairness and justice in the workplace.

In her book ‘Just Work: How to Root Out Bias, Prejudice, and Bullying to Build a Kick-ass Culture of Inclusivity,’ Kim Scott has written extensively about creating systemic justice at work. In this essay, we summarize salient points from a conversation with Kim to show how AI can help achieve a just workplace.

Before AI can help, it must understand the difference between bias, prejudice, and bullying. 

  • Bias is “not meaning it.” It’s mostly an unconscious thought.
  • Prejudice is “meaning it” — a conscious belief reflecting some kind of unfair or inaccurate stereotype.
  • Bullying is “being mean” and intending to cause harm, even if there isn’t necessarily a belief, conscious or unconscious, behind it.

Once AI is trained to differentiate these three obstacles to fairness and justice, it can help coach employees to be more inclusive. Kim imagines this working in several ways.

Rooting out bias with AI

The most common application of AI today is a chatbot. In this case, a bias-slaying bot can consistently support employees in weeding out bias. For instance, it can observe employee conversations, and when it identifies bias, it can respond with an ‘I’ statement, such as, “I don’t think you intended it that way; would you like to rephrase?” or “I don’t think you meant that the way it sounded; would you like to know why?” This can help employees identify their biases and learn how to overcome them.
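
As a minimal sketch, assuming a hard-coded keyword list stands in for a trained language model, such a bias-interrupting bot could look like the snippet below. The names (detect_bias, bias_interrupter) and the phrase list are hypothetical, not any real product’s API.

```python
import random
from typing import Optional

# Placeholder phrases; a trained model would learn these rather than hard-code them.
NON_INCLUSIVE_PHRASES = ["chairman", "manpower", "you guys"]

# 'I' statements, per the examples above, offered privately to the speaker.
I_STATEMENTS = [
    "I don't think you intended it that way; would you like to rephrase?",
    "I don't think you meant that the way it sounded; would you like to know why?",
]

def detect_bias(message: str) -> bool:
    """Stand-in for a real classifier: flags known non-inclusive phrasings."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in NON_INCLUSIVE_PHRASES)

def bias_interrupter(message: str) -> Optional[str]:
    """Return an 'I' statement if the message appears biased, otherwise None."""
    if detect_bias(message):
        return random.choice(I_STATEMENTS)
    return None

if __name__ == "__main__":
    print(bias_interrupter("We need more manpower for this launch."))
```

The hard part, of course, is the detection itself; the response template is the easy half.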

There are already tools today that do this in some way. For instance, Textio uses AI to provide inclusive language guidance on any written work, such as performance reviews, feedback, reports, or articles. It helps interrupt bias in real time, creating an overall culture of inclusivity.

Rooting out prejudice with AI

Prejudice is slightly more complex because there is a level of intent to it. In this case, the speaker or writer knows what they’re doing. So, the chatbot’s responses need to be different and tailored, such as an ‘It’ statement. The chatbot can say, “It is illegal to say that,” or “It is a violation of the code of conduct to speak that way.” This will help employees learn where the boundary lies between one person’s freedom to believe what they want and another person’s right not to have those beliefs imposed on them.

This is not easy to articulate, especially not for a bot. But that’s where the real challenge lies for organizations developing AI for this purpose. When we build products that can identify and address prejudice, we can make the act of ‘calling out’ private and empathetic, giving people the chance to reform themselves.
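
As a hedged sketch, not Kim’s or any vendor’s implementation, an ‘It’ statement response could be generated along these lines; the PrejudiceFinding fields, the policy reference, and the assumption that the message is delivered privately to the speaker are all illustrative.

```python
from dataclasses import dataclass

@dataclass
class PrejudiceFinding:
    quote: str             # the flagged remark (detected upstream, not shown here)
    policy_reference: str  # e.g., a section of the company code of conduct
    is_unlawful: bool      # whether the remark also crosses a legal line

def it_statement(finding: PrejudiceFinding) -> str:
    """Frame the response around rules, not the speaker's beliefs."""
    if finding.is_unlawful:
        return f"It is illegal to say that. (See {finding.policy_reference}.)"
    return (
        "It is a violation of the code of conduct to speak that way. "
        f"(See {finding.policy_reference}.)"
    )

if __name__ == "__main__":
    finding = PrejudiceFinding(
        quote="<flagged remark>",
        policy_reference="Code of Conduct, section 3.2",  # hypothetical reference
        is_unlawful=False,
    )
    print(it_statement(finding))  # sent as a private note, not a public call-out
```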

Rooting out bullying with AI

As a behavior in the workplace, bullying crushes creativity and collaboration. Therefore, there need to be consequences for it. An AI product can identify instances of bullying and inform the speaker. For example, it could say, “Kim, you’ve been talking more than your fair share of the time in this meeting,” or “That is aggressive language.”

Over time, it could also track and report bullying. At the end of every meeting, it could give you a report of how you treated others. This can be included in yearly performance reviews, making employees more accountable for their actions.
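
To illustrate, here is a rough sketch of what such per-meeting tracking could look like, assuming a transcript already annotated with speakers, durations, and upstream aggression flags. The Utterance fields and participant names are made up for the example, and, as the next paragraph notes, the flagging itself would need human review.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str
    seconds: float
    flagged_aggressive: bool = False  # set upstream by a classifier plus human review

def meeting_report(utterances: list[Utterance]) -> list[str]:
    """One summary line per participant: share of talk time and flagged remarks."""
    talk_time: dict[str, float] = defaultdict(float)
    flags: dict[str, int] = defaultdict(int)
    for u in utterances:
        talk_time[u.speaker] += u.seconds
        if u.flagged_aggressive:
            flags[u.speaker] += 1
    total = sum(talk_time.values()) or 1.0
    return [
        f"{speaker}: {100 * seconds / total:.0f}% of talk time, "
        f"{flags[speaker]} remark(s) flagged as aggressive"
        for speaker, seconds in talk_time.items()
    ]

if __name__ == "__main__":
    transcript = [
        Utterance("Kim", 300),
        Utterance("Alex", 60),
        Utterance("Kim", 240, flagged_aggressive=True),
    ]
    print("\n".join(meeting_report(transcript)))
```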

The challenge here is that aggression isn’t objective; it also depends on how others respond. For example, one person might be comfortable with high-volume debates, while another might need a gentler approach. AI might not yet be able to adapt to these nuances. But with some human attention, we can make things work much better than they currently do.

The primary advantage of AI intervention in rooting out bias, prejudice, and bullying is that it softens any defensiveness an employee might have. 

When someone is told they’re biased, prejudiced, or bullying, it induces a great deal of defensiveness. A non-judgmental bot can make the most difficult conversations simple and private. AI makes personalized coaching at scale not just possible but also effective by taking the emotions of at least one party out of the conversation.

However, AI’s possibilities don’t automatically make it problem-free. Most AI today doesn’t reveal its sources, so it’s possible that the bad patterns of human behavior are embedded in the AI as well. Moreover, most of today’s AI is text-based, even though the most important conversations happen face-to-face. AI isn’t ready for such conversations yet.

These are opportunities for technologists to build tools that help organizations further the pursuit of justice. This is not only emotionally or morally right but also practical for the business.

About Kim Scott

Kim is the author of Just Work: How to Root Out Bias, Prejudice, and Bullying to Build a Kick-ass Culture of Inclusivity and Radical Candor: Be a Kick-Ass Boss Without Losing Your Humanity, and co-founder of the company Radical Candor. Kim was a CEO coach at Dropbox, Qualtrics, Twitter, and other tech companies. She was a member of the faculty at Apple University and before that led the AdSense, YouTube, and DoubleClick teams at Google. Prior to that, Kim managed a pediatric clinic in Kosovo and started a diamond-cutting factory in Moscow. She lives with her family in Silicon Valley.
