AI & DEI (Part Two): How can AI disrupt the DEI industry?
Welcome back to our series on artificial intelligence and its impact on the Diversity, Equity, and Inclusion (DEI) industry. This week we’ll look at the benefits (and pitfalls) of leveraging AI in DEI, while also analyzing some of the disruptors who are applying this technology in the space.
AI isn’t just potentially discriminatory – sometimes it’s plain wrong
Last week, I wrote about the potential for AI technology to replicate and sustain existing systemic bias. The focus was necessarily on some of the bigger scandals surrounding the technology: from facial recognition AI that doesn’t “see” darker skin tones to hiring algorithms that discriminate against women and people of color.
My point was that any AI is only as good as the data that feeds it.
With the advent of ChatGPT and other generative AI tools, it’s important to remember that this still holds true. ChatGPT is “fed” by publicly available information on the internet. That includes all the biased, racist, sexist, homophobic, and xenophobic content on the web. What’s more, when ChatGPT can’t work out a reliable answer to a query, it’s been known to simply make things up.
All sorts of experiments have demonstrated these “hallucinations,” which is how the industry refers to AI producing incorrect, fabricated, or even abusive information. OpenAI, the company behind ChatGPT, has acknowledged these drawbacks. It advises users to build mitigations into any process that relies on ChatGPT, such as human review and additional context in their requests.
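To make that concrete, here’s a minimal sketch (in Python, using OpenAI’s chat API) of what those mitigations can look like in practice: the request carries extra context, and nothing ships without a human sign-off. The model name, prompt wording, and review flow are my own illustrative assumptions, not OpenAI’s prescribed recipe.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def draft_answer(question: str, context: str) -> str:
    """Ask the model, grounding it in caller-supplied context instead of letting it guess."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative choice of model
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using only the context provided. If the context is "
                    "insufficient, say so instead of guessing.\n\nContext: " + context
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

def answer_with_review(question: str, context: str) -> str:
    """Route every AI draft through a human reviewer before it is used anywhere."""
    draft = draft_answer(question, context)
    print(f"DRAFT ANSWER:\n{draft}\n")
    if input("Approve this draft? (y/n) ").strip().lower() == "y":
        return draft
    return input("Enter the corrected answer: ")
```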
Failures like these are well known from other high-stakes fields where AI promises much but doesn’t always deliver – like medicine. In one highly publicized case, an AI-powered tool appeared highly accurate at diagnosing skin cancer simply by analyzing medical images of lesions. But when engineers dug a little deeper, they found that the number one factor associated with a confirmed cancerous lesion was the presence of a ruler in the image. That’s because rulers are used in medical images to show scale, so the AI concluded they must indicate cancer.
In other words, in its learning environment, the machine wasn’t actually distinguishing between cancerous and benign lesions – it was making a statistical association between “ruler present in image” and “confirmed cancerous lesion”. The tool would therefore be useless in helping real doctors assess patients.
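If you want to see how easily that kind of statistical shortcut happens, here’s a toy sketch – my own illustration, not the actual medical study. A classifier trained on data where a “ruler present” flag happens to track the diagnosis will lean on the flag, and its accuracy drops sharply once that shortcut disappears in the real world.

```python
# Toy illustration of shortcut learning: a spurious "ruler present" flag tracks
# the label perfectly in training data but is unrelated to it in deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
lesion_signal = rng.normal(size=n)                        # weak genuine signal
labels = (lesion_signal + rng.normal(scale=2, size=n) > 0).astype(int)

ruler_train = labels                                      # training images: ruler appears iff cancerous
ruler_deploy = rng.integers(0, 2, size=n)                 # real world: ruler has nothing to do with the label

X_train = np.column_stack([lesion_signal, ruler_train])
X_deploy = np.column_stack([lesion_signal, ruler_deploy])

model = LogisticRegression().fit(X_train, labels)
print("accuracy while the shortcut holds:", model.score(X_train, labels))       # close to 1.0
print("accuracy once the ruler is meaningless:", model.score(X_deploy, labels))  # far lower
```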
Other stories are out there about racist or xenophobic content written by Chat-GPT, but the point is always the same: the data AI learns from has to be high quality and engineers have to be extremely careful about how AI draws conclusions. Human oversight is essential to both.
Luckily, the OpenAI team has moved quickly to address these issues with GPT-4, the latest model behind ChatGPT. It’s still not perfect, but GPT-4 is noticeably more accurate and careful in its answers. It is better at refusing potentially abusive requests and bakes caveats into the information it supplies so that users don’t blindly take it as fact.
This represents a massive leap forward for any business thinking about how AI can disrupt their industry. With even more reliable language processing, GPT-4 can be integrated into any system or process that requires content creation, comms, education, or learning.
Some innovators have seized the opportunity already. Duolingo, the well-known language learning app, has worked with OpenAI to launch tools that allow students to have conversations with AI in foreign languages. E-learning giant Khan Academy leverages the technology to help teachers design personalized lesson plans and give struggling students special attention wherever they most need it.
Speed, personalization, creativity, guidance – these would all be incredibly valuable benefits in any industry. But they strike me as particularly significant for Diversity, Equity, and Inclusion (DEI). It’s worth examining some of the current pain points in that industry to understand why.
The structural challenges faced by DEI
I’ve written previously about the difficult position Chief Diversity Officers (CDOs) can find themselves in. While companies have been aggressively hiring for the position since the summer of 2020 – and pledging large amounts of money to their DEI initiatives – the CDO role has some of the highest turnover in the C-suite.
There are many reasons for this. DEI is often not seen as business-critical, which makes it difficult for CDOs to secure budgets. Their teams are therefore often understaffed, leaving them with very few resources and even higher expectations. This inevitably leads to feelings of failure, inadequacy, or inefficiency – hence the high rates of attrition.
As a result, many organizations feel they’re behind on the DEI curve. In a study conducted by the HR Institute, less than a quarter of HR professionals deemed their companies’ DEI practices to be “advanced” or “expert”. And only 9% rated their organizations’ policies to be “very effective”.
Clearly, the issue doesn’t come down to CDOs themselves. Instead, we have to look to the structural issues that can hamstring an organization’s DEI ambitions.
First, outdated and ineffective methods continue to be favored. The same HR Institute study found that 69% of companies still rely on unconscious bias training. Another report, from Dandi, estimates that 80% of corporations globally paid for unconscious bias training in 2021, despite widespread agreement that these trainings are ineffective at best and, at worst, actively damaging, precisely because employees come to see them as ineffective drains on resources.
Perhaps as a result, other forms of training receive very little attention, including around DEI-specific communications. Having worked with many DEI experts and professionals in my career, I know transparent comms are absolutely essential to the success of any DEI program. But sensitivity around language and perception makes it difficult for leaders and organizations to communicate with their workforces confidently.
For many organizations, that comms problem is tied to an even more fundamental issue: low levels of reporting, analytics, and data capture. Only 20% of HR professionals in the HR Institute report say their organizations establish and measure DEI analytics to a “high or very high degree.” Securing that data is critical to getting leadership (and everyone else) to identify issues, set objectives, and buy into the value of DEI.
Finally, in speaking to my own network, I’m constantly reminded that every organization is at its own unique stage in terms of DEI maturity. According to one HBR study, around a third of companies describe their DEI efforts as “compliant” – meaning there’s no connection between DEI and the business beyond basic legal requirements. (Another 16% report they’re merely aware of DEI’s importance, but not yet doing anything about it.)
When we consider that the individuals responsible for championing DEI (ideally everyone, but especially leadership) are also at their own levels of DEI maturity, it’s easy to see how one-size-fits-all solutions simply won’t cut it.
Content, comms, data and analytics, education – all personalized to where each individual organization stands: these are the fundamental needs most businesses are struggling with in their DEI programs. And AI is uniquely poised to address them.
Current attempts at AI disruption in DEI
The market has already started responding to these needs. Innovators have been quick to capitalize on AI’s promise (and hype) to automate data management and reveal unseen insights. Similarly, with the recent advances in natural language processing (NLP), new players are helping organizations analyze their content and communications through a DEI lens.
Most of these solutions are tied to HR processes and their management. DEI is of course deeply tied to HR functions such as talent acquisition, retention, and promotion. Pay equity and parity fall into this shared space, as does representation in leadership positions.
Tracking these aspects of human capital is essential for any business claiming to make headway in DEI. This is especially true when it comes to sensitive issues like racial and gender pay gaps. Data analytics tools now exist that allow large organizations to track this information in real time.
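As a rough sketch of the kind of metric such tools surface, here’s how a median gender pay gap by department might be computed from an HR export. The column names and figures below are hypothetical, purely for illustration – they’re not tied to any particular vendor’s product.

```python
# Minimal sketch: median gender pay gap per department from a hypothetical HR
# export whose columns (department, gender, base_salary) are assumed for illustration.
import pandas as pd

def median_pay_gap(hr: pd.DataFrame) -> pd.Series:
    """Percentage by which median pay for women trails median pay for men, per department."""
    medians = hr.pivot_table(index="department", columns="gender",
                             values="base_salary", aggfunc="median")
    return ((medians["M"] - medians["F"]) / medians["M"] * 100).round(1)

hr = pd.DataFrame({
    "department":  ["Eng", "Eng", "Eng", "Sales", "Sales", "Sales"],
    "gender":      ["F",   "M",   "M",   "F",     "M",     "F"],
    "base_salary": [98000, 110000, 104000, 72000,  80000,   75000],
})
print(median_pay_gap(hr))  # positive values = women earn less at the median
```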
One of the pioneers in leveraging AI to do this is Diversio. With big name clients like Honda and Unilever, as well as a proprietary database of over 20,000 companies, Diversio is able to plug into an organization’s HR data and make recommendations that help work toward a “measurably inclusive” workforce. Diversio integrates with existing HR platforms and makes it easier for companies to see how well they’re doing through benchmarking tools and “inclusion scores.” The AI recommendation engine analyzes data for insights and advises users on what adjustments to make. These recommendations are based on a set of 1,200 vetted DEI programs and policies sourced from companies around the world.
This is DEI management as a one-stop shop. By responding to key organizational needs like real-time insights, benchmarking, and employee engagement, Diversio has been able to build an AI-powered analytics platform that helps businesses manage their “DEI performance”.
In the field of content and comms, AI has also been leveraged to help companies identify and mitigate unconscious bias. Textio is one of the premier examples of these tools. It acts as a sort of screening service and performance management tool, so that anyone holding the pen in an organization can have an AI assistant analyze whether the language they’re using is biased.
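To give a flavor of the general idea – and this is emphatically not Textio’s actual method, just a toy sketch – a language screen can be as simple as flagging words that research on gender-coded job ads has associated with skewed applicant pools. The word lists here are illustrative, not exhaustive.

```python
# Toy sketch of bias-language screening: flag gender-coded words in a draft so
# the writer can reconsider them. Word lists are illustrative only.
import re

MASCULINE_CODED = {"aggressive", "dominant", "ninja", "rockstar", "competitive"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative", "empathetic"}

def flag_coded_language(text: str) -> dict[str, list[str]]:
    words = set(re.findall(r"[a-z']+", text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

draft = "We need an aggressive, competitive rockstar to dominate the market."
print(flag_coded_language(draft))
# {'masculine_coded': ['aggressive', 'competitive', 'rockstar'], 'feminine_coded': []}
```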
It’s an incredible tool, which, like Diversio, is mostly geared towards HR functions and process or performance management. There are a few other examples of AI-powered tools in DEI, but overall my impression is that these are not true “disruptions”. They are elegant and useful innovations that provide real benefits, but they’re not shifting the status quo.
A truly disruptive vision for AI & DEI
Diversio and Textio help companies measure, track, or course-correct in real time. They are necessary add-ons and plug-ins to existing HR software that make it easier for companies to manage what’s going on internally. But who’s to say that guarantees greater success in effecting meaningful change?
And who are these tools really serving? There’s an old adage in management consulting that “what gets measured gets managed.” These tools will no doubt help large companies with robust DEI resources improve over time. But what about the organizations that sit much lower on the DEI maturity curve (per the HBR figures above, nearly 50% of companies)? It’s not enough to introduce tools for metrics if you don’t have the infrastructure and buy-in to effect change.
Assisting companies with change management is no mean feat. There are entire divisions devoted to it at the most prestigious consulting firms – the McKinseys and Bains of the world. Nevertheless, most of the engagements they sell are long and end in the delivery of a PowerPoint. I’ve heard plenty of in-house teams complain that big consultancies are not the answer to their DEI challenges.
There are also smaller DEI consulting firms and practitioners, whose knowledge and experience can be extremely valuable, though they typically have less reach. They have the benefit of being specialized, but they too are based on a rapidly aging model of consulting that relies on long engagements, deeply baked-in consultants, and recommendations instead of action.
To repeat: the benefits of AI are speed, creativity, personalization, and real-time guidance. These map directly onto what companies need when DEI is, at its core, a change management challenge.
If you can increase speed to action, you can accelerate experimentation and change. If you can outsource creativity, you can get to ideas for strategy, content, and comms faster and with fewer people. This makes problem-solving – a crucial component of change management – less daunting. If you can introduce elements of personalization, you can reach organizations at scale and account for variation in region, industry, and maturity. This leads to relevant and actionable guidance, which can also help accelerate under-resourced DEI departments.
Now, I’m not saying that ChatGPT is suddenly going to become the next DEI guru for organizations of all shapes and sizes. These issues are complex, and given all the pitfalls discussed above, it’s clear to me that we’re not ready to outsource DEI to machines quite yet.
DEI is still a very human issue. It is ultimately rooted in cultural perceptions and societal inequalities and injustices, which make it a highly sensitive topic. No one is arguing that we should have all our CEOs write their DEI-related comms through ChatGPT.
But we have to recognize that data and analytics don’t go far enough. What we really need to disrupt are the players who claim to effect meaningful and lasting organizational change around DEI. It’s my belief that no one has yet figured out how to leverage the greatest technological advance of our age to back up those claims.
No one is offering solutions that combine AI and human expertise to help companies move up that DEI maturity curve.
To which I say: game on.
Porter Braswell