Investing in artificial intelligence? Here are 3 things to do today to ensure ethical AI tomorrow.

As internally developed artificial intelligence systems move from the lab to deployment, the importance of creating unbiased, ethical systems is greater than ever. The challenge is that there is no simple solution for building ethical considerations into AI algorithms. But there are a few things you can do early on that help.

To get a sense of the many issues, consider a hypothetical AI-powered job candidate review system. Designed as an AI chatbot, the system vets potential candidates, analyzing a wide range of objective criteria to determine whether someone should pass the initial application stage. The company restricts the system’s access to data about a candidate’s gender, age, and ethnicity in order to promote a level playing field for candidates who might otherwise be overlooked.
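
To make that restriction concrete, here’s a minimal sketch of what it might look like in code. The Candidate record, its field names, and the screening_features helper are hypothetical stand-ins for illustration, not pieces of any real system.

    # A minimal sketch of restricting a vetting model's inputs.
    # The Candidate record and its fields are hypothetical.
    from dataclasses import dataclass, field

    # Attributes the vetting model is never allowed to see.
    PROTECTED_FIELDS = {"gender", "age", "ethnicity"}

    @dataclass
    class Candidate:
        name: str
        gender: str
        age: int
        ethnicity: str
        years_experience: float
        skills: list[str] = field(default_factory=list)

    def screening_features(candidate: Candidate) -> dict:
        """Return only the fields the model may use to score a candidate."""
        features = vars(candidate).copy()  # copy so the record itself is untouched
        # Drop protected attributes, plus the name, which can proxy for them.
        for attr in PROTECTED_FIELDS | {"name"}:
            features.pop(attr, None)
        return features

    applicant = Candidate("A. Example", "F", 34, "N/A", 6.5, ["python", "sql"])
    print(screening_features(applicant))
    # {'years_experience': 6.5, 'skills': ['python', 'sql']}

Dropping a column is only a first step, though: seemingly neutral fields such as postal code or graduation year can act as proxies for the very attributes you removed, which is one reason the oversight group discussed below matters.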

The system provides another benefit: reducing the chance that a hiring manager’s bad mood or distracted mindset will hurt a qualified applicant’s prospects of landing the job. Operating without emotion, the AI system evaluates candidates based on their experience, skillsets, and even their empathy levels. But the system doesn’t make decisions; it makes recommendations, passing along the most promising candidates to the company’s human managers, who then rely on their professional judgment to make the final call.

This simplified example highlights the need to anticipate non-technical factors like data bias in designing AI systems. Decisions made early on in the planning process help ensure your company successfully engineers an ethical AI system.

Indispensable human judgment

Data is vital to decision-making, and AI helps gather and parse that information. It can even generate reports and recommendations based on the objectives it’s been programmed to pursue. But a machine learning algorithm can’t tell you whether a decision is ethical or whether it will irreparably damage morale within your organization. It hasn’t spent years honing its business intuition – the kind of intuition that tells you that even though a decision looks right on paper, it would be a betrayal of your core client base.

That’s where human judgment enters the picture. There are a number of approaches to integrating AI into your decision-making strategy. Depending on how high the stakes are and the problem you’re trying to solve, you might outsource the job to AI but insist that a person review its findings before action is taken. Or you might identify key areas that will largely be the domain of AI, relieving you of the need to be involved in every decision related to that particular process.
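
As a rough sketch of how that division of labor might be wired up, the routing below keys the level of human involvement to the stakes of the decision. The Stakes tiers and the function names are illustrative assumptions, not a prescribed design.

    # A minimal sketch of stakes-based routing for AI recommendations.
    # Tiers and function names are hypothetical.
    from enum import Enum

    class Stakes(Enum):
        LOW = 1   # e.g., tagging and filing inbound applications
        HIGH = 2  # e.g., advancing or passing on a candidate

    def apply_automatically(recommendation: str) -> str:
        # In a real system this would trigger the action directly.
        return f"auto-applied: {recommendation}"

    def queue_for_human_review(recommendation: str) -> str:
        # In a real system this would land in a reviewer's work queue.
        return f"awaiting sign-off: {recommendation}"

    def route(recommendation: str, stakes: Stakes) -> str:
        """The AI drafts the call; a person owns anything consequential."""
        if stakes is Stakes.LOW:
            return apply_automatically(recommendation)
        return queue_for_human_review(recommendation)

    print(route("file under 'backend'", Stakes.LOW))
    print(route("advance to phone screen", Stakes.HIGH))

The point isn’t the specific tiers; it’s that the boundary between AI’s domain and “human sign-off required” becomes an explicit, reviewable policy rather than an accident of the code.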

But human judgment will remain central to business decisions for some time to come. In fact, judgment and interpersonal skills will be at a premium in the workforce of the future. As AI becomes an increasingly prominent tool in our professional arsenals, we must ensure that we’re using it ethically. Here are some ways to do that: 

1. Identify your company’s core values 

Systematizing your company’s core values starts with identifying and documenting them. Run a process that captures the values that have become central to your company culture. Writing in Harvard Business Review, one author drew a useful distinction between “values” as marketing and “values” as deeply held beliefs: “If you’re not willing to accept the pain real values incur, don’t bother going to the trouble of formulating a values statement.”

If an AI program suggests a course of action that makes sense on paper but not in the broader context of your organization’s long-term goals, you’ll need a strong internal compass to make the right call. Data is important, but you’re ultimately responsible for your decisions. When called upon to explain your actions, you can’t default to saying, “The AI made me do it.” Use the tools to gather information and add context to your decision-making process. But when you make a choice, human judgment should be in the mix.

2. Establish an AI oversight group 

Machine learning systems are only as good as the data we feed them, which immediately creates a challenge for AI system developers: humans are biased. Even the most fair-minded person carries unconscious biases. So without meaning to, developers can end up corrupting the very systems they’re designing to help us make more objective decisions.

To get around this problem, create an internal AI oversight group that periodically reviews your algorithms’ outputs and can address complaints about discrimination and bias. Then use the group’s findings to refine your AI-assisted approach to leadership.
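
To show what such a review might involve, here’s a minimal sketch of one check an oversight group could run: comparing selection rates across demographic groups in the system’s logged recommendations. The sample audit log and the 0.8 threshold (borrowed from the “four-fifths” rule of thumb used in US hiring guidance) are illustrative assumptions, not a complete fairness review.

    # A minimal sketch of a selection-rate disparity check.
    # The audit log and the threshold are illustrative.
    from collections import defaultdict

    def selection_rates(records):
        """records: (group, was_recommended) pairs from the audit log."""
        recommended = defaultdict(int)
        total = defaultdict(int)
        for group, was_recommended in records:
            total[group] += 1
            recommended[group] += int(was_recommended)
        return {g: recommended[g] / total[g] for g in total}

    def disparity_flags(records, threshold=0.8):
        """Flag groups whose rate falls below threshold x the highest rate."""
        rates = selection_rates(records)
        best = max(rates.values())
        if best == 0:  # nobody was recommended; nothing to compare
            return {g: False for g in rates}
        return {g: rate / best < threshold for g, rate in rates.items()}

    audit_log = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False), ("B", False)]
    print(selection_rates(audit_log))  # A ≈ 0.67, B = 0.25
    print(disparity_flags(audit_log))  # {'A': False, 'B': True}

One practical wrinkle: the vetting system in the earlier example never sees protected attributes, so the oversight group needs its own carefully governed access to that data in order to measure disparities at all.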

3. Use AI to facilitate better experiences for customers and employees alike 

Machine learning systems can generate powerfully personalized experiences—for both customers and employees. The World Economic Forum suggests that using AI ethically includes shifting employee performance metrics from output-based measurements to evaluating the creative value employees bring to the company: “Although there are roles under threat, there are also roles that will become needed more than ever. It’s more cost efficient to retrain current employees to fill the roles you need in the future than it is to hire new ones, and they are also more likely to be loyal to your organisation.”

Part of the power of AI tools in the office is that they free people from drudge work. As their roles become more dynamic, so, too, should your evaluation standards.

By addressing these three areas early in the development process, your company will be better positioned to build AI systems that reflect—and protect—its values, and to improve the experiences of your customers and employees alike.