AI ethics and adding trust to the equation

The size and scope of global digital activity are well documented. Growing consumer and business demand, in essentially every vertical market, for productivity, knowledge, and communication is pushing the world into a new paradigm of digital interaction, much like mobility, the Internet, and the PC before it, but on a far more massive scale. This new paradigm brings machine intelligence to virtually every aspect of our digital world, from conversational interfaces to robot soldiers, autonomous vehicles, deepfakes, virtual tutoring systems, shopping and entertainment recommendations, and even minute-by-minute cryptocurrency price predictions.

The adoption of new technologies typically follows a fairly predictable S-curve over their lifecycles: from infancy to rapid growth, maturity, and ultimately decline. With machine learning, in many ways, we are still in the early stages. This is especially the case for artificial general intelligence (AGI), which aims at machine intelligence that is human-like in terms of self-reasoning, planning, learning, and conversing. In science fiction, AGI experiences consciousness and sentience. Although we may be many years away from the sci-fi version of human-like systems, today’s advanced machine learning is creating a step change in the interaction between people, machines, and services. At Entefy, we refer to this new S-curve as universal interaction. It is important to note that this step change impacts nearly every facet of our society, making ethical development and practices in AI and machine learning a global imperative.

Ethics

Socrates’ teachings about ethics and standards of conduct nearly 2,500 years ago have had a lasting impact on our history ever since. The debate about what constitutes “good” or “bad,” and, at times, the blurry line between them, is frequent and ongoing. And over millennia, disruptive technologies have shaped this debate with each significant new invention and discovery.

Today, ethical standards are in place across many industries. In most cases, these principles exist to protect lives and liberty and to ensure integrity in business. Examples include the Law Enforcement Oath taken by local and state police officers, the oath attorneys take as officers of the court, professional codes of ethics and codes of conduct, and the mandatory oaths sworn by federal employees and members of the armed forces.

The Hippocratic Oath in medicine is another well-known example. Although the specific pledges vary by medical school, they generally recognize the unique role doctors play in their patients’ lives and delineate a code of ethics to guide physicians’ actions. The modern version of the Hippocratic Oath takes guidance from the Declaration of Geneva, adopted by the World Medical Association (WMA). Among other ethical commitments, the Physician’s Pledge states:

“I will not permit considerations of age, disease or disability, creed, ethnic origin, gender, nationality, political affiliation, race, sexual orientation, social standing, or any other factor to intervene between my duty and my patient.”

This clause is striking for its relevance to a different set of technology fields that are still in their early phases today: AI, machine learning, hyperautomation, and cryptocurrency. With AI systems already at work in many areas of life and business, from medicine to criminal justice to surveillance, perhaps it’s not surprising that members of the AI and data science community have proposed an algorithm-focused version of the Hippocratic oath. “We have to empower the people working on technology to say ‘Hold on, this isn’t right,’” said DJ Patil, U.S. Chief Data Scientist in President Obama’s administration. The proposed oath’s 20 core principles include ideas such as “Bias will exist. Measure it. Plan for it.” and “Exercise ethical imagination.”

The case for trustworthy and ethical AI

The need for professional responsibility in the field of artificial intelligence cannot be overstated. There are many high-profile cases of algorithms exhibiting biased behavior resulting from the data used in their training. The examples that follow add weight to the argument that AI ethics are not just beneficial, but essential:

  1. Data challenges in predictive policing. AI-powered predictive policing systems are already in use in cities including Atlanta and Los Angeles. These systems leverage historic demographic, economic, and crime data to predict specific locations where crime is likely to occur. So far so good. The ethical challenges of these systems became clear in a study of one popular crime prediction tool. The predictive policing system, developed by the Los Angeles Police Department in conjunction with university researchers, was shown to worsen the already problematic feedback loop in policing and arrests in certain neighborhoods: more predicted crime leads to more patrols, which produce more recorded arrests, which in turn feed the next round of predictions. An attorney from the Electronic Frontier Foundation said, “If predictive policing means some individuals are going to have more police involvement in their life, there needs to be a minimum of transparency.”
  2. Unfair credit scoring and lending. Operating on the premise that “all data is credit data,” machine learning systems are being designed across the financial services industry to determine creditworthiness using not only traditional credit data but also social media profiles, browsing behaviors, and purchase histories. The goal on the part of a bank or other lender is to reduce risk by identifying the individuals or businesses most likely to default. Research into the results of these systems has identified cases of bias, such as two businesses of similar creditworthiness receiving different scores due to the neighborhood in which each business is located.
  3. Biases introduced into natural language processing. Computer vision and natural language processing (NLP) are subfields of artificial intelligence that give computer systems digital eyes, ears, and voices. Keeping human bias out of those systems is proving to be challenging. One Princeton study of AI systems that leverage information found online showed that the biases people exhibit can make their way into AI algorithms via the systems’ use of Internet content. The researchers observed “that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.” This matters because other solutions and systems often use machine learning models trained on similar types of datasets (a minimal sketch of how such bias is measured follows this list).
  4. Limited effectiveness of health care diagnosis. There is limitless potential for AI-powered systems to improve patients’ lives using trustworthy and ethical AI. Entefy has written extensively on the topic, including analyses of 9 paths to AI-powered affordable health care, how machine learning can outsmart coronavirus, improving the relationship between patients and doctors, and AI-enabled drug discovery and development.

    The ethical AI considerations in the health care industry emerge from the data itself and whether it includes biases tied to variability in the general population’s access to and quality of health care. Data from past clinical trials, for instance, is likely to be far less diverse than the face of today’s patient population. Said one researcher, “At its core, this is not a problem with AI, but a broader problem with medical research and healthcare inequalities as a whole. But if these biases aren’t accounted for in future technological models, we will continue to build an even more uneven healthcare system than what we have today.”
  5. Impaired judgment in the criminal justice system. AI is performing a number of tasks for courts, such as supporting judges in bail hearings and sentencing. One study of algorithmic risk assessment in criminal sentencing revealed the need to remove bias from these systems. Examining the risk scores of more than 7,000 people arrested in Broward County, Florida, the study concluded that the system was not only inaccurate but plagued with biases. For example, it was only 20% accurate in predicting future violent crimes and twice as likely to inaccurately flag African-American defendants as likely to commit future crimes (the second sketch after this list shows how such an error-rate gap is measured). Yet these systems contribute to sentencing and parole decisions.
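
To make the semantic bias in item 3 more concrete, here is a minimal sketch of how word-embedding associations can be measured, in the spirit of the word-embedding association tests used in the Princeton study. The tiny 2-D vectors are hypothetical placeholders chosen purely for illustration; a real analysis would load pretrained embeddings such as GloVe or word2vec.

```python
# Minimal sketch: measure how strongly target words associate with one
# attribute set versus another, via cosine similarity of embeddings.
# The 2-D vectors below are hypothetical; a real test would use
# pretrained embeddings such as GloVe or word2vec.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(vec, attrs_a, attrs_b):
    # Mean similarity to attribute set A minus mean similarity to set B.
    return (np.mean([cosine(vec, a) for a in attrs_a])
            - np.mean([cosine(vec, b) for b in attrs_b]))

# Hypothetical toy embeddings for illustration only.
emb = {
    "engineer": np.array([0.9, 0.1]),
    "nurse":    np.array([0.2, 0.8]),
    "he":       np.array([1.0, 0.0]),
    "she":      np.array([0.0, 1.0]),
}

for word in ("engineer", "nurse"):
    score = association(emb[word], [emb["he"]], [emb["she"]])
    print(f"{word}: {score:+.2f} (positive leans 'he', negative leans 'she')")
```

A nonzero score means the occupation word sits closer to one gendered attribute set than the other; embeddings trained on ordinary Internet text routinely show such gaps, which downstream systems then inherit.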

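Similarly, the disparity described in item 5 is typically quantified by comparing error rates across groups. The sketch below computes a per-group false positive rate, that is, the share of people flagged high-risk who did not go on to reoffend. The records are fabricated for illustration and are not the Broward County data referenced in the study.

```python
# Sketch: compare false positive rates of a risk flag across groups.
# Records are fabricated for illustration only.
from collections import defaultdict

# Each record: (group, flagged_high_risk, reoffended)
records = [
    ("group_a", True,  False), ("group_a", True,  True),
    ("group_a", False, False), ("group_a", True,  False),
    ("group_b", True,  False), ("group_b", False, False),
    ("group_b", False, True),  ("group_b", False, False),
]

false_positives = defaultdict(int)   # flagged but did not reoffend
non_reoffenders = defaultdict(int)   # all who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if flagged:
            false_positives[group] += 1

for group in sorted(non_reoffenders):
    fpr = false_positives[group] / non_reoffenders[group]
    print(f"{group}: false positive rate {fpr:.0%}")
```

A large gap between groups’ false positive rates, such as one group being flagged incorrectly at twice the rate of another, is exactly the kind of bias the Broward County study reported.
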
Ensuring AI ethics in your organization

The topic of ethical AI is no longer purely philosophical. It is being shaped and legislated to protect consumers, businesses, and international relations. According to a White House fact sheet, “the United States and European Union will develop and implement AI systems that are innovative and trustworthy and that respect universal human rights and shared democratic values, explore cooperation on AI technologies designed to enhance privacy protections, and undertake an economic study examining the impact of AI on the future of our workforces.”

The European Commission has already drafted “The Ethics Guidelines for Trustworthy Artificial Intelligence (AI)” to promote ethical principles for organizations, developers, and society at large. To bring trustworthy AI to your organization, begin with structuring AI initiatives in ways that are lawful, ethical, and robust. Ethical AI demands respect for applicable laws and regulations as well as ethical principles and values, while making sure the AI system itself is resilient and safe. In its guidance, the European Commission presents the following 7 requirements for trustworthy AI:

  1. Human agency and oversight
    Including fundamental rights, human agency and human oversight
  2. Technical robustness and safety
    Including resilience to attack and security, fall back plan and general safety, accuracy, reliability and reproducibility
  3. Privacy and data governance
    Including respect for privacy, quality and integrity of data, and access to data
  4. Transparency
    Including traceability, explainability and communication
  5. Diversity, non-discrimination and fairness
    Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation
  6. Societal and environmental wellbeing
    Including sustainability and environmental friendliness, social impact, society and democracy
  7. Accountability
    Including auditability, minimisation and reporting of negative impact, trade-offs and redress

Be sure to read Entefy’s previous article on actionable steps to include AI ethics in your digital and intelligence transformation initiatives. Also, brush up on AI and machine learning terminology with our 53 Useful terms for anyone interested in artificial intelligence.