June 5, 2018 | Entefy

5 Reasons why the world needs ethical AI

In the U.S., 98% of medical students take a pledge commonly referred to as the Hippocratic oath. The specific pledges vary by medical school and bear little resemblance to the 2,500-year-old oath attributed to the Greek physician Hippocrates. Modern pledges recognize the unique role doctors play in their patients’ lives and delineate a code of ethics to guide physicians’ actions. One widely used modern oath states:

“I will not permit considerations of age, disease or disability, creed, ethnic origin, gender, nationality, political affiliation, race, sexual orientation, social standing, or any other factor to intervene between my duty and my patient.”

This clause is striking for its relevance to a different set of fields that today are still in their infancy: AI, machine learning, and data science. Data scientists are technical professionals who use machine learning and other techniques to extract knowledge from datasets. With AI systems already at work in practically every area of life, from medicine to criminal justice to surveillance, data scientists are key gatekeepers to the data powering the systems and solutions shaping daily life.

So it’s perhaps not surprising that members of the data science community have proposed an algorithm-focused version of the Hippocratic oath. “We have to empower the people working on technology to say ‘Hold on, this isn’t right,’” said DJ Patil, the U.S. chief data scientist under President Obama. The proposed oath’s 20 core principles include ideas like “Bias will exist. Measure it. Plan for it.” and “Exercise ethical imagination.” The full text is posted on GitHub.

The need for professional responsibility in data science is evident in several high-profile cases of algorithms exhibiting biased behavior rooted in the data used to train them. The examples that follow add weight to the argument that ethical AI systems are not just beneficial but essential.

1. Data challenges in predictive policing

AI-powered predictive policing systems are already in use in cities including Atlanta and Los Angeles. These systems leverage historical demographic, economic, and crime data to predict specific locations where crime is likely to occur. So far so good. The ethical challenges of these systems became clear in a study of one popular crime prediction tool. PredPol, the predictive policing system developed by the Los Angeles Police Department in conjunction with university researchers, was shown to worsen the already problematic feedback loop present in policing and arrests in certain neighborhoods. “If predictive policing means some individuals are going to have more police involvement in their life, there needs to be a minimum of transparency. Until they do that, the public should have no confidence that the inputs and algorithms are a sound basis to predict anything,” said one attorney from the Electronic Frontier Foundation.
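To make that feedback loop concrete, here is a minimal toy simulation in Python. Everything in it is hypothetical: the two districts, the starting counts, and the proportional patrol-allocation rule are invented for illustration and are not PredPol’s actual model.

```python
import random

random.seed(0)

# Two hypothetical districts with the SAME underlying crime rate.
# District A starts with slightly more *recorded* crime (for example,
# due to historically heavier patrolling), which seeds the loop.
true_rate = {"A": 0.10, "B": 0.10}   # identical actual crime probability
recorded = {"A": 12, "B": 10}        # historical recorded incidents
patrols_per_day = 20

for day in range(200):
    total = recorded["A"] + recorded["B"]
    for district in ("A", "B"):
        # Patrols are allocated in proportion to recorded crime ...
        patrols = round(patrols_per_day * recorded[district] / total)
        # ... and crime can only be recorded where patrols are present,
        # so more patrols lead to more recorded incidents even though
        # the underlying rates are identical.
        recorded[district] += sum(
            random.random() < true_rate[district] for _ in range(patrols)
        )

print(recorded)  # recorded counts drift further apart despite identical true rates
```

The gap in recorded incidents grows even though nothing about the districts’ actual crime differs, which is the transparency concern the EFF attorney raises: without visibility into the inputs, the output can look like objective prediction.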

2. Unfair credit scoring and lending

Operating on the premise that “all data is credit data,” the financial services industry is designing machine learning systems that determine creditworthiness using not only traditional credit data, but also social media profiles, browsing behaviors, and purchase histories. The goal for a bank or other lender is to reduce risk by identifying the individuals or businesses most likely to default. Research into the results of these systems has identified cases of bias in which, for example, two businesses of similar creditworthiness receive different scores because of the neighborhoods they are located in.
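One simple way to surface this kind of disparity is to audit the model directly: score two applicants who are identical except for the location field. The sketch below is hypothetical end to end; the features, the synthetic data, and the scikit-learn logistic regression stand in for whatever proprietary model a lender actually uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

# Hypothetical features: revenue (in $K), years in business, prior
# delinquencies, and a binary neighborhood indicator.
revenue = rng.normal(500, 150, n)
years = rng.integers(1, 20, n)
delinquencies = rng.poisson(0.5, n)
neighborhood = rng.integers(0, 2, n)

# Synthetic default labels that happen to correlate with neighborhood
# (for example, because of past lending practices), not with business quality.
default = (
    rng.random(n) < (0.10 + 0.08 * neighborhood + 0.05 * (delinquencies > 1))
).astype(int)

X = np.column_stack([revenue, years, delinquencies, neighborhood])
model = LogisticRegression(max_iter=1000).fit(X, default)

# Two businesses identical in every respect except neighborhood.
applicant_x = [[500, 10, 0, 0]]
applicant_y = [[500, 10, 0, 1]]
print("P(default), neighborhood 0:", model.predict_proba(applicant_x)[0, 1])
print("P(default), neighborhood 1:", model.predict_proba(applicant_y)[0, 1])
# If the two probabilities differ materially, location alone is moving the score.
```

When location acts as a proxy for protected characteristics, that gap is exactly the kind of bias the research describes.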

3. Biases introduced into natural language AI

The artificial intelligence technologies of natural language processing and computer vision are what give computer systems digital eyes, ears, and voices. Keeping human bias out of those systems is proving to be challenging. One Princeton study of AI systems that learn from online text demonstrated that the same biases people exhibit make their way into AI algorithms through the systems’ use of Internet content. The researchers observed, “Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.” This is significant because the same text corpora are often used to train the machine learning models embedded in other products and services.
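The kind of measurement behind that finding is an association test over word vectors: check whether one set of target words sits systematically closer to “pleasant” attribute words than to “unpleasant” ones. The sketch below shows only the arithmetic; the three-dimensional vectors are invented, whereas the researchers analyzed full pretrained embeddings built from web text.

```python
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, attrs_a, attrs_b):
    # Mean similarity to attribute set A minus mean similarity to set B.
    return (np.mean([cosine(word_vec, v) for v in attrs_a])
            - np.mean([cosine(word_vec, v) for v in attrs_b]))

# Toy embedding table with invented values.
E = {
    "flower":   np.array([0.9, 0.1, 0.2]),
    "insect":   np.array([0.1, 0.9, 0.3]),
    "pleasant": np.array([0.8, 0.2, 0.1]),
    "awful":    np.array([0.2, 0.8, 0.4]),
}

for target in ("flower", "insect"):
    score = association(E[target], [E["pleasant"]], [E["awful"]])
    print(f"{target}: lean toward 'pleasant' = {score:+.3f}")
# A consistent gap between target groups is the statistical signature of
# the human-like biases the Princeton researchers found in web-trained embeddings.
```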

4. Limited effectiveness of healthcare diagnosis

There is limitless potential for AI-powered healthcare systems to improve patients’ lives. Entefy has written extensively on the topic, including this analysis of 9 paths to AI-powered affordable healthcare. The ethical AI considerations in the healthcare industry emerge from the data that’s available to train machine learning systems. That data has a legacy of biases tied to variability in the general population’s access to and quality of healthcare. Data from past clinical trials, for instance, is likely to be far less diverse than the face of today’s patient population. Said one researcher, “At its core, this is not a problem with AI, but a broader problem with medical research and healthcare inequalities as a whole. But if these biases aren’t accounted for in future technological models, we will continue to build an even more uneven healthcare system than what we have today.”
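A basic safeguard is to evaluate a diagnostic model per subgroup rather than only in aggregate. The sketch below uses a made-up handful of test results to compare sensitivity, the share of actual positives the model catches, across two hypothetical demographic groups.

```python
from collections import defaultdict

# (group, true_label, predicted_label) triples from a hypothetical test set.
results = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

true_pos = defaultdict(int)   # positives the model correctly detected, per group
positives = defaultdict(int)  # actual positives, per group

for group, truth, predicted in results:
    if truth == 1:
        positives[group] += 1
        if predicted == 1:
            true_pos[group] += 1

for group in sorted(positives):
    sensitivity = true_pos[group] / positives[group]
    print(f"{group}: sensitivity = {sensitivity:.2f}")
# A gap like the one here (0.67 vs. 0.33) is the kind of inequity that a
# less diverse training set can quietly produce.
```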

5. Impaired judgment in criminal justice sentencing

Advanced artificial intelligence systems are at work in courtrooms performing tasks like supporting judges in bail hearings and sentencing. One study of algorithmic risk assessment in criminal sentencing revealed how much more work is needed to remove bias from some of the systems supporting the wheels of justice. Examining the risk scores of more than 7,000 people arrested in Broward County, Florida, the study concluded that the system was not only inaccurate but plagued with biases. For instance, it was only 20% accurate in predicting future violent crimes, and it incorrectly flagged African-American defendants as likely to commit future crimes at roughly twice the rate of white defendants. Yet these systems contribute to sentencing and parole decisions.
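The disparity described in that study is, at its core, a gap in false positive rates: among people who did not go on to reoffend, how often was each group flagged as high risk? Here is a minimal sketch of that calculation on hypothetical records, not the actual Broward County data.

```python
from collections import defaultdict

# (group, flagged_high_risk, reoffended) for a hypothetical cohort.
records = [
    ("group_a", True, False), ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, False), ("group_b", False, False), ("group_b", True, True),
    ("group_b", False, False), ("group_b", True, False),
]

false_pos = defaultdict(int)  # flagged high risk but did not reoffend
negatives = defaultdict(int)  # all who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if flagged:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.2f}")
# A persistent gap between groups on real risk-score data is what the study
# characterized as bias against African-American defendants.
```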

Entefy previously examined specific action steps companies can take to develop ethical AI and help ensure unbiased automation.
