Navigating the labyrinth of fast-evolving AI regulation

The world stands at the threshold of a new era, one where artificial intelligence (AI) is poised to revolutionize every facet of human life. From healthcare to education, finance, retail, entertainment, and supply chains, AI promises to reshape our experiences in ways both subtle and profound. Yet, with such immense power comes an equally immense responsibility: to ensure that AI is developed and deployed responsibly, ethically, and in a manner that benefits all of humanity.

One of the most critical tools in this endeavor is effective regulation and guardrails designed to safeguard against individual and societal harm. Governments around the world are grappling with the complex and multifaceted challenge of crafting regulatory frameworks that foster innovation while mitigating the potential risks associated with AI and, more specifically, artificial general intelligence (AGI). This task is akin to navigating a labyrinth, where each turn presents new challenges and opportunities.

The regulatory landscape for building trustworthy AI

Building trustworthy AI involves addressing various concerns such as algorithmic biases, data privacy, transparency, accountability, and the potential societal impacts of AI. At present, multiple corporate and governmental initiatives are underway to create ethical guidelines, codes of conduct, and regulatory frameworks that promote fairness, accountability, and transparency in AI development and deployment. Collaborative efforts between industry leaders, policymakers, ethicists, and technologists aim to embed ethical considerations into the entire AI lifecycle, fostering the creation of AI systems that benefit society while respecting fundamental human values and rights. The goal is to navigate the complexities of AI advancements while upholding principles that prioritize human well-being and ethical standards.

Increasingly, large corporations and government entities alike are taking key steps aimed at protecting consumers and society at large. Perhaps leading the charge is the European Union, which has taken a bold step towards comprehensive regulation in the field with its EU AI Act. This legislation, the first of its kind on a global scale, establishes a risk-based approach to AI governance. By classifying AI systems into four risk tiers based on their potential impact, the Act imposes varying levels of oversight, promoting responsible development while encouraging innovation.

Across the Atlantic, the United States has taken a more decentralized approach to AI regulation. Rather than a single national law, the U.S. is relying on a patchwork of guidelines issued by different states. This fragmented approach can lead to inconsistencies and uncertainties, potentially hindering responsible AI development. In this regard, the U.S. Chamber of Commerce has raised concerns, stating that such a patchwork approach to AI regulation “threatens to slow the realization of [AI] benefits and stifle innovation, especially for small businesses that stand to benefit the most from the productivity boosts associated with AI.”

Here are several specific examples of initiatives worldwide that have emerged to help ensure ethical AI development:

  • As an early leader in this area, the European Commission drafted “The Ethics Guidelines for Trustworthy Artificial Intelligence (AI)” to promote ethical principles for organizations, developers, and society at large. On December 8, 2023, after intense negotiations among policymakers, the European Union reached agreement on its landmark AI legislation, the AI Act. This agreement clears the way for the most ambitious set of principles yet to help control the technology. “The proposed regulations would dictate the ways in which future machine learning models could be developed and distributed within the trade bloc, impacting their use in applications ranging from education to employment to healthcare.” 
  • The first comprehensive regulatory framework for AI was proposed in the EU in April 2021 and is expected to be adopted in 2024. This EU Artificial Intelligence Act represents the first official regulation in the field of AI aimed at protecting people’s rights, health, and safety. The new rules will categorize AI systems by risk level and prohibit certain practices, with full bans on predictive policing and biometric surveillance, mandatory disclosure requirements for generative AI systems, and protections against AI systems used to sway elections or influence voters.
  • According to a joint paper, Germany, France, and Italy have reached agreement on the treatment of AI and how it should be regulated. “The three governments support ‘mandatory self-regulation through codes of conduct’ for so-called foundation models of AI, which are designed to produce a broad range of outputs.”
  • The Organisation for Economic Co-operation and Development (OECD) AI Principles call for accountability and responsibility in developing and deploying AI systems. These principles emphasize human-centered values and transparency, providing guidelines for policymakers, developers, and users.
  • Big tech, including Amazon, Google, Microsoft, and Meta, has agreed to meet a set of AI safeguards. President Biden recently “announced that his administration has secured voluntary commitments from seven U.S. companies meant to ensure that their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of the next generation of AI systems.”
  • In 2023, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) released version 1.0 of its AI Risk Management Framework (AI RMF). “The AI RMF follows a direction from Congress for NIST to develop the framework and was produced in close collaboration with the private and public sectors. It is intended to adapt to the AI landscape as technologies continue to develop, and to be used by organizations in varying degrees and capacities so that society can benefit from AI technologies while also being protected from its potential harms.”

Key challenges in AI regulation

Delving deeper into the intricacies of AI regulation reveals a set of complex challenges ahead:

Risk Assessment. Effectively managing the risks associated with AI requires robust risk assessment frameworks. Determining the level of risk posed by different AI systems is a complex task, demanding a nuanced and objective evaluation of potential harms.

Data Privacy and Security. AI’s dependence on personal or proprietary data raises significant concerns about privacy and security. Implementing robust data protection frameworks is crucial for ensuring user privacy and safeguarding against data breaches or criminal misuse.

Transparency and Explainability. Building trust in AI requires transparency and explainability in how these intelligent systems make decisions. In many cases, advanced deep learning models, such as large language models (LLMs), are categorized as black boxes. Black box AI is a type of artificial intelligence system so complex that its decision-making or internal processes cannot be easily explained by humans, making it challenging to assess how its outputs were created. Regulations mandating transparency are essential for responsible AI development and for ensuring accountability for potential harm.
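To make explainability concrete, here is a minimal sketch, assuming a scikit-learn-style classifier trained on tabular data, of how a post-hoc probe such as permutation importance can surface which inputs a black-box model relies on. The dataset and model below are illustrative choices, not a prescribed approach.

```python
# Minimal sketch: probing a black-box classifier with permutation importance.
# The dataset and model are illustrative; any fitted estimator with tabular
# inputs could be examined the same way.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load an example tabular dataset and train an opaque ensemble model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# large drops indicate features the model relies on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Probes like this do not open the black box itself, but they give developers, auditors, and regulators a quantitative starting point for questioning a model’s behavior.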

Algorithmic Bias and Discrimination. This is the invisible enemy in AI. Intelligent systems can inadvertently perpetuate harmful biases based on factors such as race, gender, and socioeconomic status. Addressing this issue necessitates policies and regulations that promote fairness, transparency, and accountability in the development and deployment of algorithmic models.
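As one concrete illustration, the following minimal sketch computes two common group-fairness metrics, the demographic parity difference and the disparate impact ratio, from a model’s decisions. The toy predictions, group labels, and the 0.8 reference threshold (loosely echoing the “four-fifths rule” used in U.S. employment contexts) are illustrative assumptions, not regulatory requirements.

```python
# Minimal sketch: measuring group-level disparity in a model's positive decisions.
# The predictions and group labels below are synthetic toy data.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])            # 1 = favorable decision
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def selection_rate(decisions: np.ndarray) -> float:
    """Share of cases receiving the favorable outcome."""
    return float(decisions.mean())

rate_a = selection_rate(y_pred[group == "A"])
rate_b = selection_rate(y_pred[group == "B"])

demographic_parity_diff = abs(rate_a - rate_b)
disparate_impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"demographic parity difference: {demographic_parity_diff:.2f}")
print(f"disparate impact ratio: {disparate_impact_ratio:.2f}")  # below ~0.8 often warrants review
```

In practice these metrics would be computed on large evaluation sets and weighed alongside other fairness criteria, since no single number captures discrimination.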

The case for ethical AI

The need for professional responsibility in the field of artificial intelligence cannot be overstated. There are many high-profile cases of algorithms exhibiting biased behavior resulting from the data used in their training. The examples that follow add more weight to the argument that AI ethics are not just beneficial, but essential:

  • Data challenges in predictive policing. AI-powered predictive policing systems are already in use in cities including Atlanta and Los Angeles. These systems leverage historic demographic, economic, and crime data to predict specific locations where crime is likely to occur. However, the ethical challenges of these systems became clear in a study of one popular crime prediction tool. The predictive policing system, developed by the Los Angeles Police Department in conjunction with university researchers, was shown to worsen the already problematic feedback loop present in policing and arrests in certain neighborhoods. An attorney from the Electronic Frontier Foundation said, “If predictive policing means some individuals are going to have more police involvement in their life, there needs to be a minimum of transparency.”
  • Unfair credit scoring and lending. Operating on the premise that “all data is credit data,” machine learning systems are being designed across the financial services industry to determine creditworthiness using not only traditional credit data, but also social media profiles, browsing behaviors, and purchase histories. The goal on the part of a bank or other lender is to reduce risk by identifying the individuals or businesses most likely to default. Research into the results of these systems has identified cases of bias, such as two businesses of similar creditworthiness receiving different scores due to the neighborhood in which each business is located. According to Deloitte, the bias in AI can come from the input data, how the engineers may impact the model training, and in post-training where there is “continuous learning drift towards discrimination.”
  • Biases introduced into natural language processing. Computer vision and natural language processing (NLP) are subfields of artificial intelligence that give computer systems digital eyes, ears, and voices. Keeping human bias out of those systems is proving to be challenging. One Princeton study into AI systems that leverage information found online showed that the biases people exhibit can make their way into AI algorithms via the systems’ use of Internet content. The researchers observed “that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.” This matters because other solutions and systems often use machine learning models that were trained on similar types of datasets.

    Today, large language models (LLMs), pre-trained on massive datasets with billions of parameters, are more powerful than ever and can generate new content. The positive opportunities created by LLMs and generative AI are vast, but they come with a set of risks. These risks include discriminatory outputs and hallucinations, in which LLMs generate false information or a “plausible-sounding reason, based on a process of predicting which words go together,” without actual reasoning.
  • Limited effectiveness of health care diagnosis. There is limitless potential for AI-powered systems to improve patients’ lives using trustworthy and ethical AI. Entefy has written extensively on the topic, including the analysis of 9 paths to AI-powered affordable health care, how machine learning can outsmart coronavirus, improving the relationship between patients and doctors, and AI-enabled drug discovery and development.

    The ethical AI considerations in the health care industry emerge from the data and whether the data includes biases tied to variability in the general population’s access to and quality of health care. Data from past clinical trials, for instance, is likely to be far less diverse than the face of today’s patient population. Said one researcher, “At its core, this is not a problem with AI, but a broader problem with medical research and healthcare inequalities as a whole. But if these biases aren’t accounted for in future technological models, we will continue to build an even more uneven healthcare system than what we have today.” AI systems can reflect and perpetuate existing societal biases, leading to unfair outcomes for certain groups of people. This is particularly true for disadvantaged populations, who may receive inaccurate or inadequate care due to biased algorithmic predictions.
  • Impaired judgment in the criminal justice system. AI is performing a number of tasks for courts, such as supporting judges in bail hearings and sentencing. One study of algorithmic risk assessment in criminal sentencing revealed the need to remove bias from these systems. Examining the risk scores of more than 7,000 people arrested in Broward County, Florida, the study concluded that the system was not only inaccurate but plagued with biases. For example, it was only 20% accurate in predicting future violent crimes and twice as likely to inaccurately flag African American defendants as likely to commit future crimes. Yet these systems contribute to sentencing and parole decisions. And in cases where policing is more active in some communities than others, biases may exist in the underlying data. “An algorithm trained on this data would pick up on these biases within the criminal justice system, recognize it as a pattern, and produce biased decisions based on that data.” A simple way to surface this kind of disparity is sketched below.
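To show how the disparity described above can be quantified, here is a minimal sketch comparing false positive rates between two groups of defendants. All labels and risk flags are synthetic, purely for illustration, and do not reproduce the Broward County data.

```python
# Minimal sketch: comparing false positive rates across groups for a risk tool.
# All flags and outcomes below are synthetic, for illustration only.
import numpy as np

flagged = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0])     # 1 = flagged high risk
reoffended = np.array([0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0])  # 1 = actually reoffended
group = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

def false_positive_rate(flags: np.ndarray, outcomes: np.ndarray) -> float:
    """Among people who did not reoffend, the share wrongly flagged as high risk."""
    did_not_reoffend = outcomes == 0
    return float(flags[did_not_reoffend].mean())

for g in ("A", "B"):
    mask = group == g
    fpr = false_positive_rate(flagged[mask], reoffended[mask])
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

An actual audit would use far larger samples and additional error metrics, but the core question, whether error rates differ systematically by group, is the same.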

The road ahead for AI regulation will be paved by collaborative efforts towards responsible AI. As we navigate this path, the landscape is expected to continue evolving rapidly. We can anticipate a global surge in the development and implementation of national and regional AI regulations, increased focus on advanced risk management and mitigation strategies, and continued collaboration on the development of international standards and best practices for responsible AI governance.

Conclusion

The effective regulation of AI is not simply a technical challenge; it is a call to action for all stakeholders, including governments, businesses, researchers, and individuals. By engaging in open dialogue, adopting responsible development practices, and actively participating in the regulatory process, we can collectively foster a future where AI serves as a force for good: one that protects consumers, empowers innovation, and creates a more equitable and prosperous world for all.

To learn more, be sure to read Entefy’s guide to essential AI terms and our previous article about AI ethics and ways to ensure trustworthy AI for your organization.

ABOUT ENTEFY

Entefy is an enterprise AI software and hyperautomation company. Entefy’s patented, multisensory AI technology delivers on the promise of the intelligent enterprise, at unprecedented speed and scale.

Entefy products and services help organizations transform their legacy systems and business processes—everything from knowledge management to workflows, supply chain logistics, cybersecurity, data privacy, customer engagement, quality assurance, forecasting, and more. Entefy’s customers vary in size from SMEs to large global public companies across multiple industries including financial services, healthcare, retail, and manufacturing. 

To leap ahead and future-proof your business with Entefy’s breakthrough AI technologies, visit www.entefy.com or contact us at contact@entefy.com.