Artificial general intelligence, the Holy Grail of AI

From Mary Shelley’s Frankenstein to James Cameron’s Terminator, the possibility of scientists creating an autonomous intelligent being has long fascinated humanity—as has the potential impact of such a creation. In fact, the invention of machine intelligence with broad, humanlike capabilities, often referred to as artificial general intelligence, or AGI, is an ultimate goal of the computer science field. AGI refers to “a machine’s intelligence functionality that matches human cognitive capabilities across multiple domains. Often characterized by self-improvement mechanisms and generalization rather than specific training to perform in narrow domains.”

Some well-known thinkers such as Stephen Hawking have warned that a true artificial general intelligence could quickly evolve into a “superintelligence” that would surpass and threaten the human race. However, others have argued that a superintelligence could help us solve some of the greatest problems facing humanity. Through rapid, advanced information processing and decision making, AGI could help us complete hazardous jobs, accelerate research, and eliminate monotonous tasks. At a minimum, as global birthrates fall and the old come to outnumber the young, robots with human-level intelligence might help address the shortage of workers in critical fields such as healthcare, education, and yes, software engineering.

AI all around us

Powerful real-life demonstrations of AI, from computers that beat world chess champions to virtual assistants in our devices that speak to us and execute tasks, may lead us to think that a humanlike AGI is just around the corner. However, it’s important to note that virtually all AI implementations and use cases you experience today are examples of “narrow” or “weak” artificial intelligence—that is, artificial intelligence built and trained for specific tasks.   

Narrow forms of AI can be genuinely useful because they process information orders of magnitude faster (and often with fewer errors) than human beings ever could. However, as soon as a narrow AI encounters a new situation, or even a small variation of the task it was trained for, its performance falters.
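To make the distinction concrete, here is a minimal sketch of a “narrow” model, assuming Python with the scikit-learn library; the dataset, model choice, and brightness-inversion twist are purely illustrative. A small neural network learns to recognize 8x8 handwritten digits, then stumbles on a trivial variation of that same task:

```python
# A minimal sketch of narrow AI (illustrative only): a model trained for one
# specific task that falters when the input shifts even slightly.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load 8x8 grayscale images of handwritten digits (pixel values 0-16).
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Train a small neural network on the one narrow task it was designed for.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("Accuracy on the task it was trained for:", model.score(X_test, y_test))

# Present a trivial variation of the same task -- the same digits with
# inverted brightness -- and watch accuracy drop sharply.
X_inverted = 16 - X_test
print("Accuracy on the slightly varied task:", model.score(X_inverted, y_test))
```

The exact numbers will vary, but the pattern is the point: performance that looks superhuman inside the training distribution can collapse just outside it.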

To understand how ambitious the notion of creating AGI truly is, it’s helpful to reflect on the vast complexity of human intelligence.

Complex and combinatorial intelligence

Alan Turing, widely regarded as the father of computer science, proposed what is now famously called the Turing Test for identifying humanlike machine intelligence. He argued that a machine that could imitate a human in conversation well enough to fool a human interrogator should be judged to have passed.

However, MIT roboticist Rodney Brooks has outlined four key developmental stages of human intelligence that can help us measure progress in machine intelligence:

  1. A two-year-old child can recognize objects and infer how to use them. For example, a two-year-old can see the similarity between a chair and a stool, and the possibility of sitting on both a rock and a pile of pillows. While the visual object recognition and deep learning capabilities of AI have advanced dramatically since the 1990s, most current AI systems cannot generalize actions to dissimilar objects in this way.
  2. A four-year-old child can follow the context and meaning of a human conversation across multiple situations and can enter the exchange at any time as contexts shift and speakers change. To some degree, a four-year-old can grasp unspoken implications, respond appropriately, and both detect and use lying and humor. In a growing number of cases, AI can converse and persuade in ways similar to human beings. That said, no current system can consistently give factual answers about the human and social context of a conversation, let alone display all the sophistication of a four-year-old brain.
  3. A six-year-old child can master a variety of complex manual tasks in order to engage autonomously with the environment, including self-care and goal-oriented tasks such as dressing oneself and cutting paper into a specific shape. A child at this age can also physically handle younger siblings or pets with real sensitivity. There is hope that AI might power robotic assistants for the elderly and disabled by the end of the decade.
  4. An eight-year-old child can infer and articulate the motivations and goals of other human beings by observing their behavior in a given context. They are socially aware, can identify and explain their own motivations and goals in conversation, and can understand the ambitions their conversation partners describe. Eight-year-olds can also grasp the goals and objectives of assignments others give them without a full spoken explanation of the purpose. In 2020, MIT scientists developed a machine learning algorithm that used artificial neural networks to infer human motivation and whether a person achieved a desired goal, a promising advance toward more independent machine learning.

Why the current excitement over AGI?

Artificial general intelligence could revolutionize the way we work and live, rapidly accelerating solutions to many of the most challenging problems plaguing human society, from reducing carbon emissions to reacting to and managing disruptions to global health, the economy, and other aspects of society. Some have posited that it could even liberate human beings from all forms of taxing labor, leaving us free to pursue leisure and creative interests full time with the help of a universal basic income.

For his 2018 book, Architects of Intelligence, futurist Martin Ford surveyed 23 of the leading scientists in AI and asked them how soon they thought research could produce a genuine AGI. The collective answer: “Researchers guess [that] by 2099, there’s a 50 percent chance we’ll have built AGI.”

So why do news headlines make it seem as though AGI may be only a few years off? Hollywood is partly to blame for popularizing and glamorizing super-smart machines in pseudo-realistic films. Think Iron Man’s digital sidekick J.A.R.V.I.S., Samantha in the movie Her, or Ava in the movie Ex Machina. There is also the dramatic progress in machine learning over the past decade, in which advances in big data, computer vision, and speech recognition, along with the use of graphics processing units (GPUs) to train ever-larger models, allowed scientists to loosely mimic patterns of the human brain through artificial neural networks.

Recent technology trends in machine learning, hyperautomation, and the metaverse, among others, have made researchers hopeful that another important scientific discovery could help the field of AI take a giant leap toward complex, humanlike intelligence. As with previous revolutions in computing and software, it is inspiring to watch machine intelligence progress from narrow AI toward AGI and to imagine the remarkable ways it could power society.

For more about AI and the future of machine intelligence, be sure to read our previous blogs on important AI terms, the ethics of AI, and the 18 valuable skills needed to ensure success in enterprise AI initiatives.