During a wide-ranging discussion about digital transformation strategies, Alston presented his views on the importance of a multi-modal approach when implementing AI solutions to build intelligence into enterprise systems. While unimodal AI can deliver promising results when applied to direct use cases, Alston argued, it often fails to capture much of the overall value that multi-modal AI systems can harness and apply to a broader set of business processes. Alston shared an analogy with cloud technologies. An enterprise that transitions only its CRM platform to the cloud can capture some business value, but that value may be a fraction of what can be harnessed when applying digital transformation to a broader spectrum of systems. Alston argued that, when it comes to extracting optimal value from an organization’s lake of data, multi-modal AI systems that leverage multiple domains of AI such as computer vision, natural language, audio, and other data intelligence can significantly outperform unimodal systems. It truly is a case of “greater than the sum of its parts.”
Over the course of the program, multiple speakers presented their thoughts on the broad trends impacting digital transformation, including AI and machine learning, cybersecurity and data privacy in the post-GDPR world, infrastructure and interconnection strategies, and workload-to-cloud projects using multiple IoT and mobile interfaces.
On behalf of Alston and everyone here at Entefy, we offer our sincere thanks to the event organizers, symposium attendees, and Ryan Mallory of Equinix, who moderated the discussion.
You click through to a dated-looking website. A video begins to play with the mysterious title “Real-time Facial Reenactment.” In the video, a tabletop TV shows cable news network footage of President George W. Bush. Beside the TV is a young man seated in front of a webcam, stretching his mouth and arching his eyebrows dramatically like a cartoon character. And then it clicks. The President is making the exact same facial movements as the young man, perfectly mirroring his exaggerated expressions. The seated figure is, to all appearances, controlling a President of the United States. Or at least his image on a video screen. It’s worth checking out the video, which has 4 million views and counting, to see this digital puppetry in action.
Welcome to the 21st century version of “seeing is believing.” The video in question is the product of a group of university researchers working in a specialty field of artificial intelligence called computer vision. Computer vision algorithms can do things like digitally map one face onto another in real-time, allowing anyone with a webcam to project their own facial expressions onto a video image of a second person. No special media or expensive technology is required. The researchers used an off-the-shelf webcam and publicly available YouTube videos of American presidents and celebrities.
AI-powered computer vision technologies that manipulate video can be impressively realistic, often eerie, and just the sort of thing that should ignite discussions of authenticity in the age of “fake news.”
Photoshop, the verb
The photo editing software Photoshop has been around so long at this point that we use it as a verb. But AI computer vision technologies with the power to manipulate what we see are something else entirely. After all, it’s one thing to airbrush a model in an advertisement and quite another to manipulate what’s said on the nightly newscast.
To see why, consider a few facts. Billions of people worldwide, including 2 out of 3 Americans, rely on social media and other digital sources for news. And in doing so, they overwhelmingly choose video as their preferred format. These informational ecosystems are already struggling with “fake news,” the term for deliberate disinformation masquerading as legitimate reporting.
If AI computer vision enables low-cost, highly convincing tools for manipulating video—for making anyone appear to say anything someone might want them to say—those tools have an extremely high potential for mayhem, or worse. After all, once we see something, we can’t unsee it. The brain won’t let us. Even if later we learn that what we saw was fake, that first impression remains.
AI raises some big questions, and they’re important ones. What happens when we can no longer rely on “seeing is believing?” How can we ensure reliable authenticity in the digital age? Technology is moving fast, and new AI tools remind us that sometimes we need to put aside the cool factor and think instead about how specific technologies impact our lives. Because digital technology is at its best when it improves lives and empowers people.
Putting the dead to work
Part of the reason that AI computer vision technology is so remarkable is that it compresses hours or days of professional work into something that happens in the moment.
The technology has a clear application in entertainment. The production of the immersive worlds and intricate characters we see in film and video games is quite expensive. CGI (computer-generated imagery) is lauded for its ability to make the unreal appear as close to real as possible, to turn imagination into reality. But there is a lot of effort behind these transformations.
CGI may fill theaters, but we’ve also seen that people place limits on the type of unreality they’ll accept. Look no further than Rogue One, the 2016 film set in George Lucas’ Star Wars universe. The film’s story takes place before the events of the original Star Wars film from 1977. The Rogue One filmmakers wanted to feature one of the characters from the original film but ran into a small problem: the actor, Peter Cushing, had died in 1994.
CGI to the rescue. The producers employed computer graphics to generate a digital version of Cushing, then brought him to simulated life using motion capture gear and another actor, Guy Henry, to voice Cushing’s character. The result was a rather convincing resurrection of Cushing, which simulated everything from his facial tics to the differences in lighting between films.
The invention of Photoshop didn’t cause people to completely distrust every digital photo. We still happily share pictures from vacations and selfies with celebrities without worry that our friends will doubt their authenticity.
The challenge with AI computer vision tech isn’t how it might be misused—after all, practically any technology can be misused. It’s that it joins a growing list of technologies that are developing so quickly that people haven’t had enough time to collectively decide how we want them to be used. This is as true about some forms of AI as it is about robotics and gene editing.
But if we did come together to have this discussion, what we’re likely to find is a lot of common ground around the idea that the best way to use all of these revolutionary technologies is to make life better for people, solve global problems, and empower individuals. After all, for every potential misuse of computer vision AI, there are hundreds of positive and impactful applications like real-time monitoring of crime, improved disaster response, automated medical diagnosis, and on and on.
Fake news is a problem worth solving. But until we’ve successfully leveraged advanced technology in support of truth and authenticity, let’s not abandon “seeing is believing” just yet.
AI augmentation is expected to generate $2.9 trillion in value by 2021, freeing up 6.2 billion hours of worker productivity in the process. Within the AI category, machine learning attracted about 60% of the estimated $8 to $12 billion in external investment in AI capabilities during 2016. Machine learning’s outsized role comes in large part from its usefulness in enabling other capabilities, like robotics, process automation, and speech recognition. Not surprisingly, machine learning sits squarely at the center of digital transformation strategies the world over.
Entefy recently shared a look at ML innovation in 8 different industries. Below we continue our survey of noteworthy machine learning projects, this time in verticals as diverse as banking, nonprofits, energy, and government.
Banking
Machine learning analytics programs offer banks the potential to increase profits substantially. One report found that companies that invested in advanced analytics saw an average profit increase of approximately $369 million a year. They achieve this by making better use of their customer data.
By analyzing client behavior, one institution learned when to intervene before customers disengaged from the company, leading to a 15% reduction in churn. Other organizations used the insights gleaned through machine learning programs to identify new segments within their existing customer bases and phase out costly, unnecessary discount practices. Machine learning programs “discover” hidden trends and insights that companies can use to strategize more effectively.
Energy
Deep learning and machine learning will prove invaluable in combatting climate change and achieving more efficient energy usage. Companies are already using machine learning to sell solar panels in more cost-efficient ways, while researchers predict that cloud-based monitoring systems will be able to optimize energy usage in real time.
Machine learning could also help tech companies reduce their carbon footprints, optimizing energy usage according to a variety of conditions and ultimately decreasing their power draws by up to 15%. This will be of growing importance as rising demands for computing power raise serious environmental concerns.
Healthcare
With the global medical community facing a shortfall of more than 4 million medical providers, it’s clear that the healthcare system needs help. As Entefy wrote last year, artificial intelligence is here to help doctors. Researchers are exploring the use of machine learning to diagnose disease, a breakthrough that could lead to faster, more accurate patient treatments.
One group of researchers used machine learning to predict whether patients would be hospitalized due to heart disease. They achieved an 82% accuracy rate, which was 26% higher than the average rate using one of the most common existing prediction models. Identifying a patient’s risk for heart disease and hospitalization could allow doctors to make life-saving recommendations before the condition reaches a critical stage.
Retail
Machine learning makes for a more dynamic and interactive retail experience. Instead of wandering store aisles for hours searching for the right tool or outfit or home appliance, home makeover businesses are embracing artificial intelligence to create more personalized – and convenient – brand encounters.
One company used machine learning to create a tool for better home decorating. Customers will soon be able to upload photos of their homes and see realistic simulations of what different shades of paint, pieces of furniture, and light fixtures would look like in their living rooms. No more guesswork, no more aggravating trips to the store because you bought the wrong shade of paint. Thanks to machine learning, you could be sure to get it right the first time.
Government
Machine learning could make the country safer and more equitable, and that’s not just idealism talking. A 2017 study found that a machine learning algorithm was more adept than judges at predicting which defendants were flight risks while awaiting trials. The program assigned risk scores based on details such as defendants’ ages, their rap sheets, the offenses for which they were awaiting trial, and where they had been arrested.
Researchers determined that the program could decrease the number of defendants in jail while awaiting trial by 40% without risking an increase in crime rates. Widespread use of such algorithms would alleviate strains on the criminal justice system and could even prevent future crimes. The program’s accuracy would serve as a safeguard against judges’ erroneous decisions, as the defendants they release sometimes commit additional crimes before their trials or fail to appear for their court dates.
Nonprofit
Nonprofits are using machine learning to identify trends related to mental health crises, such as indicators for suicide. Importantly, machine learning can make connections that humans might not see, and that information enables crisis counselors to reach people who are in urgent need. In one example, a machine learning program identified that the term “ibuprofen” was a likelier indicator of an imminent threat than the term “suicide.” Therefore, the computer program prioritizes users who have mentioned ibuprofen in their communications to ensure they reach a counselor as quickly as possible.
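The prioritization logic described above can be thought of as risk-weighted triage: messages are ranked by learned indicators of urgency rather than handled first-come, first-served. Here is a purely illustrative sketch of that idea; the nonprofit's actual system is not public, and every term and weight below is hypothetical.

```python
# Hypothetical keyword-weighted triage, inspired by the finding that some
# terms (e.g., "ibuprofen") correlate more strongly with imminent risk
# than obvious ones. These weights are invented for illustration only.
RISK_WEIGHTS = {
    "ibuprofen": 0.9,   # learned high-risk indicator (hypothetical weight)
    "suicide": 0.6,
    "hopeless": 0.4,
}

def risk_score(message: str) -> float:
    """Sum the weights of any risk terms found in a message."""
    words = message.lower().split()
    return sum(weight for term, weight in RISK_WEIGHTS.items() if term in words)

def prioritize(queue: list[str]) -> list[str]:
    """Order waiting messages so higher-risk ones reach counselors first."""
    return sorted(queue, key=risk_score, reverse=True)
```

In a real system the weights would come from a trained model rather than a hand-written table; the point of the sketch is only that ranking by learned risk indicators is what gets urgent cases in front of counselors sooner.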
One of the core insights that drives our work at Entefy is that the world is in the early stages of a new S-curve of innovation: universal interaction. Think about the sum of all communication taking place between people, devices, and services every minute of every day. Broadly speaking, we’re talking about everything from instant messaging to global IoT sensor activity to conversational AI.
In this video, Entefy CEO Alston Ghafourifar gives an overview of the new S-curve and its implications for people, industries, markets, and our global community.
Anxiety and depression plague the U.S., and their impact grows by the year. Forty million adults throughout the country suffer from an anxiety disorder, and 16.2 million have gone through at least one major depressive episode. Not only do these mental health conditions wreak havoc on people’s well-being, personal lives, and careers, they also carry a significant economic toll.
Depression costs the U.S. $210 billion each year. That number includes costs directly linked to depressive cases, as well as related mental health issues and physical ailments and losses associated with workplace absenteeism due to depression. Globally, depression-related costs are $1 trillion, according to the World Health Organization. With mental health issues on the rise, the need for life-saving, cost-effective depression and anxiety treatments becomes ever more urgent.
But current treatment protocols are imperfect, for both patients and providers. Researchers found that although people suffering from depression will most often go to their primary care doctors for guidance, those doctors may not be best-equipped to help them. The predominant approach to disease management for depression doesn’t include ongoing monitoring and education. Without a plan for how to cope with and improve their conditions, people are more likely to continue suffering from depression and related health problems and therefore will need additional care. Expensive additional care.
Troublingly, researchers found that many medical practices lack the infrastructure and protocols to help people on an ongoing basis. Depression and anxiety require more time and potentially more involved patient visits than strictly physical diseases, and many providers are already overworked and strapped for time. They may also struggle to find the financial resources, especially as some insurance companies are still reluctant to cover mental health expenses.
Now technology may have a role to play in alleviating mental health suffering and reducing the cost burden on healthcare providers. Researchers and scientists are already exploring the use of artificial intelligence in diagnosing disease. But increasingly, AI appears to have applications in mental health treatment as well. All of which has the potential to finally bend the mental health cost curve significantly.
Deep learning for better treatment
Depression may emerge in a patient for a number of reasons, including genetic predisposition, biological factors, and hormone imbalances. There are also a number of risk factors that can trigger depressive episodes, such as anxiety and other mental health conditions, trauma, and some prescription medications.
With so many variables at play, there is no one-size-fits-all treatment for depression. Patients sometimes react adversely to one medication, forcing them to try out several prescriptions until they find one that helps. Some benefit from talk therapy while others require a combination of treatments. But finding the right fit isn’t a precise science and identifying an effective approach can be a costly and time-consuming process for everyone involved.
Deep learning may be able to help doctors create personalized depression treatment plans based on patients’ conditions and histories. By aggregating and analyzing a person’s medical records, current medications, and other factors, a deep learning algorithm could recommend potential treatments that doctors can incorporate into their assessments. There are no guarantees those plans would work, but they would save doctors a good deal of trial-and-error when trying to determine which course is most likely to help a patient.
Importantly, an AI program would be able to cross-check potential treatments to ensure that prescriptions won’t negatively interfere with one another or set off additional health crises. Doctors would of course still have final say in recommending a course of action. But AI could help them do so quickly and more effectively.
Chatbots for prevention and care
Ideally, access to mental health specialists would be ubiquitous for all patients. In reality, however, many people cannot afford to see a psychologist or psychiatrist. Even those who can afford therapy sessions may be reluctant to do so for fear of being stigmatized. Chatbots that use natural language processing (NLP) and other forms of artificial intelligence can bridge the gap for people in these circumstances. Free chatbot apps help people suffering from depression and anxiety by giving them someone to “talk” to and a place to gather resources and recommendations for coping with their conditions.
As with diagnosing and treating mental health conditions, an AI chatbot can’t replace a licensed therapist long-term. But daily check-ins via chatbot may help people stay motivated to take actions that will help them manage their depression and anxiety. Given that ongoing care is one of the challenges primary care physicians face with patients who have these diseases, a chatbot can help sustain important habits and lifestyle changes that lead to improved outcomes. To the extent that such chatbots contribute to people’s mental wellness, they also reduce the financial burdens the healthcare system and the workforce bear due to mental illnesses.
Biometrics: the next frontier in preventive medicine
Doctors may ultimately receive assistance in the form of biometric data gathered from patients’ smartphones. One company is exploring how information about patients’ phone usage, pulse rates, and other physical indicators can be used to paint a more accurate picture for their doctors. When a doctor notices that a patient’s pulse rate is surprisingly high, she knows to discuss that issue and identify ways of addressing it. In mental health and other areas, AI may become a powerful tool for monitoring and managing health conditions.
Having quality data on people’s health also takes the conversation beyond a patient’s self-reported state of wellness. Someone might be embarrassed to admit that he’s felt anxious or depressed and so might avoid having that conversation with his doctor. But by looking at his biometric data, the doctor can detect patterns that might be cause for concern.
In theory, aggregating this data could help on a broader level as well. Collecting biometric data could enable deep learning algorithms to find trends in behavioral and emotional patterns and help doctors identify patients who have high suicide risks.
Mental health issues are complex and difficult to manage. Healthcare providers who are already overworked and underprepared to address the needs of depressed and anxious patients would be well-served by AI-powered technologies that make diagnosis and ongoing treatment easier and more cost-effective.
These days, there is a lot of confusion surrounding the term “artificial intelligence,” with people attributing it at different times to different technology domains. Machine learning and deep learning are on that list, too, though they’re distinct AI approaches. Both refer to techniques by which computer programs become “smarter” as they process more information. Entefy produced a primer on key terms in AI in our article Essential AI: A brief introduction to terms and applications.
From a business perspective, what is most interesting about machine learning is not the mechanics of how it works but the vast array of revolutionary applications being developed using it. To show just how impactful this technology already is, we put together this roundup of machine learning projects in 8 different industries.
Aerospace
Machine learning could make flying safer and more user-friendly. Pilots and flight crews operate around the world, and each of them generates significant amounts of data. In addition to information that’s automatically recorded via the aircraft’s computers, human personnel catalog their notes and observations as well. But there aren’t necessarily standard documentation processes for the latter, which means valuable information might slip through the cracks.
Machine learning may soon be able to parse these varied forms of data, interpreting different types of shorthand and slang and organizing all available information into a central database. In the future, an AI pilot may well draw on that database to respond to in-flight challenges and ensure flight safety.
Automotive
The more data companies gather about driver behaviors, the better they understand why accidents happen and how to prevent them, which is why machine learning may well make our roads safer. Businesses that operate vehicle fleets have begun using telematics to collect information about all aspects of driver performance and machine learning algorithms to analyze it.
Every time an employee over-accelerates, takes a turn too sharply, or fails to buckle their seatbelt, sensors and tracking systems detect and record it. This information enables companies to better train drivers by homing in on common driving problems. They can compare numbers of accidents against how many hours drivers are working and adjust employee schedules accordingly. Of course, improving human driving skills may be a stopgap on the path toward even greater road safety, as autonomous vehicles could prevent 90% of accidents.
Manufacturing
Manufacturing companies use machine learning algorithms to cut waste and other expenses in their processes. Smart programs analyze existing workflows and key in on areas that can be improved. In a report on artificial intelligence in the industrial sector, the authors found that machine learning programs could process thousands of data points gathered from multiple machine types and subprocesses. The results of such analyses could lower expenses in semiconductor production alone by 30% and could boost manufacturing productivity more generally by 20%.
Transportation & Logistics
Forecasting is incredibly important in supply chain logistics. A sudden storm, market upset, or rise in transportation costs could severely impact logistics companies and their clients. But machine learning is helping these businesses become both more agile and resilient. Smart programs can use contextual data to predict potential problems so companies can create contingencies. The more accurate and current their data, the better equipped they are to respond to crises.
Agriculture
Imagine going to work in the morning and being faced with millions of options for solving a given problem. Being spoiled for choice may sound good, but human beings simply cannot process that amount of information, especially on a short and urgent timescale. Yet that’s the situation plant breeders encounter all the time. Determining which breeds are most likely to thrive in a given climate, region, or season is no small feat, and the outcomes of their decisions impact the food supply for millions of people.
Fortunately, machine learning is up to the demands of modern agriculture and can analyze historical datasets to identify which breeds are suited to different circumstances. The technology can also be used to spot diseased crops through pattern recognition, so growers can intervene and save the rest of their yields. This type of precise science will be increasingly important as the global community copes with food insecurity and the growing impact of climate change.
Consumer Goods & Services
Effective use of artificial intelligence depends on data, and retailers gain access to more customer information every day. Companies that combine consumer profiles with behavioral data and market trends can create powerful sales strategies, and machine learning programs can analyze wide-ranging data sets to identify optimal selling conditions. A business that deploys a dynamic pricing model powered by machine learning insights can promote its products “at the right price, with the right message, to the right targets,” according to the McKinsey Global Institute’s discussion paper, Artificial Intelligence: The Next Digital Frontier? Companies that get this right could see up to a 30% increase in online sales.
Hospitality & Travel
Travel companies use machine learning to identify behavioral trends among consumers so they can tailor their booking experiences accordingly. Machine learning enables travel industry brands to identify which factors most influence travelers’ decisions and which devices and search methods they use for different types of queries. For instance, the number of reviews a property garners matters more to consumers than the actual number of stars it receives. Knowing this, hotels may try to incentivize people to leave reviews so they can get on other travelers’ radars.
Communications
Approximately 9 in 10 American adults use the Internet, and usage among the 18-29 age group stands at 98%. At that level of Internet penetration, telecommunication companies can’t afford downtime or infrastructure lapses. But their vast networks of cell towers, satellites, and fiber optic cables are difficult to monitor manually.
Now, one major telco is using cameras on wings, or COWs, which are drones that capture images of their cell towers and commercial installations and diagnose problems in real-time. The company anticipates a future in which it runs the drone-captured data through a machine learning algorithm that pinpoints problems and fixes them automatically. Not only would this mean faster problem resolution, it would also create safer working conditions since there would be less need for technicians to climb to the tops of towers and telephone poles to make the fixes themselves.
Success in professional life these days is linked to how well you keep up with constant rapid change. Change in market dynamics, technology, and skillsets, to name a few. Which suggests that learning is not a pastime, but a core requirement for keeping ahead of the curve in today’s business environment.
This infographic spotlights powerful techniques for learning about any new subject: mental models. We explain how mental models work and provide several examples of these useful framing techniques.
Money laundering is the term for deliberately masking the origin of illegally obtained cash generated by criminal activity. Money laundering activity is on the rise, fueling the operations of drug cartels and terrorist organizations around the globe. Advances in technology allow so-called “megabyte money” to exist on computer screens and move anywhere in the world nearly instantaneously. According to data on money laundering from the United Nations, 2% to 5% of global GDP is laundered annually, or between $800 billion and $2 trillion.
Banks and other financial services companies are on the front line of anti-money laundering (AML) efforts. AML regulations require financial institutions to monitor deposits and wire transfers to spot signs of suspicious activity. An American Bankers Association survey of AML enforcement at banks with assets larger than $1 billion showed that 30% of banks intend to increase AML team headcount and budgets, citing the need for new software to remain efficient and the overall increase in suspicious activities that require review.
AML compliance programs now look more like operational utilities or, as one executive put it, “factories,” and less like the independent oversight functions that banks first envisioned. These factories are expensive, a cost that might be acceptable if the huge teams and manual processes were working well. But many are not.
One central challenge to AML compliance is that banks’ legacy monitoring systems are often rules-based and inefficient, with data showing that as many as 90% of AML alerts are false positives, meaning that a lot of effort and expense goes into investigating legal, compliant wire transfers and deposits. AI-powered solutions are a clear need, but advanced automation carries its own challenges.
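To make the contrast concrete, here is a minimal, hypothetical sketch of the difference between a fixed rule and a simple statistical check. A rules-based system might flag every transfer above a fixed threshold, sweeping in many legitimate customers, while a score based on each customer's own transaction history flags only genuinely unusual activity. The threshold, cutoff, and data below are invented for illustration.

```python
import statistics

FIXED_THRESHOLD = 10_000  # classic rules-based trigger amount (hypothetical)

def rule_flag(amount: float) -> bool:
    """Rules-based check: flags every large transfer, however routine."""
    return amount >= FIXED_THRESHOLD

def zscore_flag(amount: float, history: list[float], cutoff: float = 3.0) -> bool:
    """Statistical check: flags only amounts far outside the
    customer's own historical pattern (a simple z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > cutoff
```

A customer who routinely wires $12,000 trips the fixed rule on every transfer but never the statistical check; a customer who normally moves around $500 and suddenly wires $9,000 does the opposite. Production AML models are far more sophisticated, but this is the basic sense in which behavior-aware scoring can cut false positives.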
A range of factors contribute to the difficulty of automating AML efforts. Low-quality data and fragmented data sources make automated solutions difficult to develop. New banking products and services, like instant fund transfers and mobile payments, create more platforms that need to be monitored. And the inconsistent availability and quality of data across geographies makes standardizing processes difficult.
Despite the challenges, banks see machine learning and data analytics technologies as the future of AML fraud detection. These systems have the potential to reduce the manual work of compliance personnel by 50%, effectively doubling their effectiveness dollar for dollar. The potential financial impact of these AI systems could be huge. Using the UN estimate of global money laundering, a 50% cut in illegal activity would represent a $400 billion to $1 trillion decline in dirty money transfers. Which would create a positive real-world impact slowing the flow of money to illegal organizations worldwide.
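The arithmetic behind that $400 billion to $1 trillion range is straightforward; a short sketch using only the UN figures quoted above:

```python
# UN estimate: 2%-5% of global GDP laundered annually,
# roughly $800 billion to $2 trillion.
low_estimate = 800e9     # $800 billion
high_estimate = 2e12     # $2 trillion
reduction = 0.50         # hypothesized 50% cut in illicit transfers

savings_low = low_estimate * reduction    # $400 billion
savings_high = high_estimate * reduction  # $1 trillion
print(f"${savings_low / 1e9:.0f}B to ${savings_high / 1e12:.0f}T "
      "less dirty money per year")
```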
There is a paradox at the heart of life in the digital age.
Despite what we often see in the headlines about politics or the economy, in most ways of measuring life outcomes, people in the U.S. and much of the world are living better today than ever before. Americans are living longer and healthier; are more educated; have plentiful and varied food supplies; live in bigger and safer homes; experience a cleaner and greener environment; work in safer workplaces; are less likely to suffer violence and accidents; collect incomes that are higher while absolute poverty is lower. Globally, human rights are expanded; more countries are democracies; income inequality is down as is warfare between states. However, not everything is better for everyone, everywhere, all the time. Yet in the aggregate, much of humanity is at the pinnacle of a march towards better living.
But we don’t experience life in the aggregate. We experience life specifically, uniquely, personally. Not as an average but as a point. And many people’s experience of life today is not that of a pinnacle of progress. It is, day in and day out, a feeling of inexplicable unease, a sense of unending challenge, a fear that things are perhaps more dangerous, the sensation of being constantly overloaded and ceaselessly stressed. By one measure, 40 million U.S. adults have some form of anxiety disorder. And 71% of Americans surveyed say they are dissatisfied with the way things are going.
This Entefy research attempts to answer the question at the heart of the paradox of prosperity: Why is our day-to-day experience of life out of sync with data telling us things are going so well? The answer starts with some important relationships. To achieve the improved outcomes we collectively and individually desire (longevity, wealth, health, etc.), we have to generate increased prosperity; to generate increased prosperity, we have to be more productive; to be more productive, we have to make changes that accomplish more; and when a lot of changes take place at once, the result is increased complexity. You might say that we purchase better life outcomes using the currency of complexity.
In seeking to explain the disconnect between objective data and subjective experience, we analyzed information from 45 different sources—everything from the U.S. Census Bureau and Bureau of Labor Statistics to polling firms like Gallup—covering the decades from the 1950’s until today. This report is divided into 4 parts.
In Part 1, we start decades back in time to examine that communal feeling of widespread success that today sometimes feels part of a bygone Golden Age.
In Part 2, we will define complexity, put some measurements around it, and evaluate the familiar and surprising ways it impacts our lives today.
In Part 3, we take a data-driven look at our shared confidence in society’s institutions.
In Part 4, we analyze changes to the structures of families, friends, and communities as well as changes to income and education.
Where we end up might surprise you because merely pointing out the challenges of modern life is not the goal here. It is simply the first step in identifying ways for each of us to participate more fully in modern progress. When we understand the paradox, perhaps we can break through it and start enjoying better, more fulfilling lives.
From team owner to team captain
At the end of World War II in 1945, the United States found itself the only major power with its industry and economy still intact. But that economy was structured for war-fighting purposes. Soon after the war, slowly at first and then in earnest, the nation embarked on a 25- to 35-year period (up to roughly 1980-1990) in which we systematically dismantled the regulatory structures put in place during the War years.
During this period, the country largely eliminated price controls and capital controls. We deregulated the rail industry, the trucking industry, the airline industry, the shipping industry, the telecommunications industry, the financial services industry, the healthcare industry, and the energy industry. This strategic deregulation permitted greater competition, which in turn accelerated investment and innovation. Strategic deregulation was complemented by tactical regulation, which strengthened areas like product safety standards, consumer protection, and labor law.
Because we emerged from the War with a functioning economy, the U.S. began the post-War period with significant momentum driving its economic expansion compared to most other major economies. Deregulation further accelerated that growth during the 1970’s through the 1990’s. All told, the U.S. would sit atop the global economic ladder and experience 50 years of steady economic growth, interrupted only by temporary periods of recession.
Importantly, the benefits of post-War growth—the living experience of it—were enjoyed by nearly everyone. There was inequality, but generally speaking, most boats were floating higher on a rising tide.
Another factor contributing to post-War prosperity was the culmination of a century of accumulated scientific and technological innovation across most economic sectors, beginning around 1870. Throughout this time frame, transformative scientific discoveries were made in fields as diverse as pharmaceuticals, metallurgy, agrochemicals, electronics, biology, and physics. This broad tide of innovation drove great increases in productivity.
After 1970, innovation became increasingly restricted to the high technology and communication industries. While innovation was and continues to be dramatic in these two fields, these two sectors account for just 7% of U.S. GDP. At the same time, the pace of innovation in other key sectors such as agriculture, finance, and transportation slowed dramatically. Innovation continues, but many industries today sit atop their S-curves with their inventive heydays behind them.
During the 50-year period following the War, it was reasonable to conclude that the U.S. held a special and unique place in the world. It was easy to take for granted the multiple circumstances that created so much prosperity for so many, so quickly. Our corporations grew ever-larger, employing more and more people, and delivering to them more and more comforts. We reached the moon and defeated Communism. We conquered or contained centuries-old diseases such as smallpox, diphtheria, cholera, typhus, and malaria. Our universities were unrivaled, and our scientists provided a constant flow of amazing discoveries. At any point along this timeline, the future looked bright.
Today, much has changed. Broad-based, easy gains in productivity may be behind us. We have to make harder choices. We have to live within constraints we did not anticipate or wish to acknowledge. There is more competition within our country and among global competitors. Digitization, the Internet, chip design, and artificial intelligence are disrupting all of our other industries. The U.S. still holds a unique and commanding position, but the global order is disrupted and unpredictable. Our collective sense of exceptionalism has been upended by clear signs that we are one among many.
Which brings us to the paradox that defines life in the digital age.
The paradox of prosperity
Let’s restate the paradox of prosperity: despite pervasive evidence that life today is better than ever before in history, our subjective experience of life can feel—take your pick—unsettled, uncertain, or just plain anxiety-riddled.
How do we reconcile these contradictory ideas? The first step is to establish a credible description of the problem. Matt Ridley, a former editor of The Economist, did just that in a speech to the Long Now Foundation entitled “Deep Optimism,” a great starting point on this topic. If we acknowledge that life has improved in general, what is the basis for all this pessimism? The answer begins in the abstract. Picture life as a black box with inputs, processes, and often unpredictable outputs.
Life’s inputs are people, environment, and context (an individual’s circumstances at birth and beyond). The processes of life influence and transform people’s lives, everything from parenting and education to employment and law. These processes impact the outputs: your quality of life. When we talk about the evidence that life today is better than ever, we are talking about measurements of these outputs, everything from GDP to educational attainment to average lifespan.
As we presented above, outputs have improved across the board for most Americans and for many around the globe. If all the outputs have improved, perhaps the perception of life’s difficulties and challenges resides within the processes that generate the outcomes—learning, socializing, protecting, working, parenting, healthcare, governance, and so on.
The sense of generalized anxiety so many people experience is a byproduct of the very complexity that emerges from increased productivity. We have access to incredible amounts of information to solve very targeted problems but this same ocean of information threatens to drown us. With greater complexity, we are less in control.
To dig deeper, we need to look at a representative example of complexity then and now. Let’s use the example of financing the purchase of a home in 1970 versus today.
To buy a house in 1970, you had to save enough for a down payment and then select from two mortgage choices: a 15-year fixed-rate mortgage or a 30-year fixed-rate mortgage. Then, as now, that decision was monumental and consequential because a home is typically the single largest investment most people make in their lives.
In 1970, while your choice was simple (basically two mortgage options), it was also constrained. You had to have enough money saved for a sizable down payment as well as closing costs. You had to document a steady and secure income and an unblemished credit record. It was difficult to get a mortgage back then, with fewer mortgage lenders from which to borrow. Fewer choices and less risk.
In 2017, you have many more mortgage choices: Fixed Rate, Adjustable Rate, and Balloon Mortgages, as well as 10-, 15-, 20-, 30-, and 40-year terms. There are many more providers than there were then, supplemented by countless mortgage brokers and online services. There are many options in down payment rules and credit requirements. It is much easier to borrow more than you can afford these days. What was a simple choice in 1970 has turned into a series of rather complex decisions for you to make.
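The trade-off among those term choices can be made concrete with the standard fixed-rate amortization formula. Below is a minimal sketch in Python; the principal and interest rates are hypothetical, chosen only to illustrate how a longer term lowers the monthly payment while raising the total interest paid:

```python
def monthly_payment(principal, annual_rate, years):
    """Fixed-rate amortization: M = P * r / (1 - (1 + r)**-n),
    where r is the monthly rate and n the number of payments."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical $300,000 loan at illustrative (not historical) rates
for years, rate in [(15, 0.055), (30, 0.060), (40, 0.065)]:
    m = monthly_payment(300_000, rate, years)
    interest = m * years * 12 - 300_000
    print(f"{years}-year @ {rate:.1%}: ${m:,.0f}/month, ${interest:,.0f} total interest")
```

Under these assumed rates, stretching the term from 15 to 40 years cuts the monthly payment by roughly a quarter but more than triples the interest paid over the life of the loan, a concrete form of the consequentiality described above.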
Importantly, the consequences of poor decisionmaking are now far greater than they were back then. It was much harder to get a mortgage in 1970 because the requirements were more stringent. But because they were more stringent, you were also much less likely to default with all the negative consequences that creates.
This change in consequentiality became tragically apparent in 2008 with the bursting of the real estate bubble. Millions of families lost their homes or declared personal bankruptcy for many different reasons but often simply because they had not properly understood the liabilities and risks of their mortgage decisions. Complexity and ambiguity created financial chaos that nearly collapsed the global economy.
Mortgages are just one example. You can see the same dynamics of complexity in college education (where to study, what to study, how to pay for it). Or in health insurance (HSA, PPO, HMO, HDHP). Or just select from the 39,500 different items in the average grocery store. The list goes on and on.
Common threads run through all of these examples. What you find is that today we have more choices, the context surrounding decisions is more volatile (past assumptions are not always reliable), there is greater uncertainty (not because of an absence of information but often because of an overabundance of information), and greater ambiguity (more fine print and nuance in terms).
Now, add isolation to this complexity. People in general have fewer friends and family members to call upon for advice and support, making us more responsible for our own lives than ever before. Family sizes and close friendship circles are smaller, church attendance is down—important because these institutions traditionally provided a buffer against life’s unanticipated setbacks. We are on our own in a world that is often volatile, uncertain, confusing, and ambiguous. Or at least that’s how it feels to a lot of people a lot of the time.
And so we find that, without intending to, we have collectively made a Faustian bargain. Across practically every area of life, we have more options to choose from and greater flexibility in achieving what we want. But at the same time we also bear a very real burden of greatly increased responsibility for the outcomes of those choices.
Part 2: Defining modern complexity
Complexity is central to the paradox of prosperity. It emerges from the rapid pace of change and the arrival of seemingly endless choices in practically every area of life. But complexity is not just the number of choices we face. It’s the number of choices that carry great weight and consequence, yet must be made with little outside support.
This pattern of choice and consequentiality can be seen in many areas of life. We are more responsible for our own retirement saving than in the past, as demonstrated by the decline of employer-provided pensions and the rise of personally-managed savings accounts like 401(k)s. Similarly, education choices are greater as are their costs. Healthcare planning is more complex and driven by our own choices.
And nowhere is complexity and the pace of change more apparent than in digital technology. To take just one example, there’s the increasing complexity and volume of communication that occurs in our connected worlds. Digital messaging technologies allow astounding opportunities for personal connection, independent of time and space. Yet the technologies have also created very real obligations in terms of hours in a day and number of emails, phone calls, text messages, and the like. When you consider that just 10 years ago, the first smartphones were hitting store shelves, the pace at which we’ve adjusted our lives to accommodate their capabilities is astounding.
Financial planning, education, healthcare, and technology are not carefully selected examples. We see the same dynamics of complexity throughout our day-to-day lives. Here are some more concrete manifestations of increased choice and consequentiality.
It is important to note that, for example, the 4x increase in items in the supermarket is neither inherently good nor bad. In fact, choice in life is generally considered a positive—until it overwhelms us and leads to analysis paralysis.
As the data suggests, today’s world for the most part offers us more choices. We have more items to choose from at the grocery store, we dine out more often, we live in larger homes, and those homes are equipped with more labor-saving and electronic devices than before.
The number of unique items for sale in a typical grocery store rose nearly four-fold between 1950 and now. This is a good thing in terms of increased choice. However, it is a challenge in terms of efficiency. Here’s an example: You are sent to the grocery store to “pick up an extra package of spaghetti.” You get there and discover that you have a dozen brands to choose from, you realize you don’t know the difference between no. 8 and no. 9 spaghetti, and you aren’t sure whether to buy the regular or gluten-free version. The consequences of a bad decision aren’t grave. But a decision has to be made nonetheless.
Eating out is a pleasant indulgence but again: “Where do you want to eat?” “I don’t know, where do you want to eat?” This is trivial, until you add up all of the time we spend making all of these decisions.
Homes that are 2.7x the size of those in the 1950’s represent a rising degree of prosperity, but that extra space also means more decorating decisions, more decisions about home security, energy consumption, furnishings, and so on. A corollary to home size is the number of electronic devices. Not just straightforward things like a TV or a computer but also a microwave, an alarm clock, a thermostat, a speaker. Each of these devices has attendant complexities – deciding whether to buy a warranty, learning how to use it, figuring out how to connect it to the others. We take much of this day-to-day complexity for granted, but there are hidden costs.
There are other forms of complexity. We are living more densely than we used to, with nearly half again as many neighbors as we had in 1970. More neighbors mean more interactions but also more chances for contention and conflict.
Then there’s employment. In Agriculture, workers have been moving off the farm and into cities for more than a century. Manufacturing has been shedding workers for 50 years. Where are these workers headed? Into the Service sector, which is nearly 30% larger today than in 1970—a figure that has a direct link to complexity as well. There is complexity in Agriculture and Manufacturing, but it tends to be slow developing, with long cycle times. The expertise required to drive a tractor or work on an assembly line changes slowly. In contrast, Service sector jobs, roles, processes, and skills tend to evolve quickly. The more that employment is concentrated in the Service sector, the greater the uncertainty and volatility for more people.
What is so interesting about complexity is how much the picture of life changes the farther you zoom out from these individual data points. We have become so accustomed to many of the characteristics we discuss here that we take them for granted. Only with some distance can we see what makes the modern world “modern,” and the complexity and consequentiality that define so many of our decisions.
Part 3: Confidence in society’s institutions
Among the mechanisms by which confidence contributes to economic, societal, and personal development is efficiency. When there is high confidence, a handshake can take the place of a contract, saving time and money. The lower the level of confidence, the more precautions have to be taken and the more you have to focus on managing risk.
Gallup has been monitoring trust and confidence levels in institutions for decades. They cast their survey questions in terms of confidence: “Please tell me how much confidence you, yourself, have in each one — a great deal, quite a lot, some, or very little?” They started asking these questions in 1973, and have added additional institutions over time. The results are insightful and tell us a great deal about how complexity has impacted our day-to-day lives.
There are some bright spots. Today, we are more confident in Small Business, the Police, the Military, and the Scientific Community than when polling began decades ago. Confidence in Small Business has always been high and remains so. The Military scores the highest confidence at 73%. The Scientific Community, on the other hand, elicits only a moderate degree of confidence; but confidence in the community has risen 11% since 1973.
Confidence is eroding in most of the institutions covered in the survey. These include institutions that are part of most people’s everyday lives like Banks, Television News, and the Medical System, as well as government and public services like Congress, the Presidency, and Public Schools.
Today, Congress earns the lowest confidence score at just 9%, a 79% erosion from when the survey began in 1973. Television News is not faring much better, with confidence falling from 46% in 1993 to 21% in 2016.
In 1993, new categories such as the Medical System and the Criminal Justice System were added to the survey. The Medical System received an 80% confidence vote, the highest of all categories in the survey. That number has since declined by slightly more than half to just 39%. In the same year, the Criminal Justice System scored the lowest at only 17%, though it has seen modest increases since, to 23% in 2016.
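As a quick sanity check on the percentage claims in this section, here is a minimal sketch in Python of the percent-change arithmetic, applied only to the figure pairs quoted above:

```python
def pct_change(old, new):
    """Percent change from old to new; negative values indicate decline."""
    return (new - old) / old * 100.0

# Television News: 46% confidence in 1993 -> 21% in 2016
print(round(pct_change(46, 21)))  # about -54, i.e., confidence fell by more than half

# Medical System: 80% in 1993 -> 39% today ("slightly more than half" lost)
print(round(pct_change(80, 39)))  # about -51
```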
In sum, we know that trust and confidence are critical elements in a healthy society and economy and that high levels of trust and confidence contribute to productivity and overall prosperity. As important as these attributes are, the past several decades have seen overall declines in confidence in most of our institutions.
Which brings us back to the idea of complexity. The everyday experience of this data is troubling. Day to day, we have little confidence in the veracity of a speech in Congress, or the fairness of a judicial ruling, or the motivations of the bank that holds our savings. So our experiences of these interactions are strained and overly complicated.
Part 4: Family, friends, and community
Today, after nearly 50 years of rapid social evolution, the U.S. has far more variety and diversity than ever before. Here is another face of the complexity we’ve been describing: the changes to traditional family structures, friend networks, and communities. The rise of single parenthood and the decline of the middle class are two aspects of these changes.
The U.S. in the 1960’s had far more consistency in family and friendship structures. For example, 87% of families included two parents, according to Pew Research Center. However, by 2014 that figure had declined to 61%. Similarly, in 1985, 78% of survey participants reported having more than one friend, whereas today that figure is much lower at only 53%.
The share of households consisting of a single person has risen from 13% in 1960 to 28% in 2016. The share of people with many friends is small (15%) and is down 72% from where it was in 1985. While post-Internet communication technologies created the capacity for broad virtual relationships, the data nonetheless describes a very different story of personal connection, or lack of it. When you factor in that labor force participation is declining, more and more people are essentially disengaged, not just from friends and family but from meaningful interaction with others.
Next is money. While U.S. real median household income is high by global standards, it has also been relatively stagnant for the past twenty years. Importantly, this stagnation comes after fully 50 years of steady increases. The reliability of these income increases and the fact that they were spread quite widely across social tiers was a major factor underpinning the “American Dream” itself.
The importance of increasing income is not solely monetary. Higher income creates options for the wage earner and helps buffer households from unexpected shocks and setbacks. Today, by contrast, a static income, even a high one, represents a decline in expectations as well as a practical reduction in the capacity to absorb unexpected setbacks.
There are complex aspects to educational attainment as well. The percent of the population with a college degree has been rising steadily since 1940. That figure is higher than ever today at 32.5%. While a more educated workforce is a good thing from a societal perspective, from an individual perspective, that simply means more competition. When 5% of the population had a college degree, that represented a near-guarantee of high-paying, opportunity-filled work. At 32.5%, the degree represents a less valuable credential and less of a guarantee for future prosperity.
As we saw with the general erosion of confidence in social institutions, the true impact of these changes is best understood with some distance. Single data points do not tell the whole story. Taken together, rapid changes to the nature of our connections to friends and families have created for many people a sense of…something. Nostalgia for a better yesterday. Confusion and distrust towards today. Or in some cases the widespread anger that propelled an outsider to the Presidency in the 2016 election.
Of the facets of complexity that we have covered in this research, the changes to our social fabric are the most personally felt. But they follow the same pattern we’ve described in other areas. Rapid change introduces complexity to our lives, and that complexity serves as a headwind to our efforts to increase our productivity and, in turn, our prosperity.
But as we stated at the outset, our goal is not simply to quantify complexity in our lives, but to discover the way past the paradox of prosperity towards better living. That path starts with awareness and knowledge of how to deal with the tectonic shifts taking place in technology.
Will tomorrow’s technologies transcend the paradox?
The thing with complexity is that it’s complicated. Complexity carries a cognitive burden that we experience whenever we observe the rapidly evolving world. The faster the world changes in terms of social conventions, technology, and commercial needs, the more we have to prepare for change. Every iteration of change is both an opportunity to excel and a risk of failure. No wonder there is so much anxiety.
So what is to be done?
One solution to the paradox of prosperity is to merely accept a simpler life even though that’s likely to entail lower productivity. The other solution is to master advanced technology to help bridge the gap between increased complexity and increased productivity.
Historically, new innovations, in particular communication technologies that enable new forms of connection and collaboration (from the telegraph to radio to the Internet), have correlated with dramatic growth in global GDP. The advent of the telephone, for instance, led to widespread behavior change: a certain portion of a person’s time was now dedicated to connecting on the phone with friends, family, and customers.
Today, the “phone” we carry represents a massive focus of our time, for not only basic communication but also reading, information retrieval, photography, games, music, and so on. As with so many aspects of modern life, today’s technologies also represent a massive increase in complexity over what existed only a few years ago. First there is disruption, then widespread benefit.
What is interesting is that even though technology can cause complexity, it is also our best resource for simplification. We are only decades into the Internet era, while the impacts of artificial intelligence and robotics have barely been felt. Developments in computing, smart machines, and automation carry tremendous potential to directly improve people’s lives.
In recent years, technology has increased personal capabilities and productivity even as it has increased complexity. To take just one example, getting all our devices to seamlessly talk and sync with one another is a daunting task for most people. But there is a predictable cycle here. Today we’re in the early stage of the adoption of a long list of new technologies—from smartphones to artificial intelligence to social media to robotics. And, consistent with past adoption cycles, we’re in the difficult, sometimes confusing, disruptive stage. New capabilities carry a cost in complexity as we figure out how to fit them into our lives.
Current-generation technologies are powerful—powerfully capable and powerfully distracting—and in need of refinement. The good news is that we are at the cusp of a shift where that very technological capability starts to bring greater simplicity to our lives, as a new generation of technologies hides technical complexity away behind screens and shells, leaving us with much simpler interfaces that make getting more accomplished faster and easier.
All of this relates directly to the paradox of prosperity. The generations-long changes we have described, the day-to-day complexity they have created, are at a tipping point. Much of this complexity can be untangled with advances in a new generation of technologies. When that refinement comes, all of the time we spend every day managing devices, fiddling with settings, troubleshooting problems, and so on will be freed up to spend however we want.
Advanced technology that insulates us from complexity will free up that time even as it helps us make better choices, ultimately empowering people to live and work better. And there we see the narrow path out of the paradox of prosperity: advances in artificial intelligence, robotics, biotechnology, and other technical areas that support a streamlined and simplified life, a life lived with meaning and purpose, that is savored not rushed, deep not shallow. Where we can be the best versions of ourselves.
For anyone following headlines about the collection and monetization of personal data, the Cambridge Analytica news didn’t come as much of a surprise. Entefy routinely covers developments related to what we call “data trackers,” computer systems and devices designed to go unnoticed as they record your sensitive private data. These slides cover 9 examples of digital tracking and surveillance happening right now on smartphones, websites, and even airport security lines.