Success in professional life today is linked to how well you keep up with constant, rapid change: change in market dynamics, technology, and skillsets, to name a few. This suggests that learning is not a pastime but a core requirement for staying ahead of the curve in today’s business environment.
This infographic spotlights a powerful set of techniques for learning about any new subject: mental models. We explain how mental models work and provide several examples of these useful framing techniques.
Money laundering is the deliberate masking of the origin of cash generated by criminal activity. It is on the rise, fueling the operations of drug cartels and terrorist organizations around the globe. Advances in technology allow so-called “megabyte money” to exist on computer screens and move anywhere in the world nearly instantaneously. According to United Nations data on money laundering, 2% to 5% of global GDP is laundered annually, or between $800 billion and $2 trillion.
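As a quick sanity check on those figures, here is a minimal sketch of the arithmetic in Python. The global GDP figure of roughly $40 trillion is our assumption, implied by the quoted range rather than stated in it:

```python
# Back-of-the-envelope check on the UN laundering estimate.
# Global GDP of ~$40 trillion is an assumption implied by the quoted range.
GLOBAL_GDP_USD = 40e12

low = 0.02 * GLOBAL_GDP_USD   # 2% of global GDP
high = 0.05 * GLOBAL_GDP_USD  # 5% of global GDP
print(f"Laundered annually: ${low / 1e9:,.0f}B to ${high / 1e12:.1f}T")
# -> Laundered annually: $800B to $2.0T
```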
Banks and other financial services companies are on the front line of anti-money laundering (AML) efforts. AML regulations require financial institutions to monitor deposits and wire transfers to spot signs of suspicious activity. An American Bankers Association survey of AML enforcement at banks with assets larger than $1 billion showed that 30% of banks intend to increase AML team headcount and budgets, citing the need for new software to remain efficient and the overall increase in suspicious activities that require review.
AML compliance programs now look more like operational utilities or, as one executive put it, “factories,” and less like the independent oversight functions that banks first envisioned. These factories are expensive, which might be acceptable if the huge teams and manual processes were working well. But many are not.
One central challenge in AML compliance is that banks’ legacy monitoring systems are often rules-based and inefficient, with data showing that as many as 90% of AML alerts are false positives. This means a great deal of effort and expense goes into investigating legal, compliant wire transfers and deposits. AI-powered solutions are a clear need, but advanced automation carries its own challenges.
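To make the false-positive problem concrete, here is a minimal, hypothetical sketch of the kind of threshold rule legacy systems rely on. The rule, the amounts, and the transactions are invented for illustration, not drawn from any real AML system:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float      # USD
    is_criminal: bool  # ground truth, unknown to the monitoring system

# A classic rules-based check: flag any transfer at or above $10,000.
def rule_based_alert(tx: Transaction) -> bool:
    return tx.amount >= 10_000

# Toy sample: most large transfers are perfectly legal.
transactions = [
    Transaction(12_000, False),  # payroll run
    Transaction(50_000, False),  # real estate closing
    Transaction(15_000, True),   # criminal deposit
    Transaction(9_500, True),    # deliberately structured to stay under the threshold
    Transaction(11_000, False),  # tuition payment
]

alerts = [tx for tx in transactions if rule_based_alert(tx)]
false_positives = [tx for tx in alerts if not tx.is_criminal]
print(f"{len(false_positives)}/{len(alerts)} alerts are false positives")
# -> 3/4 alerts are false positives, and the rule misses the $9,500
#    deposit structured to evade it.
```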
A range of factors contribute to the difficulty of automating AML efforts. Low-quality data and fragmented data sources make automated solutions difficult to develop. New banking products and services, like instant fund transfers and mobile payments, create more platforms that need to be monitored. And the inconsistent availability and quality of data across geographies makes standardizing processes difficult.
Despite the challenges, banks see machine learning and data analytics technologies as the future of AML fraud detection. These systems have the potential to reduce the manual work of compliance personnel by 50%, effectively doubling their effectiveness dollar for dollar. The potential financial impact of these AI systems could be huge. Using the UN estimate of global money laundering, a 50% cut in illegal activity would represent a $400 billion to $1 trillion decline in dirty-money transfers, a real-world impact that would slow the flow of money to illegal organizations worldwide.
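Here is that arithmetic, as a sketch using the UN figures cited earlier:

```python
# The UN range from above, and a hypothetical 50% cut in laundering activity.
LAUNDERED_LOW_USD, LAUNDERED_HIGH_USD = 800e9, 2e12

cut_low = 0.5 * LAUNDERED_LOW_USD
cut_high = 0.5 * LAUNDERED_HIGH_USD
print(f"Decline in dirty-money flows: ${cut_low / 1e9:,.0f}B to ${cut_high / 1e12:.1f}T")
# -> Decline in dirty-money flows: $400B to $1.0T
```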
There is a paradox at the heart of life in the digital age.
Despite what we often see in the headlines about politics or the economy, by most measures of life outcomes, people in the U.S. and much of the world are living better today than ever before. Americans are living longer, healthier lives; are more educated; have plentiful and varied food supplies; live in bigger and safer homes; experience a cleaner and greener environment; work in safer workplaces; are less likely to suffer violence and accidents; and collect higher incomes while absolute poverty is lower. Globally, human rights are expanded; more countries are democracies; and income inequality is down, as is warfare between states. Not everything is better for everyone, everywhere, all the time. Yet in the aggregate, much of humanity is at the pinnacle of a march towards better living.
But we don’t experience life in the aggregate. We experience life specifically, uniquely, personally. Not as an average but as a point. And many people’s experience of life today is not that of a pinnacle of progress. It is, day in and day out, a feeling of inexplicable unease, a sense of unending challenge, a fear that things are perhaps more dangerous, the sensation of being constantly overloaded and ceaselessly stressed. By one measure, 40 million U.S. adults have some form of anxiety disorder. And 71% of Americans surveyed say they are dissatisfied with the way things are going.
This Entefy research attempts to answer the question at the heart of the paradox of prosperity: Why is our day-to-day experience of life out of sync with data telling us things are going so well? The answer starts with some important relationships. To achieve the improved outcomes we collectively and individually desire (longevity, wealth, health, etc.), we have to generate increased prosperity; to generate increased prosperity, we have to be more productive; to be more productive, we have to make changes that accomplish more; and when a lot of changes take place at once, the result is increased complexity. You might say that we purchase better life outcomes using the currency of complexity.
In seeking to explain the disconnect between objective data and subjective experience, we analyzed information from 45 different sources—everything from the U.S. Census Bureau and Bureau of Labor Statistics to polling firms like Gallup—covering the decades from the 1950’s until today. This report is divided into 4 parts.
In Part 1, we start decades back in time to examine the communal feeling of widespread success that today can feel like part of a bygone Golden Age.
In Part 2, we will define complexity, put some measurements around it, and evaluate the familiar and surprising ways it impacts our lives today.
In Part 3, we take a data-driven look at our shared confidence in society’s institutions.
In Part 4, we analyze changes to the structures of families, friends, and communities as well as changes to income and education.
Where we end up might surprise you because merely pointing out the challenges of modern life is not the goal here. It is simply the first step in identifying ways for each of us to participate more fully in modern progress. When we understand the paradox, perhaps we can break through it and start enjoying better, more fulfilling lives.
Part 1: From team owner to team captain
At the end of World War II in 1945, the United States found itself the largest economy with its industry still intact. But that economy was structured for war-fighting purposes. Soon after the War, slowly at first and then in earnest, the nation embarked on a 25- to 35-year period (lasting until roughly 1980-1990) in which we systematically dismantled the regulatory structures put in place during the War years.
During this period, the country largely eliminated price controls and capital controls. We deregulated the rail industry, the trucking industry, the airline industry, the shipping industry, the telecommunications industry, the financial services industry, the healthcare industry, and the energy industry. This strategic deregulation permitted greater competition, which in turn accelerated investment and innovation. Strategic deregulation was complemented by tactical regulation, which strengthened areas like product safety standards, consumer protection, and labor law.
Because we emerged from the War with a functioning economy, the U.S. began the post-War period with significant momentum driving its economic expansion compared to most other major economies. Deregulation further accelerated that growth during the 1970’s through the 1990’s. All told, the U.S. would sit atop the global economic ladder and experience 50 years of steady economic growth, interrupted only by temporary periods of recession.
Importantly, the benefits of post-War growth—the living experience of it—were enjoyed by most everyone. There was inequality but generally speaking most boats were floating higher on a rising tide.
Another factor contributing to post-War prosperity was the culmination of a century of accumulated science and technology innovation across most economic sectors, beginning around 1870. Throughout this time frame, transformative scientific discoveries were made in fields as diverse as pharmaceuticals, metallurgy, agrochemicals, electronics, biology, and physics. This broad tide of innovation drove great increases in productivity.
After 1970, innovation became increasingly restricted to the high technology and communication industries. While innovation was and continues to be dramatic in these two fields, these two sectors account for just 7% of U.S. GDP. At the same time, the pace of innovation in other key sectors such as agriculture, finance, and transportation slowed dramatically. Innovation continues, but many industries today sit atop their S-curves with their inventive heydays behind them.
During the 50-year period following the War, it was reasonable to conclude that the U.S. held a special and unique place in the world. It was easy to take for granted the multiple circumstances that created so much prosperity for so many, so quickly. Our corporations grew ever-larger, employing more and more people, and delivering to them more and more comforts. We reached the moon and defeated Communism. We conquered or contained centuries-old diseases such as smallpox, diphtheria, cholera, typhus, and malaria. Our universities were unrivaled, our scientists provided a constant flow of amazing discovery. At any point along this timeline, the future looked bright.
Today, much has changed. Broad-based, easy gains in productivity may be behind us. We have to make harder choices. We have to live within constraints we did not anticipate or wish to acknowledge. There is more competition within our country and among global competitors. Digitization, the Internet, chip designs, and artificial intelligence are disrupting all our other industries. The U.S. still holds a unique and commanding position but the global order is disrupted and unpredictable. Our collective sense of exceptionalism has been upended by clear signs that we are one among many.
Which brings us to the paradox that defines life in the digital age.
The paradox of prosperity
Let’s restate the paradox of prosperity: despite pervasive evidence that life today is better than ever before in history, our subjective experience of life can feel—take your pick—unsettled, uncertain, or just plain anxiety-riddled.
How do we reconcile these contradictory ideas? The first step is to establish a credible description of the problem. Matt Ridley, a former editor of The Economist, did just that in a speech to the Long Now Foundation entitled “Deep Optimism,” a great starting point on this topic. If we acknowledge that life has improved in general, what is the basis for all this pessimism? The answer begins in the abstract. Picture life as a black box with inputs, processes, and often unpredictable outputs.
Life’s inputs are people, environment, and context (an individual’s circumstances at birth and beyond). The processes of life influence and transform people’s lives, everything from parenting and education to employment and law. These processes impact the outputs: your quality of life. When we talk about the evidence that life today is better than ever, we are talking about measurements of these outputs, everything from GDP to educational attainment to average lifespan.
As we presented above, outputs have improved across the board for most Americans and for many around the globe. If all the outputs have improved, perhaps the perception of life’s difficulties and challenges resides within the processes that generate the outcomes—learning, socializing, protecting, working, parenting, healthcare, governance, and so on.
The sense of generalized anxiety so many people experience is a byproduct of the very complexity that emerges from increased productivity. We have access to incredible amounts of information to solve very targeted problems but this same ocean of information threatens to drown us. With greater complexity, we are less in control.
To dig deeper, we need to look at a representative example of complexity then and now. Let’s use the example of financing the purchase of a home in 1970 versus today.
To buy a house in 1970, you had to save enough for a down payment and then select from two mortgage choices: a 15-year fixed-rate mortgage or a 30-year fixed-rate mortgage. Then, as now, that decision was monumental and consequential because a home is typically the single largest investment most people make in their lives.
In 1970, while your choice was simple (basically two mortgage options), it was also constrained. You had to have enough money saved for a sizable down payment as well as closing costs. You had to document a steady and secure income and an unblemished credit record. It was difficult to get a mortgage back then, with fewer mortgage lenders from which to borrow. Fewer choices and less risk.
In 2017, you have many more mortgage choices: fixed-rate, adjustable-rate, and balloon mortgages, as well as 10-, 15-, 20-, 30-, and 40-year terms. There are many more providers than there were then, supplemented by countless mortgage brokers and online services. There are many options in down payment rules and credit requirements. It is much easier to borrow more than you can afford these days. What was a simple choice in 1970 has turned into a series of rather complex decisions for you to make.
Importantly, the consequences of poor decisionmaking are now far greater than they were back then. It was much harder to get a mortgage in 1970 because the requirements were more stringent. But because they were more stringent, you were also much less likely to default with all the negative consequences that creates.
This change in consequentiality became tragically apparent in 2008 with the bursting of the real estate bubble. Millions of families lost their homes or declared personal bankruptcy for many different reasons but often simply because they had not properly understood the liabilities and risks of their mortgage decisions. Complexity and ambiguity created financial chaos that nearly collapsed the global economy.
Mortgages are just one example. You can see the same dynamics of complexity in college education (where to study, what to study, how to pay for it). Or in health insurance (HSA, PPO, HMO, HDHP). Or in choosing from the 39,500 different items in the average grocery store. The list goes on and on.
Common threads run through all of these examples. What you find is that today we have more choices, the context surrounding decisions is more volatile (past assumptions are not always reliable), there is greater uncertainty (not because of an absence of information but often because of an overabundance of information), and greater ambiguity (more fine print and nuance in terms).
Now, add isolation to this complexity. People in general have fewer friends and family members to call upon for advice and support, making us more responsible for our own lives than ever before. Family sizes and close friendship circles are smaller, church attendance is down—important because these institutions traditionally provided a buffer against life’s unanticipated setbacks. We are on our own in a world that is often volatile, uncertain, confusing, and ambiguous. Or at least that’s how it feels to a lot of people a lot of the time.
And so we find that, without intending to, we have collectively made a Faustian bargain. Across practically every area of life, we have more options to choose from and greater flexibility in achieving what we want. But at the same time we also bear a very real burden of greatly increased responsibility for the outcomes of those choices.
Part 2: Defining modern complexity
Complexity is central to the paradox of prosperity. It emerges from the rapid pace of change and the arrival of seemingly endless choices in practically every area of life. But it is not just the number of choices we face; it is the number of choices that carry great weight and consequentiality, yet are made with little outside support.
This pattern of choice and consequentiality can be seen in many areas of life. We are more responsible for our own retirement saving than in the past, as demonstrated by the decline of employer-provided pensions and the rise of personally-managed savings accounts like 401(k)s. Similarly, education choices are greater as are their costs. Healthcare planning is more complex and driven by our own choices.
And nowhere is complexity and the pace of change more apparent than in digital technology. To take just one example, there’s the increasing complexity and volume of communication that occurs in our connected worlds. Digital messaging technologies allow astounding opportunities for personal connection, independent of time and space. Yet the technologies have also created very real obligations in terms of hours in a day and number of emails, phone calls, text messages, and the like. When you consider that just 10 years ago, the first smartphones were hitting store shelves, the pace at which we’ve adjusted our lives to accommodate their capabilities is astounding.
Financial planning, education, healthcare, and technology are not carefully selected examples. We see the same dynamics of complexity throughout our day-to-day lives. Here are some more concrete manifestations of increased choice and consequentiality.
It is important to note that, for example, the 4x increase in items in the supermarket is neither inherently good nor bad. In fact, choice in life is generally considered a positive—until it overwhelms us and leads to analysis paralysis.
As the data suggests, today’s world for the most part offers us more choices. We have more items to choose from at the grocery store, we dine out more often, we live in larger homes, and those homes are equipped with more labor-saving and electronic devices than before.
The number of unique items for sale in a typical grocery store rose nearly four-fold between 1950 and now. This is a good thing in terms of increased choice. However, it is a challenge in terms of efficiency. Here’s an example: You are sent to the grocery store to “pick up an extra package of spaghetti.” You get there and discover that you have a dozen brands to choose from, you realize you don’t know the difference between no. 8 and no. 9 spaghetti, and you aren’t sure whether to buy the regular or gluten-free version. The consequences of making a bad decision aren’t grave. But a decision has to be made nonetheless.
Eating out is a pleasant indulgence but again: “Where do you want to eat?” “I don’t know, where do you want to eat?” This is trivial, until you add up all of the time we spend making all of these decisions.
Homes that are 2.7x the size of those in the 1950’s represent a rising degree of prosperity but that extra space also means more decorating decisions, more decisions about home security, energy consumption, furnishings, and so on. A corollary to home size is the number of electronic devices. Not just straightforward things like a TV or a computer but also a microwave, an alarm clock, a thermostat, a speaker. Each one of these electronic devices has attendant complexities – whether to buy a warranty, learning how to use it, figuring out how to connect one with another. We take much of this day-to-day complexity for granted, but there are hidden costs.
There are other forms of complexity. We are living more densely than we used to, with nearly half again as many neighbors as we had in 1970. More neighbors mean more interactions, but also more chances for contention and conflict.
Then there’s employment. In Agriculture, we have been moving off the farm and into cities for more than a century. Manufacturing has been shedding workers for 50 years. Where are these workers headed? Into the Service sector, which is nearly 30% larger today than in 1970—a figure with a direct link to complexity. There is complexity in Agriculture and Manufacturing, but it tends to develop slowly, with long cycle times. The expertise required to drive a tractor or work on an assembly line changes slowly. In contrast, Service sector jobs, roles, processes, and skills tend to evolve quickly. The more employment is concentrated in the Service sector, the greater the uncertainty and volatility for more people.
What is so interesting about complexity is how much the picture of life changes the farther you zoom out from these individual data points. We have become so accustomed to many of the characteristics we discuss here that we take them for granted. Only with some distance can we see what makes the modern world “modern,” and the complexity and consequentiality that define so many of our decisions.
Part 3: Confidence in our institutions
Among the mechanisms by which confidence contributes to economic, societal, and personal development is efficiency. When there is high confidence, a handshake can take the place of a contract, saving time and money. The lower the level of confidence, the more precautions have to be taken and the more you have to focus on managing risk.
Gallup has been monitoring trust and confidence in institutions for decades. They cast their survey questions in terms of confidence: “Please tell me how much confidence you, yourself, have in each one — a great deal, quite a lot, some, or very little?” They started asking these questions in 1973 and have added institutions over time. The results are insightful and tell us a great deal about how complexity has impacted our day-to-day lives.
There are some bright spots. Today, we are more confident in Small Business, the Police, the Military, and the Scientific Community than when polling began decades ago. Confidence in Small Business has always been high and remains so. The Military scores the highest confidence at 73%. The Scientific Community, on the other hand, elicits only a moderate degree of confidence; but confidence in the community has risen 11% since 1973.
Confidence is eroding in most of the institutions covered in the survey. These include institutions that are part of most people’s everyday lives like Banks, Television News, and the Medical System, as well as government and public services like Congress, the Presidency, and Public Schools.
Today, Congress earns the lowest confidence score at just 9%, a 79% erosion from when the survey began in 1973. Television News is not faring much better, with confidence falling from 46% in 1993 to 21% in 2016.
In 1993, new categories such as the Medical System and the Criminal Justice System were added to the survey. The Medical System received an 80% confidence vote, the highest of all categories in the survey. That number has since declined by slightly more than half to just 39%. In the same year, the Criminal Justice System scored the lowest at only 17%, though it has seen modest increases since, to 23% in 2016.
In sum, we know that trust and confidence are critical elements in a healthy society and economy and that high levels of trust and confidence contribute to productivity and overall prosperity. As important as these attributes are, the past several decades have seen overall declines in confidence in most of our institutions.
Which brings us back to the idea of complexity. The everyday experience of this data is troubling. Day to day, we have little confidence in the veracity of a speech in Congress, or the fairness of a judicial ruling, or the motivations of the bank that holds our savings. So our experiences of these interactions are strained and overly complicated.
Part 4: Family, friends, and community
Today, after nearly 50 years of rapid social evolution, the U.S. has far more variety and diversity than ever before. Here is another face of the complexity we’ve been describing: the changes to traditional family structures, friend networks, and communities. The rise of single parenthood and the decline of the middle class are two aspects of these changes.
The U.S. in the 1960’s had more consistency in its family and friendship structures. For example, 87% of families included two parents, according to Pew Research Center; by 2014, that figure had declined to 61%. Similarly, in 1985, 78% of survey participants reported having more than one friend, whereas today that figure is much lower, at only 53%.
The share of households with only a single person has risen from 13% in 1960 to 28% in 2016. The share of people with many friends is small (15%) and is down 72% from where it was in 1985. While post-Internet communication technologies created the capacity for broad virtual relationships, the data nonetheless tells a very different story of personal connection, or the lack of it. When you factor in that labor force participation is declining, more and more people are essentially disengaged, not just from friends and family but from meaningful interaction with others.
Next is money. While U.S. real median household income is high by global standards, it has also been relatively stagnant for the past twenty years. Importantly, this stagnation comes after fully 50 years of steady increases. The reliability of these income increases and the fact that they were spread quite widely across social tiers was a major factor underpinning the “American Dream” itself.
The importance of rising income is not solely monetary. Higher income creates options for the wage earner and helps buffer households from unexpected shocks and setbacks. Today, by contrast, a static income, even a high one, represents a decline in expectations alongside a practical reduction in the capacity to absorb unexpected setbacks.
There are complex aspects to educational attainment as well. The percent of the population with a college degree has been rising steadily since 1940. That figure is higher than ever today at 32.5%. While a more educated workforce is a good thing from a societal perspective, from an individual perspective, that simply means more competition. When 5% of the population had a college degree, that represented a near-guarantee of high-paying, opportunity-filled work. At 32.5%, the degree represents a less valuable credential and less of a guarantee for future prosperity.
As we saw with the general erosion of confidence in social institutions, the true impact of these changes is best understood with some distance. Single data points do not tell the whole story. Taken together, rapid changes to the nature of our connections to friends and families have created for many people a sense of…something. Nostalgia for a better yesterday. Confusion and distrust towards today. Or in some cases the widespread anger that propelled an outsider to the Presidency in the 2016 election.
Of the facets of complexity that we have covered in this research, the changes to our social fabric are the most personally felt. But they follow the same pattern we’ve described in other areas. Rapid change introduces complexity to our lives, and that complexity serves as a headwind to our efforts to increase our productivity and, in turn, our prosperity.
But as we stated at the outset, our goal is not simply to quantify complexity in our lives, but to discover a way past the paradox of prosperity toward better living. That path starts with awareness and knowledge of how to deal with the tectonic shifts taking place in technology.
Will tomorrow’s technologies transcend the paradox?
The thing with complexity is that it’s complicated. Complexity carries a cognitive burden that we experience whenever we observe the rapidly evolving world. The faster the world changes in terms of social conventions, technology, and commercial needs, the more we have to prepare for change. Every iteration of change is both an opportunity to excel and a risk of failure. No wonder there is so much anxiety.
So what is to be done?
One solution to the paradox of prosperity is simply to accept a simpler life, even though that is likely to entail lower productivity. The other is to master advanced technology to help bridge the gap between increased complexity and increased productivity.
Historically, new innovations, in particular communication technologies that enable new forms of connection and collaboration (from telegraph to radio to the Internet), have correlated to dramatic growth in global GDP. The advent of the telephone, for instance, led to widespread behavior change: a certain portion of a person’s time was now dedicated to connecting on the phone with friends, family, and customers.
Today, the “phone” we carry represents a massive focus of our time, for not only basic communication but also reading, information retrieval, photography, games, music, and so on. As with so many aspects of modern life, today’s technologies also represent a massive increase in complexity over what existed only a few years ago. First there is disruption, then widespread benefit.
What is interesting is that even though technology can cause complexity, it is also our best resource for simplification. We are only decades into the Internet era, while the impacts of artificial intelligence and robotics have barely been felt. Developments in computing, smart machines, and automation carry tremendous potential to directly improve people’s lives.
In recent years, technology has increased personal capabilities and productivity even as it has increased complexity. To take just one example, getting all our devices to seamlessly talk and sync with one another is a daunting task for most people. But there is a predictable cycle here. Today we’re in the early stage of the adoption of a long list of new technologies—from smartphones to artificial intelligence to social media to robotics. And, consistent with past adoption cycles, we’re in the difficult, sometimes confusing, disruptive stage. New capabilities carry a cost in complexity as we figure out how to fit them into our lives.
Current-generation technologies are powerful—powerfully capable and powerfully distracting—and in need of refinement. The good news is that we are at the cusp of a shift where that very technological capability starts to bring greater simplicity to our lives, as a new generation of technologies hides the technical complexity away behind screens and shells. Which leaves us with much simpler interfaces that make getting more accomplished faster and easier.
All of this relates directly to the paradox of prosperity. The generations-long changes we have described, the day-to-day complexity they have created, are at a tipping point. Much of this complexity can be untangled with advances in a new generation of technologies. When that refinement comes, all of the time we spend every day managing devices, fiddling with settings, troubleshooting problems, and so on will be freed up to spend however we want.
Advanced technology that insulates us from complexity will free up that time even as it helps us make better choices, ultimately empowering people to live and work better. And there we see the narrow path out of the paradox of prosperity: advances in artificial intelligence, robotics, biotechnology, and other technical areas that support a streamlined and simplified life, a life lived with meaning and purpose, that is savored not rushed, deep not shallow. Where we can be the best versions of ourselves.
For anyone following headlines about the collection and monetization of personal data, the Cambridge Analytica news didn’t come as much of a surprise. Entefy routinely covers developments related to what we call “data trackers,” computer systems and devices designed to go unnoticed as they record your sensitive private data. These slides cover 9 examples of digital tracking and surveillance happening right now on smartphones, websites, and even airport security lines.
What is the Mimi AI Platform and how does it boost productivity? How does Entefy Communicator address the information overload that complicates our digital lives? What do Entefy’s 48 combined filed and issued patents cover?
You’ll find answers to these questions and more on Entefy’s new website. Dive deeper into the company’s mission to save people time so they can live and work better, and learn more about the advanced machine intelligence solutions our team has developed to make it happen.
Here are highlights of what’s new.
Entefy data sheets
Downloadable access to data sheets on Entefy intelligence and productivity solutions:
Entefy Illuminate
AI-powered knowledge platform
Entefy Find
Universal search and discovery technology
Entefy Insight
Valuable data intelligence on every screen
Entefy Communicator
All-in-one communication and collaboration
Mimi SmartAgent
“Show me the future of artificial intelligence”
Entefy Secure
Next-gen cybersecurity and adaptive privacy controls
Entefy Visualize
Intelligent data monitoring dashboard
Demo signup
For industry leaders and decisionmakers interested in scheduling a technology demo, we’ve added signup forms to get you started.
Design
A new modular design to help you navigate the fast-changing worlds of artificial intelligence, digital communication, search, cybersecurity, data privacy, IoT, and blockchain.
And be sure to check out the Entefy blog for data-driven insights on all these topics and more.
When it comes to health, technology is making a big impact. Yet not all of it is positive. There are increasing signs that technologies like smartphones and social media are causing physical and mental health problems. Data suggests that technology use (and especially overuse) is linked to everything from developmental issues to increased accident risk to recurring headaches.
In spring 2017, the world got its first glimpse of extracorporeal incubation for fetuses – also known as growing a mammal outside of its mother’s body. Scientists successfully grew eight lambs in large, protective sacks that were hooked up to machines providing amniotic fluid and oxygen. Seeing a baby sheep in an artificial womb might be startling, even off-putting, but the experiment’s success made one thing clear. The potential for deliberately created life grown outside the body is here, and it’s only a matter of time before it extends to humans.
The researchers who developed the BioBag (the sack in which the lambs grew) say their goal is to create circumstances in which severely premature human children can flourish. By providing them a safe, womb-like environment in the crucial early days of their lives, scientists could save children who are born too early to survive on their own.
Once you’ve accepted the concept of an artificial womb for premature babies, it’s not too far a leap to imagine an incubator that nurtures life from its very earliest stages. Last year, a group of students participating in the Biodesign Challenge Summit introduced a concept for a crib designed to grow babies outside the womb. The crib, or pod, would allow parents to create a baby and then watch it grow until it was ready to be “born.”
Such a device could prove life-changing for couples that want to have a baby but can’t carry a child for whatever reason. But growing children in artificial wombs puts humans in uncharted territory, biologically, legally, and ethically. As technology opens new doors for what we can achieve when it comes to conceiving and growing healthy babies, we must grapple with the questions it raises. From artificial wombs to gene editing, technology may cause us to reframe how we think about human reproduction.
The life-saving potential of artificial wombs
While the thought of babies floating in fluid-filled bags might make you uncomfortable, it’s worth understanding how they work. In the case of the BioBag concept, a premature child – say, one who is born at just 24 weeks – would be transferred immediately from his mother’s womb to an artificial version filled with synthetic amniotic fluid.
Rather than being hooked up to ventilators and IV drips, the child would continue to grow in a simulation of its previous prenatal environment. Importantly, the baby would pump its own blood into an oxygenator instead of being placed on an external pump that could damage its heart. Continuing to grow within the amniotic fluid also safeguards against infection and gives organs such as the intestines and lungs more time to develop healthily.
Premature children face significant health consequences, including vision and hearing loss, respiratory distress, and dangerous infections such as sepsis and meningitis. Although artificial wombs are still several years away from becoming a reality, their potential for saving lives is real.
But such devices also raise a number of ethical and philosophical questions about what happens when you can grow a baby in a bottle. If an artificial womb could be optimized for an embryo’s development – meaning it receives the necessary nutrients and gene activation needed for healthy growth – and reduce the baby’s exposure to environmental stressors, then is there an ethical case to be made for those devices over natural pregnancy? If your baby develops outside your body, does that lessen the bond between you and the child? If children can successfully gestate in an artificial womb, how does that change the way we think about an unborn baby’s viability?
Does the future belong to designer babies?
Whatever your feelings on artificial wombs, another transformative technology is rapidly becoming a reality. Gene editing may hold the key to curing and even eliminating painful illnesses and conditions. Scientists in China are already experimenting with Crispr-Cas9 gene editing technology for treating patients with HIV and cancer.
But gene editing could affect unborn humans as well. Prenatal testing is currently precise enough to detect most chromosomal abnormalities. These tests indicate a child’s potential for being born with fatal, painful conditions, and they allow parents to make early decisions about how best to help their babies and whether to carry them to term. Learning that your child will be impaired is devastating for parents, and it thrusts them into very difficult conversations with one another and their doctors.
Proponents of gene editing believe this technology will alleviate that distress for both parents and their children. As these tools become more sophisticated, doctors may be able to correct mutations that lead to severe disorders before a child is even born. Last year, scientists in the U.S. successfully corrected genes for a heritable heart condition in human embryos. In the future, they may be able to edit genes for a range of other diseases as well.
The mere potential for gene editing raises another host of ethical questions. Some people will see it as “playing God,” while others fear a mania for editing embryos into “designer babies.” But the latter is less likely than most people believe. Editing a single gene to avert disease is one thing; optimizing an unborn child’s DNA to make them smarter, more attractive, or more creatively inclined is far more difficult.
As of now, it’s unlikely that embryonic editing would even be able to touch conditions such as mental health disorders. But it could correct mutations for devastating diseases like Huntington’s, early-onset Alzheimer’s, and certain cancers. The hope is that by eliminating the mutation in one child, all of his or her descendants will be spared as well.
With great technology comes great responsibility
Neither artificial wombs nor gene editing are in use today, and it will likely be years before parents-to-be will face the prospect of using these tools in their pregnancies. But the questions raised by both are urgent. We know how quickly technology advances, and if we don’t begin discussing the ethical and philosophical implications today, these realities can catch us unawares.
Both artificial wombs and gene editing hold remarkable potential for improving children’s health outcomes. But they also change our perception of what it means to create life. Reproduction is the most human act we can imagine, and we should be talking now about how technology is quickly changing our perceptions of who we are.
Bitcoin commanded breathless headlines through much of 2017. The cryptocurrency enthralled both naysayers and crypto-enthusiasts with its skyrocketing valuations and speculation over another “tulip mania” in the making. This year, Bitcoin’s valuation landed it at the center of a media storm once again, only this time, the numbers are falling.
But fixating on Bitcoin’s investment potential distracts from a much more urgent and far-reaching issue. Bitcoin mining, the computing process needed to generate the cryptocurrency, is a massive energy hog. And that’s bad news for a planet that grows warmer by the year.
What is Bitcoin mining?
When transactions occur digitally, they don’t always feel “real.” That’s why people view digital data threats with apathy while they fret tirelessly over physical break-ins. We know that data hacks happen all the time and that there’s a good chance we or someone close to us will become a cybercrime victim. Yet we fail to safeguard our accounts with the same vigilance we apply to our physical property.
So, unless you’ve staked your life savings on Bitcoin, how it performs won’t make much of a tangible impact on you. But here’s what will. By some estimates, Bitcoin mining burns through 32 terawatt-hours of energy each year. To put that into context, that’s more energy than all of Ireland uses in a year. In fact, Bitcoin mining draws more power than 159 individual countries. While that level of use isn’t causing a crisis yet, the growing appetite for Bitcoin is driving up demand, and meeting that demand requires more power.
To understand how a digital currency can dramatically impact the environment, you need to understand how it works. So let’s break down what we mean by Bitcoin mining. As a cryptocurrency, all Bitcoin transactions happen digitally. When someone initiates a Bitcoin transaction, whether they’re buying or selling, that activity gets recorded on a blockchain. You might recall from previous Entefy articles that a blockchain is a digital ledger that uses cryptography to verify and secure transactions. Blockchains are very difficult to hack, which is why the technology is seeing increased use in cybersecurity and other fields.
Blockchains are also ideal for use in digital currencies because they allow for decentralization. Blockchains operate 24/7 without the need for human intervention. This helps prevent corruption and theft, and it allows exchanges to happen at all hours, all over the world.
Mining is the process that allows transactions to be added to the blockchain in the first place. Specialized computers “mine” pending transactions, translating them into mathematical problems that must be solved before the transactions are approved. When one computer grabs a bundle of transactions, it sends a signal to the other mining computers in the network.
The others then verify that the transaction is sound. For instance, they’ll check that a buyer has enough Bitcoin to pay for the trade. They’ll also solve the math problems created by the initial miner, which enhances the security of the exchange. Once that process is complete, the transaction gets recorded on the blockchain.
The miners who successfully solve the math problems receive Bitcoin as a reward. If you’re a true believer in the future of cryptocurrency or are seduced by Bitcoin’s potential, then you’re strongly incentivized to mine continually.
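For a concrete picture of why mining is so energy-hungry, here is a deliberately simplified proof-of-work sketch. Real Bitcoin mining double-hashes a block header with SHA-256 at vastly higher difficulty; the function names and the toy transaction below are illustrative only:

```python
import hashlib

def mine(prev_hash: str, transactions: str, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce so the block's hash starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        block = f"{prev_hash}|{transactions}|{nonce}".encode()
        digest = hashlib.sha256(block).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # any other miner can verify this with one hash
        nonce += 1  # every failed guess is wasted computation, and electricity

nonce, digest = mine(prev_hash="0" * 64,
                     transactions="alice->bob:0.5BTC",
                     difficulty=4)
print(f"nonce={nonce:,} hash={digest}")
```

The asymmetry is the point: finding a valid nonce takes enormous amounts of guessing, while verifying it takes a single hash. That guessing is the “work” in proof-of-work, and at network scale it is what consumes all that electricity.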
This process may seem simple – or at least not too energy intensive – but consider that there are millions of known mining computers (1.65 million of which may contain malware used by hackers). As Bitcoin awareness increases, so do the number of people who want to buy and trade it. The more transactions that take place, the more work the mining computers must perform. And all that computation takes energy. Lots of energy.
Bitcoin versus renewable energy
One of the biggest concerns raised about Bitcoin’s energy use is that the demands are so great and so urgent, mining farms will necessitate continued dependence on fossil fuels. This is particularly alarming in countries such as China, which is home to some of the biggest Bitcoin mining operations on the planet. A single Bitcoin mine in Inner Mongolia consumed the same amount of power as a Boeing 747. Multiply that by several million, and you can understand why people fear a clash between Bitcoin and the environment.
The situation in China is exacerbated by the use of coal-powered energy plants. Many Chinese Bitcoin mines are located in rural areas where coal is a cheap and readily available resource. Since energy is the number one cost in Bitcoin mining operations, it makes sense to seek out the cheapest resource. But that trend comes at a cost. As governments around the world struggle to move away from fossil fuels, Bitcoin is driving up the need for them.
Even if China’s mines shut down following a recent government crackdown on energy used for mining, the global mining problem won’t go away any time soon. China may have the biggest mines, but it’s certainly not the only country where they operate. And the shutdown of Chinese mines would likely create a vacuum for miners in other parts of the world to fill.
Optimists and cryptocurrency defenders note that some mines draw from multisource grids, reducing the amount of fossil fuels used to process Bitcoin. Indeed, companies are exploring options for building waste-to-energy crypto-mines and other environmentally-sound mining methods. Some mining operations already use clean energy, pulling from hydroelectric dams in China and experimenting with electric cars to power a small mining setup.
But critics see two problems with this approach. First, while using a multi-source grid sounds like a step in the right direction, as demand accelerates, mines may be forced to fall back on fossil fuel sources to keep pace, which would slow the transition away from coal and other carbon-intensive sources.
The other criticism is that even when mines use clean energy, they’re doing so at the expense of providing that energy to people’s homes or charging electric, eco-friendly cars. At the heart of such viewpoints is skepticism that Bitcoin will ever amount to a widely-used currency. If you don’t see Bitcoin as offering the world any long-term benefit, then the environmental cost is simply too steep. Given that global governments are already struggling to slow climate change and are well behind on those efforts, some see Bitcoin as an unnecessary drain on an overtaxed energy supply.
Proceeding with caution
The environmental consequences of Bitcoin mining are real. But that doesn’t mean we should abandon the concept just yet. Cryptocurrencies hold real potential for providing financial services to the underbanked and creating faster, more secure transactions for everyone.
However, we are in uncharted territory, which gives rise to growing calls for increased oversight of Bitcoin and other digital currencies, especially given the potential environmental impact.
Healthcare costs have long been a thorn in the side of U.S. businesses. But there’s another significant expense category that dwarfs health spending: low-quality data.
According to data from the U.S. Centers for Medicare & Medicaid Services, private health insurance expenditures by U.S. businesses totaled $1.1 trillion in 2016. That’s a big number, but not necessarily a surprising one. The ever-increasing size of these obligations is one of the reasons Amazon, Berkshire Hathaway, and JPMorgan are trying to do an end run around traditional health insurance by offering their own employee health coverage.
Now consider another outsized business expense that gets far less attention. It has been estimated that in 2016 U.S. companies wasted $3.1 trillion on bad data. Said one analyst:
The reason bad data costs so much is that decision makers, managers, knowledge workers, data scientists, and others must accommodate it in their everyday work. And doing so is both time-consuming and expensive. The data they need has plenty of errors, and in the face of a critical deadline, many individuals simply make corrections themselves to complete the task at hand.
The fact that bad data costs U.S. companies 182% more than they spend on employee health coverage ($3.1 trillion versus $1.1 trillion) puts a fine point on the eternal challenge of data management within an organization.
Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you.
Successful entrepreneurs and professionals are often seen as having above-average willpower, the capability to overcome challenges through sheer force of will. So if willpower and success are linked, what exactly is willpower, and how can it best be developed and deployed? The answers to those questions start with something rather surprising: marshmallows.
One of the most well-known studies of delayed gratification was conducted in 1972 by psychologist Walter Mischel. It has since become known as the Marshmallow Experiment. In the experiment, a child between the ages of 3 and 5 is seated at a desk empty except for a marshmallow. The child is told that if they can restrain themselves from eating the marshmallow for 15 minutes, they will receive a second marshmallow as a reward.
Video from the experiments revealed child after child struggling against their most basic instinct to eat the marshmallow instantly. Picture the grinding of teeth and the covering of eyes. It turned out to be too much to ask of many of the kids. But Mischel was not aiming to see what percentage would succeed—he wanted to know what strategies the successful kids employed. What he found was that the successful children were the ones who could divert their attention.
The kids that held out long enough to be rewarded with a second treat were those who could take their minds off of the first one most effectively, whether by covering their eyes or engaging in cognitive distractions. Those whose attention bore down fully on the tasty temptation rarely mustered the same restraint. All of which suggests that willpower is less about holding back and more about turning away.
Willpower and personal goals
There are a few ways you can utilize this more nuanced view of willpower. The first step is to identify your triggers and temptations, then find ways to avoid them outright. For example, if you find yourself too engrossed in social media throughout the day, try moving those apps off of your home screen to avoid unnecessary temptation when you look at your device. Turning off notifications is a great way to regain your energy and focus. The less you are exposed to the things you want to avoid, the easier it will be to avoid them.
For those temptations you cannot avoid outright, try making it harder to give in. When you’re done with your favorite fantasy football site, log out so that your next visit requires extra effort. Likewise, make positive choices easier to follow through on. For instance, take your workout clothes with you to work so that you don’t have to go back home first (and possibly never leave). Consider avoiding impulsive food temptations by preparing a weekly dinner plan ahead of time so that you don’t have to make difficult choices about what to cook in the moment.
And don’t forget the lesson of the Marshmallow Experiment. The best way to fight temptation might just be to cover your eyes and think of something unrelated. The quicker you can divert your gaze and start thinking about why a “W” is called a “double-u” and not a “double-v,” the more likely you will forget what it was you were struggling with.
Putting yourself on the right path
Follow-up studies by Mischel and others found that the children who resisted the first marshmallow went on to greater success in other areas of life than those who gave in too early. Everything from school grades to health seems to improve as people learn to say no to the things they desire. An analysis of 102 self-control studies, with a combined 32,000 participants, confirmed that self-control relates to a number of positive behaviors and outcomes.
All of which boils down to the idea that self-control alone isn’t as important to success as we may tend to believe. Instead it could be that the most successful are those who learn to avoid temptation better than others. The lesson of willpower is that designing an environment around us that utilizes positive distractions is more effective than relying solely on sheer determination.