Artificial wombs and designer babies: Is this humanity’s future?

In spring 2017, the world got its first glimpse of extracorporeal incubation for fetuses – also known as growing a mammal outside of its mother’s body. Scientists successfully grew eight lambs in large, protective sacks that were hooked up to machines providing amniotic fluid and oxygen. Seeing a baby sheep in an artificial womb might be startling, even off-putting, but the experiment’s success made one thing clear. The potential for deliberately created life grown outside the body is here, and it’s only a matter of time before it extends to humans.

The researchers who developed the BioBag (the sack in which the lambs grew) say their goal is to create circumstances in which severely premature human children can flourish. By providing them a safe, womb-like environment in the crucial early days of their lives, scientists could save children who are born too early to survive on their own.

Once you’ve accepted the concept of an artificial womb for premature babies, it’s not too far a leap to imagine an incubator that nurtures life from its very earliest stages. Last year, a group of students participating in the Biodesign Challenge Summit introduced a concept for a crib designed to grow babies outside the womb. The crib, or pod, would allow parents to create a baby and then watch it grow until it was ready to be “born.”

Such a device could prove life-changing for couples who want to have a baby but can’t carry a child for whatever reason. But growing children in artificial wombs puts humans in uncharted territory, biologically, legally, and ethically. As technology opens new doors for what we can achieve when it comes to conceiving and growing healthy babies, we must grapple with the questions it raises. From artificial wombs to gene editing, technology may cause us to reframe how we think about human reproduction.

The life-saving potential of artificial wombs

While the thought of babies floating in fluid-filled bags might make you uncomfortable, it’s worth understanding how they work. In the case of the BioBag concept, a premature child – say, one born at just 24 weeks – would be transferred immediately from its mother’s womb to an artificial version filled with synthetic amniotic fluid.

Rather than being hooked up to ventilators and IV drips, the child would continue to grow in a simulation of its previous prenatal environment. Importantly, the baby would pump its own blood into an oxygenator instead of being placed on an external pump that could damage its heart. Continuing to grow within the amniotic fluid also safeguards against infection and gives organs such as the intestines and lungs more time to develop healthily.

Premature children face significant health consequences, including vision and hearing loss, respiratory distress, and dangerous infections such as sepsis and meningitis. Although artificial wombs are still several years away from becoming a reality, their potential for saving lives is real.

But such devices also raise a number of ethical and philosophical questions about what happens when you can grow a baby in a bottle. If an artificial womb could be optimized for an embryo’s development – meaning it receives the necessary nutrients and gene activation needed for healthy growth – and reduce the baby’s exposure to environmental stressors, then is there an ethical case to be made for those devices over natural pregnancy? If your baby develops outside your body, does that lessen the bond between you and the child? If children can successfully gestate in an artificial womb, how does that change the way we think about an unborn baby’s viability?

Does the future belong to designer babies?

Whatever your feelings on artificial wombs, another transformative technology is rapidly becoming a reality. Gene editing may hold the key to curing and even eliminating painful illnesses and conditions. Scientists in China are already experimenting with CRISPR-Cas9 gene editing technology for treating patients with HIV and cancer.

But gene editing could affect unborn humans as well. Prenatal testing is currently precise enough to detect most chromosomal abnormalities. These tests indicate a child’s potential for being born with fatal, painful conditions, and they allow parents to make early decisions about how best to help their babies and whether to carry them to term. Learning that your child will be impaired is devastating for parents, and it thrusts them into very difficult conversations with one another and their doctors.

Proponents of gene editing believe this technology will alleviate that distress for both parents and their children. As these tools become more sophisticated, doctors may be able to correct mutations that lead to severe disorders before a child is even born. Last year, scientists in the U.S. successfully corrected genes for a heritable heart condition in human embryos. In the future, they may be able to edit genes for a range of other diseases as well.

The mere potential for gene editing raises another host of ethical questions. Some people will see it as “playing God,” while others fear a rush toward editing embryos into “designer babies.” But the latter is less likely than most people believe. Editing a single gene to avert disease is one thing, but optimizing an unborn child’s DNA to make them smarter, more attractive, or more creatively inclined is far more difficult.

As of now, it’s unlikely that embryonic editing would even be able to touch conditions such as mental health disorders. But it could correct mutations for devastating diseases like Huntington’s, early-onset Alzheimer’s, and certain cancers. The hope is that by eliminating the mutation in one child, all of his or her descendants will be spared as well.

With great technology comes great responsibility

Neither artificial wombs nor gene editing is in clinical use today, and it will likely be years before parents-to-be face the prospect of using these tools in their pregnancies. But the questions raised by both are urgent. We know how quickly technology advances, and if we don’t begin discussing the ethical and philosophical implications today, these realities could catch us unawares.

Both artificial wombs and gene editing hold remarkable potential for improving children’s health outcomes. But they also change our perception of what it means to create life. Reproduction is the most human act we can imagine, and we should be talking now about how technology is quickly changing our perceptions of who we are. 

Bitcoin mining consumes more power than 82% of countries

Bitcoin commanded breathless headlines through much of 2017. The cryptocurrency enthralled both naysayers and crypto-enthusiasts with its skyrocketing valuations and speculation over another “tulip mania” in the making. This year, Bitcoin’s valuation landed it at the center of a media storm once again, only this time, the numbers are falling.

But fixating on Bitcoin’s investment potential distracts from a much more urgent and far-reaching issue. Bitcoin mining, the computing process needed to generate the cryptocurrency, is a massive energy hog. And that’s bad news for a planet that grows warmer by the year.

What is Bitcoin mining?

When transactions occur digitally, they don’t always feel “real.” That’s why people view digital data threats with apathy while they fret tirelessly over physical break-ins. We know that data hacks happen all the time and that there’s a good chance we or someone close to us will become a cybercrime victim. Yet we fail to safeguard our accounts with the same vigilance we apply to our physical property.

So, unless you’ve staked your life savings on Bitcoin, how it performs won’t make much of a tangible impact on you. But here’s what will. By some estimates, Bitcoin mining burns through 32 terawatt-hours of energy each year. To put that into context, that’s more energy than all of Ireland uses in a year. In fact, Bitcoin mines draw more power than 159 individual countries. While that level of use isn’t causing a crisis yet, the growing appetite for Bitcoin is driving up demand, and meeting that demand requires more power.
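Estimates like these are usually quoted in terawatt-hours of energy per year (32 TWh in this case), and a quick back-of-the-envelope conversion shows what that means as a continuous power draw:

```python
# Convert an annual energy estimate into an average continuous power draw.
# The 32 TWh/year figure is the rough estimate cited above, not a measured value.
annual_energy_twh = 32
hours_per_year = 365 * 24  # 8,760 hours

# 1 TWh = 1,000 GWh, so dividing gigawatt-hours by hours yields average gigawatts
avg_power_gw = annual_energy_twh * 1000 / hours_per_year
print(f"Average draw: {avg_power_gw:.2f} GW")  # roughly 3.7 GW, on par with several large power plants
```

In other words, the network behaves like a multi-gigawatt appliance that never switches off.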

To understand how a digital currency can dramatically impact the environment, you need to understand how it works. So let’s break down what we mean by Bitcoin mining. As a cryptocurrency, all Bitcoin transactions happen digitally. When someone initiates a Bitcoin transaction, whether they’re buying or selling, that activity gets recorded on a blockchain. You might recall from previous Entefy articles that a blockchain is a digital ledger that uses cryptography to verify and secure transactions. Blockchains are very difficult to hack, which is why the technology is seeing increased use in cybersecurity and other fields.

Blockchains are also ideal for use in digital currencies because they allow for decentralization. They operate 24/7 without the need for human intervention. This deters corruption and theft, and it allows exchanges to happen at all hours, all over the world.

Mining is the process that adds transactions to the blockchain in the first place. Specialized mining computers gather pending transactions into candidate blocks, then race to solve a computational puzzle – finding a value that gives the block a hash below a set target – before the block can be approved. When one computer solves the puzzle for a bundle of transactions, it broadcasts the result to the other mining computers in the network.

The others then verify that the transactions are sound. For instance, they’ll check that a buyer has enough Bitcoin to pay for the trade. They’ll also confirm that the puzzle solution is valid, which enforces the security of the exchange. Once that process is complete, the transactions are recorded on the blockchain.

The miners who successfully solve the math problems receive Bitcoin as a reward. If you’re a true believer in the future of cryptocurrency or are seduced by Bitcoin’s potential, then you’re strongly incentivized to mine continually.
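In Bitcoin’s case, the “math problem” is a hash puzzle: find a nonce that makes the block’s SHA-256 hash start with a required number of zeros. Here is a minimal sketch with toy difficulty and illustrative block data, not the real Bitcoin protocol:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Search for a nonce whose hash begins with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1  # every failed guess is discarded work -- and consumed energy

nonce, digest = mine("alice pays bob 1 BTC", difficulty=4)
print(nonce, digest)
```

Each additional zero of difficulty multiplies the expected number of guesses by 16. Real Bitcoin mining runs at vastly higher difficulty across the whole network, which is where the electricity goes.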

This process may seem simple – or at least not too energy-intensive – but consider that there are millions of known mining computers (1.65 million of which may contain malware used by hackers). As Bitcoin awareness increases, so does the number of people who want to buy and trade it. The more transactions that take place, the more work the mining computers must perform. And all that computation takes energy. Lots of energy.

Bitcoin versus renewable energy

One of the biggest concerns raised about Bitcoin’s energy use is that the demands are so great and so urgent, mining farms will necessitate continued dependence on fossil fuels. This is particularly alarming in countries such as China, which is home to some of the biggest Bitcoin mining operations on the planet. A single Bitcoin mine in Inner Mongolia reportedly consumed as much power as a Boeing 747 in flight. Multiply that across mining operations worldwide, and you can understand why people fear a clash between Bitcoin and the environment.

The situation in China is exacerbated by the use of coal-powered energy plants. Many Chinese Bitcoin mines are located in rural areas where coal is a cheap and readily available resource. Since energy is the number one cost in Bitcoin mining operations, it makes sense to seek out the cheapest resource. But that trend comes at a cost. As governments around the world struggle to move away from fossil fuels, Bitcoin is driving up the need for them.

Even if China’s mines shut down following a recent government crackdown on energy used for mining, the global mining problem won’t go away any time soon. China may have the biggest mines, but it’s certainly not the only country where they operate. And the shutdown of Chinese mines would likely create a vacuum for miners in other parts of the world to fill.

Optimists and cryptocurrency defenders note that some mines draw from multisource grids, reducing the amount of fossil fuels used to process Bitcoin. Indeed, companies are exploring options for building waste-to-energy crypto-mines and other environmentally-sound mining methods. Some mining operations already use clean energy, pulling from hydroelectric dams in China and experimenting with electric cars to power a small mining setup.

But critics see two problems with this approach. The first is that, while using a multi-source grid sounds like a step in the right direction, accelerating demand may force mines to lean on fossil fuel sources to keep pace, which would slow down the transition away from coal and other carbon-intensive sources.

The other criticism is that even when mines use clean energy, they’re doing so at the expense of providing that energy to people’s homes or charging electric, eco-friendly cars. At the heart of such viewpoints is skepticism that Bitcoin will ever amount to a widely-used currency. If you don’t see Bitcoin as offering the world any long-term benefit, then the environmental cost is simply too steep. Given that global governments are already struggling to slow climate change and are well behind on those efforts, some see Bitcoin as an unnecessary drain on an overtaxed energy supply.

Proceeding with caution

The environmental consequences of Bitcoin mining are real. But that doesn’t mean we should abandon the concept just yet. Cryptocurrencies hold real potential for providing financial services to the underbanked and creating faster, more secure transactions for everyone.

However, we are in uncharted territory, which gives rise to growing calls for increased oversight of Bitcoin and other digital currencies, especially given the potential environmental impact.

What healthcare costs tell us about the value of data

Healthcare costs have long been a thorn in the side of U.S. businesses. But there’s another significant expense category that dwarfs health spending: low-quality data.

According to data from the U.S. Centers for Medicare & Medicaid Services, private health insurance expenditures by U.S. businesses totaled $1.1 trillion in 2016. That’s a big number, but not necessarily a surprising one. The ever-increasing size of these obligations is one of the reasons Amazon, Berkshire Hathaway, and JPMorgan are trying to do an end run around traditional health insurance by offering their own employee health coverage.

Now consider another outsized business expense that gets far less attention. It has been estimated that in 2016 U.S. companies wasted $3.1 trillion on bad data. Said one analyst:

The reason bad data costs so much is that decision makers, managers, knowledge workers, data scientists, and others must accommodate it in their everyday work. And doing so is both time-consuming and expensive. The data they need has plenty of errors, and in the face of a critical deadline, many individuals simply make corrections themselves to complete the task at hand.

The fact that U.S. companies spend 182% more on data than healthcare puts a fine point on the eternal challenge of data management within an organization. 
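The 182% comparison follows directly from the two totals cited above:

```python
bad_data_cost = 3.1  # trillion USD lost to bad data, 2016 estimate
health_cost = 1.1    # trillion USD in private health insurance expenditures, 2016 CMS data

excess_pct = (bad_data_cost - health_cost) / health_cost * 100
print(f"{excess_pct:.0f}% more spent on bad data than on health coverage")  # ~182%
```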

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you.

What temptation teaches us about willpower’s role in success

Successful entrepreneurs and professionals are often seen as having above-average willpower, that capability to overcome challenges through sheer force of will. So if willpower and success are linked, what exactly is it, and how can it best be developed and deployed? The answers to those questions start with something rather surprising: marshmallows.

One of the most well-known studies of delayed gratification was conducted in 1972 by psychologist Walter Mischel. It has since become known as the Marshmallow Experiment. In the experiment, a child between the ages of 3 and 5 is seated at a desk empty except for a marshmallow. The child is told that if they can restrain themselves from eating the marshmallow for 15 minutes, they will receive a second marshmallow as a reward.

Video from the experiments revealed child after child struggling against their most basic instinct to eat the marshmallow instantly. Picture the grinding of teeth and the covering of eyes. It turned out to be too much to ask of many of the kids. But Mischel was not aiming to see what percentage would succeed—he wanted to know what strategies the successful kids employed. What he found was that the successful children were the ones who could divert their attention.

The kids who held out long enough to be rewarded with a second treat were those who could take their minds off of the first one most effectively, whether by covering their eyes or engaging in cognitive distractions. Those whose attention bore down fully on the tasty temptation rarely mustered the same restraint. All of which suggests that willpower is less about holding back and more about turning away.

Willpower and personal goals

There are a few ways you can utilize this more nuanced view of willpower. The first step is to identify your triggers and temptations, then find ways to avoid them outright. For example, if you find yourself too engrossed in social media throughout the day, try moving those apps off of your home screen to avoid unnecessary temptation when you look at your device. Turning off notifications is a great way to regain your energy and focus. The less you are exposed to the things you want to avoid, the easier it will be to avoid them.

For those temptations you cannot outright resist, try making it harder to give in. When you’re done with your favorite fantasy football site, log out so that your next visit will require extra effort. Likewise, make positive choices easier to follow through on. For instance, take your workout clothes with you to work so that you don’t have to go back home first (and possibly not leave). Consider avoiding impulsive food temptations by preparing a weekly dinner plan ahead of time so that you don’t have to deal with difficult choices of what to cook in the moment.

And don’t forget the lesson of the Marshmallow Experiment. The best way to fight temptation might just be to cover your eyes and think of something unrelated. The quicker you can divert your gaze and start thinking about why a “W” is called a “double-u” and not a “double-v,” the more likely you will forget what it was you were struggling with.

Putting yourself on the right path

Follow-up studies by Mischel and others found that the children who resisted the first marshmallow went on to greater success in other areas of life than those who gave in too early. Everything from school grades to health seems to improve as people learn to say no to the things they desire. An analysis of 102 self-control studies, with a combined 32,000 participants, confirmed that self-control relates to a number of positive behaviors and outcomes.

All of which boils down to the idea that self-control alone isn’t as important to success as we may tend to believe. Instead it could be that the most successful are those who learn to avoid temptation better than others. The lesson of willpower is that designing an environment around us that utilizes positive distractions is more effective than relying solely on sheer determination.  

Entefy CEO Alston Ghafourifar speaks about AI and digital transformation with executives and policy experts from Germany

Innovation can spread globally through the exchange of ideas. This principle was on display recently as Entefy CEO Alston Ghafourifar met with a German delegation of executives and government officials in Palo Alto. Alston was invited to present his views on the past, present, and future of digital interaction—the sum of all human-to-human, human-to-machine, and machine-to-machine interactions in the modern workforce. 

The meeting took place during the delegation’s 4-day visit with Silicon Valley technology companies including Oracle, Google, SAP, Equinix, Salesforce, and Entefy, aimed at integrating the Valley’s technological know-how into frameworks meant to inform industrial planning and policy formation in their native Germany. The group sat down with Entefy to learn about cutting-edge advances in multi-modal machine intelligence and AI-powered communication, search, and cybersecurity.

Alston’s presentation focused on solutions to 3 central challenges facing the digital world today: ever-increasing information overload, organizational strains resulting from emerging technologies (such as machine learning), and the fragmentation of digital technology and data across platforms, standards, and channels.

The path forward, Alston argued, is to create a new generation of machine intelligence that operates using natural human communication and interaction behaviors—in contrast to the often-obtuse interfaces of legacy applications. By applying advanced AI and machine learning techniques, Entefy is building natural language interfaces that hide the complexity of the underlying AI systems, ultimately leading to machine cognition that augments people’s capabilities and boosts organization-wide productivity.

The presentation sparked a lively conversation amongst the delegation, Alston, and members of Entefy’s AI team. 

Digital security update: 10 cybersecurity and privacy threats [SLIDES]

As much as we all enjoy a laugh at “lost Nigerian prince” email scams, more than $12 billion is lost annually to phishing scams. Add in malware, ransomware, cryptojacking, and accidental password exposure, and the need for constant vigilance in our digital lives becomes clear. To help you stay informed, we’ve assembled these 10 examples of cybersecurity and data privacy threats.

The original research in this presentation comes from the Entefy article, “10 cybersecurity and privacy threats that will make you miss Nigerian prince and lottery email scams.”

Want transparency in government? Think blockchain in 2020.

In this era of divisive national politics, it’s worth remembering that we all have a shared interest in improved government transparency, efficiency, and security. So it’s perhaps not surprising that blockchain, the distributed digital ledger technology, is being cited as a powerful tool for improving the operations of government.

Now that blockchain technology has emerged from Bitcoin’s shadow, it could transform the government’s ability to protect personal data like Social Security numbers or tax records. But its potential impact doesn’t stop there. Blockchain could also provide solutions to prevent voter fraud, corruption, and financial waste.

Consider the case of Andhra Pradesh, the Indian state that is piloting blockchain solutions for land record management and vehicle registration. It has even introduced blockchain-secured financial transactions, with the goal of reducing inefficiency and corruption. While Andhra Pradesh is a long way from Washington, DC, blockchain could soon become a global solution to longstanding government challenges.

The potential for blockchain in government

With their distributed structure, blockchain systems have no central authorities that control what information gets added to the record, and once a transaction is added, it’s very difficult to alter. Blockchain records are encrypted using secure algorithms, making it difficult to fake or delete transactions. They’re also auditable, allowing outside parties to see and verify the records.
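The “difficult to alter” property comes from hash chaining: every block stores the hash of the block before it, so editing any past entry invalidates everything that follows. A minimal sketch in Python, with illustrative records only (real systems add consensus and networking on top):

```python
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    """Hash a block's payload together with its predecessor's hash."""
    return hashlib.sha256(f"{prev_hash}|{payload}".encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # genesis predecessor
    for payload in records:
        h = block_hash(prev, payload)
        chain.append({"payload": payload, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Recompute every hash; any edit to history breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["payload"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

ledger = build_chain(["grant #1: $10,000 to parks", "grant #2: $5,000 to roads"])
print(verify(ledger))                                # True
ledger[0]["payload"] = "grant #1: $90,000 to parks"  # tamper with history
print(verify(ledger))                                # False -- the edit breaks every later hash
```

An auditor holding only the final hash can detect any retroactive edit, which is exactly the property that makes public spending records attractive to keep this way.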

You can probably already imagine why such a technology would be useful in government. Imagine a country in which every payment disbursement is available for public scrutiny. Nothing keeps government officials honest like shining a little sunlight on their dealings. 

Voter fraud prevention using blockchain

The 2016 election highlighted the threat of malicious interference in the democratic process. Not only did foreign actors influence voter opinion through social media, they also raised the specter of mass voter fraud, especially where electronic ballots are concerned.

Blockchain presents better ways to fight voter fraud. Once a transaction is recorded to the blockchain, it’s very difficult to change. Data blocks (which make up the blockchain) are verified across a network of computer nodes, and the network must reach consensus before a block is accepted.

Every transaction also passes through hashing algorithms that fingerprint the information. So, if someone casts a vote on a blockchain system, the record of that vote is secure. It’s unlikely that a cybercriminal could hack the record and change the person’s ballot. The distributed nature of blockchain systems also makes them less vulnerable to common cybersecurity breaches such as denial-of-service attacks, in which valid users are blocked from accessing a service.

But blockchain could prove useful to voter fraud prevention even before a ballot is cast. By providing voters with unique identification numbers and encrypted log-in keys, blockchain-based platforms could further reduce fraudulent behavior. Combined with biometric verification, advanced voting systems could become increasingly difficult to hack.
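One way to picture the pairing of unique voter IDs with cryptographic keys: each voter holds a secret key, and every ballot is submitted with a keyed signature that the system checks before accepting it. This sketch uses an HMAC as a stand-in for the public-key signatures a real voting platform would use, and the voter ID and key are hypothetical:

```python
import hmac, hashlib

def sign_ballot(voter_key: bytes, voter_id: str, choice: str) -> str:
    """Keyed signature over the ballot; stands in for a real digital signature."""
    return hmac.new(voter_key, f"{voter_id}:{choice}".encode(), hashlib.sha256).hexdigest()

def verify_ballot(voter_key: bytes, voter_id: str, choice: str, sig: str) -> bool:
    return hmac.compare_digest(sign_ballot(voter_key, voter_id, choice), sig)

key = b"voter-9471-secret"  # issued privately to a single voter
sig = sign_ballot(key, "voter-9471", "measure A: yes")

print(verify_ballot(key, "voter-9471", "measure A: yes", sig))  # True
print(verify_ballot(key, "voter-9471", "measure A: no", sig))   # False -- any altered ballot fails
```

Without the voter’s key, an attacker can neither forge a new ballot nor modify an existing one without the signature check failing.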

Blockchain and data security

Wrangling government records into a secure database is a massive job for several reasons. For one thing, there’s a lot of information to manage. Just off the top of our heads, there are Social Security records, employment histories, fingerprints, tax filings, background checks, birth certificates, marriage certificates, and property registries. And that’s barely scratching the surface.

Then there’s the issue of how those records are currently stored. While some are digitized, others are still kept in hard copy form, so there’s a need for modernizing that information. Even if every government agency was optimally digitized – meaning that the records were organized and “cleaned” to eliminate redundancies and inaccuracies – much of that data would be siloed. For security purposes, there needs to be some oversight as to who can access different types of information. But without a clear structure for information sharing, you end up with needlessly cumbersome workflows.

As you might have guessed, blockchain offers a better method of data management for the public sector. We touched on the security aspect above, but it applies here as well. Government agencies could use blockchain to maintain highly tamper-resistant records, helping to shield vital information from cybercriminals.

Blockchain also creates a solution to the data silo problem. Last year, Entefy considered how blockchain could secure medical records, which are popular assets on the black market. Doctors’ offices and hospitals face a similar problem to government agencies in that there’s no easy way to share information. Patients often need to obtain their own records (a hassle unto itself) and update their providers every time they go to a new office.

Some experts suggest that blockchain-secured medical records could ease the burden on both patients and providers. If all of a patient’s records were secured via the same system, one that could be updated after each visit, all of her doctors could work off that one profile. To ensure that the data wasn’t misused, the patient could give permission to access the files only to select offices, and that access would be granted via private encryption keys. Perhaps most importantly, patients would also have a private key that would enable them to review their medical records at any time.
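The permissioning idea above can be sketched with a simple in-memory vault standing in for a blockchain-backed record store; the class and key names are illustrative, not a real medical-records API:

```python
class RecordVault:
    """Toy per-record access control: only key holders the patient grants can read or write."""

    def __init__(self, owner_key: str):
        self.owner_key = owner_key
        self.records: list[str] = []
        self.granted: set[str] = {owner_key}  # the patient can always access her own records

    def grant(self, requester_key: str, provider_key: str) -> None:
        if requester_key != self.owner_key:
            raise PermissionError("only the patient can grant access")
        self.granted.add(provider_key)

    def add_record(self, writer_key: str, note: str) -> None:
        if writer_key not in self.granted:
            raise PermissionError("no access")
        self.records.append(note)

    def read(self, key: str) -> list[str]:
        if key not in self.granted:
            raise PermissionError("no access")
        return list(self.records)

vault = RecordVault(owner_key="patient-key")
vault.grant("patient-key", "clinic-key")  # patient authorizes one provider
vault.add_record("clinic-key", "2018-03-02: annual checkup, all clear")
print(vault.read("patient-key"))  # the patient reviews her own records at any time
```

In a blockchain-backed version, the grant and the record writes would themselves be auditable entries, so the patient could also see exactly who accessed her file and when.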

It’s not difficult to see how such a solution could benefit government agencies as well. By streamlining data storage, collection, and security, the government could reduce significant inefficiencies and better serve its citizens. As with the medical records, individuals could also access their own records, granting them greater autonomy over their information.

A new age of government transparency via blockchain

Perhaps we’ll soon see other governments following in the footsteps of Andhra Pradesh and using blockchain to curb corruption and bloated spending. But blockchain can drive other forms of government transparency, too. A clear, verifiable record of transactions could help with coordination on issues such as import management at national ports or the reduction of foodborne illness outbreaks through greater accountability in the global supply chain.

Governing is a complex task, with moving parts most of the governed never even see. But blockchain can simplify the process to create a more engaged citizenry and a more reliable, transparent government. 

New Entefy patent enhances intelligent message delivery

Latest Entefy invention represents another advancement in people-centric digital interaction

PALO ALTO, Calif. April 4, 2018. Entefy Inc. has been issued a patent by the U.S. Patent and Trademark Office (USPTO). Patent No. 9,930,002 describes an “Apparatus and method for intelligent delivery time determination for a multi-format and/or multi-protocol communication.”

“Entefy is solving a key challenge in digital communication,” said Entefy’s CEO, Alston Ghafourifar. “We’re excited about this patent as it represents a clear advancement in the way communication systems can leverage time to intelligently optimize the delivery of messages, even across a variety of channels.”

Invention is a cornerstone of Entefy’s mission to save people time so that they can live and work better. Earlier in 2018, the company announced the filing of a suite of 15 new patents in artificial intelligence, search, and blockchain, as well as patent issuances covering encrypted group messaging and people-centric messaging.

ABOUT ENTEFY

Entefy is a machine intelligence company developing advanced technologies in contextual cognition, computer vision, natural language, audio, time series, and other data intelligence. Entefy’s SaaS and on-premise solutions deliver transformative AI, communication, search, cybersecurity, IoT, and blockchain capabilities—helping people and organizations Discover & Do more in less time. Get started at www.entefy.com.

The left-brain vs. right-brain myth and other outdated ideas about the brain

When was the last time you were in a classroom studying the brain? If your answer is anything farther back than “very recently,” you’re probably walking around with a lot of outdated ideas about how our brains work. Left-brain versus right-brain? Out. The we-only-use-10%-of-our-brains belief? Debunked. The list goes on. Now is a good time to update your understanding of that most unique of organs, the human brain.

Neuroscientists discover new insights into how our brains function all the time, and the government invests hundreds of millions of dollars a year into brain-related research. In its categorical spending estimates, the National Institutes of Health suggested that Alzheimer’s research alone could draw $790 million this year. Clearly, our understanding of our minds remains a work in progress.

Unfortunately, that understanding is often marred by the organ’s most annoying flaws: we don’t change our minds easily and we’re susceptible to confirmation bias. When faced with the prospect that a long-held belief is false, we’re more likely to dig into our original interpretation than to consider new evidence. We also rely on a sort of collective knowledge pool that makes us overconfident. Research shows that although most of us don’t know the specifics of how common devices operate – say, how a toilet flushes or how a zipper clasps – we interact with them enough to assume we know more about them than we do. Someone knows how these everyday items function, and we take ownership of that understanding.

So, when presented with new information about how the brain operates, we’re inclined toward skepticism. The shift in data doesn’t fit with our existing narrative, and no one likes to admit they’re misinformed. If you believe that we lose most of our body heat from our heads, you’re going to keep wearing a hat in chilly weather, despite the fact that modern science has debunked that myth (which originated with military experiments in the 1950s).

Still, we would do well to outgrow this pesky trait. The brain is a fascinating organ, and the more we know about it, the better we can protect and use it.

Refreshing our cache

Many of the misconceptions that pervade popular culture are simply recollections of outdated research. As in any field, neuroscientists put forth theories they believe to be true based on the best data they have at a given time. But as technology improves and new evidence emerges, they retest and revise their claims.

Considering that we live in the most technologically advanced era in human history, when artificial intelligence is helping doctors make more accurate diagnoses, we can probably expect many of our previous notions about brain function to be supplanted in the coming years. There’s plenty of precedent for such an upset. Until recently, scientists thought the neurons in our brains operated like simple switches, turning on and off to convey information. Now researchers believe that the brain functions more like the Internet. If it’s true that the brain is a “large-scale distributed communication network,” as the researchers posit, that new understanding could have far-reaching implications for related fields of study.

Here’s another myth that just won’t quit: The left brain/right brain dichotomy. Since at least the 1970s, people have categorized their personalities as either detail-oriented and analytical (left-brained) or creative and emotionally intuitive (right-brained). But recent studies indicate that different personality attributes don’t live in opposite hemispheres of the brain. As one doctor wrote for the Harvard Health blog, “[I]f you performed a CT scan, MRI scan, or even an autopsy on the brain of a mathematician and compared it to the brain of an artist, it’s unlikely you’d find much difference.” You can continue using the terminology as a shorthand for describing your personality traits, but the categorization likely doesn’t say much about the inner workings of your brain.

Perhaps the most pervasive of all inaccuracies on this topic is that we only use 10% of our brains. There’s no evidence that any part of this organ lies inactive, and the life-changing consequences of brain injuries suggest that every section plays a critical role in our cognitive functioning. The concept doesn’t make sense from an evolutionary perspective, either. It’s unlikely that natural selection would have allowed for the growth of brain tissue that is, in the words of one professor, “metabolically expensive to both grow and run” if it wasn’t serving a purpose.

When it comes to these and other outdated perceptions, it’s time to upgrade our knowledge banks. In the same way we click the refresh button to load the most recent version of a website, we need to update our cache to have an accurate understanding of our brains.

What we think we know now

It’s likely that the breakthroughs we see in the coming years will overshadow what we know today. But there are exciting new findings that should motivate us to seek the best, most up-to-date information on our brains, even if that means admitting our current ignorance.

We’ve recently begun to understand the physiological connection between our brains and our guts, helping us to better understand the influence both have on our mental states and physical experiences. Such data could aid in treatments for people with both gastrointestinal and mental health disorders.

With the opioid epidemic ravaging communities across the country, both insight into addiction and effective treatments for it are desperately needed. Recent research shows that most people are wired for addictive behaviors, and growing data on how people with addiction think could help stem a devastating crisis.

Advances in brain research have revealed insights useful to every aspect of our lives. Just look at the body of knowledge surrounding the multitasking versus unitasking issue. Much as we might like the opposite to be true, science tells us that multitasking doesn’t work. Rather than being more productive, we get less done, overtax our minds, and put our well-being at risk. That’s powerful information to have from both a professional and personal perspective.

On these and so many other fronts, our understanding of the brain will likely change radically in the coming years. While updating our models of how the mind works is challenging and even counterintuitive, we should embrace new information as it comes. Because the more we know about the human brain, the better we understand ourselves. 

6 Ways digital devices and services are impacting human relationships

There’s a healthy debate under way on whether there is such a thing as smartphone addiction and, if so, just what the implications are. The Google Trends chart for the term “smartphone addiction” looks like the stock chart of a healthy growth company (up, up, and away), suggesting at the very least that more people every day are beginning to look into the topic and perhaps question their own levels of dependence on digital devices.

No matter where you stand on issues of device use or abuse, it can’t be denied that digital technology in general is changing aspects of longstanding human interactions. From dating and romance to teen depression to parenting strategies, digital devices are central to the evolution of some of our most fundamental human relationships.

What follows is a quick roundup of recent advances in our understanding of how digital is impacting human relationships.

  1. Online matchmaking is the new normal. Online dating is the second most popular method for meeting a romantic partner behind an introduction by a mutual acquaintance. That statistic says something insightful about technology and its limits. On the one hand, people have become comfortable using technology as a tool to enhance their relationships. On the other, we still ultimately prefer a very traditional, non-tech way to meet someone new.
  2. The health of a romantic relationship can be predicted by smartphone usage. A psychological study mapped the impact of varying degrees of smartphone usage on the health of a relationship. The research found that “participants’ smartphone dependency is significantly linked to relationship uncertainty, while partners’ perceived smartphone dependency predicts less relationship satisfaction.” The key word here is dependency. Smartphone use wasn’t seen as a problem, but dependency (distracted overreliance) clearly drained partners’ satisfaction.
  3. The link between teen depression and smartphone use is unclear. A psychologist at San Diego State University published a study in Clinical Psychological Science that explored the links between smartphone use and teen depression. The study used data on 500,000 teenagers’ smartphone and Internet habits produced by two surveys that have been ongoing since 1991. On the surface, there has been a rise in teen depression and teen suicides during the smartphone era. Yet additional research is needed to understand the complex interplay of device and social media use to reveal more about root causes.
  4. Understanding of the “social paradox” is growing. There’s a paradox at the center of social media. On the one hand, users of Facebook and Instagram tend to share highlights of the best moments of their lives. Think weddings, vacations, newborns. Yet the data suggests that social media use has clear links to very negative mental health effects, including anxiety, depression, and low self-esteem. One large study out of Harvard found that the more a person uses Facebook, the worse they tend to feel: “Overall, our results showed that, while real-world social networks were positively associated with overall well-being, the use of Facebook was negatively associated with overall well-being.”
  5. Teen use of sexually explicit material has links to dating violence. It’s long been conventional wisdom that the pornography industry is a driver of technological change. What’s emerging is a more concerning link between teenagers’ consumption of sexually explicit material online and violence inside relationships. These findings came from research that examined the results of 43 separate studies on the topic.
  6. “Screen time” is a new factor in parent-child relationships. Parents these days wrestle with screen time management, that catch-all phrase for the time children spend using computers, tablets, and smartphones. It’s not a new concern, in that parents of previous generations struggled with how much TV a child should consume. But there are key differences, namely that a device with an Internet connection opens the door to practically any type of information or content, in contrast with the known quantity that was the network TV schedule and its built-in censors. One study from Oxford University into the “Goldilocks Hypothesis”—the idea that there is an optimal, “just right” amount of screen time—concluded that moderate use of digital technology doesn’t appear to be harmful to adolescents and may in fact have benefits, though the authors acknowledged that further study was required before recommendations could be made.

Entefy has written extensively on the links between digital device use and everyday wellbeing, most recently in an examination of how technology abuse impacts our health.