
Don’t bother planning your career too far into the future

If you’ve been in your chosen career for a few years or more, you’ve probably experienced a lot of change. It could be in the tools you use to get your job done, or the skills you’re using most, or how frequently you work remotely. 

The speed at which technology is developing makes it hard to predict what changes lie ahead. After all, go back a decade or more and there was little talk of Mobile App Developers. Or Social Media Managers. Or Cloud Computing Specialists. In fact, by one estimate, 65% of children in elementary school right now will work in jobs that don’t yet exist. Talk about dream jobs.

And it’s not just technology’s impact on jobs, but on work itself. College graduates entering the workforce today will job hop at twice the rate of a generation ago. Digital platforms have created new choices for independent workers in the so-called gig economy, which has expanded to include 20 to 30% of the workforce.

Yet for all the variables and unknowns, at the end of the day, the only thing constant is change.

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you.


Hurry up and read that text [VIDEO]

If you’re like most people, you’re getting a ton of different types of messages all the time: texts, emails, voice notes, IMs, the list goes on. Among them, texts are unique, because in today’s world of modern communication, text messaging is extremely efficient and reliable.

In our independent survey of 1,500 U.S. professionals, we found that people receive an average of 34 texts per day. That’s a noteworthy number, because texts are dependably opened and read. In fact, if you want to be 99% sure your message gets read, send a text. This video enFact explains why.

Read the original version of this enFact here.



The big new law that might finally shake up digital privacy

The Health Insurance Portability and Accountability Act of 1996—commonly known as HIPAA—created a uniform legal framework for protecting medical records across the entire healthcare sector. Basically, one set of pro-consumer rules applicable to everyone. And as a result, patients don’t have to perform privacy-policy due diligence just to select a doctor. 

If HIPAA had not been passed into law, the process for selecting a new doctor might look quite different than it does today. You would need to figure out how the doctor intended to use your medical records by struggling through a lengthy, legalese-heavy disclosure. You would need to determine who other than the doctor’s office could access your records, and whether they were authorized to sell health records to other parties. And you would need to continually monitor all of this, because the doctor could change policies at any time and, say, sell the practice’s entire medical record database wholesale. 

Now contrast healthcare to the technology industry, which operates without a HIPAA-like charter. As it stands today, the technology companies that provide the most popular digital services do business without enforceable mandates to protect personally identifiable information. And so consumers are forced to personally manage their own digital privacy even as a shadowy, multibillion-dollar data brokerage industry operates largely without regulation, selling data about you without your knowledge or approval. This is online “privacy” in the U.S. circa 2017.

Privacy isn’t an abstract concept. How it is defined and legislated determines what companies can and cannot do with your personal information. The primary obstacle to the U.S. enacting any sort of overarching privacy law is—wait for it—money. Personal data drives advertising networks, and advertising revenues are the lifeblood of technology and media businesses the world over. 

If you’re someone concerned about personal privacy and the pervasiveness of digital surveillance, it’s unsatisfying how little discussion there is about online privacy rights in the U.S. Instead, it’s a lot of “Nothing to see here, carry on” from the titans of tech.

In sharp contrast: Europe’s right to privacy

It is enlightening to compare digital privacy rights in the U.S. and Europe. Or, more accurately, rules in the U.S. and rights in the EU. The EU treats data protection with the same devotion Americans reserve for the Constitution and Bill of Rights. And “rights” is the right word here. Article 7 of the European Union Charter of Fundamental Rights, the treaty that defines the political, social, and economic rights of EU citizens, succinctly outlines a right to respect for private and family life, home, and communications. Article 8 then builds on that foundation to further expand the right to privacy:

1. Everyone has the right to the protection of personal data concerning him or her.

2. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.

3. Compliance with these rules shall be subject to control by an independent authority.

The protection of these principles has been the responsibility of individual EU nations, which initially resulted in a patchwork of data protection regulations. That is, until the passage of the General Data Protection Regulation (GDPR) in 2016. GDPR will be enforced across all EU member states, including the UK, beginning in May 2018.

This means the GDPR will operate EU-wide as the core privacy regulation, replacing the patchwork of national directives that preceded it.

Big picture, GDPR means that a single privacy law will apply across all the countries of the EU. Viewed from another angle, the new law will effectively safeguard any and all data collected about any EU citizen, including everything from data generated from using a digital service to purely technical data about an individual’s devices. 

The broad scope of GDPR

One of the most striking aspects of GDPR is just how many types of data are categorized as protected personal data. The law states that “personal data is any information relating to an individual, whether it relates to his or her private, professional or public life. It can be anything from a name, a photo, an email address, bank details, your posts on social networking websites, your medical information, or your computer’s IP address.” 

Under GDPR, EU citizens benefit from new or stronger privacy protections, such as:

• Consent. Users must be informed exactly what data will be collected and how it will be used. This consent is revocable. 

• Data portability. Users can freely transfer their personal data from one service to another. 

• Erasure. Users can require a data collector to erase their personal data. 

• Corrections. Users have the right to correct inaccurate or incomplete information.
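To make those rights concrete, here is a minimal sketch of what a GDPR-style record of a user’s data and consent might look like in code. This is purely illustrative; the class and method names are our own invention, not anything defined by the regulation.

```python
from dataclasses import dataclass, field
import json

@dataclass
class DataSubjectRecord:
    """Illustrative sketch of one user's data and consent under GDPR-style rules."""
    consents: dict = field(default_factory=dict)       # purpose -> granted?
    personal_data: dict = field(default_factory=dict)  # the personal data held

    def grant_consent(self, purpose: str) -> None:
        # Consent: the user explicitly opts in, per stated purpose.
        self.consents[purpose] = True

    def revoke_consent(self, purpose: str) -> None:
        # Consent is revocable at any time.
        self.consents[purpose] = False

    def rectify(self, key: str, value: str) -> None:
        # Corrections: the user can fix inaccurate or incomplete data.
        self.personal_data[key] = value

    def erase(self) -> None:
        # Erasure: the user can require the collector to delete their data.
        self.personal_data.clear()
        self.consents.clear()

    def export(self) -> str:
        # Data portability: hand back the data in a common, transferable format.
        return json.dumps(self.personal_data)
```

A user could then, for example, call `export()` to carry their data to a competing service and `erase()` to leave nothing behind.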

Nice and simple, and a far cry from the situation in the U.S., where even resources meant to help you delete yourself from the Internet end up basically admitting that you can’t.

There’s an interesting backstory here. Many elements of the GDPR emerged in the United States as far back as the 1970s. But by the time the Internet era dawned, the U.S. had already lost its lead in privacy protections to the EU.

Playing online privacy “what if” 

GDPR is an expansive law that applies to any company that collects data on any EU citizen, which means that the law will apply to companies that do not have a physical presence, server, or personnel within the EU or UK. So long as a company is processing the personal data of EU citizens—including simply owning a website with traffic from the EU—that company is required to comply with the provisions of GDPR. 

This definition includes pretty much every multinational corporation and organization of any size doing business in Europe. So it’s not surprising that GDPR compliance is a top data-privacy and security priority of 92% of U.S. companies. Individual companies report compliance costs in the millions of dollars.

GDPR has enormous practical implications for U.S. companies that have grown comfortable selling their customers’ personal data, defaulting users into opting-in status through byzantine Terms of Service agreements, and implicitly acting as if consumer personal data privacy is a privilege and not a right.

Let’s play “what if” and imagine for a moment a U.S. GDPR. The relationship between users and digital service providers would change substantially. Here are three examples.

1. Companies would have to clean house before signing up even one user 

The first change to a U.S. version of GDPR would relate to what a user finds when “arriving” to do business with a digital service. Today, signing up for a new service tends to look the same across services: set up an account by providing some personal data; click “Agree” after not reading nearly incomprehensible Terms of Service and Privacy Policy statements; accept that being tracked and having your data monetized is the “price of free” that pays for the service. 

But GDPR-compliant companies must “clean house” and organize themselves in a privacy-friendly way before users even show up. The content of their Terms of Service agreements, for example, must describe how the services safeguard and protect, not monitor and collect. 

As we described at the outset, you can get a glimpse of this approach in the U.S. when you visit a HIPAA-compliant doctor’s office. You can visit that doctor knowing your data will be protected in specific and comprehensive ways. You don’t have to evaluate the “terms” of working with each and every doctor to make sure they behave in the way you want them to behave.

2. Users must freely opt-in to a service after receiving transparent descriptions of how the service collects and handles their data

GDPR defines a true opt-in in which a consumer provides knowing agreement with having their personal data collected. Transparency is critical under GDPR, which means the notification to opt-in must be very clearly disclosed and explained to the user or website visitor. 

This is very different from how things work in the U.S., where consumers are opted in by default. Digital services begrudgingly offer an opt-out, and only to those who take proactive steps to stop having their personal data processed and held.

3. Users are granted the right to erasure and data portability

As it stands today, Americans can’t completely break up with a digital service. You can request an account be closed, but you have no right to have your personal data scrubbed from the service’s servers completely. That photo you posted to social media 10 years ago will remain the property of the social site indefinitely. 

The GDPR introduced two new rights for data subjects. The first is the right to erasure, which allows individuals to request the deletion of their personal data. The GDPR also gives users the right to receive their personal data in a common format that allows them to transfer the data to another service. 

Taken together this means a user can move everything they ever posted on one social media site to another site, and require the former site to delete everything about the user. The U.S. today has nothing like this.

We may thank Europe for new privacy safeguards

It is easy to get lost in the technical and regulatory details of privacy law and lose sight of its significance. This discussion is ultimately about the tension between an individual’s right to protect and control information by and about them, and an entity’s interest in benefitting from such information. 

The next chapter in the history of online privacy in the U.S. will likely be influenced by the EU’s GDPR. Come May 2018, much of corporate America will need to comply with the GDPR or find itself subject to fines and sanctions.

Perhaps the discussion in the U.S. should start with an idea that many people support: data privacy is a right, not a privilege. Seen in this light, much of the digital surveillance we accept today may suddenly seem far less acceptable.


Moving ahead at light speed also means slowing down for health

Innovation can take many different forms, from cutting-edge new ideas in technology to fresh perspectives on conventional wisdom. Entefyers were reminded of this firsthand last week when we hosted a wellness seminar, Health 360. This seminar was presented by our esteemed advisor and investor Dr. Farzan Rajput, cardiologist and founder of Southcoast Cardiology, and Darin MacDonald, a health technologist, professional trainer, and former national natural bodybuilding champion. 

Dr. Rajput (fondly known around here as Dr. Fuzz) and Darin kicked off their presentation with one critical but often misunderstood insight: Every person has dietary and exercise needs that are as unique as a fingerprint. Which is why one-size-fits-all “fad” diets and health programs often fail to provide long-lasting benefits. Dr. Fuzz and Darin shared stories from their careers where, for instance, a vegan diet worked for one person while the Paleo Diet worked for another. 

Health 360 emphasizes the importance of individual metabolism, advocating that optimal health depends on unique factors such as how the body processes macronutrients (carbohydrates, fats, and proteins), produces insulin, and activates different muscle groups during exercise. 

The seminar concluded with a lengthy Q&A—Entefyers were particularly interested to learn more about making better diet and exercise decisions. As Dr. Fuzz explained, working at a young company like Entefy means finding ways to fit health and wellness into our fast-paced high-octane work life. But the rewards of wellness are found in how it helps sustain momentum and productivity for years to come. 


Pimples, sluggish sperm, and drooping jowls: 11 tech habits that are bad for your health

Digital technology is a wondrous vessel for connecting across distances, getting the job done, discovering boundless information, and increasing professional productivity. But at the same time, there is deep wisdom in the old expression “too much of a good thing,” an idea that certainly applies to how we use digital devices. Finding that balance is something we’re all figuring out together—after all, we’re less than 10 years into the smartphone era, and so much work today involves long periods of screen time.

What is surprising, though, is the evidence to emerge over the past several years about the impact our digital habits are having on our physical health. We did some research and assembled a list of the potential health repercussions of habits like regularly hopping in and out of 7.9 different communication apps or falling into the “busy trap” on our devices.  

So here are 11 digital habits that science tells us can stress physical health:

1. Placing a laptop on the lap while connected to Wi-Fi can harm sperm quality in men. It might be time to reconsider the “lap” part of laptop. An American Society for Reproductive Medicine study analyzed men’s sperm after four hours of exposure to an Internet-connected laptop. Results showed that the sperm had significantly lower motility and suffered from DNA fragmentation. While acknowledging that further studies are needed, the scientists concluded: “We speculate that keeping a laptop connected wirelessly to the Internet on the lap near the testes may result in decreased male fertility.”

2. All that typing on handheld devices can lead to “text claw.” It sounds bad, and it can be. If you’ve ever felt soreness or cramping in your fingers, wrist, or forearm after long bouts of texting, you might be suffering from “text claw.” The soreness might go away, but over the long term without proper treatment and stretching, muscles can become fibrotic and scarred. Here are some stretches that can alleviate pain from smartphone use.

3. Looking down at a device can lead to “text neck.” Data shows that Americans spend an average of 4.7 hours daily on their smartphones. The posture we often assume while doing so involves tilting our heads forward to be closer to the screen. The next time you’re walking and texting, check and see if you do it, too. The challenge with this habit is that even a modest 15-degree tilt of the head creates 27 pounds of pressure on our spines. Which stresses out our backs big time, potentially causing pain and inefficient breathing, among other problems.

4. Then there’s the sagging skin of “tech neck.” As if “text neck” wasn’t bad enough. Another study found that women aged 18 to 39 who own an average of 3 digital devices are experiencing wrinkling and sagging of the skin on the neck from prolonged time looking down at devices. This can cause the neck skin to crease and compound the natural effect of gravity pulling the skin downwards under the chin. Eventually, wrinkles and reduced skin elasticity can result. 

5. Using a smartphone can give you pimples. Acne probably isn’t something you think about when using your smartphone. But we put our phones to our faces multiple times every day, so it’s natural that traces of oil and skin debris adhere to the phone. This creates an environment perfect for bacteria to thrive. Combine that with another finding that mobile phones can have more bacteria than toilet seats and, eww. One doctor stated, “It seems that the mobile phone doesn’t just remember telephone numbers, but also harbors a history of our personal and physical contacts such as other people, soil and other matter.”

6. Use of technology is correlated to reduced exercising around the globe. There is evidence that rates of physical activity are decreasing worldwide due to increased use of technology. One study projected that in the U.S. sedentary activities will increase to 42 hours per week by 2030. The findings are similar for other countries, including China, Brazil, and the UK. 

7. Too much screen time causes dry eye and eye strain. Did you know that when we look at things close to our faces, our blink rate slows down? When we stare at screens, we are blinking less, which means tears evaporate faster, causing dry eye. In addition, our eyes converge when we look at screens and our pupils get smaller. This causes strain because our eyes are more comfortable when they are parallel looking into the distance. The American Optometric Association recommends the “20-20-20” rule: every 20 minutes, take a 20-second break to look at an object 20 feet away.

8. Poor posture and poor breathing from typing at a desk go hand-in-hand. It isn’t news that sitting for long periods has negative effects on posture. But poor posture also affects how we breathe. Lungs are like two big balloons, so when we’re slouched they have a decreased range of motion and breathing can become labored and inefficient. What postures can be culprits? Furniture maker Steelcase completed a global study of 2,000 people in 11 countries that identified 9 new postures—from the Cocoon to the Swipe to the Strunch—that emerged from the common ways we position our bodies while using digital devices.   

9. Earbuds can create hearing loss and tinnitus. Setting the volume on your earbuds too high when listening to a conference call or music can cause hearing loss and tinnitus. More people of all ages are wearing earbuds and headphones for longer stretches of time. The impact of this behavior change is captured in a study published in the Journal of the American Medical Association, which found that 19.5% of American adolescents aged 12 to 19 suffered from hearing loss, up 33% from the 1990s.

10. Digital motion sickness can affect 50-80% of people not otherwise prone to motion sickness. If you’ve ever been rapidly scrolling through social feeds and suddenly felt a dull headache or nausea, you’re not alone. There’s such a thing as digital motion sickness, which stems from “a basic mismatch between sensory inputs…Your sense of balance is different than other senses in that it has lots of inputs…When those inputs don’t agree, that’s when you feel dizziness and nausea.” 

11. Blue light emitted from devices interrupts sleep cycles. Research shows that the blue light emitted from devices can affect our sleep patterns. The light suppresses melatonin, the hormone that helps us fall asleep, and disrupts the body’s natural clock, the circadian rhythm. Habitually using devices before bed tends to make it harder to fall asleep and reduces REM cycles during sleep.

Digital technology is a cornerstone of modern life, but hopefully a little more awareness of what science has to say will help you find your own “just enough” of a good thing.


What would you do with an extra 4 billion hours?

A “billion” has long communicated “big.” So when it was announced that people are watching 1 billion hours of YouTube every day, we hit a big milestone. After all, a billion hours is more than 41.7 million days. If you travelled back in time 1 billion hours, you’d find yourself somewhere around 100,000 BCE, a period when the first anatomically modern Homo sapiens were roaming Africa.

The Great Pyramid of Giza (the Khufu Pyramid) is something else big. And, interestingly, we have some evidence from the 5th-century BCE Greek historian Herodotus about how the pyramids were constructed. According to Herodotus, 100,000 workers toiled for 20 years to construct the Great Pyramid. Assuming 8-hour shifts and 52-week years, that adds up to around 4 billion hours of effort.
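For the curious, that back-of-the-envelope figure can be reproduced in a few lines. Note that Herodotus says nothing about workweeks; the 5-day week below is our assumption, chosen simply to make the estimate concrete.

```python
# Back-of-the-envelope estimate of labor hours spent on the Great Pyramid
workers = 100_000        # per Herodotus
years = 20               # per Herodotus
weeks_per_year = 52
days_per_week = 5        # assumption: Herodotus specifies no workweek
hours_per_day = 8        # assumed shift length

total_hours = workers * years * weeks_per_year * days_per_week * hours_per_day
print(f"{total_hours:,}")         # 4,160,000,000 -> roughly 4 billion hours

# At 1 billion YouTube hours watched per day, that's about 4 days of viewing
days_of_youtube = total_hours / 1_000_000_000
print(round(days_of_youtube, 1))  # 4.2
```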

Putting that figure another way: every 4 days, all those hours we’re spending watching YouTube could represent the creation of another Wonder of the World.


The surprising fertility rate of mobile devices [VIDEO]

There are over 7 billion people on Earth. That’s a big number. But here’s something that caught our attention. When we compared the number of people on the planet versus the number of mobile devices, we found that there are actually more mobile devices than people. We’re talking all those smartphones, cell phones, tablets, and other connected devices. 

Now that’s pretty surprising. But what’s even more shocking is the rate at which new mobile devices are growing—it’s like they’re multiplying like rabbits! This video enFact talks about the rapid growth rate of mobile devices today. But when will enough be enough? 

Read the original version of this enFact here.



“Time is Money” and the myth of busyness

Ever since Benjamin Franklin wrote one of the world’s most famous pieces of career advice, “remember that time is money,” professionals and employers have been focused on managing the value of time. The challenge is that somewhere along the way we lost sight of the distinction between productive and just-plain-busy. Productivity is what allows us to excel professionally and personally; busyness is what happens when we overschedule and overcommit to non-essential activities. 

It isn’t always easy to distinguish between productive activities and distractions. There are always calls to make, emails to send, chats to join, news to read, and social feeds to check. It’s no wonder people complain about being stretched thin. If we’re not careful, instead of feeling more accomplished from all this activity, we end up feeling permanently over-scheduled and under-caffeinated. Especially when most of us never learned to say “yes” to “no.”

There is evidence, however, that our perceptions don’t quite align with reality. We may feel busier than ever, but in truth, we have more discretionary time than at any other point in modern history. Decades of time-use studies investigating our perceptions of time show that men work for pay 12 fewer hours per week than they did 40 years ago, commutes and break times included. Women spend more time on paid work and far less on housework and other unpaid labor than they did in the past.

Yet we all hear, again and again: “I’m so busy.” “My schedule has been crazy.” “I literally haven’t had a free moment in weeks.” Author Tim Kreider, writing about what he called the “busy trap,” said that when we bemoan or humblebrag about how busy we are, what we really mean is that we’re exhausted from self-imposed obligations, overextending ourselves socially. All of which has little to do with being productive and effective.

Another factor in our perceived busyness may be the ways we engage with technology. Let’s say you like to read novels to unwind after work. Your brain needs rest and downtime if you want to perform at your highest level again tomorrow. But if you check email and answer texts while reading, your brain is forced to multitask, which neuroscientists say drains the energy reserves of your brain. Multitasking, whether at work or at home, initiates the production of cortisol (the stress hormone) and adrenaline (the fight-or-flight hormone). Instead of letting your brain recover from a demanding day, you’re forcing it into a very un-restful conflict state. 

How we spend our free time strongly influences the perception of our lives as well. If we plan to catch up on a programming course or study for a professional certification during the weekend but procrastinate instead, we become frustrated when Sunday night rolls around and we’ve made little progress. But if we tracked our activities, we might just discover that we had more time than we realized. We just spent it inefficiently. 

And that’s not something we like to admit to ourselves because of the opportunity costs of time itself. Originally a concept from economics, opportunity cost describes the cost of an alternative not chosen. Applied to time, it’s the gains we give up when we choose to do one thing and not another. If we believe that time is money, then every hour we aren’t doing something useful represents lost dollars. The two hours we spend after work in group chats with co-workers could have been channeled into learning new skills, squeezing in more billable time for clients, or meeting with high-level peers or mentors. 

There’s also a long-term opportunity cost. When we look at time in this way, our brains never really get a chance to decompress. At the extreme, the “cult of busy” correlates to anxiety, high blood pressure and heart rates, lack of sleep, and other health problems. Any one of these can lead to burnout or detract from our ability to excel in our careers. 

Yet even when we become aware of these consequences, we often struggle to rein in our schedules. An overbooked calendar is still basically a status symbol at work. After all, no one wants to be perceived as a slacker, and we don’t want an opening in our schedules to be mistaken for laziness.

But here’s the irony. Professionals in the habit of turning down unimportant commitments to instead focus on important, cognitively demanding tasks are often the most successful. In our increasingly automated workforce, it’s these same people who stay employable, not the ones who are fastest at replying to email or first to sign up for a team networking event. These unitaskers may not seem as busy, but they’re engaged in more substantial work. It’s with these professionals that the differences between productive and just-plain-busy become most apparent.

The antidote to the busyness trap may be a more conscientious attitude toward how we spend our time. Prioritizing truly important work and productive activities is a good first step. Putting down our smartphones when we’re supposed to be relaxing is also critical. But ultimately, we need to stop associating our self-worth with our schedules and instead focus on the quality of our work and the happiness it brings us. Think work-life fulfillment. Doing so lets you define your own “time is…”


The hazards of confirmation bias in life and work [SLIDES]

The human brain isn’t really equipped to process the volume of information that floods it moment-to-moment. Instead, we’ve developed shortcuts that help make sense of all the people, ideas, and thoughts that pass through our consciousness. This is great for mental efficiency because it lightens cognitive load. 

But these mental shortcuts also favor our pre-existing beliefs over new information, generating an unfortunate side effect called confirmation bias. This bias can creep into pretty much everything we do, and it gets magnified online, where our searches, news, and social connections tend to align with our current belief systems.

This presentation highlights key concepts from our article about how confirmation bias affects our lives. These slides examine how confirmation bias works for and against us, explore some of the ways it can wrap us in ideological bubbles, and then address how we can burst our own bubbles of bias.

Download full PDF version here.


“Are you thinking what I’m thinking?” The new science behind brain-to-brain communication.

The evolution of communication throughout history has largely been about taming two dimensions: time and distance. That was as true of spoken language as it was of the telegraph or email. Humans have been so successful at taming these factors that, today, neither time nor distance is a meaningful bottleneck to transmitting your thoughts.

In fact, if you want to tell a friend living in a different hemisphere what you just ate for lunch, the most time-consuming part of the process is physically typing or tapping the message. Communication these days is so fast that it’s fair to call it “instantaneous,” though technically it’s not quite there yet. And with all the messaging we do in a day, that typing and tapping adds up.

We still rely heavily on fingers and thumbs to get the message across, but for how long will this be the case? It’s too soon for anything but speculation, but research groups and corporations are working on new technologies that might allow us to share our ideas using thoughts alone. Flying cars, walking-talking robots, and now sci-fi telepathy? It’s a fascinating time to be alive.

But are we ready for a technology like this? Imagine being able to think a thought around the globe to anyone you wanted. No typing, speaking, or writing required. The technology required to achieve this is already in the labs, with proofs of concept being unveiled with increasing frequency.

The main hurdle to developing this technology is the brain itself, in all its fine and complex detail. So let’s start with a look at what we know (and don’t) about how the brain processes language, then dive into examples of primitive brain-to-brain and brain-to-machine communication devices moving through labs around the world.


Understanding what the brain is saying

Rattling around in the skull are around 100 billion neurons with more than 100 trillion synaptic connections between them, all arranged in a way that is as unique as a fingerprint. That being said, there is also a great deal of commonality between human brains. The look and shape remain relatively consistent, as do the abilities and tasks that certain brain regions specialize in. For instance, the signals from the eyes travel to the back of the brain, where they are decoded and analyzed by the visual cortex. The motor cortex runs from the temple to the top of the head and fires up when body parts need to move. 

The commonality between brains has allowed the brain’s regions to be reliably mapped—call it human cerebral cartography. The areas corresponding to the tongue and mouth map to the sides of the cortex, near the temple. The area corresponding to the feet can be found at the top of the brain, nestled in between the two hemispheres. And so on.

The complexity of the brain is found not only in the sheer number of neurons but in their variety. There are neurons that respond to specific directions of movement and to certain shapes, for instance. It gets far more complex when we look at neurons related to appreciating art, anticipating chess moves, or converting our ideas into discrete sentences. For these actions, the brain engages in some incredibly complex calculations.

The brain learns how to interpret the world around it as it grows. And, again, this is a highly personalized process. While the brain routes certain types of sensory experience to the same general areas in all brains, how specific experiences are processed and interpreted is determined by each individual brain. 

Which brings us to language. The word “cat” won’t be found in the same place in your brain as it will in someone else’s, as you each learned it in different ways and through different experiences. This is an important detail, because brain-to-brain communication in any form requires a device capable of interpreting a given pattern of neurons as a particular idea, thought, or concept. This is an almost inconceivably difficult task. 

Neurons are the basis of communication within the brain, but they are just one of the key cell types. In the cortex, where all the higher-level activities that we associate with human thought take place, there are neurons and their constituent parts—long dendrites and axons that can extend to other parts of the brain—along with glial cells and blood vessels. It’s a tightly packed jumble, and any reading and interpreting of its signals must reliably account for all of these elements.

Technology and reading the mind

As it stands today, the best ways to measure brain activity are also the most invasive. For brain-to-brain communication to become feasible, less invasive means of interacting with the brain are required. Ideally, the hope would be to spy on the activity of individual neurons without having to open the skull. 

One current method for reading brain signals is functional magnetic resonance imaging, or fMRI. This is the familiar large tube that requires a patient to lie unmoving inside it, while it measures the blood flow in certain areas of the brain. It’s cumbersome, crude, and isn’t likely to be useful in brain-to-brain communication.

Another common brain scanning technology is the electroencephalogram, or EEG. This technology measures the activity of an area of the brain using electrodes placed on the skull. It’s not going to read neuronal signals in high detail and, while better than sitting inside a machine, requires the use of a skull cap and cables.

The methods that record brain activity in higher detail are also the most invasive. Typically, they require opening the skull and inserting tiny wires into the brain, which pick up the signals generated by neurons. Which isn’t easy. The brain offers little structural uniformity to work with: a single square millimeter of cortex tissue contains upwards of 100,000 neurons along with other types of cells. And since the brain forms scar tissue around any foreign object it detects, brain receptor devices would need to be designed in a way that convinces the brain these objects are native and natural.

If all of this suggests brain-to-brain communication is practically insurmountable, we’re only getting started. We’ve only focused on one side of the equation—reading signals. If we want another, unique-as-a-snowflake brain to be able to interpret those signals, we would need to be able to stimulate neurons, which creates another set of engineering challenges.

Recent advances are making this less theoretical

But let’s not count science out, or call this concept a pipe dream: there are already working examples of some potential precursor technologies.

One such idea is neural lace, which is a fine mesh containing electrodes that can be injected into brain tissue. The neural lace is rolled up and placed in a tiny glass syringe, and expands once released into the desired area, where it can then record surrounding activity.

Syringe-injectable electronics like neural lace have been successfully implanted in rodents. The researchers applied the lace to two areas of an anesthetized rat’s brain, including the hippocampus, an area central to forming memories. Once there, it was able to record neuronal activity and, equally promising, it didn’t prompt an immune-system response.

While installing neural lace would still require an invasive procedure, the study’s authors noted, “Compared to other delivery methods, our syringe injection approach allows the delivery of large (with respect to the injection opening) flexible electronics into cavities and existing synthetic materials through small injection sites and relatively rigid shells.”

Another brain interface concept is neural dust—dust-sized silicon sensors that can read activity and stimulate nerves and muscles. The devices rely on ultrasound generated by a small external device. Ultrasound can relay information between the dust and external devices wirelessly, and can reach almost anywhere in the body. What’s more, the dust can convert ultrasound vibrations into electricity that can then power an on-board transistor, eliminating the need for batteries while maintaining the ability to stimulate nerve fibers.

Researchers estimate they can shrink ultrasonic, low-power neural dust down to half the width of a human hair. As of now neural dust devices are capable of functioning within the peripheral nervous system, but further advances will need to be made before they are small enough to use in the brain.

These technologies are not without their limitations, but they hint at how current methods might improve. While it may be some time before we develop true brain-to-brain communication, a few game-changing innovations might be just around the corner, and a few key pieces are already in place.

The current state of brain-to-brain communication

There are two sides to the brain-to-brain communication story: reading the signals from one brain and introducing them comprehensibly into another. We have working examples for each side of the equation. 

For instance, cochlear implants convert sound signals into electrical signals, which by way of an electrode array can stimulate the auditory nerve within the ear, allowing people to hear. A similar process happens in retinal implants, which sit at the back of the eye and stimulate the nerves in response to light signals. The nerves within the eyes and ears are, however, quite different from those in the cerebral cortex, both in terms of complexity and accessibility.

There are people who have lost limbs, or lost the use of their limbs, who have learned to control robotic arms or mouse cursors. Juliano Pinto, for instance, became a paraplegic after a car accident in São Paulo, but thanks to a large exoskeleton and an EEG cap, he was able to kick a football at the opening of the 2014 World Cup. 

When it comes to sending signals to other brains, there are a few working examples. For instance, one non-invasive brain-to-brain method was demonstrated in an experiment that successfully allowed a man’s thoughts to wiggle the tail of an anesthetized rat. When the man thought a specific thought, the signal would stimulate an area of the motor cortex in the rat, which caused its tail to swing.

This is a long way from transferring language and ideas, of course. In that realm, one experiment with human subjects converted two people’s brain signals for “hola” and “ciao” to binary and transmitted them between India and France. The method was cumbersome, however: the receiver experienced the signal as flashes of light in the corner of their vision, not as the actual word being transmitted.
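The scheme described above can be sketched in a few lines. This is a toy illustration, not the researchers’ actual protocol: it assumes a simple 8-bits-per-character encoding and represents each bit as a light flash the receiver would (or would not) perceive.

```python
def word_to_bits(word: str) -> str:
    """Encode each character as 8 bits; a hypothetical stand-in for the study's encoding."""
    return "".join(f"{ord(ch):08b}" for ch in word)

def bits_to_flashes(bits: str) -> list:
    """Map each bit to a perceptual event: a flash for 1, nothing for 0."""
    return ["flash" if b == "1" else "no flash" for b in bits]

def bits_to_word(bits: str) -> str:
    """Decode 8-bit groups back into characters (the step a human receiver must do mentally)."""
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

# "hola" becomes a 32-bit string, then 32 flash/no-flash events.
flashes = bits_to_flashes(word_to_bits("hola"))
```

The round trip is lossless in code, but the experiment’s bottleneck was on the human side: the receiver had to consciously count flashes and decode them, which is exactly why the method was so cumbersome.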

Movement appears to be an easier activity to transfer between brains, as the neuronal signatures are easier to spot and interpret. Language is complex and detailed. Most adults have an approximately 42,000-word vocabulary, which is a lot of neuronal signals to be interpreted if we’re going to start having brain-to-brain conversations—and that’s before the added complexities of context and humor and sarcasm.

We’re not quite ready for this

There’s something magical about the brain that makes it more than just another engineering challenge. So perhaps it’s the sci-fi appeal of transmitting messages with our thoughts that has our collective imaginations running wild. But there are also some deep challenges to us humans adopting a technology like this. 

Take just one consideration: identity. Having another person’s voice not heard but experienced in your mind, just as if it were your own thoughts, could have striking effects on your sense of self and identity. If your thoughts merge with everyone else’s thoughts, where does your “I” begin and end? Not to mention all the opportunities for hacking and spying and other hijinks. 

There’s also the chance that we will have these mental conversations not just with other people, but with bots and machines. If we can call upon the wealth of knowledge available on the Internet and have it naturally communicated to us as easily as recalling a memory, we may see it—perhaps legitimately—as an augmentation and extension of our selves rather than as some external source of information. If we think fake news and spam are a problem now… 

Are we at all prepared for such a fundamental change to communication itself? How quickly can people adapt to such an ability in ways that are positive and healthy for the individual and society? For now, and for a long time to come, these ideas will remain kindling for the fire of imagination. But it’s a blazingly fascinating fire.