
Information overload, fake news, and invisible gorillas. Teach your brain new habits.

Consider newborns. When we first lay eyes on the world around us, every sensation overwhelms our brains. There are colors, shapes, sounds, movement, and people totally unknown to us. Our brain does not yet know which of these experiences to prioritize and which to ignore, so our focus jumps from sensation to sensation. Slowly over time we learn to distinguish between the elements of our environment that are important and those that are not. We figure out how to properly filter the unceasing deluge of information swarming our senses. We grow up. 

This filtering process is important because the brain does not respond well to being overworked. It prefers things to run smoothly and efficiently, which it does best when it is focusing on a few discrete items at once. Furrowed brows are the outward signs of brains at the limits of their processing capabilities. Many of the things we try to do with our brains will quickly test these limits—such as setting out to master advanced calculus or learning how to read sheet music. Only with experience and accumulated knowledge can our brains safely navigate learning with ease and efficiency.

When it comes to getting our information off the Internet, we’re all a little like children—we have limited attention and we’re not always sure where to direct it. It can be difficult to limit how much information we consume when there’s always something new waiting for a click. And then, before we know it, an abundance of messy and complex information has infiltrated our minds. If our processing strategies don’t keep pace, our online explorations create strained confusion instead of informed clarity. Focus is, after all, a finite resource.

Let’s take a look at how it all works, and how we can optimize the time we spend directing our focus.

Attention overload

One fascinating psychology study illustrates just how finite our attention can be. Researchers had participants watch a video of two groups of people passing a ball from person to person. One group wore white shirts, the other wore black. Participants were asked to count how many times those wearing white passed the ball. So far so good. But what initially seems like a task of ignoring the subjects wearing black takes an interesting turn when someone in a gorilla suit casually strolls into the scene, stands in the middle of the game, and thumps its chest. The gorilla, of course, should be an immediate distraction and easily noticeable, yet only half the participants spotted it. The conclusion: too many demands on our attention blind us to everything else.

Distraction has ties to forgetfulness. As we’ve explored previously, for humans to remember information, it must successfully survive the filtering processes of our sensory, short-term, and working memory systems. Only then can it be stored away in long-term memory. The place where we attend to and think about information is in working memory, and trying to jam too much information through this system leads to a substantial drop in performance, both in terms of processing the information and in storing it for later use. How much is too much? George Miller’s research in the 1950s suggested that the limit was 7 items (plus or minus two), although more recent research has found that the magical number may be as low as 4. Which means that, structurally, the brain can only do so much at once. There goes multitasking.
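One practical workaround for this limit is chunking: regrouping many small items into a few larger ones, which is why phone numbers are written in groups. As a purely illustrative sketch (not from the research itself):

```python
# Illustrative only: "chunking" works around working memory's small
# capacity by grouping raw items into a few larger units.

def chunk(items, size):
    """Group a flat sequence into consecutive chunks of the given size."""
    return [items[i:i + size] for i in range(0, len(items), size)]

digits = "4155550123"        # ten separate items: well over the ~4-item limit
chunks = chunk(digits, 3)    # ["415", "555", "012", "3"] -> four chunks
print(len(digits), "items vs", len(chunks), "chunks")  # 10 items vs 4 chunks
```

Ten digits exceed working memory’s capacity; four chunks sit comfortably within it.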

Silicon Valley pioneer Mitchell Kapor has said, “Getting your information off the Internet is like taking a drink from a fire hydrant.” On the Internet there’s a guide to everything, opinions abound, and there are more interesting facts and ideas than anyone knows what to do with. So much so that collectively we write as much in a day online as all the books in the Library of Congress combined. Where is this information storm taking us? And how is our limited attentional system going to keep up?

More information is not necessarily better information

There are now around 47 billion web pages online, while the number of Internet users has surpassed 3.5 billion. Meanwhile, a Nielsen report indicates that we spend an hour longer online each day now than we did just a year ago, with most of that increase coming from time spent on mobile devices. As more people jump on board, our reliance on the Internet grows. When Google went down for a few minutes in 2013, it took 40% of Internet traffic with it. Which isn’t that surprising when you consider that there are more than 58,000 Google searches every second. For better or worse, search engines have altered our behavior. In terms of our mental processing power, having answers so close at hand certainly reduces the cognitive load on our mental resources.

Yet when it comes to comprehending abstract concepts and making complex decisions, too much information causes its own set of problems. For instance, it can induce a feeling that we must read and consume more than is really required. We can get stuck in a stasis known as analysis paralysis, in which we set out to make a deliberate decision by considering every last detail, but instead get overwhelmed and struggle to make any decision at all. 

The first study to document this effect came from Sheena Iyengar, a business professor at Columbia University. She set up a booth at a supermarket and offered samples of a selection of jams. She wanted to know how people would behave when presented with a selection of 6 jams compared to 24. What she found was that people initially responded favorably to the extra options—60% of people approached the booth, compared to 40% when only 6 jams were presented. But when it came to making a purchase, the figures reversed dramatically: 30% of people purchased from the selection of 6 jams, while only 3% purchased from the larger selection.

Over time this effect compounds. The more choices we have to make, the greater the decline in the quality of our decisions. Psychologists refer to this effect as decision fatigue or mental fatigue, and one striking example of it comes from the judicial system, in which judges were found more likely to grant parole after having come back from a break and a bite to eat. Charted out over time, favorable rulings gradually dropped from around 65% to nearly zero, then returned to 65% after some rest. Rather than ruling case by case, the cumulative weight of making decision after decision eventually snowballed into mental exhaustion, making the judges less willing and able to properly evaluate cases later in the day. 

For a more universal example, a 2010 survey of 1,700 professionals from the U.S., UK, China, South Africa, and Australia found that 59% of respondents had experienced a large increase in the amount of information they were required to process in the workplace, and 62% felt that the quality of their work suffered at times because of this information overload. Adding to this headwind against clear thinking, research out of Stanford University suggests that overthinking diminishes our creativity, making innovative solutions more difficult to come by. 

Trying to tackle too much information clearly interferes with our ability to process the information effectively. This is a real concern in workplaces and schools, and whenever important life decisions come about. Yet when it comes to the abundance of information we encounter on the Internet, it is not the only concern. The phenomenon we’ve taken to calling “fake news” exploits the limits of our natural ability to discern fact from fiction.

The informational wolf in sheep’s clothing

It’s not accidental that determining truth and reliability on the Internet is challenging. For every established website or expert that people feel they can trust, there are literally millions of others that we have never heard of and have no background information about. So when we scan through our feeds looking for something interesting to read, the content (subject matter) will often trump any evaluation of the source it comes from. To be fair, we hardly have the time to double check everything we read, yet the implicit trust we’re placing in total strangers is bound to backfire every now and again. Because what we read sticks.

Which brings us to the spread of fake news on social media, a problem that came into focus during the latter days of the 2016 U.S. presidential campaign. People were exposed to “news” items that looked legitimate—the provider appeared legitimate and the news itself may have even seemed probable. Yet it was fake. What is it about our brains that makes fake news possible? How did this whole issue come about?

It starts with how easy it is to assemble a beautiful, intuitive website quickly and with no knowledge of HTML. The Internet is populated with millions of good-looking blogs with snazzy logos and easy-to-use navigation. The problem with this is that psychologists have found that we inherently trust things that are simple and easy to use. This is another of the brain’s many shortcuts. Things (like websites) that we can use intuitively seem familiar, and familiarity breeds trust in our minds. And the lower the cognitive demand, the less critical we are of the information we’re taking in. Which provides absolutely no assurance that the information itself is reliable. 

The Internet has made the whole of human knowledge instantly accessible. This is hardly a bad thing. But if we’re not careful in our information consumption strategies, we become susceptible to misdirection by misinformation. Just as we must learn as children how to filter our environment, we must learn as adults how to filter our information. So just how do we do this?

It comes down to changing habits

Our brains simply weren’t designed to handle the influx of information constantly bombarding us. So we need to find ways to filter and limit what we use. And move beyond efficiency and speed in information consumption to take into account the accuracy and reliability of the information itself. All of which starts with changing some habits. 

For starters, you can’t go wrong with simply limiting what types of information you see and where you see it. Every diet starts with consuming less. If you primarily use your social media accounts as a means to keep up with the goings-on of your friends and family, don’t also clutter those feeds with brands and celebrities. Compartmentalize social from news from entertainment, and consider using tools like RSS readers that consolidate a lot of different types of content into one place and provide easy tools for grouping and categorizing different types of media. And only ever follow sources you know, trust, and respect. 
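As an illustrative sketch of what such a reader does under the hood (the feed items and categories below are made up, and a real reader would fetch feeds over HTTP rather than parse an inline sample):

```python
# A minimal sketch of the idea: pull items from an RSS feed and group
# them by category, so news and entertainment stay in separate buckets.
import xml.etree.ElementTree as ET
from collections import defaultdict

SAMPLE_RSS = """<rss version="2.0"><channel>
  <item><title>Budget vote delayed</title><category>news</category></item>
  <item><title>New album drops</title><category>entertainment</category></item>
  <item><title>Markets rally</title><category>news</category></item>
</channel></rss>"""

def group_by_category(rss_xml):
    """Return {category: [titles]} for the items in an RSS 2.0 document."""
    grouped = defaultdict(list)
    for item in ET.fromstring(rss_xml).iter("item"):
        category = item.findtext("category", default="uncategorized")
        grouped[category].append(item.findtext("title"))
    return dict(grouped)

print(group_by_category(SAMPLE_RSS))
# {'news': ['Budget vote delayed', 'Markets rally'], 'entertainment': ['New album drops']}
```

The grouping step is the whole point: one pass over the feed, and each kind of content lands in its own compartment instead of one undifferentiated stream.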

Next, unleash your inner critic. Teach yourself to simultaneously read an article while evaluating the support for the article’s premise—the claims, facts, and figures you’re reading about should be clearly backed by high-quality research or supported by multiple reliable sources. This improves the reliability of your personal list of trusted websites, publications, and people.

It’s also a good idea to purposefully investigate the opposite of what you expect or assume—people have a natural tendency to gravitate towards information that confirms their beliefs and ignore anything and everything that conflicts with them. This bias can lead us into an ideological bubble, a safe, warm, and self-sustaining echo chamber where we only ever see things that reconfirm our existing beliefs and preferences. When we step outside our comfort zone, we exercise our intellect in new ways and open up the door to discovery.

Another valuable tip for managing your information diet is to produce more than you consume. This can be done in several ways. Trying to generate an answer to a question before looking it up helps you remember the correct answer, even if your guess was well off the mark. And once you’ve read something interesting, take the time to summarize it and put it into your own words. Doing so helps you digest the material, spot the loose ends, and cement it in memory more effectively.

You also can’t get around something we’re often too busy to do: taking breaks. Research has shown that we are less able to find associations and links between facts and ideas when we’re tired, and sleeping (even if it’s only a nap) improves our recall of whatever we were learning before the break.

You might also be familiar with the notion of getting brilliant ideas while in the shower, and there’s research to back this up. It turns out that when we stop working on a problem, our brains continue processing in the background. As the study authors report: “Our data indicate that creative problem solutions may be facilitated specifically by simple external tasks (i.e., tasks not related to the primary task) that maximize mind wandering.” In order to store knowledge in our memory banks, it must be connected and associated with existing memories and prior knowledge. This unconscious process of creating connections sometimes leads to unexpected linkages—solutions to problems or radical new ideas. 

It is easy to think that gaining more knowledge equates to consuming more information, but this isn’t the case. Knowledge asks more of us. It asks that we take the time to focus on quality, not quantity. That we ignore what is superfluous in order to focus on what’s important. That we take the time to connect new information with what we already know and understand. And, ultimately, that we leave behind sensory overload and learn to filter the meaningful from the noise. It’s how we grow up as information consumers.


Our CEO’s message about the immigration ban

Dear Entefyers,

On Friday, the President signed an executive order suspending or delaying entry into the United States for citizens of 7 countries: Iran, Iraq, Libya, Somalia, Sudan, Syria, and Yemen, including those citizens who currently hold green cards or visas for valid entry into the country.  

I know that many of you have already felt the repercussions of this executive order on loved ones, friends, and colleagues whose lives have taken an unexpected turn. And to all of you, I extend my greatest sympathies and also my firm commitment to keeping Entefy and its team strong during this period of uncertainty and confusion. 

Entefyers are global citizens who have traveled the world, done business in more than 50 countries, speak a dozen languages, and know something about how things are seen and done differently in other cultures. We are diversity personified. And so, despite this unnerving shift in policy and express abandonment of the core American values of openness and opportunity, Entefy will continue to support all Entefyers, no matter the country of your birth.

Early in Entefy’s young history, our small team sat together and penned our very first blog post to share with the world “Why we exist.” And in the process of communicating our mission, we realized something deeply embedded in our company’s culture and the DNA of everyone whom we proudly call an Entefyer. That, when you strip it all away and drill down to the very heart of why we do what we do every day, it is to stand defiantly in the face of greed and fear.

At the time, we were talking primarily about business at large—about how the leaders in our industry continue to complicate communication, increase advertising, violate user privacy, and so on. However, with global events such as these, I am reminded that it is our responsibility as a company, and as individuals, to stand against greed and fear, no matter where it comes from. 

And so, while I will not pretend to agree with this executive order, I am hopeful that the current administration will recognize the overbroad consequences of this policy and will act swiftly in bringing it to a close. 

As a global community, we have always been better together. And as an American, I know we can do better. 

Alston


Brienne visits her alma mater to promote STEM and entrepreneurship

“She Started It” follows the journey of female founders of technology companies, offering a behind-the-scenes look at what it’s like launching and growing a business as a modern woman—documenting everything from raising capital to managing family dynamics. Entefy’s co-founder Brienne is one of the women spotlighted in the film. She recently participated in a panel discussion following a screening of the film at her alma mater, Santa Clara University.

The film’s director Nora Poggi led the panel, which included Brienne, in a lively discussion between the panelists and the audience that delved into the challenges women face professionally, the importance of mentorship, tips for raising capital, and advice for anyone starting a company today. 

“She Started It” is being screened at universities and other venues around the world as part of a larger effort to encourage more women to pursue Science, Technology, Engineering, and Math (STEM) education and entrepreneurship. 


Hurry up and read that text message

Entefy’s survey of 1,500 U.S. professionals found that respondents received on average 34 text messages per day. Other data shows that 90% of text messages are opened within 3 minutes and 99% of those messages are read. Which means that we’re interrupting what we’re doing dozens of times every day to read text messages. If you were looking for a definition for “digital distraction,” that may be it.

Now consider that there are more mobile devices than people on the planet. With this kind of accessibility, checking notifications and making yourself available round the clock has become the norm for many people. We’re becoming more and more aware that jumping in and out of apps negatively impacts productivity, efficiency, and even mindfulness. It can be difficult to maintain focus and mental balance in this hyper-connected world.  

Watch the video version of this enFact here.

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you.


Patients are about to see a new doctor: artificial intelligence

The World Health Organization has estimated that there is a global shortfall of approximately 4.3 million doctors and nurses, with poorer countries disproportionately impacted. In the U.S., these shortages are less acute; instead, the country struggles with ever-increasing health care costs, which often translate into limits on the time a patient is able to spend with a doctor. One study estimated that U.S. doctors spend on average just 13 to 16 minutes with each patient.

Developments in the use of artificial intelligence technologies in medical care are beginning to demonstrate AI’s potential to improve the quality of patient care. Around the globe, researchers are studying how AI might become a new tool in the caregiver tool kit. From diagnosis, to personalized medical advice, to insights into genetics, AI is emerging as a new factor in medical care. 

So against this backdrop of a global shortage in doctors and nurses, and cost-driven strains in patient care, let’s take a look at some of the ways AI systems are being evaluated for use in medical care.

Enhanced medical diagnosis

In August 2016, an artificially intelligent supercomputer did in 10 minutes what would have taken human doctors weeks to achieve. Presented with the genetic data of a 60-year-old Japanese woman suffering from cancer, the system analyzed thousands of gene mutations to determine that she had a rare type of leukemia.

The patient’s doctors had initially treated her for myeloid leukemia but observed that her recovery was unusually slow. They suspected that she had another form of the disease, but they knew it could take weeks for them to identify which of her thousands of genetic mutations were relevant to her illness. Since speed is essential in effectively treating leukemia, the doctors used an AI system to run the analysis instead. Within minutes, the system determined that the patient had a secondary leukemia caused by myelodysplastic syndromes. Her medical team adjusted her treatment plan, and her condition improved soon after.

Arinobu Tojo, a professor of molecular therapy at a hospital affiliated with the University of Tokyo’s Institute of Medical Science, said, “It might be an exaggeration to say AI saved her life, but it surely gave us the data we needed in an extremely speedy fashion.”

Elsewhere around the globe, scientists and doctors are making use of AI systems to tackle a long list of issues. Artificially intelligent computers are being designed to analyze gene mutations, make better use of scientific studies, and enhance doctors’ clinical knowledge beyond their first-hand experience. 

One emerging theme for AI’s use in medicine: attempts to accelerate the pace of research breakthroughs. AI-driven research is taking place in areas as diverse as heart disease, diabetic complications, and brain trauma. One AI system is being used to analyze rare immunological conditions in children, providing physicians with insights on how to treat and cure issues that offer few clues at present. Machine learning is enabling many of these efforts.

Machine learning in medical care  

The foundation for the use of artificial intelligence in medical care is machine learning, the process by which computers “study” huge datasets to discover insights. Physicians and machine learning scientists, for instance, supply a smart computer with archived medical photos to train the system to spot certain disease markers or mutations. The more data a machine-learning system has, the more adept it becomes at providing accurate results. The system learns by recognizing patterns and updating its analyses as it gains more information. 
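To make the “study huge datasets” idea concrete, here is a toy, purely illustrative sketch of the learn-from-labeled-examples loop—not any clinical system’s actual algorithm. It “trains” by averaging labeled measurements into one centroid per class, then labels a new case by whichever centroid is nearest; the measurements and labels below are invented.

```python
# Toy nearest-centroid classifier: learn per-class averages from labeled
# examples, then classify a new case by its closest class average.
from math import dist
from statistics import mean

def train(examples):
    """examples: list of (features, label) pairs. Returns {label: centroid}."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    # Average each class's examples coordinate-by-coordinate.
    return {label: tuple(map(mean, zip(*rows))) for label, rows in by_label.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the new case."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Invented "lesion size / intensity" readings, labeled by clinicians.
training = [((2.0, 1.0), "healthy"), ((2.2, 0.8), "healthy"),
            ((7.9, 6.1), "diseased"), ((8.3, 5.7), "diseased")]
model = train(training)
print(predict(model, (7.5, 6.0)))   # -> diseased
```

The same shape scales up: more labeled examples sharpen the per-class patterns, which is why the paragraph above notes that more data makes these systems more accurate.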

Researchers at Imperial College London are testing machine learning applications in treating traumatic brain injuries. When they fed their algorithm images of such injuries, the system identified brain lesions and learned to discern white and grey matter. That ability could provide researchers with valuable information about these life-altering and often fatal conditions. 

In another example of machine learning applications in medicine, scientists recently experimented with how machine learning might facilitate diagnoses of diabetic retinopathy, a condition that causes vision impairment and blindness. The team trained the system using 128,000 images of healthy eyes. They then had the algorithm analyze 12,000 images and graded its ability to recognize signs of disease. The results indicated that the system “matched or exceeded the performance of experts in identifying the condition and grading its severity.”

Other implementations of machine learning are focused on the human genome, where AI systems are processing massive amounts of genomic data. The key to solving the mysteries behind conditions such as autism spectrum disorder lies in human genes, but the human brain simply can’t comprehend all the nuances and complexities of the genome. “I can say for sure that winning a game of Go is actually quite easy compared to understanding human health, and extending life spans, and saving lives,” says Brendan Frey, the co-founder and CEO of an AI genetics company and a professor of engineering and medicine at the University of Toronto.

Machine learning could be a game-changer in medicine because, unlike humans, computers don’t get tired and have a vast capacity for learning and memorization. They can process far more data far faster than even the most capable human physician. But that doesn’t mean that flesh-and-blood doctors are an endangered species.

Instead, AI is being designed to enhance diagnoses by providing doctors with data-driven insights derived from patients’ histories and conditions. Physicians can then use those reports to create treatment plans based on previously unavailable information. These systems can also provide vital insights in geographic areas where doctors, particularly specialists, are few and far between. Which, as the World Health Organization data reflects, is practically everywhere.

AI in the exam room 

The breakthroughs in diagnosis and treatment we’ve been talking about may save lives and alleviate suffering for patients who live with life-threatening and chronic illnesses. But AI’s impact may also be felt in people’s day-to-day health management and routine doctor’s visits. Perhaps widening those brief 13 to 16 minute windows of time that patients spend with doctors. 

One implementation of AI technologies in health care is taking place behind the scenes. Most of us have typed a list of symptoms into a web search at one point or another, and sorted through conflicting or alarming diagnostic information. Search companies are beginning to attach their platforms to AI systems with the goal of providing health information that is personally relevant. One study demonstrated that search histories can identify users who are at risk for pancreatic cancer and, potentially, other diseases. 

Of course, search engines are not replacements for real doctors. But it’s arguably better to see results vetted by experts from Harvard Medical School and the Mayo Clinic than to scour questionable community forums that offer little in the way of knowledge or reassurance. Search platforms may also be able to nudge people to make appointments sooner rather than later when their searches indicate that they may be suffering from an illness. 

Access to AI-powered medical databases could enable physicians to make more precise diagnoses because they would be drawing on a wealth of medical knowledge instead of thumbing through paper charts or trying to read through patient files during their already-short in-person visits. For instance, doctors at St. Jude’s Medical Center in Memphis and Vanderbilt University Medical Center in Nashville currently receive pop-up notifications in patients’ digital files alerting them to potential contraindications with certain medicines. 

The home-grown artificial intelligence system used at Vanderbilt University Medical Center predicts which patients may need specific drugs in the future. It can recommend that doctors run genetic tests on those who have a high likelihood of needing blood clot medications and other drugs, avoiding potential crises and saving time down the road. 

AI is also being designed to help with disease prevention and management. Information pulled from hospital databases, electronic records, in-home monitors, fitness trackers, and implanted devices could help health care providers predict which patients show high indicators for conditions such as congestive heart failure (CHF). Mount Sinai Hospital in New York is already exploring how AI can help track patients who have or may develop CHF so they can prioritize care and help people manage their conditions. 

Facilitating human connections 

Let’s add some context to the “13 to 16 minute” doctor visit stat we cited earlier. One of the primary factors limiting doctor-patient time is something we’re all probably familiar with: paperwork. A study by the American Medical Association followed a group of 57 physicians through 430 hours of patient care. Here’s what it found:

Physicians spent 27% of their time in their offices seeing patients and 49.2% of their time doing paperwork, which includes using the electronic health record (EHR) system. Even when the doctors were in the examination room with patients, they were spending only 52.9% of the time talking to or examining the patients and 37.0% doing […] paperwork.

For all of the potentially groundbreaking research we’ve described here—into diseases, genetics, diagnostic automation—the true test of AI’s impact in medical care might be whether it can make a dent in the paperwork problem. Medical records, regulatory documentation, insurance filings, you name it. Solving that in one pass frees up time that doctors and nurses can devote to patient care, and might even attract more people to the profession, helping address the global doctor shortage.

AI-driven efforts are already underway to combat the administrative burden. Some health companies are using apps to cut down on the time nurses and doctors spend gathering patient information. Outsourcing triage and patient interviews to algorithms could cut down on health care costs, but with the risk that this could further reduce the time doctors spend with their patients. The goal would be to improve doctor-patient interactions by freeing physicians from laptop screens and focusing on the people in front of them. 

And that’s really where the true potential of AI in medicine lies. Rather than replace interpersonal connections and involvement, AI can reduce the burden on doctors and nurses so they can focus on the uniquely human elements of patient care. No one will complain if AI helps lower health insurance premiums or helps physicians find the right treatments faster. But people look to doctors and nurses for reassurance, comfort, and insight. Machines simply cannot replicate a human’s bedside manner. 

It’s not an all-or-nothing decision, either. While algorithms can generate comprehensive analyses of a patient’s history and suggest a potential diagnosis, doctors need to evaluate that information and apply their real-world experience to determine the course of care. It’s also worth noting that at present smart machines cannot explain the reasoning behind their suggested diagnoses, so doctors can’t know whether the process behind those recommendations is sound. They can use the information to supplement their own analyses or as a jumping-off point when working out a diagnosis. The responsibility for patient care remains their own.

Ultimately, AI has the potential to enhance the practice of medicine, from disease research to the logistics of routine care. But its greatest impact may be found in how it enhances and empowers human practitioners. 

Additional article contributors: Mehdi Ghafourifar and Brian Walker.


Entefy Files 13 New Patents in Artificial Intelligence, Security, and Data Privacy

PALO ALTO, Calif., Jan. 24, 2017 — Entefy Inc. announced today the filing of 13 new patent applications to further drive the technology behind its AI-powered universal communicator. These new patent applications bring the company’s total filed patents to 31, representing widespread innovation in its core product. 

“Invention has always sat at the heart of what we do at Entefy. I’m proud of the team for embodying this spirit and continuing to make progress in pioneering new technologies,” said Alston Ghafourifar, Entefy’s Co-founder and CEO. “Today’s news represents another step forward.”

Entefy’s universal communicator is a smart platform built on advanced computer vision and natural language processing technologies. It is designed to help people better manage their digital ecosystems, the combination of computing devices, software applications, and online services people use at home and at work. 

The company recently released the results of a survey of 1,500 U.S. professionals that showed nearly 80% of respondents regularly use 3 or more different digital devices; respondents also reported using an average of 6.9 separate communication applications (11.2 including social media) to send 224 daily emails, texts, and IMs. In the face of all this complexity, respondents expressed overwhelming interest in simplification of their fragmented digital ecosystems. 

“Entefy is directly addressing some major pain points in people’s digital lives, like communication, search, security, and data privacy,” said Entefy Co-founder Brienne Ghafourifar. “Today’s announcement sums up just how much progress we’ve made towards this goal.” 

ABOUT ENTEFY

Entefy is building the first universal communicator—a smart platform that uses artificial intelligence to help you seamlessly interact with the people, services, and smart things in your life—all from a single application that runs beautifully on all your favorite devices. Our core technology combines digital communication with advanced computer vision and natural language processing to create a lightning fast and secure digital experience for people everywhere. 


Skyrocketing video consumption

Every day, YouTube users view 10 to 20 billion videos, and the average adult watches 76 minutes of digital video. By 2019, video is projected to account for 80% of consumer Internet traffic, which is one of the reasons mobile data traffic has grown 400-million-fold since 2000.

What’s behind the continuing rise of video? For one thing, reading and watching rely on different brain processes. Reading requires active attention and gives the brain a workout because it takes more cognitive effort. Video is passive and easier to consume, and by one widely cited (though never rigorously substantiated) estimate, the brain processes visuals as much as 60,000 times faster than text. Video might win the popular vote, but the many positive effects of reading shouldn’t be dismissed.

Watch the video version of this enFact here.

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you.

Entefy’s new look

Since the idea for the company sprang to life on a napkin, we’ve been hard at work developing our ideas and our technology. Today, as Entefy enters its next phase of growth, we’re debuting our new logo, visual style, and exclusive content, all of which can be seen on our new website.

Much has happened since the very early days of Entefy. Big technological breakthroughs. Key product milestones. National user studies. User experience testing. And a long list of amazing individuals—team members, investors, partners, and advisors alike—joining us in our mission to seamlessly connect people with everyone and everything in their lives. All of this has positioned Entefy for a breakout year in 2017.

As you look through the site, you’ll notice that we’ve done away with boxes, borders, and boundaries of all kinds to emphasize the limitless potential and reach of this amazing, interconnected “smart world” we live in. Seamless interaction has always been at the heart of what we do, and our new look reflects this.

If you detect hand-crafted precision in our new visual style, you have a good eye. Born from a pixel-by-pixel design process and tireless iteration, our redesigned logo and visual theme represent an entirely new visual language for Entefy, one that embraces simplicity, efficiency, and power in this modern, hyper-connected world. You’ll see this new look on our website, on our social channels, on our team t-shirts: basically, everywhere you meet Entefy in person or online. 

On behalf of all Entefyers, we thank you for taking the time to visit our site and learn more about Entefy and the future of AI-powered digital interaction. 


Worried about Friday the 13th? There’s a word for that.

There are many superstitions about Friday the 13th. But whether you’re superstitious or not, the fact that 85% of tall buildings in the U.S. skip the 13th floor suggests that most of us don’t take any chances with the number 13. 

So if you’re worried about leaving the house today, we have a sure-fire (and highly unscientific) cure: just say the 23-character word “paraskevidekatriaphobia” three times fast. What’s paraskevidekatriaphobia? That’s the fear of Friday the 13th, of course. 

We came across a couple of other hard-to-pronounce terms for phobias. Did you know that “nomophobia” is the fear of being without a mobile device? Or that, ironically, the 36-character word “hippopotomonstrosesquippedaliophobia” is the fear of long words? 

Assuming you weren’t struck by lightning while reading this, here are 13 more facts you might not know about Friday the 13th. Stay safe out there.


Inattention: The brain’s complex relationship with social media

We’ve all experienced that itch during the in-between moments of the day, waiting for an appointment or riding a bus: the one that has us reaching for our mobile devices to flick through social media feeds in search of something mentally stimulating.

Social media feeds can be a perfect way to pass time discovering new information from sources we trust and admire. If we’re not careful, however, repeatedly revisiting those feeds is also perfectly suited to driving us into what could be called an information addiction, because that “itch” runs on the same mental machinery that leads us to overindulge in anything, from exercise to sweets to coffee.


These compulsive behaviors are linked to the brain’s dopamine pathways. Dopamine is the neurotransmitter behind the human motivation to earn a reward. The subsequent enjoyment of the reward is actually the result of the opioid system. Together these neurotransmitters create a behavioral loop of wanting and liking: “Sweetness or other natural pleasures are mere sensations as they enter the brain, and brain systems must actively paint the pleasure onto sensation to generate a ‘liking’ reaction — as a sort of pleasure gloss or varnish.”

Here’s how it works. When your mobile device vibrates with a notification, your dopamine system sparks up as you realize there is new information available that might make you feel good, making you want to read the message. So you check your device, scan your feeds, and enjoy the feeling of being informed and up-to-date thanks to the opioid system. All of which increases the likelihood that you’ll check your device the next time it vibrates—and voila, a new behavioral loop is born. 

An interesting aspect of this brain-circuitry loop is that unpredictable rewards work best. That is, if you cannot be sure when you will get the reward, you will want it even more, and will overcome more “obstacles” to get it. An obstacle like listening attentively to what your friend is saying. This unpredictability aspect seems perfectly suited to social media, as you never really know when an interesting link or article or comment will come up that grabs your attention—but you know it could happen, if only you keep scrolling.

Forming a lasting impression

There is another noteworthy aspect of brain behavior related to social media: memory formation. Research suggests that low levels of focus (as is typical while browsing through social feeds) can inhibit memory formation. You may have come across this when you tried and failed to recall something you skimmed over on your feeds earlier in the day. There are ways to improve memory retention, and it starts with understanding how memories are formed. So what does it take to remember?

There are several stages to memory formation. Any experience starts out in sensory memory, which lasts for a fraction of a second and relates to what we would call perception. The lasting ring you see when spinning a firecracker around in circles is a product of sensory memory.

The second stage is short-term memory. This lasts in the range of 10-20 seconds unless we decide to focus on the memory by repeating it or using it somehow. It is often used interchangeably with working memory, although there is a slight difference in that working memory is specifically where we manipulate information. Both working and short-term memory have fairly limited capacities, able to hold approximately 5 to 9 items of information at a time.

We use the information in working memory when we try to understand the relationships and order of each item. When you add several numbers together in your head, working memory is where you hold the numbers long enough to perform the calculations. Likewise, it’s also what we use to maintain the important ideas of an article in mind long enough to understand the overall meaning of the piece.

The final stage in memory formation is long-term memory, the holy grail of remembrance. As far as we know, it can hold unlimited items for an indefinite period of time. It is here that we want important information to end up. However, for something to make it into long-term memory, it must survive the filtering processes of the aforementioned memory systems. 


Humans are bombarded with sights and sounds at every moment, yet only a small fraction of our world ever makes its way to our attention, and even less is stored away for long-term use. The takeaway here is that if you want to remember something, be prepared to give it sustained attention, and then revisit it again and again to make it stick.

The disappearance of sustained attention

Despite the fact that many of us spend hours a day online, we spend just 10-20 seconds on the average web page, barely long enough to read the headline and opening paragraph. This skittish behavior appears on social media, too: it’s been found that 59% of retweeted links were never read by the people sharing them. Study co-author Arnaud Legout stated:

“People are more willing to share an article than read it. This is typical of modern information consumption. People form an opinion based on a summary, or a summary of summaries, without making the effort to go deeper.”

Research also shows that the more information we try to hold in mind at once, the poorer we are at processing all of it. This means that bouncing back and forth between unrelated information and ideas—switching from a news app to a social app then back to the news, for instance—only inhibits your brain’s natural ability to find coherence, making you more likely to forget everything. Humans are not good mental multitaskers.

If we give in to the information-seeking itch too often, and rush through feeds and links, haphazardly looking for higher and higher levels of mental stimulation and excitement, we end up overloading our working memory system and fail to record anything of substance by curiosity’s end.

Scaling it back

The good news is that moderation and sustained focus can overcome these shortcomings. It needn’t be all that difficult, either: take your time with articles that particularly interest you, and be more selective about what you click on or share. Think of it as increasing the ROI (return on investment) of your personal time.

Research from Kaspersky Lab discovered that when information is easy to find online, we tend not to remember the information itself, but where we found it. Basically, we’re offloading (or, more technically, “externalizing”) the memorization of facts to the Internet. This does not have to be a negative, however, as another paper shows that doing so allows us to focus better on the big-picture ideas and abstract concepts those facts pertain to, without getting overwhelmed by all the details.

What’s more, research out of UCL found that the brain increases in plasticity in response to novelty. The hippocampus is a seahorse-shaped structure in the center of the brain that is vital to forming long-term memories and is one of the first systems to break down in Alzheimer’s disease. A hippocampus more capable of changing and adapting would provide a substantial boost to learning, and it just so happens we achieve this when we’re immersed in new experiences. Of course, if the hippocampus opens its arms to novelty, the Internet sits ready to offer as much as can be grasped, and then some.

Social media is filled with novel information, formatted in such a way that it grabs our attention in the first instant or not at all. If we make more of an effort to focus on those items that do grab our attention, and not quickly jump onto the next thing, chances are we will absorb more and crave less.