
What would the Super Bowl look like with AI referees and other smart technologies?

It’s late in the 4th quarter of Super Bowl LI. The Falcons are facing 3rd and goal on the Patriots’ 5 yard line. Matt Ryan takes the snap and hands off to Devonta Freeman, already running hard at the goal line. Then, with a crunch audible to the topmost rows of NRG Stadium, Freeman is brought down by Dont’a Hightower right at the goal line. Touchdown?! Silence falls as all eyes turn… not to the referees on the sidelines (there aren’t any) but to giant LCD panels behind the end zones. The screens remain black for several long moments until “TOUCHDOWN” lights up. A roar erupts from about half of the stadium.

Where were the referees in this fictional account of the upcoming Super Bowl LI? They’ve been replaced by artificial intelligence systems hooked up to networks of sensors worn by the players and high-speed cameras strategically positioned throughout the stadium.

Does all of this sound speculative? It is, but not as much as you might think when you take a look around the world of professional sports. “Precursor” technologies that provide the sensory input data for yet-to-be-invented AI algorithms are already in use. The distance between today’s smart sensors and tomorrow’s fully automated officiating is closing fast. 

We won’t see AI refs in Super Bowl LI, but the question of whether there’s room for automated officials in the NFL and other professional sports is quickly becoming a matter of policy, not technology.

Automated officiating is born one technology at a time

Ask any soccer fan about the Hand of God goal, and you’ll hear not a lecture on divine intervention, but a passionate recounting of one of the biggest referee mistakes in sports history. During the 1986 World Cup quarterfinals, Argentinian midfielder Diego Maradona scored a goal by swatting the ball into the net by hand, a clear no-no in soccer. The referees all missed it, and the goal stood despite the handball, which was described by Bleacher Report as “one of the most egregious” mistakes in World Cup history. Because of this error, Argentina ended up winning 2-1 over England.

Refs are only human, and we can’t expect them to be right all of the time. Some calls, such as the Hand of God, occur simply because refs can’t watch every single detail of every single play. Mistakes are inevitable. But the days of bad calls may soon be at an end. Goal-line technology, wearables, nanosensors, and even artificial intelligence are being adopted to help reduce referee errors and level the playing field.

Do referees have the toughest jobs in sports? 

The Hand of God goal wasn’t the first referee mistake and it certainly won’t be the last. Questionable calls have landed refs in the crosshairs of irate players and fans since humans first began competing over athletic prowess. The only difference between then and now is that bad calls are recorded and preserved online forever. Everything from simple oversights to politically charged decisions can be witnessed and dissected by every sports fan with a smartphone and a data plan.  

Consider the 1972 Olympics, when officials nearly reignited the Cold War by handing the gold to the Soviet Union’s men’s basketball team. The U.S. team had remained undefeated throughout the Olympics and, as time wound down in the gold medal match, looked like it would stay that way. But officials added time to the clock during the last seconds of the matchup, a decision ESPN dubbed one of the worst calls in sports history. The Soviets eked out a one-point lead in those final moments, unseating the American champions.

Although the world narrowly avoided political disaster then, referees have continued to confound players and enrage fans with mystifying calls. But what if officials—at the next World Cup, Olympics, or Super Bowl—had access to real-time data on players’ movements and could make faster, more accurate decisions? By reducing human error, there would be fewer opportunities for bad judgment and confusion to alter the outcomes. This would benefit not only the players, but the refs as well. That’s a good thing, right? Well, it’s not that simple. 

The era of goal-line technology 

Referees are just as much a part of the game as players and coaches, facing their own unique brand of pressure every time they set foot on a court or field. The decisions they make determine the course of high-stakes matchups, and coaches, players, and spectators alike don’t hesitate to let them know when they disagree with their calls. That’s where advances in tracking technology prove useful. 

In football or soccer, for instance, goal-line technology could become referees’ first line of defense against disgruntled players and fans who dispute their calls. Goal-line technology determines the exact moment when a ball crosses the threshold to count as a score. These systems use high-speed cameras or microchips implanted in the ball to track its movement, and referees can use the real-time data from the systems to confirm and support their judgments.

The NFL has yet to fully adopt goal-line technology, but FIFA implemented it for the 2014 World Cup, and England’s Premier League has begun using it as well. One Premier League manager spoke in favor of adopting the system, saying it would prevent “gross injustices” from occurring in the sport. Referees can access goal data within a second, mitigating lengthy game delays. Rather than relying on their own observations or debating with their colleagues, officials will have instant proof of whether a goal should stand.

Such technology would help referees in other sports as well, allowing them to defend their calls when they’re accused of bias. Case in point: Game 2 of the 2008 NBA Finals, when observers noted that many of the refs’ calls favored the Boston Celtics over the Los Angeles Lakers. Emotions flare when championships are on the line, and hard data can help cooler heads prevail.

Wearables and AI technology are game-changing—literally

Bad calls make great headlines, but the occasional bad call doesn’t always indicate a trend. When the NBA began reviewing officials’ calls in 2015, it found that referees were correct 86% of the time in the crucial final minutes of games. Rod Thorn, then-head of the NBA’s basketball operations, said the referee reviews not only increased transparency, but added “a humanity factor” and proved that “the vast majority of the calls are right.” 

A little humanity can go a long way, especially in the fraught world of sports officiating. Referees often find themselves the targets of insults and vitriol, particularly from fans. There are even online forums where fans go to brainstorm creative barbs to unleash on officials during games. 

Technology is giving refs better cover. Professional leagues are increasingly equipping their athletes with wearable devices that send constant updates to coaches and referees, who use the data to assess performance, spot signs of injury, and analyze goals and plays.

At high-profile events, such as the 2016 Rio Olympics, data was often streamed to television broadcasters so spectators could understand what was happening in real time. Former Olympian Barbara Kendall has said that live stats make sporting events more interactive for fans watching at home.

But wearables are also invaluable for accurate judging and scoring. In fencing, for instance, sensors indicate the precise timing and location of athletes’ hits. In several sports, camera feeds record exactly what referees see, reducing controversy around their decisions. It’s hard to dispute officials’ logic when you’re looking at the same numbers and visuals and can see why they ruled the way they did.

Cameras may also provide a buffer against human error, a real concern for referees. In one memorable instance, officials accidentally allowed the University of Colorado Boulder football team five downs, giving them a game-defining advantage over the University of Missouri. The error occurred in 1990, but it remains so widely known that it has its own Wikipedia page. Widespread use of on-field cameras could help referees catch those oversights, as would automated tracking of downs, yards gained and lost, and other critical aspects of the game.

Artificial intelligence is also already here, helping coaches call plays during games. “Moore’s Law predicts that computational power doubles roughly every two years, so by Super Bowl 100, in 2066, computers should be several million times faster than today. Imagine a robot Bill Belichick flicking through a digital playbook of trillions of moves during the 40-second gap between plays.” 

Immersion and empathy through technology 

Wearables and smart technology are transforming sports like football, soccer, and basketball in unprecedented ways. Athletes and coaches are gaining a deeper understanding than ever before of plays, strategies, and players’ abilities. If coaches can monitor athletes’ vitals and performance, they can detect when someone is injured or at risk of hurting themselves. Players can then seek care in time to prolong their careers instead of being devastated by an unforeseen break or trauma.

Fans are getting in on the action, too. Jerseys equipped with smart sensors give them a feel for what it’s like to be on the field with their favorite players. This smart apparel uses haptic feedback to transmit football plays to viewers as they’re watching them happen.  

Even those who don’t slip into a smart jersey can get up close and personal through wearables. Devices such as Ref Cams and GoPros allow viewers to feel as though they’re on the court or field. The WNBA introduced Ref Cams in 2013, and Fox Sports installed GoPros in referees’ hats for its coverage of the December 2016 Big Ten Championship Game.

Although refs and players don’t always see eye to eye, they’re likely to agree that facial recognition technology will change sporting environments for the better. Stadiums in Australia are exploring the use of such systems in keeping known troublemakers out of their venues. People who are known to start fights and antagonize refs and players may soon be contained before they’ve had a chance to rile people up and become distractions for officials, players, and fans who are trying to enjoy their games. 

Wearables and other smart technologies will help officials improve their accuracy and do their jobs more effectively. But the greatest benefit may be that when fans and coaches get a ref’s-eye view of the game, they become a little less hostile and a lot more empathetic.  

Will AI replace refs?

We’ve been looking at how new technologies are having an impact in professional sports around the world. These changes are, for the most part, evolutionary: players, coaches, and officials benefit but the games remain largely the same. As long as the chips, dips, and buffalo wings don’t run out, Super Bowl LI won’t feel much different.

When we start talking about artificial intelligence in sports, we enter into a completely new realm. Because when you link the sensors and cameras we’ve been talking about to AI systems, you have the recipe for fully automated officiating. 

As we saw with soccer, on-field tracking systems can detect handballs, identify penalties, and evaluate offside calls. These are working systems already in use, and the capabilities of autonomous AI systems will only grow from here. Which is why one research study estimated that referees and umpires face a 98% chance of being replaced by AI.

Proponents of automated officiating say that AI could reduce corruption and more accurately enforce rules, and it seems likely that the technology will play an increasingly prominent role in athletics. But the transition – if it happens – won’t occur overnight. 

Although referees are often maligned by angry sports fans, people see officiating a game as a complex task that requires human capabilities. Increased use of sensors and cameras might provide an AI system with the data to make certain calls, but such systems may need something closer to a theory of mind before they seem human enough to be embraced by sports teams and fans. As we see with technologies like self-driving cars, the technology is largely here; it’s the policy and regulation that need to catch up.

So when you’re watching the Super Bowl this weekend, keep an eye on the referees. Can you picture the game without the zebra jerseys on the sidelines? Are we ready to welcome automated play calling to the wide world of sports? 


Something we seemingly can’t live without—224 times a day

According to our research, adults in the U.S. send and receive on average 14 messages every waking hour, counting IMs, texts, and emails. Across roughly 16 waking hours, that works out to about 224 messages a day.

The constant stream of notifications this produces can be detrimental to focus and productivity. In fact, it can take nearly 30 minutes to get back on task after responding to an email. Now multiply this out across your whole day. 

We’ve hit a threshold when it comes to how much information we can manage on a daily basis. There’s only one way to go from here, and that’s simplification.

Watch the video version of this enFact here.

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you.


Information overload, fake news, and invisible gorillas. Teach your brain new habits.

Consider newborns. When we first lay eyes on the world around us, every sensation overwhelms our brains. There are colors, shapes, sounds, movement, and people totally unknown to us. Our brain does not yet know which of these experiences to prioritize and which to ignore, so our focus jumps from sensation to sensation. Slowly over time we learn to distinguish between the elements of our environment that are important and those that are not. We figure out how to properly filter the unceasing deluge of information swarming our senses. We grow up. 

This filtering process is important because the brain does not respond well to being overworked. It prefers things to run smoothly and efficiently, which it does best when it is focusing on a few discrete items at once. Furrowed brows are the outward signs of brains at the limits of their processing capabilities. Many of the things we try to do with our brains will quickly test these limits—such as setting out to master advanced calculus or learning how to read sheet music. Only with experience and accumulated knowledge can our brains safely navigate learning with ease and efficiency.

When it comes to getting our information off the Internet, we’re all a little like children—we have limited attention and we’re not always sure where to direct it. It can be difficult to limit how much information we consume when there’s always something new waiting for a click. And then, before we know it, an abundance of messy and complex information has infiltrated our minds. If our processing strategies don’t keep pace, our online explorations create strained confusion instead of informed clarity. Focus is, after all, a finite resource.

Let’s take a look at how it all works, and how we can optimize the time we spend directing our focus.

Attention overload

One fascinating psychology study illustrates just how finite our attention can be. Researchers had participants watch a video of two groups of people passing a ball from person to person. One group wore white shirts, the other wore black. Participants were asked to count how many times those wearing white passed the ball. So far so good. But what initially seems like a task of ignoring the subjects wearing black takes an interesting turn when someone in a gorilla suit casually strolls into the scene, stands in the middle of the game, and thumps its chest. The gorilla, of course, should be an immediate distraction and easily noticeable, yet only half the participants spotted it. The conclusion: too many demands on our attention blind us to everything else.

Distraction has ties to forgetfulness. As we’ve explored previously, for humans to remember information, it must successfully survive the filtering processes of our sensory, short-term, and working memory systems. Only then can it be stored away in long-term memory. The place where we attend to and think about information is working memory, and trying to jam too much information through this system leads to a substantial drop in performance, both in terms of processing the information and in storing it for later use. How much is too much? George Miller’s research in the 1950s suggested that the limit was 7 items (plus or minus two), although more recent research has found that the magical number may be as low as 4. Which means that, structurally, the brain can only do so much at once. There goes multitasking.

Silicon Valley pioneer Mitchell Kapor has said: “Getting your information off the Internet is like taking a drink from a fire hydrant.” On the Internet there’s a guide to everything, opinions abound, and there are more interesting facts and ideas than anyone knows what to do with. So much so that collectively we write as much in a day online as all the books in the Library of Congress combined. Where is this information storm taking us? And how is our limited attentional system going to keep up?

More information is not necessarily better information

There are now around 47 billion web pages online, while the number of Internet users has surpassed 3.5 billion. Meanwhile, a Nielsen report indicates that we spend an hour longer online each day now than we did just a year ago, with most of that increase coming from time spent on mobile devices. As more people jump on board, our reliance on the Internet grows. When Google went down for a few minutes in 2013, it took 40% of Internet traffic with it. Which isn’t that surprising when you consider that there are more than 58,000 Google searches every second. For better or worse, search engines have altered our behavior. Having answers so close at hand certainly reduces the cognitive load on our mental resources.

Yet when it comes to comprehending abstract concepts and making complex decisions, too much information causes its own set of problems. For instance, it can induce a feeling that we must read and consume more than is really required. We can get stuck in a stasis known as analysis paralysis, in which we set out to make a deliberate decision by considering every last detail, but instead get overwhelmed and struggle to make any decision at all. 

The first study to document this effect came from Sheena Iyengar, a business professor at Columbia University. She set up a booth at a supermarket and offered samples of a selection of jams. She wanted to know how people would behave when presented with a selection of 6 jams compared to 24. What she found was that people initially responded favorably to the extra options—60% of people approached the booth, compared to 40% when only 6 jams were presented. But when it came to making a purchase, the figures reversed dramatically: 30% of people purchased from the selection of 6 jams, while only 3% purchased from the larger selection.

Over time this effect compounds. The more choices we have to make, the greater the decline in the quality of our decisions. Psychologists refer to this effect as decision fatigue or mental fatigue, and one striking example of it comes from the judicial system, in which judges were found more likely to grant parole after having come back from a break and a bite to eat. Charted out over time, favorable rulings gradually dropped from around 65% to nearly zero, then returned to 65% after some rest. Rather than ruling case by case, the cumulative weight of making decision after decision eventually snowballed into mental exhaustion, making the judges less willing and able to properly evaluate cases later in the day. 

For a more universal example, a 2010 survey of 1,700 professionals from the U.S., UK, China, South Africa, and Australia found that 59% of respondents had experienced a large increase in the amount of information they were required to process in the workplace, and 62% felt that the quality of their work suffered at times because of this information overload. Adding to this headwind against clear thinking, research out of Stanford University suggests that overthinking diminishes our creativity, making innovative solutions more difficult to come by. 

Trying to tackle too much information clearly interferes with our ability to process it effectively. This is a real concern in workplaces and schools, and whenever important life decisions come about. Yet when it comes to the abundance of information we encounter on the Internet, overload is not the only concern. Because this phenomenon we’ve taken to calling “fake news” impedes our natural ability to discern fact from fiction.

The informational wolf in sheep’s clothing

It’s not accidental that determining truth and reliability on the Internet is challenging. For every established website or expert that people feel they can trust, there are literally millions of others that we have never heard of and have no background information about. So when we scan through our feeds looking for something interesting to read, the content (subject matter) will often trump any evaluation of the source it comes from. To be fair, we hardly have the time to double check everything we read, yet the implicit trust we’re placing in total strangers is bound to backfire every now and again. Because what we read sticks.

Which brings us to the spread of fake news on social media, a problem that came into focus during the latter days of the 2016 U.S. presidential campaign. People were exposed to “news” items that looked legitimate—the provider appeared legitimate and the news itself may have even seemed probable. Yet it was fake. What is it about our brains that makes fake news possible? How did this whole issue come about?

It starts with how easy it is to assemble a beautiful, intuitive website quickly and with no knowledge of HTML. The Internet is populated with millions of good-looking blogs with snazzy logos and easy-to-use navigation. The problem with this is that psychologists have found that we inherently trust things that are simple and easy to use. This is another of the brain’s many shortcuts. Things (like websites) that we can use intuitively seem familiar, and familiarity breeds trust in our minds. And the lower the cognitive demand, the less critical we are of the information we’re taking in. Which provides absolutely no assurance that the information itself is reliable. 

The Internet has made the whole of human knowledge instantly accessible. This is hardly a bad thing. But if we’re not careful in our information consumption strategies, we become susceptible to misdirection by misinformation. Just as we must learn as children how to filter our environment, we must learn as adults how to filter our information. So just how do we do this?

It comes down to changing habits

Our brains simply weren’t designed to handle the influx of information constantly bombarding us. So we need to find ways to filter and limit what we use. And move beyond efficiency and speed in information consumption to take into account the accuracy and reliability of the information itself. All of which starts with changing some habits. 

For starters, you can’t go wrong with simply limiting what types of information you see and where you see it. Every diet starts with consuming less. If you primarily use your social media accounts as a means to keep up with the goings-on of your friends and family, don’t also clutter those feeds with brands and celebrities. Compartmentalize social from news from entertainment, and consider using tools like RSS readers that consolidate a lot of different types of content into one place and provide easy tools for grouping and categorizing different types of media. And only ever follow sources you know, trust, and respect. 
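
If you like to tinker, here is a minimal sketch of what that consolidation step can look like in code. It assumes the open-source feedparser library, and the feed URLs and category names are placeholders to be replaced with sources you actually know and trust:

```python
# A minimal sketch of consolidating different types of feeds into one grouped view.
# The URLs and category names below are placeholders; swap in sources you trust.
# Assumes the open-source "feedparser" library: pip install feedparser

import feedparser

FEEDS = {
    "news":          ["https://example.com/news/rss"],
    "entertainment": ["https://example.com/entertainment/rss"],
    "friends":       ["https://example.com/blog-roll/rss"],
}

def latest_headlines(feeds_by_category, per_feed=5):
    """Return {category: [(title, link), ...]} for a quick, grouped daily scan."""
    grouped = {}
    for category, urls in feeds_by_category.items():
        items = []
        for url in urls:
            parsed = feedparser.parse(url)
            for entry in parsed.entries[:per_feed]:
                items.append((entry.get("title", "untitled"), entry.get("link", "")))
        grouped[category] = items
    return grouped

if __name__ == "__main__":
    for category, items in latest_headlines(FEEDS).items():
        print(f"== {category} ==")
        for title, link in items:
            print(f"- {title} ({link})")
```

Reviewing one grouped digest on your own schedule, rather than reacting to every notification as it arrives, is exactly the kind of habit change we’re describing here.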

Next, unleash your inner critic. Teach yourself to read an article while simultaneously evaluating the support for its premise—the claims, facts, and figures you’re reading about should be clearly backed by high-quality research or supported by multiple reliable sources. This improves the reliability of your personal list of trusted websites, publications, and people.

It’s also a good idea to purposefully investigate the opposite of what you expect or assume—people have a natural tendency to gravitate towards information that confirms their beliefs and ignore anything and everything that conflicts with them. This bias can lead us into an ideological bubble, a safe, warm, and self-sustaining echo chamber where we only ever see things that reconfirm our existing beliefs and preferences. When we step outside our comfort zone, we exercise our intellect in new ways and open up the door to discovery.

Another valuable tip for managing your information diet is to produce more than you consume. This can be done in several ways. Trying to generate an answer to a question before looking it up helps you remember the correct answer, even if your guess was well off the mark. And once you’ve read something interesting, take the time to summarize it and put it into your own words. Doing so helps you digest the material, spot the loose ends, and cement it in memory more effectively.

You also can’t get around something we’re often so busy we choose to ignore: taking breaks. Research has shown that we are less able to find associations and links between facts and ideas when we’re tired, and sleeping (even if it’s only a nap) improves our recall of whatever we were learning before the break.

You might also be familiar with the notion of getting brilliant ideas while in the shower, and there’s research to back this up. It turns out that when we stop working on a problem, our brains continue processing in the background. As the study authors report: “Our data indicate that creative problem solutions may be facilitated specifically by simple external tasks (i.e., tasks not related to the primary task) that maximize mind wandering.” In order to store knowledge in our memory banks, it must be connected and associated with existing memories and prior knowledge. This unconscious process of creating connections sometimes leads to unexpected linkages—solutions to problems or radical new ideas. 

It is easy to think that gaining more knowledge equates to consuming more information, but this isn’t the case. Knowledge asks more of us. It asks that we take the time to focus on quality, not quantity. That we ignore what is superfluous in order to focus on what’s important. That we take the time to connect new information with what we already know and understand. And, ultimately, that we leave behind sensory overload and learn to filter the meaningful from the noise. It’s how we grow up as information consumers.


Our CEO’s message about the immigration ban

Dear Entefyers,

On Friday, the President signed an executive order suspending or delaying entry into the United States for citizens of 7 countries: Iran, Iraq, Libya, Somalia, Sudan, Syria, and Yemen, including those citizens who currently hold green cards or visas for valid entry into the country.  

I know that many of you have already felt the repercussions of this executive order on loved ones, friends, and colleagues whose lives have taken an unexpected turn. And to all of you, I extend my greatest sympathies and also my firm commitment to keeping Entefy and its team strong during this period of uncertainty and confusion. 

Entefyers are global citizens who have traveled the world and done business in more than 50 countries, speak a dozen languages, and know something about how things are seen and done differently in other cultures. We are diversity personified. And so, despite this unnerving shift in policy and express abandonment of the core American values of openness and opportunity, Entefy will continue to support all of you, no matter the country of your birth.

Early in Entefy’s young history, our small team sat together and penned our very first blog post to share with the world “Why we exist.” And in the process of communicating our mission, we realized something deeply embedded in our company’s culture and the DNA of everyone whom we proudly call an Entefyer: that, when you strip it all away and drill down to the very heart of why we do what we do every day, it is to stand defiantly in the face of greed and fear.

At the time, we were talking primarily about business at large—about how the leaders in our industry continue to complicate communication, increase advertising, violate user privacy, and so on. However, with global events such as these, I am reminded that it is our responsibility as a company, and as individuals, to stand against greed and fear, no matter where it comes from. 

And so, while I will not pretend to agree with this executive order, I am hopeful that the current administration will recognize the overbroad consequences of this policy and will act swiftly in bringing it to a close. 

As a global community, we have always been better together. And as an American, I know we can do better. 

Alston


Brienne visits her alma mater to promote STEM and entrepreneurship

“She Started It” follows the journey of female founders of technology companies, offering a behind-the-scenes look at what it’s like launching and growing a business as a modern woman—documenting everything from raising capital to managing family dynamics. Entefy’s co-founder Brienne is one of the women spotlighted in the film. She recently participated in a panel discussion following a screening of the film at her alma mater, Santa Clara University.

The film’s director, Nora Poggi, led the panel, which included Brienne, in a lively discussion with the audience that delved into the challenges women face professionally, the importance of mentorship, tips for raising capital, and advice for anyone starting a company today.

“She Started It” is being screened at universities and other venues around the world as part of a larger effort to encourage more women to pursue Science, Technology, Engineering, and Math (STEM) education and entrepreneurship. 


Hurry up and read that text message

Entefy’s survey of 1,500 U.S. professionals found that respondents received on average 34 text messages per day. Other data shows that 90% of text messages are opened within 3 minutes and 99% of those messages are read. Which means that we’re interrupting what we’re doing dozens of times every day to read text messages. If you were looking for a definition for “digital distraction,” that may be it.

Now consider that there are more mobile devices than people on the planet. With this kind of accessibility, checking notifications and making yourself available round the clock has become the norm for many people. We’re becoming more and more aware that jumping in and out of apps negatively impacts productivity, efficiency, and even mindfulness. It can be difficult to maintain focus and mental balance in this hyper-connected world.  

Watch the video version of this enFact here.

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you.


Patients are about to see a new doctor: artificial intelligence

The World Health Organization has estimated that there is a global shortfall of approximately 4.3 million doctors and nurses, with poorer countries disproportionately impacted. In the U.S., these shortages are less acute; instead, the country struggles with ever-increasing health care costs, which often translate into limits on the time a patient is able to spend with a doctor. One study estimated that U.S. doctors spend on average just 13 to 16 minutes with each patient.

Developments in the use of artificial intelligence technologies in medical care are beginning to demonstrate AI’s potential to improve the quality of patient care. Around the globe, researchers are studying how AI might become a new tool in the caregiver tool kit. From diagnosis, to personalized medical advice, to insights into genetics, AI is emerging as a new factor in medical care. 

So against this backdrop of a global shortage in doctors and nurses, and cost-driven strains in patient care, let’s take a look at some of the ways AI systems are being evaluated for use in medical care.

Enhanced medical diagnosis

In August 2016, an artificially intelligent supercomputer did in 10 minutes what would have taken human doctors weeks to achieve. Presented with the genetic data of a 60-year-old Japanese woman suffering from cancer, the system analyzed thousands of gene mutations to determine that she had a rare type of leukemia.

The patient’s doctors had initially treated her for myeloid leukemia but observed that her recovery was unusually slow. They suspected that she had another form of the disease, but they knew it could take weeks for them to identify which of her thousands of genetic mutations were relevant to her illness. Since speed is essential in effectively treating leukemia, the doctors used an AI system to run the analysis instead. Within minutes, the system determined that the patient had a secondary leukemia caused by myelodysplastic syndromes. Her medical team adjusted her treatment plan, and her condition improved soon after.

Arinobu Tojo, a professor of molecular therapy at a hospital affiliated with the University of Tokyo’s Institute of Medical Science, said, “It might be an exaggeration to say AI saved her life, but it surely gave us the data we needed in an extremely speedy fashion.”

Elsewhere around the globe, scientists and doctors are making use of AI systems to tackle a long list of issues. Artificially intelligent computers are being designed to analyze gene mutations, make better use of scientific studies, and enhance doctors’ clinical knowledge beyond their first-hand experience. 

One emerging theme for AI’s use in medicine: attempts to accelerate the pace of research breakthroughs. AI-driven research is taking place in areas as diverse as heart disease, diabetic complications, and brain trauma. One AI system is being used to analyze rare immunological conditions in children, providing physicians with insights on how to treat and cure issues that offer few clues at present. Machine learning is enabling much of this work.

Machine learning in medical care  

The foundation for the use of artificial intelligence in medical care is machine learning, the process by which computers “study” huge datasets to discover insights. Physicians and machine learning scientists, for instance, supply a smart computer with archived medical photos to train the system to spot certain disease markers or mutations. The more data a machine-learning system has, the more adept it becomes at providing accurate results. The system learns by recognizing patterns and updating its analyses as it gains more information. 
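
To make that training-and-evaluation loop a bit more concrete, here is a toy sketch of the same idea. It uses synthetic data and a simple scikit-learn classifier as stand-ins for the far larger datasets and deep neural networks real medical imaging systems rely on; none of the numbers, features, or results refer to any actual clinical model:

```python
# A toy illustration of the "learn from labeled examples" idea described above.
# Real medical imaging systems use far larger datasets and deep neural networks;
# synthetic data and a simple scikit-learn classifier stand in here so the example
# runs anywhere and refers to no actual patients or clinical model.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "image" has already been reduced to 64 numeric features,
# labeled 1 if a disease marker is present and 0 if it is absent.
n_samples, n_features = 2_000, 64
X = rng.normal(size=(n_samples, n_features))
true_weights = rng.normal(size=n_features)
y = (X @ true_weights + rng.normal(scale=2.0, size=n_samples) > 0).astype(int)

# Hold out data the model never sees during training, mirroring how such
# systems are evaluated on separate image sets they weren't trained on.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "Training" is the pattern-finding step; more (and better-labeled) data
# generally makes the learned patterns more reliable.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

In practice, the features would be learned from the images themselves, and evaluation would use clinically meaningful measures such as sensitivity and specificity rather than plain accuracy.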

Researchers at Imperial College London are testing machine learning applications in treating traumatic brain injuries. When they fed their algorithm images of such injuries, the system identified brain lesions and learned to discern white and grey matter. That ability could provide researchers with valuable information about these life-altering and often fatal conditions. 

In another example of machine learning applications in medicine, scientists recently experimented with how machine learning might facilitate diagnoses of diabetic retinopathy, a condition that causes vision impairment and blindness. The team trained the system using roughly 128,000 retinal images that had been graded for signs of the disease. They then had the algorithm analyze 12,000 new images and graded its ability to recognize signs of disease. The results indicated that the system “matched or exceeded the performance of experts in identifying the condition and grading its severity.”

Other implementations of machine learning are focused on the human genome, where AI systems are processing massive amounts of genomic data. The key to solving the mysteries behind conditions such as autism spectrum disorder lie in human genes, but the human brain simply can’t comprehend all the nuances and complexities of the genome. “I can say for sure that winning a game of Go is actually quite easy compared to understanding human health, and extending life spans, and saving lives,” says Brendan Frey, the co-founder and CEO of an AI genetics company and a professor of engineering and medicine at the University of Toronto.

Machine learning could be a game-changer in medicine because, unlike humans, computers don’t get tired and have a virtually unlimited capacity for learning and memorization. They can process far more data far faster than even the most capable human physician. But that doesn’t mean that flesh-and-blood doctors are an endangered species.

Instead, AI is being designed to enhance diagnoses by providing doctors with data-driven insights derived from patients’ histories and conditions. Physicians can then use those reports to create treatment plans based on previously unavailable information. These systems can also provide vital insights in geographic areas where doctors, particularly specialists, are few and far between. Which, as the World Health Organization data reflects, is practically everywhere.

AI in the exam room 

The breakthroughs in diagnosis and treatment we’ve been talking about may save lives and alleviate suffering for patients who live with life-threatening and chronic illnesses. But AI’s impact may also be felt in people’s day-to-day health management and routine doctor’s visits. Perhaps widening those brief 13 to 16 minute windows of time that patients spend with doctors. 

One implementation of AI technologies in health care is taking place behind the scenes. Most of us have typed a list of symptoms into a web search at one point or another, and sorted through conflicting or alarming diagnostic information. Search companies are beginning to attach their platforms to AI systems with the goal of providing health information that is personally relevant. One study demonstrated that search histories can identify users who are at risk for pancreatic cancer and, potentially, other diseases. 

Of course, search engines are not replacements for real doctors. But it’s arguably better to see results vetted by experts from Harvard Medical School and the Mayo Clinic than to scour questionable community forums that offer little in the way of knowledge or reassurance. Search platforms may also be able to nudge people to make appointments sooner rather than later when their searches indicate that they may be suffering from an illness. 

Access to AI-powered medical databases could enable physicians to make more precise diagnoses because they would be drawing on a wealth of medical knowledge instead of thumbing through paper charts or trying to read through patient files during their already-short in-person visits. For instance, doctors at St. Jude Children’s Research Hospital in Memphis and Vanderbilt University Medical Center in Nashville currently receive pop-up notifications in patients’ digital files alerting them to potential contraindications with certain medicines.

The home-grown artificial intelligence system used at Vanderbilt University Medical Center predicts which patients may need specific drugs in the future. It can recommend that doctors run genetic tests on those who have a high likelihood of needing blood clot medications and other drugs, avoiding potential crises and saving time down the road. 

AI is also being designed to help with disease prevention and management. Information pulled from hospital databases, electronic records, in-home monitors, fitness trackers, and implanted devices could help health care providers predict which patients show high indicators for conditions such as congestive heart failure (CHF). Mount Sinai Hospital in New York is already exploring how AI can help track patients who have or may develop CHF so they can prioritize care and help people manage their conditions. 
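
As a rough illustration of how that kind of risk flagging could work in code, here is a hedged sketch that trains a model on entirely synthetic patient records and scores one hypothetical new patient. The features, thresholds, and outcome labels are invented for demonstration only:

```python
# A hedged sketch of how a model might flag patients for follow-up care.
# Every feature, number, and threshold here is synthetic and hypothetical;
# this illustrates the workflow, not Mount Sinai's or any hospital's system.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n_patients = 5_000

# Synthetic patient records: age, resting heart rate, systolic blood pressure,
# BMI, and number of prior hospital admissions.
X = np.column_stack([
    rng.normal(65, 12, n_patients),   # age
    rng.normal(75, 10, n_patients),   # resting heart rate
    rng.normal(130, 15, n_patients),  # systolic blood pressure
    rng.normal(28, 5, n_patients),    # BMI
    rng.poisson(1.0, n_patients),     # prior admissions
])

# Made-up outcome: older patients with more prior admissions are more often
# labeled high risk in this synthetic dataset.
risk = 0.04 * (X[:, 0] - 65) + 0.6 * X[:, 4] + rng.normal(0, 1, n_patients)
y = (risk > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Score a new (hypothetical) patient and flag them if the predicted risk is high.
new_patient = np.array([[78, 88, 145, 31, 2]])
probability = model.predict_proba(new_patient)[0, 1]
flag = "flag for care-team review" if probability > 0.5 else "routine follow-up"
print(f"estimated risk score: {probability:.2f} -> {flag}")
```

A real system would be trained on historical records with known outcomes and validated carefully before any care decision relied on it.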

Facilitating human connections 

Let’s add some context to the “13 to 16 minute” doctor visit stat we cited earlier. One of the primary factors limiting doctor-patient time is something we’re all probably familiar with: paperwork. A study by the American Medical Association followed a group of 57 physicians through 430 hours of patient care. Here’s what it found:

Physicians spent 27% of their time in their offices seeing patients and 49.2% of their time doing paperwork, which includes using the electronic health record (EHR) system. Even when the doctors were in the examination room with patients, they were spending only 52.9% of the time talking to or examining the patients and 37.0% doing […] paperwork.

For all of the potentially groundbreaking research we’ve described here—into diseases, genetics, diagnostic automation—the true test of AI’s impact in medical care might be whether it can make a dent in the paperwork problem. Medical records, regulatory documentation, insurance filings, you name it. Because in one pass, you free up time that doctors and nurses can devote to patient care; and perhaps even attract more people to the professions, addressing the global doctor shortage. 

AI-driven efforts are already underway to combat the administrative burden. Some health companies are using apps to cut down on the time nurses and doctors spend gathering patient information. Outsourcing triage and patient interviews to algorithms could cut down on health care costs, though it carries the risk of further reducing the time doctors spend with their patients. The goal would be to improve doctor-patient interactions by freeing physicians from laptop screens so they can focus on the people in front of them.

And that’s really where the true potential of AI in medicine lies. Rather than replace interpersonal connections and involvement, AI can reduce the burden on doctors and nurses so they can focus on the uniquely human elements of patient care. No one will complain if AI helps lower health insurance premiums or helps physicians find the right treatments faster. But people look to doctors and nurses for reassurance, comfort, and insight. Machines simply cannot replicate a human’s bedside manner. 

It’s not an all-or-nothing decision, either. While algorithms can generate comprehensive analyses of a patient’s history and suggest a potential diagnosis, doctors need to evaluate that information and apply their real-world experience to determine the course of care. It’s also worth noting that, at present, smart machines cannot explain the reasoning behind their suggested diagnoses, so doctors can’t know whether the process behind a recommendation is sound. They can use the information to supplement their own analyses or as a jumping-off point when working out a diagnosis, but the responsibility for patient care remains theirs.

Ultimately, AI has the potential to enhance the practice of medicine, from disease research to the logistics of routine care. But its greatest impact may be found in how it enhances and empowers human practitioners. 

Additional article contributors: Mehdi Ghafourifar and Brian Walker.


Entefy Files 13 New Patents in Artificial Intelligence, Security, and Data Privacy

PALO ALTO, Calif., Jan. 24, 2017 — Entefy Inc. announced today the filing of 13 new patent applications to further drive the technology behind its AI-powered universal communicator. These new patent applications bring the company’s total filed patents to 31, representing widespread innovation in its core product. 

“Invention has always sat at the heart of what we do at Entefy. I’m proud of the team for embodying this spirit and continuing to make progress in pioneering new technologies,” said Alston Ghafourifar, Entefy’s Co-founder and CEO. “Today’s news represents another step forward.”

Entefy’s universal communicator is a smart platform built on advanced computer vision and natural language processing technologies. It is designed to help people better manage their digital ecosystems, the combination of computing devices, software applications, and online services people use at home and at work. 

The company recently released the results of a survey of 1,500 U.S. professionals that showed nearly 80% of respondents regularly use 3 or more different digital devices; respondents also reported using an average of 6.9 separate communication applications (11.2 including social media) to send 224 daily emails, texts, and IMs. In the face of all this complexity, respondents expressed overwhelming interest in simplification of their fragmented digital ecosystems. 

“Entefy is directly addressing some major pain points in people’s digital lives, like communication, search, security, and data privacy,” said Entefy Co-founder Brienne Ghafourifar. “Today’s announcement sums up just how much progress we’ve made towards this goal.” 

ABOUT ENTEFY

Entefy is building the first universal communicator—a smart platform that uses artificial intelligence to help you seamlessly interact with the people, services, and smart things in your life—all from a single application that runs beautifully on all your favorite devices. Our core technology combines digital communication with advanced computer vision and natural language processing to create a lightning fast and secure digital experience for people everywhere. 


Skyrocketing video consumption

Every day, YouTube users view 10 to 20 billion videos and the average adult watches 76 minutes of digital video. By 2019, it’s projected that video will account for 80% of consumer Internet traffic, which is one of the reasons mobile data traffic has grown 400,000,000x since 2000.

What’s behind the continuing rise of video? For one thing, reading and watching utilize different brain processes. Reading requires active attention and gives the brain a workout because it takes more cognitive effort. Video is passive, easier to consume, and processed by the brain 60,000 times faster than text. Video might win the popular vote, but the many positive effects of reading shouldn’t be dismissed.

Watch the video version of this enFact here.

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you.

Entefy’s new look

Since the idea for the company sprang to life on a napkin, we’ve been hard at work developing our ideas and our technology. Today, as Entefy enters its next phase of growth, we’re debuting our new logo, visual style, and exclusive content, all of which can be seen on our new website.

Much has happened since the very early days of Entefy. Big technological breakthroughs. Key product milestones. National user studies. User experience testing. And a long list of amazing individuals—team members, investors, partners, and advisors alike—joining us in our mission to seamlessly connect people with everyone and everything in their lives. All of this has positioned Entefy for a breakout year in 2017.

As you’re looking through the new site, you’ll notice how we’ve done away with boxes, borders, and boundaries of all kinds—we want to emphasize the limitless potential and reach of this amazing, interconnected “smart world” we live in. Seamless interaction has always been at the heart of what we do, and our new look reflects this.

If you detect hand-crafted precision in our new visual style, you have a good eye. Born from a pixel-by-pixel design process that included tireless iterations, our redesigned logo and visual theme represent an entirely new visual language for Entefy. One that embraces simplicity, efficiency, and power in this modern, hyper-connected world. You’ll see this new look on our website, on our social channels, on our team t-shirts—basically, everywhere you meet Entefy in-person or online. 

On behalf of all Entefyers, we thank you for taking the time to visit our site and learn more about Entefy and the future of AI-powered digital interaction.