Are there more digital messages or stars in the universe?

There’s an old brain teaser that asks: Are there more stars in the sky or grains of sand on all the beaches? With all the messages flying around daily, we thought we’d recast that question for the digital age: Are there more digital messages or stars in a galaxy? 

Every year, more than 100 trillion messages are sent and received worldwide. That’s a huge number. So big that to make a comparison, we need to get galactic. Intergalactic, actually. Since a typical galaxy contains about 100 billion stars, 100 trillion messages equate to the number of stars in 1,000 galaxies. 
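For the curious, the arithmetic behind that comparison is simple enough to check. Here's a quick Python sketch using the article's round figures:

```python
# Round figures from the text above (order-of-magnitude estimates).
messages_per_year = 100e12  # ~100 trillion messages sent and received annually
stars_per_galaxy = 100e9    # ~100 billion stars in a typical galaxy

galaxies = messages_per_year / stars_per_galaxy
print(f"{galaxies:,.0f} galaxies' worth of stars")  # prints "1,000 galaxies' worth of stars"
```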

And that’s the number of messages transmitted every year. Like the physical universe, this number is expanding as more people come online worldwide. At last count, “more than half the world’s population does not use the Internet.” 

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you.

Infographic

Presidential technology firsts

There is a long history of U.S. Presidents making use of new communication technologies to campaign, advocate, and connect. Here are a few Presidential firsts:

Abraham Lincoln was the first to make widespread use of the telegraph. He had to leave the White House and walk next door to the War Department to send a message.

William McKinley was the first President to be captured on film, in 1896, in footage that today we would call a campaign ad.

Warren Harding was the first President to give a speech over radio, in 1922. 

Franklin Roosevelt was the first President to appear on television, in a broadcast from the 1939 World’s Fair.

Bill Clinton was the first President to launch a website in 1994. The site is archived by the National Archives. He was also the first President to send an email, though he later stated that he sent a total of two messages while in office.

Barack Obama was the first President to tweet, though not as @POTUS. During a 2010 visit to the headquarters of the Red Cross, a staffer asked the President to press “Update” on a tweet the organization had written about his visit.

Infographic

The hazards of confirmation bias in life and work

In 1954, the psychologist Leon Festinger infiltrated a cult led by a woman whom he dubbed Marian Keech. Keech believed she received messages from an alien race telling her that on a certain date a flying saucer would appear to collect her and her followers, at which point a catastrophic flood would decimate the remaining population of the earth.

Events didn’t exactly work out that way. But Festinger hadn’t joined the cult to test the validity of Keech’s claims. Instead, he wanted to observe how the cult would react to the discovery that their prophecy had failed. Would they admit the error and change their beliefs? 

Festinger is now a mainstay in psychology textbooks thanks to his theories on cognitive dissonance, which describe a state of mind in which a person holds two or more conflicting thoughts, ideas, or opinions. Cognitive dissonance is by its nature an uncomfortable condition, one the brain wants to resolve quickly by reinterpreting one or more of the conflicting thoughts, letting it return to a stable, coherent state.

But of course, reinterpreting facts or beliefs on the fly can be short-sighted. Take for instance the cult that Festinger studied—when the alien apocalypse didn’t materialize, most cult members chose to simply reinterpret what did happen (nothing) as a sign that the aliens had, in fact, saved humanity in response to the cult’s efforts and faithfulness. They opted to restore mental coherency at the expense of truth. 

But don’t let the outlandishness of an alien cult fool you: the bias its members exhibited—interpreting the outside world in terms of pre-existing expectations—is something we all do. In fact, bias can be beneficial, a labor-saving shortcut that lightens the cognitive load on our brain. But bias is also responsible for limiting our understanding of things that are new or different. We’re not immune to it at home or at work; it can infect how we discover and interpret information, and it shapes our behavior in workplace settings.

Fortunately, having a greater awareness of confirmation bias gives us tools to limit its shortcomings at work and in life. Let’s start by looking at how confirmation bias works for and against us, explore some of the ways it can surround us in ideological bubbles, and then discuss how we can burst our own personal bias bubbles.

Unbelievable

Beliefs we hold dear feel as though they are an inseparable part of us. But this can lead to problems when new information seems to contradict them, particularly when we’re not prepared to readjust. When we reinterpret an experience to conform with our prior beliefs, we are feeding confirmation bias—we place greater emphasis on information that supports our beliefs while discrediting or ignoring conflicting information. 

A meta-analysis of 54 social psychology experiments concluded that people have stronger memories of events that conflict with their expectations, yet maintain stronger preferences for things that support them—so much so that we’ll devote 36% more reading time to attitude-consistent material. When we do encounter opposing information, not only are we likely to interpret it in a corroborating way, we sometimes go so far as to use the contradiction to strengthen already existing beliefs, something researchers call the ‘backfire effect.’

Confirmation bias doesn’t only influence how we interpret new information; it also helps dictate what we go looking for in the first place, and what we recall from our memory banks in response to certain questions and decisions.

Take for instance the question “Are you happy with your social life?” It is, on the face of it, basically the same as asking “Are you unhappy with your social life?” The state of someone’s happiness should not be swayed by the words used to ask about it. Yet it often is: those asked if they’re happy will call forth memories of joy, while those asked if they’re unhappy will remember moments of sorrow.

The effects of this associative bias can appear out of nowhere, in the absence of conscious thought. If you’re about to purchase a particular model of new car, you’ll start noticing that car all over the place. Likewise, you might be considering starting a family and begin to see the world around you filled with children. Or go through a breakup and see everyone else traveling in pairs. 

If we fail to realize that this is simply a byproduct of our brains seeking efficient ways of directing our attention, we end up erroneously assuming that that car is really popular or that there are more kids in the area than there in fact are. This is a classic means of creating and maintaining stereotypes in social interactions, as we tend to see what we expect to see and neglect counterevidence.

Moreover, research suggests that when we’re forming impressions of a person’s personality, we place greater importance on information learned earlier than on what we learn later. When asked to form an opinion about someone who is “intelligent, industrious, impulsive, critical, stubborn, envious,” subjects rate the person more positively than when the same words are presented in the reverse order. Like those pesky first impressions that linger, the information we learn first becomes the baseline against which new evidence is compared—and while we may adjust this baseline, each new piece of information corrects it to a smaller degree. “Intelligent” makes for a better first impression than “envious,” and while the subsequent adjectives provide some nuance, they don’t overwrite that initial judgment.

Bias is an unfortunate side effect of the brain’s need to reduce strain on its processing capabilities. By referencing past experiences and memories when finding and interpreting new information, we avoid having to start from scratch and can more effectively filter our environment for what’s relevant. But when we let bias run unchecked, we end up with some very unfortunate side effects. Especially when it comes to online news and information.

Information bubbles

Confirmation bias shows up in our news feeds and web searches because we tend to network with like-minded individuals and give more attention to belief-confirming information sources.

While search engines have not been shown to display a heavy bias in their results, the language we use in our searches can implicitly support an assumption or belief. If you believe in astrology and search for the Gemini horoscope, you’re going to find what you’re looking for without seeing (or paying much attention to) information that questions astrology itself. In a subtler sense, a simple comparison of a search for “how confirmation bias affects learning” with one for “does confirmation bias affect learning?” returns front-page results that differ markedly.

When it comes to our social media feeds, the ability to personalize the people and brands we follow allows us to form networks that expose us to confirmation bias. Further complicating the issue are recommendation engines, which are designed to show us what the engines think we’ll like and nothing else, minimizing our exposure to information that might provide a well-rounded view or alternative perspective.

What happens if we’re only ever exposed to people and ideas that support our existing beliefs? Those beliefs are reinforced and strengthened by an affirmation feedback loop—each news item, blog post, and status update further demonstrates what we think we already know. And when opposing ideas do manage to sneak in, people tend to quickly label those ideas “exceptions that prove the rule” given that it’s much easier to reinterpret a fact than change a fundamental belief. 

Then again, how can anyone be expected to traverse complex informational landscapes free from this bias? The brain can only process so much information, after all; it takes time and effort to internalize new ideas and concepts. Plus, the Internet is filled with fake news to such an extent that it is seldom possible to check the reliability of every source we encounter. And so, even with good intentions, our bubbles grow.

Bursting the bubble

Perhaps the most important and, thankfully, simplest way to battle confirmation bias is to acknowledge that there are many sides to every story. While we may hold a strong opinion, it is but one perspective, of which there may be many more, each with its valid points and arguments. When we allow ourselves a small measure of doubt, we keep ourselves from drawing conclusions too quickly.

A more time-consuming defense against confirmation bias is to act as though you’ll need to explain yourself to someone. Researchers have found that people were more likely to critically examine information and to learn more about ideas if they believed they would need to explain them to another person who was well-informed, interested in the truth, and whose views they were not already aware of. Call it accounting for the effects of accountability.

To go even further, get in the habit of playing devil’s advocate—that is, take the opposite view and try to argue from there. It is not always easy to see things from the other side, but then, if you cannot take an opposing perspective seriously, chances are you don’t really understand your own views, and are missing important elements required for a full understanding of the topic.

Getting to the bottom of an issue is an endeavor that requires time and effort, coupled with robust reasoning skills, and as such it is not possible for each of us to get to the bottom of every important topic. But this does not mean we are doomed to suffer at the hands of our biases. If we can simply admit to ourselves that we are missing important elements of the story—that our view is incomplete—we will be more likely to open ourselves to conflicting ideas, and less likely to cling to inconsistent viewpoints.

When it comes to the beliefs we hold dear, we may benefit when we take the time not only to find support for our views, but to discover the contradictions and counterarguments. Time to put ourselves in the other person’s shoes. Time to imagine the outsider’s perspective. Time to resolve cognitive dissonances. And time to purposefully burst our own bubbles of bias. 

Team Entefy

Entefyers on Entefy

Entefy is growing—team, capital, product, and inventions. And we’re seeking talented, amazing people to join our venture. You can check out our open listings here.

Here’s the thing with job listings: they can’t tell you everything. Even detailed job descriptions paint a two-dimensional picture, at best. But when you’re considering going to work for a company you have three-dimensional questions. What’s it really like to work there? What’s a typical day like? What sort of support will you get from the team?

To give you answers to questions like these, we went straight to the source: the Entefyers themselves. Today, we’re adding four short videos to our Careers page. The videos feature interviews with some of the Entefyers who work on different teams and in different roles. Check them all out to get an in-depth look at life at Entefy.

Entefy is like an extended family, and many of us joined Entefy through referrals. So if you know someone who might be interested in tackling one of the biggest challenges in tech, please share a link or send them our way.

Heart

Valentine’s Day and the meaning of flowers

There are 6,909 ways to say “I love you” around the world, a sentiment that gets shared via an estimated 1 billion Valentine’s Day cards sent worldwide each year. In the U.S., romantics spend around $98 on spouses and significant others. In some cases, those significant others are our pets, who receive on average $26 worth of gifts.

Red roses are by far the most popular flower gift, ahead of pink roses and mixed bouquets, the second and third most popular choices. But be careful before you select roses for your special someone! According to the Farmers’ Almanac, the color of a rose carries meaning: red roses represent love and desire, yellow friendship, and pink happiness.

If you’re on the receiving end of a bouquet made up of yellow and coral roses, take note: coral represents “sympathy” and yellow “friendship.” Which might not be the language of love you were hoping for.


Hand holding a phone

What I learned on my notification vacation

A friend of Entefy conducted a weeklong experiment in curbing digital distraction. Here’s what she learned. 

Recently I went through a period when I just wasn’t feeling myself. I didn’t have my usual energy. Ditto my creativity. So I started reasoning through why that was. Perhaps I wasn’t getting enough sleep? No, that wasn’t it. What was I doing at night? Well, I was on social. I was reading news feeds. I was messaging. I was… all over the place, digitally speaking. And I was all over the place pretty much every evening before bed. In fact, I was getting into bed with my smartphone and tapping away until I fell asleep. Could this have something to do with it? I vowed to find out.

But first, some back story. My smartphone is the first and last part of my day, every day. I check headlines and social before getting out of bed in the morning. I bring my phone on nature walks to track my steps and take photos of my dog along the way. Basically, my phone is attached to my hip all day and then is the last thing I look at before falling asleep. And I am definitely among the 71% of people who sleep with their smartphones within reach.  

The truth is, I like to be connected. So I make myself available to everyone and every app. I have grown accustomed to being available and responsive to messages pretty much 24/7. Until it dawned on me. For someone who loves “me” time for clarity, planning, and meditation, I was giving away precious time and attention… to my phone. 

Which brings us back to the experiment. What would happen if I simplified my nighttime digital habits? The experiment was simple: for one week I kept my phone on silent and put it away at least an hour before bed. Here’s what happened…

Seven days, seven lessons

1. I took action on goals that had been on the backburner. I’ve long wanted to take new exercise classes, schedule more events with friends, get a better handle on my personal finances, and meditate more. But here’s how it would normally go: “I want to take a yoga class tomorrow.” So I’d check three schedules. But then thinking about yoga would get me thinking about my yoga instructor friend in L.A. and so I’d text her. Which would make me curious how much a ticket to L.A. might cost right now. But what’s my budget for a trip like that? I’d check my bank. And before I knew it, 30 minutes had passed, I hadn’t signed up for a class, and besides now it was too late to wake up early for yoga anyways.

The first thing I noticed about not having a device nearby to deter me was that I had the time and mental clarity to add goal setting to my daily routine. The big change was that instead of passively searching for ideas or solutions generated by other people, I got clear on what I wanted and created a personal action plan. Taking the time to do so each evening led to my days feeling far more productive and complete. 

2. I finally started that book on my nightstand. Without digital distractions, I finally started the book that had been gathering dust on my nightstand. Which isn’t an incidental detail. Every night I looked at that book and thought, “Tonight’s the night.” But instead of reading, what I’d find myself doing was scrolling through my Instagram feed. 

Two things happened once I started reading the book. First, even though I had picked it up purely for pleasure, I found information as I was reading that was relevant to my work. Or would read passages that had nothing to do with work, but still find myself creating new ideas for work in the background. And what was cool was how clearly I was able to recall the information the next day. Nighttime reading delivered a great deal of daytime clarity.

The second effect was that reading a physical book at night was therapeutic in itself. I was deeply engrossed in the book, with no outside distractions to impede my focus. No thoughts of the 30 things I needed to do tomorrow. No quick check-ins. And even better, I slept great reading a paper book every night. I awoke rested and energized.

3. One night, my device wasn’t on silent. I heard my phone buzz and I snapped out of my zone. The feeling was a little like walking out of a silent room into a noisy party. It turns out that recovering from the cost of interruptions can take a few minutes, even half an hour, and I experienced this right away. My clear mind wanted to go straight back into the notification rabbit-hole. What was interesting was that after just a few nights without notifications, this one buzz actually created a feeling of anxiety—who could that be!? 

4. I was mindful and grateful in the moment. Digital-free evenings gave me time to appreciate the fragrance of my candle, my dog cuddled next to me, even the weather outside. This made me feel calm and relaxed, which reminded me of all the benefits of mindfulness. Home in bed is a safe place where you stop worrying about things outside of your control—that is, when you’re not being constantly reminded about them. The mind can wander, and that’s healthy, but it isn’t prompted to wander the way it is when the notifications are rolling in. There is a time and place for everything—most 11pm emails aren’t something you can act on until the morning anyway.

5. I came up with better ideas and didn’t forget them in the morning. Sometimes I would come up with ideas right before bed, but I didn’t have a lot of success with actually capturing them by writing them down. Because when I was multitasking on my phone, the idea would appear then disappear the moment something on the screen caught my eye. 

With my journal in hand and phone put away, I found inspiration within myself and took time to write ideas by hand. When I read them in the morning, they stayed with me all day. Whether or not Archimedes actually shouted “Eureka” in the bath, my experience supports the core idea of the story: relaxation creates the focus that prepares the mind for discovery and invention.

6. I didn’t miss my apps, but I did miss my people. There are a few people I like making time to talk to before bed. I also want to be reachable if a team member or someone I know needs something really important. The one part I missed about not having my phone near before bed was not being able to connect more deeply with people. One of the best parts about technology is being able to maintain connections from a distance. I didn’t want a million social media notifications but I did want to be connected to loved ones. 

7. I slept like a baby. There is research that the blue light emitted from devices “negatively affects health and sleep patterns.” Putting my phone away sent my mind a signal that I was shutting down for the day. It’s amazing how fast I fell asleep during my digital hiatus. 

So did I change any habits?

I learned a lot about this phenomenon of information overload just by unplugging for a little while every day. Now, my phone is still with me the majority of the day, but my relationship with it has changed. I’ve pretty much turned off all app notifications permanently. Instead, I mindfully open an app only when I decide I want to, not when it calls out to be checked. I haven’t lost any of my availability to my colleagues, family, and friends; I’m reachable when I’m needed. And overall, I’m more efficient with my time throughout the day. Creativity and energy are off the charts.

As with many things that we grow accustomed to, we don’t realize exactly what our behavior is like until we break the habit, even temporarily. Notifications affected my focus, clarity, even creativity, until I took it upon myself to take control of my digital life. My experiment was one amazing vacation, and I didn’t post a single pic of it. 

Undistractedly,

Entefyer Meghan

Privacy

Seven new assaults on data privacy

The last time we shared a roundup of data trackers, we mentioned an emerging market for monetizing your activity on mobile devices. Something called Telecom Data as a Service, in which service providers collect and sell customer data to third parties, including advertisers. This data “is seen as potentially more valuable than some other consumer data because it directly connects mobile phone interactions to individuals through actual billing information.”

Over the past few years there have been developments involving a similar category of data collection at one of the largest Internet service providers (ISPs) in the U.S., AT&T. And if you’re someone concerned about protecting your data privacy, you’ll be glad to learn that this story has a happy ending.

In late 2013, AT&T announced the rollout of its “U-Verse with GigaPower” high-speed Internet service, with an important footnote added to the bottom of the announcement: 

“…AT&T may use Web browsing information, like the search terms entered and the Web pages visited, to provide customers with relevant offers and ads tailored to their interests.”

Basically, this type of user data could be converted into an advertising revenue stream. To be clear, the company stated that it did not intend to sell the data to third parties, only create tailored ads based on a customer’s Internet use. But unlike a social media service, for instance, that tracks a user’s activity within its service in order to customize ads, an ISP like AT&T can track the entirety of a user’s Internet activity.  

There was an option to opt out of AT&T’s data collection scheme: buy a higher-priced data plan. Back then, AT&T charged “at least another $29 a month ($99 total) to provide standalone Internet service that doesn’t perform this extra scanning of your Web traffic.”  

But then something unexpected happened. AT&T announced that it would end its GigaPower data tracking program. The company attributed the change to new privacy rules being written at the Federal Communications Commission, the Federal agency that regulates ISPs. It was an important development for anyone interested in asserting their right to cyber privacy. 

Protecting your personal data is an ongoing effort that starts with awareness. So with that in mind, here are seven recent developments that you may have missed about privacy and security in popular apps and services:

1. Evernote attempted to update its privacy policies to make it clear that its employees could read your notes, without the option to opt out. But users protested and the company reversed the changes: ‘We announced a change to our privacy policy that made it seem like we didn’t care about the privacy of our customers or their notes. This was not our intent, and our customers let us know that we messed up, in no uncertain terms. We heard them, and we’re taking immediate action to fix it.’ 

2. A Canadian consumer data privacy advocacy group found that many popular fitness tracking devices transmit your data in ways that make the devices vulnerable to interception or tampering. And the devices can potentially be used to track your movements and profile you: “We discovered severe security vulnerabilities, incredibly sensitive geolocation transmissions that serve no apparent benefit to the end user, and that were not available to users for access and correction, and unclear policies leaving the door open for the sale of users’ fitness data to third parties without express consent of the users.”

3. A study published in the Journal of the American Medical Association looked at a large collection of diabetes apps on Android and concluded: “Most of the 211 apps (81%) did not have privacy policies. Of the 41 apps (19%) with privacy policies, not all of the provisions actually protected privacy (e.g., 80.5% collected user data and 48.8% shared data). Only 4 policies said they would ask users for permission to share data… Patients might mistakenly believe that health information entered into an app is private (particularly if the app has a privacy policy), but that generally is not the case.”

4. If you’re worried about protecting your activity on Facebook, it’s worth recalling that the social network makes it easy for its advertisers and partners to track you freely: “Most people forget that when they download an app or sign into a website with Facebook, they are giving those companies a look into their Facebook profile. Your profile can often include your email address and phone number as well as your work history and current location.”

5. A data security company found that 1.3 million Android phones have been hacked: “Once again, hackers are showing why you should never, ever download apps outside official app stores. Hackers have gained access to more than 1.3 million Google accounts — emails, photos, documents and more — by infecting Android phones through illegitimate apps.”

6. Meitu, a popular photo-editing app that requires a long list of permissions, has other potential security vulnerabilities: “[Security experts] found numerous serious privacy flaws and avenues for potential leaks of personal data. One eagle-eyed researcher found the Android version of the app asked users for dozens of intrusive permissions, and sends the data to multiple servers in China—including a user’s calendar, contacts, SMS messages, external storage, and IMEI number.”

7. WhatsApp was in the news after a disputed report about a security vulnerability; what emerged from the discussion was awareness that the app’s privacy policies are not clearly defined: “One of the biggest concerns around WhatsApp from a privacy perspective is its opacity, as frequently noted in the Electronic Frontier Foundation’s assessments of which tech providers ‘have your back.’ Whilst [WhatsApp] owner Facebook does have a transparency report, released twice a year, it doesn’t drill down into how many data requests relate to WhatsApp, let alone what kinds of information it can hand over.”

Digital trackers can be unnerving when every day seems to bring headlines of some massive data security hack or another company accused of misusing customer data. The first line of defense is keeping informed. 

Infographic

Email traffic reaches astronomical levels

A technology research firm estimated that “in 2015, the number of emails sent and received per day [totaled] over 205 billion.” Growing at 3% annually over the next four years, email volume is expected to reach over 246 billion by the end of 2019. That’s 70 emails per day for each of the estimated 3.5 billion humans currently online. As large as it is, that number might seem low to you – in Entefy’s research, we found that U.S. professionals send and receive 110 emails per day on average.

To put this number into perspective, let’s compare emails to miles: say 1 email = 1 mile. Pluto is 3.67 billion miles from the sun, so 246 billion miles is like traveling from Pluto to the sun 67 times. Every day. At that rate, it’s no wonder that people spend hours per day sorting through email, some logging more than 6 hours of email activity daily. Talk about overload.
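For readers who like to check the math, here’s the same arithmetic as a quick Python sketch (the figures are the estimates quoted above):

```python
emails_per_day = 246e9        # projected daily email volume by end of 2019
people_online = 3.5e9         # estimated humans currently online
pluto_to_sun_miles = 3.67e9   # Pluto's distance from the sun, in miles

# Emails per person per day, and Pluto-to-sun trips if 1 email = 1 mile.
print(round(emails_per_day / people_online))       # ~70 emails per person
print(round(emails_per_day / pluto_to_sun_miles))  # ~67 trips
```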


Football

What would the Super Bowl look like with AI referees and other smart technologies?

It’s late in the 4th quarter of Super Bowl LI. The Falcons are facing 3rd and goal on the Patriots’ 5 yard line. Matt Ryan takes the snap and hands off to Devonta Freeman, already running hard at the goal line. Then, with a crunch audible to the topmost rows of NRG Stadium, Freeman is brought down by Dont’a Hightower right at the goal line. Touchdown?! Silence falls as all eyes turn… not to the referees on the sidelines (there aren’t any) but to giant LCD panels behind the end zones. The screens remain black for several long moments until “TOUCHDOWN” lights up. A roar erupts from about half of the stadium.

Where were the referees in this fictional account of the upcoming Super Bowl LI? They’ve been automated by artificial intelligence systems hooked up to networks of sensors worn by the players and high-speed cameras strategically positioned throughout the stadium. 

Does all of this sound speculative? It is, but not as much as you might think when you take a look around the world of professional sports. “Precursor” technologies that provide the sensory input data for yet-to-be-invented AI algorithms are already in use. The gap between today’s smart sensors and tomorrow’s fully automated officiating is closing fast.

We won’t see AI refs in Super Bowl LI, but the question of whether there’s room for automated officials in the NFL and other professional sports is quickly becoming a matter of policy, not technology.

Automated officiating is born one technology at a time

Ask any soccer fan about the Hand of God goal, and you’ll hear not a lecture on divine intervention, but a passionate recounting of one of the biggest referee mistakes in sports history. During the 1986 World Cup quarterfinals, Argentinian midfielder Diego Maradona scored a goal by swatting the ball into the net by hand, a clear no-no in soccer. The referees all missed it, and the goal stood despite the handball, which was described by Bleacher Report as “one of the most egregious” mistakes in World Cup history. Because of this error, Argentina ended up winning 2-1 over England.

Refs are only human, and we can’t expect them to be right all of the time. Some calls, such as the Hand of God, occur simply because refs can’t watch every single detail of every single play. Mistakes are inevitable. But the days of bad calls may soon be at an end. Goal-line technology, wearables, nanosensors, and even artificial intelligence are being adopted to help reduce referee errors and level the playing field. 

Do referees have the toughest jobs in sports? 

The Hand of God goal wasn’t the first referee mistake and it certainly won’t be the last. Questionable calls have landed refs in the crosshairs of irate players and fans since humans first began competing over athletic prowess. The only difference between then and now is that bad calls are recorded and preserved online forever. Everything from simple oversights to politically charged decisions can be witnessed and dissected by every sports fan with a smartphone and a data plan.  

Consider the 1972 Olympics, when officials nearly reignited the Cold War by handing the gold to the Soviet Union’s men’s basketball team. The U.S. team had remained undefeated throughout the Olympics and as time wound down on the gold medal match, looked like they would remain so. But officials added time to the clock during the last seconds of the matchup, a decision ESPN dubs one of the worst calls in sports history. The Soviets eked out a one-point lead in those final moments, unseating the American champions. 

Although the world narrowly avoided political disaster then, referees have continued to confound players and enrage fans with mystifying calls. But what if officials—at the next World Cup, Olympics, or Super Bowl—had access to real-time data on players’ movements and could make faster, more accurate decisions? By reducing human error, there would be fewer opportunities for bad judgment and confusion to alter the outcomes. This would benefit not only the players, but the refs as well. That’s a good thing, right? Well, it’s not that simple. 

The era of goal-line technology 

Referees are just as much a part of the game as players and coaches, facing their own unique brand of pressure every time they set foot on a court or field. The decisions they make determine the course of high-stakes matchups, and coaches, players, and spectators alike don’t hesitate to let them know when they disagree with their calls. That’s where advances in tracking technology prove useful. 

In football or soccer, for instance, goal-line technology could become referees’ first line of defense against disgruntled players and fans who dispute their calls. Goal-line technology determines the exact moment when a ball crosses the threshold to count as a score. These systems use high-speed cameras or microchips embedded in the ball to track its movement, and referees can use the real-time data to confirm and support their judgments. 

The NFL has yet to fully adopt goal-line technology, but FIFA implemented it for the 2014 World Cup, and English Premier League clubs have adopted it as well. One Premier League manager spoke in favor of the system, saying it would prevent “gross injustices” from occurring in the sport. Referees can access goal data within a second, mitigating lengthy game delays. Rather than relying on their own observations or debating with their colleagues, officials have instant proof of whether a goal should stand. 

Such technology would help referees in other sports as well, allowing them to defend their calls when they’re accused of bias. Case in point: the Celtics-Lakers NBA Finals matchups of the 1980s, when observers noted that many of the refs’ calls favored the Boston Celtics over the Los Angeles Lakers. Emotions flare when championships are on the line, and hard data can help cooler heads prevail. 

Wearables and AI technology are game-changing—literally

Bad calls make great headlines, but the occasional bad call doesn’t always indicate a trend. When the NBA began reviewing officials’ calls in 2015, it found that referees were correct 86% of the time in the crucial final minutes of games. Rod Thorn, then-head of the NBA’s basketball operations, said the referee reviews not only increased transparency, but added “a humanity factor” and proved that “the vast majority of the calls are right.” 

A little humanity can go a long way, especially in the fraught world of sports officiating. Referees often find themselves the targets of insults and vitriol, particularly from fans. There are even online forums where fans go to brainstorm creative barbs to unleash on officials during games. 

Technology is giving refs better cover. Professional leagues are increasingly equipping their athletes with wearable devices that send constant updates to coaches and referees, who use the data to assess performance, spot signs of injury, and analyze goals and plays. 

In high-profile events, such as the 2016 Rio Olympics, data was often streamed to television broadcasters so spectators could understand what was happening in real time. Former Olympian Barbara Kendall has said that live stats make sporting events more interactive for fans watching at home. 

But wearables are also invaluable for accurate judging and scoring. In fencing, for instance, sensors indicate the precise timing and location of athletes’ hits. In several sports, camera feeds record exactly what referees see, reducing controversy around their decisions. It’s hard to dispute an official’s logic when you’re looking at the same numbers and visuals and can see why they ruled the way they did. 

Cameras may also provide a buffer against human error, a real concern for referees. In one memorable instance, officials accidentally allowed the University of Colorado Boulder football team five downs, giving it a game-defining advantage over the University of Missouri. The error occurred in 1990, but it was so widely known that it has its own Wikipedia page. Widespread use of on-field cameras could help referees catch such oversights, as would automated tracking of downs, yards gained and lost, and other critical aspects of the game. 

Artificial intelligence is also already here, helping coaches call plays during games. As one writer put it: “Moore’s Law predicts that computational power doubles roughly every two years, so by Super Bowl 100, in 2066, computers should be several million times faster than today. Imagine a robot Bill Belichick flicking through a digital playbook of trillions of moves during the 40-second gap between plays.” 
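That projection is easy to sanity-check: Super Bowl LI falls in 2017, Super Bowl 100 would fall in 2066, and a doubling every two years compounds quickly. A minimal sketch:

```python
# Sanity check of the Moore's-Law projection quoted above:
# computing power doubling every two years from 2017 to 2066.
years = 2066 - 2017          # Super Bowl LI to Super Bowl 100
doublings = years / 2        # 24.5 doublings
speedup = 2 ** doublings
print(f"~{speedup / 1e6:.0f} million times faster")  # ~24 million
```

So “several million times faster” is, if anything, an understatement of the compounding.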

Immersion and empathy through technology 

Wearables and smart technology are transforming sports like football, soccer, and basketball in unprecedented ways. Athletes and coaches are gaining a deeper understanding than ever before of plays, strategies, and players’ abilities. If coaches can monitor athletes’ vitals and performance, they can detect when someone is injured or at risk of injury. Players can then seek care in time to prolong their careers instead of being sidelined by an unforeseen break or trauma. 

Fans are getting in on the action, too. Jerseys equipped with smart sensors give them a feel for what it’s like to be on the field with their favorite players. This smart apparel uses haptic feedback to transmit football plays to viewers as they’re watching them happen.  

Even those who don’t slip into a smart jersey can get up close and personal through wearables. Devices such as Ref Cams and GoPros allow viewers to feel as though they’re on the court or field. The WNBA introduced Ref Cams in 2013, and Fox Sports put GoPros in referees’ hats for its coverage of the December 2016 Big Ten Championship Game. 

Although refs and players don’t always see eye to eye, they’re likely to agree that facial recognition technology will change sporting environments for the better. Stadiums in Australia are exploring the use of such systems to keep known troublemakers out of their venues. People known to start fights and antagonize refs and players may soon be stopped at the gate before they have a chance to rile up the crowd and distract officials, players, and fans who are trying to enjoy the game. 

Wearables and other smart technologies will help officials improve their accuracy and do their jobs more effectively. But the greatest benefit may be that when fans and coaches get a ref’s-eye view of the game, they become a little less hostile and a lot more empathetic.  

Will AI replace refs?

We’ve been looking at how new technologies are having an impact in professional sports around the world. These changes are, for the most part, evolutionary: players, coaches, and officials benefit but the games remain largely the same. As long as the chips, dips, and buffalo wings don’t run out, Super Bowl LI won’t feel much different.

When we start talking about artificial intelligence in sports, we enter into a completely new realm. Because when you link the sensors and cameras we’ve been talking about to AI systems, you have the recipe for fully automated officiating. 

As we saw with soccer, on-field tracking systems can detect handballs, identify penalties, and evaluate offside calls. These are working systems already in use; the capabilities of autonomous AI systems will only grow from here. Which is why one research study estimated that referees and umpires face a 98% chance of being replaced by automation.

Proponents of automated officiating say that AI could reduce corruption and more accurately enforce rules, and it seems likely that the technology will play an increasingly prominent role in athletics. But the transition – if it happens – won’t occur overnight. 

Although referees are often maligned by angry sports fans, people still see officiating a game as a complex task that requires human judgment. Increased use of sensors and cameras might give an AI system the data to make certain calls, but such systems may need something closer to a theory of mind before they seem human enough to be embraced by teams and fans. As we see with technologies like self-driving cars, the technology is here; it’s the policy and regulation that need to catch up.

So when you’re watching the Super Bowl this weekend, keep an eye on the referees. Can you picture the game without the zebra jerseys on the sidelines? Are we ready to welcome automated play calling to the wide world of sports? 

Infographic

Something we seemingly can’t live without—224 times a day

According to our research, adults in the U.S. send and receive an average of 14 messages every waking hour, counting IMs, texts, and emails. 
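The headline figure follows from that hourly rate if we assume a 16-hour waking day (our assumption for illustration; the exact figure isn’t stated here):

```python
# How 14 messages per waking hour becomes 224 per day,
# assuming a 16-hour waking day (an assumption for illustration).
messages_per_hour = 14
waking_hours = 16
print(messages_per_hour * waking_hours)  # 224
```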

The constant stream of notifications this produces can be detrimental to focus and productivity. In fact, it can take nearly 30 minutes to get back on task after responding to an email. Now multiply this out across your whole day. 

We’ve hit a threshold when it comes to how much information we can manage on a daily basis. There’s only one way to go from here, and that’s simplification. 

Watch the video version of this enFact here.

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you.