Confirmation bias

The hazards of confirmation bias in life and work [VIDEO]

People spend 36% more time reading articles that support their own established opinions. Why do we put so much effort into confirming what we already believe? The culprit is confirmation bias, the brain’s tendency to interpret the world in ways that are consistent with existing expectations. Confirmation bias is a mental shortcut that lightens the brain’s cognitive load, but the convenience comes with its own challenges. 

This video presents an overview of how confirmation bias works, and how you can teach yourself to burst your own bubbles of bias. Read more about the hazards of confirmation bias in life and work here.


A timeless lesson in management

Dr. Rudd Canaday is Entefy’s Software Architecture Fellow. He is a co-inventor of the UNIX operating system and was a graduate student at MIT’s pioneering Computer Science & Artificial Intelligence Lab. Rudd shares some of his experiences during computing’s early days below—as well as a timeless career insight.

In 1975, when I was 37, I got my first and only job at Bell Labs that was not in research. Years earlier, Bell Labs had written an elaborate software system named DIR/ECT, for DIRECTORY, used in printing white pages phone books. This sounds like a simple job, but the rules for how listings were arranged, alphabetized, and displayed in the white pages were arcane, dating back to the turn of the 20th century. The software was obsolete, using batch processing with data on magnetic tapes, and it was notoriously hard to use and error-prone. I was given the unenviable task of figuring out what could be done.

The head of the department maintaining the DIR/ECT system and I looked at the system and decided that it was not practical to bring it into the (then) modern on-line age. We decided that a new system needed to be built. Officially it was named “DIR/ECT II,” but we called it “the upgrade” to emphasize that it would be fully compatible with the old system. The work of maintaining the old system, and of building a new one, was funded by the “operating companies,” the telephone companies that were part of AT&T at the time. So the DIR/ECT department head and I had to convince the operating companies that a new system was needed and that they should fund it. That wasn’t difficult, since the old system was so challenging to use. My estimate to get the job done? Three years.

I formed a new department focused on the upgrade that eventually grew to 50 people under 7 supervisors. Meanwhile, next door, the original DIR/ECT department of about 30 people maintained the old system.

The DIR/ECT II department was by far the largest I had managed—and it was to be my least successful managerial experience. Since the project was being paid for by the AT&T operating companies, I had the job of presenting its status at a twice-yearly meeting and convincing them to continue funding it. At first these meetings were easy, but as we missed our deadlines they became more difficult and much more expensive in computer time. In the end, it took us 5 years, not 3, to complete DIR/ECT II.

Part of the problem was my decision early on that we would build the system on UNIX using a DEC (Digital Equipment Corp.) minicomputer. DIR/ECT II was the first AT&T product built on UNIX, and perhaps the first on a minicomputer. The decision to use UNIX was controversial largely because UNIX only ran on minicomputers. It turned out to be an unwise decision. The computational demands of the system were much higher than I had originally anticipated. 

As a result, a couple of years into the project we saw that we needed a faster machine. Fortunately, DEC was about to introduce a new machine that promised to be much faster. After all, each previous new DEC machine had doubled the speed of its predecessor. But this time the new machine didn’t perform as well. Without the extra speed we had been counting on, we were forced to spend quite a bit of time making our system more efficient. Even so, when we finally went live with DIR/ECT II, performance was only marginal.

Meanwhile, the DIR/ECT department head was wrestling with the problem of motivating his people to work on a dinosaur while, next door, their colleagues were building the sexy new system. His answer was to challenge his team to improve the old system so drastically that by the time DIR/ECT II was available, the operating companies wouldn’t see a need for it. No one thought this was possible. The old system was just too cumbersome.

In the end, though, we agreed that the challenge would be good for his team. My colleague invented the slogan “Obviate the Upgrade,” and threw down the gauntlet. To our surprise, by the time my team finished DIR/ECT II, the old system had been transformed into a modern, on-line system. And, indeed, after the new system had successfully passed its trial period, none of the operating companies wanted it and it was abandoned. 

Moral of the story? Competition isn’t always a bad thing; it can push teams to accomplish remarkable things.


Hiding in plain sight: 8 digital security threats in everyday life

What’s more valuable to you: protecting digital privacy…or free pizza?

It turns out a startling number of people in a Stanford study chose pizza over protection. In the study, 3,108 undergraduate students were told they were joining a study on the use of Bitcoin for making payments. Students first answered survey questions about their views on digital privacy, giving the researchers data on their stated preferences.

A group of students was then offered a free pizza if they divulged the email addresses of three of their friends. The overwhelming majority of students took the cheesy pizza for the low cost of compromising someone else’s digital privacy.

Interestingly, students who had described digital privacy as important to them were just as likely to choose pizza as the students who had no strong views about privacy. One researcher commented that people “are willing to relinquish private data quite easily when incentivized to do so.” Apparently, the cheesier the incentive, the better.

“Generally, people don’t seem to be willing to take expensive actions or even very small actions to preserve their privacy,” the study’s author stated. “Even though, if you ask them, they express frustration, unhappiness or dislike of losing their privacy, they tend not to make choices that correspond to those preferences.” Add this to the list of complex paradoxes in the digital world: the deep disconnect between what we say about digital privacy and the actual choices we make.

These days, threats to our digital privacy and security can come from practically anywhere. And some of them are hiding in plain sight. Take a look: 

1. Think you’re safe from privacy violations at work? You’ll probably want to know that one report estimates that 15% of Fortune 500 companies use covert tracking devices hidden in lights and ID badges. One surveillance vendor reports that 350 different companies are using its products to monitor “conference room usage, employee whereabouts, and ‘latency’—how long someone goes without speaking to another co-worker.”

2. The CEO of iRobot, the maker of the popular Roomba automated vacuum cleaner, caused a stir after apparently suggesting the company was seeking deals to sell data about the layout of users’ homes to third parties. The company later clarified that it didn’t have any plans to sell the data without users’ consent. The situation shines a spotlight on the ongoing tension between personal privacy and the monetary value of certain types of consumer data. 

3. Achieving the elite heights of pro sports apparently doesn’t make you immune to privacy threats. The NBA and its players’ union are in conflict over how much data can be collected and shared using wearables like fitness trackers. The players’ union is seeking control over what data is collected and how it gets used. These are exactly the same legal and ethical issues being raised as more and more employers deploy wearables to their employees.

4. Your car is watching. Computer systems in many newer cars create records of pretty much everything you do on the road, from logging telephone calls to recording how fast you drive. The challenge for consumers is figuring out what’s being collected, and where it goes afterward. The legal situation in the U.S. is murky, with no one law covering data collection by automobiles.

5. Be careful what you say in front of Barbie. A study from University of Washington researchers demonstrates how the Internet of Toys is raising new privacy questions. In interviews with parents and children about the use of Internet-connected toys, the researchers found that children were unaware that their toys were recording their voices, and that parents worried about privacy pretty much any time the toys were out of the toy boxes. 

6. A lighthearted Facebook meme may unintentionally telegraph answers to your banking security questions. The post, called “10 Concerts I’ve Been To, One is a Lie,” asks users to share information about concerts they’ve attended. The problem is that “Name the first concert you attended” is a common security question used by banks and other financial institutions for online authentication. Phishing aside, the meme can also “telegraph information about a user’s age, musical tastes and even religious affiliation — all of which would be desirable to marketers hoping to target ads.” 

7. Usage-based insurance (UBI) is the term for insurance products that are priced according to specific usage factors. UBI auto insurance, for example, is priced on factors like how often a driver uses their car, how fast they take corners, and their average speed. University researchers were able to demonstrate that it’s possible to reveal personal data by pointing an AI algorithm at usage-based insurance data stored in the cloud. One researcher commented, “An attacker only needs one part of the information provided to a UBI company to discover a driver’s whereabouts, home, work, or who they met with.”

8. An audit by the Internet security nonprofit Online Trust Alliance found that 6 of the 13 “Free File Alliance” tax websites approved by the IRS provide inadequate security and privacy protection. The report states, “Criminals are increasingly penetrating IRS systems, targeting e-file service providers and harming consumers through bank account take-overs, identity theft, ransomware and compromising completed returns to redirect tax refunds.” As if April 15 wasn’t stressful enough.

All of these cases point to an important reality of the digital age: New privacy and security questions are created every time a new device is connected to the Internet. Which is why people will no longer think you’re crazy if you ask, “Is that Roomba watching me?”


Communication speed test [VIDEO]

We tend to read far faster than we speak, but when it comes to communication, speed isn’t everything. When you need to get your message across, context is where it’s at.

In this video enFact, we use some famous lines from Shakespeare to explore how meaning changes with the way words are delivered. It’s an important point in today’s digital world, where how you send a message can be as important as what you say.

Read the original version of this enFact here.

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you. 


Learn new skills faster using your memory’s biggest weakness

In this era of lifelong learning—in which rapid technological advances are forcing professionals to acquire new skills throughout their careers—getting better at the process of learning is critically important. One element that’s often left out of discussions on learning is…forgetting. Because it turns out that forgetting is as important to learning as is remembering. 

Wouldn’t a perfect memory make life so much easier? No more forgotten names or misplaced keys. Everything you ever read or watched—podcasts, documentaries, books—available forever in perfect fidelity. Learning would be a cinch; just imagine how much smarter you’d become.

Except: The brain doesn’t work that way. Remembering is an essential element in learning, but it isn’t everything. Because as counterintuitive as it sounds, forgetting is just as valuable as remembering.

Take the case of Solomon Shereshevsky. Shereshevsky had what most people would call a photographic memory. He could recall poems in foreign languages and long mathematical equations years after even a brief exposure. 

For 30 years in the early 20th century, Shereshevsky was studied by Russian psychologist Alexander Luria, who diagnosed him with an extreme case of synesthesia. It’s a condition in which a person’s senses become tangled in unusual ways, like experiencing smells in response to sounds, or colors in response to numbers. 

This rich experience of even mundane moments contributed to Shereshevsky’s exceptional memory. Yet his prodigious memory did not result in greater intelligence. In fact, it caused him difficulties with recognizing faces and extracting meaning from printed text. Shereshevsky was so burdened by his memory that he began writing words down and burning them in an unsuccessful attempt to forget something. 

A perfect memory would overwhelm us with details. Which is why the act of learning something new requires being able to both remember what’s important and forget what’s irrelevant. But how do we nudge our brains to forget the noise and remember all the important stuff? 

Focus and flow

Take a moment to recall one of your most vivid memories. It likely consists of a mixture of sensations—sights, sounds, emotions, smells. These rich experiences are wired together using many different areas of the brain. As we learned from Shereshevsky, this variety is a natural boost to memory. 

The challenge is that learning, the act of sitting down to study, isn’t usually an experience rich with sensory inputs. We can overcome this shortcoming by consciously directing focus and attention, giving the memory more to work with. Deep focus signals to the brain that the information being focused on is significant and worth remembering.

The first thing to keep in mind about attention is that it comes in limited supply. Our mental workspaces can manage up to 7 bits of information at once, a natural limitation that can make it hard to focus on one thing for an extended period of time. Which is why most people have developed the habit of jumping from topic to topic or activity to activity. Multitasking their days away.

Difficult or not, deliberate focus encodes information into memory and is central to effective learning. A mind divided between disparate tasks is one whose attention and focus are jumping about, reaching for one thing only to put it down moments later to pick up something else, never taking the time to grapple with one item in a meaningful way. 

This is no way to learn. If you don’t find what you’re studying interesting enough to fully focus on, then your brain isn’t going to find it important enough to store long term. The more focus we devote to what we’re learning, in duration and effort, the greater the chance that information can form a lasting impression. 

An interesting thing occurs during those times when we invest all of our attention in a task. We enter what psychologists call a state of flow. Being in the zone. It’s the place where sports stars perform at their peak and musicians hit every note. When we’re in this state, our sense of time becomes warped, we are absorbed completely by what we’re doing, and our performance rises a notch.

One of the conditions required for entering a state of flow is a challenge. The brain requires an obstacle to overcome, one that isn’t too difficult or too easy. If it is easy, we are more likely to become bored; if it is overly difficult, we will fail or simply give up. The key lies in possessing the skills necessary to complete the challenge even as you push yourself to your limits. 

Not only are these the moments when we perform at our best, they are the moments we best remember. So how do we encourage such a state when we’re trying to learn something new? Learning something new is certainly a worthy challenge, but how do we ensure the right balance between our skills and the difficulty of the task?

Learning isn’t consuming information

Even perfect control over our focus and attention doesn’t make us immune to forgetting. To make new material stick, we need to interact with it regularly.

Jumping from podcast to video to article to book might appear like a good use of time, but it’s a strategy more likely to cause you to forget most of the details you come across. In fact, unless you purposefully revisit what you’re learning, you’ll end up with a mental jumble of accurate, inaccurate, and misremembered information. 

The rate of forgetting is a documented phenomenon. Hermann Ebbinghaus began studying the effects of repetition on memory formation in 1879. He tried to memorize long lists of nonsensical syllables and found that while he would forget quickly after the first attempt, forgetting would progressively slow down the more he retrieved the information. He plotted these observations onto a graph called the “forgetting curve.”

We learn two things from Ebbinghaus and his forgetting curve. The first is that repetition is a necessary part of learning. It’s almost impossible to learn something effectively from a single exposure. The second important finding is that this repetition is most effective after some time has elapsed. Forgetting is beneficial to learning. 

Here’s how it works. You study for a while and then allow time to pass, inevitably forgetting important elements of what you wanted to remember. If you take the time to revisit that lost information, your memories are jogged and the process of forgetting slows dramatically. After several exposures, forgetting ceases altogether and you’ve learned something new. 
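
Here is that cycle as a minimal Python sketch, assuming the exponential-decay model often used to describe the Ebbinghaus curve (retention R = e^(-t/S), where the “stability” S grows with each successful review). The initial stability, the 50% review threshold, and the 2.5x growth factor are illustrative assumptions, not figures from Ebbinghaus:

```python
import math

def retention(t_hours: float, stability: float) -> float:
    """Fraction retained after t_hours, per an exponential forgetting curve."""
    return math.exp(-t_hours / stability)

stability = 24.0   # assumed initial stability, in hours (illustrative)
threshold = 0.5    # review when retention decays to 50% (illustrative)
elapsed = 0.0

for review in range(1, 5):
    # Solve e^(-t/S) = threshold for t: the time until the next review is due.
    interval = -stability * math.log(threshold)
    elapsed += interval
    print(f"review {review}: after {interval / 24:.1f} days (day {elapsed / 24:.1f} overall)")
    stability *= 2.5   # each successful recall slows future forgetting
```

Notice how each successful review pushes the next one further out; that expanding schedule is the forgetting curve working in your favor.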

Learning through revisiting what we forget creates long-term memories. But it also allows us to look upon what we’re studying with a fresh perspective, often leading to new associations among facts. 

There is another important element to this idea of spaced repetition: We need to retrieve the information “manually” by deliberately recalling it. Recall helps memory formation more than simply looking back over the original material. 

Think of it this way. Once you’ve learned what you’re trying to learn, you probably aren’t going to keep carrying around your study material. Nor do you generally have time to do a web search every time you want to cite a fact. Ensuring that your brain houses the information you want requires practice retrieving it, not re-consuming it. We shouldn’t just practice putting the information in, we need to practice getting it out by testing ourselves.

Learning is timeless, knowledge is not

We’ve all misplaced someone’s name at an inopportune moment, had the right answer on the tip of our tongue, or noticed how different a friend’s recollection of a shared experience can be. The brain holds onto only a very small fraction of everything we’ve ever experienced. Forgetting helps us extract what matters and what is useful, and it keeps us from wasting resources by filling the brain with superfluous information.

Taking advantage of the power of forgetting requires reevaluating how we learn. Quickly glancing over an article or speeding up a podcast to twice its normal speed will not help learning. Effective learning demands more care and deliberate effort. First, we must concentrate on the subject with everything we have, aiming not for speed but rather the slow extraction of meaning and understanding. Second, we must revisit this new knowledge periodically, to ensure that what we remember is accurate and comprehensive. 

Gone are the days when you finished learning, when a Bachelor’s degree meant you held all of the knowledge you would need in your career. With the future defined by constant change, continuing education is important and relevant to all of us.

When it comes to teaching ourselves the best ways to learn continuously, we shouldn’t be focused on the end result but the process. After all, specific knowledge can become obsolete practically overnight. The ability to quickly and effectively learn new skills is timeless.


The next frontier of cybercrime: car hacking

Dive under the hood of the average car and you’ll find that 100 million lines of code are at work running or supporting practically every system. Cars with first-generation assisted driving systems have 200 million or more lines of code, and climbing.

The challenge is that bugs in code can create security vulnerabilities that provide hackers opportunities to cause problems. Coming soon to a headline near you: a computer virus that shuts down specific models of cars, or a ransomware attack against a major car brand. It’s looking less and less like science fiction.

After all, the more lines of code, the more potential vulnerabilities. One estimate of defect density in code suggests that large, complex programs—like automobile operating systems—can carry 1 vulnerability per 1,500 lines of code. That suggests more than 66,000 potential vulnerabilities in the average automobile. No wonder cybersecurity is such a big deal at automakers. It’s up to them to ensure car hacking doesn’t become the new carjacking.
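
The arithmetic behind that estimate is easy to check. Here’s a quick Python sketch using only the figures cited in this enFact (keep in mind the 1-in-1,500 defect density is itself an estimate, not a measured value):

```python
# Reproducing the enFact's estimate: lines of code times an assumed
# defect density of 1 vulnerability per 1,500 lines.
average_car_loc = 100_000_000        # average modern car (from the text)
assisted_driving_loc = 200_000_000   # first-generation assisted driving (from the text)
vulns_per_line = 1 / 1_500           # estimated defect density

print(f"average car:      ~{average_car_loc * vulns_per_line:,.0f} potential vulnerabilities")
print(f"assisted driving: ~{assisted_driving_loc * vulns_per_line:,.0f} potential vulnerabilities")
```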


Entefy Raises Series A at $150 Million Valuation

AI communication company Entefy secures $8 million in Series A funding for its universal communicator

PALO ALTO, August 24, 2017 – Entefy Inc. announced today that the company has secured its Series A with $8 million of capital at a $150 million valuation. This brings the company’s total venture funding to date to $17.7 million. The new capital supports the company’s steady march toward product launch as well as additional hiring.

“Closing our Series A is a significant milestone for the company, adding to Entefy’s already strong momentum in innovation, product development, and hiring. We’ve created an advanced artificial intelligence platform that brings communication, search, and security to a new level,” said Entefy CEO and Co-Founder Alston Ghafourifar. “Internally, we’ve been using the core technology and expect private beta deployments. It’s an exciting time for our team and the AI industry as a whole.”

2017 has already been an active year for Entefy with its advances in core AI, communication, search, and cyber security technology. In May, Entefy announced it had been issued a patent covering state-of-the-art context awareness in digital messages. In March, the company announced the issuance of another patent covering encrypted search, strengthening its data security and search capabilities. These patent issuances follow on the heels of the January announcement that Entefy had filed a group of 13 new patents in artificial intelligence, security, and cyber privacy, bringing total patents filed to 31.

“Our team has been heads down building the first universal communicator,” said Entefy Co-Founder Brienne Ghafourifar. “With our Series A formalized, we look forward to unveiling more about why our team and investors are so excited.”

Entefy was founded by the sibling duo Alston Ghafourifar and Brienne Ghafourifar. Their vision is nothing short of democratizing digital communication—freeing users from walled garden platforms and protecting them from data insecurity and cyber privacy violations. 

About Entefy

Entefy is building the first universal communicator—a smart platform that uses artificial intelligence to help you seamlessly interact with the people, services, and smart things in your life—all from a single application that runs beautifully on all your favorite devices. Our core technology combines digital communication with advanced computer vision and natural language processing to create a lightning fast and secure digital experience for people everywhere. 


What would professional sports look like with AI referees and other smart tech? [VIDEO]

Artificial intelligence has the potential to take on the toughest job in professional sports: the referee. But are fans ready for automated play calling? Throughout professional sports, advanced technologies like AI and smart sensors are already having an impact on the roles of not just refs, but coaches, players, and even fans. 

In this video, we look around the wide world of sports at how smart technologies are impacting the ways games like football, soccer, basketball, and fencing are played and watched. 

You can read more about the use of AI in professional sports here.


Making smart use of smart systems: AI’s disruptive impact in 10 industries

Entefy recently covered AI advances in “traditional” industries like agriculture and banking, the follow-up to a look at disruptive technologies in manufacturing and heavy industries like aerospace and automobiles. For all of the differences between these industries, they share in common the need to understand the potential and limitations of artificial intelligence, and plan accordingly.

Can we spot the same common threads running through industries known for their advanced uses of technology? From telecommunications to travel to media, companies large and small are pursuing entirely new products, services, and capabilities created by smart uses of AI algorithms. 

Here are 10 examples of disruptive AI technology in action:

1. Mobile telecom

Wireless telecommunication companies have access to volumes of data from their millions of customers. One telecom implemented a machine learning-powered real-time customer analytics system that enabled it to track and respond to consumers immediately. The data gathered by the new system generated insights that improved the company’s customer service communication.

2. Investments 

People are not, generally speaking, purely rational investors and their irrationality is what makes markets unpredictable. An artificial intelligence algorithm that can anticipate human behavior while also monitoring economic signals in real-time could be highly disruptive to today’s markets (though some insiders have their doubts). Whether or not an AI “super investor” appears on the scene, the investments industry will require ever-smarter safeguards against exploitation and risk.

3. Travel 

Online travel booking is nothing new, but AI-assisted vacation planning? That’s more of a novelty. Beyond aggregating flight times and hotel prices, computer programs now pull data about customers’ online behaviors and use learning systems powered by past preferences to personalize recommendations. When a human agent isn’t in the picture, chatbots can now answer questions and book reservations as well. “Nothing will ever replace the expertise and intuitive nature of travel agents,” said one travel industry veteran. “Artificial intelligence brings just another component to their tool kit.”

4. Information Technology 

IT professionals in particular find themselves at an exciting turning point in their careers. As more companies integrate AI into their processes, to one extent or another, IT teams are learning how to engage with these new technologies. A 2016 report from Narrative Science and the National Business Research Institute predicted that 62% of enterprises would be using AI by 2018. Given that, IT could soon encompass competencies in machine learning platforms, natural language processing, decision management software, and AI-optimized hardware.

5. News media

The media has been under siege by critics and fake news purveyors during the past several years, but it may find an ally in AI. The Associated Press uses AI software to crank out earnings reports, and data companies are increasingly generating information useful to reporters. The lightning speed at which AI algorithms can gather and process multiple types of data could be a boon to journalists, enabling them to report breaking news as it happens. The Los Angeles Times encountered this firsthand in 2013, when it used a bot to report on an earthquake almost as it was happening.

6. Pharmaceuticals

Pharmaceutical researchers have begun integrating AI into developing new drugs. Using machine learning to transform drug creation, these platforms analyze medical histories, chemical databases, and past scientific findings to identify correlations between genetic markers and patient outcomes. This method of drug testing costs 50% less than traditional approaches and provides insight into how a treatment might impact certain types of patients. Pattern-recognition technology can provide a view into how different diseases work as well, allowing researchers to develop drugs that will target them more effectively. Most important, AI deep learning enables doctors to provide more targeted treatment plans based on an individual’s genetics and history.  

7. Online dating

Can autonomous systems make better matches than people? After all, people have been matchmaking practically since there were people to match. Dating itself is ripe for disruption: it is time- and labor-intensive and carries a high failure rate. There’s plenty of room for improvement. So it’s not surprising that the online dating industry is exploring adding AI to the game of love, addressing common online dating complaints like dishonesty in profiles and increasing the relevance of the data underpinning matchmaking algorithms. 

8. Motion pictures

Hollywood loves making movies about AI. Now it’s using AI to make and sell movies. There are AI systems that have been used to create movie preview trailers and even write screenplays. But the movie business might see an even bigger impact from AI systems that predict the likelihood that a given script will be a blockbuster. One such system was trained using scripts and box office revenue data going back to the 1980s. Given that just 20% of movies break even, there is a lot of room to improve the greenlighting process.

9. Publishing

With more than 1 million books published each year—a figure up 400% from just 10 years ago—competition for readers’ attention is fierce. Data can help publishers make decisions about which books to publish, but the best-in-class reader analytics solutions can take up to 4 weeks to process data before providing actionable insights. A new generation of AI publishing systems is rewriting the rules, analyzing the text of books to predict reader engagement and sales performance.

10. Semiconductors

You don’t have to do much more than read business headlines to grasp the impact AI is having on the semiconductor industry. Nvidia, until recently known for its graphics processors used in video games, is emerging as a leader in processors for AI number crunching. Google has launched its own AI-focused chip. The CPU king Intel is making acquisitions to catch up. Winners and losers TBD, but clearly the chip industry is being shaken up by the demand for AI processing power.

Taking a step back from the details, what we see in these ten examples are companies moving quickly to take advantage of the new capabilities artificial intelligence creates. 


The rise of malware

Malware. Even the word sounds sinister. It’s a fitting term for software designed to wreak havoc on computers—and people’s lives. Malware often infects computers when users install or run otherwise useful software that contains malicious code hidden within it. Some forms of malware grab headlines, like the “WannaCry” ransomware attack that infected 300,000 computers and effectively shut down the UK’s National Health Service.

How big a deal is malware? The computer security company Symantec detected 401 million unique malware variants in 2016. Not 401 million cases of malware, but that many unique malware threats, each with the potential to infect thousands, even millions, of computer systems. With the population of the U.S. at 325 million, that stat represents 1.2 unique malware variants per man, woman, and child in America.
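
As a quick sanity check on those numbers (both figures come from the paragraph above; the per-day rate is simply the annual count divided by 365), here’s a minimal Python sketch:

```python
# Back-of-the-envelope math on Symantec's 2016 figures (from the text above).
variants_2016 = 401_000_000   # unique malware variants detected in 2016
us_population = 325_000_000   # approximate U.S. population

print(f"variants per U.S. resident: {variants_2016 / us_population:.1f}")  # ~1.2
print(f"new variants per day:       {variants_2016 / 365:,.0f}")           # ~1.1 million
```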

But don’t get down. There are ways to protect your digital self, starting with Entefy’s roundup of guides to protecting the online you.