Entefyer

Cutting-edge technology meets cutting-edge health science

Recently, Entefyers were treated to a health and wellness seminar called Health 360 presented by our advisor and investor Dr. Farzan Rajput. Known fondly to our team as Dr. Fuzz, Dr. Rajput is a cardiologist and the founder of Southcoast Cardiology in Southern California. He is also one of the masterminds behind Health 360.

The seminar began with two key insights. First, most diet plans use a one-size-fits-all methodology, leaving many people dissatisfied with the results. Second, health and nutrition go far beyond balancing calories in and calories out or basic measurement of body mass index (BMI); the most recent nutritional science tells us that metabolism is a far more important factor. In fact, each of us has dietary and exercise needs as unique as a fingerprint. 

Through that lens, optimal health depends on factors like how the body processes macronutrients (carbohydrates, fats, and proteins), produces insulin, and activates different muscle groups during exercise. 

The seminar concluded with an informative Q&A focused on tips for making better diet and exercise decisions. As Dr. Fuzz pointed out, working at a fast-paced company like Entefy means getting creative about fitting health and wellness into our lives. Health 360 helps promote wellness in ways that sustain momentum and productivity far into the future. 

Entefy is committed to promoting the health and wellness of its team members, including hosting a previous Health 360 seminar in May.

Innovation

Big innovation takes courage

Innovation. A cornerstone concept in Silicon Valley, but a term that has worn a little thin from overuse. These days the word is trotted out to describe just about any new idea. Recall that the $700 Internet-enabled juicer Juicero was hailed as “the latest Silicon Valley innovation” before failing spectacularly after news broke that anyone could hand-squeeze the company’s proprietary juice pouches without needing the juicer. 

Innovation’s reputation has declined to the point where economists and tech insiders question whether the age of innovation has passed. As early as 2011, entrepreneur Max Levchin said that U.S. innovation was “somewhere between dire straits and dead.” His PayPal co-founder Peter Thiel agreed, asserting that “If you look outside the computer and the Internet, there has been 40 years of stagnation.” 

Well, yes and no. Every day it seems there’s some new app or service designed to make our lives easier or save us a few minutes here or there. There’s innovation in there somewhere. But aren’t most new startups examples of incremental improvement, and not fundamentally disruptive innovation? 

Despite the flurry of new tech products that have come to market in the past 10 years, innovation’s economic impact has flagged considerably. The Wall Street Journal noted that:

“Economies grow by equipping an expanding workforce with more capital such as equipment, software and buildings, then combining capital and labor more creatively. This last element, called ‘total factor productivity,’ captures the contribution of innovation. Its growth peaked in the 1950s at 3.4% a year as prior breakthroughs such as electricity, aviation and antibiotics reached their maximum impact. It has steadily slowed since and averaged a pathetic 0.5% for the current decade.

Outside of personal technology, improvements in everyday life have been incremental, not revolutionary. Houses, appliances and cars look much like they did a generation ago. Airplanes fly no faster than in the 1960s. None of the 20 most-prescribed drugs in the U.S. came to market in the past decade.”

Economists view this innovation slump as the reason that the standard of living in the U.S. is the same as it was in 2000. Even though there are more jobs than ever in science and engineering and even though patent approvals are at an all-time high, productivity growth is trending downward. Some experts say that the low-hanging fruit in tech innovation has already been captured, and all that’s left to pursue are complex, risky fields that are subject to strict regulation and public scrutiny.  

So, the question becomes, can you still innovate when every idea has been done to death?

Shifting the innovation conversation 

Let’s define what we mean by the term “innovation.” At Entefy, we evaluate innovative concepts according to three criteria:

  • Ideas that are novel for their time
  • Ideas with clear benefits
  • Ideas that are developed and implemented effectively 

In other words, innovative ideas are original, beneficial, ahead of their time, and launched successfully. We also prioritize the concept of courageous innovation, which often involves taking the more difficult path toward finding and developing truly transformative ideas. Ideas that require overcoming major obstacles: financial and technological risks, political headwinds, and the power of incumbents. The market doesn’t demand more incremental lifestyle improvements (after all, how many social sharing apps does one world need?). The world’s most interesting and urgent problems require complex, challenging solutions that are difficult to bring to life.

Seen in that light, it’s clear that innovation isn’t dead. The bar is simply much higher than it used to be. And the good news is that there’s reason for optimism. Advances in artificial intelligence, which encompasses a range of groundbreaking technologies, and blockchain are just two examples of foundational technologies that companies can use to create true innovation. Both artificial intelligence and blockchain are notoriously complex, but they offer great opportunities to create new solutions in a variety of industries.  

Finding new paths to true innovation 

Companies that want to innovate have limitless opportunities to do so. But they must be bold and resilient enough to tackle real problems. Fifteen years ago, social media platforms were revolutionary. They provided us with new ways to keep in touch with family members, friends, and classmates, not to mention gather real-time news, and amplify the voices of oppressed communities around the world. 

But today unless you have figured out how to build a social network that defies the trend toward ageist design, combats fake news, improves fact-checking, and resists bots and fake accounts, the chances are your idea isn’t all that innovative. Same goes for online delivery services and e-commerce platforms. The innovative ideas of today, and the coming decades, will solve more critical problems, and they’ll do it in groundbreaking ways. 

If you want to innovate for the long term within your company, start with business intelligence. Instead of tweaking existing products or chasing competitors’ latest updates, build something entirely new. By tracking your audience’s digital shopping and behavioral patterns, you’ll get a sense of what their pain points and frustrations are. That’s where you’ll find opportunities to deliver substantial value. 

But as we discussed above, we’re long past the need for better vacation rental apps. Yesterday’s innovators are today’s dominant platforms. Copying or iterating on what they’ve done does not add up to innovation. Instead of playing catch-up to past disruptors, use your business intelligence data to innovate for your audience’s needs several years out. If you can identify the products and services they’ll need in the coming years, you give yourself some lead time to invest in genuinely innovative ideas. 

Sometimes bigger companies find themselves in an innovation rut. As they grow larger, they’re beholden to a greater number of stakeholders, and they become risk-averse. One way to shake up your team’s thinking under these circumstances is to give them constraints. Even if your business is flush with cash, limit their budgets or the types of ideas you’ll accept. Set clear parameters for what counts as an innovative concept, and see what they come up with. 

You might also consider partnering with a company outside your field. Even if your existing product lines don’t have much in common, you can provide a fresh take on one another’s offerings. With shared values and a commitment to finding truly groundbreaking ideas, you’ll likely be able to create a product that sets a new standard in both your respective fields. Better yet, the collaboration could help both brands make inroads into new markets. 

Finally, tap your internal networks for ideas. Intrapreneurship programs could yield innovative ideas from unexpected places. Your product development team might be burned out, but an IT employee or marketing associate could be sitting on the next big thing. By instituting an intrapreneurship policy, you encourage people from across the company to think about your clients’ or customers’ biggest problems. 

Most important, you create a culture in which they feel free to voice those ideas instead of staying silent until they one day strike out on their own. Small businesses are hard to launch and even harder to sustain. With your resources and your employees’ insights, you both stand a better chance of bringing innovative concepts to market. 

At Entefy, we believe that innovation is alive and well. The appetite for truly new, beneficial uses of technology has never been stronger. But we also believe that innovating requires inventors to roll up their sleeves and do the hard work of building and launching complex systems. Those that rise to meet the challenge really will change the world.


The Internet of Things is wildly insecure. Here are 8 ways to protect your smart home.

Consumers are connecting 5.5 million new Internet of Things devices every day around the world. This despite high-profile, successful cyberattacks like the distributed denial-of-service (DDoS) assault on domain-name service provider Dyn in late 2016. In that attack, hackers took advantage of security vulnerabilities in IoT devices like smart refrigerators and toys to create a massive botnet that temporarily shut down Netflix, Twitter, Spotify, and Pinterest. 

When it comes to a smart home filled with these vulnerable IoT devices, just the risk of being hacked can be nerve-wracking. The conveniences of existing and future IoT products are tremendous, and they’ll only become more valuable as their underlying technologies and cybersecurity mechanisms improve.

For smart home early adopters, the challenge is really the sheer number of vulnerabilities that exist in the Internet of Things sphere. But there are several easy-to-implement security steps you can take to increase the security profile of your smart home.

1. Understand terms of service 

The first step toward securing your data is understanding what you’re agreeing to each time you connect a new IoT device. Read the terms of service carefully, and don’t be afraid to ask questions. A simple web search about an unclear phrase will likely reveal forum responses and blog posts dedicated to your question. 

If you can’t find what you’re looking for, contact the company and ask them to clarify anything that’s making you uneasy. Find out what types of data they collect, how the company uses your information internally, and what they share with outside partners. 

In some cases, you may be able to limit the information a device tracks, and you may choose to use that ability liberally. Some people are willing to trade their privacy for convenience, and that’s their decision. We all value privacy to varying degrees, and one person’s threshold for giving out personal information may be much higher than another’s. But it’s a choice you should make with eyes wide open, and that begins with clearly understanding the terms of service. 

2. Use secure passwords 

Hackers are becoming more sophisticated all the time, but don’t make their jobs easy. When creating new passwords, do not use a common string of numbers and avoid obvious choices such as “admin” or your kids’ names. Assume that hackers have access to personal data about you, and steer clear of any names and numbers associated with you like phone numbers, addresses, family names, and so on. Instead, make complex, tough-to-crack passwords and use different ones for each of your accounts and devices. 

When possible, add another layer of security such as multi-factor authentication or biometric verification. These might include having a code texted to your phone to ensure that you’re the person trying to sign into a device, or using your fingerprint to unlock your phone. It’s well worth adding these to your security protocols to add an extra layer of protection for your home devices. 
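As a practical illustration of this advice, a strong, unique password can be generated programmatically rather than invented by hand. The sketch below uses Python's standard `secrets` module; the default length and character classes are arbitrary choices for illustration, not requirements of any particular device or vendor.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = ''.join(secrets.choice(alphabet) for _ in range(length))
        # Require at least one character from each class before accepting
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password

# Generate a distinct password for each device or account
print(generate_password())
```

Because `secrets` draws from the operating system's cryptographically secure random source, each call produces an unrelated password, which makes reusing one across devices unnecessary.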

3. Update your software regularly 

Yes, it’s annoying to perform software updates, especially when you receive multiple update alerts each week. But software updates aren’t just about cool new features. They often include security updates as well, and you don’t want to expose yourself to hackers simply because you didn’t have the time to run an update. 

If you see that you have several update notifications, batch them. Set them all to begin at a time when you don’t need your computer or when you won’t be on your phone, such as while eating dinner or relaxing with your family. You might even set a reminder to do this at the same time every few days or once a week. Then it becomes routine, so you feel less hassled but are also using the most secure versions at all times. 

4. Secure the perimeter

Because individual IoT devices often contain security vulnerabilities, one strategy is to “secure the perimeter.” That is, focus on improving the security and setup of your home Wi-Fi network. There are several actions you can take to do this:

  • Use the strongest Wi-Fi security protocol your Internet router supports. The older WEP protocol offers far less security than newer protocols like WPA2. 
  • Disable guest network access, which simply provides another potential weak point in your security.
  • Give your Wi-Fi network a name that doesn’t reveal information about you, your home, or your location.

Taking these steps presents another layer of defense that hackers would have to compromise. 

5. Secure devices that control IoT devices 

Around 33% of smartphone users don’t password protect their phones, creating another potential point of vulnerability that hackers can use to access your home network and connected devices. Most IoT devices in a smart home are controlled by a smartphone app, so protecting the device running the app is a no-brainer.

6. Create two Wi-Fi networks

Another strategy for protecting your home from an online attack is to create two Wi-Fi networks. Limit access to the first network to your smart devices like tablets, laptops, and smartphones; these are the devices that store and access important data like online banking and medical records. The second network is used solely by smart home IoT devices. If any of these devices becomes compromised, hackers cannot use that network to access your personal devices.

7. Change default usernames and passwords

Here’s another quick and easy step for protecting your smart home from hackers. Most IoT devices are sold with default usernames and passwords. Hackers that access IoT devices already know the manufacturer default settings, and thus can easily take over control of a given device. Changing those defaults takes away that option and makes hacking a device considerably harder.

8. Turn off inactive devices

Here’s a win-win security step. Turn off devices when they’re not in use. A powered down device can’t be accessed remotely, limiting your security vulnerability, and also uses no power, lowering your energy consumption. This won’t apply for gadgets that need to be left on 24/7, like smart blinds and thermostats. But hardware like wireless printers and smart TVs can be safely powered down at night when not in use. Consider plugging these devices into timers that automate this for you—it is a smart home after all! 

Taking these 8 steps will add more cybersecurity smarts to your smart home. And be sure to check out Entefy’s article, Smart homes make smart spies, for additional insights about IoT security.

Suicide prevention hasn’t improved for 40 years. Thankfully, AI is changing that.

We’ve covered the use of AI in a variety of industries, from law to sports. But advances in medicine are perhaps the most important our society can make. Unfortunately, they’re also among the most challenging to achieve. From cancer research to Alzheimer’s studies, scientists are working tirelessly to better understand devastating conditions and create better treatments. But progress moves slowly, and nowhere is this more apparent than in suicide prevention. In 2016, researchers came to the grim finding “that there has been no improvement in the accuracy of suicide risk assessment over the last 40 years.” 

The challenges in suicide prevention are substantial. When confronted with decisions about whether to hospitalize potentially suicidal patients, clinicians must determine the likelihood that someone will take their own life in the immediate future. In some cases, hospitalization is vital. But in others, the patient might benefit from other therapeutic techniques and coping mechanisms that will help them manage drastic emotional incidents in the future. These are life and death decisions, and the pressure is enormous. 

Yet psychiatrists and other practitioners can refer only to guidelines that often prove less than useful in assessing someone’s suicide risk. A working group from the Department of Veterans Affairs and the Department of Defense said of existing suicide screening protocols, “suicide risk assessment remains an imperfect science, and much of what constitutes best practice is a product of expert opinion, with a limited evidence base.” 

Suicide is the tenth leading cause of death among Americans, with more than 44,000 people dying by their own hands each year. Depression and anxiety, which are closely correlated with suicide attempts, are on the rise in the U.S., including among teenagers. Last year, the suicide rate in the U.S. reached its highest point in 30 years. Doctors, caregivers, and loved ones are desperate to help people who are suffering. But many of the indicators commonly used to gauge someone’s risk level, such as past hospitalizations or incidents of self-harm, can be misleading. 

Fortunately, researchers may have found a powerful new tool for improving risk assessment methods. Recent experiments in using artificial intelligence to predict whether patients are at risk for committing suicide have shown promising results and returned surprising indicators that human observers are likely to miss. 

Augmented suicide prevention 

Software-based suicide prevention monitoring systems have already been used to track young students’ web searches and flag any alarming usage patterns, such as those related to suicide. However, artificial intelligence could offer a sharper, more proactive approach to risk detection and prevention. One group of scientists and researchers is working on a machine learning algorithm that so far has an 80-90% accuracy rate in predicting whether a patient will try to commit suicide in the next two years. When analyzing whether someone might try to kill themselves in the next week, the accuracy rate rose to 92%.

The algorithm learned by analyzing 5,167 cases in which patients had either harmed themselves or expressed suicidal tendencies. One might wonder how a computer program could do in months what doctors with years of experience struggle with regularly. The answer is by finding underlying indicators that humans might not think to look for. While talk of suicide and depression are obvious indicators that someone is suffering, frequent use of melatonin may not jump out as much. Melatonin doesn’t cause suicidal behaviors, but it is used as a sleep aid. According to the researchers, reliance on the supplement could indicate a sleep disorder, which, like anxiety and depression, correlates strongly with suicide risk.  

Researchers are discovering that rather than there being a few tell-tale signs, such as a history of drug abuse and depression, suicide risk may be better assessed through a complex network of indicators. Machine learning systems can identify common factors among thousands of patients to find the threads that doctors and scientists don’t see. They can also make sense of the web of risk factors in ways the human mind simply can’t process. For instance, taking acetaminophen may indicate a higher chance of attempting suicide, but only in combination with other factors. Computer programs that can identify those combinations could dramatically enhance doctors’ abilities to predict suicide risk.
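To make the idea of combined indicators concrete, here is a deliberately simplified Python sketch of a weighted risk score. The factor names, weights, and interaction bonus are invented purely for illustration and have no clinical validity; a real system learns such interactions from thousands of patient records rather than hand-coded rules.

```python
# Illustrative only: these weights are invented, not clinically derived.
RISK_WEIGHTS = {
    "sleep_disorder_medication": 0.3,   # e.g., frequent melatonin use
    "acetaminophen_use": 0.1,
    "prior_self_harm": 0.5,
    "reported_depression": 0.4,
}

# Hypothetical interaction: some factors matter mainly in combination
INTERACTION_BONUS = {
    ("acetaminophen_use", "reported_depression"): 0.3,
}

def risk_score(factors: set) -> float:
    """Sum individual weights, plus bonuses for known combinations."""
    score = sum(RISK_WEIGHTS.get(f, 0.0) for f in factors)
    for (a, b), bonus in INTERACTION_BONUS.items():
        if a in factors and b in factors:
            score += bonus
    return round(score, 2)

# Acetaminophen alone contributes little...
print(risk_score({"acetaminophen_use"}))                         # 0.1
# ...but combined with depression, the interaction raises the score
print(risk_score({"acetaminophen_use", "reported_depression"}))  # 0.8
```

The point of the sketch is the interaction term: a weak signal on its own becomes meaningful in combination, which is exactly the kind of pattern machine learning models surface at scale.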

Machine learning is being explored for other predictive uses as well. Scientists are experimenting with using machine learning to study fMRI brain scans to gauge a patient’s suicide risk. In a recent study, a machine learning program detected which subjects had suicidal ideas with 90% accuracy. Granted, the study only involved 34 people, so more research is needed. But the results align with other work being done, and it seems that the potential for machine learning to play a critical role in suicide prediction is strong. 

Machine learning could also become an essential tool for diagnosing post-traumatic stress disorder (PTSD). Between 11% and 20% of veterans who served in the Iraq and Afghanistan wars suffer from PTSD, and the most recent data available showed veteran suicides comprising 18% of deaths in the U.S. Psychiatrists and counselors may struggle to diagnose PTSD if soldiers don’t share the full extent of their trauma or symptoms with them, making it difficult to know whether they’re at risk for committing suicide. However, one ongoing study is looking at how voice analysis technology and machine learning can be used to diagnose PTSD and depression. The program is being fed thousands of voice samples and learning to identify cues such as pitch, tone, speed, and rhythm for signs of brain injury, depression, and PTSD. Doctors would then be able to help people who can’t or won’t articulate the pain they’re experiencing. 

Other forms of AI will become increasingly useful in the race to prevent suicide as well. Natural language processing algorithms could analyze social media posts and messages to identify concerning phrases or conversations. They could then alert humans who would intervene by reaching out to the potentially troubled person or contacting a resource who could offer support. Popular social media platforms already offer resources and support, to varying degrees, for both users who are considering harming themselves and for concerned friends and family who spot alarming posts. 

However, increasingly sophisticated natural language processing and machine learning techniques could identify at-risk users with greater accuracy and frequency. If we rely solely on people to report concerning content, there’s a good chance cries for help will be missed. The massive amount of content uploaded to popular social platforms each minute makes it impossible for users to see everything their friends have posted. But computer programs can scour for language that points to problems at all times, adding an important buffer for people who need help. 
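A minimal sketch of this kind of automated screening, assuming a hand-written phrase list: the phrases below are placeholders, and production systems use trained language models rather than fixed keywords, but the flag-then-escalate flow is the same.

```python
import re

# Placeholder phrases for illustration only; real systems rely on
# trained natural language models, not a fixed keyword list.
CONCERNING_PATTERNS = [
    re.compile(r"\bcan'?t go on\b", re.IGNORECASE),
    re.compile(r"\bno reason to live\b", re.IGNORECASE),
    re.compile(r"\bsay goodbye\b", re.IGNORECASE),
]

def flag_post(text: str) -> bool:
    """Return True if a post matches any concerning pattern."""
    return any(p.search(text) for p in CONCERNING_PATTERNS)

posts = [
    "Had a great weekend hiking with friends!",
    "I feel like there's no reason to live anymore.",
]
# A human reviewer would follow up on each flagged post
flagged = [p for p in posts if flag_post(p)]
```

Crucially, the program only surfaces candidates; the intervention itself, reaching out or contacting a support resource, remains a human decision.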

Some researchers are even looking to leverage data mining and behavioral data to better identify and assist people in need. Commercial brands regularly use behavioral information to hone their marketing messages according to people’s buying patterns and preferences. But doctors, social workers, and support organizations could soon use those tools for a more altruistic purpose.    

Wearables may also play a role in suicide prevention. If doctors could persuade at-risk patients to use tracking apps that gather data about their speech patterns and behavioral changes, they might be able to use that information to track when someone is more likely to become suicidal. The breadth of data gathered through apps and wearables could be analyzed to better understand mental health issues and intervene before patients’ circumstances become extreme. 

From heartbreak to healing 

It’s important to note that while AI may support suicide prevention, people will continue to play a critical role in helping at-risk loved ones recover and maintain a healthy mental state. Social connectedness and support are essential to suicide prevention. Regular, positive interactions with family, friends, peers, religious communities, and cultural groups can mitigate the effects of risk factors like trauma and drug dependence and alleviate anxiety and depression. 

Nothing is more heartbreaking to a family than learning a loved one has taken their own life and wondering what they could have done to help. Artificial intelligence soon may give people a greater chance of intervening before it’s too late and give those suffering from severe mental illness an opportunity to experience rich, healthy lives.  


Too many cooks in your kitchen? 5 ideas for better collaboration

Have you ever sat in an overcrowded conference room and had that old expression, “Too many cooks in the kitchen,” come to mind? Modern business is defined by increasing amounts of collaboration, a reality encouraged by open floorplan offices, mixed teams of onsite and remote workers, and participation in multiple projects with multiple teams, to name a few.

Yet in many ways, we each have a different view of just what “collaboration” is. And not always a positive view. Collaboration researchers noted that “Teamwork all too often feels inefficient (search and coordination costs eat up time), risky (can I trust others to deliver for my client?), low value (our own area of expertise always seems most critical), and political (a sneaky way of self-promoting to other areas of one’s firm).”

Below we’re sharing research and advice for collaborating in general, and for setting up effective meetings specifically: 

  1. Keep the group small. While anthropologist Robin Dunbar estimated that the limit for stable social relationships we can maintain is 150, the optimal team size is much smaller. There’s no one magic number, but research into optimizing small group size suggests that collaborative groups work best when they have around 5 members. Adding members beyond 5 can improve output, but at the cost of added management overhead. Another study of team size found that smaller groups “participated more actively on their team, were more committed to their team, were more aware of the goals of the team, had greater awareness of other team members, and were in teams with higher levels of rapport.”
  2. Make sure goals are clear. Limiting the number of people in the group forces us to choose the best people for the job, and choosing the best people means considering exactly what the goals and objectives are. Having people sit in on meetings that are irrelevant to them or outside of their skillset wastes time; letting people know what the point of the meeting is and what their role is within it helps keep the group on track and focused. 
  3. Don’t forget alone time. When a group’s goals are clear and members’ roles are spelled out, individuals can branch off for independent work. Schedule time to spend together and hold all communication until then—people are less likely to enter a state of flow if they’re distracted too often. 
  4. Ideas first, critiques later. To reduce the likelihood of people focusing on the first piece of information presented or feeding into the predispositions of others, have people form their own hypothesis and ideas before any sharing occurs. During meetings, have people write things down as opposed to calling out or raising hands, so that they won’t be influenced by what others do and say. 
  5. Challenging work encourages flow. Professionals generally work better when their goals are just within their abilities. Like Goldilocks, not too hard, not too easy. Walking this fine line also allows us to enter the highly creative, productive state of mind known as “flow.” Flow is not solely an individual phenomenon, but possible in groups as well. The state of flow is characterized by intense focus and the blurring of time, and it occurs when a challenge is not so easy that it becomes boring and not so difficult that it feels impossible. This is more easily achieved when the right members are paired to the right projects, and when the group is small and cohesive enough to trade ideas and provide feedback while staying focused and on track. 

Keep these 5 ideas in mind before scheduling your next meeting and watch your team’s productivity soar.


Collected, bundled, and sold: your sensitive private data [SLIDES]

Why does it matter that so many apps, web services, and devices collect data about practically every aspect of your life? The answer is partly because it often happens without your knowledge or permission. And partly because once data is created, it can live on indefinitely—and who’s to say where the data that you consider sensitive might end up? 

You can read an in-depth look at these data trackers in Entefy’s article, Collected, bundled, and sold: your sensitive private data.

Happy Thanksgiving

Thanks, with a heaping side of giving

Surrounded by friends and family, delectable dinners, and companionable conversations, everyone at Entefy will be taking inventory of where we are today and where we hope to be tomorrow. 

In these moments of reflection, it’s all too easy to focus on the many challenges the world faces today. It can be much harder to recognize the many areas where things are going well. And the many signs of a better tomorrow just over the horizon. It is for these bright shimmering signs of hope and progress that we are thankful.

So much can be accomplished with more giving by more people and we believe it is giving that truly expresses our thanks. Companies giving back to the people and places that make them successful. Individuals giving a little bit of ground to ensure a fair compromise and a respectful listen. 

From everyone at Entefy, we’d like to extend to all our warmest wishes for a bountiful Thanksgiving season. 

The Entefyers


The digital chasm: Ageist design in technology is a problem for everyone

If you’re above the age of 25 and you use Snapchat, there’s a good chance a teenager taught you how to navigate the youth-centric disappearing photo app. Snapchat became wildly popular after its 2011 debut, especially among users on the younger end of the millennial spectrum. But the app confounded most older users, giving even tech-savvy 30-somethings a taste of what it’s like to use tech products that cater to youth. Snapchat is just one example of the tech industry’s disregard for the needs and preferences of many older users. And that disregard is a big problem for everyone.

We’re all familiar with the trope of parents and grandparents relying on their kids to teach them how to turn on their computers or connect their smartphones to Wi-Fi. But beneath the joke lies a more substantial issue. Technology is often designed for the young, by the young, and the rest of the population is left to play catch-up. 

That’s a problem, because as everyone becomes more reliant on technology for everything from buying groceries to accessing medical care, poor user experience design excludes people from important services. It’s also short-sighted from a business perspective, since a middle-aged executive likely has far more buying power than a smartphone-savvy teen. Yet many of our most commonly-used platforms seem to disregard usability factors for all but the youngest consumers. 

The digital chasm

Americans check their phones 8 billion times a day, and the average person will spend five years of their life on social media. Given all of that connectivity, you’d assume that the mobile apps that consume so much of our time are created with usability as the number one focus. Yet across many digital experiences, it’s clear that that’s not the case – especially if you’re older than 35.  

At first glance, the youth-focused nature of tech design makes some sense. Research indicates that 18-34-year-olds show the highest rates of app downloads, time spent in-app, and device usage overall. Because they’re connected to a wide range of peers through social networks, they’re more likely to learn about and embrace new apps and digital products faster than older consumers.  

While baby boomers and seniors are unlikely to outpace millennials on screen time (research shows that young users spend more time on their smartphones each day than they do with other people), smartphone and social media use is on the rise among older cohorts. One third of senior citizens use social media to get their news and entertainment, and 40% own smartphones. 

Yet many older adults feel excluded from current technology trends. For instance, 77% say they would need help setting up a tablet or smartphone. Once they get online, however, many are quite active digitally and feel positively about their web experiences. Improved usability could extend the benefits of the Internet and mobile devices to a much wider range of people.

Despite this growing adoption, many tech platforms appear to be designed with little regard for older users. A digital native might intuitively know how to navigate the overwhelming number of features on a given platform. But millions of smartphone and social users didn’t grow up with these technologies, and designers aren’t making it any easier for them to adapt. 

Ageist design on major platforms

Consider Facebook. When you open the page or app, you’re bombarded with activity from your friends and family. If you don’t get sucked into your constantly updating news feed, you may find yourself inundated with group and event listings, chat notifications, and friend recommendations for people you may or may not know. It’s easy to spend hours wading through all of the content and options. And many people do. The average Facebook user spends 50 minutes per day on the platform, nearly six hours per week. That’s 42.6 billion man hours per month given the platform’s 1.65 billion monthly active users.
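A quick back-of-envelope check shows how those usage statistics compound into that monthly total (the figures below simply restate the numbers quoted above):

```python
# Rough check of the monthly usage math: 50 minutes per user per day,
# 1.65 billion monthly active users.
minutes_per_day = 50
days_per_month = 31
monthly_active_users = 1.65e9

hours_per_user_per_month = minutes_per_day * days_per_month / 60
total_hours = hours_per_user_per_month * monthly_active_users

print(f"{total_hours / 1e9:.1f} billion hours per month")  # 42.6 billion hours per month
```

Roughly 26 hours per user per month, multiplied across the user base, yields the 42.6 billion figure.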

But is the design user-friendly for all? Not so much. That’s why you see older users unwittingly posting personal messages as status updates or announcing to the world that they don’t know how to copy and paste. All of which indicates real usability problems – especially when you consider the privacy implications. 

Even people who have used Facebook since its launch in 2004 gripe about its confusing privacy settings. As users become more aware of how their social posts impact their employment prospects and even their data security, they want more control over who sees what and when. Facebook has responded to demands for increased privacy control, but usability challenges persist for users already overwhelmed by the interface or how to manage their accounts. 

But Facebook isn’t the only company that seems to take a less-than-inclusive approach to usability design. In recent years, critics have taken Apple to task for emphasizing aesthetics over functionality. Two of Apple’s earliest designers lambasted the company, which they say was once “a champion of the graphical user interface,” for abandoning “the fundamental principles of good design.” They say that in its quest for beautiful design, Apple created “obscure gestures that are beyond even the developer’s ability to remember.” And if a developer doesn’t find them intuitive, what chance does a late-adopter stand?

Then, of course, there’s Snapchat. One writer described his experience of attempting to use the app this way: “I end up flustered and sweating, haplessly punching runic symbols in a doomed bid to accomplish the basic task of viewing my friends’ messages before they expire. Snapchat, in short, makes me feel old.” Another put it even more succinctly, writing, “Snapchat has an age limit. But it isn’t set by asking you for your birthday. It is set through an interface that is so confusing you need to be young to get it.” 

Not every app is going to appeal to every age group, and there’s nothing inherently wrong with catering to young users. But shouldn’t interfaces prioritize universal usability? 

Inclusive design improves lives 

Although baby boomers and senior citizens are slower to adopt new tech than their younger counterparts, it’s often not for a lack of desire. Data shows that older people are increasingly embracing new technology, especially products that accommodate the challenges of aging, such as vision impairment and arthritis. However, users over the age of 60 are reluctant to purchase smartphones because the “smaller screens and complex menus” are more difficult for them to use. 

A move toward inclusive usability design has long-term benefits. Terry Bradwell, chief enterprise strategy and information officer at AARP, weighed in, saying, “As long as tech changes, there will always be a divide of some sort.” Most millennials grew up with constant connectedness, so while they’re not quite digital natives, they’re adapting OK – for the time being. But as we’ve witnessed in the past 20 years, technology is advancing at an unprecedented rate. Unless we begin prioritizing usability for all ages now, Snapchat will only be the tip of the iceberg in a generational technology divide. 

Chess

“Intrapreneurship” isn’t a typo. It’s your company’s best defense against competitive disruption.

The idea of intrapreneurship is attracting more and more attention from professionals and companies alike. And for good reason. Intrapreneurship represents a new way of thinking about innovation inside organizations.

If you haven’t encountered the term before, here’s how intrapreneurship relates to the more familiar entrepreneurship. Entrepreneurship encompasses all of the activities related to dreaming up and launching new business ventures. Intrapreneurship, by comparison, represents those same activities taking place inside a company, focusing on areas like identifying new market opportunities or improving policies and processes. In a word, intrapreneurship is innovation and change-making focused on improving the competitive position of an existing company.

Below we’re sharing 11 core ideas about intrapreneurship and intrapreneurs. This list is a mix of advice for professionals interested in the topic and companies looking to create an effective innovation culture.

  1. Go deep, not broad. The first thing to know about intrapreneurship is that employees should focus on specific challenges their company already faces. So be hyper-focused when pitching your manager. Explain the issue you’ve identified, how you plan to solve it, the impact your project will have, and the types of resources you need to get it done. Although intrapreneurship endeavors can be great opportunities for cultivating new skills, make sure you have enough baseline know-how to see the project through. Alternatively, you can showcase your leadership instincts by assembling an informal team of colleagues who possess the complementary skills the solution requires. 
  2. Drive productivity. Just as entrepreneurs require room to experiment, intrapreneurs need independence to thoroughly investigate the problem they are trying to solve. The payoff is that “Intrapreneurs take risks and find more effective ways to accomplish tasks. An intrapreneur, in the most basic sense, is a skilled problem solver that can take on important tasks within a company.” 
  3. Earn buy-in before you present your idea. If you already have a track record of self-directed success, your bosses may give you some leeway. But if you’re new to intrapreneurship and want to make a good impression, find a champion for your idea. Figure out who stands to benefit most from your solution, and get their feedback before running it through the official channels. Having internal support before you’ve even pitched the concept will give you credibility with key decisionmakers.
  4. Own innovation—or else. At the core, intrapreneurship is about innovation: “The means by which large, mature corporates can develop and harness the commercial energy that will grow the business in a constantly changing and fiercely competitive environment.” With the speed of business today, even long-established market leaders face constant threats of competitive disruption. By fostering innovation, companies can stay one step ahead of that hot startup gunning for them.
  5. Leverage the millennial spirit. It’s often said that millennials are natural entrepreneurs, flush with skills like leadership, innovative thinking, and a bias for action. Those talents are prerequisites for intrapreneurial roles inside a company, and millennial professionals can contribute original thinking to innovation teams.
  6. Let mavericks thrive. Companies with more conservative cultures may have difficulty with the concept, but those are also the companies likely in need of radical thinking to identify emerging competitive threats, generate new product ideas, and demonstrate the effectiveness of new ways of thinking. 
  7. Recognize your intrapreneurs or lose them. Data shows that 70% of successful entrepreneurs dreamed up their startup while working at a previous employer. To build your innovation engine, your company needs to find ways to energize and incentivize your employees to be more intrapreneurial, and then capture and implement their best ideas. Your best intrapreneurs are only one step away from forming your industry’s next innovative startup. 
  8. Create a culture of intrapreneurship. Any professional with experience at innovative companies can contribute to creating a culture where intrapreneurship thrives. That means communicating and supporting values like transparency, rewarding proactive behavior (something corporations aren’t always good at), fixing problems as they arise to avoid normalizing the bad, and encouraging healthy internal competition. 
  9. Don’t focus on solutions at first. In some cases, intrapreneurs shouldn’t be given a specific problem to solve. The process of finding a solution entails crossing off possibilities one by one—and one of those discarded ideas might actually be the winner. “It’s better to stay in what we call the ‘problem space’ for as long as possible.” So give intrapreneurs space to develop a deep understanding of the problem before setting them on the path to developing viable solutions. 
  10. Define success. Don’t chase trends; you’ll always lag behind. True innovation comes from creating products and services that are singular experiences that other businesses can’t replicate. Setting up dedicated teams focused on discovering where true innovation lies is a cornerstone of long-term customer engagement and long-term success.
  11. Turbocharge intrapreneurship with artificial intelligence. As Entefy has written previously, AI turbocharges intrapreneurship. Here’s how it works. As AI enters the mainstream, cognitive systems assume low-level tasks and free employees to focus on higher-value work. The professionals that will thrive in this environment are—you guessed it—the natural intrapreneurs, who transform their newfound freedom from drudgery into powerful new ideas. 

Intrapreneurship programs are win-wins for businesses and employees. They allow companies to capture the strongest ideas from their brightest thinkers by giving them the autonomy to do their best work.

Mobile phone

Humans make irrational and emotional decisions. But why?

What separates humans from all the other creatures sharing the Earth with us? One difference is of course our capacity for rational thought and logical decision-making. Then there’s our now 50,000-year-old legacy of supplementing our natural capabilities with increasingly sophisticated tools—everything from labor-saving devices to medicines to, most recently, smartphones.

But are we humans always rational? Do we always use our tools optimally? Well, no. Just think of the last time you made a snap decision when deeper analysis would have served you better. Or procrastinated on work in favor of more time catching up on social media.

We live in such an interesting time, where our most ubiquitous computing devices—smartphones—are equal parts Swiss Army knives of powerful capabilities and, when we’re not careful, the ultimate time thieves. But that is changing as cognitive technologies like artificial intelligence move out of the labs and into our lives. Because one thing AI does very well is make recommendations by studying data (about you, about the weather, about diseases, about everything), delivering insights that would otherwise elude us.

It’s just one small step from AI-powered recommendations to AI-powered decisions made on your behalf. But there are two important questions to answer before we hand over decision-making duties to our favorite devices. Do we want technology making decisions for us in the first place? And, if so, what’s gained and what’s lost when smart machines become even more central to our day-to-day lives?

The complex relationship between emotions and logic

Before digging into the technology side of things, it’s worth noting why we so often get things wrong despite our best intentions. Good decisions require putting together the right information in the right way. The more information there is, the more complex the analysis and the easier it is to overlook or misread the details. Of course, we don’t always have the time or energy to gather all the information that might exist, and so we rely—more than we might realize—on gut reactions and rules of thumb. 

An interesting example of this comes from the work of neuroscientist Antonio Damasio. In the 1990s, his research into how emotions impact decision-making found that when the brain regions responsible for processing emotions are damaged, people can retain their ability to reason yet become unable to make seemingly simple decisions.

Damasio studied a patient known as Elliot, who at one point was a successful businessman and loving husband. Elliot suffered from a brain tumor that damaged parts of his orbitofrontal cortex, the brain region that connects the frontal lobes with our emotional machinery. 

Elliot retained a good IQ score, and those around him felt that he remained an intelligent and able man. Yet he began leaving work unfinished to the point that he was fired from his job. He divorced his wife only to marry someone his family disapproved of, divorced again, and then went bankrupt after going into business with a shady character. 

Damasio found that Elliot could think up many options and ideas regarding certain decisions, but something broke down when actually making a decision. “I began to think that the cold-bloodedness of Elliot’s reasoning prevented him from assigning different values to different options,” recalled Damasio, “and made his decision-making landscape hopelessly flat.”

Perhaps it’s here that technology could help. When we lack the time or energy for thorough consideration, our smart devices can come to our aid. With their ability to quickly process data, they should be able to help point us in a direction that’s more logical than our instincts. Except that the technology that could do this still faces some man-made headwinds. Namely, the modern-day realities of choice overload and information overload.

Choice overload

We face and make so many choices every day. Choice seems like a good thing, offering us greater freedom and autonomy, the ability to find something that perfectly suits our needs. But all these options have a downside. When we are asked to make decision after decision, our brains become tired and frustrated. Psychologists call this decision fatigue, and when it sets in it takes a concerted effort to muster the self-control needed to continue making smart choices throughout the day.

Psychologist Roy F. Baumeister’s research into the power of self-control has found that people who fight their urges, such as resisting cookies and sweets, are actually less able to resist subsequent temptations. When he had people watch an emotional movie while explicitly trying not to display their emotions, they became more emotional more quickly on subsequent tests of emotional self-control. 

This effect can be seen in a dramatic way in the justice system. Research into judicial decisions found that prisoners at parole hearings who were evaluated early in the day were granted parole 70% of the time, while just 10% of those judged later in the day received a favorable ruling. 

The judges responsible for granting parole get worn down as the day goes by, and when they lack the energy to make a detailed analysis, they choose the easy option or the one that leaves them more options in the future. 

Interestingly, one solution to this problem is to eat. Research shows that the brain consumes nearly 20% of the body’s energy, making it a prime candidate for a glucose boost when we get tired. Another solution is to use technology in ways that reduce, not increase, our options. Easy. Except for our second headwind to technology-powered decisions: information overload.

Information overload

Part of the reason we have so much choice is that we have so much information. When trying to decide something that requires some research, it’s easier than ever to consume article after article hoping to come to a full and complete understanding of the topic, and thus be primed to make a perfectly rational and optimal decision. If only our brains would cooperate. Because the human brain doesn’t do well with information overload.

For some insight into how the brain can mishandle information, we turn to the world of professional sports. Economist Richard Thaler conducted a study of star players selected during the NFL draft and found that teams often place a disproportionately large value on the early picks. Thaler observed that the scouts responsible for examining players could become overconfident in their decisions the more information they gathered. Thaler wrote, “The more information teams acquire about players, the more overconfident they will feel about their ability to make fine distinctions.”

While accumulating information seems like the ideal way to get to the bottom of something, any objective analysis requires not only that we know where and what to look for, but also that we interpret that information correctly.

During the search stage, it is easy to favor information that supports or confirms a hunch we already have. Thanks to all the information available online, it is not difficult to find support for what you believe to be true, but this often leaves us with a biased perspective. When there is no expert consensus on an idea or hypothesis, we are likely to find many theories, opinions, and observations examining the problem from opposing or differing angles. Yet we often select only the information that supports our initial hunch.

The mind prefers cohesion, and holding multiple ideas that conflict with each other can make us feel uncomfortable—what psychologists call cognitive dissonance. After all, the brain likes nothing better than to jump to a conclusion simply to escape getting overheated. 

The best strategy for countering information overload and cognitive bias is to expose yourself to contradictory information and counter-arguments. But it’s also important to know when to call off the search. Answers aren’t always found in the time we have, and time is a scarce resource. It pays to establish boundaries to your search by limiting how long you work on the problem or raising your threshold for what information you consider relevant.

Using technology to decide

We are not built to make decisions all day long and, when we try, we tend to grow tired and impulsive. Technology can certainly help. Cognitive technologies like artificial intelligence are capable of crunching numbers and solving problems too complex or time-consuming for us. And as artificial intelligence and computing power continue to improve, this ability will only increase. 

Yet before we can fully come to rely on our devices to augment our decision-making capacities, we need to establish trust. We need to be sure that the decisions being made are in our best interests, and not the interests of the owner of the platform providing the technology. That’s a big challenge of choice overload: machines need to support us when we need it, not just assume our responsibilities. 

Then there’s the delicate role of information overload: designing systems that interpret data the way we would.

With time and the ongoing advances in AI, we’re not too far off from confidently making the right choices about the tools we’ll use to make choices.