Powerful arguments to convince anyone why net neutrality matters

On December 14, 2017, the FCC voted to repeal the rules that ensured that Internet Service Providers (ISPs) provide Internet access without restriction, preferences, or prioritization. It was a sad day in the history of the Internet.

The term net neutrality describes the principle of a free, open, democratic Internet where no one website or service is given priority over another. Despite evidence that 83% of Americans opposed its repeal, net neutrality is no longer the law of the land and, instead, ISPs are able to use their pricing power and near-monopoly status in many markets to slice and dice the Internet in whatever ways best boost their profits. Without net neutrality, it’s easy to imagine ISPs charging customers extra for “fast lane” access to certain websites—at their sole discretion—in the same way that cable television providers offer tiered pricing and channel packages.

The good news is that the principle of net neutrality isn’t dead. At least not yet. But it will take Congress changing current laws or favorable rulings in the courts to reinstate a free, open Internet.

In case you’re on the fence about net neutrality, or in need of some motivation to take action and show your support, we’ve assembled 6 reasons why net neutrality is a good thing. Each of these arguments on its own justifies enshrining the right to a free Internet into law. Taken together, they’re an unassailable argument in favor of protecting the Internet from the narrow interests of a handful of industry Goliaths.

At the end of the article we’ve shared resources for following the net neutrality fight and expressing your support for the cause.

  1. Net neutrality protects consumer choice and free speech. The top 4 broadband ISPs control 75% of the residential market. For fast Internet (100 Mbps) access, 88% of the country has either one or no provider. And many of the largest ISPs are also content producers, which means they have a financial incentive to limit competing content. At its core, this conflict of interest has free speech implications: “In 1776, Thomas Paine didn’t need the permission of any other content creator or distributor to circulate Common Sense. But without rules prohibiting blocking, throttling, and the like, broadband providers would gain the power to limit what unpopular content flows over their networks—to the detriment of consumers and democracy.”
  2. Net neutrality is pro-business. Free access to any kind of information drives personal and corporate productivity, enables new products and services, and allows for healthy competition among established companies and disruptive upstarts. The only businesses that win without net neutrality are the handful of ISPs. “Without net neutrality rules, prioritization of internet traffic by telecom and cable companies would skew the competition for content, as well as tilt the scales in the dissemination of all political and social views in favor of websites and companies that are able to pay internet access providers.”
  3. Net neutrality is pro-freedom. Without net neutrality, ISPs are allowed to take certain actions that directly impact four Internet freedoms that consumers have come to value highly. Michael Powell, the FCC Chairman under President Bush, defined these freedoms in a 2004 speech: The freedom to access any content, so long as it was legal; the freedom to access any service or application; the freedom to use their Internet connections on any computer or device; and the freedom to get detailed, transparent subscription information from their ISP. These freedoms are now in jeopardy.
  4. Net neutrality gives consumers the power of choice. In the absence of net neutrality, ISPs have the right—and the financial incentive—to bundle Internet services, including charging more for access to specific websites and services. With few or no options for many, consumers won’t even be able to vote with their pocketbooks when it comes to any new pricing models. ISPs gain the upper hand.
  5. Net neutrality preserves the competitiveness of the U.S. technology sector. A free, open, non-preferential Internet has an important feature: every company that relies on the Internet for its products or services can compete on even footing with every other. This enables the quintessentially American ideal of fair competition. Without net neutrality, ISPs are in a powerful position of being able to pick winners and losers through their pricing power. Said one economics professor: “U.S. growth and worldwide dominance of high technology would be significantly challenged without network neutrality.”
  6. Without net neutrality, the U.S. joins the out-crowd. Another result of the FCC vote is that the U.S. joins an exclusive group of countries without consumer protections for free, open Internet access. That group includes North Korea, Saudi Arabia, Cuba, and Iran. Also on the list is Portugal, where the “country’s wireless carrier Meo requires users to pay additionally for apps and services they would like to use, like WhatsApp, Facebook, Snapchat, and Messenger. Video apps are also offered as paid add-ons in a variety of bundles.”

Concerned about the future of the Internet? The fight for net neutrality will continue, so there’s still time to make a difference. The nonprofit Fight for the Future maintains a website called www.battleforthenet.com that features an automated tool for contacting your Congressional representatives. It is a good starting point for learning about the issues and taking action to support Internet freedom. 

Deepen your understanding of any subject with these 6 strategies

Always be learning. For professionals today, keeping pace with the changing dynamics of business is an imperative. And the idea that a diploma represents the end of learning is an old-fashioned one. Yet knowing that’s true isn’t the same as knowing how to learn effectively, or how to manage the growth of your knowledge and skillsets into new specialties and directions.

If continual learning seems daunting, it doesn’t have to be. It’s not necessary—or even desirable—to start from scratch every time you sit down to explore a new subject. There’s one very powerful toolbox on your side: mental models.

The investor Charlie Munger is Warren Buffett’s longtime partner. In a talk at USC Business School in 1994, he laid out the rationale for using mental models to better understand the world. Here’s what he said:

What is elementary, worldly wisdom? Well, the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ’em back. If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form.

You’ve got to have models in your head. And you’ve got to array your experience—both vicarious and direct—on this latticework of models. You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and in life. You’ve got to hang experience on a latticework of models in your head.

What are the models? Well, the first rule is that you’ve got to have multiple models—because if you just have one or two that you’re using, the nature of human psychology is such that you’ll torture reality so that it fits your models, or at least you’ll think it does.

He goes on to recommend that your own mental models should come from different areas and disciplines, giving you the intellectual flexibility it takes to foster expertise in a particular subject. Mental models serve a similar function to a pilot’s preflight checklist. There are many dozens of things that have to be confirmed to ensure a safe flight. Sure, a pilot could try to work from memory, but consistent use of preflight checklists improves safety dramatically.

Similarly, when considering a situation, there are many ways of looking at that event. There is a scientific view, an economic view, a social view, a sustainability perspective, and so on. The more complex and consequential the topic, the more worthwhile it is to have a cognitive toolbox of mental models to apply. This becomes your own personal “latticework of theory,” to use Charlie Munger’s term.

You might be surprised by just how many of these mental models you already know about, or even use frequently. As you look over the following list, take note of which models “click” with you—these are the ones that deserve your attention and are likely to be useful to you in the future.

Here are 6 mental models that you can use to deepen your understanding of practically any subject:

  1. Occam’s razor. Among competing and equally plausible explanations for a phenomenon, simplicity should be given preference; that is, prefer the explanation requiring the fewest assumptions. When you hear the sound of galloping hooves, first assume horses, not zebras.
  2. The map-territory relation. A representation of reality is not necessarily reality itself. Complex systems require abstract representation in order to simplify them sufficiently to be understood. We use maps, pictures, sketches, and measurements to represent something, but those representations are always potentially fallible. There is an imperfect relationship between reality and the models we use to represent and understand reality. This mental model suggests two questions: Is what I am looking at the map or the territory? And, is the map an accurate representation of the territory?
  3. Bell curves or normal distribution. You’ve probably heard of bell curves and standard deviations. For many things in life, there is a normal distribution of outcomes that can be represented as a bell curve. Find the average, then expect an equal distribution on either side of that average. Height and IQ are two normally distributed attributes. The average IQ might be 100, but 2% will have scores above 130 and 2% will have scores below 70. This mental model triggers the question: Is what I am looking at the average or the exception? 
  4. Feedback loops. Many systems have one or more feedback mechanisms that can impact strategic decisionmaking. Some systems are very simple: A causes B. In complex systems, there are many steps: A causes B, B causes C, C causes D, and so on. Feedback occurs when a later step in the chain influences an earlier one, so that the system regulates itself. For example, you move into an office which, unknown to you, has a remotely controlled thermostat. It feels chilly, so you bring in a portable electric heater. No matter how warm you set the heater, the room stays cool because the thermostat is a hidden feedback mechanism: the warmer you set the heater, the harder the ventilation cools the room in order to maintain the thermostatic setting. In order to accurately change complex systems, you have to know what all the feedback mechanisms are.
  5. Zero-sum or non-zero-sum. Some systems are zero-sum and others are non-zero-sum, and it is vital to know the difference. Zero-sum systems have winners and losers. Sports games, spelling bees, and chess are all examples where there is a winner and a loser. Non-zero-sum systems are far more desirable because everyone can emerge better off than they were before—everyone can be a winner. Voluntary markets are an example of a non-zero-sum system, where everyone who participates finishes better off than if they had not. Zero-sum systems rest on the question: How do I win? Non-zero-sum systems rest on the question: How can we win?
  6. Correlation is not causation. This mental model is one of the most fundamental principles in statistics. When one thing happens and then a second thing happens, you can’t assume there was a causal relationship between the two. Many things are correlated that are not causally related. For example, data will show a correlation between the age of Miss America and the number of people who die by hot steam; clearly, there is no cause-and-effect relationship between these two things. Our pattern-seeking brains fall prey to confirmation bias all the time, making spurious correlations one of the most common decisionmaking mistakes. This mental model suggests the question: Am I confident that I know the real causal relationships?
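The bell-curve figures in item 3 can be checked with a few lines of Python. This is an illustrative sketch, assuming the conventional IQ scaling of mean 100 and standard deviation 15 (the standard deviation is an assumption; it isn’t stated in the text above):

```python
from statistics import NormalDist

# IQ scores are conventionally scaled to mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Shares of the population more than two standard deviations from the mean.
above_130 = 1 - iq.cdf(130)  # right tail
below_70 = iq.cdf(70)        # left tail, symmetric with the right

print(f"above 130: {above_130:.1%}")  # about 2.3%
print(f"below 70:  {below_70:.1%}")   # about 2.3%
```

The symmetry of the two tails is the point of the model: once you know the average and the spread, you know how rare the exceptions are.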
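The hidden-thermostat story in item 4 can also be simulated. The constants below (heater strength, thermostat response) are invented for illustration; the point is only that the negative feedback swamps the input:

```python
def room_temperature(heater_setting, thermostat_target=68.0, rounds=60):
    """Iterate a toy negative-feedback loop until the room temperature settles."""
    temp = thermostat_target
    for _ in range(rounds):
        heating = 0.1 * (heater_setting - temp)     # heater nudges the temperature up
        cooling = 0.5 * (temp - thermostat_target)  # hidden thermostat pushes back harder
        temp += heating - cooling
    return temp

# A 15-degree jump in the heater setting moves the room only about 2.5 degrees.
print(round(room_temperature(75), 1))
print(round(room_temperature(90), 1))
```

Cranking the heater mostly triggers more cooling, which is exactly why you can’t reliably change a complex system without first mapping its feedback mechanisms.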

Remember, these are only 6 of many possible mental models. They are listed here as a useful set of perspectives that can help you understand complex situations or get up to speed on a new topic. Using them and discovering your own can quickly deepen your understanding of the world around you.

AI’s disruptive impact in 10 industries [SLIDES]

$57.6 billion is expected to be invested in artificial intelligence and other cognitive technologies by 2021. Across a very diverse set of industries, AI is supporting new products and services, and upsetting longstanding competitive dynamics.

This presentation focuses on AI disruption in industries known for making smart use of emerging technologies. From telecommunications to travel to media, companies large and small are pursuing opportunities created by new AI algorithms. 

You can read more about the research featured in this presentation in Entefy’s article, Making smart use of smart systems: AI’s disruptive impact in 10 industries.

Investing in artificial intelligence? Here are 3 things to do today to ensure ethical AI tomorrow.

As internally developed artificial intelligence systems move from lab to deployment, the importance of creating unbiased, ethical systems is greater than ever. The challenge is that there is no simple solution to building ethical consideration into AI algorithms. But there are a few things you can do early on that help.

To get a sense of the many issues, let’s check out a hypothetical AI-powered job candidate review system. Designed as an AI chatbot, the system vets potential candidates, analyzing a wide range of objective criteria to determine whether someone should pass through the initial application stage. The company restricts the system’s access to data about a candidate’s gender, age, and ethnicity in order to promote a level playing field for candidates who might otherwise be overlooked.

The system provides another benefit, reducing the chance that a hiring manager’s bad mood or distracted mindset will hurt a qualified applicant’s prospects of landing the job. Operating without emotion, the AI system evaluates candidates based on their experience, skillsets, and even their empathy levels. But the system doesn’t make decisions; it makes recommendations, passing along the most promising candidates to the company’s human managers, who then rely on their professional judgment to make a final decision.

This simplified example highlights the need to anticipate non-technical factors like data bias in designing AI systems. Decisions made early on in the planning process help ensure your company successfully engineers an ethical AI system.

Indispensable human judgment

Data is vital to decisionmaking, and AI helps gather and parse that information. It can even generate reports and recommendations based on the objectives with which it’s been programmed. But a machine learning algorithm can’t tell you whether a decision is ethical or whether it will irreparably damage morale within your organization. It hasn’t spent years honing its business intuition – the kind of intuition that tells you that even though a decision looks right on paper, it would be a betrayal of your core client base. 

That’s where human judgment enters the picture. There are a number of approaches you can take for integrating AI into your decisionmaking strategies. Depending on how high the stakes are and the problem you’re trying to solve, you might outsource the job to AI but insist that a person review its findings before action is taken. Or you might identify key areas that will largely be the domain of AI, relieving you of the need to be involved in every decision related to that particular process.

But human judgment will remain central to business decisions for some time to come. In fact, judgment and interpersonal skills will be at a premium in the workforce of the future. As AI becomes an increasingly prominent tool in our professional arsenals, we must ensure that we’re using it ethically. Here are some ways to do that: 

1. Identify your company’s core values 

Systematizing your company’s core values starts with identifying and documenting those values. Start a process that captures the values that have become central to your company culture. Writing in Harvard Business Review, one researcher made a useful distinction between “values” as marketing and “values” as deeply held beliefs: “If you’re not willing to accept the pain real values incur, don’t bother going to the trouble of formulating a values statement.”

If an AI program suggests a course of action that makes sense on paper but not in the broader context of your organization’s long-term goals, you’ll need a strong internal compass to make the right call. Data is important, but you’re ultimately responsible for your decisions. When called upon to explain your actions, you can’t default to saying, “The AI made me do it.” Use the tools to gather information and add context to your decisionmaking process. But when you make a choice, human nature should be in the mix. 

2. Establish an AI oversight group 

Machine learning systems are only as good as the data we feed them. Which immediately creates a challenge for AI system developers: humans are biased. Even the most fair-minded person carries unconscious biases. And so without meaning to, developers can end up corrupting the very systems they’re designing to help us make more objective decisions.

To get around this problem, create internal AI watchdog groups that periodically review your algorithms’ outputs and can address complaints about discrimination and bias. Then use the group’s findings to refine your AI-assisted approach to leadership. 

3. Use AI to facilitate better experiences for customers and employees alike 

Machine learning systems can generate powerfully personalized experiences—for both customers and employees. The World Economic Forum suggests that using AI ethically includes shifting employee performance metrics from output-based measurements to evaluating the creative value they bring to the company. “Although there are roles under threat, there are also roles that will become needed more than ever. It’s more cost efficient to retrain current employees to fill the roles you need in the future than it is to hire new ones, and they are also more likely to be loyal to your organisation.” 

One great benefit of AI tools in the office is that they free people from drudge work. As their roles become more dynamic, so, too, should your evaluation standards.

By investigating these 3 areas early on in the development process, your company is better positioned to build new AI systems that reflect—and protect—your company’s values, and to improve the experiences of your customers and employees alike.

Top employee benefit for 2018: happiness

In the not-too-distant past, worker happiness seemed to fall pretty far down the priority list at most companies. Our culture celebrated the myth of busyness with little regard for how those long hours and the pressure to perform were impacting people’s well-being and sense of fulfillment.

Flash forward to today and happiness is of paramount importance. Some companies even employ Chief Happiness Officers, whose job is – you guessed it – to foster employee happiness. Their job duties range from organizing morale-boosting team events to conducting emotional check-ins with employees to implementing new happiness-oriented policies. They’re tasked with creating an environment that inspires worker loyalty and drives increased productivity.

Although some people are critical of the concept of having a dedicated happiness officer in the C-suite, few would disagree with the fact that happy workers are more productive workers. Research indicates that productivity jumps 12% among happy employees, and it drops by 10% among their less cheerful colleagues. Harvard University researcher Shawn Achor, who wrote The Happiness Advantage, found that increased happiness changes the way our brains function. People who are more optimistic are better problem-solvers because they’re more likely to spot new opportunities and potential solutions.

Whether you hire a Chief Happiness Officer or simply integrate happiness-supporting policies into your culture, it’s clear that employee satisfaction isn’t just a nice-to-have. It’s another competitive advantage for successful modern businesses.

Happiness and wellness go hand-in-hand   

What does happiness really mean from a workplace perspective? Certainly, a positive, collegial environment is more attractive than a tense, ulcer-inducing office space. However, discussions of happiness and productivity are a gateway to a broader topic: how employers can foster happiness through the right benefits and perks.

Tech startups in particular became notorious in recent years for offering flashy perks, such as foosball tables, ping-pong set-ups, unlimited snacks, and craft beer on tap. But the novelty of having a fun office environment seems to be giving way to desires for more substantial, lifestyle benefits.

One study showed that only 12% of workers surveyed feel that employers should encourage games in the workplace, indicating that fun but superficial perks aren’t relevant to employee happiness. However, people are drawn to companies offering more creative and impactful job perks, such as pet insurance, help paying down student loan debt and covering wedding expenses, and paid time off to volunteer.

That may be in part because Millennials, who represent the largest demographic in the American workforce, are entering new stages in their lives. As they marry or cohabit with long-term partners, buy houses, and start families, office game rooms and happy hours carry less appeal than flexible work policies, fitness incentives, and robust health insurance benefits. This age group is also adamant about finding work that fulfills and challenges them. An employer that pays you to volunteer (and pays your student loans) checks a lot of those boxes.

The shift is likely due to a growing cultural awareness around well-being too. Celebrity business leaders such as Arianna Huffington have been leading the charge for a fundamental overhaul of how we think about productivity and work-life integration. Huffington shared her story of collapsing at her desk one evening when she was exhausted and overworked. Since then, she’s made it her mission to sound the alarm about the dangers of burnout and fatigue. Her company, Thrive Global, advises big-name businesses such as JPMorgan on enhancing productivity and improving behavioral patterns through holistic methods. The company recently raised $30 million and is now valued at $120 million, indicating the growing interest in this field.

The business case for happiness

The emphasis on more valuable perks and cultural wellness makes sense from a business perspective. We know that happy workers are more productive workers, and it’s hard to be happy when you’re stressed and exhausted. The person who pulls 12-hour days to impress her bosses but barely has time to see her family isn’t going to be chipper for very long. The stress of her job, combined with constant anxiety about student loans and guilt over not spending more time with her kids – not to mention a lifestyle that involves a lot of take-out and very little exercise – is a recipe for a breakdown, not a professional breakthrough. Physical and emotional well-being directly influence people’s moods and energy levels, which in turn impact their productivity.

Given the correlation between well-being, happiness, and productivity, it’s not surprising that the businesses atop Fortune’s 100 Best Companies to Work For 2017 list offer a range of holistic benefits such as onsite childcare and fitness facilities, flexible work policies, tuition reimbursement, discounted gym membership rates, and other lifestyle-enhancing perks.

Having so many attractive options on-site likely keeps people in the office and working hard. But it also signals that it’s OK to take a few minutes to rest or refuel in healthy ways, rather than working from sunup to sundown. It’s becoming clear that the things that make us happy at work are the same ones that make us happy in our personal lives. Healthy habits, fulfilling work, and stability for our families lead to improved emotional states across the board.

While companies on the Fortune list have set high standards for employee benefits, you don’t have to have a multi-million-dollar budget to support workers’ happiness. A few core wellness policies can make a tremendous difference to your employees’ satisfaction and productivity.

AI’s global impact is expected to match the GDP of 177 countries

Consulting powerhouse PwC released projections for the economic value of artificial intelligence. They found that global Gross Domestic Product (GDP, the measurement of the total economic output of a place), now roughly $75 trillion, could be 14% higher in 2030 than it otherwise would be on the back of advancements in AI. That works out to an additional $15.7 trillion in economic value. PwC projects that China will see the greatest gains, with 26% GDP growth from AI, while North America should see a 14% boost. Big picture, the biggest sector gains will be in retail, financial services, and healthcare.

It’s easy to read a ginormous figure like $15.7 trillion dollars without the size settling in mentally. So here’s some context to this projection of AI’s global impact. Using data on 2016 GDP from the World Bank, $15.7 trillion represents the total economic output of 177 different countries. Ranking the world by GDP size, with U.S. #1, China #2, and so on, $15.7 trillion equates to the output of every country below #17 on the list – from the Netherlands, Switzerland, and Saudi Arabia all the way down to #195 Tuvalu.

Or to put it another way: By 2030, AI will create as much economic value as the current GDP of 177 countries. 

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you. 

Predicting predictions: useful traits that can make you better at predicting the future

Here’s something to keep in mind when you’re scanning the next “Predictions for 2018” article you come across: whether it’s an expert in a magazine or your own company’s market forecast, it’s easy to get caught up in predictions that sound great but have a flimsy evidentiary foundation. The hazard of forecasting is that people struggle to see past their own commitment to a particular vision for the future, and end up blinded by questionable assumptions or their own cognitive biases.

There’s plenty of data to support the notion that people are terrible forecasters—but there’s more to the story. Philip E. Tetlock is one of the leading researchers in this area. He notes in his book Superforecasting that how you think matters more than what you think. He goes on to identify a number of traits associated with individuals who do a very good job of making usefully accurate forecasts:

  • Be cautious
  • Be humble
  • Be intellectually curious and open-minded
  • Value and synthesize diverse views
  • Believe it’s possible to improve

With the speed at which the world moves these days, much of what we have great confidence in today has a good chance of being either wrong or irrelevant in a year, five years, or a decade down the line. Caution and humility usually do not come naturally to us; our tendency toward overconfidence is so reliable that there’s a term for it: the Dunning-Kruger Effect.

So with the gates of a new year wide open before us, let’s take a pause to look at some very prominent people making very prominent predictions that were so wildly inaccurate as to be comic in hindsight.

So what is ahead for 2018? More predictions, of course. Just remember that none of us will know whether they were good or bad for some time to come.

2017 was a great year for emerging technologies

As 2017 comes to a close, we thought we’d share a roundup of some major technology milestones that took place during the year. This report features technologies that you probably heard about in the news—like blockchain and gene therapy—along with important advances that didn’t get a lot of attention, like “hot” solar and liquid biopsies. Taken as a whole, it’s inspiring how many major improvements to people’s lives are just over the horizon.

Here are 11 breakthrough technologies from 2017:

  1. Quantum computers reach the 50-qubit threshold. Today’s computers have become known as “classical” computers because of the rapid advancement of an entirely new computing framework: quantum computing. Classical computers are based on binary systems that use bits that can be either on or off—the familiar 1 or 0. Quantum computation works using qubits that can be simultaneously 1 and 0, which greatly expands their power for doing complex, computationally intensive operations. Quantum computing is not a recent idea; it has been explored theoretically by researchers for some time. But during 2017, university and corporate researchers built functioning quantum computers that herald a new horizon in computing, including one system said to use 50 qubits—a significant milestone because that’s the level at which quantum computers begin to surpass even the fastest classical supercomputers.
  2. “Hot” solar cells. Solar panel technology has advanced in recent years, bringing costs down. In 2017 solar achieved a major new milestone in efficiency. These new “hot” solar cells convert heat to light and enable cheaper solar panels that provide continuous power. The secret to “hot” solar lies in converting heat into the spectrum of light that is most efficient at powering solar cells. Early prototypes are expected to move into commercial development in the years ahead.
  3. Harvesting clean water from air. In 2017, a team of scientists from MIT and the University of California, Berkeley successfully tested a process for harvesting clean water from the air using porous crystals that capture water vapor using no energy at all. Previous iterations of the technology required high moisture levels and a lot of electricity. By combining the water extraction tech with solar, zero-energy water creation becomes possible, with disruptive possibilities in everything from home appliances to agriculture to city water supplies.
  4. Gene therapy 2.0. Countless diseases are caused by a single anomaly in a single gene. Researchers have been theorizing and experimenting with gene therapy for decades. The idea is compelling: use a uniquely-engineered virus to deliver healthy copies of a gene into patients with defective versions. In 2017, scientists made major strides towards deploying gene-based therapies for the first time, opening the door to treating previously untreatable conditions. 
  5. Precision farming. Agriculture may be one of the oldest human enterprises, but 2017 saw food production rocket into the future. Commercial agriculture and individual farmers alike are deploying sensors, automation, drones, GPS-mapping tools, and data-analytics to create new capabilities that boost crop yields and food quality while also reducing water and chemical use. Australian researchers, for example, demonstrated a streamlined, low-cost monitoring system in Indonesia that uses sustainable solar and smartphones. 
  6. Reversing paralysis. Many scientists hoping to turn brain-computer interfaces (BCIs) into viable systems focus their work on paralysis, which often occurs after connections between the brain and nervous system are damaged. In 2017, a major milestone in reversing paralysis was reached when researchers wirelessly connected a BCI device to electrical stimulators elsewhere in the body. This created a ‘neural bypass’ that allowed a paralyzed patient to move his limbs. Next up are attempts to use similar technology to reverse blindness and restore memories lost to Alzheimer’s disease.
  7. Liquid biopsies. 2017 saw a major step forward in doctors’ ability to evaluate cancer. Traditional biopsies involve collecting tissue samples from the patient’s body for analysis by a lab. Because they often require surgery, biopsies are invasive, risky, and painful. Rapidly advancing through clinical trials, liquid biopsies allow cancer detection from a vial of blood, offering a faster and easier alternative to traditional biopsies as well as entirely new cancer detection capabilities. 
  8. Payments using facial recognition. During 2017, years of development in facial recognition technologies paid off in China, where facial-recognition systems went mainstream. These systems use a person’s face as a “key” to do things like authorize payments, pick up train tickets, and open doors in secure buildings. In the years ahead, the same technology is expected to expand into new areas like policing and everyday interactions with banks, stores, and public transportation.
  9. Space race 2.0. SpaceX, the private space technologies company, achieved a major milestone earlier in 2017 when it landed one of its Falcon 9 rockets. The landing itself wasn’t the news; it was that the same rocket had been launched and landed once before. By demonstrating that reusable rockets are possible, the space industry took a major step toward lower-cost access to space. But that wasn’t the only big news in the space industry. Blue Origin announced that it would begin carrying tourists to space within 18 months. And then there’s NASA’s announcement of a mission to an asteroid containing an estimated $10,000 quadrillion worth of nickel and iron.
  10. Blockchain beyond Bitcoin. Bitcoin’s skyrocketing price got most of the media attention during 2017, but there were big milestones taking place in blockchain, the foundational technology that enables cryptocurrencies like Bitcoin. New uses of blockchain were launched in applications as diverse as music distribution, automobile title registry, and government IDs. Blockchain is rapidly disrupting industries like healthcare, insurance, and financial services, with $5.4 billion in global investment expected by 2023.
  11. 4D printing. Just as 3D printing is moving into the mainstream in the aerospace, energy, electronics, and manufacturing industries, a new technology is coming of age: 4D printing. In 4D printing, the fourth dimension refers to time, because 4D-printed materials are designed to change shape when exposed to water, temperature changes, or other factors. New uses of the technology emerged during 2017 when the aerospace industry began testing the use of 4D printing to send compact, collapsed materials into space where they can self-assemble. 

In the months and years ahead, we expect many of these technological marvels to create not only new capabilities but entirely new products and even industries. The future looks bright.


15 jobs that barely existed 15 years ago

Creative destruction is the economics term for the incessant churn of markets and economies. New technologies and macro-level trends create new products and services that, over time, replace the demand for legacy products and services. The horse-and-buggy industry gave way to the automobile industry, which today is transitioning to the autonomous, electric vehicle future. 

Jobs, too, evolve with the times as demand for one set of skills is replaced with the need for entirely new skills. The past 15 years have been no exception. Many of today’s highest-paying roles have come into existence only recently; and some of these roles are already in decline as new technologies herald the need for entirely new skillsets.

As 2017 draws to a close, we thought it would be timely to highlight just how many of today’s jobs simply didn’t exist 15 years ago. It’s surprising that such a relatively short period of time, just a decade and a half, spans the birth of technological fixtures of our lives like the smartphone and on-demand services like ride hailing, along with the jobs that make them possible.

Here are 15 jobs that barely existed 15 years ago:

  1. Mobile app developer. The iPhone arrived in 2007, followed soon after by Android devices. And just 10 years later, half of the world’s adults have a smartphone. Many tasks that once got done sitting in front of a desktop and keyboard are now more commonly done on a mobile device, everything from banking to communication to shopping. So naturally there was an explosion in the number of mobile app developers.
  2. Social media manager. These days, businesses large and small have personnel dedicated to managing a company’s social media presence. Social media is used for traditional marketing as well as customer service. Social media managers (and related jobs) evolved from earlier digital marketing roles to encompass the specific skills needed to help a brand stand out on today’s social media channels. 
  3. Ride-sharing driver. Once upon a time, urbanites stood on street corners with arms raised, hoping to attract a passing taxi cab. Today, of course, ride hailing is done with a few taps of an app, and yesterday’s taxi driver is giving way to today’s gig economy driver. Uber, the leading ride-sharing company, was founded in 2009 and quickly grew to become the world’s most valuable startup. One interesting dynamic about app-enabled ride sharing is that it has created driving jobs that will prove to be only temporary. Companies are investing heavily in self-driving cars and have their sights set on flying cars as well.
  4. Self-driving car engineer. While driverless cars are poised to make a major dent in transportation, they are also beginning to create new types of jobs. Driverless cars can’t yet develop or repair themselves, so the engineers, mechanics, and software developers who build and maintain them are in demand today and should remain so into the foreseeable future.
  5. Cloud computing specialist. You don’t have to go back much more than 10 years or so for the statement “I work in the cloud” to sound downright crazy. While the concept of distributed software services had been around for some time, cloud computing really took off after 2006. Today, more than half of U.S. businesses use cloud services of one form or another. 
  6. Big Data analyst and data scientist. Digital data management by IT professionals isn’t new; but beginning in the mid-2000s the concept of Big Data took off as the management and productive use of data entered a new phase. These days, the path out from under all the data piling up every second lies with the skills of Big Data analysts and data scientists. With data volumes growing 40% per year, there is tremendous demand for specialists who can analyze, process, and make usable all of this information. 
  7. Sustainability manager. Sustainability was once a niche concept, reserved for progressive companies focused on finding productive uses of human and natural resources. But in the same way that electric cars have gone mainstream in recent years, so too has sustainability. Professionals who develop corporate sustainability programs are in demand, working on projects like identifying “green” cost-saving opportunities and federal, state, and local tax rebates.
  8. Vloggers. The term “weblog” evolved into the more familiar “blog” and “blogger” in the late 1990s. You would be forgiven for not recognizing the term “vlogger,” a more recent coinage that combines “video” with “blogger.” With the rise of video sharing platforms like YouTube and Vimeo, vlogging has become big business, with top YouTube stars earning millions of dollars through advertising, social media, and sponsorship deals.
  9. Commercial drone operators. Even though government regulations covering unmanned aerial vehicles—commonly called drones—are just now being written, demand for drone pilots is skyrocketing. Once reserved for military use, commercial use of drones is expanding to include everything from deliveries by UPS and Amazon to aerial photography and videography of commercial real estate sites. 
  10. Digital content marketers. The marketing industry is one of many that have seen phenomenal change over the last decade, and it continues to evolve. Go back 15 years or so and marketing budgets were still focused on traditional advertising channels like radio, TV, and print brochures. Beginning with email marketing, though, marketing departments evolved quickly to encompass sophisticated data-driven digital campaigns covering websites, social media, and digital ads. Driving much of this transformation was the rise of “content marketing,” the strategy of using high-value content like articles and explainer videos to market without really marketing. 
  11. Search Engine Optimization (SEO) specialist. Before Google came along and gave the world near-instant search of the Internet, companies like Yahoo! thrived by providing directories that categorized websites. But as the number of websites exploded, a marketing problem emerged: how does my business get found amongst all the noise? “Getting found” became the domain of Search Engine Optimization, the art and science of selecting keywords to support business objectives. Once limited to a handful of tips and hacks, SEO today is a complex and ever-changing field, and its professionals are in-demand at businesses large and small.
  12. Market research data miner. Businesses are awash in data. Making sense of the data involves turning all the 1’s and 0’s into actionable insights. The role of market researchers has evolved in lockstep with the Big Data revolution, so that today these prophets of market trends form a highly technical specialty, skilled at finding meaning in reams of marketing and customer data. 
  13. Elder-care services coordinator. Two powerful trends are driving the demand for jobs in the elder-care space: the aging U.S. population and the increasing complexity of the healthcare system. Demand by older Americans for in-home healthcare services is growing, but because so much of the healthcare system is DIY, figuring out how to access those services can be complicated. Elder-care services coordinators are specialists who understand the healthcare needs of elderly patients and the intricacies of the services available to them. 
  14. Medical biller and coder. Here’s another newly emerged role driven by changes to technology. The U.S. healthcare system is largely structured around fixed rates that doctors receive as reimbursement for services. A regular office visit is reimbursed at one rate, treating a broken ankle at another. Medical billers and coders provide the expertise that connects doctors’ services to insurance and governmental reimbursement, commonly employed at doctors’ offices, hospitals, and other healthcare facilities.
  15. Genetic counselor. Advances in genetics are taking place at a breathtaking pace, as seen by the successful uses of the CRISPR-Cas9 gene editing technology. As genetic technologies move out of the labs and into the mainstream, a new job title has emerged as well: genetic counselor. Genetic counselors help patients interpret the results of genetic tests to aid in the prevention and management of illness. As medical capabilities around genetic testing improve, and as new technologies like genetic modification of embryos mature, these counselors will be increasingly in demand at healthcare facilities the world over.

There’s an important insight to be gleaned from this list: given the rapid advances in technology, 15 years from now the world will no doubt look quite different than it does today, and we’ll take for granted new technologies that are now in their infancy. Think: artificial intelligence, robotics, autonomous transportation, and smart cities. For any professional expecting their career to continue into this future, the lesson is to keep learning the new skills that will keep you competitive and productive. The need to learn new skills through lifelong learning has perhaps never been greater.