AI and blockchain are taking root in the global agriculture industry

There is already enough food to feed everyone on Earth, with agricultural producers yielding 17% more food now than they did just three decades ago. Yet 925 million people worldwide suffer from a lack of food security, including 42.2 million in the United States alone. Several factors contribute to the problem, including poor storage and sanitation systems, low crop yields, and political upheaval. As the global population marches ever higher, the Food and Agriculture Organization predicts that food production will need to increase by 70% to feed the world by 2050.

How will the agriculture industry keep pace? Through cutting-edge technologies like artificial intelligence, IoT, and blockchain, of course.

IoT in the field

When people hear the term Internet of Things (IoT), they often think of smart homes and self-starting appliances. Or perhaps they think about how at present, the Internet of Things is still wildly insecure. But from an agricultural production perspective, the Internet of Things holds real potential for increasing crop yields and reducing losses.

IoT sensors installed in the field enable agricultural companies to monitor their crops in real time. The sensors capture data on a range of metrics and send back information that enables producers to optimize their growing processes. They can monitor nutrient and water levels in the soil and adjust them as needed to get the greatest number of healthy, saleable crops. Imaging programs allow them to see exactly what’s happening across different fields and intervene immediately if they identify diseased plants. 
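The threshold-and-intervene pattern described above can be sketched in a few lines. The metric names, acceptable ranges, and readings below are illustrative assumptions, not values from any real agricultural system:

```python
# Hypothetical sketch of field-sensor monitoring. Thresholds and metric
# names are invented for illustration.

SOIL_MOISTURE_RANGE = (0.25, 0.45)   # acceptable volumetric water content
NITROGEN_MIN_PPM = 20                # minimum acceptable soil nitrogen

def check_reading(reading):
    """Return the corrective actions suggested by one sensor reading."""
    actions = []
    moisture = reading["soil_moisture"]
    if moisture < SOIL_MOISTURE_RANGE[0]:
        actions.append("irrigate")
    elif moisture > SOIL_MOISTURE_RANGE[1]:
        actions.append("reduce_irrigation")
    if reading["nitrogen_ppm"] < NITROGEN_MIN_PPM:
        actions.append("apply_fertilizer")
    return actions

# Example: a dry, nitrogen-poor plot triggers two interventions.
print(check_reading({"soil_moisture": 0.18, "nitrogen_ppm": 12}))
# ['irrigate', 'apply_fertilizer']
```

A real deployment would stream readings continuously and feed them into the imaging and analytics programs mentioned above; this sketch only shows the per-reading decision step.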

One company predicts that by 2050, farms will produce approximately 4.1 million data points each day, and every one of them can be used to grow more and better food. Already, smart farming devices are shaping the future of agriculture by helping farmers increase yields and lower their production expenses. On average, the farms included in one series of studies saw a 1.75% jump in yields and a decrease in energy expenses of up to $13 per acre. 

Smart robots and predictive analytics 

Artificial intelligence (AI) complements agricultural IoT devices and further improves the growing and selling processes via predictive analytics. These programs can help farmers determine which crops to grow and anticipate potential threats by combining historical information about weather patterns and crop performance with real-time data. The more information that’s collected, the more precise the insights producers will be able to glean about what’s happening with their crops and where they can optimize conditions. 

Smart robots will likely come into play as well. Already, companies are developing robots that can analyze crops in the field to not only identify disease indicators but to prune away weak plants to give strong ones the space and resources needed to thrive. One U.S.-based company expects that smart systems, such as the artificial intelligence programs it uses to capture images of tomatoes grown in its greenhouses, will ultimately boost its yields by 20%. 

Blockchain equals better business 

Blockchain technology is revolutionary because of its implications for creating secure, transparent records. This ability will be incredibly valuable for the agriculture industry, where it can be used to establish smart contracts and track food from its origins to grocery stores. An American company recently conducted a massive sale of 60,000 pounds of soybeans to a Chinese buyer entirely on the blockchain. Agricultural deals of this scale often require a great deal of back-and-forth, with multiple agents and decision-makers involved in the trade. When such sales are conducted using pen and paper, the risk of mistakes and logistical delays is high.

Blockchain simplifies the process because everyone has access to the same information, such as receipts, letters of credit, and necessary certificates, all of which are stored clearly and accurately for every party to see. The businesses involved in the deal reported a fivefold reduction in time spent on the logistics of the agricultural commodity trade, which they claimed was the first to be completed via blockchain. 

Blockchain may also help curb future food safety crises, such as the E. coli outbreak that caused people in 25 states to become sick after eating contaminated lettuce. Scientists struggled for at least two months to identify where the diseased crops came from, highlighting the need for better supply chain documentation and management throughout the industry. If agricultural growers and their supply chain partners tracked crops on a blockchain, they could easily identify where different crops originated and who was involved in growing and transporting them.
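A toy illustration of the provenance idea: each supply-chain event is chained to the previous one by a cryptographic hash, so tampering with any earlier record invalidates every later link. The event fields, actors, and lot numbers below are hypothetical:

```python
# Minimal sketch of blockchain-style provenance tracking. Not a real
# distributed ledger: no consensus or networking, just the hash-chain
# property that makes tampering detectable.
import hashlib
import json

def add_event(chain, event):
    """Append a supply-chain event, linked to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every link; return True only if no record was altered."""
    prev_hash = "0" * 64
    for record in chain:
        if record["prev_hash"] != prev_hash:
            return False
        body = {"event": record["event"], "prev_hash": record["prev_hash"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_event(chain, {"actor": "Farm A", "action": "harvested", "lot": "L-1042"})
add_event(chain, {"actor": "Carrier B", "action": "shipped", "lot": "L-1042"})
print(verify(chain))                  # True
chain[0]["event"]["lot"] = "L-9999"   # tamper with the origin record
print(verify(chain))                  # False
```

Because every party holds the same chain, any participant can run the verification step independently, which is what makes origin lookups both fast and trustworthy.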

A major American food seller found that blockchain programs allowed it to trace a piece of produce to its origin in just over two seconds, compared to the nearly seven days it takes via more manual processes. Not only does the instantaneous nature of blockchain appeal to consumers who are increasingly concerned about where their food comes from, it has the potential to minimize and even halt widespread contamination outbreaks. From a business perspective, this means pinpointing the source and acting precisely to root out the problem, as opposed to fumbling through disorganized data and taking losses on food that was unnecessarily discarded or unsold as a result of the outbreak.

The future of agriculture 

Agriculture companies have proven that they’re up to the task of producing food for most of the world. Growers and their partners must do even better if they’re to reduce food insecurity and keep pace with population growth. The time to invest in smart technologies and precision agriculture has arrived. Companies that use IoT devices, AI systems, and blockchain stand to improve their yields, reduce costs, and enhance their profits. The business case for tech-enabled agriculture is clear; now it’s a matter of broader adoption and implementation. 

The 4 digital headwinds impacting productivity and growth [VIDEO]

Productivity, efficiency, and growth all measure different aspects of something fairly straightforward: the transformation of inputs (like time or resources) into outputs (ideas, goods, services, solutions). A simple concept with profound implications for individuals and organizations alike.

Digital technology has an important role in personal and organizational productivity. From instantaneous messaging between colleagues and friends to automated Human Resources systems, digital technologies are central to our lives, at home or at work. But there is room for tremendous improvement in the effectiveness of many of these technologies.

In this quick video, Entefy’s Co-Founder Brienne Ghafourifar presents an overview of 4 key challenges faced by individuals and organizations alike in their quest for productivity, efficiency, and growth.

5 Reasons why the world needs ethical AI

In the U.S., 98% of medical students take a pledge commonly referred to as the Hippocratic oath. The specific pledges vary by medical school and bear little resemblance to the 2,500-year-old oath attributed to the Greek physician Hippocrates. Modern pledges recognize the unique role doctors play in their patients’ lives and delineate a code of ethics to guide physicians’ actions. One widely used modern oath states:

“I will not permit considerations of age, disease or disability, creed, ethnic origin, gender, nationality, political affiliation, race, sexual orientation, social standing, or any other factor to intervene between my duty and my patient.”

This clause is striking for its relevance to a different set of fields that today are still in their infancy: AI, machine learning, and data science. Data scientists are technical professionals who use machine learning and other techniques to extract knowledge from datasets. With AI systems already at work in practically every area of life, from medicine to criminal justice to surveillance, data scientists are key gatekeepers to the data powering the systems and solutions shaping daily life.

So it’s perhaps not surprising that members of the data science community have proposed an algorithm-focused version of a Hippocratic oath. “We have to empower the people working on technology to say ‘Hold on, this isn’t right,’” said DJ Patil, the U.S. chief data scientist under President Obama. The proposed oath’s 20 core principles include ideas like “Bias will exist. Measure it. Plan for it.” and “Exercise ethical imagination.” The full oath is posted to GitHub.

The need for professional responsibility in the field of data science can be seen in some very high-profile cases of algorithms exhibiting biased behavior resulting from the data used in their training. The examples that follow here add more weight to the argument that ethical AI systems are not just beneficial, but essential.

1.     Data challenges in predictive policing

AI-powered predictive policing systems are already in use in cities including Atlanta and Los Angeles. These systems leverage historic demographic, economic, and crime data to predict specific locations where crime is likely to occur. So far so good. The ethical challenges of these systems became clear in a study of one popular crime prediction tool. PredPol, the predictive policing system developed by the Los Angeles police department in conjunction with university researchers, was shown to worsen the already problematic feedback loop present in policing and arrests in certain neighborhoods. “If predictive policing means some individuals are going to have more police involvement in their life, there needs to be a minimum of transparency. Until they do that, the public should have no confidence that the inputs and algorithms are a sound basis to predict anything,” said one attorney from the Electronic Frontier Foundation.

2.     Unfair credit scoring and lending

Operating on the premise that “all data is credit data,” the financial services industry is designing machine learning systems that can determine creditworthiness using not only traditional credit data but also social media profiles, browsing behaviors, and purchase histories. The goal on the part of a bank or other lender is to reduce risk by identifying the individuals or businesses most likely to default. Research into the results of these systems has identified cases of bias in which, for example, two businesses of similar creditworthiness receive different scores because of the neighborhoods where they are located.

3.     Biases introduced into natural language AI

The artificial intelligence technologies of natural language processing and computer vision are what give computer systems digital eyes, ears, and voices. Keeping human bias out of those systems is proving to be challenging. One Princeton study into AI systems that leverage information found online demonstrated that the same biases people exhibit make their way into AI algorithms via the systems’ use of Internet content. The researchers observed, “Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.” This is significant because these are the same datasets often used to train machine learning systems used in other products and systems.
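The kind of association the researchers measured can be illustrated with toy word vectors. The tiny 2-D embeddings below are invented for demonstration; the actual study measured these effects in embeddings trained on large web corpora:

```python
# Toy illustration of measuring biased associations in word embeddings,
# in the spirit of the Princeton study. Vectors are made up; real tests
# use high-dimensional embeddings learned from ordinary language.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings in which "career" happens to sit near "he".
emb = {
    "he":     [0.9, 0.1],
    "she":    [0.1, 0.9],
    "career": [0.8, 0.2],
    "family": [0.2, 0.8],
}

def association(word, a, b):
    """Positive when `word` is closer to `a` than to `b`."""
    return cosine(emb[word], emb[a]) - cosine(emb[word], emb[b])

print(association("career", "he", "she") > 0)  # True: 'career' skews male here
print(association("family", "he", "she") < 0)  # True: 'family' skews female here
```

If a downstream system ranks or scores people using such embeddings, these skewed similarities propagate directly into its outputs, which is why measuring them is the first step the proposed oath calls for.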

4.     Limited effectiveness of healthcare diagnosis

There is limitless potential for AI-powered healthcare systems to improve patients’ lives. Entefy has written extensively on the topic, including this analysis of 9 paths to AI-powered affordable healthcare. The ethical AI considerations in the healthcare industry emerge from the data that’s available to train machine learning systems. That data has a legacy of biases tied to variability in the general population’s access to and quality of healthcare. Data from past clinical trials, for instance, is likely to be far less diverse than the face of today’s patient population. Said one researcher, “At its core, this is not a problem with AI, but a broader problem with medical research and healthcare inequalities as a whole. But if these biases aren’t accounted for in future technological models, we will continue to build an even more uneven healthcare system than what we have today.”

5.     Impaired judgement in criminal justice sentencing

Advanced artificial intelligence systems are at work in courtrooms performing tasks like supporting judges in bail hearings and sentencing. One study of algorithmic risk assessment in criminal sentencing revealed how much more work is needed to remove bias from some of the systems supporting the wheels of justice. Examining the risk scores of more than 7,000 people arrested in Broward County, Florida, the study concluded that the system was not only inaccurate but plagued with biases. For instance, it was only 20% accurate in predicting future violent crimes and twice as likely to inaccurately flag African-American defendants as likely to commit future crimes. Yet these systems contribute to sentencing and parole decisions.

Entefy previously examined specific action steps for developing ethical AI that companies can use to help ensure the creation of unbiased automation. 

The machine learning revolution: ML transformation in 7 global industries

With a technology as impactful as machine learning, it can be difficult to avoid hyperbole. Sure, billions of dollars in investment are pouring into ML projects. Yes, machine learning is a centerpiece of digital transformation strategies. And, to be certain, machine learning is often what people are talking about when they use the umbrella term “AI.” So it’s worth taking the time to look at real-world ML capabilities being developed and deployed at digitally nimble companies around the globe.

Entefy published two previous articles covering the machine learning revolution, which you can access here and here. We continue this global survey below with a look at how 7 more industries are making use of machine learning technology.

Pharmaceuticals & Life Sciences

Wherever you fall on the death disruption debate, we can all agree that aging is a challenging experience. Even if you don’t aspire to immortality, you likely recognize that increased joint pain and susceptibility to illness and injury will erode your quality of life. But deep learning may be able to slow the aging process. Scientists are now using the technology to identify biomarkers associated with aging. Soon enough, a simple blood test could tell you which parts of your body are showing signs of wear and tear, and your doctor could help you mitigate, and perhaps reverse, those effects through lifestyle recommendations and medication.

Food

Up to 40% of a grocer’s revenue comes from sales of fresh produce. So, to say that maintaining product quality is important is something of an understatement. But doing so is easier said than done. Grocers are at the mercy of their supply chains and fickle consumers. Keeping their shelves stocked and their products fresh is a delicate balancing act.

But grocers are discovering that machine learning is the secret to smarter fresh-food replenishment. They can train machine learning programs on historical datasets and input data about promotions and store hours as well, then use the analyses to gauge how much of each product to order and display. Machine learning systems can also collect information about weather forecasts, public holidays, order quantity parameters, and other contextual information. They then issue a recommended order every 24 hours so that the grocer always has the appropriate products in the appropriate amounts in stock. 
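A heavily simplified sketch of such a daily recommendation follows. A production system would use a trained model over many more signals; here the forecast is just a moving average of recent sales adjusted by context multipliers, and every factor value is an invented assumption:

```python
# Toy daily replenishment recommendation. The promo/holiday uplift
# factors are illustrative assumptions, not learned parameters.

def recommend_order(recent_daily_sales, on_hand, promo=False, holiday=False,
                    min_order=0):
    """Recommend tomorrow's order quantity for one product."""
    base = sum(recent_daily_sales) / len(recent_daily_sales)  # avg daily demand
    if promo:
        base *= 1.5   # assumed promotional uplift
    if holiday:
        base *= 1.2   # assumed holiday uplift
    # Order enough to cover expected demand, net of stock already on shelf.
    return max(min_order, round(base - on_hand))

# Example: 7 days of sales history, 10 units on hand, a promotion running.
print(recommend_order([40, 38, 42, 45, 39, 41, 44], on_hand=10, promo=True))
# 52
```

An actual ML system would replace the moving average with a learned demand model and re-issue this recommendation every 24 hours per product, as described above.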

Businesses that have implemented machine learning in their replenishment workflows have reduced out-of-stock rates by up to 80% and increased gross margins by up to 9%.

Media & Entertainment

Machine learning allows media companies to make their content more accessible to consumers through automatic captioning systems. Since implementing an automatic captioning program, YouTube has enabled 1,000,000 functionally deaf and 8,000,000 hearing-impaired Americans to watch and enjoy its videos. As of 2017, its machine learning programs had become sophisticated enough to include captions for common non-speech audio, such as laughter and music, creating an even more complete experience.

Information Technology

Although machine learning is generating unprecedented business insights, many organizations have failed to invest adequately in AI systems. For instance, McKinsey found that “the EU public sector and healthcare have captured less than 30 percent of the potential value” of Big Data and analytics. Organizations that want to avoid a similar mistake will need to ramp up their data science abilities – but so will workers who want to stay competitive. By 2020, there will be more than 2.7 million data science jobs, and the demand for workers who understand and can work with machine learning technology will only grow from there.

Law

Deep learning applications are especially impressive in the legal sector due to the nature of the language these programs must parse. Legal phrasing can be complex and difficult to decipher, yet deep learning systems are already capable of analyzing tens of thousands of vital documents. When legal teams needed to dissect contract clauses that disrupted their own or their clients’ business and invoicing processes, they once had to manually review stacks of rigorously prepared documents. Now, they can feed those documents into a program that works far faster than any lawyer and can pick out important phrases for further analysis by the legal team.

Insurance

Improving risk prediction and underwriting is in everyone’s interest, which is why machine learning is such a gift to the insurance industry. In auto insurance, for instance, machine learning algorithms can use customer profiles and real-time driving data to estimate their risk levels. They can then formulate personalized rates based on that information, potentially creating savings for both consumers and insurance companies.

This process may be enhanced by even more in-depth analyses, in which machine learning programs pull in seemingly unrelated social media data to create a more precise profile. The insurance industry could use artificial intelligence to identify which policyholders are gainfully employed and which seem to be in good health. Theoretically, someone who is responsible in those areas of their lives will be a responsible driver as well.

Education

Intelligent tutoring systems (ITS) hold enormous potential for disrupting the classroom and helping students learn. These AI programs serve as virtual tutors, and they adapt their digital lessons based on each child’s strengths and weaknesses. Each time the student completes a task or quiz, a machine learning program processes that information to customize future materials.

By “learning” a user’s unique needs and identifying which types of lessons are most effective for them, the ITS helps the student overcome learning challenges and retain more knowledge. Research indicates that students who use intelligent tutoring systems perform better on tests than their peers who learn via large-group instruction.
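The adaptive loop such a system runs can be sketched as follows. The mastery update rule, learning rate, and topic names are illustrative assumptions, not drawn from any particular ITS:

```python
# Hypothetical sketch of an intelligent tutoring system's adaptive loop:
# keep a per-topic mastery estimate, nudge it after each quiz, and always
# serve the weakest topic next. The update rule is illustrative.

def update_mastery(mastery, topic, score, rate=0.3):
    """Move the topic's mastery estimate toward the latest quiz score (0-1)."""
    mastery[topic] = mastery[topic] + rate * (score - mastery[topic])
    return mastery

def next_topic(mastery):
    """Pick the topic the student currently struggles with most."""
    return min(mastery, key=mastery.get)

mastery = {"fractions": 0.5, "decimals": 0.5, "percentages": 0.5}
update_mastery(mastery, "fractions", 0.9)   # strong quiz result
update_mastery(mastery, "decimals", 0.2)    # weak quiz result
print(next_topic(mastery))  # decimals
```

Real systems model far more than a single score per topic, but the principle is the same: each completed task updates the student model, and the student model drives what lesson comes next.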

For an overview of key concepts in artificial intelligence, check out Entefy’s article Essential AI: A brief introduction to terms and applications.

Entefy CEO Alston Ghafourifar discusses the power of multi-modal AI at the 2018 MIT Sloan CIO Symposium

On May 24th, Entefy CEO Alston Ghafourifar spoke at the 2018 MIT Sloan CIO Symposium in Cambridge, Massachusetts on the topic of “Building the Intelligent Enterprise using AI, ML, Mobility, and Cloud Services.” The annual event attracts more than 800 CIOs and other senior executives from global organizations of all sizes.

During a wide-ranging discussion about digital transformation strategies, Alston presented his views on the importance of a multi-modal approach when implementing AI solutions to build intelligence into enterprise systems. While unimodal AI systems can deliver promising results when applied to direct use cases, Alston argued, they often fail to capture much of the overall value that multi-modal AI systems can harness and apply to a broader set of business processes. Alston shared an analogy with cloud technologies: an enterprise that transitions only its CRM platform to the cloud can capture some business value, but that value may be a fraction of what can be harnessed when applying digital transformation to a broader spectrum of systems. Alston argued that, when it comes to extracting optimal value from an organization’s lake of data, multi-modal AI systems that leverage multiple domains of AI such as computer vision, natural language, audio, and other data intelligence can significantly outperform unimodal systems. It truly is a case of “greater than the sum of its parts.”

Over the course of the program, multiple speakers presented their thoughts on the broad trends impacting digital transformation, including AI and machine learning, cybersecurity and data privacy in the post-GDPR world, infrastructure and interconnection strategies, and workload-to-cloud projects using multiple IoT and mobile interfaces. 

On behalf of Alston and everyone here at Entefy, we offer our sincere thanks to the event organizers, symposium attendees, and Ryan Mallory of Equinix, who moderated the discussion. 

Computer vision technologies in the age of “fake news”

You click through to a dated-looking website. A video begins to play with the mysterious title “Real-time Facial Reenactment.” In the video, a tabletop TV shows cable news network footage of President George W. Bush. Beside the TV is a young man seated in front of a webcam, stretching his mouth and arching his eyebrows dramatically like a cartoon character. And then it clicks. The President is making the exact same facial movements as the young man, perfectly mirroring his exaggerated expressions. The seated figure is, to all appearances, controlling a President of the United States. Or at least his image on a video screen. It’s worth checking out the video to see this digital puppetry in action, with 4 million views and counting.

Welcome to the 21st century version of “seeing is believing.” The video in question is the product of a group of university researchers working in a specialty field of artificial intelligence called computer vision. Computer vision algorithms can do things like digitally map one face onto another in real-time, allowing anyone with a webcam to project their own facial expressions onto a video image of a second person. No special media or expensive technology is required. The researchers used an off-the-shelf webcam and publicly available YouTube videos of American presidents and celebrities.

AI-powered computer vision technologies that manipulate video can be impressively realistic, often eerie, and just the sort of thing that should ignite discussions of authenticity in the age of “fake news.”

Photoshop, the verb

The photo editing software Photoshop has been around so long at this point that we use it as a verb. But AI computer vision technologies with the power to manipulate what we see are something else entirely. After all, it’s one thing to airbrush a model in an advertisement and quite another to manipulate what’s said on the nightly newscast.

To see why, consider a few facts. Billions of people worldwide, including 2 out of 3 Americans, rely on social media and other digital sources for news. And in doing so, they overwhelmingly choose video as their preferred format. These informational ecosystems are already struggling with “fake news,” the term for deliberate disinformation masquerading as legitimate reporting.

If AI computer vision enables low-cost, highly convincing tools for manipulating video—for making anyone appear to say anything someone might want them to say—those tools have an extremely high potential for mayhem, or worse. After all, once we see something, we can’t unsee it. The brain won’t let us. Even if later we learn that what we saw was fake, that first impression remains.

AI raises some big questions, and they’re important ones. What happens when we can no longer rely on “seeing is believing”? How can we ensure reliable authenticity in the digital age? Technology is moving fast, and new AI tools remind us that sometimes we need to put aside the cool factor and think instead about how specific technologies impact our lives. Because digital technology is at its best when it improves lives and empowers people.

Putting the dead to work

Part of the reason that AI computer vision technology is so remarkable is that it compresses hours or days of professional work into something that happens in the moment.

The technology has a clear application in entertainment. The production of the immersive worlds and intricate characters we see in film and video games is quite expensive. CGI (computer-generated imagery) is lauded for its ability to make the unreal appear as close to real as possible, to turn imagination into reality. But there is a lot of effort behind these transformations.

CGI may fill theaters, but we’ve also seen that people place limits on the type of unreality they’ll accept. Look no further than Rogue One, the 2016 film set in George Lucas’ Star Wars universe. The film’s story takes place before the events of the original Star Wars film from 1977. The Rogue One filmmakers wanted to feature one of the characters from the original film but ran into a small problem: the actor, Peter Cushing, died in the 1990s.

CGI to the rescue. The producers employed computer graphics to generate a digital version of Cushing, then brought him to simulated life using motion capture gear and another actor, Guy Henry, to voice Cushing’s character. The result was a rather convincing resurrection of Cushing, which simulated everything from his facial tics to the differences in lighting between films.

If you’ve seen the film, you probably remember this character. Because watching a dead actor brought back to life…isn’t quite right. And not surprisingly, the film attracted critics who raised ethical concerns about the dignity of the dead and the right to use a deceased actor’s likeness. It was exactly the feeling of unease captured by the term “uncanny valley.”

Rogue One wasn’t the first digital resurrection to raise these concerns. Other reanimations have met with mixed responses from the public. There was Tupac’s holographic performance at Coachella. And Marilyn Monroe, John Wayne, and Steve McQueen posthumously pitching products in commercials. Despite the public’s unease, more than $3 billion is spent annually on marketing and licensing deceased celebrities for advertisements. But none of this suggests that we’re ready to give up “seeing is believing” just yet.

Authenticity is timeless

The invention of Photoshop didn’t cause people to completely distrust every digital photo. We still happily share pictures from vacations and selfies with celebrities without worry that our friends will doubt their authenticity.

The challenge with AI computer vision tech isn’t how it might be misused—after all, practically any technology can be misused. It’s that it joins a growing list of technologies that are developing so quickly that people haven’t had enough time to collectively decide how we want them to be used. This is as true about some forms of AI as it is about robotics and gene editing.

But if we did come together to have this discussion, what we’re likely to find is a lot of common ground around the idea that the best way to use all of these revolutionary technologies is to make life better for people, solve global problems, and empower individuals. After all, for every potential misuse of computer vision AI, there are hundreds of positive and impactful applications like real-time monitoring of crime, improved disaster response, automated medical diagnosis, and on and on.

Fake news is a problem worth solving. But until we’ve successfully leveraged advanced technology in support of truth and authenticity, let’s not abandon “seeing is believing” just yet.

The machine learning revolution: 6 examples of ML innovation

AI augmentation is expected to generate $2.9 trillion in value by 2021, freeing up 6.2 billion hours of worker productivity in the process. Within the AI category, machine learning attracted about 60% of the estimated $8 to $12 billion in external investment in AI capabilities during 2016. Machine learning’s outsized role comes in large part from its usefulness in enabling other capabilities, like robotics, process automation, and speech recognition. Not surprisingly, machine learning sits squarely at the center of digital transformation strategies the world over.

Entefy recently shared a look at ML innovation in 8 different industries. Below we continue our survey of noteworthy machine learning projects, this time in verticals as diverse as banking, nonprofits, energy, and government.

Banking

Machine learning analytics programs offer banks the potential to increase profits substantially. One report found that companies that invested in advanced analytics saw an average profit increase of approximately $369 million a year. They achieve this by making better use of their customer data.

By analyzing client behavior, one institution learned when to intervene before customers disengaged from the company, leading to a 15% reduction in churn. Other organizations used the insights gleaned through machine learning programs to identify new segments within their existing customer bases and phase out costly, unnecessary discount practices. Machine learning programs “discover” hidden trends and insights that companies can use to strategize more effectively.

Energy

Deep learning and machine learning will prove invaluable in combating climate change and achieving more efficient energy usage. Companies are already using machine learning to sell solar panels in more cost-efficient ways, while researchers predict that cloud-based monitoring systems will be able to optimize energy usage in real time.

Machine learning could also help tech companies reduce their carbon footprints, optimizing energy usage according to a variety of conditions and ultimately decreasing their power draws by up to 15%. This will be of growing importance as rising demand for computing power raises serious environmental concerns.

Healthcare

With the global medical community facing a shortfall of more than 4 million medical providers, it’s clear that the healthcare system needs help. As Entefy wrote last year, artificial intelligence is here to help doctors. Researchers are exploring the use of machine learning to diagnose disease, a breakthrough that could lead to faster, more accurate patient treatments. 

One group of researchers used machine learning to predict whether patients would be hospitalized due to heart disease. They achieved an 82% accuracy rate, which was 26% higher than the average rate using one of the most common existing prediction models. Identifying a patient’s risk for heart disease and hospitalization could allow doctors to make life-saving recommendations before the condition reaches a critical stage.

Retail

Machine learning makes for a more dynamic and interactive retail experience. Rather than leaving customers to wander store aisles for hours in search of the right tool, outfit, or home appliance, home makeover businesses are embracing artificial intelligence to create more personalized – and convenient – brand encounters.

One company used machine learning to create a tool for better home decorating. Customers will soon be able to upload photos of their homes and see realistic simulations of what different shades of paint, pieces of furniture, and light fixtures would look like in their living rooms. No more guesswork, no more aggravating trips to the store because you bought the wrong shade of paint. Thanks to machine learning, you could be sure to get it right the first time.

Government

Machine learning could make the country safer and more equitable, and that’s not just idealism talking. A 2017 study found that a machine learning algorithm was more adept than judges at predicting which defendants were flight risks while awaiting trial. The program assigned risk scores based on details such as defendants’ ages, their rap sheets, the offenses for which they were awaiting trial, and where they had been arrested.

Researchers determined that the program could decrease the number of defendants in jail while awaiting trial by 40% without risking an increase in crime rates. Widespread use of such algorithms would alleviate strains on the criminal justice system and could even prevent future crimes. The program’s accuracy would serve as a safeguard against judges’ erroneous decisions, as the defendants they release sometimes commit additional crimes before their trials or fail to appear for their court dates.

Nonprofit

Nonprofits are using machine learning to identify trends related to mental health crises, such as indicators for suicide. Importantly, machine learning can make connections that humans might not see, and that information enables crisis counselors to reach people who are in urgent need. In one example, a machine learning program identified that the term “ibuprofen” was a likelier indicator of an imminent threat than the term “suicide.” Therefore, the computer program prioritizes users who have mentioned ibuprofen in their communications to ensure they reach a counselor as quickly as possible.
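The queue-prioritization idea can be sketched in a few lines. The term weights below are invented placeholders standing in for what a trained model might learn; they only mirror the counterintuitive finding that “ibuprofen” outweighs “suicide” as an urgency signal.

```python
# Hypothetical learned term weights (illustrative values, not from any real model).
TERM_WEIGHTS = {"ibuprofen": 0.9, "suicide": 0.6, "hopeless": 0.5, "tired": 0.2}

def urgency(message: str) -> float:
    """Score a message by summing the weights of the risk terms it contains."""
    words = message.lower().split()
    return sum(weight for term, weight in TERM_WEIGHTS.items() if term in words)

def triage(messages):
    """Order the queue so the highest-urgency messages reach a counselor first."""
    return sorted(messages, key=urgency, reverse=True)

queue = [
    "i am so tired of everything",
    "how much ibuprofen is too much",
    "feeling hopeless tonight",
]
print(triage(queue)[0])  # the ibuprofen message jumps to the front of the queue
```

A production system would use a trained text classifier rather than a fixed keyword table, but the triage step, scoring messages and reordering the queue, works the same way.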

If you’re just getting up to speed on artificial intelligence, be sure to check out Entefy’s article Essential AI: A brief introduction to terms and applications.

Alston

Universal interaction: the new S-curve of innovation [VIDEO]

One of the core insights that drives our work at Entefy is that the world is in the early stages of a new S-curve of innovation: universal interaction. Think about the sum of all communication taking place between people, devices, and services every minute of every day. Broadly speaking, we’re talking about everything from instant messaging to global IoT sensor activity to conversational AI.

In this video, Entefy CEO Alston Ghafourifar gives an overview of the new S-curve and its implications for people, industries, markets, and our global community.

AI Brain

Depression costs the world $1 trillion annually. AI can improve treatment and reduce costs.

Anxiety and depression plague the U.S., and their impact grows by the year. Forty million adults throughout the country suffer from an anxiety disorder, and 16.2 million have gone through at least one major depressive episode. Not only do these mental health conditions wreak havoc on people’s well-being, personal lives, and careers, they also carry a significant economic toll.

Depression costs the U.S. $210 billion each year. That number includes costs directly linked to depressive cases, as well as related mental health issues and physical ailments and losses associated with workplace absenteeism due to depression. Globally, depression-related costs are $1 trillion, according to the World Health Organization. With mental health issues on the rise, the need for life-saving, cost-effective depression and anxiety treatments becomes ever more urgent.

But current treatment protocols are imperfect, for both patients and providers. Researchers found that although people suffering from depression will most often go to their primary care doctors for guidance, those doctors may not be best-equipped to help them. The predominant approach to disease management for depression doesn’t include ongoing monitoring and education. Without a plan for how to cope with and improve their conditions, people are more likely to continue suffering from depression and related health problems and therefore will need additional care. Expensive additional care.

Troublingly, researchers found that many medical practices lack the infrastructure and protocols to help people on an ongoing basis. Depression and anxiety require more time and potentially more involved patient visits than strictly physical diseases, and many providers are already overworked and strapped for time. They may also struggle to find the financial resources, especially as some insurance companies are still reluctant to cover mental health expenses.

Now technology may have a role to play in alleviating mental health suffering and reducing the cost burden on healthcare providers. Researchers and scientists are already exploring the use of artificial intelligence in diagnosing disease. But increasingly, AI appears to have applications in mental health treatment as well. All of which has the potential to finally bend the mental health cost curve significantly.

Deep learning for better treatment

Depression may emerge in a patient for a number of reasons, including genetic predisposition, biological factors, and hormone imbalances. There are also a number of risk factors that can trigger depressive episodes, such as anxiety and other mental health conditions, trauma, and some prescription medications.

With so many variables at play, there is no one-size-fits-all treatment for depression. Patients sometimes react adversely to one medication, forcing them to try out several prescriptions until they find one that helps. Some benefit from talk therapy while others require a combination of treatments. But finding the right fit isn’t a precise science and identifying an effective approach can be a costly and time-consuming process for everyone involved.

Deep learning may be able to help doctors create personalized depression treatment plans based on patients’ conditions and histories. By aggregating and analyzing a person’s medical records, current medications, and other factors, a deep learning algorithm could recommend potential treatments that doctors can incorporate into their assessments. There are no guarantees those plans would work, but they would save doctors a good deal of trial-and-error when trying to determine which course is most likely to help a patient.

Importantly, an AI program would be able to cross-check potential treatments to ensure that prescriptions won’t negatively interfere with one another or set off additional health crises. Doctors would of course still have final say in recommending a course of action. But AI could help them do so quickly and more effectively.
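The cross-checking step described above amounts to screening every pair of drugs in a proposed regimen against a known-interaction table. A minimal sketch follows; the drug names and interaction pairs are invented for illustration and are not medical guidance.

```python
from itertools import combinations

# Illustrative interaction table (pairs are invented for the example).
INTERACTIONS = {
    frozenset({"drug_a", "drug_b"}),
    frozenset({"drug_a", "drug_c"}),
}

def conflicting_pairs(prescriptions):
    """Return every pair in the proposed regimen that appears in the interaction table."""
    return [
        tuple(sorted(pair))
        for pair in (frozenset(p) for p in combinations(prescriptions, 2))
        if pair in INTERACTIONS
    ]

regimen = ["drug_a", "drug_b", "drug_d"]
print(conflicting_pairs(regimen))  # [('drug_a', 'drug_b')]
```

Real clinical decision-support systems draw on curated pharmacology databases and weigh interaction severity; the pairwise screen shown here is only the skeleton of that check.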

Chatbots for prevention and care

Ideally, access to mental health specialists would be ubiquitous for all patients. In reality, however, many people cannot afford to see a psychologist or psychiatrist. Even those who can afford therapy sessions may be reluctant to do so for fear of being stigmatized. Chatbots that use natural language processing (NLP) and other forms of artificial intelligence can bridge the gap for people in these circumstances. Free chatbot apps help people suffering from depression and anxiety by giving them someone to “talk” to and a place to gather resources and recommendations for coping with their conditions.

As with diagnosing and treating mental health conditions, an AI chatbot can’t replace a licensed therapist long-term. But daily check-ins via chatbot may help people stay motivated to take actions that will help them manage their depression and anxiety. Given that ongoing care is one of the challenges primary care physicians face with patients who have these diseases, a chatbot can help sustain important habits and lifestyle changes that lead to improved outcomes. To the extent that such chatbots contribute to people’s mental wellness, they also reduce the financial burdens the healthcare system and the workforce bear due to mental illnesses.

Biometrics: the next frontier in preventive medicine

Doctors may ultimately receive assistance in the form of biometric data gathered from patients’ smartphones. One company is exploring how gathering information about how patients use their phones, their pulse rates, and other physical indicators can be used to paint a more accurate picture for their doctors. When a doctor notices that a patient’s pulse rate is surprisingly high, she knows to discuss that issue and identify ways of addressing it. In mental health and other areas, AI may become a powerful tool for monitoring and managing health conditions.

Having quality data on people’s health also takes the conversation beyond a patient’s self-reported state of wellness. Someone might be embarrassed to admit that he’s felt anxious or depressed and so might avoid having that conversation with his doctor. But by looking at his biometric data, the doctor can detect patterns that might be cause for concern.

In theory, aggregating this data could help on a broader level as well. Collecting biometric data could enable deep learning algorithms to find trends in behavioral and emotional patterns and help doctors identify patients who have high suicide risks.

Mental health issues are complex and difficult to manage. Healthcare providers who are already overworked and underprepared to address the needs of depressed and anxious patients would be well-served by AI-powered technologies that make diagnosis and ongoing treatment easier and more cost-effective. 

Machine

The machine learning revolution: ML innovation in 8 industries

These days, there is a lot of confusion surrounding the term “artificial intelligence,” with people attributing it at different times to different technology domains. Machine learning and deep learning are on that list, too, though they’re distinct AI approaches. Both refer to techniques by which computer programs become “smarter” as they process more information. Entefy produced a primer on key terms in AI in our article Essential AI: A brief introduction to terms and applications.

From a business perspective, what is most interesting about machine learning is not the mechanics of how it works but the vast array of revolutionary applications being developed using it. To show just how impactful this technology already is, we put together this roundup of machine learning projects in 8 different industries. 

Aerospace 

Machine learning could make flying safer and more user-friendly. Pilots and flight crews operate around the world, and each of them generates significant amounts of data. In addition to information that’s automatically recorded via the aircraft’s computers, human personnel catalog their notes and observations as well. But there aren’t necessarily standard documentation processes for the latter, which means valuable information might slip through the cracks. 

Machine learning may soon be able to parse these varied forms of data, interpreting different types of shorthand and slang and organizing all available information into a central database. In the future, an AI pilot may well draw on that database to respond to in-flight challenges and ensure flight safety.   

Automotive

The more data companies gather about driver behaviors, the better they understand why accidents happen and how to prevent them, which is why machine learning may well make our roads safer. Businesses that operate vehicle fleets have begun using telematics to collect information about all aspects of driver performance, then deploying machine learning algorithms to analyze it.

Every time an employee over-accelerates, takes a turn too sharply, or fails to buckle their seatbelt, sensors and tracking systems detect and record it. This information enables companies to better train drivers by homing in on common driving problems. They can compare numbers of accidents against how many hours drivers are working and adjust employee schedules accordingly. Of course, improving human driving skills may be a stopgap on the path toward even greater road safety, as autonomous vehicles could prevent 90% of accidents.
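The event-detection piece of such a telematics pipeline can be sketched as a simple threshold scan over an accelerometer trace. The trace and the thresholds below are placeholders, not values any fleet operator actually uses.

```python
# Placeholder thresholds in m/s^2 (illustrative, not industry-calibrated).
HARSH_ACCEL = 3.0   # sudden speed-up
HARSH_BRAKE = -3.5  # sudden slow-down

def flag_events(trace):
    """Flag every sample that crosses a harsh-acceleration or harsh-braking threshold."""
    events = []
    for t, accel in enumerate(trace):
        if accel >= HARSH_ACCEL:
            events.append((t, "harsh_acceleration"))
        elif accel <= HARSH_BRAKE:
            events.append((t, "harsh_braking"))
    return events

# Toy accelerometer trace sampled once per second.
trace = [0.4, 1.1, 3.6, 0.9, -4.2, -0.3]
print(flag_events(trace))  # [(2, 'harsh_acceleration'), (4, 'harsh_braking')]
```

Counts of flagged events per driver are the kind of per-driver statistic a fleet operator could then feed into training programs or scheduling decisions.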

Manufacturing

Manufacturing companies use machine learning algorithms to cut waste and other expenses in their processes. Smart programs analyze existing workflows and key in on areas that can be improved. In a report on artificial intelligence in the industrial sector, the authors found that machine learning programs could process thousands of data points gathered from multiple machine types and subprocesses. The results of such analyses could lower expenses in semiconductor production alone by 30% and could boost manufacturing productivity more generally by 20%. 

Transportation & Logistics

Forecasting is incredibly important in supply chain logistics. A sudden storm, market upset, or rise in transportation costs could severely impact logistics companies and their clients. But machine learning is helping these businesses become both more agile and resilient. Smart programs can use contextual data to predict potential problems so companies can create contingencies. The more accurate and current their data, the better equipped they are to respond to crises. 
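One simple way to picture the forecast-and-react loop is a baseline forecast plus a deviation check: predict tomorrow’s shipment volume from recent history, then flag readings that diverge enough to warrant a contingency plan. The forecast method, tolerance, and data below are all invented for illustration.

```python
def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the most recent `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def needs_contingency(history, actual, tolerance=0.25):
    """Flag a reading that deviates from the forecast by more than the tolerance."""
    forecast = moving_average_forecast(history)
    return abs(actual - forecast) / forecast > tolerance

volumes = [100, 104, 98, 102]  # daily shipment volumes (hypothetical units)
print(needs_contingency(volumes, actual=60))   # True: a sudden drop worth escalating
print(needs_contingency(volumes, actual=103))  # False: business as usual
```

Production systems replace the moving average with richer models that ingest weather, market, and cost signals, but the agility the passage describes comes from exactly this loop: forecast, compare, escalate.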

Agriculture

Imagine going to work in the morning and being faced with millions of options for solving a given problem. Being spoiled for choice may sound good, but human beings simply cannot process that amount of information, especially on a short and urgent timescale. Yet that’s the situation plant breeders encounter all the time. Determining which breeds are most likely to thrive in a given climate, region, or season is no small feat, and the outcomes of their decisions impact the food supply for millions of people.

Fortunately, machine learning is up to the demands of modern agriculture and can analyze historical datasets to identify which breeds are suited to different circumstances. The technology can also be used to spot diseased crops through pattern recognition, so growers can intervene and save the rest of their yields. This type of precise science will be increasingly important as the global community copes with food insecurity and the growing impact of climate change. 

Consumer Goods & Services

Effective use of artificial intelligence depends on data, and retailers gain access to more customer information every day. Companies that combine consumer profiles with behavioral data and market trends can create powerful sales strategies, and machine learning programs can analyze wide-ranging data sets to identify optimal selling conditions. A business that deploys a dynamic pricing model powered by machine learning insights can promote its products “at the right price, with the right message, to the right targets,” according to the McKinsey Global Institute’s discussion paper, Artificial Intelligence: The Next Digital Frontier? Companies that get this right could see up to a 30% increase in online sales. 
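A dynamic pricing model can be boiled down to two steps: a demand curve (which in practice a machine learning model would fit from sales data) and a search over candidate prices for the one with the highest expected revenue. The linear demand curve and its parameters below are invented for the example.

```python
# Hypothetical linear demand curve: expected_units = base_demand - slope * price.
def expected_revenue(price, base_demand=500.0, slope=4.0):
    """Expected revenue at a price point under the assumed demand curve."""
    units = max(base_demand - slope * price, 0.0)
    return price * units

def best_price(candidates):
    """Scan candidate price points and keep the one with the highest expected revenue."""
    return max(candidates, key=expected_revenue)

prices = [40, 50, 62.5, 70, 80]
print(best_price(prices))  # 62.5: the revenue-maximizing point for this demand curve
```

With a learned (rather than assumed) demand curve per product and segment, the same scan is what lets a retailer re-price continuously as conditions change.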

Hospitality & Travel

Travel companies use machine learning to identify behavioral trends among consumers so they can tailor their booking experiences accordingly. Machine learning enables travel industry brands to identify which factors most influence travelers’ decisions and which devices and search methods they use for different types of queries. For instance, the number of reviews a property garners matters more to consumers than the actual number of stars it receives. Knowing this, hotels may try to incentivize people to leave reviews so they can get on other travelers’ radars. 

Communications

Approximately 9 in 10 American adults use the Internet, and usage among the 18-29 age group stands at 98%. At that level of Internet penetration, telecommunication companies can’t afford downtime or infrastructure lapses. But their vast networks of cell towers, satellites, and fiber optic cables are difficult to monitor manually.

Now, one major telco is using cameras on wings, or COWs, which are drones that capture images of their cell towers and commercial installments and diagnose problems in real-time. The company anticipates a future in which it runs the drone-captured data through a machine learning algorithm that pinpoints problems and fixes them automatically. Not only would this mean faster problem resolution, it would also create safer working conditions since there would be less need for technicians to climb to the tops of towers and telephone poles to make the fixes themselves.