
From robot bees to crunchier potato chips, 6 creative uses of AI today

AI is a popular topic these days. It seems as though every day, someone announces a new product or service that has AI at its core. McKinsey reported that, as of 2019, 58% of companies surveyed had implemented AI in at least one business unit within their organization. This represents a 23% increase in AI adoption over the prior year.

Across industries, businesses are beginning to learn how best to leverage AI and machine learning to improve performance and generate positive return on investment (ROI). Think next-generation process automation, better customer service, virtual agents, and physical robots that can outrun, outmuscle, or outnavigate any of us.

Outside of the more common use cases, however, there’s a diverse and fascinating world where AI and machine intelligence are being used to enhance both life and business.

6 Creative uses of AI today 

  1. Farming with robot bees – AI has been used in agriculture for some time. Until recently, the focus has been on optimizing core farming tasks—watering crops, determining correct time and dosage for pesticide use, and so on. However, with the recent decline in honeybee populations threatening crop pollination for nearly a third of our food supply, multiple organizations are working on ways to use robo-bees (bee-sized robots) to help supplement the work bees do. These autonomous swarming drones can be trained to learn and follow pollination paths using AI and GPS. While this doesn’t solve the problem of colony collapse among bees, it can help ensure the future of our food supply.
  2. Restoring touch and control – The medical field is the perfect playground for useful applications of machine learning, so it’s not surprising that researchers are employing AI to help restore prosthetic hand control for select amputees. For this, “the machine learning algorithm learns what muscular stimuli at the site of amputation correlate to specific hand motions.” There is still much work to be completed in this area, but the initial results show immense promise. 
  3. Accelerating drug development – With the world in the middle of a global pandemic, there is an immediate need to find ways to speed up drug development. For illustration, a viable vaccine can take more than 10 years to fully develop as researchers and doctors work through the various stages. This includes everything from research and discovery to rigorous testing, regulatory approval of the drug, as well as manufacturing and distribution at scale. AI can help reduce the amount of time it takes by eliminating manual and time-consuming processes that make up the bulk of the 10-year process.
  4. Producing fresher food – AI and other forms of automation have been part of the manufacturing process for decades, streamlining a number of processes. Recently, however, manufacturers have also found ways to use these technologies to help ensure food is fresh and crisp. For instance, potato chip manufacturer Frito-Lay is using lasers and machine learning to test the crispness and crunchiness of its chips without having to touch them. The system fires lasers at the chips and listens to the noise they make. That sound is then correlated with texture to reveal the quality of the product.
  5. Preventing poaching – Poaching is a problem that threatens wildlife populations around the world. Left unchecked, poachers can reduce populations to critical numbers (the recent plight of the pangolin is a stark example). To help prevent at-risk species from being wiped off the planet, countries have turned to Protection Assistant for Wildlife Security (PAWS), a predictive AI software. PAWS determines the most effective patrol routes to catch poachers based on data collected in the field. The data collected (evidence of poacher activity such as snares, footprints, and vehicle tracks) is fed into PAWS to “predict potential poaching hotspots.”
  6. Assisting firefighters – Navigating through burning buildings and fire zones requires firefighters to pay attention to a number of hazards and complex coordination steps that, if not properly managed, can lead to injury or death. To assist firefighters on the ground, NASA developed AUDREY, “the Assistant for Understanding Data through Reasoning, Extraction, and synthesis.” AUDREY is a virtual agent that tracks teams and provides individual updates to each firefighter on the team based on their location. The system also recommends ways to improve collaboration among team members. As the “guardian angel in the cloud,” AUDREY can learn and make predictions about the resources firefighters need to battle fires.
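To make the Frito-Lay example above concrete, here is a minimal sketch of the underlying idea: relating an acoustic measurement of a chip’s snap to a texture score with a simple one-feature regression. The data, feature, and scale are entirely hypothetical; the real system is far more sophisticated.

```python
# Toy sketch: map a chip's acoustic response to a crunchiness score,
# in the spirit of the laser-and-listen approach described above.
# The sample data and single-feature model are illustrative only.

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b for one acoustic feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical training data: peak acoustic amplitude (dB) -> crunch score (0-10).
amplitudes = [52.0, 55.0, 58.0, 61.0, 64.0]
crunch_scores = [4.1, 5.0, 6.2, 7.1, 7.9]

a, b = fit_line(amplitudes, crunch_scores)

def predict_crunch(amplitude_db):
    return a * amplitude_db + b

# On this data, a louder snap predicts a crunchier chip.
assert predict_crunch(63.0) > predict_crunch(54.0)
```

The same pattern, fitting a model that maps a cheap, non-destructive measurement to a quality score, generalizes to many of the quality-control use cases above.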

What can you do with AI?

When it comes to introducing AI into your business, one of the biggest challenges is figuring out which opportunities are real and present. A great way to kick-start an AI project is to look for areas where too much human effort is required to make better decisions or complete routine tasks while large volumes of data (internal or external) sit idle or are underutilized. Bringing advanced data intelligence and automation to processes and workflows can significantly boost productivity and team morale.

Another important lesson with AI implementations is embracing change and not being afraid to think outside of standard applications. You don’t fire lasers at potato chips to measure their crunchiness without a little creativity and a sense of adventure.

To brush up on key AI terminology, be sure to read Entefy’s 53 useful terms in the world of artificial intelligence.


Creating better customer experience with better AI

When people think of AI, good customer service may not be the first thing that comes to mind. Most people jump straight to robots that lack empathy and simply respond to queries based on their programming.

But today’s AI is more like Rosie the Robot. It’s here to help find answers and support customer service and sales teams. AI can provide instant insights that would take a person years or even a lifetime of experience to generate. Businesses that provide their customers with highly personalized experiences can benefit from increased sales and better ROI on marketing spend. Artificial intelligence used in this way can boost revenue by 58% while increasing engagement by 54%.

Here’s how AI is shaping the future of customer service:

Better personalization and better offers

We produce an ocean of data as we surf the Internet. Every time we visit a website, use an app, or interact with a company on social media, we leave behind bits of information that indicate our preferences or buying habits. It can sound a little unsettling to think we’re leaving all that information behind, but, done correctly, that information can be useful in producing highly personalized offers to customers while protecting their privacy.

AI is reaching the point where it no longer just recommends scary movies because you watched a couple of horror flicks a few years ago. It’s capable of analyzing larger, more complex datasets to create intuitive and useful experiences. This matters now more than ever as customer expectations are at an all-time high.

Say an online shopper has been browsing through a specific set of products. If they’ve made prior purchases from a particular merchant, then the merchant can notify the shopper when those products go on sale. Or better yet, predict related needs for that specific shopper before they even come up. In addition, the simple fact that the shopper is online doesn’t exclude offline conditions from making an impact on their purchasing. Advanced AI systems can now take into account external factors such as weather or economic conditions to hyper-personalize the online shopper’s experience.
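A rough sketch of the idea in the paragraph above: score an offer for a shopper by combining on-site signals (browsing, purchase history) with an external factor such as weather. The weights, categories, and signal names here are hypothetical, chosen only to illustrate how offline conditions can feed into personalization.

```python
# Illustrative offer scoring that blends on-site behavior with an
# external condition (weather), as described above. All weights and
# category names are made up for the sketch.

def offer_score(views_last_week, bought_before, is_rainy, category):
    """Higher score -> more relevant offer for this shopper right now."""
    score = 0.0
    score += min(views_last_week, 10) * 0.5   # recent interest, capped
    if bought_before:
        score += 2.0                          # prior relationship with merchant
    if is_rainy and category == "umbrellas":  # external condition boosts fit
        score += 3.0
    return score

# A rainy forecast makes the umbrella promotion more timely.
assert offer_score(4, True, True, "umbrellas") > offer_score(4, True, False, "umbrellas")
```

Production systems learn these weights from data rather than hand-coding them, but the structure—behavioral signals plus contextual features feeding a relevance score—is the same.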

Organizations are beginning to pay attention to customer sentiments too by analyzing customer support tickets and social media. Here, words really do matter and properly assessing customer experiences can be the difference between brand loyalty and brand fatigue. By anticipating brand fatigue and finding ways to avoid customer churn, businesses can improve profitability. “It is 6-7X more expensive for companies to attract new customers than to keep existing customers.”
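As a minimal sketch of the sentiment-to-churn idea above: score support tickets by word sentiment and flag customers whose recent tickets trend negative. Real systems use trained NLP models; the word lists and threshold here are illustrative.

```python
# Toy sentiment scoring on support tickets to flag churn risk.
# Word lists are illustrative; production systems use learned models.

NEGATIVE = {"broken", "refund", "cancel", "disappointed", "worst"}
POSITIVE = {"great", "love", "thanks", "helpful", "fast"}

def sentiment(ticket_text):
    words = ticket_text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def churn_risk(tickets):
    """Flag a customer when their recent tickets trend negative overall."""
    return sum(sentiment(t) for t in tickets) < 0

assert churn_risk(["My order arrived broken", "I want a refund now"])
assert not churn_risk(["Great service, thanks"])
```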

The best part of all this is that AI can create these personalized experiences faster and faster. AI dramatically improves the way information is processed and is thus an invaluable ally in delivering personalization. With proper implementation, personalization can even manifest in real time, with a targeted message to the right customer at the right moment.

Localization

It’s a global market. Businesses are no longer stuck selling to people who live nearby. As a result, being able to provide a local experience on a global scale is something that matters to customers.

Providing customers with an experience that reflects their reality gives them a feeling of inclusivity. With localization, it’s sometimes the small things that make the difference. For example, AI and machine learning can help companies move beyond basic language translation to include actual localized content from a variety of sources.

Although true localization relies heavily on human-centric effort, AI can help deliver data-driven recommendations that can highlight differences in local behavior and culture. Everything from language patterns to naming conventions to local holidays can impact the customer’s experience with a company. Providing that localized experience can make customers feel right at home, regardless of where home is.

Provide answers faster

Eventually, everybody has questions when they engage with a business. It could be about a specific product feature or how something should be repaired. Regardless of the question, no one likes waiting around for an answer. Not at a physical location and definitely not online.

It wasn’t that long ago that emails and Polaroids felt instant. These days, instant has a whole new meaning when mere seconds can mean the difference between a happy customer and a negative review. Some things can still take time, but when it comes to customer service, customers are 7 times more likely to buy from a company that gets back to them within an hour. What is surprising is that 24% of companies take longer than 24 hours to respond and 23% never respond at all. Chatbots can help fill that gap by providing automated responses as soon as a question has been asked.

Successful implementation of AI-powered technologies such as chatbots can help reduce the customer query response time even further by providing answers in real-time, around the clock. Faster answers lead to happier customers.
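The chatbot role described above can be sketched in a few lines: match an incoming question against known topics and fall back to a human agent otherwise. The FAQ entries and keyword-lookup approach are purely illustrative; real chatbots use NLP models rather than substring matching.

```python
# Minimal FAQ auto-responder in the spirit of the chatbots described above.
# The keyword lookup stands in for the NLP a production chatbot would use.

FAQ = {
    "hours": "We're open 9am-6pm, Monday through Friday.",
    "returns": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def auto_reply(question):
    q = question.lower()
    for keyword, answer in FAQ.items():
        if keyword in q:
            return answer  # instant, around-the-clock answer
    return "Let me connect you with a human agent."

assert auto_reply("What are your hours?") == FAQ["hours"]
assert "human" in auto_reply("Do you sell gift cards?")
```

Note the fallback: the goal is to answer common questions instantly while routing anything unrecognized to a person, augmenting the service team rather than replacing it.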

Getting the best of both worlds

We’re just starting to experience how AI is changing the way we engage with our favorite brands, online and offline. Businesses across industries, from retailers to banks, airlines, and entertainment companies have all begun investing in AI technologies to improve customer service. When it comes to customer experience, the true promise of AI is not to replace the human element but rather augment it with better insights and recommendations in less time.

For a closer look at how companies are using AI for the many aspects of customer engagement, be sure to read our previous blog, “AI and the 5-star customer experience.”


Entefy granted new patents in support of its advanced communication and remote workforce technology

Entefy expands its IP portfolio with a set of newly awarded patents by the USPTO 

PALO ALTO, Calif. May 31, 2020. Entefy Inc. continues to expand its intellectual property portfolio with new trade secrets and newly issued patents by the U.S. Patent and Trademark Office (USPTO). Entefy’s patents represent a range of novel software and intelligent systems that serve to strengthen the company’s core technology, protect its business, and better serve its customers.

“We recognize the value and need for innovation, especially as current economic, social, and health crises are ushering in a new normal,” said Entefy’s CEO, Alston Ghafourifar. “As a company and as a team, we’ve been focused on the type of smart technologies that can power our society at the complex intersection of people, data, and processes. Particularly the type of technologies essential to the remote workforce.”

Expanding on Entefy’s universal communication and collaboration technology, Patent No. 10,587,553 and Patent No. 10,606,871 offer improved methods to simultaneously manage conversations across multiple channels or formats. This set of Entefy capabilities is designed to utilize robust, multimodal machine intelligence to analyze conversations, communication patterns, and individual/group behavior in order to increase worker productivity, streamline knowledge management, and reduce “inbox overload.” For businesses, this technology can also provide managers with unparalleled insights and recommendations regarding organizational dynamics and productivity.

Patent No. 10,587,585 describes the “system and method of presenting dynamically-rendered content in structured documents” and Patent No. 10,606,870 describes the “system and method of dynamic, encrypted searching.” These patents contribute to Entefy’s overarching work in AI-powered search and knowledge management technologies that preserve data privacy while sharing assets.

Entefy was also awarded Patent No. 10,491,690, which describes the “distributed natural language message interpretation engine.” This engine offers specific technical advancements, including Entefy’s AI-powered Message Understanding Service, which can improve performance of natural language-based systems such as digital personal assistants, chatbots, or other conversational AI services.

Entefy has developed an exclusive set of intellectual property assets spanning a series of domains from digital communication to artificial intelligence (AI), dynamic encryption, enterprise search, and others. “As a company, we invest heavily in R&D to create new technologies that can address high value business and consumer needs,” said Ghafourifar. Today’s update is the latest in a series of patent announcements, including earlier Entefy patents that cover the Company’s universal interaction platform, intelligent search capabilities, and APC technology.

ABOUT ENTEFY

Entefy is an advanced AI and process automation company, introducing the world’s 1st end-to-end, multisensory AI software platform. Businesses use Entefy to optimize operations for every corner of their organization—from knowledge management to communication, search, process automation, cybersecurity, data privacy, IP protection, customer analytics, forecasting, and much more.

Entefy’s integrated intelligence platform encapsulates advanced capabilities in machine cognition, computer vision, natural language processing, audio analysis, and other data intelligence. Get started at www.entefy.com.


How machine learning will help us outsmart the coronavirus

“COVID-19 is a new disease and we are still learning how it spreads…” At the time of this writing, this is the message you’ll find when visiting the CDC (Centers for Disease Control and Prevention) website looking for information on how this novel coronavirus can spread.

What’s been so worrisome about COVID-19, the disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is its accelerating rate of transmission. What emerged in Wuhan, China only 3 months ago, has rapidly infected people in nearly every country. According to the World Health Organization (WHO), “It took 67 days for 100,000 cases to be reported, but just 3 days to go from 400,000 to 500,000 cases.” This, despite unprecedented efforts by many countries large and small trying to contain this disease. And as the world finds itself underprepared in dealing with this type of crisis, countless battalions of experts in varying disciplines are contributing to containment and recovery efforts. One such set of experts includes data scientists, software engineers, and automation experts who are unleashing information and technology as our allies in this emergency.

Monitoring and Forecasting

Machine learning is already at work 24/7, assisting with improved tracking of COVID-19 data as well as predicting its spread on a domestic and international scale. The breadth and depth of data produced on this pandemic makes it unfeasible for humans to review and analyze. This includes information from global news sources, health organizations, research teams, governments, the travel industry, as well as manufacturing and logistics data.

AI algorithms are being used by a number of experts to examine this mountainous, diverse set of data in order to better identify relevant information and pinpoint valuable correlations that exist between certain data points. For example, how to mitigate transmission risks, how the spread of the disease and its associated mortality rate map from one area to another during a particular time interval, or how to forecast the efficacy of certain public health practices.

These findings have grown exponentially over recent months and have become the basis for a growing library of research papers now released as part of CORD-19 (the COVID-19 Open Research Dataset)—“the most extensive collection of scientific literature related to the ongoing pandemic.” CORD-19 came together as a result of a global partnership among leading research groups, and the dataset is being offered as a free, open resource to researchers everywhere who can benefit from its more than 45,000 scholarly articles pertaining to the coronavirus family, including COVID-19.

Both traditional data analytics and machine learning can prove essential in analyzing this rapidly expanding sea of data. While traditional data analytics is useful in descriptive ways, explaining current or historical events, machine learning shines in its endless predictive capabilities, learning from different types of structured and unstructured data. Good examples of unstructured data include news articles, images and videos, research reports, or communication threads between any number of people or groups. For this pandemic, AI learning systems can rapidly comb through and analyze massive amounts of data from hundreds of thousands of sources to expose pertinent patterns, correlations, and recommendations.

Diagnosis and Treatment

Modeling the spread of the virus is important and can help save lives by ensuring preparedness, optimized resource allocation, and efficient delivery of care. However, without rapid improvements in diagnosis and treatment, our collective abilities to contain the transmission and treat the virus will continue to be compromised. This is another area where machine learning can be of incredible value. And, in recognition, the U.S. Government has announced the COVID-19 High Performance Computing Consortium to provide researchers access to world-class supercomputers for advanced data science and artificial intelligence modeling.

AI has also silently emerged as a transformative technology for the healthcare industry, enabling incredible efficiencies in diagnosis, drug discovery, and drug development. In some cases, the medical community has already seen the benefits of AI and big data for managing the coronavirus outbreak. WHO and China teamed up for a joint mission, headed by Dr. Bruce Aylward of WHO and Dr. Wannian Liang of the People’s Republic of China, to understand this novel disease and inform next steps for readiness and preparedness of the rest of the global community. The 40-page report released last month describes how these new technologies were implemented “to strengthen contact tracing and the management of priority populations.” As the virus continues to spread, more data is being made available each day, broadening the scope of what can be accomplished with AI technologies.  

Other use cases include computer vision technology used on cameras in airports, railway stations, and other public areas to detect and flag individuals with fever. With this technology, a task which would otherwise require an army of people to administer can now be safely accomplished via machines at a rate of 300 people per minute. Computer vision can also help interpret CT scans and detect coronavirus in as little as 20 seconds versus the estimated 5-15 minutes it would take a human doctor to diagnose. Relying on humans to review and interpret millions of CT images per day is impracticable at best. With computer vision, machines can process those same millions of CT images at lightning speed and with accuracy on par with that of human doctors.
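The screening step in the fever-detection use case above reduces to a simple rule once the computer vision system has produced a temperature estimate per person. The threshold and readings below are illustrative; real deployments also correct for ambient conditions and sensor calibration.

```python
# Simplified sketch of the final step of camera-based fever screening:
# flag individuals whose estimated skin temperature exceeds a threshold.
# The threshold and readings are illustrative only.

FEVER_THRESHOLD_C = 38.0

def flag_fevers(readings):
    """readings: list of (person_id, temperature_c); return flagged ids."""
    return [pid for pid, temp in readings if temp >= FEVER_THRESHOLD_C]

crowd = [("p1", 36.6), ("p2", 38.4), ("p3", 37.1), ("p4", 39.0)]
assert flag_fevers(crowd) == ["p2", "p4"]
```

The hard part, of course, is the computer vision that turns thermal camera frames into per-person temperature estimates at 300 people per minute; the decision logic itself is trivial by comparison.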

Diagnosis is only part of the COVID-19 journey. As the world is currently experiencing in China, Italy, and the United States, healthcare systems are struggling to meet needs for treatment and patient care. Current estimates for availability of a COVID-19 vaccine are as high as 18 months or longer. That doesn’t count the time needed to manufacture and distribute the vaccine at the potential scale required. Even in ordinary times when the world is not facing a global pandemic, development of a single drug or vaccine requires incredible effort, resources, experimentation, testing, and time.

AI and machine learning have already proven successful at accelerating drug discovery by enabling massively more efficient chemical compound analysis, outcome estimation, and drug interaction modeling. These are tasks which can traditionally take billions of dollars and years of effort from armies of scientists before leading to positive results. AI can cut this time and cost significantly, allowing faster migration from discovery to development and ultimately release. Drug development focuses on transforming compounds into products that are safe for consumption, something for which machine learning technologies can be used to improve analysis and drug production yields. Today, the difference between efficiently producing a compound that works and one that doesn’t can mean the difference between making things better or much worse.

Manufacturing and Logistics 

AI has already proven transformative in manufacturing, logistics, delivery infrastructure, and other aspects of supply chains. These are critical pillars in the global response to COVID-19, encompassing everything from personal protective equipment (PPE) to life-saving ventilators to everyday household supplies and food items. As demand for supplies and equipment continues to increase the world over, optimizing these important pillars becomes more important than ever.

The modern supply chain is a vast network of producers, vendors, retailers, distributors, warehouses, and transportation companies connected to create and deliver goods to end customers. This network is complex and rich in data generated by people and the many smart sensors and devices along the entire chain. However, the entities participating in this process are mostly unprepared to fully harness the true power of predictive analytics, real-time insights, and intelligent automation needed to optimize costs, units, and operations. In fact, “94% of the Fortune 1000 are seeing coronavirus supply chain disruptions.”

The current pandemic is accelerating work in a number of areas, including advanced robotics for delivery and sterilization, as well as machine learning for demand forecasting, risk assessment, sourcing, cost, inventory, and logistics optimization. Delivery networks are also being stretched to the limit, with numerous efforts in place to use machine learning to balance load and predict demand while also exploring new AI-powered machine delivery methods such as drone delivery, where computer vision plays an important role.

News and Education

At the Munich Security Conference in February, WHO Director-General Tedros Adhanom Ghebreyesus stated that “We’re not just fighting an epidemic; we’re fighting an infodemic.” Throughout news and social media, citizens are inundated with reports, tips, stats, and more, much of which is unclear, conflicting, and sometimes even inaccurate. This means that important messages can be lost in the noise and misinformation can permeate the knowledge sphere. This is another area where AI can prove valuable.

Similar to how computer vision systems can rapidly scan images and video feeds at a scale unfeasible for humans, natural language processing (NLP) can be unleashed on the world’s news outlets and social media feeds to synthesize the sheer volume of information, remove redundancies, filter out old news, flag misinformation, and prioritize new or unique content. Misinformation and “fake” news can spread faster than any person can keep up with, but AI systems with robust and diverse NLP capabilities can scale as far and wide as needed when powered by the right computing infrastructure.
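One of the NLP tasks named above, removing redundant headlines, can be sketched with a simple word-overlap measure. Real pipelines use learned text embeddings, but the deduplication idea is the same: drop an item when it is too similar to something already kept. The headlines and threshold below are illustrative.

```python
# Toy near-duplicate filtering of headlines using Jaccard similarity
# over word sets, illustrating the redundancy-removal step above.

def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def dedupe(headlines, threshold=0.6):
    kept = []
    for h in headlines:
        # keep a headline only if it is dissimilar to everything kept so far
        if all(jaccard(h, k) < threshold for k in kept):
            kept.append(h)
    return kept

feed = [
    "WHO reports new guidance on mask use",
    "New WHO guidance on mask use reported",
    "Local hospital expands ICU capacity",
]
assert len(dedupe(feed)) == 2  # the two near-identical headlines collapse to one
```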

Conclusion 

This novel coronavirus has inspired significant global collaboration with people around the world working day and night to contain, manage, support, and treat those negatively impacted. Over the past several weeks, it has become clear that the need for answers and solutions is growing, and innovation is vital in order to accelerate recovery. Ultimately, the role of machine intelligence is to save time and create efficiency, something which may be more important now than ever before.

Machine learning is a powerful weapon in the arsenal of defense as it can help monitor and forecast the spread of the virus plus provide faster, more precise diagnostics and treatments. But it can also optimize manufacturing and distribution of goods and help in educating the public about the disease and our individual responsibilities in the context of COVID-19’s broader impact on our economy, healthcare system, businesses, and society at large. Significant resources are being poured into solutions which can help healthcare professionals and others rise to this unique challenge. This may be just the beginning, yet we’re already seeing examples where machine learning is helping us outsmart the coronavirus.


What makes advanced AI unique

Artificial intelligence is the umbrella term for computer systems that can interpret, analyze, and learn from data in ways similar to human cognition. The field of AI is vast, encapsulating numerous subfields and applications related to machine intelligence. With AI, computers can perform a wide range of tasks—from playing chess to diagnosing cancer and virtually everything in between. 

The term artificial intelligence was first introduced by American computer scientist John McCarthy in 1956 at a summer conference at Dartmouth College in New Hampshire. That conference is believed by many to have launched AI as a genuine field of research. In the ensuing decades, a number of inventions, discoveries, and experiments have led to the many ways AI turns data into insights, powering our society and influencing how we use computers every day.

With AI and machine learning, computers are programmed or “trained” to perform intelligent tasks that are either “narrow” or “general.” Artificial narrow intelligence, or weak AI, pertains to specific, pre-defined tasks such as predicting the weather, recommending your favorite music, or even autonomous driving. Narrow AI can on its own transform the way we treat a particular process or task. Most of what we see today in terms of machine intelligence falls within this category and shouldn’t be taken for granted. Narrow AI is capable of analyzing massive volumes of data, thousands of times faster than people and typically with fewer errors. Narrow AI also relieves us of mundane tasks so that we can be more efficient with our time.

Artificial general intelligence (AGI) or strong AI is related to more complex functionality that is expected to match human level capabilities across multiple domains. Think about the very advanced AI systems you see in sci-fi movies where the interactions between people and machines are seamless and feel conscious. An AGI system can draw valuable insights from diverse data sets (e.g. images, text, audio files, logs) and use cognitive computing to perform functions that are indistinguishable from those performed by a human.

As described in one of our prior blogs, traditional data analytics and machine learning differ in several key ways, including structure, purpose, and benefits. Without diving into too many details, in short, traditional data analysis is descriptive and quite useful in explaining current or historical data while machine learning is predictive and capable of learning from data in ways that provide valuable insights and recommendations.
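The descriptive-versus-predictive contrast above can be shown in a few lines: descriptive analytics summarizes what happened, while even the simplest predictive model extrapolates to what comes next. The sales figures below are illustrative, and the "model" is a deliberately naive trend extension.

```python
# Sketch of the contrast described above: descriptive analytics vs.
# a (very simple) predictive model. All numbers are illustrative.

sales = [100, 110, 120, 130, 140]  # units sold in five past weeks

# Descriptive: what happened, on average.
average = sum(sales) / len(sales)

# Predictive (naive trend): extend the average week-over-week change.
avg_change = (sales[-1] - sales[0]) / (len(sales) - 1)
next_week_forecast = sales[-1] + avg_change

assert average == 120
assert next_week_forecast == 150
```

Machine learning replaces the naive trend with models that learn from many structured and unstructured signals at once, but the dividing line is the same: describing history versus predicting the future.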

AI/machine learning is a dynamic process, often requiring algorithmic model training, validation, testing, refinement, and integration with other software components to create real value. Unlike many other engineering functions such as traditional software engineering, where you can create a solution based on certain known requirements, quality machine learning requires deep model and data exploration to arrive at something useful. Simply put, experimentation and embracing the unknown is par for the course in advanced AI.

Building models and proper orchestration are also core to success here. Added complexity sets in when the intended use case is multimodal and the data requires multimodal AI processing, creative ensembling of multiple models, and intricate queuing and software orchestration. This is where combinatorial expertise in machine learning, compute infrastructure, and software engineering is needed but currently in rare supply.      

Then there are the 4 Vs of data, which are important criteria for success in advanced AI initiatives: data volume, variety, velocity, and veracity. The road from data to insights can be patchy and long, requiring many types of expertise. Dealing with the 4 Vs early in the exploration process can help accelerate discovery and unlock otherwise hidden value.

It is also important to note that high accuracy and precision in artificial intelligence is the byproduct of rigorous scientific, engineering, and design efforts. This is where advanced science meets art to deliver results. And the journey from ideation to implementation for even a single AI application requires cooperation with other contributors, including those fluent in business, operations, legal, and cybersecurity—18 skills in all.

For a quick refresher on key AI terminology, be sure to read the 53 useful terms in the world of artificial intelligence.


AI and the future of shopping

If you’ve been paying attention to retail spending over the past few years, it won’t surprise you to learn that e-commerce in the United States continues to gain traction at record speed. In fact, e-commerce is expected to “surpass 10% of total US retail sales for the first time in history.” By 2023, online spending by U.S. consumers is expected to grow to $970 billion (a 65% increase over this year’s volume) and global retail e-commerce sales are expected to balloon to $6.5 trillion in that same year. There are several key factors that contribute to this growth and consumers’ attraction to e-commerce. These factors include the convenience of 24/7/365 accessibility, nearly limitless product selection, quick price comparisons, fast checkouts using one-touch purchase options, real-time updates of new product launches, exclusive promotions, same-day delivery, as well as enhanced personalization and customer service using artificial intelligence.

These days, retailers and e-tailers have access to tremendous amounts of data about their customers, competition, and the market at large. But this collection of data isn’t easy to manage, growing in volume and complexity daily. Therefore, for online and brick-and-mortar merchants alike, the challenge remains connecting and making sense of all of that data, making it actionable for either revenue growth or cost reduction. And that’s where machine learning steps in to power the future.

With AI and machine learning, companies can turn idle data into valuable insights. For example, AI can automatically categorize products, make better recommendations, dynamically adjust pricing based on customer behavior and inventory levels, provide virtual assistance for customer queries or concerns, and optimize supply chains like never before. Given the broad applicability of AI and its promise to level the playing field in an increasingly competitive industry, retailers worldwide are predicted to spend $7.3 billion on AI by 2022, up from $2 billion in 2018.
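
As a simple illustration of the recommendation piece, here is a minimal sketch of co-occurrence-based recommendations in Python. The orders and product names are hypothetical, and production recommenders use far richer models than raw co-purchase counts:

```python
from collections import Counter

def recommend(orders, target, k=2):
    """Recommend the k products most often bought alongside `target`."""
    co_counts = Counter()
    for order in orders:
        if target in order:
            co_counts.update(p for p in order if p != target)
    return [product for product, _ in co_counts.most_common(k)]

# Hypothetical order history for illustration.
orders = [
    ["laptop", "mouse", "usb hub"],
    ["laptop", "mouse"],
    ["laptop", "backpack"],
    ["mouse", "usb hub"],
]

print(recommend(orders, "laptop"))  # ['mouse', 'usb hub']
```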

Depending on the data type, specific machine learning methods and models are used to get to the intended outcome. For instance, computer vision, a subfield of AI, is used to classify and contextualize the content of digital images and videos. Computer vision gives companies the ability to use machines to detect and label objects in images without having their personnel do the same. Companies can also unclutter their listings or filter out offensive images in this way. This is the same technology that gives consumers the ability to find their favorite products (or something similar) by simply using a picture of the item.
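
One classic building block behind image-based product search is perceptual hashing, where visually similar images produce similar fingerprints. The sketch below is a toy average-hash in pure Python; the 2x2 “images” are invented for illustration, and real systems use learned embeddings over full-resolution photos:

```python
def average_hash(pixels):
    """Hash a grayscale image (a 2D list of 0-255 values) into a bit list.

    Each bit records whether a pixel is brighter than the image's mean,
    so visually similar images produce similar hashes.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits between two hashes; lower means more similar."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Tiny made-up 2x2 "images": two near-identical product shots, one different.
shoe_a  = [[200, 210], [20, 30]]
shoe_b  = [[190, 205], [25, 35]]
handbag = [[10, 220], [240, 15]]

print(hamming(average_hash(shoe_a), average_hash(shoe_b)))   # 0: near-duplicates
print(hamming(average_hash(shoe_a), average_hash(handbag)))  # 2: different images
```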

Other uses for computer vision include facial recognition, sentiment analysis, and logo detection. Merchants can use facial recognition and sentiment analysis to recognize repeat customers, personalize the customer experience, and in some cases, provide better security by identifying and monitoring high-risk individuals. Machine-powered logo detection is used to support marketing, identify counterfeits, and protect brands against pervasive infringement. Take things to the next level and you’ll get fully automated, cashless stores where consumers can simply walk into a location, grab their favorite merchandise off the shelf, and walk out without ever having to stop at a cashier to scan an item or pull out a credit card.

Natural language processing (NLP) is a subfield of AI focused on processing and analyzing natural human language or text data. Here, finding the right product becomes much easier because NLP can interpret the customer’s intent and the shopping context much better than traditional search systems that rely solely on exact “keyword” matching. This includes better performance even when the user requests involve typos or poor grammar. NLP can also help drastically improve customer service, both in terms of leveraging sentiment analysis capabilities to better support customer needs and using conversational chatbots to streamline the call center experience. Imagine, smarter systems where clunky “press 1,” “press 2” prompts are a way of the past, replaced by NLP-powered machines that can seamlessly answer questions and carry on natural conversations.
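
To see why typo tolerance matters, compare exact keyword matching against a simple similarity-based match. This sketch uses Python’s standard-library difflib as a crude stand-in for real NLP intent models; the product catalog is hypothetical:

```python
import difflib

catalog = ["wireless headphones", "running shoes", "espresso machine"]

def keyword_search(query):
    """Exact keyword matching: a typo means no results."""
    return [item for item in catalog if query.lower() in item]

def fuzzy_search(query):
    """Typo-tolerant matching using string similarity instead of exact terms."""
    return difflib.get_close_matches(query.lower(), catalog, n=1, cutoff=0.6)

print(keyword_search("expresso machine"))  # [] -- the typo breaks the exact match
print(fuzzy_search("expresso machine"))    # ['espresso machine']
```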

While AI is improving your shopping experience, it is also being employed to simplify the less visible aspects of the supply chain responsible for the production and distribution of the products you buy. Ultimately, better supply chain management means less waste, faster production cycles, and lower costs. Historically, data analysis in these areas has been performed using traditional data analytics, but this is fast changing due to the explosion of data volume and complexity. Companies are turning to the “predictive” power of advanced machine learning to optimize everything from manufacturing to warehousing to transportation and logistics. For example, studies indicate “that unplanned downtime costs manufacturers an estimated $50 billion annually, and that asset failure is the cause of 42 percent of this unplanned downtime.” Predictive maintenance powered by AI is now delivering the required ROI by reducing unplanned downtime and improving asset efficiency.
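
A minimal flavor of predictive maintenance is anomaly detection over sensor readings. The sketch below flags values far from the mean using a z-score threshold; the vibration data is invented, and production systems use learned models over many signals rather than a single statistical rule:

```python
import statistics

def flag_anomalies(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean,
    a crude stand-in for a learned failure-prediction model."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [x for x in readings if abs(x - mean) / stdev > threshold]

# Hypothetical vibration readings; the spike might indicate a failing bearing.
vibration = [1.1, 1.0, 1.2, 0.9, 1.1, 1.0, 5.8, 1.2, 1.0]
print(flag_anomalies(vibration))  # [5.8]
```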

As U.S. and global consumer demand for retail products continues to rise, it’s clear that the world’s reliance on advanced AI and machine learning capabilities will rise in lockstep. These new capabilities present new opportunities for retailers and e-tailers to deliver more personalized customer experiences at greater scale than ever before.

For a quick refresher on key AI terminology, refer to our previous article defining 53 useful terms in the world of artificial intelligence.


Four new patents issued to protect Entefy intelligent search and cyber privacy technologies

USPTO awards Entefy new patents for core inventions in intelligent search and cyber privacy

PALO ALTO, Calif., October 31, 2019. Entefy Inc. inventors have been awarded 4 new patents by the U.S. Patent and Trademark Office (USPTO), covering the company’s innovations in the areas of intelligent search and cyber privacy.

Patent No. 10,353,754 describes the “application programming interface analyzer for a universal interaction platform.” This API analyzer acts as an intelligent service discovery mechanism to identify and automatically determine the formats and protocols necessary to enable natural language communication between web sites, smart devices, other API-accessible services, and users of Entefy’s core intelligence systems.

“In today’s fast-moving tech landscape, users expect smart natural language interfaces to be available for a broad range of services and IoT devices,” said Entefy’s CEO, Alston Ghafourifar. “This API analyzer technology can power a number of AI-based use cases involving advanced people-to-service communication.”

Continuing with Entefy’s advanced work in search and knowledge management, USPTO awarded Entefy Patent No. 10,394,966 which describes “systems and methods for multi-protocol, multi-format, universal searching.” This invention works in concert with Entefy’s patented universal message object (UMO) structure to manage complex mapping between diverse datatypes and corresponding user preferences—for example, learning how even the same word can differ in meaning between various users, thus enabling better precision in search.

Entefy’s Adaptive Privacy Control (APC) technology enables new levels of data protection across a number of popular data types and formats. APC provides individual users with unprecedented control over the visibility and shareability of their content. With APC, users can encrypt even small bits of information within larger files such as their name in an important document, their social security number in a spreadsheet, or even a small region of pixels in a photograph. With Patent No. 10,395,047 and Patent No. 10,410,000, protection for APC technology now includes even more options in complex media such as audio and video files.

“Entefy has always considered invention as a prime part of our culture and a true necessity as we work to advance the state of the art in our industry,” said Mr. Ghafourifar. Today’s update is the latest in a series of patent announcements, including earlier Entefy patents that cover the Company’s technologies related to its universal interaction platform, APC, and secure document collaboration.

ABOUT ENTEFY

Entefy is an AI software company with multimodal machine learning technology (on-premise and SaaS solutions) designed to redefine automation and power the intelligent enterprise. Entefy’s multimodal AI platform encapsulates advanced capabilities in machine cognition, computer vision, natural language processing, audio analysis, and other data intelligence. Organizations use Entefy solutions to accelerate their digital transformation and dramatically improve existing systems—everything from knowledge management to communication, search, process automation, cybersecurity, data privacy, IP protection, customer analytics, forecasting, and much more. Get started at www.entefy.com.


AI and pharma pair up to accelerate drug discovery, development, and commercialization

The pharmaceutical industry grapples with a daunting challenge—producing and delivering more effective drugs at ever-increasing costs. Over the years, it has become more difficult for drug companies to keep pace with grueling market and regulatory demands. The costs of drug research, clinical trials, manufacturing, and compliance are reaching new highs, and competition is pressuring the industry to adopt new technologies that can deliver efficiency to every aspect of the development and distribution process.

Let’s examine 3 core areas within the drug product life cycle in which AI can boost performance and results:

1.     Drug discovery. Discovering even a single new drug requires tremendous effort and commitment to experimentation. For instance, “it takes about a decade of research — and an expenditure of $2.6 billion” for a single drug to go from the research phase to being available on the shelves for purchase. Scientists painstakingly assess each compound within the initial screening to verify the possibility of success or failure. All the while, the company has to spend significant amounts of time and money to keep up with the regulatory and scientific rigor required during the drug discovery process. This is where AI can help. Uses of AI and machine learning (ML) to enhance the drug discovery process include quicker initial screenings of different compounds within a particular drug as well as targeting and identifying specific components needed to formulate a certain drug using advanced data analytics. The result? Faster and more cost-effective discovery, which could ultimately create more treatment choices and more affordable healthcare for all.

2.     Drug development. Unlike drug discovery, drug development focuses on transforming the newly discovered compound into a product that is safe for market consumption and approved by the appropriate regulatory authorities. In pharma, drug development brings to light a trend first observed in the 1980s, “Eroom’s law” (Moore’s law spelled backwards). Eroom’s law states that, despite technological advancements, the cost of drug development increases year over year while the number of actual drug approvals decreases. This is a concern for many within the pharma industry, and AI is being targeted as a solution to help reverse the trend.

Clinical trials represent important steps in the drug development process and are designed to collect safety and efficacy data related to new drugs. These clinical trials consist of multiple phases, “with Phase III trials requiring a larger pool of patients and being significantly more expensive and complex than Phase I trials.” Even with the significant resources allocated to such trials, only 1 out of 10 drugs that enter Phase I is approved by the FDA. In general, clinical trials are fraught with inefficiencies, including bottlenecks in recruitment, flaws in study design, and data management issues such as tracking whether participants take the right dosage on schedule. AI can help improve the entire process and put large volumes of data to use in unprecedented ways, including information contained in clinical notes, authorized medical records, and patient-generated data.

3.     Commercialization. After years of research and clinical development, as well as the required approvals by the FDA, a new drug finally has the opportunity to be marketed and made available for sale to the public. During the commercialization phase, drug companies manage a number of important operations, including manufacturing, quality, and supply chain, to ensure successful delivery and market adoption of their newly approved drugs. Whether it is related to customer service, supply chain, personalization of medicine targeted to specific patients, regulatory compliance, or risk management, AI can play a role in making commercialization more efficient and productive. For example, customer service bots can help create a more interpersonal connection for the patient in the process of finding an optimal treatment option. In the supply chain, AI can run analytics projections in real time to “better forecast demand, and automatically identify and mitigate supply risks.” AI can also help determine “a new therapy’s efficacy and side-effects profile for a specific patient or patient group,” allowing for more personalized treatment options that differ between patients and their respective medical histories. Within post-marketing surveillance, AI and ML can also help better manage risk by continuously monitoring web and social platforms.

Patients and doctors are already benefiting from AI in ways that would have felt like science fiction only a few years ago. In pharma, with costs rising meteorically in the face of growing market and regulatory demands, the need for efficiency is greater than ever. So, what will the future hold for the pharma industry? If early activity is any indication, advanced technologies powered by AI are slowly transforming it, promising to disrupt the future of drug discovery, development, and commercialization.

Article contributors: Entefy


53 Useful terms for anyone interested in artificial intelligence

These days, artificial intelligence (AI) seems to be an active ingredient in virtually every conversation about advanced technologies and automation. Given the hyperactivity in the domain, many professionals and business leaders are evaluating the power of AI and machine learning technologies to ensure a competitive edge going into the next decade.

Needless to say, artificial intelligence is a rich field for discovery and understanding. However, without deeper AI training and education, it can be quite challenging to stay abreast of the rapid changes taking place within the field. At Entefy, we’re passionate about breakthrough computing and the many ways it can help people live and work better. So, to help demystify artificial intelligence and its many sub-components, our team has assembled this list of useful terms for anyone interested in AI and machine learning.

Be sure to bookmark this page for a handy quick-reference resource.

Algorithm. A procedure or formula, often mathematical, that defines a sequence of operations to solve a problem or class of problems.

Artificial intelligence (AI). The umbrella term for computer systems that can interpret, analyze, and learn from data in ways similar to human cognition.

Cardinality. In mathematics, a measure of the number of elements present in a set.

Centroid model. A type of classifier that computes the center of mass of each class and uses a distance metric to assign samples to classes during inference.
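
For illustration, a centroid model can be sketched in a few lines of pure Python; the 2D points and class labels below are invented:

```python
def fit_centroids(samples):
    """Compute each class's centroid (mean point) from labeled 2D samples."""
    return {
        label: tuple(sum(p[i] for p in points) / len(points) for i in range(2))
        for label, points in samples.items()
    }

def predict(centroids, point):
    """Assign `point` to the class whose centroid is closest (squared Euclidean)."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(centroids[c], point)))

# Invented 2D feature points for two classes.
samples = {"cat": [(1, 1), (2, 1)], "dog": [(8, 9), (9, 8)]}
centroids = fit_centroids(samples)
print(predict(centroids, (2, 2)))  # cat
```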

Chatbot. A computer program (often designed as an AI-powered virtual agent) that provides information or takes actions in response to the user’s voice or text commands or both. Current chatbots are often deployed to provide customer service or support functions.

Class. A category of data indicated by the label of a target attribute.

Classifier. An instance of a machine learning model trained to predict a class.

Class imbalance. The quality of having a non-uniform distribution of samples grouped by target class.

Cognitive computing. A term that describes advanced AI systems that mimic the functioning of the human brain to improve decision-making and perform complex tasks.

Computer vision (CV). An artificial intelligence field focused on classifying and contextualizing the content of digital video and images. 

Data curation. The process of collecting and managing data, including verification, annotation, and transformation. Also see training and dataset.

Data mining. The process of targeted discovery of information, patterns, or context within one or more data repositories.

DataOps. Management, optimization, and monitoring of data retrieval, storage, transformation, and distribution throughout the data life cycle, including preparation, pipelines, and reporting.

Deep learning. A subfield of machine learning that uses artificial neural networks with two or more hidden layers to train a computer to process data, recognize patterns, and make predictions.

Derived feature. A feature whose value is created and set as a result of observations on a given dataset, generally through classification, automated preprocessing, or sequenced model output.

Ensembling. A powerful technique whereby two or more algorithms, models, or neural networks are combined in order to generate more accurate predictions.

F1 Score. A measure of a test’s accuracy calculated as the harmonic mean of precision and recall.

Feature. In ML, a specific variable or measurable value that is used as input to an algorithm.

Generative adversarial network (GAN). A class of AI algorithms whereby two neural networks (a generator and a discriminator) compete against each other, each improving as a result.

Hyperparameter. In ML, a parameter whose value is set prior to the learning process as opposed to other values derived by virtue of training.

Intelligent process automation (IPA). A collection of technologies, including robotic process automation (RPA) and AI, to help automate certain digital processes. Also see robotic process automation (RPA).

Logistic regression. A type of classifier that measures the relationship between a dependent variable and one or more independent variables using a logistic function.

Machine learning (ML). A subset of artificial intelligence that gives machines the ability to analyze a set of data, draw conclusions about the data, and then make predictions when presented with new data without being explicitly programmed to do so.

MIMI. The term used to refer to Entefy’s multimodal AI platform and technology.

Multimodal AI. Machine learning models that analyze and relate data processed using multiple modes or formats of learning.

N-gram model. In NLP, a model that counts the frequency of all contiguous sequences of [1, n] tokens.
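
For example, bigram (n = 2) counts can be computed in a few lines of Python; the sentence is arbitrary:

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count every contiguous sequence of n tokens."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = "the cat sat on the cat".split()
print(ngram_counts(tokens, 2).most_common(1))  # [(('the', 'cat'), 2)]
```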

Naive Bayes. A probabilistic classifier based on applying Bayes’ rule with strong (naive) assumptions about the independence of features.

Named entity recognition (NER). An NLP model that locates and classifies elements in text into pre-defined categories.

Natural language processing (NLP). A field of computer science and artificial intelligence focused on processing and analyzing natural human language or text data.

Natural language understanding (NLU). A specialty area within Natural Language Processing focused on advanced analysis of text to extract meaning and context. 

Neural networks. A specific technique for doing machine learning that is inspired by the neural connections of the human brain. The intelligence comes from the ability to analyze countless data inputs to discover context and meaning.

Ontology. A data model that represents relationships between concepts, events, entities, or other categories. In the AI context, ontologies are often used by AI systems to analyze, share, or reuse knowledge.

Precision. In machine learning, a measure of accuracy computing the ratio of true positives against all true and false positives in a given class.

Primary feature. A feature, the value of which is present in or derived from a dataset directly. 

Random forest. An ensemble machine learning method that blends the output of multiple decision trees in order to produce improved results.

Recall. In machine learning, a measure of accuracy computing the ratio of true positives guessed against all actual positives in a given class.
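
Precision, recall, and the F1 score defined above can be computed directly from paired labels. A small sketch with made-up spam/ham predictions:

```python
def precision_recall_f1(actual, predicted, positive):
    """Compute precision, recall, and F1 for one class from paired labels."""
    pairs = list(zip(actual, predicted))
    tp = sum(a == positive and p == positive for a, p in pairs)
    fp = sum(a != positive and p == positive for a, p in pairs)
    fn = sum(a == positive and p != positive for a, p in pairs)
    precision = tp / (tp + fp)  # of everything flagged positive, how much was right
    recall = tp / (tp + fn)     # of everything actually positive, how much was found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

actual    = ["spam", "spam", "ham", "spam", "ham"]
predicted = ["spam", "ham",  "ham", "spam", "spam"]
print(precision_recall_f1(actual, predicted, "spam"))
```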

Reinforcement learning (RL). A machine learning technique in which an agent independently learns the rules of a system through trial-and-error sequences.

Robotic process automation (RPA). Business process automation that uses virtual software robots (not physical) to observe the user’s low-level or monotonous tasks performed using an application’s user interface in order to automate those tasks. Also see intelligent process automation (IPA).

Self-supervised learning. A form of supervised learning in which a system generates its own training signal by identifying and extracting naturally available signals from unlabeled data.

Semi-supervised learning. A machine learning technique that fits between supervised learning (in which data used for training is labeled) and unsupervised learning (in which data used for training is unlabeled).

Strong AI. The term used to describe artificial general intelligence, or a machine’s intelligence functionality that matches human cognitive capabilities across multiple domains. Often characterized by self-improvement mechanisms and generalization rather than specific training to perform in narrow domains. Also see weak AI.

Structured data. Data that has been organized using a predetermined model, often in the form of a table with values and linked relationships. Also see unstructured data.

Supervised learning. A machine learning technique that infers from training performed on labeled data. Also see unsupervised learning.

Taxonomy. A hierarchically structured list of terms that illustrates the relationships between those terms. Also see ontology.

Time series. A set of data structured in spaced units of time.

Training. The process of providing a dataset to a machine learning model for the purpose of improving the precision or effectiveness of the model. Also see supervised learning and unsupervised learning.

Transfer learning. A machine learning technique where the knowledge derived from solving one problem is applied to a different (typically related) problem.

Tuning. The process of optimizing the hyperparameters of an AI algorithm to improve its precision or effectiveness. Also see algorithm.

Unstructured data. Data that has not been organized with a predetermined order or structure, often making it difficult for computer systems to process and analyze.

Unsupervised learning. A machine learning technique that infers from training performed on unlabeled data. Also see supervised learning.

Vectorization. The process of transforming data into vector representation using numbers.
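
A simple form of vectorization is the bag-of-words count vector. A minimal sketch over a hypothetical three-word vocabulary:

```python
def vectorize(text, vocabulary):
    """Turn text into a count vector over a fixed vocabulary (bag of words)."""
    words = text.lower().split()
    return [words.count(term) for term in vocabulary]

vocabulary = ["good", "bad", "service"]
print(vectorize("good service very good", vocabulary))  # [2, 0, 1]
```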

Weak AI. The term used to describe a narrow AI built and trained for a specific task. Also see strong AI.

Word embedding. In NLP, the vectorization of words and phrases.


Demystifying Enterprise Intelligence: Traditional Data Analytics & Machine Learning [INFOGRAPHIC]

The rapid pace of modern business demands an agile approach to enterprise intelligence. Whether developing AI-powered knowledge management solutions or improving automation with intelligent decision making and orchestration, there are more options than ever when considering how best to uncover important insights from data.

Traditional data analysis is “descriptive” and useful in reporting, explaining data, and generating new models for current or historical events. Machine learning is “predictive” and can learn from data to provide valuable insights and recommendations that help optimize processes, reduce costs, and open up new operating models. Which approach is right for your organization? That largely depends on the target use case, data complexity, and the need for longer-term expandability and scalability.
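
The descriptive-versus-predictive distinction can be made concrete with a tiny example: descriptive analytics summarizes past sales, while a simple least-squares trend line (standing in here for a learned model) extrapolates the next value. The sales figures are hypothetical:

```python
sales = [100, 110, 125, 130, 145]  # hypothetical monthly sales figures

# Descriptive analytics: summarize what already happened.
average = sum(sales) / len(sales)

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Predictive model: fit a trend line and extrapolate the next month.
months = list(range(len(sales)))
slope, intercept = fit_line(months, sales)
forecast = slope * len(sales) + intercept

print(average)   # 122.0 -- describes the past
print(forecast)  # 155.0 -- predicts the future
```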

This infographic highlights the key differences between traditional data analytics and machine learning, focusing on the core benefits, protocols, data, and models.

You can read more about the 18 important skills required to bring AI solutions to life at your enterprise, and watch Entefy’s quick video introduction to the emerging area of multimodal AI.