
How machine learning will help us outsmart the coronavirus

“COVID-19 is a new disease and we are still learning how it spreads…” At the time of this writing, this is the message you’ll find when visiting the CDC (Centers for Disease Control and Prevention) website looking for information on how this novel coronavirus spreads.

What’s been so worrisome about COVID-19, the disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is its accelerating rate of transmission. What emerged in Wuhan, China, only 3 months ago has rapidly infected people in nearly every country. According to the World Health Organization (WHO), “It took 67 days for 100,000 cases to be reported, but just 3 days to go from 400,000 to 500,000 cases.” This, despite unprecedented efforts by countries large and small to contain the disease. And as the world finds itself underprepared for this type of crisis, battalions of experts in varying disciplines are contributing to containment and recovery efforts. One such battalion includes the data scientists, software engineers, and automation experts who are unleashing information and technology as our allies in this emergency.

Monitoring and Forecasting

Machine learning is already at work 24/7, assisting with improved tracking of COVID-19 data as well as predicting its spread on a domestic and international scale. The breadth and depth of data produced on this pandemic make it infeasible for humans to review and analyze alone. This includes information from global news sources, health organizations, research teams, governments, the travel industry, as well as manufacturing and logistics data.

AI algorithms are being used by a number of experts to examine this mountainous, diverse set of data in order to identify relevant information and pinpoint valuable correlations between certain data points: for example, how to mitigate transmission risks, how the spread of the disease and its associated mortality rate map from one area to another during a particular time interval, or how to forecast the efficacy of certain public health practices.

These findings have grown exponentially over recent months and have become the basis for a growing library of research papers now released as part of CORD-19 (the COVID-19 Open Research Dataset)—“the most extensive collection of scientific literature related to the ongoing pandemic.” CORD-19 came together as a result of a global partnership among leading research groups, and the dataset is being offered as a free, open resource to researchers everywhere, who can benefit from the more than 45,000 scholarly articles it currently contains pertaining to the coronavirus family, including COVID-19.

Both traditional data analytics and machine learning can prove essential in analyzing this rapidly expanding sea of data. While traditional data analytics is useful in descriptive ways, explaining current or historical events, machine learning shines in its predictive capabilities, learning from different types of structured and unstructured data. Good examples of unstructured data include news articles, images and videos, research reports, or communication threads between any number of people or groups. For this pandemic, AI learning systems can rapidly comb through and analyze massive amounts of data from hundreds of thousands of sources to expose pertinent patterns, correlations, and recommendations.
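
To make the “predictive” side concrete, here is a minimal sketch of a machine learning model trained on unstructured text, using scikit-learn. The snippets and topic labels are invented for illustration; a real system would learn from many thousands of labeled documents.

```python
# A minimal text-classification sketch: learn topic labels from raw text.
# Snippets and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "New cases reported as outbreak spreads to neighboring region",
    "Hospital expands ICU capacity ahead of expected patient surge",
    "Local bakery wins award for sourdough bread",
    "Researchers publish study on transmission in enclosed spaces",
]
labels = ["outbreak", "healthcare", "other", "research"]

# TF-IDF turns unstructured text into numeric features; the classifier
# then learns which word patterns predict each topic.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(articles, labels)

# Classify a previously unseen headline
print(model.predict(["Clinics report a spike in respiratory admissions"]))
```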

Diagnosis and Treatment

Modeling the spread of the virus is important and can help save lives by ensuring preparedness, optimized resource allocation, and efficient delivery of care. However, without rapid improvements in diagnosis and treatment, our collective ability to contain transmission and treat the virus will remain compromised. This is another area where machine learning can be of incredible value. In recognition of this, the U.S. government has announced the COVID-19 High Performance Computing Consortium to provide researchers access to world-class supercomputers for advanced data science and artificial intelligence modeling.

AI has also quietly emerged as a transformative technology for the healthcare industry, enabling incredible efficiencies in diagnosis, drug discovery, and drug development. In some cases, the medical community has already seen the benefits of AI and big data for managing the coronavirus outbreak. WHO and China teamed up for a joint mission, headed by Dr. Bruce Aylward of WHO and Dr. Wannian Liang of the People’s Republic of China, to understand this novel disease and inform next steps for the readiness and preparedness of the rest of the global community. The 40-page report released last month describes how these new technologies were implemented “to strengthen contact tracing and the management of priority populations.” As the virus continues to spread, more data is being made available each day, broadening the scope of what can be accomplished with AI technologies.

Other use cases include computer vision technology used on cameras in airports, railway stations, and other public areas to detect and flag individuals with fever. With this technology, a task which would otherwise require an army of people to administer can now be safely accomplished via machines at a rate of 300 people per minute. Computer vision can also help interpret CT scans and detect coronavirus in as little as 20 seconds versus the estimated 5-15 minutes it would take a human doctor to diagnose. Relying on humans to review and interpret millions of CT images per day is impractical at best. With computer vision, machines can process those same millions of CT images at lightning speed and with accuracy on par with that of human doctors.
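
As a rough sketch of how such an imaging pipeline might be wired together, the code below runs a single CT slice through a small convolutional network using PyTorch and torchvision. The checkpoint file, image file, and two-class setup are hypothetical; this illustrates the mechanics, not a diagnostic tool.

```python
# A minimal sketch of CV-based CT triage. The model weights ("ct_classifier.pt")
# and input image ("ct_slice.png") are hypothetical placeholders.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ResNet-18 with two output classes: normal vs. suspicious (assumed labels)
model = models.resnet18(num_classes=2)
model.load_state_dict(torch.load("ct_classifier.pt"))  # hypothetical fine-tuned weights
model.eval()

image = preprocess(Image.open("ct_slice.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)
print(f"P(suspicious) = {probs[0, 1]:.2f}")  # high scores get flagged for radiologist review
```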

Diagnosis is only part of the COVID-19 journey. As the world is currently witnessing in China, Italy, and the United States, healthcare systems are struggling to meet the need for treatment and patient care. Current estimates put the availability of a COVID-19 vaccine at 18 months or longer, and that doesn’t count the time needed to manufacture and distribute the vaccine at the potential scale required. Even in ordinary times, when the world is not facing a global pandemic, development of a single drug or vaccine requires incredible effort, resources, experimentation, testing, and time.

AI and machine learning have already proven successful at accelerating drug discovery by enabling massively more efficient chemical compound analysis, outcome estimation, and drug interaction modeling. These are tasks which can traditionally take billions of dollars and years of effort from armies of scientists before leading to positive results. AI can cut this time and cost significantly, allowing faster progression from discovery to development and, ultimately, release. Drug development focuses on transforming compounds into products that are safe for consumption, and machine learning technologies can be used here to improve analysis and drug production yields. Today, the difference between efficiently producing a compound that works and one that doesn’t can mean the difference between making things better and making them much worse.

Manufacturing and Logistics 

AI has already proven transformative in manufacturing, logistics, delivery infrastructure, and other aspects of supply chains. These are critical pillars in the global response to COVID-19, encompassing everything from personal protective equipment (PPE) to life-saving ventilators to everyday household supplies and food items. As demand for supplies and equipment continues to increase the world over, optimizing these important pillars becomes more important than ever.

The modern supply chain is a vast network of producers, vendors, retailers, distributors, warehouses, and transportation companies connected to create and deliver goods to end customers. This network is complex and rich in data generated by people and the many smart sensors and devices along the entire chain. However, the entities participating in this process are mostly unprepared to fully harness the predictive analytics, real-time insights, and intelligent automation needed to optimize costs, units, and operations. In fact, “94% of the Fortune 1000 are seeing coronavirus supply chain disruptions.”

The current pandemic is accelerating work in a number of areas, including advanced robotics for delivery and sterilization, as well as machine learning for demand forecasting, risk assessment, sourcing, cost, inventory, and logistics optimization. Delivery networks are also being stretched to the limit, with numerous efforts in place to use machine learning to balance load and predict demand while also exploring new AI-powered delivery methods such as drones, where computer vision plays an important role.

News and Education

At the Munich Security Conference in February, WHO Director-General Tedros Adhanom Ghebreyesus stated that “We’re not just fighting an epidemic; we’re fighting an infodemic.” Throughout news and social media, citizens are inundated with reports, tips, stats, and more, much of which is unclear, conflicting, and sometimes even inaccurate. This means that important messages can be lost in the noise and misinformation can permeate the knowledge sphere. This is another area where AI can prove valuable.

Just as computer vision systems can rapidly scan images and video feeds at a scale unfeasible for humans, natural language processing (NLP) can be unleashed on the world’s news outlets and social media feeds to synthesize the sheer volume of information, remove redundancies, filter out old news, flag misinformation, and prioritize new or unique content. Misinformation and “fake” news can spread faster than any person can keep up with, but AI systems with robust and diverse NLP capabilities can scale as far and wide as needed when powered by the right computing infrastructure.
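
A toy version of this redundancy filtering can be built from TF-IDF vectors and cosine similarity alone. The sketch below, with invented headlines, flags likely duplicate stories; production systems use far richer language models, but the principle is the same.

```python
# A minimal near-duplicate detector for a news feed. Headlines are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

headlines = [
    "Health officials confirm 120 new cases in the region",
    "Officials confirm 120 new coronavirus cases in region",
    "Airline announces suspension of international routes",
]

vectors = TfidfVectorizer().fit_transform(headlines)
similarity = cosine_similarity(vectors)

# Pairs above an (assumed) similarity threshold are flagged as redundant
for i in range(len(headlines)):
    for j in range(i + 1, len(headlines)):
        if similarity[i, j] > 0.5:
            print(f"Likely duplicate: {headlines[i]!r} ~ {headlines[j]!r}")
```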

Conclusion 

This novel coronavirus has inspired significant global collaboration with people around the world working day and night to contain, manage, support, and treat those negatively impacted. Over the past several weeks, it has become clear that the need for answers and solutions is growing, and innovation is vital in order to accelerate recovery. Ultimately, the role of machine intelligence is to save time and create efficiency, something which may be more important now than ever before.

Machine learning is a powerful weapon in our defensive arsenal: it can help monitor and forecast the spread of the virus and provide faster, more precise diagnostics and treatments. But it can also optimize the manufacturing and distribution of goods and help educate the public about the disease and our individual responsibilities in the context of COVID-19’s broader impact on our economy, healthcare system, businesses, and society at large. Significant resources are being poured into solutions which can help healthcare professionals and others rise to this unique challenge. This may be just the beginning, yet we’re already seeing examples where machine learning is helping us outsmart the coronavirus.


What makes advanced AI unique

Artificial intelligence is the umbrella term for computer systems that can interpret, analyze, and learn from data in ways similar to human cognition. The field of AI is vast, encapsulating numerous subfields and applications related to machine intelligence. With AI, computers can perform a wide range of tasks—from playing chess to diagnosing cancer and virtually everything in between. 

The term artificial intelligence was first introduced by American computer scientist John McCarthy in 1956 at a summer conference at Dartmouth College in New Hampshire. That conference is believed by many to have launched AI as a genuine field of research. In the ensuing decades, a number of inventions, discoveries, and experiments have led to the many ways AI turns data into insights, powering our society and influencing how we use computers every day.

With AI and machine learning, computers are programmed or “trained” to perform intelligent tasks that are either “narrow” or “general.” Artificial narrow intelligence, or weak AI, pertains to specific, pre-defined tasks such as predicting the weather, recommending your favorite music, or even autonomous driving. Narrow AI can on its own transform the way we approach a particular process or task. Most of what we see today in terms of machine intelligence falls within this category and shouldn’t be taken for granted. Narrow AI is capable of analyzing massive volumes of data thousands of times faster than people and typically with fewer errors. Narrow AI also relieves us of mundane tasks so that we can be more efficient with our time.

Artificial general intelligence (AGI) or strong AI is related to more complex functionality that is expected to match human level capabilities across multiple domains. Think about the very advanced AI systems you see in sci-fi movies where the interactions between people and machines are seamless and feel conscious. An AGI system can draw valuable insights from diverse data sets (e.g. images, text, audio files, logs) and use cognitive computing to perform functions that are indistinguishable from those performed by a human.

As described in one of our prior blogs, traditional data analytics and machine learning differ in several key ways, including structure, purpose, and benefits. In short, traditional data analysis is descriptive and quite useful in explaining current or historical data, while machine learning is predictive and capable of learning from data in ways that provide valuable insights and recommendations.

AI/machine learning is a dynamic process, often requiring algorithmic model training, validation, testing, refinement, and integration with other software components to create real value. Unlike many other engineering functions such as traditional software engineering, where you can create a solution based on certain known requirements, quality machine learning requires deep model and data exploration to arrive at something useful. Simply put, experimentation and embracing the unknown are par for the course in advanced AI.
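
The sketch below illustrates that train-validate-test loop using scikit-learn’s bundled digits dataset as a stand-in for real project data; the hyperparameter values tried are arbitrary.

```python
# A minimal sketch of the train / validate / test workflow described above.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Hold out data twice: a validation set for model selection, a test set for
# the final, unbiased performance estimate.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

best_model, best_score = None, 0.0
for n_trees in (10, 50, 100):  # a toy hyperparameter search
    candidate = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    candidate.fit(X_train, y_train)
    score = accuracy_score(y_val, candidate.predict(X_val))
    if score > best_score:
        best_model, best_score = candidate, score

# Only now touch the test set, once, with the selected model
print("test accuracy:", accuracy_score(y_test, best_model.predict(X_test)))
```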

Building models and proper orchestration are also core to success here. Added complexity sets in when the intended use case is multimodal and the data requires multimodal AI processing, creative ensembling of multiple models, and intricate queuing and software orchestration. This is where combined expertise in machine learning, compute infrastructure, and software engineering is needed but currently in short supply.

Then there are the 4 Vs of data which are important criteria for success in advanced AI initiatives. The 4 Vs include data volume, variety, velocity, and veracity. The road from data to insights can be patchy and long, requiring many types of expertise. Dealing with the 4 Vs early in the exploration process can help accelerate discovery and unlock otherwise hidden value.

It is also important to note that high accuracy and precision in artificial intelligence are the byproduct of rigorous scientific, engineering, and design efforts. This is where advanced science meets art to deliver results. And the journey from ideation to implementation for even a single AI application requires cooperation with other contributors, including those fluent in business, operations, legal, and cybersecurity—18 skills in all.

For a quick refresher on key AI terminology, be sure to read the 53 useful terms in the world of artificial intelligence.


AI and the future of shopping

If you’ve been paying attention to retail spending over the past few years, it won’t surprise you to learn that e-commerce in the United States continues to gain traction at record speed. In fact, e-commerce is expected to “surpass 10% of total US retail sales for the first time in history.” By 2023, online spending by U.S. consumers is expected to grow to $970 billion (a 65% increase over this year’s volume), and global retail e-commerce sales are expected to balloon to $6.5 trillion in that same year. Several key factors contribute to this growth and consumers’ attraction to e-commerce: the convenience of 24/7/365 accessibility, nearly limitless product selection, quick price comparisons, fast checkouts using one-touch purchase options, real-time updates on new product launches, exclusive promotions, same-day delivery, as well as enhanced personalization and customer service using artificial intelligence.

These days, retailers and e-tailers have access to tremendous amounts of data about their customers, competition, and the market at large. But this collection of data isn’t easy to manage, growing in volume and complexity daily. For online and brick-and-mortar merchants alike, the challenge remains connecting and making sense of all of that data, making it actionable for either revenue growth or cost reduction. And that’s where machine learning steps in to power the future.

With AI and machine learning, companies can turn idle data into valuable insights. For example, AI can automatically categorize products, make better recommendations, dynamically adjust pricing based on customer behavior and inventory levels, provide virtual assistance to support customer queries or concerns, and optimize supply chains like never before. Given the broad applicability of AI and its promise to level the playing field in an increasingly competitive industry, retailers worldwide are predicted to spend $7.3 billion on AI by 2022, up from $2 billion in 2018.
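
As a simple illustration of the recommendation piece, the sketch below derives item-to-item suggestions from a toy purchase matrix using cosine similarity. The products and purchase history are invented; real retail systems learn from millions of transactions and far richer signals.

```python
# A minimal item-to-item recommender over an invented purchase matrix.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

products = ["laptop", "mouse", "keyboard", "blender", "toaster"]
# Rows = customers, columns = products, 1 = purchased
purchases = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
    [1, 0, 1, 0, 0],
])

# Similarity between product columns: items bought by the same customers score high
similarity = cosine_similarity(purchases.T)

def recommend(product, top_n=2):
    idx = products.index(product)
    ranked = np.argsort(similarity[idx])[::-1]
    return [products[i] for i in ranked if i != idx][:top_n]

print(recommend("laptop"))  # e.g. ['mouse', 'keyboard']
```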

Depending on the data type, specific machine learning methods and models are used to get to the intended outcome. For instance, computer vision, a subfield of AI, is used to classify and contextualize the content of digital images and videos. Computer vision gives companies the ability to use machines to detect and label objects in images without having their personnel do the same. Companies can also unclutter their listings or filter out offensive images in this way. This is the same technology that gives consumers the ability to find their favorite products (or something similar) by simply using a picture of the item.

Other uses for computer vision include facial recognition, sentiment analysis, and logo detection. Merchants can use facial recognition and sentiment analysis to recognize repeat customers, personalize the customer experience, and in some cases, provide better security by identifying and monitoring high-risk individuals. Machine-powered logo detection is used to support marketing, identify counterfeits, and protect brands against pervasive infringement. Take things to the next level and you’ll get to fully automated, cashierless stores where consumers can simply walk into a location, grab their favorite merchandise off the shelf, and walk out with those items without ever having to stop at the cashier to scan any item or pull out a credit card.

Natural language processing (NLP) is a subfield of AI focused on processing and analyzing natural human language or text data. Here, finding the right product becomes much easier because NLP can interpret the customer’s intent and the shopping context much better than traditional search systems that rely solely on exact “keyword” matching. This includes better performance even when user requests involve typos or poor grammar. NLP can also help drastically improve customer service, both in terms of leveraging sentiment analysis capabilities to better support customer needs and using conversational chatbots to streamline the call center experience. Imagine smarter systems where clunky “press 1,” “press 2” prompts are a thing of the past, replaced by NLP-powered machines that can seamlessly answer questions and carry on natural conversations.
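
The sketch below shows typo tolerance in its simplest form, using Python’s built-in difflib for fuzzy matching against an invented catalog. Learned NLP models go much further by modeling intent and context, but even this toy version recovers from misspellings that defeat exact keyword search.

```python
# A minimal typo-tolerant product search using fuzzy string matching.
# The catalog is invented for illustration.
import difflib

catalog = ["wireless headphones", "running shoes", "espresso machine", "yoga mat"]

def search(query, cutoff=0.5):
    # Return catalog entries that closely match the (possibly misspelled) query
    return difflib.get_close_matches(query.lower(), catalog, n=3, cutoff=cutoff)

print(search("expreso machine"))  # ['espresso machine'], despite the typo
print(search("runing shoes"))     # ['running shoes']
```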

While AI is improving your shopping experience, it is also being employed to simplify the less visible aspects of the supply chain responsible for the production and distribution of the products you buy. Ultimately, better supply chain management means less waste, faster production cycles, and lower costs. Historically, data analysis in these areas has been performed using traditional data analytics, but this is fast changing due to the explosion of data volume and complexity. Companies are turning to the “predictive” power of advanced machine learning to optimize everything from manufacturing to warehousing to transportation and logistics. For example, studies indicate “that unplanned downtime costs manufacturers an estimated $50 billion annually, and that asset failure is the cause of 42 percent of this unplanned downtime.” Predictive maintenance powered by AI is now delivering the required ROI by reducing unplanned downtime and improving asset efficiency.
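
A rough sketch of how predictive maintenance works in practice: train a classifier on historical sensor readings labeled by whether the asset later failed. The features and data below are synthetic; real deployments learn from telemetry streamed off the equipment itself.

```python
# A minimal predictive-maintenance sketch on synthetic sensor data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Synthetic features per machine: [vibration, temperature, operating_hours]
healthy = rng.normal([0.2, 60, 1000], [0.05, 5, 300], size=(200, 3))
failing = rng.normal([0.6, 85, 4000], [0.10, 8, 500], size=(200, 3))

X = np.vstack([healthy, failing])
y = np.array([0] * 200 + [1] * 200)  # 1 = failed within the next 30 days (assumed label)

model = GradientBoostingClassifier().fit(X, y)

# Score a machine currently running hot with elevated vibration
print("failure risk:", model.predict_proba([[0.55, 82, 3800]])[0, 1])
```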

As U.S. and global consumer demand for retail products continues to rise, it’s clear that the world’s reliance on advanced AI and machine learning capabilities will continue to rise in lockstep. These new capabilities present new opportunities for retailers and e-tailers to deliver more personalized customer experiences at greater scale than ever before.

For a quick refresher on key AI terminology, refer to our previous article defining 53 useful terms in the world of artificial intelligence.


Four new patents issued to protect Entefy intelligent search and cyber privacy technologies

USPTO awards Entefy new patents for core inventions in intelligent search and cyber privacy

PALO ALTO, Calif. October 31, 2019. Entefy Inc. inventors have been awarded 4 new patents by the U.S. Patent and Trademark Office (USPTO), covering the company’s innovations in the areas of intelligent search and cyber privacy.

Patent No. 10,353,754 describes the “application programming interface analyzer for a universal interaction platform.” This API analyzer acts as an intelligent service discovery mechanism to identify and automatically determine the formats and protocols necessary to enable natural language communication between web sites, smart devices, other API-accessible services, and users of Entefy’s core intelligence systems.

“In today’s fast-moving tech landscape, users expect smart natural language interfaces to be available for a broad range of services and IoT devices,” said Entefy’s CEO, Alston Ghafourifar. “This API analyzer technology can power a number of AI-based use cases involving advanced people-to-service communication.”

Continuing with Entefy’s advanced work in search and knowledge management, USPTO awarded Entefy Patent No. 10,394,966 which describes “systems and methods for multi-protocol, multi-format, universal searching.” This invention works in concert with Entefy’s patented universal message object (UMO) structure to manage complex mapping between diverse datatypes and corresponding user preferences—for example, learning how even the same word can differ in meaning between various users, thus enabling better precision in search.

Entefy’s Adaptive Privacy Control (APC) technology enables new levels of data protection across a number of popular data types and formats. APC provides individual users with unprecedented control over the visibility and shareability of their content. With APC, users can encrypt even small bits of information within larger files such as their name in an important document, their social security number in a spreadsheet, or even a small region of pixels in a photograph. With Patent No. 10,395,047 and Patent No. 10,410,000, protection for APC technology now includes even more options in complex media such as audio and video files.

“Entefy has always considered invention as a prime part of our culture and a true necessity as we work to advance the state of the art in our industry,” said Mr. Ghafourifar. Today’s update is the latest in a series of patent announcements, including earlier Entefy patents that cover the Company’s technologies related to its universal interaction platform, APC, and secure document collaboration.

ABOUT ENTEFY

Entefy is an AI software company with multimodal machine learning technology (on-premise and SaaS solutions) designed to redefine automation and power the intelligent enterprise. Entefy’s multimodal AI platform encapsulates advanced capabilities in machine cognition, computer vision, natural language processing, audio analysis, and other data intelligence. Organizations use Entefy solutions to accelerate their digital transformation and dramatically improve existing systems—everything from knowledge management to communication, search, process automation, cybersecurity, data privacy, IP protection, customer analytics, forecasting, and much more. Get started at www.entefy.com.


AI and pharma pair up to accelerate drug discovery, development, and commercialization

The pharmaceutical industry grapples with a daunting challenge—producing and delivering more effective drugs at ever increasing costs. Over the years, it has become more difficult for drug companies to keep pace with grueling market and regulatory demands. The costs of drug research, clinical trials, manufacturing, and compliance are reaching new highs, and competition is pressuring the industry to adopt new technologies that can deliver efficiency to every aspect of the development and distribution process.

Let’s examine 3 core areas within the drug product life cycle in which AI can boost performance and results:

1.     Drug discovery. Discovering even a single new drug requires tremendous effort and commitment to experimentation. For instance, “it takes about a decade of research — and an expenditure of $2.6 billion” for a single drug to go from the research phase to being available on the shelves for purchase. Scientists painstakingly assess each compound within the initial screening to gauge its likelihood of success or failure. All the while, the company has to spend significant amounts of time and money to keep up with the regulatory and scientific rigor required during the drug discovery process. This is where AI can help. Uses of AI and machine learning (ML) in drug discovery include quicker initial screening of candidate compounds as well as targeting and identifying the specific components needed to formulate a certain drug using advanced data analytics. The result? Faster and more cost-effective discovery, which could ultimately create more treatment choices and more affordable healthcare for all.

2.     Drug development. Unlike drug discovery, drug development focuses on transforming the newly discovered compound into a product that is safe for market consumption and approved by the appropriate regulatory authorities. In pharma, drug development brings to light a trend first observed in the 1980s: “Eroom’s law” (Moore’s law spelled backwards). Eroom’s law states that despite technological advancements, the cost of drug development is increasing year over year while the number of actual drug approvals is decreasing. This is a concern for many within the pharma industry, and AI is being targeted as a solution to help reverse this trend.

Clinical trials represent important steps in the drug development process and are designed to collect safety and efficacy data related to new drugs. These clinical trials consist of multiple phases, “with Phase III trials requiring a larger pool of patients and being significantly more expensive and complex than Phase I trials.” Even with the significant amount of resources allocated to such trials, only 1 out of 10 drugs that enter Phase I is approved by the FDA. In general, clinical trials are fraught with inefficiencies, including bottlenecks in recruitment, flaws in study design, and data management issues such as tracking whether participants take the right dosage at the right time. Application of AI can help improve the entire process and put large volumes of data to use in unprecedented ways, including information contained in clinical notes, authorized medical records, and patient-generated data.

3.     Commercialization. After years of research and clinical development as well as the required approvals by the FDA, a new drug can finally be marketed and made available for sale to the public. During the commercialization phase, drug companies manage a number of important operations including manufacturing, quality, and supply chain to ensure successful delivery and market adoption of their newly approved drugs. Whether it is related to customer service, supply chain, personalization of medicine targeted to specific patients, regulatory compliance, or risk management, AI can play a role in making commercialization more efficient and productive. For example, customer service bots can help create a more interpersonal connection for the patient in the process of finding an optimal treatment option. In terms of the supply chain, AI can run multiple analytics projections in real time to “better forecast demand, and automatically identify and mitigate supply risks.” AI can help determine “a new therapy’s efficacy and side-effects profile for a specific patient or patient group.” This allows for more personalized treatment options that differ between patients and their respective medical histories. Within post-marketing surveillance, AI and ML can also help better manage risk by monitoring both web and social platforms continuously.

Patients and doctors are already benefiting from the impacts of AI in ways that would have felt more like science fiction only a few years ago. In pharma, with the meteoric rise in costs in the face of growing market and regulatory demands, the need for efficiency is more pressing than ever. So, what will the future hold for the pharma industry? If the early activity is any indication, advanced technologies powered by AI are slowly transforming the pharma industry, promising to disrupt the future of drug discovery, development, and commercialization.


53 Useful terms for anyone interested in artificial intelligence

These days, artificial intelligence (AI) seems to be an active ingredient in virtually every conversation about advanced technologies and automation. Given the hyperactivity in the domain, many professionals and business leaders are evaluating the power of AI and machine learning technologies to ensure a competitive edge going into the next decade.

Needless to say, artificial intelligence is a rich field for discovery and understanding. However, without deeper AI training and education, it can be quite challenging to stay abreast of the rapid changes taking place within the field. At Entefy, we’re passionate about breakthrough computing and the many ways it can help people live and work better. So, to help demystify artificial intelligence and its many sub-components, our team has assembled this list of useful terms for anyone interested in AI and machine learning.

Be sure to bookmark this page for a handy quick-reference resource.

Algorithm. A procedure or formula, often mathematical, that defines a sequence of operations to solve a problem or class of problems.

Artificial intelligence (AI). The umbrella term for computer systems that can interpret, analyze, and learn from data in ways similar to human cognition.

Cardinality. In mathematics, a measure of the number of elements present in a set.

Centroid model. A type of classifier that computes the center of mass of each class and uses a distance metric to assign samples to classes during inference.

Chatbot. A computer program (often designed as an AI-powered virtual agent) that provides information or takes actions in response to the user’s voice or text commands or both. Current chatbots are often deployed to provide customer service or support functions.

Class. A category of data indicated by the label of a target attribute.

Classifier. An instance of a machine learning model trained to predict a class.

Class imbalance. The quality of having a non-uniform distribution of samples grouped by target class.

Cognitive computing. A term that describes advanced AI systems that mimic the functioning of the human brain to improve decision-making and perform complex tasks.

Computer vision (CV). An artificial intelligence field focused on classifying and contextualizing the content of digital video and images. 

Data curation. The process of collecting and managing data, including verification, annotation, and transformation. Also see training and dataset.

Data mining. The process of targeted discovery of information, patterns, or context within one or more data repositories.

DataOps. Management, optimization, and monitoring of data retrieval, storage, transformation, and distribution throughout the data life cycle including preparation, pipelines, and reporting.

Deep learning. A subfield of machine learning that uses artificial neural networks with two or more hidden layers to train a computer to process data, recognize patterns, and make predictions.

Derived feature. A feature that is created and the value of which is set as a result of observations on a given dataset, generally as a result of classification, automated preprocessing, or sequenced model output.

Ensembling. A powerful technique whereby two or more algorithms, models, or neural networks are combined in order to generate more accurate predictions.

F1 Score. A measure of a test’s accuracy calculated as the harmonic mean of precision and recall.
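
A worked example with invented counts: suppose a classifier produces 8 true positives, 2 false positives, and 4 false negatives.

```python
# F1 as the harmonic mean of precision and recall (invented counts)
tp, fp, fn = 8, 2, 4
precision = tp / (tp + fp)  # 0.80
recall = tp / (tp + fn)     # ~0.67
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))         # 0.73
```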

Feature. In ML, a specific variable or measurable value that is used as input to an algorithm.

Generative adversarial network (GAN). A class of AI algorithms whereby two neural networks compete against each other to improve capabilities and become stronger.

Hyperparameter. In ML, a parameter whose value is set prior to the learning process as opposed to other values derived by virtue of training.

Intelligent process automation (IPA). A collection of technologies, including robotic process automation (RPA) and AI, to help automate certain digital processes. Also see robotic process automation (RPA).

Logistic regression. A type of classifier that measures the relationship between a dependent variable and one or more independent variables using a logistic function.

Machine learning (ML). A subset of artificial intelligence that gives machines the ability to analyze a set of data, draw conclusions about the data, and then make predictions when presented with new data without being explicitly programmed to do so.

MIMI. The term used to refer to Entefy’s multimodal AI platform and technology.

Multimodal AI. Machine learning models that analyze and relate data processed using multiple modes or formats of learning.

N-gram model. In NLP, a model that counts the frequency of all contiguous sequences of [1, n] tokens.

Naive Bayes. A probabilistic classifier based on applying Bayes’ rule, which makes strong (naive) assumptions about the independence of features.

Named entity recognition (NER). An NLP model that locates and classifies elements in text into pre-defined categories.

Natural language processing (NLP). A field of computer science and artificial intelligence focused on processing and analyzing natural human language or text data.

Natural language understanding (NLU). A specialty area within Natural Language Processing focused on advanced analysis of text to extract meaning and context. 

Neural networks. A specific technique for doing machine learning that is inspired by the neural connections of the human brain. The intelligence comes from the ability to analyze countless data inputs to discover context and meaning.

Ontology. A data model that represents relationships between concepts, events, entities, or other categories. In the AI context, ontologies are often used by AI systems to analyze, share, or reuse knowledge.

Precision. In machine learning, a measure of accuracy computing the ratio of true positives against all true and false positives in a given class.

Primary feature. A feature, the value of which is present in or derived from a dataset directly. 

Random forest. An ensemble machine learning method that blends the output of multiple decision trees in order to produce improved results.

Recall. In machine learning, a measure of accuracy computing the ratio of true positives guessed against all actual positives in a given class.

Reinforcement learning (RL). A machine learning technique where an agent learns independently the rules of a system via trial-and-error sequences.

Robotic process automation (RPA). Business process automation that uses virtual software robots (not physical) to observe the user’s low-level or monotonous tasks performed using an application’s user interface in order to automate those tasks. Also see intelligent process automation (IPA).

Self-supervised learning. Autonomous supervised learning, whereby a system identifies and extracts naturally available signals from unlabeled data through processes of self-selection.

Semi-supervised learning. A machine learning technique that fits between supervised learning (in which data used for training is labeled) and unsupervised learning (in which data used for training is unlabeled).

Strong AI. The term used to describe artificial general intelligence or a machine’s intelligence functionality that matches human cognitive capabilities across multiple domains. Often characterized by self-improvement mechanisms and generalization rather than specific training to perform in narrow domains. Also see weak AI.

Structured data. Data that has been organized using a predetermined model, often in the form of a table with values and linked relationships. Also see unstructured data.

Supervised learning. A machine learning technique that infers from training performed on labeled data. Also see unsupervised learning.

Taxonomy. A hierarchically structured list of terms illustrating the relationships between those terms. Also see ontology.

Time series. A set of data structured in spaced units of time.

Training. The process of providing a dataset to a machine learning model for the purpose of improving the precision or effectiveness of the model. Also see supervised learning and unsupervised learning.

Transfer learning. A machine learning technique where the knowledge derived from solving one problem is applied to a different (typically related) problem.

Tuning. The process of optimizing the hyperparameters of an AI algorithm to improve its precision or effectiveness. Also see algorithm.

Unstructured data. Data that has not been organized with a predetermined order or structure, often making it difficult for computer systems to process and analyze.

Unsupervised learning. A machine learning technique that infers from training performed on unlabeled data. Also see supervised learning.

Vectorization. The process of transforming data into vector representation using numbers.

Weak AI. The term used to describe a narrow AI built and trained for a specific task. Also see strong AI.

Word Embedding. In NLP, the vectorization of words and phrases.


Demystifying Enterprise Intelligence: Traditional Data Analytics & Machine Learning [INFOGRAPHIC]

The rapid pace of modern business demands an agile approach to enterprise intelligence. Whether developing AI-powered knowledge management solutions or improving automation with intelligent decision making and orchestration, there are more options than ever when considering how best to uncover important insights from data.

Traditional data analysis is “descriptive” and useful in reporting, explaining data, and generating new models for current or historical events. Machine Learning is “predictive” and can learn from data to provide valuable insights and recommendations to help optimize processes, reduce costs, and open up new operating models. Which technology approach is right for your organization? That’s largely dependent on the target use case, data complexity, and the need for longer term expandability and scalability.

This infographic highlights the key differences between traditional data analytics and machine learning, focusing on the core benefits, protocols, data, and models.

You can read more about the 18 important skills required to bring AI solutions to life at your enterprise, and watch Entefy’s quick video introduction to the emerging area of multimodal AI.


USPTO awards Entefy 5 new patents

Entefy has been issued new patents by USPTO for core inventions in intelligent communication and data privacy 

PALO ALTO, Calif. June 28, 2019. Entefy Inc. is announcing a series of patents newly awarded by the U.S. Patent and Trademark Office (USPTO). In addition to a number of trade secrets, the company’s IP portfolio now includes 51 combined issued and pending patents. Entefy’s latest 5 patents cover the company’s innovations in the areas of intelligent communication and data privacy.

Patent No. 10,135,764 describes the company’s “universal interaction platform for people, services, and devices.” This invention enables uniform communication between users, their devices, and popular services and is an important component of Entefy’s unique multimodal intelligence platform as well as its universal communicator application. This includes a modular message format and delivery mechanism for translating communication packets seamlessly from one format into another. At the core of the system is a communication intelligence that can receive a message in one format and automatically transform that message into a different format as required by the receiving user, service, or device. This technology can power any number of AI-based use cases involving people-to-machine and machine-to-machine communication.

Patent No. 10,169,300, “Advanced zero-knowledge document processing and synchronization,” explains additional methods by which live, multi-user document collaboration products can provide the same level of convenience for users without sacrificing security or privacy.

Continuing with the company’s history of key inventions in multi-protocol messaging, USPTO awarded Entefy a new patent, No. 10,169,447, dealing with a “system and method of message threading for a multi-format, multi-protocol communication system.” This patent covers methods that enable threading of messages in a conversation between one or more users involving multiple protocols—for example, threading user-to-user conversations taking place across email, SMS, and instant message services.

When it comes to sharing digital assets such as documents, photos, videos, and more, Entefy’s Adaptive Privacy Control (APC) technology enables unprecedented levels of data protection. This technology works by allowing users to encrypt even small bits of information within a larger file such as a name in a document, important values in a spreadsheet, or even a small region of pixels in a photograph. With Patent No. 10,169,597 and Patent No. 10,305,683, APC extends to multi-layered protection in video files and multichannel audio files as well. Solutions using this technology provide users and content creators alike with advanced control over the viewing, distribution, and modification of their digital assets.

“Invention has always been and will always remain a major part of Entefy’s culture. I’m very proud of the team’s creativity and commitment to solving problems that are not just technically challenging, but also have potential for broad impact,” said Entefy’s CEO, Alston Ghafourifar. “As a team, we feel fortunate to be developing cutting edge capabilities in rapidly evolving areas of digital communication, security, data privacy, and machine intelligence.”

Today’s release is the latest in a series of patent announcements, including earlier Entefy patents that enable new forms of zero-trust system authentication as well as secure document collaboration.

ABOUT ENTEFY 

Entefy is an AI software company with multimodal machine learning technology (on-premise and SaaS solutions) designed to redefine automation and power the intelligent enterprise. Entefy’s multimodal AI platform encapsulates advanced capabilities in machine cognition, computer vision, natural language processing, audio analysis, and other data intelligence. Organizations use Entefy solutions to accelerate their digital transformation and dramatically improve existing systems—knowledge management, search, communication, intelligent process automation, cybersecurity, data privacy, and much more. Get started at www.entefy.com.


Building a healthy global population with AI

The World Health Organization (WHO) has emphasized the importance of universal healthcare coverage, declaring that health is not a privilege, but a right. According to WHO, full coverage of essential health services is still unavailable to more than half of the world’s population. And now, as part of the Sustainable Development Goals, all UN Member States are working toward the ambitious goal of universal health coverage (UHC) by 2030.

Although universal healthcare is a hot-button topic here in the United States, it’s important to remember that impoverished or underdeveloped nations around the world still lack access to even the most basic forms of healthcare. Limitations in educational resources also translate into limited opportunities for doctors to be educated and trained in their native countries. AI can bring comprehensive healthcare resources to these areas and help provide half of the world’s population with this all-important human right.

When it comes to solving the healthcare shortage, it isn’t as easy as shipping doctors, medical equipment, or computers across the globe. Many places still lack the necessary resources and infrastructure such as clean water, electricity, or Internet access to sufficiently run clinics. So, smarter and more comprehensive solutions are needed to tackle this challenge. The introduction of the Early Detection and Prevention System (EDPS) in India in 1998 is an example of such a solution. A study by Kempegowda Institute of Medical Sciences involving 933 patients showed an overall consistency rate of 94% between the EDPS and physicians.

AI has already shown potential for improving patient-doctor relationships, and it can also assist doctors in diagnosing medical conditions. This makes it invaluable in rural areas where doctors are scarce and often operate without the support of specialists and peers to help make more complex diagnoses. The result is more effective treatment plans that can be reasonably accomplished within the limits of those regions.

AI can be utilized in areas where access to doctors is either limited or nonexistent. The spread of mobile and cloud technologies, especially in resource-poor areas, has made this easier. For example, apps have been developed and launched in rural Rwanda where blood for transfusions can be ordered and delivered by drone within minutes. In certain regions within Thailand, India, and China, machine learning and natural language processing (NLP) are being leveraged to guide cancer treatments. “Researchers trained an AI application to provide appropriate cancer treatment recommendations by giving it descriptions of patients and telling the application the best treatment options. The AI application uses NLP to mine the medical literature and patient records—including doctor notes and lab results—to provide treatment advice. When examining different patients, this application agreed with experts in more than 90% of patients in one study and 50% in another.”

Of course, AI’s value isn’t limited to only a handful of use cases in healthcare. It can also provide invaluable support to countries struck by natural disasters. One of the best examples of this is Nepal after the devastating 2015 earthquake. Entire villages were flattened by the quake, leaving many survivors destitute and without immediate shelter or aid. The United Nations Office for the Coordination of Humanitarian Affairs utilized AI in its recovery efforts, mapping key information pertaining to the disaster from satellites, cell phones, and social media posts to learn what was needed and where, thereby expediting the delivery of aid and supplies. The system also took into consideration structural damage to generate digital maps and ensure that relief workers could move safely. Doing so helped prevent the further death and injury that commonly occur in relief efforts reliant on traditional means.

Disease prevention and management can be critical to a developing country’s economy. Advanced machine learning can help by keeping track of new cases of particularly contagious pathogens to determine the risks and predict outbreak patterns. It’s simpler, more effective, and often cheaper to prevent an epidemic than it is to treat it. AI is already used to model and predict epidemics, based on how the disease is transmitted and the natural occurrences that can have an impact. It’s had success in predicting and mitigating the transmission of dengue fever in Manila, prompting researchers to work closely with the Philippine government in expanding the AI initiative. With half the world’s population at risk of developing dengue, this is no small milestone.

With advances in computing and software, healthcare globally is beginning to feel the positive impact of AI on a number of areas including patient care, drug manufacturing, and disaster recovery. With mobile and cloud technologies growing more widespread and becoming smarter with machine learning, universal health coverage for all no longer seems like a distant dream. 


What the Hawaii missile alert fiasco teaches us about UX

When people hear the term “user experience,” many assume it refers to some sort of technical design or development. But user experience (UX) is much more than that. UX serves as the conduit between an organization and its audience. User experience encompasses every aspect about how someone interacts with your company. The usability of your website, the functionality of your product, how adept your chatbots are at answering support questions – all of these fall under UX. 

But building a logical, intuitive UX matters for your employees as well. As industries are transformed by artificial intelligence, you must make sure your team members know how to use new platforms safely and efficiently. If your processes are built on clunky, difficult-to-use systems or user interfaces (UI), bad things can happen.

You may recall that back in January, residents of Hawaii received a terrifying alert that a ballistic missile was headed straight for them. Fortunately, it was a false alarm, though it left people deeply shaken. Many were outraged that such an error could even happen.

It turns out that bad UX design played a role in the Hawaii missile warning mistake. One local paper published a photo recreation of the state’s alert notification interface. The option for sending drill alerts was almost indistinguishable from the one that would sound an actual alarm. The idea that someone could click the wrong link wasn’t at all far-fetched.

Thankfully, no one was hurt as a result of that mistake. But the error highlighted the importance of great UX.

How will you treat your guests?

In an earlier piece on the importance of user experience, Entefy referenced the words of designer Charles Eames, who said, “The role of the designer is that of a very good, thoughtful host, anticipating the needs of his guests.” We find that Eames’ design wisdom translates well to UX, as the concept of user-as-guest can be a useful starting point for shaping the experience.

What is the tone of your messaging when someone discovers your organization online or uses your products? Is it inviting or is it impersonal? When they arrive at your homepage, do your aesthetics, content, and navigation make them want to stay awhile? Or will they find a warmer reception somewhere else? No matter how impressive your décor, gourmet your food, or prominent your guest list, you will fail as a host if you’re not engaging.

You can apply the same logic to the UX of software your organization uses internally. Examine the technology your team uses from their perspectives. Does it make their jobs easier? Do the tools they rely on facilitate cooperation and dialogue? Or is every day a grind because they’re forced to interact with confusing or outdated software? As we saw with the Hawaii missile mishap, the quality of your UX has real consequences.

Three pillars of UX design

Your UX design should always be evolving. New tech platforms and industry trends will change how your audience interacts with your company and how your employees serve your customers and clients. There is no “done” with UX. You’ll always be revising your site, your messaging, your marketing channels, and your customer service workflows.

But there are core principles around which your UX should be designed. Every decision should begin and end with these in mind:

1. UX is all about the end user

Great UX design is rooted in empathy. As the creator of a product or service, you’re naturally close to what you’ve built. You understand everything about how it works, and you know what your company stands for. But you need to take a step back and consider the experience from the perspective of someone who’s never used it before. What are the stumbling blocks? What are their workflows? How could you make their interactions with your company or products easier, more valuable, or more enjoyable?

You can ask your end users for feedback through surveys and in-person discussion groups. But those aren’t always feasible. Besides, what people say and what they do are often very different. Someone might say they feel confident using your platform, but they might struggle more than they let on.

That’s where usability tests prove helpful. Just a handful of tests can reveal substantial gaps in the user experience, and solving those could be a game-changer. Just remember that, like your broader UX strategy, there is no end point for usability testing. You always want to be measuring performance and seeing where you can make your UX that much better.

2. Design for your entire audience

One size rarely fits all. Everyone who interacts with your company brings different needs, expectations, and comfort levels to the table. Some may be quite tech-savvy, while others will face a learning curve. The best user experiences are built with all of these people in mind. They’re inclusive and responsive, and they come with a high level of user support.

Previously, Entefy examined the issue of ageist design in technology and how it excludes people from accessing goods and services. When you don’t consider the full spectrum of users’ needs, that all-important dialogue between you and your audience breaks down. Someone who can’t easily navigate your site or is greeted with radio silence on your support channels isn’t going to feel heard. And you can be sure that they will shift their attention – and their business – to a company that’s more responsive or accommodating.

3. Be thoughtful about how you use technology

Tools such as automation platforms and chatbots can improve your UX, if they make sense for your employees and your audience. In the rush to prepare their companies for the age of AI, some leaders believe they must use every new tech tool at their disposal. But automated workflows, chatbots, and analytics programs are only effective if they support your end users.

Before implementing any new program, view it through the lens of your employees or audience. Will this feature directly impact their productivity or satisfaction? If the answer is yes, integrate it into your UX, but make sure to educate them about the change. Even if a tool or feature seems intuitive to you, it may be more difficult for your audience to grasp. But if you explain its purpose and help them through the initial adjustment period, they’ll likely be willing to follow your lead.

We live in an age of more content and more sophisticated technology than the world has ever seen. But communication can become a lost art if we’re not careful about how we apply those riches. Good UX design helps you maximize your resources and engage users in meaningful conversations for years to come.