
Four new patents issued to protect Entefy intelligent search and cyber privacy technologies

USPTO awards Entefy new patents for core inventions in intelligent search and cyber privacy

PALO ALTO, Calif., October 31, 2019. Entefy Inc. inventors have been awarded 4 new patents by the U.S. Patent and Trademark Office (USPTO), covering the company’s innovations in the areas of intelligent search and cyber privacy.

Patent No. 10,353,754 describes the “application programming interface analyzer for a universal interaction platform.” This API analyzer acts as an intelligent service discovery mechanism to identify and automatically determine the formats and protocols necessary to enable natural language communication between web sites, smart devices, other API-accessible services, and users of Entefy’s core intelligence systems.

“In today’s fast-moving tech landscape, users expect smart natural language interfaces to be available for a broad range of services and IoT devices,” said Entefy’s CEO, Alston Ghafourifar. “This API analyzer technology can power a number of AI-based use cases involving advanced people-to-service communication.”

Continuing with Entefy’s advanced work in search and knowledge management, USPTO awarded Entefy Patent No. 10,394,966 which describes “systems and methods for multi-protocol, multi-format, universal searching.” This invention works in concert with Entefy’s patented universal message object (UMO) structure to manage complex mapping between diverse datatypes and corresponding user preferences—for example, learning how even the same word can differ in meaning between various users, thus enabling better precision in search.

Entefy’s Adaptive Privacy Control (APC) technology enables new levels of data protection across a number of popular data types and formats. APC provides individual users with unprecedented control over the visibility and shareability of their content. With APC, users can encrypt even small bits of information within larger files such as their name in an important document, their social security number in a spreadsheet, or even a small region of pixels in a photograph. With Patent No. 10,395,047 and Patent No. 10,410,000, protection for APC technology now includes even more options in complex media such as audio and video files.

“Entefy has always considered invention a prime part of our culture and a true necessity as we work to advance the state of the art in our industry,” said Mr. Ghafourifar. Today’s update is the latest in a series of patent announcements, including earlier Entefy patents covering the company’s technologies related to its universal interaction platform, APC, and secure document collaboration.

ABOUT ENTEFY

Entefy is an AI software company with multimodal machine learning technology (on-premise and SaaS solutions) designed to redefine automation and power the intelligent enterprise. Entefy’s multimodal AI platform encapsulates advanced capabilities in machine cognition, computer vision, natural language processing, audio analysis, and other data intelligence. Organizations use Entefy solutions to accelerate their digital transformation and dramatically improve existing systems—everything from knowledge management to communication, search, process automation, cybersecurity, data privacy, IP protection, customer analytics, forecasting, and much more. Get started at www.entefy.com.


AI and pharma pair up to accelerate drug discovery, development, and commercialization

The pharmaceutical industry grapples with a daunting challenge—producing and delivering more effective drugs amid ever-increasing costs. Over the years, it has become more difficult for drug companies to keep pace with grueling market and regulatory demands. The costs of drug research, clinical trials, manufacturing, and compliance are reaching new highs, and competition is pressuring the industry to adopt new technologies that can deliver efficiency to every aspect of the development and distribution process.

Let’s examine 3 core areas within the drug product life cycle in which AI can boost performance and results:

1. Drug discovery. Discovering even a single new drug requires tremendous effort and commitment to experimentation. For instance, “it takes about a decade of research — and an expenditure of $2.6 billion” for a single drug to go from the research phase to being available on the shelves for purchase. Scientists painstakingly assess each compound in the initial screening to gauge its likelihood of success or failure. All the while, the company has to spend significant amounts of time and money to keep up with the regulatory and scientific rigor required during the drug discovery process. This is where AI can help. Uses of AI and machine learning (ML) to enhance the drug discovery process include quicker initial screenings of different compounds within a particular drug as well as targeting and identifying specific components needed to formulate a certain drug using advanced data analytics. The result? Faster and more cost-effective discovery, which could ultimately create more treatment choices and more affordable healthcare for all.

2. Drug development. Unlike drug discovery, drug development focuses on transforming the newly discovered compound into a product that is safe for market consumption and approved by the appropriate regulatory authorities. In pharma, drug development brings to light a trend first observed in the 1980s, “Eroom’s law” (Moore’s law spelled backwards). Eroom’s law states that despite technological advancements, the cost of drug development is increasing year over year while the number of actual drug approvals is decreasing. This is a concern for many within the pharma industry, and AI is being targeted as a solution to help reverse this trend.

Clinical trials represent important steps in the drug development process and are designed to collect safety and efficacy data related to new drugs. These clinical trials consist of multiple phases, “with Phase III trials requiring a larger pool of patients and being significantly more expensive and complex than Phase I trials.” Even with the significant amount of resources allocated to such trials, only 1 out of 10 drugs that enter Phase I is approved by the FDA. In general, clinical trials are fraught with inefficiencies, including bottlenecks in recruitment, flaws in study design, and data management issues related to participant dosing and delays. Application of AI can help improve the entire process and put large volumes of data to use in unprecedented ways, including information contained in clinical notes, authorized medical records, and patient-generated data.

3. Commercialization. After years of research and clinical development as well as the required approvals by the FDA, a new drug can finally be marketed and made available for sale to the public. During the commercialization phase, drug companies manage a number of important operations, including manufacturing, quality, and supply chain, to ensure successful delivery and market adoption of their newly approved drugs. Whether it is related to customer service, supply chain, personalization of medicine targeted to specific patients, regulatory compliance, or risk management, AI can play a role in making commercialization more efficient and productive. For example, customer service bots can help create a more interpersonal connection for the patient in the process of finding an optimal treatment option. In terms of the supply chain, AI can run multiple analytics projections in real time to “better forecast demand, and automatically identify and mitigate supply risks.” AI can help determine “a new therapy’s efficacy and side-effects profile for a specific patient or patient group.” This allows for more personalized treatment options that differ between patients and their respective medical histories. Within post-marketing surveillance, AI and ML can also help better manage risk by monitoring both web and social platforms continuously.

Patients and doctors are already benefiting from the impacts of AI in ways that would have felt like science fiction only a few years ago. In pharma, with costs rising meteorically in the face of growing market and regulatory demands, the need for efficiency is more pressing than ever. So, what will the future hold for the pharma industry? If the early activity is any indication, advanced technologies powered by AI are slowly transforming the pharma industry, promising to disrupt the future of drug discovery, development, and commercialization.

Article contributors: Entefy


53 Useful terms for anyone interested in artificial intelligence

These days, artificial intelligence (AI) seems to be an active ingredient in virtually every conversation about advanced technologies and automation. Given the hyperactivity in the domain, many professionals and business leaders are evaluating the power of AI and machine learning technologies to ensure a competitive edge going into the next decade.

Needless to say, artificial intelligence is a rich field for discovery and understanding. However, without deeper AI training and education, it can be quite challenging to stay abreast of the rapid changes taking place within the field. At Entefy, we’re passionate about breakthrough computing and the many ways it can help people live and work better. So, to help demystify artificial intelligence and its many sub-components, our team has assembled this list of useful terms for anyone interested in AI and machine learning.

Be sure to bookmark this page for a handy quick-reference resource.

Algorithm. A procedure or formula, often mathematical, that defines a sequence of operations to solve a problem or class of problems.

Artificial intelligence (AI). The umbrella term for computer systems that can interpret, analyze, and learn from data in ways similar to human cognition.

Cardinality. In mathematics, a measure of the number of elements present in a set.

Centroid model. A type of classifier that computes the center of mass of each class and uses a distance metric to assign samples to classes during inference.
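
For illustration, here is a minimal nearest-centroid sketch in Python with NumPy; the toy data and class labels are hypothetical, not drawn from any particular product or dataset:

```python
import numpy as np

# Hypothetical toy data: 2-D feature vectors with two classes (0 and 1).
X = np.array([[1.0, 1.2], [0.8, 1.0], [4.0, 4.2], [4.1, 3.9]])
y = np.array([0, 0, 1, 1])

# "Training": compute the center of mass (mean vector) of each class.
centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(sample):
    """Inference: assign the class whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda c: np.linalg.norm(sample - centroids[c]))

print(predict(np.array([0.9, 1.1])))  # -> 0
print(predict(np.array([3.8, 4.0])))  # -> 1
```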

Chatbot. A computer program (often designed as an AI-powered virtual agent) that provides information or takes actions in response to the user’s voice or text commands or both. Current chatbots are often deployed to provide customer service or support functions.

Class. A category of data indicated by the label of a target attribute.

Classifier. An instance of a machine learning model trained to predict a class.

Class imbalance. The quality of having a non-uniform distribution of samples grouped by target class.

Cognitive computing. A term that describes advanced AI systems that mimic the functioning of the human brain to improve decision-making and perform complex tasks.

Computer vision (CV). An artificial intelligence field focused on classifying and contextualizing the content of digital video and images. 

Data curation. The process of collecting and managing data, including verification, annotation, and transformation. Also see training and dataset.

Data mining. The process of targeted discovery of information, patterns, or context within one or more data repositories.

DataOps. Management, optimization, and monitoring of data retrieval, storage, transformation, and distribution throughout the data life cycle including preparation, pipelines, and reporting.

Deep learning. A subfield of machine learning that uses artificial neural networks with two or more hidden layers to train a computer to process data, recognize patterns, and make predictions.

Derived feature. A feature that is created and the value of which is set as a result of observations on a given dataset, generally as a result of classification, automated preprocessing, or sequenced model output.

Ensembling. A powerful technique whereby two or more algorithms, models, or neural networks are combined in order to generate more accurate predictions.

F1 Score. A measure of a test’s accuracy calculated as the harmonic mean of precision and recall.
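
As a worked sketch using hypothetical counts, F1 combines the precision and recall entries defined later in this list:

```python
# Hypothetical counts from a binary classifier's predictions on a test set.
true_pos, false_pos, false_neg = 40, 10, 20

precision = true_pos / (true_pos + false_pos)        # 40/50 = 0.800
recall = true_pos / (true_pos + false_neg)           # 40/60 ≈ 0.667
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean ≈ 0.727

print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```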

Feature. In ML, a specific variable or measurable value that is used as input to an algorithm.

Generative adversarial network (GAN). A class of AI algorithms whereby two neural networks compete against each other to improve capabilities and become stronger.

Hyperparameter. In ML, a parameter whose value is set prior to the learning process as opposed to other values derived by virtue of training.

Intelligent process automation (IPA). A collection of technologies, including robotic process automation (RPA) and AI, to help automate certain digital processes. Also see robotic process automation (RPA).

Logistic regression. A type of classifier that models the relationship between a categorical dependent variable and one or more independent variables using a logistic function.

Machine learning (ML). A subset of artificial intelligence that gives machines the ability to analyze a set of data, draw conclusions about the data, and then make predictions when presented with new data without being explicitly programmed to do so.

MIMI. The term used to refer to Entefy’s multimodal AI platform and technology.

Multimodal AI. Machine learning models that analyze and relate data processed using multiple modes or formats of learning.

N-gram model. In NLP, a model that counts the frequency of all contiguous sequences of [1, n] tokens.
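
A minimal sketch of n-gram counting in plain Python, using a hypothetical toy sentence:

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count every contiguous sequence of 1 through n tokens."""
    counts = Counter()
    for size in range(1, n + 1):
        for i in range(len(tokens) - size + 1):
            counts[tuple(tokens[i:i + size])] += 1
    return counts

counts = ngram_counts("the cat sat on the mat".split(), 2)
print(counts[("the",)])        # 2 (unigram)
print(counts[("the", "cat")])  # 1 (bigram)
```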

Naive Bayes. A probabilistic classifier based on applying Bayes’ rule with strong (naive) assumptions about the independence of features.
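
One possible illustration uses scikit-learn, a common open-source library; the toy texts and labels here are hypothetical:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical toy corpus labeled spam vs. ham.
texts = ["win money now", "meeting at noon", "win a free prize", "lunch at noon"]
labels = ["spam", "ham", "spam", "ham"]

# Bag-of-words features; each word is treated independently (the "naive" part).
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = MultinomialNB().fit(X, labels)

print(model.predict(vectorizer.transform(["free money prize"])))  # ['spam']
```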

Named entity recognition (NER). An NLP model that locates and classifies elements in text into pre-defined categories.

Natural language processing (NLP). A field of computer science and artificial intelligence focused on processing and analyzing natural human language or text data.

Natural language understanding (NLU). A specialty area within Natural Language Processing focused on advanced analysis of text to extract meaning and context. 

Neural networks. A machine learning technique inspired by the neural connections of the human brain. The intelligence comes from the ability to analyze countless data inputs to discover context and meaning.

Ontology. A data model that represents relationships between concepts, events, entities, or other categories. In the AI context, ontologies are often used by AI systems to analyze, share, or reuse knowledge.

Precision. In machine learning, a measure of accuracy computing the ratio of true positives against all true and false positives in a given class.

Primary feature. A feature whose value is present in, or derived directly from, a dataset.

Random forest. An ensemble machine learning method that blends the output of multiple decision trees in order to produce improved results.

Recall. In machine learning, a measure of accuracy computing the ratio of true positives guessed against all actual positives in a given class.

Reinforcement learning (RL). A machine learning technique where an agent learns independently the rules of a system via trial-and-error sequences.

Robotic process automation (RPA). Business process automation that uses virtual software robots (not physical) to observe the user’s low-level or monotonous tasks performed using an application’s user interface in order to automate those tasks. Also see intelligent process automation (IPA).

Self-supervised learning. A form of autonomous supervised learning whereby a system identifies and extracts naturally available signals from unlabeled data through processes of self-selection.

Semi-supervised learning. A machine learning technique that fits between supervised learning (in which data used for training is labeled) and unsupervised learning (in which data used for training is unlabeled).

Strong AI. The term used to describe artificial general intelligence, a machine’s intelligence that matches human cognitive capabilities across multiple domains. Often characterized by self-improvement mechanisms and generalization rather than specific training to perform in narrow domains. Also see weak AI.

Structured data. Data that has been organized using a predetermined model, often in the form of a table with values and linked relationships. Also see unstructured data.

Supervised learning. A machine learning technique that infers from training performed on labeled data. Also see unsupervised learning.

Taxonomy. A hierarchical, structured list of terms that illustrates the relationships between those terms. Also see ontology.

Time series. A set of data points ordered in time, typically at regularly spaced intervals.

Training. The process of providing a dataset to a machine learning model for the purpose of improving the precision or effectiveness of the model. Also see supervised learning and unsupervised learning.

Transfer learning. A machine learning technique where the knowledge derived from solving one problem is applied to a different (typically related) problem.

Tuning. The process of optimizing the hyperparameters of an AI algorithm to improve its precision or effectiveness. Also see algorithm.

Unstructured data. Data that has not been organized with a predetermined order or structure, often making it difficult for computer systems to process and analyze.

Unsupervised learning. A machine learning technique that infers from training performed on unlabeled data. Also see supervised learning.

Vectorization. The process of transforming data into numerical vector representations.
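
A minimal bag-of-words sketch, assuming a small fixed vocabulary (one of many possible vectorization schemes):

```python
# Hypothetical fixed vocabulary; real systems learn this from a corpus.
vocabulary = ["cat", "dog", "sat", "ran"]

def vectorize(tokens):
    """Map a token list to a numeric vector of per-word counts (bag-of-words)."""
    return [tokens.count(word) for word in vocabulary]

print(vectorize("the cat sat".split()))      # [1, 0, 1, 0]
print(vectorize("dog ran dog ran".split()))  # [0, 2, 0, 2]
```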

Weak AI. The term used to describe a narrow AI built and trained for a specific task. Also see strong AI.

Word embedding. In NLP, the vectorization of words and phrases.


Demystifying Enterprise Intelligence: Traditional Data Analytics & Machine Learning [INFOGRAPHIC]

The rapid pace of modern business demands an agile approach to enterprise intelligence. Whether developing AI-powered knowledge management solutions or improving automation with intelligent decision making and orchestration, there are more options than ever when considering how best to uncover important insights from data.

Traditional data analysis is “descriptive” and useful in reporting, explaining data, and generating new models for current or historical events. Machine Learning is “predictive” and can learn from data to provide valuable insights and recommendations to help optimize processes, reduce costs, and open up new operating models. Which technology approach is right for your organization? That’s largely dependent on the target use case, data complexity, and the need for longer term expandability and scalability.
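
To make the distinction concrete, here is a toy sketch in Python using scikit-learn and hypothetical monthly sales figures; the descriptive step summarizes the past, while the fitted model forecasts forward:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly sales figures.
sales = np.array([100.0, 110.0, 125.0, 135.0, 150.0, 160.0])

# Descriptive (traditional analytics): summarize what already happened.
print("mean:", sales.mean(), "total growth:", sales[-1] - sales[0])

# Predictive (machine learning): fit a model, then forecast the next month.
months = np.arange(len(sales)).reshape(-1, 1)
model = LinearRegression().fit(months, sales)
print("forecast for month 7:", model.predict([[len(sales)]])[0])  # ≈ 173
```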

This infographic highlights the key differences between traditional data analytics and machine learning, focusing on the core benefits, protocols, data, and models.

You can read more about the 18 important skills required to bring AI solutions to life at your enterprise and watch Entefy’s quick video introduction to the emerging area of multimodal AI.


USPTO awards Entefy 5 new patents

Entefy has been issued new patents by the USPTO for core inventions in intelligent communication and data privacy

PALO ALTO, Calif., June 28, 2019. Entefy Inc. is announcing a series of newly awarded patents by the U.S. Patent and Trademark Office (USPTO). In addition to a number of trade secrets, the company’s IP portfolio now includes 51 combined issued and pending patents. Entefy’s latest 5 patents cover the company’s innovations in the areas of intelligent communication and data privacy.

Patent No. 10,135,764 describes the company’s “universal interaction platform for people, services, and devices.” This invention enables uniform communication between users, their devices, and popular services and is an important component of Entefy’s unique multimodal intelligence platform as well as its universal communicator application. This includes a modular message format and delivery mechanism for translating communication packets seamlessly from one format into another. At the core of the system is a communication intelligence that can receive a message in one format and automatically transform that message into a different format as required by the receiving user, service, or device. This technology can power any number of AI-based use cases involving people-to-machine and machine-to-machine communication.

Patent No. 10,169,300, “Advanced zero-knowledge document processing and synchronization,” explains additional methods by which live, multi-user document collaboration products can provide the same level of convenience for users without sacrificing security or privacy.

Continuing with the company’s history of key inventions in multi-protocol messaging, USPTO awarded Entefy a new patent, No. 10,169,447, dealing with a “system and method of message threading for a multi-format, multi-protocol communication system.” This patent covers methods that enable threading of messages in a conversation between one or more users involving multiple protocols—for example, threading user-to-user conversations taking place across email, SMS, and instant message services.

When it comes to sharing digital assets such as documents, photos, videos, and more, Entefy’s Adaptive Privacy Control (APC) technology enables unprecedented levels of data protection. This technology works by allowing users to encrypt even small bits of information within a larger file such as a name in a document, important values in a spreadsheet, or even a small region of pixels in a photograph. With Patent No. 10,169,597 and Patent No. 10,305,683, APC extends to multi-layered protection in video files and multichannel audio files as well. Solutions using this technology provide users and content creators alike with advanced control over the viewing, distribution, and modification of their digital assets.

“Invention has always been and will always remain a major part of Entefy’s culture. I’m very proud of the team’s creativity and commitment to solving problems that are not just technically challenging, but also have potential for broad impact,” said Entefy’s CEO, Alston Ghafourifar. “As a team, we feel fortunate to be developing cutting edge capabilities in rapidly evolving areas of digital communication, security, data privacy, and machine intelligence.”

Today’s release is the latest in a series of patent announcements, including earlier Entefy patents that enable new forms of zero-trust system authentication as well as secure document collaboration.

ABOUT ENTEFY 

Entefy is an AI software company with multimodal machine learning technology (on-premise and SaaS solutions) designed to redefine automation and power the intelligent enterprise. Entefy’s multimodal AI platform encapsulates advanced capabilities in machine cognition, computer vision, natural language processing, audio analysis, and other data intelligence. Organizations use Entefy solutions to accelerate their digital transformation and dramatically improve existing systems—knowledge management, search, communication, intelligent process automation, cybersecurity, data privacy, and much more. Get started at www.entefy.com.


Building a healthy global population with AI

The World Health Organization (WHO) has underscored the importance of universal healthcare coverage, declaring that health is not a privilege, but a right. According to WHO, full coverage of essential health services is still unavailable to more than half of the world’s population. And now, as part of the Sustainable Development Goals, all UN Member States are working toward the ambitious goal of universal health coverage (UHC) by 2030.

Although universal healthcare is a hot-button topic here in the United States, it’s important to remember that impoverished or underdeveloped nations around the world are still lacking access to even the most basic forms of healthcare. Limitations in educational resources also translate into limited opportunities for doctors to be educated and trained in their native countries. AI can bring comprehensive healthcare resources to these areas and help provide half of the world’s population with this all-important human right.

When it comes to solving the healthcare shortage, it isn’t as easy as shipping doctors, medical equipment, or computers across the globe. Many places still lack the necessary resources and infrastructure such as clean water, electricity, or Internet access to sufficiently run clinics. So, smarter and more comprehensive solutions are needed to tackle this challenge. The introduction of the Early Detection and Prevention System (EDPS) in India in 1998 is an example of such a solution. A study by Kempegowda Institute of Medical Sciences involving 933 patients illustrated an overall consistency rate of 94% between the EDPS and physicians.

AI has already shown the potential for improving patient-doctor relationships, but it can also assist doctors in diagnosing medical conditions. This makes it invaluable in rural areas where doctors are scarce and often operate without the support of specialists and peers to help make more complex diagnoses. This will allow for more effective treatment plans that can be reasonably accomplished within the limits of those regions.

AI can be utilized in areas where access to doctors is either limited or nonexistent. The spread of mobile and cloud technologies, especially in resource-poor areas, has made this easier. For example, apps have been developed and launched in rural Rwanda where blood for transfusions can be ordered and delivered by drone within minutes. In certain regions within Thailand, India, and China, machine learning and natural language processing (NLP) are being leveraged to guide cancer treatments. “Researchers trained an AI application to provide appropriate cancer treatment recommendations by giving it descriptions of patients and telling the application the best treatment options. The AI application uses NLP to mine the medical literature and patient records—including doctor notes and lab results—to provide treatment advice. When examining different patients, this application agreed with experts in more than 90% of patients in one study and 50% in another.”

Of course, AI’s value isn’t limited to only a handful of use cases in healthcare. It can also provide invaluable support to countries struck by natural disasters. One of the best examples of this is Nepal after the devastating 2015 earthquake. Entire villages were flattened by the quake, leaving many survivors destitute and without immediate shelter or aid. The United Nations Office for the Coordination of Humanitarian Affairs utilized AI in its recovery efforts. It mapped key information pertaining to the disaster from satellites, cell phones, and social media posts to determine what was needed and where, expediting the delivery of aid and supplies. The system also took into consideration structural damage to generate digital maps and ensure that relief workers could move safely. Doing so helped prevent the further death and injury that commonly occur in relief efforts reliant on traditional means.

Disease prevention and management can be critical to a developing country’s economy. Advanced machine learning can help by keeping track of new cases of particularly contagious pathogens to determine the risks and predict outbreak patterns. It’s simpler, more effective, and often cheaper to prevent an epidemic than it is to treat it. AI is already used to model and predict epidemics, based on how the disease is transmitted and the natural occurrences that can have an impact. It’s had success in predicting and mitigating the transmission of dengue fever in Manila, prompting researchers to work closely with the Philippine government in expanding the AI initiative. With half the world’s population at risk of developing dengue, this is no small milestone.

With advances in computing and software, healthcare globally is beginning to feel the positive impact of AI on a number of areas including patient care, drug manufacturing, and disaster recovery. With mobile and cloud technologies growing more widespread and becoming smarter with machine learning, universal health coverage for all no longer seems like a distant dream. 


What the Hawaii missile alert fiasco teaches us about UX

When people hear the term “user experience,” many assume it refers to some sort of technical design or development. But user experience (UX) is much more than that. UX serves as the conduit between an organization and its audience. User experience encompasses every aspect of how someone interacts with your company. The usability of your website, the functionality of your product, how adept your chatbots are at answering support questions – all of these fall under UX.

But building a logical, intuitive UX matters for your employees as well. As industries are transformed by artificial intelligence, you must make sure your team members know how to use new platforms safely and efficiently. If your processes are built on clunky, difficult-to-use systems or user interfaces (UI), bad things can happen.

You may recall that back in January, residents of Hawaii received a terrifying alert that a ballistic missile was headed straight for them. Fortunately, it was a false alarm, though it left people deeply shaken. Many were outraged that such an error could even happen.

It turns out that bad UX design played a role in the Hawaii missile warning mistake. One local paper published a photo recreation of the state’s alert notification interface. The option for sending drill alerts was almost indistinguishable from the one that would sound an actual alarm. The idea that someone could click the wrong link wasn’t at all far-fetched.

Thankfully, no one was hurt as a result of that mistake. But the error highlighted the importance of great UX.

How will you treat your guests?

In an earlier piece on the importance of user experience, Entefy referenced the words of designer Charles Eames, who said, “The role of the designer is that of a very good, thoughtful host, anticipating the needs of his guests.” We find that Eames’ design wisdom translates well to UX, as the concept of user-as-guest can be a useful starting point for shaping the experience.

What is the tone of your messaging when someone discovers your organization online or uses your products? Is it inviting or is it impersonal? When they arrive at your homepage, do your aesthetics, content, and navigation make them want to stay awhile? Or will they find a warmer reception somewhere else? No matter how impressive your décor, gourmet your food, or prominent your guest list, you will fail as a host if you’re not engaging.

You can apply the same logic to the UX of software your organization uses internally. Examine the technology your team uses from their perspectives. Does it make their jobs easier? Do the tools they rely on facilitate cooperation and dialogue? Or is every day a grind because they’re forced to interact with confusing or outdated software? As we saw with the Hawaii missile mishap, the quality of your UX has real consequences.

Three pillars of UX design

Your UX design should always be evolving. New tech platforms and industry trends will change how your audience interacts with your company and how your employees serve your customers and clients. There is no “done” with UX. You’ll always be revising your site, your messaging, your marketing channels, and your customer service workflows.

But there are core principles around which your UX should be designed. Every decision should begin and end with these in mind:

1. UX is all about the end user

Great UX design is rooted in empathy. As the creator of a product or service, you’re naturally close to what you’ve built. You understand everything about how it works, and you know what your company stands for. But you need to take a step back and consider the experience from the perspective of someone who’s never used it before. What are the stumbling blocks? What are their workflows? How could you make their interactions with your company or products easier, more valuable, or more enjoyable?

You can ask your end users for feedback through surveys and in-person discussion groups. But those aren’t always feasible. Besides, what people say and what they do are often very different. Someone might say they feel confident using your platform, but they might struggle more than they let on.

That’s where usability tests prove helpful. Just a handful of tests can reveal substantial gaps in the user experience, and closing those gaps can be a game-changer. Just remember that, like your broader UX strategy, there is no end point for usability testing. You always want to be measuring performance and seeing where you can make your UX that much better.

2. Design for your entire audience

One size rarely fits all. Everyone who interacts with your company brings different needs, expectations, and comfort levels to the table. Some may be quite tech-savvy, while others will face a learning curve. The best user experiences are built with all of these people in mind. They’re inclusive and responsive, and they come with a high level of user support.

Previously, Entefy examined the issue of ageist design in technology and how it excludes people from accessing goods and services. When you don’t consider the full spectrum of users’ needs, that all-important dialogue between you and your audience breaks down. Someone who can’t easily navigate your site or is greeted with radio silence on your support channels isn’t going to feel heard. And you can be sure that they will shift their attention – and their business – to a company that’s more responsive or accommodating.

3. Be thoughtful about how you use technology

Tools such as automation platforms and chatbots can improve your UX, if they make sense for your employees and your audience. In the rush to prepare their companies for the age of AI, some leaders believe they must use every new tech tool at their disposal. But automated workflows, chatbots, and analytics programs are only effective if they support your end users.

Before implementing any new program, view it through the lens of your employees or audience. Will this feature directly impact their productivity or satisfaction? If the answer is yes, integrate it into your UX, but make sure to educate them about the change. Even if a tool or feature seems intuitive to you, it may be more difficult for your audience to grasp. But if you explain its purpose and help them through the initial adjustment period, they’ll likely be willing to follow your lead.

We live in an age of more content and more sophisticated technology than the world has ever seen. But communication can become a lost art if we’re not careful about how we apply those riches. Good UX design helps you maximize your resources and engage users in meaningful conversations for years to come. 


Locking out cybersecurity threats with advanced machine learning

In the early days of the Internet, being hacked wasn’t usually newsworthy. It was more of a nuisance or an inconvenient prank. Threats and malware could be easily mitigated with basic tools or system upgrades. But as technology became smarter and more dependent on the Internet, it also became more vulnerable. Today, cloud and mobile technologies serve as a veritable buffet of access points for hackers to exploit, compared to the slimmer pickings of a decade or so ago. As a result, cyberattacks are now significantly more widespread and can damage more than just a computer. Being hacked nowadays is much more than a mere annoyance. What started out historically as simple pranks now has the muscle to cripple hospitals, banks, power grids, and even governments. But as hacking has grown more sophisticated, so has cybersecurity, with the help of advanced machine learning.

Cybersecurity is the top concern of an economic, social, and political world that depends almost entirely on the Internet. But most systems aren’t yet capable of catching all the threats, which are multiplying at an incredible rate—an average of 350,000 new varieties per day. Within the past two years, there has been a surge of significant cyberattacks, ranging from DDoS attacks to ransomware infections, targeting hospital patient databases, banking networks, and military communications. All manner of information is at risk in these sorts of attacks, including personal health and banking information, classified data, and government services. The city of Atlanta recently spent over $2.6 million recovering from a ransomware attack that crippled the city’s online services. A major search engine paid out $50 million in a settlement after a massive customer database breach. So it’s easy to see that hacking has moved on from inconvenient to downright costly.

Security analysts and experts are responsible for hunting down and eliminating potential security threats. But this is tedious and often strenuous work. It involves massive sets of complex data, with plenty of opportunity for false flags and threats that go undetected. And when breaches are found, the time it takes for a fix to be built and implemented varies between 100 and 245 days, depending on the industry. This is more than enough time for a hacker to cause serious and costly damage. The shortcomings of current cybersecurity, coupled with the dramatic rise in cyberattacks, will mean that by 2021, 3.5 million cybersecurity jobs will be needed, and cyberattacks will cost an estimated $6 trillion per year. But because burnout is so common in the cybersecurity industry, it’s speculated that most of those jobs won’t be filled or won’t stay filled.

The key to effective cybersecurity is to work smarter, not harder. Supplementing cybersecurity systems with AI can be an intelligent countermeasure to cyberattacks. AI as a security measure has already been deployed in some small ways in current technology. Its ability to scan and process biometrics in real time is already built into mainstream smartphones. Large tech firms have also begun to use AI protection in their networks and cloud operations.

AI’s most significant capability is robust, lightning-fast data analysis that can learn from much greater volumes of data in less time than human security analysts ever could. It can be trained on malware developed by white hat hackers for security firms, learning to recognize malicious patterns and behaviors in software before conventional antivirus programs and firewalls can identify them. Natural language processing can prepare a system for incoming threats by scanning news reports and articles about cyberattacks. Artificial intelligence can be at work around the clock without fatigue, analyzing trends and learning new patterns. Perhaps most important, AI can actively spot errors in security systems and “self-heal” by patching them in real time, cutting back significantly on remediation time.

Cyberattacks may now be newsworthy for the damage they cause, but implementing AI in cybersecurity systems can greatly diminish the risk. Machine learning can speed up an otherwise time-consuming and costly process by identifying breaches in security before the damage becomes too widespread.


AI earning a seat in politics

Alexandria Ocasio-Cortez is no stranger to making headlines. She made quite a few after her election to Congress in 2018, namely as a woman, a millennial, and a democratic socialist who had just unseated a ten-term incumbent to secure a seat in one of the highest branches of government. And in 2019, she shows no signs of slowing down, particularly after her comments at SXSW about automation and the future of the American economy. She is not the first to remark on AI’s place in the world, but she is among the most high-profile politicians to do so. Her comments, and the subsequent public response, brought AI to the American political platform more pointedly than before. It’s clear that automation is going to become a hot-button topic in the future of politics, not just in America but around the globe, with opinions as divided as politics itself.

In recent years, AI has been making its way into politics worldwide in ways both blatant and subtle. In the Tama city area of Tokyo, Japan, Michihito Matsuda ran a “fair and balanced” campaign for mayor that garnered 4,000 votes. That may not sound particularly impressive on its own, but it is notable considering that Matsuda-san is a robot. On a larger scale, in Russia, an AI named Alisa not only managed to secure a nomination for president but also garnered more than 25,000 votes. She ultimately lost to Putin, but her nomination and the subsequent base of support highlight growing confidence in AI systems in governance and politics.

So far, AI has been used primarily to support the campaigns of flesh-and-blood politicians, and machine learning algorithms have been instrumental in targeted political advertising and Internet-based opinion polls. But the appeal of AI politicians seems to be in how they can compensate for the traits that often cost human politicians support. AI has unending stamina, can quickly react to new events, and can multitask in ways previously unimaginable. But is that enough to make an AI system a worthy policymaker and a viable threat to human politicians?

Ocasio-Cortez has called for the public to embrace automation in ways that free up our time to create and live as we choose. This was her response to a concern raised by some of her constituents and perhaps much of the country: will AI take over jobs? This concern exemplifies a powerful dichotomy in public opinion of AI. A recent study in Europe found that while the public is still fearful of what automation could do to the job market, “one in four Europeans would prefer artificial intelligence to make important decisions about the running of their country.” This may be in response to widespread frustration with government. What seems to be gaining ground, however, is the belief that AI’s logic-based framework may be better suited to policymaking. Although Alisa and Matsuda-san didn’t win their political races, they’ve opened our eyes to new opportunities for AI in politics.


Other “F” words: fraud in finance

Of all the “F” words that raise our collective blood pressure, none does so quite as effectively as “fraud.” It’s a crime that predates currency and, alongside society and technology, has only gotten more sophisticated with time. The rise of online banking and ecommerce has multiplied the opportunities for fraud. It has become a pervasive and costly problem, with a depth and complexity that make it almost impossible to prevent. Fortunately, with artificial intelligence and its potent data intelligence capabilities, early detection of fraud is becoming a reality.

Needless to say, wherever there’s money, there’s usually a system to monitor its activity and check for fraud. These systems often track and gauge transaction activity and spending habits, or react to cases of fraud reported by those impacted. There’s already a certain degree of automation integrated into anti-fraud systems, but overall, manual human reviews remain the primary line of defense. This is a challenge because manual reviews of voluminous financial data are error-prone, repetitive, and time consuming, not to mention costly due to wages and ongoing mandatory training of staff. In fact, according to a North American business survey, “manual review staff account for the largest single slice of [the] fraud management budget.”

Historically, fraud detection has focused on discovering fraud rather than preventing it. This practice often results in higher rates of false positives, or falsely-flagged transactions, which can erode or even destroy consumer trust. Merchants and banks lose out too, both on revenue and on customers who, as a result of false flags, are hesitant to use their services or trust in their ability to protect data. In financial services, the technology already deployed to mitigate fraud leaves much to be desired. The traditional paradigm for fraud detection lies in customer history, where rule-based automated systems react to certain preset parameters. This can be a challenge because “false positives occur regularly with traditional rule-based anti-fraud measures, where the system flags anything that falls outside a given set of parameters.” Anti-fraud measures based purely on rules are devoid of the human sensibility that is often essential to understanding and mitigating fraud.

What sets machine learning systems apart is their ability to learn from massively diverse sets of data points, rather than merely focusing on customer history or a set of predetermined parameters. This way, sophisticated algorithmic models can build a more comprehensive profile for each customer and provide the right context for their purchasing history. Financial service juggernauts have already begun utilizing deep learning in their fraud detection systems, and by doing so, one company was able to cut its number of false positives in half.

The implementation of AI in anti-fraud processes doesn’t entirely eliminate the need for the human touch. The human-AI feedback cycle necessitates strong collaboration between people and machines. Many AI systems remain dependent on human input, especially in their early phases, for integrating diversity into their models and their understanding of data. Highly skilled analysts retain the ability to think like fraudsters and understand the complexity of human emotions that can affect how and why fraud is committed. In return, AI can help process vast, complex sets of data in a fraction of the time and do so around the clock.

These sorts of advantages are exactly what the financial industry has been looking for, especially in the wake of notorious financial fraud cases making headlines around the world. We’ve previously discussed how advanced AI can help streamline compliance processes, but at a business level, it provides even more. Leveraging the power of AI is a smart way to combat fraud and give peace of mind to concerned customers who are tired of hearing the “F” word.