Artificial general intelligence, the Holy Grail of AI

From Mary Shelley’s Frankenstein to James Cameron’s Terminator, the possibility of scientists creating an autonomous intelligent being has long fascinated humanity—as has the potential impact of such a creation. In fact, the invention of machine intelligence with broad, humanlike capabilities, often referred to as artificial general intelligence, or AGI, is an ultimate goal of the computer science field. AGI refers to “a machine’s intelligence functionality that matches human cognitive capabilities across multiple domains. Often characterized by self-improvement mechanisms and generalization rather than specific training to perform in narrow domains.”

Some well-known thinkers such as Stephen Hawking have warned that a true artificial general intelligence could quickly evolve into a “superintelligence” that would surpass and threaten the human race. However, others have argued that a superintelligence could help us solve some of the greatest problems facing humanity. Via rapid and advanced information processing and decision making, AGI could help us complete hazardous jobs, accelerate research, and eliminate monotonous tasks. As global birthrates plummet and the old outnumber the young, at the bare minimum, robots with human-level intelligence might help address the shortage of workers in critical fields such as healthcare, education, and yes, software engineering.

AI all around us

Powerful real-life demonstrations of AI, from computers that beat world chess champions to virtual assistants in our devices that speak to us and execute tasks, may lead us to think that a humanlike AGI is just around the corner. However, it’s important to note that virtually all AI implementations and use cases you experience today are examples of “narrow” or “weak” artificial intelligence—that is, artificial intelligence built and trained for specific tasks.   

Narrow forms of AI that actually work can be useful because of their ability to process information exponentially faster (and with fewer errors) than human beings ever could. However, as soon as these forms of narrow AI come upon a new situation or variation of their expert tasks, their performance falters.

To understand how ambitious the notion of creating AGI truly is, it’s helpful to reflect on the vast complexity of human intelligence.

Complex and combinatorial intelligence

Alan Turing, the father of computer science, offered what is now famously called the Turing Test as a way to identify humanlike machine intelligence. He proposed that any machine that could imitate a human being well enough to fool another human being would pass the test.

However, MIT roboticist Rodney Brooks has outlined four key developmental stages of human intelligence that can help us measure progress in machine intelligence:

  1. A two-year-old child can recognize and find applications for objects. For example, a two-year-old can see the similarity between a chair and a stool, and the possibility of sitting on both a rock and a pile of pillows. While the visual object recognition and deep learning capabilities of AI have advanced dramatically since the 1990s, most forms of current AI aren’t able to apply new actions to dissimilar objects in this manner.
  2. A four-year-old child can follow the context and meaning of human language exchange through multiple situations and can enter a conversation at any time as contexts shift and speakers change. To some degree, a four-year-old can understand unspoken implications, responding to speech as needed, and can spot and use both lying and humor. In a growing number of cases, AI is able to successfully converse and use persuasion in manners similar to human beings. That said, none currently has the full capacity to consistently give factual answers about the current human-social context in conversation, let alone display all the sophistication of a four-year-old brain.
  3. A six-year-old child can master a variety of complex manual tasks in order to engage autonomously in the environment. Such tasks include self-care and goal-oriented tasks such as dressing oneself and cutting paper in a specific shape. A child at this age can also physically handle younger siblings or pets with care and sensitivity. There is hope that AI might power robotic assistants to the elderly and disabled by the end of the decade.
  4. An eight-year-old child can infer and articulate the motivations and goals of other human beings by observing their behavior in a given context. They are socially aware and can identify their own motivations and goals and explain them in conversations, in addition to understanding the personal ambitions explained by their conversation partners. Eight-year-olds can implicitly understand the goals and objectives of assignments others give them without a full spoken explanation of the purpose. In 2020, MIT scientists successfully developed a machine learning algorithm that could determine human motivation and whether a human successfully achieved a desired goal through the use of artificial neural networks. This represents a promising advancement in independent machine learning.

Why the current excitement over AGI?

Artificial general intelligence could revolutionize the way we work and live, rapidly accelerating solutions to many of the most challenging problems plaguing human society—from the need to reduce carbon emissions to the need to react to and manage disruptions to global health, the economy, and other aspects of society. Some have posited that it could even liberate human beings from all forms of taxing labor, leaving us free to pursue pleasurable pursuits full time with the help of a universal basic income.

For his 2018 book, Architects of Intelligence, futurist Martin Ford surveyed 23 of the leading scientists in AI and asked them how soon they thought research could produce a genuine AGI. “Researchers guess [that] by 2099, there’s a 50 percent chance we’ll have built AGI.”

So why do news headlines make it seem as though AGI may be only a few years off? Hollywood is partly to blame for both popularizing and glamorizing super-smart machines in pseudo-realistic films. Think Iron Man’s digital sidekick J.A.R.V.I.S., Samantha in the movie Her, or Ava in the movie Ex Machina. There is also the dramatic progress in machine learning over the past decade, in which a combination of advances in big data, computer vision, and speech recognition, along with the application of graphics processing units (GPUs) to algorithm design, allowed scientists to better replicate patterns of the human brain through artificial neural networks.

Recent big digital technology trends in machine learning, hyperautomation, and the metaverse, among others, have made researchers hopeful that another important scientific discovery could help the field of AI make a giant leap forward toward complex human intelligence. As with previous revolutions in computing and software, it’s magical and inspiring to witness the journey of machine intelligence from narrow AI to AGI and the remarkable ways it can power society.

For more about AI and the future of machine intelligence, be sure to read our previous blogs on important AI terms, the ethics of AI, and the 18 valuable skills needed to ensure success in enterprise AI initiatives.

Introducing Entefy’s new look

As Entefy enters its next phase of growth, we’re excited to publicly unveil our new branding, complete with our redesigned logo, updated visual theme, and exclusive content, all of which can be seen on our brand new website. Reflecting on the evolution of our company and services over recent years, it’s time to update our look on the outside to echo the progress we’ve made on the inside. While this new branding changes our look and feel, our mission and core values remain unchanged. 

With our new visual theme, you’ll notice how we’ve emphasized clarity, simplicity, and efficiency, beginning with the black and white primary color palette. Our suite of 2D and 3D graphic assets is inspired by geometric shapes and principles to create coherence and harmony, especially when dealing with more complex aspects of data and machine learning. This is Entefy’s new visual design language, offering a fresh perspective on AI, automation, and the future of machine intelligence.

Entefy’s new website provides important information about the company’s core technologies, highlighted products and services, as well as our all-in-one bundled EASI subscription.

Why Entefy? What is the Mimi AI Engine? How does Entefy’s 5-Layer Platform work? How do Entefy AI and automation products solve problems at organizations across diverse industries? How does the EASI Subscription work and what are some customer success stories? You’ll find answers to these questions and much more on our new website. We encourage you to visit the new site and blog to learn more about Entefy and a better way to AI.

Much has happened since the early days of Entefy. Technological breakthroughs and inventions. Key product milestones. Expansion of Entefy’s technical footprint across countries and industries. And a long list of amazing people (team members, investors, partners, advisors, and customers alike) who have joined Entefy in manifesting the future of machine intelligence in support of our mission—saving people time so they can work and live better.

Interested in digital and AI transformation for your organization? Request a demo today.

Got the Entefy moxie and interested in new career opportunities? If so, we want you on our team.

Cybersecurity threats are on the rise. AI and hyperautomation to the rescue.

In the early days of the Internet, cybersecurity wasn’t the hot topic it is now. It was more of a nuisance or an inconvenient prank. Back then, the web was still experimental and more of a side hustle for a relatively narrow, techie segment of the population. Nothing even remotely close to the lifeline it is for modern society today. Cyber threats and malware could be easily mitigated with basic tools or system upgrades. But as technology grew smarter and more dependent on the Internet, it became more vulnerable. Today, cloud, mobile, and IoT technologies, as well as endless applications, software frameworks, libraries, and widgets, serve as a veritable buffet of access points for hackers to exploit. As a result, cyberattacks are now significantly more widespread and can damage more than just your computer. The acceleration of digital trends due to the COVID-19 pandemic has only exacerbated the problem.

What is malware?

Malware, or malicious software, is a general industry term that covers a slew of cybersecurity threats. Hackers use malware and other sophisticated techniques (including phishing and social-engineering-based attacks) to infiltrate and cause damage to your computing devices and applications, or simply steal valuable data. Malware includes common threats such as ransomware, viruses, adware, and spyware, as well as lesser-known threats such as Trojan horses, worms, and cryptojacking. Cybercriminals hack to control data and systems for money, to make political statements, or just for fun.

Today, cyber threats represent real risk to consumers, corporations, and governments virtually everywhere. The reality is that most systems are not yet capable of identifying or protecting against all cyber threats, and those threats are only rising in volume and sophistication.

Examples of cybersecurity attacks

Cyberattacks in recent years have risen at an alarming rate. There are too many examples of significant exploits to list here. However, the Center for Strategic & International Studies (CSIS) provides a 12-month summary illustrating the noticeable surge in significant cyberattacks, ranging from DDoS attacks to ransomware targeting health care databases, banking networks, military communications, and much more.

All manner of information is at risk in these sorts of attacks, including personally identifiable information, confidential corporate data, and government secrets and classified data. For instance, in February 2022, “a U.N. report claimed that North Korea hackers stole more than $50 million between 2020 and mid-2021 from three cryptocurrency exchanges. The report also added that in 2021 that amount likely increased, as the DPRK launched 7 attacks on cryptocurrency platforms to help fund their nuclear program in the face of a significant sanctions regime.”

In another example, in July 2021, Russian hackers “exploited a vulnerability in Kaseya’s virtual systems/server administrator (VSA) software allowing them to deploy a ransomware attack on the network. The hack affected around 1,500 small and midsized businesses, with attackers asking for $70 million in payment.”

Living in a digitally interconnected world also means growing vulnerability for physical assets. Last year, you may recall hearing about the Colonial Pipeline, the largest American fuel pipeline, being hit by a targeted ransomware attack. In May 2021, “the energy company shut down the pipeline and later paid a $5 million ransom. The attack is attributed to DarkSide, a Russian speaking hacking group.”

Cybersecurity is a matter of national (and international) security

As a newer operational component of the Department of Homeland Security, the Cybersecurity & Infrastructure Security Agency (CISA) in the U.S. “leads the national effort to understand, manage, and reduce risk to our cyber and physical infrastructure.” Their work helps “ensure a secure and resilient infrastructure for the American people.” CISA was established in 2018 and has since issued a number of emergency directives to help protect information systems. One recent example is Emergency Directive 22-02, aimed at the Apache Log4j vulnerability that threatened computer networks around the globe. In December 2021, CISA concluded that the Log4j security vulnerability posed “an unacceptable risk to Federal Civilian Executive Branch agencies and requires emergency action.”

In the private sector, “88% of boards now view cybersecurity as a business risk.” Historically, protection against cyber threats has relied heavily on traditional rules-based software and human monitoring efforts. Unfortunately, those traditional approaches to cybersecurity are failing the modern enterprise. This is due to several factors that, in combination, have outpaced our human ability to manage them effectively: the sheer volume of digital activity; the growing number of unguarded vulnerabilities in computer systems (including IoT, BYOD, and other smart devices), code bases, and operational processes; the divergence of the two “technospheres” (Chinese and Western); the apparent rise in the number of cybercriminals; and the novel approaches hackers use to stay one step ahead of their victims.

Costs of cybercrimes 

Security analysts and experts are responsible for hunting down and eliminating potential security threats. But this is tedious and often strenuous work, involving massive sets of complex data, with plenty of opportunity for false flags and threats that can go undetected.

When critical cyber breaches are found, remediation efforts can take 205 days on average. The shortcomings of current cybersecurity, coupled with the dramatic rise in cyberattacks, translate into significant costs. “Cybercrime costs the global economy about $445 billion every year, with the damage to business from theft of intellectual property exceeding the $160 billion loss to individuals.”

Cybercrimes can cause physical harm too. This “shifts the conversation from business disruption to physical harm with liability likely ending with the CEO.” By 2025, Gartner expects cybercriminals to “have weaponized operational technology environments successfully enough to cause human casualties.” Further, it is expected that by 2024, security incidents related to cyber-physical systems (CPSs) will create personal liability for 75% of CEOs. And even without factoring “the actual value of a human life into the equation,” CPS attacks are predicted to create a financial impact in excess of $50 billion by 2023.

Modern countermeasures include zero trust systems, artificial intelligence, and automation

The key to effective cybersecurity is working smarter, not harder. And working smarter in cybersecurity requires a march toward Zero Trust Architecture and autonomous cyber, powered by AI and intelligent automation. Modernizing systems with the zero trust security paradigm and machine intelligence represents a powerful countermeasure to cyberattacks.

The White House describes ‘Zero Trust Architecture’ in the following way:

“A security model, a set of system design principles, and a coordinated cybersecurity and system management strategy based on an acknowledgement that threats exist both inside and outside traditional network boundaries.  The Zero Trust security model eliminates implicit trust in any one element, node, or service and instead requires continuous verification of the operational picture via real-time information from multiple sources to determine access and other system responses.  In essence, a Zero Trust Architecture allows users full access but only to the bare minimum they need to perform their jobs.  If a device is compromised, zero trust can ensure that the damage is contained.  The Zero Trust Architecture security model assumes that a breach is inevitable or has likely already occurred, so it constantly limits access to only what is needed and looks for anomalous or malicious activity.  Zero Trust Architecture embeds comprehensive security monitoring; granular risk-based access controls; and system security automation in a coordinated manner throughout all aspects of the infrastructure in order to focus on protecting data in real-time within a dynamic threat environment.  This data-centric security model allows the concept of least-privileged access to be applied for every access decision, where the answers to the questions of who, what, when, where, and how are critical for appropriately allowing or denying access to resources based on the combination of sever.”

AI’s most significant capability is robust, lightning-fast data analysis that can learn from far greater volumes of data in far less time than human security analysts ever could. Further, with the right compute infrastructure, artificial intelligence can work around the clock without fatigue, analyzing trends and learning new patterns. As a security measure, AI has already been implemented in some small ways, such as scanning and processing biometrics in mainstream smartphones or advanced analysis of vulnerability databases. But as advanced technology adoption accelerates, autonomous cyber is likely to provide the highest level of protection against cyberattacks. It will add resiliency to critical infrastructure and global computer networks via sophisticated algorithms and intelligently orchestrated automation that can not only better detect threats but also take corrective actions to mitigate risks. Actively spotting errors in security systems and automatically “self-healing” by patching them in real time significantly reduces remediation time.
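To make this concrete, here is a minimal, hypothetical sketch of AI-assisted anomaly detection on security telemetry using an unsupervised isolation forest from scikit-learn; the session features and values are illustrative assumptions, not a description of any particular security product.

```python
# Minimal sketch: flagging anomalous network sessions with an unsupervised model.
# Assumes scikit-learn is available; feature names and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: [bytes_sent, bytes_received, login_failures]
normal_sessions = rng.normal(loc=[500, 800, 0.2], scale=[100, 150, 0.5], size=(1000, 3))
suspicious = np.array([[50_000, 100, 25],      # large exfiltration-like transfer
                       [200, 300, 40]])        # brute-force-like login failures

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# predict() returns -1 for anomalies and 1 for inliers
for session, label in zip(suspicious, model.predict(suspicious)):
    status = "ALERT" if label == -1 else "ok"
    print(status, session)
```

In practice, such a model would be one component in a larger pipeline that also triggers automated containment or patching actions.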

Conclusion

Accelerating digital trends and cyber threats have elevated cybersecurity to a priority topic for corporations and governments alike. Technology and business leaders are recognizing the limitations of their organizations’ legacy systems and processes. There is growing recognition that, in light of ever-evolving threat vectors, static, rules-based approaches are destined to fail. Augmenting cybersecurity systems with AI and automation can mitigate risks and speed up an otherwise time-consuming and costly process by identifying breaches in security before the damage becomes too widespread.

Is your enterprise prepared for what’s next in cybersecurity, artificial intelligence, and automation? Begin your enterprise AI Journey and read about the 18 skills needed to materialize your AI initiative, from ideation to production implementation.

The 18 valuable skills needed to ensure success in your enterprise AI initiatives

Today, artificial intelligence (AI) is among the top technologies transforming industries and disrupting software across the globe. Organizations are quickly realizing the incredible value that AI can add in process and workflow automation, workforce productivity, cost reduction, energy and waste optimization, customer engagement, and much more.

AI and machine learning rely on a special blend of science and engineering to create intelligence for machines—preparing data, choosing algorithms, training models, tuning parameters, testing performance, creating workflows, managing databases, developing software, and more. Building models and architecting thoughtful software orchestration are central to work in this field and continual experimentation is simply par for the course. However, this is only part of the AI journey.

At Entefy, we’re obsessed with machine intelligence and its potential to change lives for the better. Given our work in advanced AI and intelligent process automation over the years and various inventions in the field, we’ve had the good fortune of working with amazing people to build systems that bring unprecedented efficiency to operations.

So, what does it take to make it all work and work well? What are the proficiencies and competencies required to bring useful and scalable AI applications to life? Below are 18 valuable skills organizations need in order to successfully materialize each AI project from ideation to production implementation.

18 Skills needed for enterprise AI

Architecture: Design and specification of software subsystems, supporting technologies, and associated orchestration services to ensure scalable and performant delivery.

Infrastructure Engineering: Implementation and maintenance of physical and virtual resources, networks, and controls that support the flow, storage, and analysis of data.

DataOps: Management, optimization, and monitoring of data retrieval, storage, transformation, and distribution throughout the data life cycle including preparation, pipelines, and reporting. 

Machine Learning Science: Research and development of machine learning models including designing, training, validation, and testing of algorithmic models.

Machine Learning Engineering: Development and software optimization of machine learning models for scalable deployment.

DevOps: Building, packaging, releasing, configuring, and monitoring of software to streamline the development process.

Backend Engineering: Development of server-side programs and services including implementation of core application logic, data storage, and APIs.

Frontend Engineering: Development of client-side applications and user interfaces such as those present in websites, desktop software, and mobile applications.

Security Engineering: Implementation and management of policies and technologies that protect software systems from threats.

Quality Assurance: Quality and performance testing that validates the business logic and ensures the proper product function.

Release Management: Planning, scheduling, automating, and managing the testing and deployment of software releases throughout a product lifecycle.

UI Design: Wireframing, illustrations, typography, image, and color specifications that help visually bring user interfaces to life. 

UX Design: Specification of the interaction patterns, flows, features, and interface behaviors that enhance accessibility, usability, and overall experience of the user interaction.  

Project Management: Coordination of human resources, resolution of dependencies, and procurement of tools to meet project goals, budget, and delivery timeline.

Product Management: Scoping user, business, and market requirements to define product features and manage plans for achieving business goals.

Technical Writing and Documentation: Authoring solution and software specifications, usage documentation, technical blogs, and data sheets. 

Compliance and Legal Operations: Creation and monitoring of policies and procedures to ensure a solution’s adherence to corporate and governmental regulations through development and deployment cycles, including IP rights, data handling, and export controls. 

Business Leadership: Strategic decision making, risk assessment, budgeting, staffing, vendor sourcing, and coordination of resources all in service of the core business and solution objectives.

Conclusion

It takes a village to nurture an AI project. Whether you have people within your team or rely on external support, the resources are available to bring your AI initiatives to life. Begin your enterprise AI journey here. And be sure to check out our other resources to help you shorten your project implementation time and avoid common AI missteps.

Important AI terms for professionals and tech enthusiasts

Cut through the noise with this list of 106 useful AI terms curated by Entefy ML scientists and engineers

Unless you live on a deserted island, you have likely heard about artificial intelligence (AI) and the ways it continues to rapidly gain traction. AI has evolved from an academic topic into a rich field of practical applications for businesses and consumers alike. And, like any advanced technical field, it has its own lexicon of key terms and phrases. However, without deeper AI training and education, it can be quite challenging to stay abreast of the rapid changes taking place within the field.

So, to help demystify artificial intelligence and its many sub-components, our team has assembled this list of useful terms for anyone interested in practical uses of AI and machine learning. This list includes some of the most frequently used terms as well as some that may not be used as often but are important for understanding foundational AI concepts.

We encourage you to bookmark this page for quick-reference in the future.

A

Activation function. A function in a neural network that defines the output of a node given one or more inputs from the previous layer. Also see weight.
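For illustration, a minimal sketch of two common activation functions (assuming NumPy; the input values are arbitrary):

```python
# Two common activation functions applied to a node's weighted-sum input.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)          # passes positive values, zeroes out the rest

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))    # squashes any input into the range (0, 1)

z = np.array([-2.0, 0.0, 3.0])         # example pre-activation values
print(relu(z))                         # [0. 0. 3.]
print(sigmoid(z))                      # approximately [0.119, 0.5, 0.953]
```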

Algorithm. A procedure or formula, often mathematical, that defines a sequence of operations to solve a problem or class of problems.

Anomaly detection. The process of identifying instances of an observation that are unusual or deviate significantly from the general trend of data. Also see outlier detection.

Artificial general intelligence (AGI) (also, strong AI). The term used to describe a machine’s intelligence functionality that matches human cognitive capabilities across multiple domains. Often characterized by self-improvement mechanisms and generalization rather than specific training to perform in narrow domains.

Artificial intelligence (AI). The umbrella term for computer systems that can interpret, analyze, and learn from data in ways similar to human cognition.

Artificial neural network (ANN) (also, neural network). A specific machine learning technique that is inspired by the neural connections of the human brain. The intelligence comes from the ability to analyze countless data inputs to discover context and meaning.

Autoencoder. An unsupervised learning technique for artificial neural networks, designed to learn a compressed representation (encoding) for a set of unlabeled data, typically for the purpose of dimensionality reduction.

AutoML. The process of automating certain machine learning steps within a pipeline such as model selection, training, and tuning.

B

Backpropagation. A method of optimizing multilayer neural networks whereby the output of each node is calculated and the partial derivative of the error with respect to each parameter is computed in a backward pass through the graph. Also see model training.
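For illustration, a minimal sketch of one backpropagation step for a single sigmoid neuron with squared-error loss (assuming NumPy; the numbers are arbitrary, and real networks repeat this across many layers and samples):

```python
# One gradient step for a single sigmoid neuron trained with squared error.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0])       # input features
y = 1.0                         # target label
w = np.array([0.1, 0.2])        # current weights
b = 0.0                         # current bias
lr = 0.1                        # learning rate

# Forward pass
z = w @ x + b
y_hat = sigmoid(z)
loss = (y_hat - y) ** 2

# Backward pass: chain rule from the loss back to each parameter
dloss_dyhat = 2 * (y_hat - y)
dyhat_dz = y_hat * (1 - y_hat)
grad_w = dloss_dyhat * dyhat_dz * x
grad_b = dloss_dyhat * dyhat_dz

# Gradient descent update
w -= lr * grad_w
b -= lr * grad_b
print(loss, w, b)
```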

Bagging. In ML, an ensemble technique that trains multiple weak learners on random subsets of the data and aggregates their outputs to form a stronger learner, with a focus on stability and accuracy.

Bias. In ML, the phenomenon that occurs when certain elements of a dataset are more heavily weighted than others so as to skew results and model performance in a given direction.

Bigram. An n-gram containing a sequence of 2 words. Also see n-gram.

Boosting. In ML, an ensemble technique that trains multiple weak learners sequentially, each correcting the errors of its predecessors, to form a stronger learner, with a focus on reducing bias and variance.

C

Cardinality. In mathematics, a measure of the number of elements present in a set.

Categorical variable. A feature representing a discrete set of possible values, typically classes, groups, or nominal categories based on some qualitative property. Also see structured data.

Centroid model. A type of classifier that computes the center of mass of each class and uses a distance metric to assign samples to classes during inference.

Chatbot. A computer program (often designed as an AI-powered virtual agent) that provides information or takes actions in response to the user’s voice or text commands or both. Current chatbots are often deployed to provide customer service or support functions.

Class. A category of data indicated by the label of a target attribute.

Class imbalance. The quality of having a non-uniform distribution of samples grouped by target class.

Classification. The process of using a classifier to categorize data into a predicted class.

Classifier. An instance of a machine learning model trained to predict a class.

Clustering. An unsupervised machine learning process for grouping related items into subsets where objects in the same subset are more similar to one another than to those in other subsets.

Cognitive computing. A term that describes advanced AI systems that mimic the functioning of the human brain to improve decision-making and perform complex tasks.

Computer vision (CV). An artificial intelligence field focused on classifying and contextualizing the content of digital video and images. 

Convolutional neural network (CNN). A class of neural networks that uses convolutional layers, which apply learned filters across local regions of the input, typically in combination with pooling and fully connected layers. CNNs are most commonly applied to computer vision.

Cross-validation. In ML, a technique for evaluating the generalizability of a machine learning model by testing the model against one or more validation datasets.
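For illustration, a minimal sketch of 5-fold cross-validation using scikit-learn; the dataset and model here are placeholders:

```python
# Train and score the model on 5 different train/validation splits, then
# average the scores to estimate how well it generalizes.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

scores = cross_val_score(model, X, y, cv=5)   # accuracy on each of the 5 folds
print(scores, scores.mean())
```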

D

Data cleaning. The process of improving the quality of a dataset in preparation for analytical operations by correcting, replacing, or removing dirty data (inaccurate, incomplete, corrupt, or irrelevant data).

Data curation. The process of collecting and managing data, including verification, annotation, and transformation. Also see training and dataset.

Data mining. The process of targeted discovery of information, patterns, or context within one or more data repositories.

DataOps. Management, optimization, and monitoring of data retrieval, storage, transformation, and distribution throughout the data life cycle including preparation, pipelines, and reporting.

Data preprocessing. The process of transforming or encoding raw data in preparation for analytical operations, often through re-shaping, manipulating, or dropping data.

Deep learning. A subfield of machine learning that uses neural networks with two or more hidden layers to train a computer to process data, recognize patterns, and make predictions. Also see deep neural network.

Derived feature. A feature that is created and the value of which is set as a result of observations on a given dataset, generally as a result of classification, automated preprocessing, or sequenced model output.

Descriptive analytics. The process of examining historical data or content, typically for the purpose of reporting, explaining data, and generating new models for current or historical events. Also see predictive analytics and prescriptive analytics.

Dimensionality reduction. A data preprocessing technique to reduce the number of input features in a dataset by transforming high-dimensional data to a low-dimensional representation.

Discriminative model. A class of models most often used for classification or regression that predict labels from a set of features. Closely associated with supervised learning. Also see generative model.

E

Embedding. In ML, a mathematical structure representing discrete categorical variables as a continuous vector. Also see vectorization.

Ensembling. A powerful technique whereby two or more algorithms, models, or neural networks are combined in order to generate more accurate predictions.

Embedding space. An n-dimensional space where features from one higher-dimensional space are mapped to a lower dimensional space in order to simplify complex data into a structure that can be used for mathematical operations. Also see dimensionality reduction.

F

F1 Score. A measure of a test’s accuracy calculated as the harmonic mean of precision and recall.
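For illustration, a minimal worked example computing precision, recall, and their harmonic mean (F1) from hypothetical counts; see also precision and recall below:

```python
# Precision, recall, and F1 from raw counts (counts are illustrative).
tp, fp, fn = 40, 10, 20                 # true positives, false positives, false negatives

precision = tp / (tp + fp)              # 40 / 50 = 0.80
recall = tp / (tp + fn)                 # 40 / 60 = 0.67 (approx.)
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean, about 0.73

print(round(precision, 2), round(recall, 2), round(f1, 2))
```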

Feature. In ML, a specific variable or measurable value that is used as input to an algorithm.

Federated learning. A machine learning technique where the training for a model is distributed amongst multiple decentralized servers or edge devices, without the need to share training data.

Fine-tuning. In ML, the process of adapting a trained model to a new dataset or target objective by further adjusting its parameters or hyperparameters to improve performance.

G

Generative adversarial network (GAN). A class of AI algorithms whereby two neural networks, typically a generator and a discriminator, compete against each other to improve their capabilities and become stronger.

Generative model. A model capable of generating new data based on a given set of training data. Also see discriminative model.

Gradient boosting. An ML technique where an ensemble of weak prediction models, such as decision trees, is trained iteratively, with each new model correcting the errors of the previous ones, in order to produce a stronger prediction model.

Ground truth. Information that is known (or considered) to be true, correct, real, or empirical, usually for the purpose of training models and evaluating model performance.

H

Hidden layer. A construct within a neural network between the input and output layers that performs a given function, such as an activation function, during model training. Also see deep learning.

Hyperparameter. In ML, a parameter whose value is set prior to the learning process as opposed to other values derived by virtue of training.

Hyperplane. In ML, a decision boundary that helps classify data points from a single space into subspaces where each side of the boundary may be attributed to a different class, such as positive and negative classes. Also see support vector machine.

I

Inference. In ML, the process of applying a trained model to data in order to generate a model output such as a score, prediction, or classification. Also see training.

Input layer. The first layer in a neural network, acting as the beginning of a model workflow, responsible for receiving data and passing it to subsequent layers. Also see hidden layer and output layer.

Intelligent process automation (IPA). A collection of technologies, including robotic process automation (RPA) and AI, to help automate certain digital processes. Also see robotic process automation (RPA).

K

K-means clustering. An unsupervised learning method used to cluster n observations into k clusters such that each of the n observations belongs to the nearest of the k clusters.
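For illustration, a minimal sketch of k-means clustering with scikit-learn on made-up 2-D points:

```python
# Group 2-D points into k=2 clusters; the data is illustrative.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[1.0, 1.1], [0.9, 0.8], [1.2, 1.0],    # one cluster near (1, 1)
                   [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])   # another near (8, 8)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # cluster assignment for each point, e.g. [0 0 0 1 1 1]
print(kmeans.cluster_centers_)  # the two learned centroids
```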

K-nearest neighbors (KNN). A supervised learning method for classification and regression used to estimate the likelihood that a data point is a member of a group, where the model input is defined as the k closest training examples in a data set and the output is either a class assignment (classification) or a property value (regression).

Knowledge distillation. In ML, a technique used to transfer the knowledge of a complex model, usually a deep neural network, to a simpler model with a smaller computational cost.

L

Layer. In ML, a collection of neurons within a neural network which perform a specific computational function, such as an activation function, on a set of input features. Also see hidden layer, input layer, and output layer.

Logistic regression. A type of classifier that models the relationship between a dependent variable and one or more independent variables using a logistic function, typically to estimate class probabilities.

Long Short Term Memory (LSTM). A recurrent neural network (RNN) that maintains history in an internal memory state, utilizing feedback connections (as opposed to standard feedforward connections) to analyze and learn from entire sequences of data, not only individual data points.

M

Machine learning (ML). A subset of artificial intelligence that gives machines the ability to analyze a set of data, draw conclusions about the data, and then make predictions when presented with new data without being explicitly programmed to do so.

Mimi. The term used to refer to Entefy’s multimodal AI engine and technology.

MLOps. A set of practices to help streamline the process of managing, monitoring, deploying, and maintaining machine learning models.

Model training. The process of providing a dataset to a machine learning model for the purpose of improving the precision or effectiveness of the model. Also see supervised learning and unsupervised learning.

Multimodal AI. Machine learning models that analyze and relate data processed using multiple modes or formats of learning.

N

N-gram. A token, often a string, containing a contiguous sequence of n words from a given data sample.
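For illustration, a minimal sketch that extracts word n-grams from a sentence (n = 2 yields the bigrams defined above):

```python
# Extract contiguous n-word sequences from a sentence.
def ngrams(text: str, n: int) -> list:
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

print(ngrams("machine learning powers modern ai", 2))
# ['machine learning', 'learning powers', 'powers modern', 'modern ai']
```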

Naive Bayes. A probabilistic classifier based on applying Bayes’ rule that makes strong (naive) assumptions about the independence of features.

Named entity recognition (NER). An NLP model that locates and classifies elements in text into pre-defined categories.

Natural language processing (NLP). A field of computer science and artificial intelligence focused on processing and analyzing natural human language or text data.

Natural language understanding (NLU). A specialty area within Natural Language Processing focused on advanced analysis of text to extract meaning and context. 

Neural network (NN) (also, artificial neural network). A specific machine learning technique that is inspired by the neural connections of the human brain. The intelligence comes from the ability to analyze countless data inputs to discover context and meaning.

O

Ontology. A data model that represents relationships between concepts, events, entities, or other categories. In the AI context, ontologies are often used by AI systems to analyze, share, or reuse knowledge.

Outlier detection. The process of detecting a datapoint that is unusually distant from the average expected norms within a dataset. Also see anomaly detection.

Output layer. The last layer in a neural network, acting as the end of a model workflow, responsible for delivering the final result or answer such as a score, class label, or prediction. Also see hidden layer and input layer.

Overfitting. In ML, a condition where a trained model over-conforms to training data and does not perform well on new, unseen data. Also see underfitting.

P

Precision. In machine learning, a measure of model accuracy computing the ratio of true positives against all true and false positives in a given class.

Predictive analytics. The process of learning from historical patterns and trends in data to generate predictions, insights, recommendations, or otherwise assess the likelihood of future outcomes. Also see descriptive analytics and prescriptive analytics.

Prescriptive analytics. The process of using data to determine potential actions or strategies based on predicted future outcomes. Also see descriptive analytics and predictive analytics.

Primary feature. A feature, the value of which is present in or derived from a dataset directly. 

R

Random forest. An ensemble machine learning method that blends the output of multiple decision trees in order to produce improved results.

Recall. In machine learning, a measure of model accuracy computing the ratio of true positives guessed against all actual positives in a given class.

Recurrent neural network (RNN). A class of neural networks popularly used to analyze temporal data such as time series, video, and speech data.

Regression. In AI, a mathematical technique to estimate the relationship between one variable and one or more other variables. Also see classification.

Reinforcement learning (RL). A machine learning technique in which an agent independently learns the rules of a system through trial-and-error interactions, typically guided by rewards and penalties.

Robotic process automation (RPA). Business process automation that uses virtual software robots (not physical) to observe the user’s low-level or monotonous tasks performed using an application’s user interface in order to automate those tasks. Also see intelligent process automation (IPA).

S

Self-supervised learning. A form of autonomous supervised learning whereby a system identifies and extracts naturally available signals from unlabeled data, generating its own labels through processes of self-selection.

Semi-supervised learning. A machine learning technique that fits between supervised learning (in which data used for training is labeled) and unsupervised learning (in which data used for training is unlabeled).

Strong AI (also, AGI). The term used to describe artificial general intelligence or a machine’s intelligence functionality that matches human cognitive capabilities across multiple domains. Often characterized by self-improvement mechanisms and generalization rather than specific training to perform in narrow domains. Also see weak AI.

Structured data. Data that has been organized using a predetermined model, often in the form of a table with values and linked relationships. Also see unstructured data.

Supervised learning. A machine learning technique that infers from training performed on labeled data. Also see unsupervised learning.

Support vector machine (SVM). A type of supervised learning model that separates data into one of two classes by finding an optimal separating hyperplane.

T

Taxonomy. A hierarchical, structured list of terms that illustrates the relationships between those terms. Also see ontology.

Teacher Student model. A type of machine learning model where a teacher model is used to generate labels for a student model. The student model then tries to learn from these labels and improve its performance. This type of model is often used in semi-supervised learning, where a large amount of unlabeled data is available but labeling it is expensive.

Time series. A set of data points ordered by time, typically recorded at regular intervals.

TinyML. A branch of machine learning that deals with creating models that can run on very limited resources, such as embedded IoT devices.

Tokenization. In ML, a method of separating a piece of text into smaller units called tokens, representing words, subwords, or characters; contiguous sequences of tokens are known as n-grams. Also see n-gram.

Transfer learning. A machine learning technique where the knowledge derived from solving one problem is applied to a different (typically related) problem.

Transformer. In ML, a type of deep learning model for handling sequential data, such as natural language text, without needing to process the data in sequential order.

Tuning. The process of optimizing the hyperparameters of an AI algorithm to improve its precision or effectiveness. Also see algorithm.

U

Underfitting. In ML, a condition where a trained model is too simple to learn the underlying structure of a more complex dataset. Also see overfitting.

Unstructured data. Data that has not been organized with a predetermined order or structure, often making it difficult for computer systems to process and analyze.

Unsupervised learning. A machine learning technique that infers from training performed on unlabeled data. Also see supervised learning.

V

Validation. In ML, the process by which the performance of a trained model is evaluated against a specific testing dataset which contains samples that were not included in the training dataset. Also see training.

Vectorization. The process of transforming data into a numerical vector representation.
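For illustration, a minimal sketch of vectorizing text with a bag-of-words count vectorizer (assuming scikit-learn; the sentences are arbitrary):

```python
# Turn raw sentences into numerical count vectors.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat", "the dog sat", "the cat saw the dog"]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)            # sparse matrix of word counts

print(vectorizer.get_feature_names_out())     # learned vocabulary
print(X.toarray())                            # one count vector per document
```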

W

Weak AI. The term used to describe a narrow AI built and trained for a specific task. Also see strong AI.

Weight. In ML, a learnable parameter in nodes of a neural network, representing the importance value of a given feature, where input data is transformed (through multiplication) and the resulting value is either passed to the next layer or used as the model output.

Word Embedding. In NLP, the vectorization of words and phrases, typically for the purpose of representing language in a low-dimensional space.

Beyond Bitcoin: 8 other uses for blockchain

To some, Bitcoin is the greatest thing since kale. Whether a bubble or the next big thing, it’s clear that Bitcoin dominates the headlines, thanks to its ever-fluctuating valuation, minting millionaires on a seemingly continual basis. Until recently, speculation about Bitcoin’s legitimacy overshadowed discussions of blockchain technology, the foundation upon which Bitcoin and other cryptocurrencies are built. However, innovators across a broad range of industries have begun to see blockchain’s potential, and the versatile technology is emerging from the cryptocurrency shadows once and for all.

What is a blockchain?

Before we dive into some of the ways blockchain technology is being applied to data security, poverty reduction, smart energy, and more, we need to properly define it. A blockchain is essentially a digital record of transactions that anyone can access, like an open ledger. And with this openness comes transparency, as everyone looking at the ledger sees the same information. Transactions are recorded in groups called “blocks,” each of which is cryptographically validated and therefore immutable once recorded. Additionally, new transactions are only added to the blockchain after the computers on the network verify their accuracy and authenticity. The cryptographic nature of a blockchain prevents any single party from altering past records and makes the ledger very difficult to hack, hence its use in cybersecurity defenses.
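As a rough illustration of the mechanics described above, here is a minimal, hypothetical sketch of a hash-linked ledger in Python; a real blockchain adds distributed consensus, digital signatures, and replication across many nodes.

```python
# Toy ledger: each block stores the hash of the previous block, so altering
# any past record invalidates every block that follows it.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "timestamp": time.time(),
             "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False                      # block contents were tampered with
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # link to the previous block is broken
    return True

ledger = []
add_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(is_valid(ledger))                       # True
ledger[0]["transactions"][0]["amount"] = 500  # attempt to rewrite history
print(is_valid(ledger))                       # False
```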

In short, blockchain holds widespread appeal because of its decentralized, encrypted, and auditable nature. These attributes make it extremely applicable to efforts that secure data, streamline information systems, and improve public and private services. With the recent popularization of non-fungible tokens, or NFTs, blockchain technology is seeing an even more widespread appeal in proving ownership and authenticity of digital assets such as files or physical assets including homes and vehicles.

Blockchain in action

From identity protection to traceability, here are 8 examples of blockchain at work:

  1. Personal identity protection. The inherent properties of blockchain technology make it an excellent candidate for enabling authentic, ownable, verifiable, and auditable digital identities for individuals, corporations, smart devices, digital agents, and more across the internet or even the emerging metaverse. For example, the idea of an SSI, a self-sovereign identity, is something that experts believe may bring a greater level of democratization and equality to the Internet. Be it individual or enterprise use, SSIs can utilize the same underlying blockchain technology that powers cryptocurrencies and NFTs to give holders full ownership and control over their digital footprint. This removes the need for oodles of account passwords, profiles, data privacy settings, legacy directory systems, and more. SSIs may even evolve to combat risks posed by deep fakes, misinformation / disinformation, impersonation, and an ever-evolving mix of cybersecurity threats.
  2. Distributing diplomas. There’s something deeply satisfying about crossing a stage to receive a physical diploma that you can hang on your office wall for years to come. However, paper diplomas fade and they’re susceptible to damage and deterioration. But diplomas issued via blockchain will survive virtually forever. Blockchain isn’t just useful for saving diplomas for posterity, however. By issuing virtual degrees, schools enable graduates to submit this information to employers who may want to verify their application information or delve into their academic histories. Through use of private-public encryption keys, students would have full control over their data and would be able to decide which prospective employers should gain access. 
  3. Establishing lineage & provenance. As the global demand for goods continues to increase, recordability and traceability throughout supply chains is fast becoming a top opportunity for tech disruption from AI, hyperautomation, and blockchain. One prominent area where blockchain can have an outsized impact is establishing lineage and provenance of products, raw materials, components, and parts as they move through the value chain. This type of blockchain implementation “assists in regulatory compliance, specific recalls, and prevents counterfeit components.” It was one of the “top use cases for blockchain technology in terms of market share worldwide [in] 2021” at just over 10%.
  4. Improving shipping logistics. If you’ve ever shipped a meticulously crafted care package across the country, you know that shipping costs can seem staggeringly high. Fortunately, such issues may become concerns of the past, thanks to blockchain. UPS and other members of the Blockchain in Trucking Alliance have indicated that blockchain, in combination with artificial intelligence and predictive analytics, will help shipping companies reduce costs and improve delivery times and methods. The transparent nature of blockchain infrastructures will streamline order fulfillment and help companies operate more efficiently. Using high-quality market data and blockchain-secured platforms, shipping businesses will be able to decrease errors and lower labor costs.
  5. Tracking farm-to-table food. Agricultural manufacturer Cargill served up a novel use of blockchain for Thanksgiving. Its Honeysuckle White brand used the technology to document and record the journey of each of its family-farm-grown turkeys. Honeysuckle White catalogued the process so that customers could trace their birds directly to the farms on which they were raised. The pilot program provides a model for how other companies can deliver on growing demands for transparency, education, and ethics in how food is grown and distributed.
  6. Proving asset ownership. It was only a matter of time before the foundations that powered cryptocurrencies were applied to physical goods, where verification of authenticity and ownership has tangible real-world value. Enter the NFT. By the end of 2021, NFT sales in aggregate reached “approximately $1.8 billion per month.” Some NFTs have transformed into fully tradable assets. For instance, CryptoPunk 7523 (a.k.a. “Covid Alien”) was sold by Sotheby’s for a record $11.8M. Whether for personal, professional, or enterprise use, the benefits of utilizing NFTs are becoming increasingly clear for managing the ownership, authenticity, and tradability of assets, whether digital in nature or virtual representations of physical goods such as equipment, buildings, and products. And with accelerating tech investments in digital twins and the metaverse, which Entefy identified among the top tech trends for 2022 and beyond, NFTs are likely to play a growing role in the future of blockchain technology.
  7. Reducing voter fraud. Every election carries with it the risks of voter fraud and discrimination, and until recently, even the most well-intentioned attempts to alleviate these problems sometimes fell short. Requiring voters to show photo identification at polling stations helped reduce fraud, but it marginalized people who didn’t have qualifying IDs. Blockchain’s use to defend against voter fraud circumvents both problems by allowing voters to use data stored on their smartphones to verify their identities and vote privately and securely. Remember that encryption is a core feature of blockchain technology and that it’s very difficult to alter previously recorded transactions. By allowing people to verify their identities through their smartphones, both by using personal encryption keys and through biometric measures, governments could engage a greater number of voters and reduce the chances of fraud and ballot tampering.
  8. Securing medical records. There are few greater risks to your personal data security than having your medical records stolen. Yet most hospitals and medical providers are highly vulnerable to cyberattacks. Criminals who gain access to your medical data can sell it, use it to hack your bank accounts, or bill costly medical procedures to your insurance. Beyond security concerns, obtaining your medical records can be a cumbersome and frustrating process, and the lack of centralized access can lead to gaps and inefficiencies in care. Fortunately, blockchain offers a better way. In addition to providing a much more secure alternative to standard electronic record databases, blockchain affords patients more control over their information. Instead of chasing your records from provider to provider, all of your information would be stored in one secure, up-to-date location and you would have full control over what to share with different doctors.

Conclusion

The above use cases are only a sampling of how blockchain technology is evolving across industries. Although many companies are merely exploring ways in which blockchain can help them innovate and deliver better products and services, we can expect to see the technology increasingly take center stage—and perhaps even eclipse Bitcoin in the public conversation. 

Be sure to check out some of our other favorite blogs covering topics from AI cryptocurrency trading to creative uses of AI and avoiding common missteps in bringing AI projects to life.

AI ethics and adding trust to the equation

The size and scope of increasingly global digital activity are well documented. The growing consumer and business demand, in essentially every vertical market, for productivity, knowledge, and communication is pushing the world into a new paradigm of digital interaction, much like mobility, the Internet, and the PC before it, but on a far more massive scale. The new paradigm includes bringing machine intelligence to virtually every aspect of our digital world, from conversational interfaces to robot soldiers, autonomous vehicles, deep fakes, virtual tutoring systems, shopping and entertainment recommendations, and even minute-by-minute cryptocurrency price predictions.

The adoption of new technologies typically follows a fairly predictable S-curve during their lifecycles, from infancy to rapid growth, maturity, and ultimately decline. With machine learning, in many ways, we are still in the early stages. This is especially the case for artificial general intelligence (AGI), which aims at the type of machine intelligence that is humanlike in terms of self-reasoning, planning, learning, and conversing. In science fiction, AGI experiences consciousness and sentience. Although we may be many years away from the sci-fi version of humanlike systems, today’s advanced machine learning is creating a step change in interaction between people, machines, and services. At Entefy, we refer to this new S-curve as universal interaction. It is important to note that this step change impacts nearly every facet of our society, making ethical development and practices in AI and machine learning a global imperative.

Ethics

Socrates’ teachings about ethics and standards of conduct nearly 2,500 years ago have had a lasting impact on our rich history ever since. The debate about what constitutes “good” or “bad,” and, at times, the blurry lines between them, is frequent and ongoing. And, over millennia, disruptive technologies have influenced this debate with significant new inventions and discoveries.

Today, ethical standards are in place across many industries. In most cases, these principles exist to protect lives and liberty and to ensure integrity in business. Examples include the Law Enforcement Oath taken by local and state police officers, the oath required of attorneys as officers of the court, professional codes of ethics and conduct, and the mandatory oaths taken by federal employees and members of the armed forces.

The Hippocratic Oath in medicine is another well-known example. Although the specific pledges vary by medical school, in general there is a recognition of the unique role doctors play in their patients’ lives and a code of ethics that guides the actions of physicians. The more modern version of the Hippocratic Oath takes guidance from the Declaration of Geneva, which was adopted by the World Medical Association (WMA). Among other ethical commitments, the Physician’s Pledge states:

“I will not permit considerations of age, disease or disability, creed, ethnic origin, gender, nationality, political affiliation, race, sexual orientation, social standing, or any other factor to intervene between my duty and my patient.”

This clause is striking for its relevance to a different set of technology fields that today are still in their early phases: AI, machine learning, hyperautomation, cryptocurrency. With AI systems already at work in many areas of life and business, from medicine to criminal justice to surveillance, perhaps it’s not surprising that members of the AI and data science community have proposed an algorithm-focused version of a Hippocratic oath. “We have to empower the people working on technology to say ‘Hold on, this isn’t right,’” said DJ Patil, the U.S. Chief Data Scientist in President Obama’s administration. The group’s 20 core principles include ideas such as “Bias will exist. Measure it. Plan for it.” and “Exercise ethical imagination.”

The case for trustworthy and ethical AI

The need for professional responsibility in the field of artificial intelligence cannot be overstated. There are many high-profile cases of algorithms exhibiting biased behavior resulting from the data used in their training. The examples that follow add weight to the argument that AI ethics are not just beneficial, but essential:

  1. Data challenges in predictive policing. AI-powered predictive policing systems are already in use in cities including Atlanta and Los Angeles. These systems leverage historic demographic, economic, and crime data to predict specific locations where crime is likely to occur. So far so good. The ethical challenges of these systems became clear in a study of one popular crime prediction tool: the predictive policing system developed by the Los Angeles Police Department in conjunction with university researchers was shown to worsen the already problematic feedback loop present in policing and arrests in certain neighborhoods. An attorney from the Electronic Frontier Foundation said, “If predictive policing means some individuals are going to have more police involvement in their life, there needs to be a minimum of transparency.”
  2. Unfair credit scoring and lending. Operating on the premise that “all data is credit data,” machine learning systems are being designed across the financial services industry in order to determine creditworthiness using not only traditional credit data, but social media profiles, browsing behaviors, and purchase histories. The goal on the part of a bank or other lender is to reduce risk by identifying individuals or businesses most likely to default. Research into the results of these systems has identified cases of bias such as two businesses of similar creditworthiness receiving different scores due to the neighborhood in which each business is located.
  3. Biases introduced into natural language processing. Computer vision and natural language processing (NLP) are subfields of artificial intelligence that give computer systems digital eyes, ears, and voices. Keeping human bias out of those systems is proving to be challenging. One Princeton study into AI systems that leverage information found online showed that the biases people exhibit can make their way into AI algorithms via the systems’ use of Internet content. The researchers observed “that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.” This matters because other solutions and systems often use machine learning models that were trained on similar types of datasets. (A minimal sketch of how such learned associations can be measured appears after this list.)
  4. Limited effectiveness of health care diagnosis. There is enormous potential for trustworthy, ethical AI-powered systems to improve patients’ lives. Entefy has written extensively on the topic, including an analysis of 9 paths to AI-powered affordable health care, how machine learning can outsmart coronavirus, improving the relationship between patients and doctors, and AI-enabled drug discovery and development.

    The ethical AI considerations in the health care industry emerge from the data and whether the data includes biases tied to variability in the general population’s access to and quality of health care. Data from past clinical trials, for instance, is likely to be far less diverse than the face of today’s patient population. Said one researcher, “At its core, this is not a problem with AI, but a broader problem with medical research and healthcare inequalities as a whole. But if these biases aren’t accounted for in future technological models, we will continue to build an even more uneven healthcare system than what we have today.”
  5. Impaired judgment in the criminal justice system. AI is performing a number of tasks for courts, such as supporting judges in bail hearings and sentencing. One study of algorithmic risk assessment in criminal sentencing revealed the need to remove bias from these systems. Examining the risk scores of more than 7,000 people arrested in Broward County, Florida, the study concluded that the system was not only inaccurate but plagued with biases. For example, it was only 20% accurate in predicting future violent crimes and twice as likely to inaccurately flag African-American defendants as likely to commit future crimes. Yet these systems contribute to sentencing and parole decisions.
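
To make the embedding-bias point in item 3 concrete, here is a minimal, illustrative sketch of a WEAT-style association test. The vectors below are randomly generated stand-ins for embeddings you would load from a real model (such as word2vec or GloVe), and the variable names (occupation_vecs, attribute_a, attribute_b) are hypothetical, chosen only for illustration.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association_gap(target_vecs, attr_a_vecs, attr_b_vecs):
    """
    WEAT-style association score: how much more strongly, on average,
    the target words associate with attribute set A than with set B.
    A score near zero suggests little measured bias along this axis.
    """
    def mean_sim(vec, attrs):
        return np.mean([cosine(vec, a) for a in attrs])
    return float(np.mean([mean_sim(t, attr_a_vecs) - mean_sim(t, attr_b_vecs)
                          for t in target_vecs]))

# Toy, randomly generated "embeddings" stand in for vectors from a real model.
rng = np.random.default_rng(0)
occupation_vecs = [rng.normal(size=50) for _ in range(5)]  # e.g., job titles
attribute_a = [rng.normal(size=50) for _ in range(5)]      # e.g., one demographic term set
attribute_b = [rng.normal(size=50) for _ in range(5)]      # e.g., a contrasting term set

print("association gap:", association_gap(occupation_vecs, attribute_a, attribute_b))
```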

Ensuring AI ethics in your organization

The topic of ethical AI is no longer purely philosophical. It is being shaped and legislated to protect consumers, business, and international relations. According to a White House fact sheet, “the United States and European Union will develop and implement AI systems that are innovative and trustworthy and that respect universal human rights and shared democratic values, explore cooperation on AI technologies designed to enhance privacy protections, and undertake an economic study examining the impact of AI on the future of our workforces.”

The European Commission has already drafted “The Ethics Guidelines for Trustworthy Artificial Intelligence (AI)” to promote ethical principles for organizations, developers, and society at large. To bring trustworthy AI to your organization, begin by structuring AI initiatives in ways that are lawful, ethical, and robust. Ethical AI demands respect for applicable laws and regulations, as well as ethical principles and values, while making sure the AI system itself is resilient and safe. In its guidance, the European Commission presents the following 7 requirements for trustworthy AI:

  1. Human agency and oversight
    Including fundamental rights, human agency and human oversight
  2. Technical robustness and safety
    Including resilience to attack and security, fall back plan and general safety, accuracy, reliability and reproducibility
  3. Privacy and data governance
    Including respect for privacy, quality and integrity of data, and access to data
  4. Transparency
    Including traceability, explainability and communication
  5. Diversity, non-discrimination and fairness
    Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation
  6. Societal and environmental wellbeing
    Including sustainability and environmental friendliness, social impact, society and democracy
  7. Accountability
    Including auditability, minimisation and reporting of negative impact, trade-offs and redress

Be sure to read Entefy’s previous article on actionable steps to include AI ethics in your digital and intelligence transformation initiatives. Also, brush up on AI and machine learning terminology with our 53 Useful terms for anyone interested in artificial intelligence.

Can AI be officially recognized as an inventor?

Invention is generally considered a pinnacle of the creative mind. Since the dawn of the computer age, the question of whether an artificial intelligence (“AI”) can be such a creative mind has been the subject of philosophical debate. More recently, with advancements in computing and deep learning, in particular multimodal machine learning, this question is no longer purely philosophical. It has reached the realm of practical consideration. While still limited, advanced AI systems are now able to assimilate vast amounts of data and synthesize learning to produce discovery at scales far beyond human capability.

Today, machine intelligence is used to decode genetic sequences for vaccine development, determine how to fold protein structures, accurately detect human emotions, compose music, operate autonomous vehicles, identify fraud in finance, generate smart content in education, create smarter robots, and much more. These are all made possible through a series of machine learning techniques. In short, these techniques teach machines how to make sense of the data and signals exposed to them, often in very human ways. At times, generative machine learning models are even capable of creating content that is equivalent to or better than that created by people. A recent study found that AI-generated images of faces “are indistinguishable from real faces and more trustworthy.” From the perspective of practical application, there is little debate that AI systems can learn and engage in creative discovery. More recently, however, the creative potential of artificial intelligence has posed an intriguing legal question: Can an AI system be an “inventor” and be issued a patent?

This legal question was put to the test by physicist Dr. Stephen Thaler, inventor of DABUS (Device for Autonomous Boot-strapping of Unified Sentience). In 2019, Dr. Thaler filed two patent applications in a number of countries naming DABUS as the sole inventor. One patent application related to making of containers using fractal geometry, and the other to producing light that flickers rhythmically in patterns mimicking human neural activity.

While the actual invention claims were found allowable, the United States Patent and Trademark Office (USPTO) decided that only a natural person could be an inventor. Since DABUS is a machine and not a natural person, a patent could not be issued. The matter was appealed to the U.S. District Court for the Eastern District of Virginia, which agreed with the USPTO based on the plain language of the law describing the inventor as an “individual.” The Court noted that while the term “person” can include legal entities, such as corporations, the term “individual” refers to a “natural person” or a human being. According to the Court, whether an artificial intelligence system can be considered an inventor is a policy question best left to the legislature.

This same question came before the Federal Court of Australia, when the Australian patent agency refused to award a patent to DABUS. The Australian Federal Court, however, did not limit its consideration to a narrow discussion on the meaning of a word, but asked a more fundamental question as to the nature and source of human versus AI creativity: “We are both created and create. Why cannot our own creations also create?” The Australian Federal Court found that, while DABUS did not have property rights to ‘own’ a patent, it was nonetheless legally able to be an ‘inventor.’

Both this debate and its practical applications will have global implications for how people interact with AI. International approaches will continue to vary and perhaps further complicate matters. In the case of Saudi Arabia, and later the United Arab Emirates, a robot named Sophia was granted citizenship. So, could this robot be officially recognized as an inventor?

In the United States, the question of whether an AI system can be an inventor is pending appeal before the Federal Circuit and may come before the Supreme Court, which has found that non-natural persons (such as corporations) can have a right to free exercise of religion. So far, the American legal response has been to interpret the wording of the patent statutes narrowly and to pass the question of whether an AI system can be an inventor to policymakers.

As a society, we find confounding the question of whether AI will be granted personhood or treated as an individual—able, for example, to make a contract, swear an oath, sign a declaration, have agency, or be held accountable. As AI systems make more independent decisions that impact our society, these questions rise in practical importance.

Learn more about AI and machine learning in our previous blogs. Start here with key AI terms and concepts, differences between machine learning and traditional data analytics, and the “18 important skills” needed to bring enterprise-level AI projects to life. 

Paul Ross

Entefy closes latest funding round and adds Paul Ross, former CFO of The Trade Desk, to its Board of Directors

PALO ALTO, Calif., February 1, 2022— Entefy Inc. announced today the appointment of Paul Ross to its Board of Directors. As a multi-time public company CFO, Ross brings to Entefy a wealth of finance and scale leadership experience. Ross’ expertise in public company finance, IPO preparation, human resources, and rapid scaling of corporate infrastructure make him a valuable resource for the executive management team and Board.

The global disruptions to supply chains, the workforce, and consumer behavior brought on by the pandemic have resulted in a significant rise in enterprise demand for AI and process automation. Over the past year, Entefy has experienced growth in multiple key areas including its AI technology footprint, company valuation, customer orders, data center operations, and intellectual property (now spanning hundreds of trade secrets and patents combined). The company also recently closed its Series A-1 round which, combined with its previous funding rounds, brings the total capital raised to date to more than $25M.

As the first CFO of The Trade Desk, Inc. (NASDAQ: TTD), Ross rapidly prepared the company for its highly successful IPO while helping grow revenue more than 20x over five years. Ross and his fellow leadership team created more than $20 billion in market valuation over the same period as the company transitioned from a private enterprise to a profitable public company. Prior to The Trade Desk, Ross held several CFO roles in technology and related industries. Ross holds an MBA from University of Southern California and a bachelor’s degree from University of California, Los Angeles. He earned his Certified Public Accountant (CPA) certification while at PWC.

Entefy Chairman and CEO, Alston Ghafourifar, said, “I’m delighted to welcome Paul to Entefy’s Board of Directors. He’s a world-class executive with the relevant financial leadership and growth experience to support our journey toward the future of AI and automation for organizations everywhere.” “I’m thrilled about this opportunity and proud to have joined Entefy during this exciting phase,” said Ross. “Entefy’s team has done an amazing job building a highly differentiated AI and automation technology to help businesses achieve growth and resiliency, especially in times like these.”

ABOUT ENTEFY 

Entefy is an advanced AI software and process automation company, serving enterprise customers across diverse industries including financial services, health care, retail, and manufacturing.

www.entefy.com

pr@entefy.com

Data

Big digital technology trends to watch in 2022

With everything that has been going on in the world over the past two years, it is not surprising to see so many people coping with increased levels of stress and anxiety. In the past, we, as a global community, have had to overcome extreme challenges in order to make life better for ourselves and those around us. From famines to wars to ecological disasters and economic woes, we have discovered and, in countless ways, invented new solutions to help our society evolve.

What we’ve learned from our rich, problem-laden history is that while the past provides much needed perspective, it is the future that can fill us with hope and purpose. And, from our lens, the future will be increasingly driven by innovation in digital technologies. Here are the big digital trends in 2022 and beyond.

Hyperautomation 

From early mechanical clocks to the self-driven machines that ushered in the industrial revolution to process robotization driven by data and software, automation has helped people produce more in less time. Emerging and maturing technologies such as robotic process automation (RPA), chatbots, artificial intelligence (AI), and low-code/no-code platforms have been delivering a new level of efficiency to many organizations worldwide.

Historically, automation was born out of convenience or luxury but, in today’s volatile world, it is quickly becoming a business necessity. Hyperautomation is an emerging phenomenon that uses multiple technologies such as machine learning and business process management systems to expand the depth and breadth of the traditional, narrowly-focused automation.

Think of hyperautomation as intelligent automation and orchestration of multiple processes and tools. So, whether your charter is to build resiliency in supply chain operations, create more personalized experiences for customers, speed up loan processing, save time and money on regulatory compliance, or shrink time to answers or insights, well-designed automation can get you there.     
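
To illustrate the orchestration idea, here is a minimal, hypothetical sketch that chains an extraction step, a stand-in classifier, and a routing rule into one automated flow. The function and field names are invented for illustration only; a production hyperautomation stack would combine real RPA tools, trained models, and a business process management layer.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Document:
    """A unit of work flowing through the hypothetical pipeline."""
    raw_text: str
    extracted: dict = field(default_factory=dict)
    label: str = ""
    route: str = ""

def extract_fields(doc: Document) -> Document:
    # Stand-in for an RPA/OCR step that pulls structured fields from raw input.
    doc.extracted = {"length": len(doc.raw_text)}
    return doc

def classify(doc: Document) -> Document:
    # Stand-in for an ML model; a trivial keyword rule keeps the example simple.
    doc.label = "loan_application" if "loan" in doc.raw_text.lower() else "other"
    return doc

def route(doc: Document) -> Document:
    # Business-process step: send the document to the right downstream queue.
    doc.route = "underwriting" if doc.label == "loan_application" else "general_inbox"
    return doc

def run_pipeline(doc: Document, steps: List[Callable[[Document], Document]]) -> Document:
    """Orchestrate multiple automation steps as one end-to-end flow."""
    for step in steps:
        doc = step(doc)
    return doc

result = run_pipeline(Document("Customer requests a small business loan."),
                      [extract_fields, classify, route])
print(result.label, "->", result.route)
```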

Gartner predicts that the market for hyperautomation-enabling software will reach nearly $600 billion in 2022. Further, “Gartner expects that by 2024, organizations will lower operational costs by 30% by combining hyperautomation technologies with redesigned operational processes.”

Hybrid Cloud

Moore’s Law, the growing demand for compute availability anywhere, anytime, and the rising costs of hardware, software, and talent, together gave rise to the Public Cloud as an alternative to on-premises or “on-prem” infrastructure. From there, add cybersecurity and data privacy concerns and you can see why Private Clouds provide value. Now mix in the unavoidable need for business and IT agility and you can see the push toward the Hybrid Cloud.

Enterprises recognize that owning and managing their own on-prem infrastructure is expensive in terms of initial capital and in terms of the scarce technical talent required to maintain and improve it over time. An approach to addressing that challenge is to off-load as much non-critical computing activity into the cloud as possible. A third-party provider can offer the compute infrastructure, system architecture, and ongoing maintenance to address the needs of many. This approach reflects the benefits of specialization. No need to maintain holistic systems on-premises when so much can be off-loaded to specialists that offer IaaS (Infrastructure-as-a-Service), PaaS (Platform-as-a-Service), and SaaS (Software-as-a-Service).

The challenge for enterprises, though, is in striking the right balance between on-premises and cloud services. Hybrid Cloud combines private and public cloud computing to provide organizations the scale, security, flexibility, and resiliency they require for their digital infrastructure in today’s business environment. “The global hybrid cloud market exhibited strong growth during 2015-2020. Looking forward, the market is expected to grow at a CAGR of 17.6% during 2021-2026.”
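
As a toy illustration of that balancing act, the sketch below encodes a simple workload-placement rule: sensitive workloads stay on the private side, elastic ones go to the public cloud. The workload names and attributes are hypothetical; real placement decisions also weigh cost, latency, data gravity, and compliance.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    contains_sensitive_data: bool
    needs_elastic_scale: bool

def place(workload: Workload) -> str:
    """
    Toy placement policy for a hybrid cloud: keep regulated or sensitive
    data on the private side, and send elastic, non-sensitive workloads
    to the public cloud.
    """
    if workload.contains_sensitive_data:
        return "private-cloud"
    if workload.needs_elastic_scale:
        return "public-cloud"
    return "public-cloud"  # default: cheaper, managed infrastructure

for w in [Workload("patient-records-db", True, False),
          Workload("seasonal-web-frontend", False, True)]:
    print(w.name, "->", place(w))
```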

As companies and their data become ever more intermeshed with one another, the complexity, along with the size of the market, will increase even further.

Privacy-preserving machine learning

The digital universe is facing a global problem that isn’t easy to fix—ensuring data privacy at a time when virtually every commercial or governmental service we use in our daily lives revolves around data. With growing public awareness of frequent data breaches and mistreatment of consumer data (partly thanks to the Facebook-Cambridge Analytica data fiasco, Yahoo’s data breach impacting 3 billion accounts, and the Equifax system breach in 2017, to name a few), companies and governments are taking additional steps to rebuild trust with their customers and constituents. In 2016, Europe introduced GDPR as a consolidated set of privacy laws to ensure a safer digital economy. However, “the United States doesn’t have a singular law that covers the privacy of all types of data.” Here, we take a more patchwork approach to data protection to address specific circumstances—HIPAA covering health information, FCRA for credit reports, or ECPA for wiretapping restrictions.

With the explosion of both edge computing (expected to reach $250.6 billion in 2024) and an ever greater number and capacity of IoT (Internet of Things) devices and smart machines, the data available for machine learning is vast in quantity and increasingly diverse in its sources. One of the central challenges is how to extract the value of machine learning applied to these data sources while maintaining the privacy of personally identifiable or otherwise sensitive data.

Even with the best security, unless systems are properly designed, the risk that trained AI models will incidentally breach the privacy of their underlying datasets is real and increasing. Any company that handles potentially sensitive data, uses machine learning to extract value from it, or works in close data alliance with one or more other companies has to be concerned about where machine learning might lead and about possible breaches of underlying data privacy.

The purpose of privacy-preserving machine learning is to train models in ways that protect sensitive data without degrading model performance. Historically this has been addressed with data anonymization or obfuscation techniques, but anonymization frequently reduces or, in some cases, eliminates the value of the data. Today, other techniques are being applied as well to better ensure data privacy, including federated machine learning, designed to train a centralized model via decentralized nodes (e.g., “training data locally on users’ mobile devices rather than logging it to a data center for training”), and differential privacy, which makes it possible to collect and share user information while maintaining the user’s privacy by adding “noise” to the user inputs.
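
As one concrete (and deliberately simplified) illustration of differential privacy, the sketch below releases a noisy mean of a toy dataset using the Laplace mechanism: the smaller the epsilon, the more noise and the stronger the privacy guarantee. The dataset, bounds, and sensitivity calculation are assumptions made for the example.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """
    Release a noisy version of a statistic. `sensitivity` is how much one
    individual's data can change the true value; smaller epsilon means
    more noise and stronger privacy.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(42)
ages = np.array([34, 29, 51, 42, 38, 45], dtype=float)  # toy "sensitive" dataset

true_mean = ages.mean()
# For a mean over n records bounded in [0, 100], one person can shift it by at most 100/n.
sensitivity = 100 / len(ages)

for epsilon in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_mean, sensitivity, epsilon, rng)
    print(f"epsilon={epsilon:>4}: true mean {true_mean:.1f}, released {noisy:.1f}")
```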

Privacy-preserving machine learning is a complex and evolving field. Success in this area will help rebuild consumer trust in our digital economy and unleash untapped potential in advanced data analytics that is currently restricted due to privacy concerns.   

Digital Twins

Thanks to recent advances in AI, automation, IoT, cloud computing, and robotics, Industry 4.0 (the Fourth Industrial Revolution) has already begun. As the world of manufacturing and commerce expands and the demand for virtualization grows, digital twins find a footing. A digital twin is the virtual representation of a product, a process, or a product performance. It encompasses the designs, specifications, and quantifications of a product—essentially all the information required to describe what is produced, how it is produced, and how it is used.

As enterprises digitize, the concept of digital twins becomes ever more central. Anything that performs under tight specifications, has high capital value, and needs to perform at exceptional levels of consistency is a candidate for digital twinning. This gives companies the ability to use virtual simulations as a faster, more effective way to solve real-world problems. Think of these simulations as a way to validate and test products before they exist—jet engines, water supply systems, advanced performance vehicles, anything sent into space—and of the opportunity to augment digital twins with advanced AI to foresee problems in the performance of products, factory operations, retail spaces, personalized health care, or even the smart cities of the future.

In the case of products, we need to know that a product is produced to specification and to understand how it is performing in order to refine and improve it. Take wind turbines as an example. They are macro-engineered products with a high capital price tag, performing under harsh conditions and engineered to very tight specifications. Anything that can be learned to improve performance and reduce wear is quite valuable. Sensing devices can report, in real time, wind strength, humidity, time of day, wind gustiness, temperature and temperature gradients, number of bird strikes, turbine heat, and more.
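
A heavily simplified sketch of the idea: a small class that mirrors a turbine’s design limits and accumulates live sensor readings, so the virtual twin can be queried for a health report. The class, field names, and thresholds are hypothetical and chosen only to illustrate the pattern.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import List

@dataclass
class TurbineTwin:
    """Toy digital twin of a wind turbine: mirrors the physical asset's
    design limits and accumulates live sensor readings for analysis."""
    turbine_id: str
    max_bearing_temp_c: float            # design specification
    temps: List[float] = field(default_factory=list)
    wind_speeds: List[float] = field(default_factory=list)

    def ingest(self, bearing_temp_c: float, wind_speed_ms: float) -> None:
        self.temps.append(bearing_temp_c)
        self.wind_speeds.append(wind_speed_ms)

    def health_report(self) -> dict:
        # Compare observed behavior against the spec captured in the twin.
        return {
            "avg_bearing_temp_c": round(mean(self.temps), 1),
            "over_spec": any(t > self.max_bearing_temp_c for t in self.temps),
            "avg_wind_speed_ms": round(mean(self.wind_speeds), 1),
        }

twin = TurbineTwin("WT-017", max_bearing_temp_c=85.0)
for temp, wind in [(70.2, 9.1), (88.4, 14.3), (76.0, 11.0)]:
    twin.ingest(temp, wind)
print(twin.health_report())
```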

The global market for digital twins “is expected to expand at a compound annual growth rate (CAGR) of 42.7% from 2021 to 2028.” Digital twins provide an example of both the opportunity created by digitization as well as the complexity arising from that process. The volume of data is large and complex. Advanced analysis of that data with AI and machine learning along with process automation improves a company’s ability to better manage production risks, proactively identify system failures, reduce build and maintenance costs, and make data-driven decisions.   

Metaverse

Lately, you may have heard a lot of buzz about the “metaverse” and the future of the digital world. Much of the recent hype can be attributed to Facebook’s rebranding in October 2021, when it changed its corporate name to Meta. At present, the metaverse is still largely conceptual, with no collective agreement on its definition. That said, there are already core pieces in place to enable a digital universe that will feel more immersive and three-dimensional in every aspect than what we experience via the Internet today.

Well-known large tech companies including Nvidia, Microsoft, Google, and Apple are already playing their part in making the metaverse a reality, and other companies and investors are piling on. Perhaps “gaming companies like Roblox and Epic Games are the farthest ahead building metaverses.” Meta expects to spend $10 billion on its virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies this fiscal year in support of its metaverse vision.

Some of the building blocks of the metaverse include widespread social media acceptance and use, rich gaming experiences, extended reality or XR (the umbrella term for VR, AR, and MR) hardware, blockchain, and cryptocurrencies. Even digital twin technology plays a role here. While there is no explicit agreement as to what constitutes a metaverse, the general idea is that at some point we should be able to integrate the existing Internet with a set of open, universally interoperable virtual worlds and technologies. The metaverse is expected to be much larger than what we’re used to in our physical world. We’ll be able to create an endless number of digital realms, unlimited digital things to own, buy, or sell, and all the services we can conjure to blend what we know about our physical world with fantasy. The metaverse will have its own citizens and avatar inhabitants as well as its own sets of rules and economies. And it won’t be just for fun and games. Scientists, inventors, educators, designers, engineers, and businesses will all participate in the metaverse to solve technical, social, and environmental challenges and to enable health and prosperity for more people in more places in our physical world.

Instead of clunky video chats or conference calls, imagine meetings that are fully immersive and feel natural. Training could be shifted from instruction to experience. Culture building within an enterprise could occur across a geographically distributed workforce. Retail could be drastically transformed with most transactions occurring virtually. For example, even the mundane task of grocery shopping could be almost entirely shifted into a metaverse where, from the comfort of your den, you can wander up and down the aisles, compare products and prices, and feel the ripeness of fruits and vegetables.

AI’s contribution to the metaverse will be significant. AIOps (Artificial Intelligence for IT Operations) will help manage the highly complex infrastructure. Generative AI will create digital content and assets. Autonomous agents will provide and trade all sorts of services. Smart contracts will keep track of digital currency and other transactions in decentralized ways that disintermediate the big tech companies. Deep reinforcement learning will help design better computer chips at unprecedented speed. A series of machine learning models will help personalize gaming and educational experiences. In short, the metaverse will be limited only by compute resources and our imagination.

To meet its promise, the metaverse will face certain challenges. Perhaps once again, our technology is leaping ahead of our social norms and our regulatory infrastructure. From data and security to laws and governing jurisdictions, inclusion and diversity, property and ownership, as well as ethics, we will need our best collective thinking and collaborative partnerships to create new worlds. Similar to the start of the Internet and our experiences thus far, we can expect many experiments, false starts, and delays associated with the metaverse, before landing on the right frameworks and applications that are truly useful and decentralized.

The metaverse market size is expected to reach $872 billion in 2028, representing a 44.1% CAGR between 2020-2028.

Blockchain

Blockchain is closely associated with cryptocurrency but is by no means restricted in its application to cryptocurrency. Blockchain is essentially a decentralized database that allows for simultaneous use and sharing of digital transactions via a distributed network. To track or trade anything of value, blockchain can create a secure record or ledger of transactions which cannot be later manipulated. In many ways, blockchain is a mechanism to create trust in the context of a digital environment. Participants gain the confidence that the data represented is indeed real because the ledgers and transactions are immutable. The records cannot be destroyed or altered.  

The emergence of Bitcoin and alternative cryptocurrencies in recent years has put blockchain technology on a fast adoption curve, in and out of finance. Traditionally, recording and tracking transactions is handled separately by the participants involved. For example, it can take a village to complete a real estate transaction—buyers, sellers, brokers, escrow companies, lenders, appraisers, inspectors, insurance companies, government, and more—and that can lead to many inefficiencies, including record duplication, processing delays, and potential security vulnerabilities. Blockchain technology is collaborative, giving all users collective control and allowing transactions to be recorded in a decentralized manner via a peer-to-peer network. Each participant in the business network can record, receive, or send transactions to other participants, making the entire process safer, cheaper, and more efficient.
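
The tamper-evidence property described above comes from chaining blocks together by their hashes. Here is a minimal, illustrative sketch (not a real blockchain, since it omits consensus, networking, and signatures) showing how altering an earlier record breaks the chain.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Block:
    index: int
    timestamp: float
    transactions: list
    previous_hash: str

    def hash(self) -> str:
        # Hash the block's full contents; changing any field changes the hash.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def new_block(chain: list, transactions: list) -> Block:
    prev_hash = chain[-1].hash() if chain else "0" * 64
    return Block(len(chain), time.time(), transactions, prev_hash)

def is_valid(chain: list) -> bool:
    """Each block must reference the hash of the block before it."""
    return all(chain[i].previous_hash == chain[i - 1].hash()
               for i in range(1, len(chain)))

chain = []
chain.append(new_block(chain, [{"from": "alice", "to": "bob", "amount": 5}]))
chain.append(new_block(chain, [{"from": "bob", "to": "carol", "amount": 2}]))
print("valid before tampering:", is_valid(chain))

chain[0].transactions[0]["amount"] = 500   # attempt to rewrite history
print("valid after tampering: ", is_valid(chain))
```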

The use cases for blockchain are growing, including using the technology to explore medical research and improve record accuracy in health care, transfer money, create smart contracts that track the sale of and settle payments for goods and services, improve IoT security, and bring additional transparency to the supply chain. By 2028, the market for blockchain technology is forecast to expand to approximately $400 billion, an 82.4% CAGR from 2021 to 2028.

Web3

Web3, or Web 3.0, is an example of real technology with real applications that may be suffering from definitional miasma. In short, Web3 is your everyday web minus the centralization that has evolved over the past two decades through the remarkable success of a few big tech juggernauts.

The original web, Web 1.0, was pretty disorganized, dominated by mostly static pages. Web 2.0 brought a more interactive web built on user-generated content, which made social media, blogging (including microblogging), search, crowdsourcing, and online gaming snowball.

Web 2.0 allowed the web to grow much larger and more useful while, simultaneously, growing more risky with advertising as a key business model. The wild west of social, content, and democratization of new tools gave rise to a set of downsides—information overload, a set of Internet addictions, fake news, hate speech, forgeries and fraud, citizen journalism without guard rails, and more.

Web 3.0 (not to be confused with Tim Berners-Lee’s concept of the Semantic Web, which is sometimes also referred to as Web 3.0) aims to transform the current web, which is highly centralized under the control of a handful of very large tech companies. Web3 focuses on decentralization based on blockchains and is closely associated with cryptocurrencies. Advocates of Web3 see the semantic web and AI as critical elements for ensuring better security, privacy, and data integrity, as well as for resolving the broader issue of decentralizing technology away from the dominance of existing technology companies. With Web3, your online data will remain your property, and you will be able to move that data or monetize it freely without depending on any particular intermediary.

Since both Web3 and Semantic Web involve an evolution from our current Web 2.0, it makes sense that both have arrived at Web 3.0 as a descriptor, but each is describing related, but different aspects of improving the Web. Projections for Web3 are difficult to develop but the issues addressed by Web3 will be key to the emergence of a more open and versatile Internet. 

Autonomous Cyber

Put malicious intent and software together and you’ll get a quick sense of what keeps information security (InfoSec) executives and cybersecurity professionals up at night. Or day, for that matter. Malware, short for malicious software, is an umbrella term for a host of cybersecurity threats facing us all, including spyware, ransomware, worms, viruses, trojan horses, cryptojacking, adware, and more. Cybercriminals (often referred to as “hackers”) use malware and other techniques, such as phishing and man-in-the-middle attacks, to wreak havoc on computers, applications, and networks. They do this to destroy or damage computer systems, steal or leak data, or collect ransom in exchange for returning control of assets to the original owner.

Cyber threats are on the rise globally and, well beyond individual or corporate interest, they are a matter of national and international security. The dangers they pose put not only digital assets but also critical physical infrastructure at risk. In the United States, the Cybersecurity & Infrastructure Security Agency (CISA), a newer operational component of the Department of Homeland Security, “leads the national effort to understand, manage, and reduce risk to our cyber and physical infrastructure.” Its work helps “ensure a secure and resilient infrastructure for the American people.” Since its formation in 2018, CISA has issued a number of Emergency Directives to help protect information systems. A recent example is Emergency Directive 22-02, aimed at the Apache Log4j vulnerability, which has broad implications and threatens computer networks globally. It “poses unacceptable risk to Federal Civilian Executive Branch agencies and requires emergency action.”

Gone are the days when computer systems and networks could be protected with simple rules-based software tools or human monitoring alone. In terms of risk assessment, “88% of boards now view cybersecurity as a business risk.” Traditional approaches to cybersecurity are failing the modern enterprise because the sheer volume of cyber activity, the ever-increasing number of undefended areas in code, systems, and processes (including the additional openings exposed by the massive proliferation of IoT devices in recent years), the growing number of cybercriminals, and the sophistication of attack methods have, in combination, surpassed our ability to manage them effectively.

Enter Autonomous Cyber, powered by machine intelligence. This is a rapidly developing field, both technologically and in terms of state governance and international law. Autonomous Cyber is the story of digital technology in three acts.

Act I –  The application of AI to our digital world.

Act II – The use of AI to automate attacks by nefarious agents or State entities to penetrate information systems.

Act III – The use of AI, machine learning, and intelligent process automation against cyber-attacks.

Autonomous Cyber leverages AI to continuously monitor an enterprise’s computing infrastructure, applications, and data sources for unexpected changes in patterns of communication, navigation, and data flows. The idea is to use sophisticated algorithms that can distinguish what is normal from what might be abnormal (representing potential risk), intelligent orchestration, and other automation to take certain actions at speed and scale. These actions include creating alerts and notifications, rerouting requests, blocking access, or shutting off services altogether. And best of all, these actions can be designed to either augment human power, such as the capabilities of a cybersecurity professional, or be executed independently without any human control. This can help companies and governments build better defensibility and responsiveness to ensure critical resiliency.  
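
As a highly simplified sketch of the “distinguish normal from abnormal, then act” idea, the example below learns a baseline from toy traffic telemetry and triggers an automated response when an observation deviates far from it. The telemetry, threshold, and response strings are assumptions for illustration; production systems rely on far richer models and orchestration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy telemetry: requests per minute from an internal service over one day.
normal_traffic = rng.poisson(lam=120, size=1440).astype(float)

# Baseline "learned" from normal operation.
mu, sigma = normal_traffic.mean(), normal_traffic.std()

def score(observation: float) -> float:
    """How many standard deviations the observation sits from the baseline."""
    return abs(observation - mu) / sigma

def respond(observation: float, threshold: float = 4.0) -> str:
    """Toy autonomous response: alert and throttle when traffic looks anomalous."""
    if score(observation) > threshold:
        return "ALERT: anomalous traffic; rate-limit source and notify SOC"
    return "ok"

for obs in [118.0, 131.0, 940.0]:   # the last value mimics a sudden burst
    print(f"{obs:>6.0f} req/min -> {respond(obs)}")
```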

The global market for AI in cybersecurity “is projected to reach USD 38.2 billion by 2026 from USD 8.8 billion in 2019, at the highest CAGR of 23.3%.” The use of AI by hackers and state actors means there will be less time between innovations. Therefore, enterprise security systems cannot simply look for replications of old attack patterns. They have to be able to identify, in more places and systems, new schemes of deliberate attack or accidental and natural disruption as they emerge in real time. AI in the context of cybersecurity is becoming a critical line of defense and, with technology evolving ever faster, Autonomous Cyber has the capacity to continuously monitor and autonomously respond to evolving threats.

Conclusion

At Entefy we are passionate about breakthrough computing that can save people time so that they live and work better.

Now more than ever, digital is dominating consumer and business behavior. Consumers’ 24/7 demand for products, services, and personalized experiences wherever they are is forcing businesses to optimize and, in many cases, reinvent the way they operate to ensure efficiency, ongoing customer loyalty, and employee satisfaction. This requires advanced technologies including AI and automation. To learn more about these technologies, be sure to read our previous blogs on multimodal AI, the enterprise AI journey, and the “18 important skills” you need to bring AI applications to life.