2 AI winters and 1 hot AI summer

The field of artificial intelligence (AI) has its roots in the mid-20th century. The early development of AI can be traced back to the Dartmouth Conference in 1956, which is considered the birth of AI as a distinct field of research. The conference brought together a group of computer scientists who aimed to explore the potential of creating intelligent machines. Since its birth, AI’s journey has been long, eventful, and fraught with challenges. Despite early optimism about the potential of AI to match or surpass human intelligence in various domains, the field has undergone two AI winters and now what appears to be one hot AI summer.

In 1970, optimism about AI and machine intelligence ran so high that, in an interview with Life magazine, Marvin Minsky, a co-founder of the MIT Artificial Intelligence Laboratory (a predecessor of today’s Computer Science and Artificial Intelligence Laboratory), predicted that within “three to eight years we will have a machine with the general intelligence of an average human being.” Although the timing of Minsky’s prophecy was off by a long stretch, recent advances in computing systems and machine learning, including foundation models and large language models (LLMs), are creating renewed optimism in AI capabilities.

The first AI winter

AI winters refer to periods of time when public interest, funding, and progress in the field of artificial intelligence significantly decline. These periods are characterized by a loss of confidence in AI technologies’ potential and often follow periods of overhype and unrealistic expectations.

The first AI winter occurred in the 1970s, following the initial excitement that surrounded AI during the 50s and 60s. The progress did not meet the high expectations, and many AI projects failed to deliver on their promises. Funding for AI research decreased, and interest waned as the technology entered its first AI winter, facing challenges such as:

  1. Technical limitations: AI technologies ran into significant technical obstacles. Researchers struggled to represent knowledge in ways machines could readily process, and R&D was severely limited by the computing power available at the time, which in turn restricted the complexity and scale of learning algorithms. In addition, many AI projects were constrained by limited access to data and had difficulty dealing with real-world complexity and unpredictability, making it challenging to develop effective AI systems.
  2. Overhype and unmet expectations: Early excitement and excessive optimism about AI’s potential to achieve “human-level” intelligence had led to unrealistic expectations. When AI projects faced major hurdles and failed in delivery, it led to disillusionment and a loss of confidence in the technology.
  3. Constraints in funding and other resources: As the initial enthusiasm for AI subsided and results were slow to materialize, funding for AI research plunged. Government agencies and private investors became more cautious about investing in AI projects, leading to resource constraints for institutions and researchers alike.
  4. Lack of practical applications: AI technologies had yet to find widespread practical applications. Without tangible benefits to businesses, consumers, or government entities, interest in the field faded.
  5. Criticism from the scientific community: Some members of the scientific community expressed skepticism about the approach and progress of AI research. Critics argued that the foundational principles and techniques of AI were too limited to achieve human-level intelligence, and they were doubtful about the possibility of creating truly intelligent machines.

    For example, “after a 1966 report by the Automatic Language Processing Advisory Committee (ALPAC) of the National Academy of Sciences/National Research Council, which saw little merit in pursuing [machine translation], public-sector support for practical MT in the United States evaporated.” The report indicated that since fully automatic high-quality machine translation was deemed impossible, the technology could never replace human translators, and that funds would therefore be better spent on basic linguistic research and machine aids for translators.

    Another example is the Lighthill report, published in 1973, which was a critique of AI research in the UK. It criticized AI’s failure to achieve its ambitious goals and concluded that AI offered no unique solution that is not achievable in other scientific disciplines. The report highlighted the “combinatorial explosion” problem, suggesting that many AI algorithms were only suitable for solving simplified problems. As a result, AI research in the UK faced a significant setback, with reduced funding and dismantled projects.
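
The “combinatorial explosion” problem flagged by the Lighthill report can be made concrete with a short sketch: in exhaustive search, the number of move sequences to consider grows exponentially with search depth. The branching factor and depths below are hypothetical, chosen only for illustration.

```python
# Illustration of combinatorial explosion in naive exhaustive search:
# with branching factor b, a search to depth d must consider b**d sequences.

def search_space_size(branching_factor: int, depth: int) -> int:
    """Number of distinct move sequences explored by exhaustive search."""
    return branching_factor ** depth

# A toy game with 10 legal moves per turn (hypothetical numbers):
for depth in (2, 4, 8):
    print(depth, search_space_size(10, depth))  # 2 100, 4 10000, 8 100000000
```

Even a modest branching factor quickly produces search spaces far beyond the hardware of the 1970s, which is why many early AI algorithms worked only on simplified problems.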

Despite the multiple challenges facing AI at the time, the first AI winter lasted for less than a decade (est. 1974-1980). The end of the first AI winter was marked by a resurgence of interest and progress in the field. Several factors contributed to this revival in the early 1980s. These included new advancements in machine learning and neural networks, expert systems that utilized large knowledge bases and rules to solve specific problems, increased computing power, focused research areas, commercialization of and successes in practical AI applications, funding by the Defense Advanced Research Projects Agency (DARPA), and growth in data. The combined impact of these factors led to a reinvigorated interest in AI, signaling the end of the first AI winter. For a few years, AI research continued to progress and new opportunities for AI applications emerged across various industries. When this revival period ended in the late 1980s, the second AI winter set in.

The second AI winter

It is generally understood that the second AI winter started in 1987 and ended in the early 1990s. Like the first period of stagnation and decline in AI interest and activity in the mid-70s, the second AI winter was caused by hype cycles and outsized expectations that AI research could not meet. Once again, government agencies and the private sector grew cautious, citing technical limitations, doubts about return on investment (ROI), and criticism from the scientific community. At the time, expert systems proved to be brittle, rigid, and difficult to maintain. Scaling AI systems proved challenging as well, since they needed to learn from large data sets, which was computationally expensive. This made it difficult to deploy AI systems in real-world applications.

Symbolic AI, also known as Good Old-Fashioned AI (GOFAI), is an approach to AI that uses explicit symbols and rules to represent knowledge and perform reasoning. In symbolic AI, problems are broken down into discrete, logical components, and algorithms are designed to manipulate these symbols to solve problems. The fundamental idea is to create systems that mimic human-like cognitive processes, such as logical reasoning, problem-solving, and decision-making, using symbolic representations to capture knowledge about the world and rules for manipulating that knowledge to arrive at conclusions or make predictions. During the second AI winter, this reliance on explicit, rule-based representations was criticized for being inflexible and unable to handle the complexity and ambiguity of everyday data and knowledge.
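
The symbolic approach can be sketched in a few lines of Python. The facts and rules below are hypothetical, chosen only for illustration: knowledge is stored as explicit symbols, and a forward-chaining loop applies if-then rules until no new conclusions can be drawn.

```python
# Minimal forward-chaining inference in the symbolic (GOFAI) style.
# Facts are plain symbols; each rule maps a set of premises to a conclusion.

facts = {"socrates_is_man"}
rules = [
    ({"socrates_is_man"}, "socrates_is_mortal"),    # all men are mortal
    ({"socrates_is_mortal"}, "socrates_will_die"),  # all mortals die
]

def forward_chain(facts, rules):
    """Repeatedly apply rules whose premises hold until a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

The brittleness critics pointed to is visible even here: any input that does not exactly match a symbol in the rule base (a misspelling, a paraphrase, an uncertain observation) simply fails to fire any rule.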

While symbolic AI was a dominant approach in the early days of AI research, it faced certain limitations, such as difficulties in handling uncertainty, scalability issues, and challenges in learning from data. These limitations, along with the emergence of alternative AI paradigms, such as machine learning and neural networks, contributed to the rise of other AI approaches and the decline of symbolic AI in some areas. That said, symbolic AI continues to be used in specific applications and as part of hybrid AI systems that combine various techniques to address complex problems.

During the second AI winter, the AI fervor was not only waning in the United States, but globally as well. In 1992, the government of Japan “formally closed the books on its vaunted ‘Fifth Generation’ computer project.” The decade-long research initiative was aimed at developing “a new world of computers that could solve problems with human-style reasoning.” After spending $400+ million, most of the ambitious goals did not materialize and the effort ended with little impact on the computer market.

The second AI winter was a setback for the field of AI, but it also led to some important progress. AI researchers learned from their past experiments, and they developed new approaches to improve output and performance. These new approaches, such as deep learning, have led to a resurgence of interest in AI research ever since.

While the second AI winter was beginning to thaw in the early 1990s, other technical, political, and societal transformations were taking hold in parallel. The Cold War had come to an end with the collapse of the Soviet Union in December 1991 and, from the ashes, 15 newly independent nations rose. In the same year, Tim Berners-Lee’s World Wide Web became publicly available, laying the foundation of the online world we know today. In 1993, the Mosaic browser (whose team went on to build Netscape) was released, letting users “see words and pictures on the same page for the first time and to navigate using scrollbars and clickable links.” In the same decade, Linux, today’s most commonly known and used open source operating system, was launched. The first SMS text message was sent. Amazon.com was founded, disrupting the retail industry starting with books. Palm’s PDA (Personal Digital Assistant) devices gave us a glimpse of what was to come in mobile computing. Google introduced its now dominant web search engine. And, yes, blogs became a thing, opening up opportunities for people to publish anything online.

As a result of all these changes, the world became more connected than ever, and data grew exponentially larger and richer. In the ensuing decade, the data explosion, key advancements to computing hardware, and breakthroughs in machine learning research would all prove to be perfect enablers of the next generation of AI. The AI spring which followed the second AI winter indicated renewed optimism for intelligent machines that can help radically improve our lives and solve major challenges.  

Today’s hot AI summer

Fast forward to the present, where the application of AI is more than just a thought exercise. The latest advances in AI include a set of technologies preparing us for artificial general intelligence (AGI). Also known as strong AI, AGI describes machine intelligence that matches human cognitive capabilities across multiple domains and modalities. AGI is often characterized and judged by self-improvement mechanisms and generalizability rather than training to perform in narrow domains. Also see Entefy’s multisensory Mimi AI Engine.

In the context of AI, an AI summer characterizes a period of heightened interest and abundant funding directed towards the development and implementation of AI technology. There’s an AI gold rush underway and the early prospectors and settlers are migrating to the field en masse—investors, the world’s largest corporations, entrepreneurs, innovators, researchers, and tech enthusiasts. Despite certain limitations, the prevailing sentiment within the industry suggests that we are currently living through an AI summer, and here are a few reasons why:

  1. The new class of generative AI: Generative AI is a subset of deep learning that uses expansive artificial neural networks to “generate” new data based on what it has learned from its training data. Traditional AI systems are designed to extract insights from data, make predictions, or perform other analytical tasks. In contrast, generative AI systems are designed to create new content based on diverse inputs. By applying learning techniques such as unsupervised, semi-supervised, or self-supervised learning to massive data sets, today’s generative AI systems can produce realistic and compelling new content based on the patterns and distributions the models learn during training. The input for and content generated by these models can be in the form of text, images, audio, code, 3D models, or other data.

    Underpinning generative AI are the new foundation models that have the potential to transform not only our digital world but society at large. Foundation models are sophisticated deep learning models trained on massive amounts of data (typically unlabeled), capable of performing a number of diverse tasks. Instead of training a single model for a single task (which would be difficult to scale across countless tasks), a foundation model can be trained on a broad data set once and then used as the “foundation” or basis for training with minimal fine-tuning to create multiple task-specific models. In this way, foundation models can be adapted to a wide variety of use cases.

    Mainstream examples of foundation models include large language models (LLMs). Some of the better-known LLMs include GPT-4, PaLM 2, Claude, and LLaMA. LLMs are trained on massive data sets typically consisting of text and programming code. The term “large” in LLM refers to both the size of the training data and the number of model parameters. As with most foundation models, the training process for an LLM can be computationally intensive and expensive; it can take weeks or months to train a sophisticated LLM on large data sets. Once trained, however, an LLM can solve common language problems, such as generating essays, poems, code, scripts, musical pieces, emails, and summaries, as well as translating languages and answering questions.
  2. Better computing: In general, AI is computationally intensive, especially during model training. “It took the combination of Yann LeCun’s work in convolutional neural nets, Geoff Hinton’s back-propagation and Stochastic Gradient Descent approach to training, and Andrew Ng’s large-scale use of GPUs to accelerate deep neural networks (DNNs) to ignite the big bang of modern AI — deep learning.” And the deep learning models fueling the current AI boom, with millions or even billions of parameters, require robust computing power. Lots of it.

    Fortunately, technical advances in chips and cloud computing are improving the way we can all access and use computing power. Enhancements to microchips over the years have fueled scientific and business progress in every industry. Computing power increased “one trillion-fold” from 1956 to 2015. “The computer that navigated the Apollo missions to the moon was about twice as powerful as a Nintendo console. It had 32,768 bits of Random Access Memory (RAM) and 589,824 bits of Read Only Memory (ROM). A modern smartphone has around 100,000 times as much processing power, with about a million times more RAM and seven million times more ROM. Chips enable applications such as virtual reality and on-device artificial intelligence (AI) as well as gains in data transfer such as 5G connectivity, and they’re also behind algorithms such as those used in deep learning.”
  3. Better access to more quality data: The amount of data available to train AI models has grown significantly in recent years. This is due to the growth of the Internet, the proliferation of smart devices and sensors, as well as the development of new data collection techniques. According to IDC, from 2022 to 2026, data created, replicated, and consumed annually is expected to more than double in size. “The Enterprise DataSphere will grow more than twice as fast as the Consumer DataSphere over the next five years, putting even more pressure on enterprise organizations to manage and protect the world’s data while creating opportunities to activate data for business and societal benefits.”
  4. Practical applications across virtually all industries: Over the past decade, AI has been powering a number of applications and services we use every day. From customer service to financial trading, advertising, commerce, drug discovery, patient care, supply chain, and legal assistance, AI and automation have helped us gain efficiency. And that was before the recent introduction of today’s new class of generative AI. The latest generative AI applications can help users take advantage of human-level writing, coding, and designing capabilities. With the newly available tools, marketers can create content like never before; software engineers can document code functionality (in half the time), write new code (in nearly half the time), or refactor code (in nearly two-thirds the time); artists can enhance or modify their work by incorporating generative elements, opening up new avenues for artistic expression and creativity; those engaged in data science and machine learning can solve critical data issues with synthetic data creation; and general knowledge workers can take advantage of machine writing and analytics to create presentations or reports.

    These few examples only scratch the surface of practical use cases to boost productivity with AI.
  5. Broad public interest and adoption: In business and technology, AI has been making headlines across the board, and for good reason. Significant increases in model performance, availability, and applicability have brought AI to the forefront of public dialogue. And this renewed interest is not purely academic. New generative AI models and services are setting user adoption records. For example, ChatGPT reached its first 1 million registered users within 5 days of release, growing to an estimated 100 million users after only 2 months, at the time making it the fastest-growing user base of any software application in history. For context, the previous adoption king, TikTok, took approximately 9 months to reach the same 100 million user milestone, and it took Instagram 30 months to do the same.

    In the coming years, commercial adoption of AI technology is expected to grow with “significant impact across all industry sectors.” This will give businesses additional opportunities to increase operational efficiency and boost productivity that is “likely to materialize when the technology is applied across knowledge workers’ activities.” Early reports suggest that the total economic impact of AI on the global economy could range between $17.1 trillion and $25.6 trillion.
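
The fine-tuning pattern behind foundation models (point 1 above) can be sketched in plain Python. Everything here is hypothetical and for illustration only: a frozen “base” model stands in for an expensive pretrained network, and only a small task-specific head is trained on top of its features.

```python
import math
import random

random.seed(0)

def base_features(x):
    """Stand-in for a frozen pretrained model: maps raw input to features.
    In a real system this would be a large network trained once on broad data."""
    return [x, x * x]  # frozen; never updated during fine-tuning

# Hypothetical downstream task: label is 1.0 when x > 0.5.
data = [(x, 1.0 if x > 0.5 else 0.0)
        for x in (random.random() for _ in range(200))]

# Small trainable head: logistic regression on the frozen features.
w, b = [0.0, 0.0], 0.0

def predict(x):
    f = base_features(x)
    z = sum(wi * fi for wi, fi in zip(w, f)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Fine-tune only the head with plain stochastic gradient descent.
lr = 0.5
for _ in range(500):
    for x, y in data:
        g = predict(x) - y  # gradient of the log loss w.r.t. z
        for i, fi in enumerate(base_features(x)):
            w[i] -= lr * g * fi
        b -= lr * g

accuracy = sum((predict(x) > 0.5) == (y == 1.0) for x, y in data) / len(data)
print(f"head-only fine-tuning accuracy: {accuracy:.2f}")
```

The point of the pattern is economics: the costly base is trained once, while each new task only requires fitting a tiny head, which is cheap enough to repeat for many task-specific models.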


The history of AI has been marked by cycles of enthusiasm and challenges, with periods of remarkable progress followed by setbacks, namely two AI winters. The current AI summer stands out as a transformative phase, fueled by breakthroughs in deep learning, better access to data, powerful computing, significant new investment in the space, and widespread public interest.

AI applications have expanded across industries, revolutionizing sectors such as healthcare, finance, transportation, retail, and manufacturing. The responsible development and deployment of new AI technologies, guided by transparent and ethical principles, are vital to ensure that the potential benefits of AI are captured in a manner that mitigates risks. The resurgence of AI in recent years is a testament to the resilience of the field. The new capabilities are getting us closer to realizing the potential of intelligent machines, ushering in a new wave of productivity.

For more about AI and the future of machine intelligence, be sure to read Entefy’s important AI terms for professionals and tech enthusiasts, the 18 valuable skills needed to ensure success in enterprise AI initiatives, and the Holy Grail of AI, artificial general intelligence.


Entefy is an advanced AI software and process automation company, serving SME and large enterprise customers across diverse industries including financial services, health care, retail, and manufacturing.


Entefy co-founders speak on the fast rise of multisensory AI and automation

Entefy sibling co-founders, Alston Ghafourifar and Brienne Ghafourifar, spoke with Yvette Walker, Executive Producer and Host of ABC News & Talk – Southern California Business Report, on the rise of the fast-evolving domain of multisensory AI and intelligent process automation. The 1-hour segment included focused discussion of AI applications that help optimize business operations across a number of industries and use cases, including those relevant to supply chains, healthcare, and financial services. Watch the full interview.

In an in-depth conversation about business operational efficiency and optimization across virtually every corner of an organization, the Ghafourifars shared their views on the importance of digital transformation and the intelligence enterprise in volatile times like these. Alston mentioned that AI and machine learning is all about “augmenting human power, allowing machines to do jobs that were traditionally reserved for individuals. And many of those jobs aren’t things that humans should spend their time doing or they’re not things we’re perfectly suited to doing.”

For example, organizations grappling with supply chain challenges are benefitting from machine intelligence and automation. This includes AI-powered applications dealing with everything from sourcing to manufacturing, costing, logistics, and inventory management. “Research shows that many people are now interested in how things are made…all the way from raw materials to how they end up on the shelf or how they’re delivered,” Brienne said. This is ultimately impacting consumer behavior and purchase decisions for many.

This episode also included discussions on blockchain and cryptocurrencies, in particular how AI-powered smart contracts and forecasting models can help reduce risks and increase potential return on investment (ROI).

When it comes to AI, every organization is at a different point in its journey and readiness. Entefy’s technology platform, in particular its multisensory AI capabilities, is designed to address digital complexity and information overload. To make this a reality, Entefy and its business customers are envisioning new ways to improve business processes, knowledge management systems, and workflow and process automation.

For more about AI and the future of machine intelligence, be sure to read our previous blogs on the 18 valuable skills needed to ensure success in enterprise AI initiatives.


Entefy is an advanced AI software and process automation company, serving SME and large enterprise customers across diverse industries including financial services, health care, retail, and manufacturing.


Key technology trends that will dominate 2023

At the beginning of each new year, we conduct research and reflect on how digital technology will continue to evolve over the course of the year ahead. This past year, we observed clear trends in a number of technical areas including machine learning, hyperautomation, Web3, metaverse, and blockchain.

Broadly speaking, artificial intelligence (AI) continues its growth in influence, adoption, and spending. Compared to five years ago, global business adoption of AI in 2022 increased by 250%. At enterprises, AI capabilities such as computer vision, natural language understanding, and virtual agents are being embedded across a number of departments and functional areas for a variety of use cases.

As the world looks past the global pandemic with an eager eye on the horizon, enterprises and governments are adjusting to the new reality with the help of innovation and advanced technologies. 2023 will be a crucial year for organizations interested in gaining and maintaining their competitive edge.

Here are key digital trends at the forefront in 2023 and beyond.

Artificial Intelligence

The artificial intelligence market has been on a rapid growth trajectory for several years, with a forecasted market size of a whopping $1.8 trillion by 2030. Ever since the launch of AI as a field at the seminal Dartmouth Conference in 1956, the story of AI’s evolution has centered on helping data scientists and the enterprise better understand existing data to improve operations at scale. This type of machine intelligence falls under the category of analytical AI, which can outpace our human ability to analyze data. However, analytical AI falls short where humans shine—creativity. Thanks to new advances in the field, machines are becoming increasingly capable of creative tasks as well. This has led to a fast-emerging category referred to as generative AI.

Leveraging a number of advanced deep learning models, including Generative Adversarial Networks (GANs) and transformers, generative AI helps produce a multitude of digital content. From writing to creating new music, images, software code, and more, generative AI is poised to play a strong role across any number of industries.

In the past, there has been fear that AI could displace human workers and ultimately put people out of work. That said, as with other disruptive technologies, analytical AI and generative AI can both complement human-produced output, all in the interest of saving time, effectively bettering lives for us all.


Hyperautomation

Hyperautomation, as predicted last year, continues to be one of the most promising technologies as it affords enterprises several advantages to ultimately improve operations and save valuable time and resources.

Think of hyperautomation as intelligent automation and orchestration of multiple processes and tools. It’s key in replacing many rote, low-level human tasks with process and workflow automation, creating a more fluid and nimble organization. This enables teams and organizations to adapt quickly to change and operate more efficiently. The end result is increasing the pace of innovation for the enterprise.

With hyperautomation, workflows and processes become more efficient, effectively reducing speed to market with products and services. It also reduces workforce costs while increasing employee satisfaction. Hyperautomation can help organizations establish a baseline standard for operations. When standards are defined, organized, and adhered to, it allows for continual audit readiness and overall risk reduction with those audits. This becomes a centralized source of truth for the business.

Globally, the hyperautomation market is expected to continue expanding with diverse business applications, growing from $31.4 billion in 2021 at a CAGR of 16.5% through 2030.
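
As a sanity check on what that growth rate implies, compounding the cited 2021 base at 16.5% annually (assuming simple annual compounding over the nine years to 2030) projects the market at roughly $124 billion:

```python
# Compound annual growth: value_n = value_0 * (1 + rate) ** years
base_2021 = 31.4     # $B, the 2021 figure cited above
cagr = 0.165         # 16.5% compound annual growth rate
years = 2030 - 2021  # 9 compounding years

projected_2030 = base_2021 * (1 + cagr) ** years
print(f"projected 2030 market size: ${projected_2030:.1f}B")  # $124.1B
```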


Blockchain

Not to be confused with cryptocurrencies like Bitcoin or Ethereum (which are built using blockchain technology), blockchain continues to gain adoption as both an implemented technology and a topic of strategic discussion for organizations large and small.

When implemented effectively, blockchain technology has many uses and can provide a number of valuable benefits ranging from increased transparency to accurate transaction tracking, auditability, and overall cost savings for the organization. Blockchain technology appears even more promising when considering its implications as a solution to improve trust between parties such as customers, vendors, producers, wholesalers, regulators, and more. Therefore, enterprise investments in blockchain technology to improve business processes “may offer significantly higher returns for each investment dollar spent than most traditional internal investments.”

It is estimated that blockchain’s potential economic impact on global GDP could exceed $1.75 trillion and enhance 40 million jobs by the end of this decade. Much of this impact is likely to stem from the improved trust which blockchain technology can enable between businesses relating to trade, transaction management, and more. Further, perhaps due in large part to its open and decentralized nature, this economic impact is unlikely to be focused solely on specific regions or dominated by specific nations, with the U.S., Asia (led by China), and Europe all poised to contribute.

Financial services is well positioned to continue as the blockchain leader among industries, as it has in recent years. However, as blockchain technology becomes more prolific, the enterprise landscape is likely to witness major adoption in other areas including healthcare, logistics, legal, media, supply chain, automotive, voting systems, gaming, agriculture, and more. In fact, in 2021 alone, funding for blockchain startups rose by 713% YoY.

With regulatory challenges surrounding digital assets, along with abundant news focused on high profile collapses of cryptocurrency exchanges such as FTX, enterprises are still approaching blockchain adoption cautiously. However, like any technology, blockchain is fast redefining itself as more than just the tech behind Bitcoin. It is emerging as a dominant focus area for digitally-minded organizations.


Web3

Web3 (or Web 3.0) represents the next generation of web technologies. It’s decentralized and permissionless, allowing everyone to access, create, and own digital assets globally, without any intermediary. Effectively, this levels the global playing field and brings us back to the original promise of the World Wide Web.

Web 1.0 was the original Web, dominated by static pages and content. In the early days, there were relatively few content creators and publishers. The Web was fairly limited and disorganized back then, but quickly evolved to Web 2.0 which ushered in a new era of digital interactivity and social engagement with innumerable applications. With this new Web, people were free to publish articles, share comments, and engage with others, creating the user-generated content explosion we experience today. But despite enormous success, Web 2.0 has created serious challenges. For example, user data is largely centralized, controlled, and monetized by only a few big tech monopolies.

Web3, along with the underlying blockchain technology that enables it, aims to address these challenges by decentralizing information systems, giving control of data and standards back to the community. This shift from Web 2.0 to Web 3.0 also opens doors to new business models such as those governed by decentralized autonomous organizations (DAOs) that eliminate intermediaries through secure (smart contract) automation.

Web3 has attracted large pools of capital and engineering talent exploring new business models, and the industry is estimated to grow significantly to $33.5 billion by 2030, at a CAGR of 44.9% from 2022. As with any burgeoning technology disruption, early adopters will almost certainly face challenges, including unclear and evolving regulation and immature, still-emerging technology platforms.

In 2023 and the years ahead, policy changes are expected, impacting Web3 creators in several ways. New legislation is likely to focus on asset classification, legality and enforceability of blockchain-based contracts, capital provisioning, accountability systems, and anti-money laundering standards.

Digital Assets

From cryptocurrencies to NFTs, digital assets continue to dominate news cycles—between the public collapse of FTX (a leading cryptocurrency exchange) and dramatic and sustained drops in market value for even the largest cryptocurrencies such as Bitcoin. This market activity and awareness helps solidify digital assets as an important topic to watch in 2023.

While much of the discussion surrounding digital assets is focused on the underlying technology, blockchain, it’s the financial and regulatory considerations that are most likely to see dramatic change in the coming year. These considerations will help re-establish trust in an otherwise battered market inundated with scams, fraud, theft, hacks, and more. According to a complaint bulletin published by the Consumer Financial Protection Bureau (CFPB), a U.S. government agency, of all consumer complaints submitted in the last 2 years related to crypto-assets, “the most common issue selected was fraud and scams (40%), followed by transaction issues (with 25% about the issue of ‘Other transaction problem,’ 16% about ‘Money was not available when promised,’ and 12% about ‘Other service problem’).”

Some regulators contend that the very definition of “digital assets” may need to evolve as major regulatory bodies struggle to provide clear and consistent guidance for the sector. Leading the way in digital asset regulation is the European Union. Shortly after the collapse of FTX, Mark Branson, president of BaFin (Germany’s financial market regulator), suggested that “a ‘crypto spring’ may follow what has been a ‘crypto winter’ but that the industry that emerges is likely to have more links with traditional finance, further increasing the need for regulation.” Thoughtful regulation can help establish trust in digital assets, beginning with risk management. An important method of rebuilding this trust is providing third-party validation of assets, liabilities, controls, and solvency. This is expected to be an important area of growth in 2023.

Organizations and governments are also looking to traditional (non-blockchain) financial services, standards, and providers to design security, controls, governance, and transparency with regard to digital assets. The early rise of central bank digital currencies (CBDCs), which are non-crypto assets, in countries such as Venezuela, Russia, China, and Iran, has pushed these countries to “adopt restrictive stances or outright bans on other cryptos.” Conversely, adoption of CBDCs by G7 countries has been “deliberately cautious […] particularly with regards to retail CBDCs used by the public.” Going forward, policymakers will continue to wrestle with these types of important regulatory considerations. They will assess how existing rules can be applied more effectively and design new rules to reinforce trust in the market without hamstringing innovation in the space.

Aside from regulatory changes, digital assets are likely to continue growing through demand driven by merchants and social media. For example, as some social media companies gradually release their own payment platforms for end users, the potential for new types of digital assets is only expanding—from processing cryptocurrency transactions to offering unique value through identity tokens, NFTs, and more. Additionally, with anticipated growth of metaverse and Web3, digital assets are poised to play an outsized role in developing consumer trust, generating tangible value, and promoting engagement in these new technical arenas.


Metaverse

The metaverse is often seen as the next natural evolution of the Internet. Despite challenges facing the metaverse—from technology to user experience—the space is benefiting from continued venture investments and growing attention from notable brands and celebrities. “Brands from Nike and Gucci to Snoop Dogg and TIME Magazine poured money into metaverse initiatives as a way of revolutionizing experiential brand engagement, while Meta doubled down on its Horizon Worlds experiment.”

Further, with a potential to generate up to $5 trillion in economic impact across multiple industries by 2030, the metaverse is too important to ignore. To ensure truly immersive experiences in the metaverse, a significant boost in demand is expected for a number of technical and creative services. In support of the metaverse, specialty services provided by digital designers, architects, software and machine learning engineers, to name a few, are predicted to remain in high demand in 2023 and beyond.

Use of specialized hardware such as AR (augmented reality), VR (virtual reality), MR (mixed reality) glasses and headsets is not strictly required to experience the metaverse. However, advancements in AR, VR, MR, and haptic technologies can dramatically improve the experience, which is likely to drive further engagement and adoption of the metaverse.

Part of the projected growth in the metaverse is born out of necessity. During the early days of the global pandemic, the majority of companies suddenly had to become comfortable with virtual collaboration and meetings via web conferencing. Virtual workspaces in the metaverse are a natural next step, driven by brands and corporations. By 2026, one out of four of us is expected to spend at least an hour each day in a metaverse. Further, nearly one third of organizations in the world are projected to have some sort of product or service ready for the metaverse.

In the metaverse, virtual spaces can be used for onboarding, on-the-fly brainstorming, and ongoing education and training. With high-quality, immersive 3D experiences, the metaverse can allow for design and engineering collaborations—for example, the ability to realize concept designs in a virtual world before any money is spent on real-world production. For brands, the metaverse will also see spaces created (almost as digital lounges) where enthusiastic evangelists can hang out and be part of virtual experiences, develop their own brand-centric communities, and participate in brand contests.

Autonomous Cyber

The explosion of digital networks, devices, applications, software frameworks, and libraries, along with rising consumer demand for everything digital, has brought the topic of cybersecurity to the fore. Cybersecurity attacks in recent years have risen at an alarming rate. There are too many examples of significant exploits to list in this article; however, the Center for Strategic & International Studies (CSIS) provides a summary list of notable cybersecurity incidents where losses per incident exceed $1 million. CSIS’s list illustrates the rise in significant cyberattacks, ranging from DDoS attacks to ransomware targeting healthcare databases, banking networks, military communications, and much more.

Cybersecurity is a major concern for the private sector with 88% of boards considering it a business risk. On a broader scale, cybersecurity is considered a matter of national and international security. In 2018, the U.S. Department of Homeland Security established Cybersecurity & Infrastructure Security Agency (CISA) to lead “the Nation’s strategic and unified work to strengthen the security, resilience, and workforce of the cyber ecosystem to protect critical services and American way of life.” As more “sophisticated cyber actors and nation-states […] are developing capabilities to disrupt, destroy, or threaten the delivery of essential services,” the critical risks of cybercrime are felt not just by government agencies, but also by commercial organizations which provide those essential services.

The fast-evolving field of autonomous cyber is a direct response to this growing problem. As computer networks and physical infrastructure grow more vulnerable to cyber threats, traditional approaches to cybersecurity are proving insufficient. Today’s enterprises require smart, agile systems to monitor and act on the compounding volume of cyber activity across an ever-growing number of undefended areas (in codebases, systems, or processes). Further, as cybercriminals become more sophisticated with modern tools and resources, organizations are feeling added pressure to fortify their digital and physical security operations quickly in order to ensure resiliency.

Autonomous cyber uses AI and machine intelligence to continuously monitor an enterprise’s digital activity. Advanced machine learning models can help identify abnormal activity that may present risk to an organization’s IT systems or any aspect of business operations. With CEOs increasingly focused on building corporate resilience against cybercrime, among other threats, the climate seems welcoming for more complex security technologies such as AI-powered autonomous cyber, which uses intelligent orchestration and complex automation to create alerts and take mitigation steps at speed and scale.
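To make the monitor-alert-mitigate loop concrete, here is a minimal, hypothetical sketch (not any vendor’s actual implementation): a rolling z-score over hourly event volumes flags unusual activity, and each alert triggers an automated response hook. The window size, threshold, and mitigation callback are illustrative assumptions.

```python
from statistics import mean, stdev

def detect_anomalies(event_counts, window=24, threshold=3.0):
    """Flag time buckets whose event volume deviates sharply from the
    trailing window's baseline (a simple rolling z-score rule)."""
    alerts = []
    for t in range(window, len(event_counts)):
        baseline = event_counts[t - window:t]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(event_counts[t] - mu) / sigma > threshold:
            alerts.append(t)
    return alerts

def respond(alerts, mitigate):
    """For each alert, invoke an automated mitigation hook
    (e.g., rate-limit a host, notify the security team)."""
    for t in alerts:
        mitigate(t)

# Steady hourly traffic with one sudden spike at hour 30
traffic = [100 + (i % 5) for i in range(48)]
traffic[30] = 900
alerts = detect_anomalies(traffic)   # flags hour 30
respond(alerts, lambda t: print(f"mitigating suspicious activity at hour {t}"))
```

Production systems replace the z-score with learned models and richer features, but the pattern of continuous scoring plus automated response is the same.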


At Entefy we are passionate about breakthrough computing that can save people time so that they live and work better. The 24/7 demand for products, services, and personalized experiences is forcing businesses to optimize and, in many cases, reinvent the way they operate to ensure resiliency and growth.

AI and automation are at the core of many of the technologies covered in this article. To learn more, be sure to read our previous articles on key AI terms, beginning the enterprise AI journey, and the 18 skills needed to bring AI applications to life.


Entefy is an advanced AI software and process automation company, serving SME and large enterprise customers across diverse industries including financial services, health care, retail, and manufacturing.


Entefy hits key milestones with new patents issued by the USPTO

Entefy continues to expand its IP portfolio with another set of awarded patents by the USPTO

PALO ALTO, Calif. December 20, 2022. The U.S. Patent and Trademark Office (USPTO) has issued four new patents to Entefy Inc. These new patents, along with the other USPTO awards cited in a previous announcement in Q3 of this year, further strengthen Entefy’s growing intellectual property portfolio in artificial intelligence (AI), information retrieval, and automation.

“We operate in highly competitive business and technical environments,” said Entefy’s CEO, Alston Ghafourifar. “Our team is passionate about machine intelligence and how it can positively impact our society. We’re focused on innovation that can help people live and work better using intelligent systems and automation.”

Patent No. 11,494,421 for “System and Method of Encrypted Information Retrieval Through a Context-Aware AI Engine” expands Entefy’s IP holdings in the field of privacy and security cognizant data discovery. This disclosure relates to performing dynamic, server-side search on encrypted data without decryption. This pioneering technology preserves security and data privacy of client-side encryption for content owners, while still providing highly relevant server-side AI-enabled search results.

Patent No. 11,496,426 for “Apparatus and Method for Context-Driven Determination of Optimal Cross-Protocol Communication Delivery” expands Entefy’s patent portfolio of AI-enabled universal communication and collaboration technology. This disclosure relates to optimizing the delivery method of communications through contextual understanding. This innovation simplifies user communication with intelligent delivery across multiple services, devices, and protocols while preserving privacy.

Patent No. 11,494,204 for “Mixed-Grained Detection and Analysis of User Life Events for Context Understanding” strengthens Entefy’s IP portfolio of intelligent personal assistant technology. This disclosure relates to correlation of complex, interrelated clusters of contexts for use by an intelligent interactive interface (“intelli-interface”) to perform actions on behalf of users. This invention saves users time in digesting and acting on the ever-expanding volume of information through contextually relevant automation.

The USPTO has also awarded Entefy Patent No. 11,409,576 for “Dynamic Distribution of a Workload Processing Pipeline on a Computing Infrastructure,” which expands Entefy’s patent portfolio of AI-enabled virtual resource management. This disclosure relates to use of AI models to configure, schedule, and monitor workflows across virtualized computing resources. This innovation improves resource use and efficiency of software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) technology.

Entefy’s proprietary technology is built on key inventions in a number of rapidly changing digital domains including machine learning, multisensory AI, dynamic encryption, universal search, and others. “Entefy AI and automation technology is designed to help businesses become more resilient and operate much more efficiently,” said Ghafourifar. “Our team is excited about innovation in this space and creating distinct competitive differentiation for Entefy and our customers.”




The smarter grid: AI fuels transformation in the energy sector

When it comes to energy, much of the world is at a critical turning point. There is a noticeable momentum in favor of renewable energy, with scientists and the United Nations warning the world’s leaders that carbon emissions “must plummet by half by 2030 to avoid the worst outcomes.” Given the variability and the lack of predictability in certain energy sources such as solar and wind, there is a growing need for a new kind of “smart” electrical grid that can reliably and efficiently manage flows of electricity from renewable sources with those generated by conventional power plants. Smart grids leverage advanced technologies to ultimately reduce costs and waste. Creating such grids requires massive amounts of data, predictive analytics, and automation, all ideally powered by artificial intelligence (AI).

Many of us might not be aware that the top 10 hottest years since human beings began keeping records have all occurred in the last decade or so. In June 2022, “among global stations with a record of at least 40 years, 50 set (not just tied) an all-time heat or cold record.” During the same month, Arctic sea ice extent “was the 10th-lowest in the 44-year satellite record” and Antarctic sea ice extent “was the lowest for any June on record, beating out 2019.”

In the United States, the costs of major weather and climate disasters since 1980 (events that have each caused damage in excess of $1 billion) have already totaled more than $2.295 trillion. So far this year alone, the United States has witnessed 15 such costly events.

NOAA National Centers for Environmental Information (NCEI) U.S. Billion-Dollar Weather and Climate Disasters (2022). https://www.ncei.noaa.gov/access/billions/, DOI: 10.25921/stkw-7w73

On a global level, the international treaty on climate change, the Paris Agreement, has called on 196 countries to prevent the planet from warming more than 2 degrees Celsius (3.6 degrees Fahrenheit) above pre-industrial levels. This treaty is a major step toward achieving net-zero carbon emissions by 2050. If this target can’t be met, the resulting financial losses could be as high as $23 trillion, which could put the global economy in a tailspin similar to the most severe economic contractions experienced in the past.   

For the U.S., the world’s second-largest carbon emitter after China, fulfilling the Paris Agreement will require herculean effort, including a complete reimagining of the electrical grid.

The Smart Grid

The 2021 failure of the Texas grid left millions of Southerners freezing without electricity for days, while the Great Blackout of 2003 shut down electricity for tens of millions on the East Coast. The Texas power grid may be independent and isolated (the only one in the contiguous 48 states), but it’s clear that, along with much of the U.S. infrastructure, power grids across the nation are aging and in desperate need of modernization, with deadly power outages occurring in the past six years at more than double the rate of the prior six years. And, as each day passes, this need for infrastructure modernization becomes even more critical, not just to ensure the safety and reliability of the power network, but also to handle the accelerating demand for electricity catalyzed by clean energy initiatives and electric vehicle (EV) adoption.

Historically, electric grids have lacked dynamism and flexibility. They are instead built to achieve very large economies of scale with the main sources of electricity coming from non-renewables such as coal or natural gas. These traditional sources of energy are costly to operate efficiently and consistently. Further, increased regulation over time and higher costs have promoted vertically-integrated energy monopolies which have stifled innovation.

Renewable sources of energy have their own set of challenges. Solar and wind, for example, don’t provide the same level of consistency afforded by traditional large-scale energy production. This means that although large renewable energy plants (as well as individual homeowners or businesses) can produce power with their solar panels and wind turbines, they are not able to produce that power at a steady, controllable rate. This erratic rate of energy production can sometimes translate into excess production, which can in turn be fed into the electrical grid or stored for later use in on-site batteries. All of this creates opportunities for a new type of 21st-century power grid that is modular, multidirectional, and smart.

With the future of energy looking more and more decentralized, how can energy consumers and generators monitor and monetize the flow of electricity? How does the power grid provide resiliency and flexibility to properly store excess energy while simultaneously operating at the scale necessary to satisfy the ever-growing demand for electricity without interruption? For much of the history of renewable energy, these problems have been managed with computational models based on historical weather and electricity demand patterns, modified according to demographic and economic changes. But the rise of new technologies like real-time data transmission, edge computing, Internet of Things (IoT) sensors, and even AI-powered control systems has enabled the design of new “smart grid” frameworks able to manage and integrate diverse energy sources for a revolutionary new approach to power generation, storage, distribution, pricing, and regulation.

Energy Demand Forecasting

To avoid the dangers of blackouts, a smart power grid anticipates when electricity demand in a given region may be at peak levels. This allows control systems to ensure that adequate electricity is fed to the grid from available sources which can satisfy the demand. If need be, a fossil fuel plant can be activated in advance of any peak demand with sufficient notice. However, what happens when the power grid needs to react swiftly to unexpected disruptions (such as extreme weather changes, natural disasters, or dramatic reductions in capacity) which traditional forecast models are unable to predict?

With AI and process automation, systems can achieve accurate, minute-by-minute forecasts of energy demand through the real-time transmission of energy use data from smart meters. “In 2021, U.S. electric utilities had about 111 million advanced (smart) metering infrastructure (AMI) installations, equal to about 69% of total electric meters installations. Residential customers accounted for about 88% of total AMI installations, and about 69% of total residential electric meters were AMI meters.” AI control and monitoring systems can utilize the wealth of information available via smart meters to identify patterns in the data that coincide with high demand and automatically trigger the allocation of different energy sources to help prevent overload. Similarly, machine learning can also be used to monitor the stability of electricity flow using the rich data generated by IoT sensors throughout the grid.
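As a simplified illustration of short-term load forecasting (not a production system), an autoregressive model can predict the next hour’s demand from the previous day of smart-meter readings. The synthetic daily cycle below is a stand-in for real meter data, and the lag count is an illustrative choice:

```python
import numpy as np

def fit_ar_forecaster(load, lags=24):
    """Fit a linear autoregressive model that predicts the next hour's
    load from the previous `lags` hourly smart-meter readings."""
    X = np.array([load[t - lags:t] for t in range(lags, len(load))])
    y = np.array(load[lags:])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast_next_hour(load, coef, lags=24):
    """Predict demand for the hour after the end of the series."""
    return float(np.dot(load[-lags:], coef))

# Synthetic two weeks of hourly demand with a clean daily cycle,
# standing in for real smart-meter data
hours = np.arange(24 * 14)
demand = 500 + 120 * np.sin(2 * np.pi * hours / 24)
coef = fit_ar_forecaster(demand)
prediction = forecast_next_hour(demand, coef)  # ~500 at this point in the cycle
```

Real grid forecasters add weather, calendar, and demographic features on top of this autoregressive core, but the principle of learning demand patterns from historical meter data is the same.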

Automated Energy Trading

When AI triggers the grid to draw on additional energy sources to prevent a blackout, those energy sources can command a price for their resources. From wind and solar farms to traditional fossil fuel power plants, as well as individual homeowners with solar panels or power generators, energy trading is growing in volume and complexity. With legacy electrical grids and traditional meters, there was relatively little communication between utilities and energy customers. Electricity flowed unidirectionally from plants to power lines to customers, with meters read manually without granular insights into energy usage. Smart grids, microgrids, broad-based deployment of AMI meters, and modern battery storage technologies have changed all of that. In combination, these advances now enable bidirectional communication and transactions between producers, consumers, and prosumers (consumers who also produce and store energy, often using solar panels and batteries) allowing for better energy management and stimulating energy trading.

Today, AI and machine learning are used to analyze signals from diverse data sources in order to provide more accurate pricing and energy predictions. Taking advantage of this allows energy traders to better balance their positions. AI has also emerged as a catalyst for new business models including “community ownership models, peer-to-peer energy trading, energy services models, and online payment models.” These new models give energy producers the improved ability to sell their electricity to the grid at a competitive market price.

Volatility in energy supply due to network constraints, along with strong fluctuations in energy demand, makes forecasting of energy availability and prices challenging. Automated trading platforms use AI and deep neural networks to monitor the market for the best energy sales opportunities and conduct high-frequency trading without the need for human input. This is similar to how Entefy’s Mimi AI Crypto Market Intelligence provides real-time price predictions and 24/7 automated trading functionality in the highly volatile cryptocurrency market.
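The decision logic that sits between a price forecast and an executed trade can be sketched in a few lines. This is a toy illustration, not any platform’s real strategy; the fee level and thresholds are arbitrary assumptions, and real systems layer risk management on top:

```python
def trade_signal(predicted_price, current_price, fee=0.002):
    """Emit an order only when the predicted move clears trading fees."""
    edge = (predicted_price - current_price) / current_price
    if edge > fee:
        return "buy"
    if edge < -fee:
        return "sell"
    return "hold"

def run_strategy(prices, predictions, cash=1000.0, position=0.0, fee=0.002):
    """Toy backtest: act on each prediction, tracking cash and position."""
    for price, pred in zip(prices, predictions):
        signal = trade_signal(pred, price, fee)
        if signal == "buy" and cash > 0:
            position += cash * (1 - fee) / price
            cash = 0.0
        elif signal == "sell" and position > 0:
            cash += position * price * (1 - fee)
            position = 0.0
    return cash, position
```

The key design point is that a trade fires only when the predicted edge exceeds transaction costs; without that guard, a noisy forecast would churn the account.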

According to McKinsey, the “deployment of advanced analytics can lead to a reduction of more than 30 percent in costs by optimizing bidding of renewable assets in day-ahead and intraday markets.” As a secondary benefit, fully automated processes leveraging advanced AI resulted in “a productivity gain in intraday trading of 90 percent.” The key observation was that these additional forms of automation transformed the role of energy traders from “taking decisions and executing trades toward focusing on market analysis and improving advanced analytics models on a continual basis.”

Intelligent Grid Security

One of the most important features of a truly “smart” grid is its ability to fend off cyberattacks. Cybersecurity is a growing area of concern in many industries and sectors, but cyberattacks in the energy sector are of special interest. The ransomware attack on Colonial Pipeline, the United States’ largest pipeline for refined oil products, was a reminder to us all of how vulnerable we actually are. On May 7, 2021, a group of cybercriminals from Russia hacked Colonial Pipeline’s “IT network, crippling fuel deliveries up and down the East Coast.” Within just an hour of the ransomware attack, management made the decision to shut down the company’s 5,500 miles of pipelines. In effect, this event led to a shutdown of approximately 50% of the East Coast’s gasoline and jet fuel supply.

Preventing cyberattacks will be especially important as more smart grid operations come online and massive amounts of data are generated, transmitted, and stored across multiple IT systems and cloud infrastructures. As covered in one of Entefy’s previous blogs, modern countermeasures include machine learning, hyperautomation, and zero trust systems. AI-powered tools are capable of processing complex and voluminous data infinitely faster (and often more accurately) than human security analysts ever could. With sufficient compute infrastructure, artificial intelligence can be put to work 24/7/365 to analyze trends and learn new patterns that can identify vulnerabilities before it’s too late. More sophisticated approaches such as Autonomous Cyber can go further to use the power of AI not only to continuously monitor data flow or network activity but also to take actions as needed (including alerts, notifications, blocking access, or shutting off services) to help mitigate risks. Active identification of issues in security systems and enabling automatic “self-healing” processes by instantly patching or fixing errors can dramatically reduce remediation time for organizations.


The energy sector is facing massive changes and moving toward a future that operates on clean, renewable energy, decentralization and distributed energy production, as well as intelligent systems that optimize energy production, consumption, and infrastructure. Smart grids are a big step forward in that future. They leverage machine learning, intelligent process automation, and other advanced technologies to deliver efficiency to the market, cut costs, and reduce waste.

For more about AI and the future of machine intelligence, be sure to read our previous blogs on the 18 valuable skills needed to ensure success in enterprise AI initiatives, the rise in cybersecurity threats, and AI ethics.




AI and intelligent automation coming to a supply chain near you

For years, many Americans have consumed a vast, world-class variety of products on a day-to-day basis without necessarily thinking too much about the complex supply chains involved in these products.

All of that changed dramatically with the onset of the COVID-19 pandemic. Suddenly, critical factories and assembly lines came to a halt around the world while shipping containers were stranded at ports due to restrictions from lockdowns and travel bans. For the first time, a majority of the population became painfully aware of the incredible power the global supply chain has over everyday life. More than two years after the pandemic’s start, we continue to feel the impact of these supply chain disruptions via rising costs, product and inventory shortages, delivery delays, and dramatic shifts in consumer behavior. Add a growing number of extreme weather and geopolitical events to the mix, and the global flow of goods faces even greater instability.

The best way to address this increasingly complex and volatile trade environment is to work smarter, not harder. And, that is where artificial intelligence (AI) and automation come in.

Supply chain professionals around the world are rapidly adopting AI and machine learning technologies to help them adapt and thrive in the midst of emergent uncertainty. According to IDC, automation using AI will drive “50% of all supply chain forecasts” by 2023. Further, by the end of this year, “chronic worker shortages will prompt 75% of supply chain organizations to prioritize automation investments resulting in productivity improvements of 10%.”

Here are a few examples of how AI and automation are already transforming the supply chain:

Predictive analytics for delivery and warehousing

One of the most significant ways AI is impacting the supply chain is via advanced analysis of data generated by the Internet of Things (IoT). Thanks to GPS (Global Positioning System) tracking and IoT sensors placed on raw materials, shipping containers, and products, as well as communications throughout the logistics pipeline, providers now have real-time visibility into the movement of their goods. Better yet, advancements in machine learning over the last decade have allowed AI to use voluminous, complex data from shipments and IoT devices to accurately predict the arrival of packages and even monitor the storage conditions of products and raw materials.

AI-enabled dynamic routing is helping companies optimize delivery and logistics leading to significant cost savings and efficiency. UPS, for instance, has employed its on-road integrated optimization and navigation (ORION) system “to reduce the number of miles traveled, fuel consumed, and CO2 emitted.” And the impact is tangible. “According to UPS, ORION has saved it around 100 million miles per year since its inception, which translates into 10 million gallons of fuel and 100,000 metric tons of carbon dioxide emissions. Moreover, reducing just one mile each day per driver can save the company up to $50 million annually.”
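Dynamic routing of this kind ultimately rests on shortest-path search over a road graph whose edge weights are live travel-time estimates. The toy sketch below (a textbook Dijkstra search, not ORION itself; the road names and times are invented) shows how an updated traffic estimate changes the chosen route:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's shortest-path search over a road graph whose edge
    weights are current travel-time estimates (in minutes)."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for neighbor, weight in graph.get(node, {}).items():
            candidate = d + weight
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor] = candidate
                prev[neighbor] = node
                heapq.heappush(heap, (candidate, neighbor))
    # Walk back from the goal to reconstruct the route
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Travel times before and after a reported slowdown on the A->B leg
roads = {"A": {"B": 10, "C": 15}, "B": {"D": 10}, "C": {"D": 12}}
fast_route, _ = shortest_route(roads, "A", "D")   # A -> B -> D
roads["A"]["B"] = 30                              # congestion update
new_route, _ = shortest_route(roads, "A", "D")    # reroutes via C
```

Re-running the search whenever IoT and traffic feeds update the weights is what turns a static route plan into dynamic routing.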

Use of predictive analytics in warehousing is gaining adoption as well. Many consumers have personally experienced or observed AI leveraging vast amounts of digital data in order to make predictions. We may see this when an e-commerce service tries to anticipate a consumer’s next purchase using historical data. What we sometimes forget is that all this consumer data is tremendously helpful to warehouses and brick-and-mortar stores, which use machine learning and predictive analytics to determine which items will sell in a given week and where those items should be stocked.

Machine learning for streamlining processes and reducing human error

Machine learning also helps with the more technical and time-consuming aspects of moving products around the world. According to a recently published joint report by the World Trade Organization and the World Customs Organization, customs authorities have already embraced advanced technologies such as AI and machine learning. “Around half use some combination of big data, data analytics, artificial intelligence and machine learning. Those who do not currently use them have plans to do so in the future. The majority of customs authorities see clear benefits from advanced technologies, in particular with regard to risk management and profiling, fraud detection and ensuring greater compliance.”

AI can automate the completion of lengthy international customs and clearance forms, provide accurate tallies of products in shipments, correct mistakes made by human officials regarding country of origin or gross net weight, and provide a customs entry number when a shipment finally arrives at its destination country. Machine learning can also parse shipment invoices and correctly load the data into a company’s accounting system.

Additional use cases include AI systems and applications that help automate supply chain processes and workflows in smart ways. For example, Entefy’s Mimi AI Digital Supply Chain Hub is used to optimize logistics, costing, and sourcing, and to strengthen supplier relationships. With such AI-powered tools, users can automate repetitive manual procedures, reduce errors, track key metrics in real time (and in one place), and monitor the global supply chain across geographies, business units, vendors, suppliers, and products. Moreover, users can quickly uncover valuable business-critical insights from complex data streams that would otherwise be unfeasible to extract via traditional manual analysis.

Automated decisions and demand forecasting

Producing, packaging, delivering, and selling products today means meeting new sets of customer expectations and behaviors. Simultaneously, the business climate is growing more complex and more volatile, making competition uniquely challenging. These conditions are leading to digital and AI transformations at an ever-increasing number of organizations worldwide. Supply chain leaders have begun to embrace the need for modernized IT in order to ensure resiliency and efficiency going forward, and this has led to a big shift toward intelligent systems and automation.

According to McKinsey, 80% of supply chain leaders “expect to or already use AI and machine learning” for demand, sales, and operations planning. The overarching objective is to help reduce costs and increase revenue. This is achieved by reducing reliance on human involvement, integrating analytics across the supply chain (from orders to production to demand forecasts), enabling process and workflow automation at virtually every step, and ultimately balancing inventory levels with service levels while reducing lead times from the moment of order to end delivery.

Improved quality control and waste reduction

Quality-focused companies recognize that quality matters to their customers as well as their bottom line. They understand the financial and operational costs associated with poor quality and are taking additional steps to improve quality control and reduce waste.

What is the cost of poor quality? According to ASQ (the American Society for Quality), a global membership society of quality professionals in more than 130 countries, costs incurred from quality-related activities “may be divided into prevention costs, appraisal costs, and internal and external failure costs.” It is estimated that quality-related costs are “as high as 15-20% of sales revenue, some going as high as 40% of total operations. A general rule of thumb is that costs of poor quality in a thriving company will be about 10-15% of operations. Effective quality improvement programs can reduce this substantially, thus making a direct contribution to profits.”

The reality is that today, with advanced technologies including AI and machine learning, waste and yield losses due to quality problems are in many ways avoidable. By taking a data-driven approach, analyzing mountains of data from multiple sources (such as sensors, plant logs, defect reports, and service tickets), companies can better pinpoint hidden issues, quickly identify root causes, and predict (and avoid) vulnerabilities to reduce critical defects and waste. In combination with AI and automation, companies can increase yields while reducing the overall cost of quality.
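As a minimal illustration of this data-driven approach (the sensor readings and threshold here are hypothetical), even a simple z-score check over process measurements can surface out-of-spec readings that may signal an emerging quality problem:

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.0):
    """Flag readings whose z-score exceeds the threshold --
    a simple stand-in for spotting out-of-spec measurements."""
    mu, sigma = mean(readings), stdev(readings)
    return [x for x in readings if abs(x - mu) / sigma > threshold]

# Hypothetical temperature readings from a production-line sensor
temps = [70.1, 70.3, 69.8, 70.0, 70.2, 75.9, 70.1, 69.9]
print(flag_anomalies(temps))  # the outlier reading stands out
```

Production systems replace this fixed threshold with models trained on historical defect data, but the principle of statistically separating normal variation from true anomalies carries over.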


The recent disruptions to global supply chains have tangibly impacted the way we produce and consume products—from major delays in product shipments to escalating costs to inventory shortages to shifts in consumer behavior. Companies have been thrown off balance and, as a result, have accelerated their adoption of advanced technologies to gain efficiency and resiliency. Using multimodal machine learning and intelligent process automation to achieve supply chain optimization is something we take pride in at Entefy. Be sure to read our previous blogs about how to begin your enterprise AI journey and the 18 valuable skills needed to ensure success in enterprise AI initiatives.


Entefy is an advanced AI software and process automation company, serving SME and large enterprise customers across diverse industries including financial services, health care, retail, and manufacturing.


Entefy awarded four new patents by USPTO

PALO ALTO, Calif. July 12, 2022. The U.S. Patent and Trademark Office (USPTO) has issued four new patents to Entefy Inc. Entefy’s newly granted patents represent a range of novel software and intelligent systems that serve to expand and strengthen the company’s core technology in communication, search, security, data privacy, and multimodal AI.

“These newly issued patents highlight our team’s effort and commitment in delivering value through innovation,” said Entefy’s CEO, Alston Ghafourifar. “As a company and as a team, we’ve been focused on developing technologies that can power society through AI and automation that is secure and preserves data privacy.”

Patent No. 11,366,838 for “system and method of context-based predictive content tagging for encrypted data” expands Entefy’s patent portfolio of AI-enabled universal communication and collaboration technology. This patent relates to multi-format, multi-protocol message threading by stitching together related communications in a manner that is seamless from the user’s perspective. This innovation saves users time in identifying, digesting, and acting on the ever-expanding volume of information available through modern data networks while maintaining data privacy.

Patent No. 11,366,839 for “system and method of dynamic, encrypted searching with model driven contextual correlation” expands Entefy’s intellectual property holdings in the field of privacy-preserving data discovery. The disclosure relates to ‘zero-knowledge’ privacy systems, where the server-side searching of user encrypted data is performed without accessing the underlying private user data. This technology preserves security and privacy of client-side encryption for content owners, providing highly relevant server-side search results via the use of content correlation, predictive analysis, and augmented semantic tag clouds.

Patent No. 11,366,849 for “system and method for unifying feature vectors in a knowledge graph” expands Entefy’s patent portfolio in the space of data understanding. The disclosure provides advanced AI-powered techniques for identifying conceptually related information across different data types, representing a significant leap in the evolution of data mapping and processing.

Entefy was also awarded Patent No. 11,367,068 for “decentralized blockchain for artificial intelligence-enabled skills exchanges over a network.” Expanding on Entefy’s core universal communication and multimodal machine intelligence technology, this new direction of technological development allows for AI-powered agents to learn and provide specific skills or services directly to end users or other digital agents through a blockchain-enabled marketplace. By leveraging smart contracts and other blockchain technologies, this invention lets individual products and systems communicate and share skills in an autonomous fashion with cost and resource efficiency in mind.

Entefy’s intellectual property assets span a series of domains from digital communication to artificial intelligence, dynamic encryption, enterprise search, multimodal machine intelligence and others. “As a company, we are committed to developing new technologies that can deliver unprecedented efficiencies to businesses everywhere,” said Ghafourifar. “The investment in our patent portfolio is an integral part of bringing our vision to life.”



Artificial general intelligence, the Holy Grail of AI

From Mary Shelley’s Frankenstein to James Cameron’s Terminator, the possibility of scientists creating an autonomous intelligent being has long fascinated humanity—as has the potential impact of such a creation. In fact, the invention of machine intelligence with broad, humanlike capabilities, often referred to as artificial general intelligence, or AGI, is an ultimate goal of the computer science field. AGI refers to “a machine’s intelligence functionality that matches human cognitive capabilities across multiple domains. Often characterized by self-improvement mechanisms and generalization rather than specific training to perform in narrow domains.”

Some well-known thinkers such as Stephen Hawking have warned that a true artificial general intelligence could quickly evolve into a “superintelligence” that would surpass and threaten the human race. However, others have argued that a superintelligence could help us solve some of the greatest problems facing humanity. Via rapid and advanced information processing and decision-making, AGI could help us complete hazardous jobs, accelerate research, and eliminate monotonous tasks. As global birthrates plummet and the old outnumber the young, at the bare minimum, robots with human-level intelligence might help address the shortage of workers in critical fields such as healthcare, education, and yes, software engineering.

AI all around us

Powerful real-life demonstrations of AI, from computers that beat world chess champions to virtual assistants in our devices that speak to us and execute tasks, may lead us to think that a humanlike AGI is just around the corner. However, it’s important to note that virtually all AI implementations and use cases you experience today are examples of “narrow” or “weak” artificial intelligence—that is, artificial intelligence built and trained for specific tasks.   

Narrow forms of AI that actually work can be useful because of their ability to process information exponentially faster (and with fewer errors) than human beings ever could. However, as soon as these forms of narrow AI come upon a new situation or variation of their expert tasks, their performance falters.

To understand how ambitious the notion of creating AGI truly is, it’s helpful to reflect on the vast complexity of human intelligence.

Complex and combinatorial intelligence

Alan Turing, the father of computer science, offered what is famously called the Turing Test for identifying machine intelligence. He proposed that any machine that could imitate a human being to the point of fooling another human being could pass the test.

However, MIT roboticist Rodney Brooks has outlined four key developmental stages of human intelligence that can help us measure progress in machine intelligence:

  1. A two-year-old child can recognize and find applications for objects. For example, a two-year-old can see the similarity between a chair and a stool, and the possibility of sitting on both a rock and a pile of pillows. While the visual object recognition and deep learning capabilities of AI have advanced dramatically since the 1990s, most forms of current AI aren’t able to apply new actions to dissimilar objects in this manner.
  2. A four-year-old child can follow the context and meaning of human language exchange through multiple situations and can enter a conversation at any time as contexts shift and speakers change. To some degree, a four-year-old can understand unspoken implications, responding to speech as needed, and can spot and use both lying and humor. In a growing number of cases, AI is able to successfully converse and use persuasion in manners similar to human beings. That said, none currently has the full capacity to consistently give factual answers about the current human-social context in conversation, let alone display all the sophistication of a four-year-old brain.
  3. A six-year-old child can master a variety of complex manual tasks in order to engage autonomously in the environment. Such tasks include self-care and goal-oriented tasks such as dressing oneself and cutting paper in a specific shape. A child at this age can also manually handle younger siblings or pets with elevated sensitivity. There is hope that AI might power robotic assistants to the elderly and disabled by the end of the decade.
  4. An eight-year-old child can infer and articulate the motivations and goals of other human beings by observing their behavior in a given context. They are socially aware and can identify their own motivations and goals and explain them in conversations, in addition to understanding the personal ambitions explained by their conversation partners. Eight-year-olds can implicitly understand the goals and objectives of assignments others give them without a full spoken explanation of the purpose. In 2020, MIT scientists successfully developed a machine learning algorithm that could determine human motivation and whether a human successfully achieved a desired goal through the use of artificial neural networks. This represents a promising advancement in independent machine learning.

Why the current excitement over AGI?

Artificial general intelligence could revolutionize the way we work and live, rapidly accelerating solutions to many of the most challenging problems plaguing human society—from the need to reduce carbon emissions to the need to react to and manage disruptions to global health, the economy, and other aspects of society. Some have posited that it could even liberate human beings from all forms of taxing labor, leaving us free to pursue pleasurable pursuits full time with the help of a universal basic income.

For his 2018 book, Architects of Intelligence, futurist Martin Ford surveyed 23 of the leading scientists in AI and asked them how soon they thought research could produce a genuine AGI. “Researchers guess [that] by 2099, there’s a 50 percent chance we’ll have built AGI.”

So why do news headlines make it seem as though AGI may be only a few years off? Hollywood is partly to blame for both popularizing and glamorizing super-smart machines in pseudo-realistic films. Think Iron Man’s digital sidekick J.A.R.V.I.S., Samantha in the movie Her, or Ava in the movie Ex Machina. There is also the dramatic progress in machine learning that has occurred over the past decade, in which a combination of advances in big data, computer vision, and speech recognition, along with the application of graphics processing units (GPUs) to algorithmic design, allowed scientists to better replicate patterns of the human brain through artificial neural networks.

Recent big digital technology trends in machine learning, hyperautomation, and the metaverse, among others, have made researchers hopeful that another important scientific discovery could help the field of AI make a giant leap forward toward complex human intelligence. As with previous revolutions in computing and software, it’s magical and inspiring to witness the journey of machine intelligence from narrow AI to AGI and the remarkable ways it can power society.

For more about AI and the future of machine intelligence, be sure to read our previous blogs on important AI terms, the ethics of AI, and the 18 valuable skills needed to ensure success in enterprise AI initiatives.

Introducing Entefy’s new look

As Entefy enters its next phase of growth, we’re excited to publicly unveil our new branding, complete with our redesigned logo, updated visual theme, and exclusive content, all of which can be seen on our brand new website. Reflecting on the evolution of our company and services over recent years, it’s time to update our look on the outside to echo the progress we’ve made on the inside. While this new branding changes our look and feel, our mission and core values remain unchanged. 

With our new visual theme, you’ll notice how we’ve emphasized clarity, simplicity, and efficiency, beginning with the black and white primary color palette. Our suite of 2D and 3D graphic assets is inspired by geometric shapes and principles to create coherence and harmony, especially when dealing with more complex aspects of data and machine learning. This is Entefy’s new visual design language, offering a fresh perspective on AI, automation, and the future of machine intelligence.

Entefy’s new website provides important information about the company’s core technologies, highlighted products and services, as well as our all-in-one bundled EASI subscription.

Why Entefy? What is the Mimi AI Engine? How does Entefy’s 5-Layer Platform work? How do Entefy AI and automation products solve problems at organizations across diverse industries? How does the EASI Subscription work and what are some customer success stories? You’ll find answers to these questions and much more on our new website. We encourage you to visit the new site and blog to learn more about Entefy and a better way to AI.

Much has happened since the early days of Entefy. Technological breakthroughs and inventions. Key product milestones. Expansion of Entefy’s technical footprint across countries and industries. And a long list of amazing people (team members, investors, partners, advisors, and customers alike) who have joined Entefy in manifesting the future of machine intelligence in support of our mission—saving people time so they can work and live better.

Interested in digital and AI transformation for your organization? Request a demo today.

Got the Entefy moxie and interested in new career opportunities? If so, we want you on our team.

Cybersecurity threats are on the rise. AI and hyperautomation to the rescue.

In the early days of the Internet, cybersecurity wasn’t the hot topic it is now. It was more of a nuisance or an inconvenient prank. Back then, the web was still experimental and more of a side hustle for a relatively narrow, techie segment of the population—nothing even remotely close to the lifeline it is for modern society today. Cyber threats and malware could be easily mitigated with basic tools or system upgrades. But as technology grew smarter and more dependent on the Internet, it became more vulnerable. Today, cloud, mobile, and IoT technologies, as well as endless applications, software frameworks, libraries, and widgets, serve as a veritable buffet of access points for hackers to exploit. As a result, cyberattacks are now significantly more widespread and can damage more than just your computer. The acceleration of digital trends due to the COVID-19 pandemic has only exacerbated the problem.

What is malware?

Malware, or malicious software, is a general industry term that covers a slew of cybersecurity threats. Hackers use malware and other sophisticated techniques (including phishing and social-engineering-based attacks) to infiltrate and damage your computing devices and applications, or simply steal valuable data. Malware includes common threats such as ransomware, viruses, adware, and spyware, as well as lesser-known threats such as trojan horses, worms, and cryptojacking. Cybercriminals hack to control data and systems for money, to make political statements, or just for fun.

Today, cybersecurity represents real risk to consumers, corporations, and governments virtually everywhere. The reality is that most systems are not yet capable of identifying or protecting against all cyber threats, and those threats are only rising in volume and sophistication.

Examples of cybersecurity attacks

Cyberattacks in recent years have risen at an alarming rate. There are too many examples of significant exploits to list here. However, the Center for Strategic & International Studies (CSIS) provides a 12-month summary illustrating the noticeable surge in significant cyberattacks, ranging from DDoS attacks to ransomware targeting health care databases, banking networks, military communications, and much more.

All manner of information is at risk in these sorts of attacks, including personally identifiable information, confidential corporate data, and government secrets and classified data. For instance, in February 2022, “a U.N. report claimed that North Korea hackers stole more than $50 million between 2020 and mid-2021 from three cryptocurrency exchanges. The report also added that in 2021 that amount likely increased, as the DPRK launched 7 attacks on cryptocurrency platforms to help fund their nuclear program in the face of a significant sanctions regime.”

Another example comes from July 2021, when Russian hackers “exploited a vulnerability in Kaseya’s virtual systems/server administrator (VSA) software allowing them to deploy a ransomware attack on the network. The hack affected around 1,500 small and midsized businesses, with attackers asking for $70 million in payment.”

Living in a digitally-interconnected world also means growing vulnerability for physical assets. Last year, you may recall hearing about the largest American fuel pipeline, the Colonial Pipeline, being the target of a ransomware attack. In May 2021, “the energy company shut down the pipeline and later paid a $5 million ransom. The attack is attributed to DarkSide, a Russian speaking hacking group.”

Cybersecurity is a matter of national (and international) security

As a newer operational component of the Department of Homeland Security, the Cybersecurity & Infrastructure Security Agency (CISA) in the U.S. “leads the national effort to understand, manage, and reduce risk to our cyber and physical infrastructure.” Its work helps “ensure a secure and resilient infrastructure for the American people.” CISA was established in 2018 and has since issued a number of emergency directives to help protect information systems. One recent example is Emergency Directive 22-02, aimed at the Apache Log4j vulnerability, which threatened the global computer network. In December 2021, CISA concluded that the Log4j security vulnerability posed “an unacceptable risk to Federal Civilian Executive Branch agencies and requires emergency action.”

In the private sector, “88% of boards now view cybersecurity as a business risk.” Historically, protection against cyber threats meant overreliance on traditional rules-based software and human monitoring efforts. Unfortunately, those traditional approaches to cybersecurity are failing the modern enterprise. This is due to several factors that, in combination, have surpassed our human ability to manage effectively. These factors include the sheer volume of digital activity; the growing number of unguarded vulnerabilities in computer systems (including IoT, BYOD, and other smart devices), code bases, and operational processes; the divergence of the two “technospheres” (Chinese and Western); the apparent rise in the number of cybercriminals; and the novel approaches used by hackers to stay one step ahead of their victims.

Costs of cybercrimes 

Security analysts and experts are responsible for hunting down and eliminating potential security threats. But this is tedious and often strenuous work, involving massive sets of complex data, with plenty of opportunity for false flags and threats that can go undetected.

When critical cyber breaches are found, remediation efforts can take 205 days on average. The shortcomings of current cybersecurity, coupled with the dramatic rise in cyberattacks, translate into significant costs. “Cybercrime costs the global economy about $445 billion every year, with the damage to business from theft of intellectual property exceeding the $160 billion loss to individuals.”

Cybercrimes can cause physical harm too. This “shifts the conversation from business disruption to physical harm with liability likely ending with the CEO.” By 2025, Gartner expects cybercriminals to “have weaponized operational technology environments successfully enough to cause human casualties.” Further, it is expected that by 2024, security incidents related to cyber-physical systems (CPSs) will create personal liability for 75% of CEOs. And, even without taking “the actual value of a human life into the equation,” by 2023, CPS attacks are predicted to create a financial impact in excess of $50 billion.

Modern countermeasures include zero trust systems, artificial intelligence, and automation

The key to effective cybersecurity is working smarter, not harder. And working smarter in cybersecurity requires a march toward Zero Trust Architecture and autonomous cyber, powered by AI and intelligent automation. Modernizing systems with the zero trust security paradigm and machine intelligence represents powerful countermeasures to cyberattacks.

The White House describes ‘Zero Trust Architecture’ in the following way:

“A security model, a set of system design principles, and a coordinated cybersecurity and system management strategy based on an acknowledgement that threats exist both inside and outside traditional network boundaries.  The Zero Trust security model eliminates implicit trust in any one element, node, or service and instead requires continuous verification of the operational picture via real-time information from multiple sources to determine access and other system responses.  In essence, a Zero Trust Architecture allows users full access but only to the bare minimum they need to perform their jobs.  If a device is compromised, zero trust can ensure that the damage is contained.  The Zero Trust Architecture security model assumes that a breach is inevitable or has likely already occurred, so it constantly limits access to only what is needed and looks for anomalous or malicious activity.  Zero Trust Architecture embeds comprehensive security monitoring; granular risk-based access controls; and system security automation in a coordinated manner throughout all aspects of the infrastructure in order to focus on protecting data in real-time within a dynamic threat environment.  This data-centric security model allows the concept of least-privileged access to be applied for every access decision, where the answers to the questions of who, what, when, where, and how are critical for appropriately allowing or denying access to resources based on the combination of sever.”
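The least-privileged access idea at the heart of this model can be sketched in a few lines. This is a simplified, hypothetical illustration (the roles, resources, and trust signals below are invented for the example, not any real system’s API): access is denied by default and granted only when every signal checks out, regardless of where the request originates.

```python
# Hypothetical policy table: each role maps to the minimum
# set of resources needed to do the job (least privilege).
POLICY = {
    "analyst": {"dashboards"},
    "engineer": {"dashboards", "build-servers"},
}

def allow(role, resource, device_trusted, mfa_passed):
    """Deny by default; grant only when identity, device posture,
    and policy all agree -- no implicit trust from network location."""
    return (mfa_passed
            and device_trusted
            and resource in POLICY.get(role, set()))

print(allow("analyst", "build-servers", True, True))   # outside role's minimum
print(allow("engineer", "build-servers", True, True))  # within role's minimum
```

Real Zero Trust deployments evaluate many more continuous signals (session risk, time, behavior), but the deny-by-default, verify-everything decision shape is the same.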

AI’s most significant capability is robust, lightning-fast data analysis that can learn from far greater volumes of data in less time than human security analysts ever could. Further, with the right compute infrastructure, artificial intelligence can be at work around the clock without fatigue, analyzing trends and learning new patterns. As a security measure, AI has already been implemented in some small ways, such as scanning and processing biometrics in mainstream smartphones or advanced analysis of vulnerability databases. But, as advanced technology adoption accelerates, autonomous cyber is likely to provide the highest level of protection against cyberattacks. It will add resiliency to critical infrastructure and global computer networks via sophisticated algorithms and intelligently orchestrated automation that can not only better detect threats but can also take corrective actions to mitigate risks. Actively spotting errors in security systems and automatically “self-healing” by patching them in real time significantly reduces remediation time.
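To make the detection idea concrete, here is a deliberately simple, hypothetical sketch (the log events and threshold are invented for illustration): flagging sources with an unusual rate of failed logins. An AI-driven monitor would learn such thresholds and patterns from data rather than hard-coding them, but the input—high-volume event streams no analyst could review by hand—is the same.

```python
from collections import Counter

def suspicious_sources(events, max_failures=3):
    """Count failed logins per source and flag sources that exceed
    a fixed threshold -- a hand-coded stand-in for the patterns an
    AI-driven monitor would learn from traffic."""
    failures = Counter(ip for ip, ok in events if not ok)
    return sorted(ip for ip, n in failures.items() if n > max_failures)

# Hypothetical login events: (source_ip, success)
log = [("10.0.0.5", False)] * 5 + [("10.0.0.7", True), ("10.0.0.7", False)]
print(suspicious_sources(log))
```

In an autonomous-cyber setting, a flagged source would also trigger an automated response, such as blocking the address or forcing re-authentication, rather than just an alert.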


Accelerating digital trends and cyber threats have elevated cybersecurity to a priority topic for corporations and governments alike. Technology and business leaders are recognizing the limitations of their organizations’ legacy systems and processes. There is growing community consciousness about the notion that, in light of the ever-evolving threat vectors, static and rules-based approaches are destined to fail. Augmenting cybersecurity systems with AI and automation can mitigate risks and speed up an otherwise time-consuming and costly process by identifying breaches in security before the damage becomes too widespread.

Is your enterprise prepared for what’s next in cybersecurity, artificial intelligence, and automation? Begin your enterprise AI Journey and read about the 18 skills needed to materialize your AI initiative, from ideation to production implementation.