Data

Big digital technology trends to watch in 2022

With everything that has been going on in the world over the past two years, it is not surprising to see so many coping with increased levels of stress and anxiety. In the past, we, as a global community, have had to overcome extreme challenges in order to make lives better for ourselves and those around us. From famines to wars to ecological disasters and economic woes, we have discovered and, in countless ways, invented new solutions to help our society evolve.

What we’ve learned from our rich, problem-laden history is that while the past provides much needed perspective, it is the future that can fill us with hope and purpose. And, from our lens, the future will be increasingly driven by innovation in digital technologies. Here are the big digital trends in 2022 and beyond.

Hyperautomation 

From early mechanical clocks to the self-powered machines that ushered in the industrial revolution to process robotization driven by data and software, automation has helped people produce more in less time. Emerging and maturing technologies such as robotic process automation (RPA), chatbots, artificial intelligence (AI), and low-code/no-code platforms have been delivering a new level of efficiency to many organizations worldwide.

Historically, automation was born out of convenience or luxury but, in today’s volatile world, it is quickly becoming a business necessity. Hyperautomation is an emerging phenomenon that uses multiple technologies such as machine learning and business process management systems to expand the depth and breadth of the traditional, narrowly-focused automation.

Think of hyperautomation as intelligent automation and orchestration of multiple processes and tools. So, whether your charter is to build resiliency in supply chain operations, create more personalized experiences for customers, speed up loan processing, save time and money on regulatory compliance, or shrink time to answers or insights, well-designed automation can get you there.     
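To make the orchestration idea concrete, here is a minimal sketch of what chaining several narrow automations into one loan-processing flow might look like. All of the step names (extract_fields, score_risk, notify) and the thresholds are hypothetical stand-ins for real RPA bots, trained ML models, and messaging services, not a reference implementation.

```python
# A minimal sketch of hyperautomation as orchestration: several narrow
# automations (document extraction, an ML risk score, a notification bot)
# chained into a single loan-processing flow. All step names are hypothetical.

def extract_fields(document: str) -> dict:
    """Stand-in for an RPA/OCR bot that pulls structured fields from a form."""
    return {"applicant": "A. Smith", "income": 85_000, "loan_amount": 300_000}

def score_risk(fields: dict) -> float:
    """Stand-in for a trained ML model; here, a simple loan-to-income proxy."""
    return fields["loan_amount"] / max(fields["income"], 1)

def notify(decision: str, fields: dict) -> None:
    """Stand-in for a chatbot or email bot that informs the applicant."""
    print(f"{fields['applicant']}: application {decision}")

def process_loan(document: str) -> None:
    """Orchestrate the individual automations end to end."""
    fields = extract_fields(document)
    decision = "approved" if score_risk(fields) < 4.0 else "routed to underwriter"
    notify(decision, fields)

process_loan("loan_application.pdf")
```

The value comes less from any single step than from wiring the steps together so the whole process runs with little or no human hand-off.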

Gartner predicts that the market for hyperautomation-enabling software will reach nearly $600 billion in 2022. Further, “Gartner expects that by 2024, organizations will lower operational costs by 30% by combining hyperautomation technologies with redesigned operational processes.”

Hybrid Cloud

Moore’s Law, the growing demand for compute availability anywhere, anytime, and the rising costs of hardware, software, and talent, together gave rise to the Public Cloud as an alternative to on-premises or “on-prem” infrastructure. From there, add cybersecurity and data privacy concerns and you can see why Private Clouds provide value. Now mix in the unavoidable need for business and IT agility and you can see the push toward the Hybrid Cloud.

Enterprises recognize that owning and managing their own on-prem infrastructure is expensive, both in terms of initial capital and in terms of the scarce technical talent required to maintain and improve it over time. One approach to addressing that challenge is to off-load as much non-critical computing activity to the cloud as possible. A third-party provider can offer the compute infrastructure, system architecture, and ongoing maintenance to address the needs of many. This approach reflects the benefits of specialization. There is no need to maintain holistic systems on-premises when so much can be off-loaded to specialists offering IaaS (Infrastructure-as-a-Service), PaaS (Platform-as-a-Service), and SaaS (Software-as-a-Service).

The challenge for enterprises, though, is in striking the right balance between on-premises and cloud services. Hybrid Cloud combines private and public cloud computing to provide organizations the scale, security, flexibility, and resiliency they require for their digital infrastructure in today’s business environment. “The global hybrid cloud market exhibited strong growth during 2015-2020. Looking forward, the market is expected to grow at a CAGR of 17.6% during 2021-2026.”
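To make “striking the right balance” a bit more concrete, the sketch below shows one simple, illustrative workload-placement policy: regulated or sensitive workloads stay on the private cloud, and everything else bursts to the public cloud when on-prem capacity runs low. The field names and the 80% utilization threshold are assumptions for illustration only, not any particular vendor’s API.

```python
# A minimal sketch of a hybrid-cloud placement policy. Field names and the
# 80% capacity threshold are illustrative assumptions.

def place_workload(workload: dict, private_utilization: float) -> str:
    """Decide where a workload should run under a simple hybrid-cloud policy."""
    if workload.get("contains_pii") or workload.get("regulated"):
        return "private-cloud"          # keep sensitive data on-prem
    if private_utilization > 0.80:      # on-prem capacity nearly exhausted
        return "public-cloud"           # burst non-critical work out
    return "private-cloud"

print(place_workload({"name": "payroll", "contains_pii": True}, 0.60))        # private-cloud
print(place_workload({"name": "batch-render", "contains_pii": False}, 0.92))  # public-cloud
```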

As companies and their data become ever more intermeshed with one another, the complexity, along with the size of the market, will increase even further.

Privacy-preserving machine learning

The digital universe is facing a global problem that isn’t easy to fix—ensuring data privacy at a time when virtually every commercial or governmental service we use in our daily lives revolves around data. With growing public awareness of frequent data breaches and mistreatment of consumer data (partly thanks to the Facebook-Cambridge Analytica data fiasco, Yahoo’s data breach impacting 3 billion accounts, and the Equifax system breach in 2017, to name a few), companies and governments are taking additional steps to rebuild trust with their customers and constituents. In 2016, Europe introduced GDPR as a consolidated set of privacy laws to ensure a safer digital economy. However, “the United States doesn’t have a singular law that covers the privacy of all types of data.” Here, we take a more patchwork approach to data protection to address specific circumstances—HIPAA covering health information, FCRA for credit reports, or ECPA for wiretapping restrictions.

With the explosion of both edge computing (expected to reach $250.6 billion in 2024) and an ever greater number and capacity of IoT (Internet of Things) devices and smart machines, the volume of data available for machine learning is vast in quantity and increasingly diverse in its sources. One of the central challenges is how to extract the value of machine learning applied to these data sources while maintaining the privacy of personally identifiable or otherwise sensitive data.

Even with the best security, unless models are properly designed, the risk that trained AI models will incidentally breach the privacy of their underlying data sets is real and increasing. Any company that deals with potentially sensitive data, uses machine learning to extract value, or works in close data alliance with one or more other companies has to be concerned about the directions machine learning can take and the possible breaches of underlying data privacy.

The purpose of privacy-preserving machine learning is to train models in ways that protect sensitive data without degrading model performance. Historically this has been addressed by data anonymization or obfuscation techniques, but anonymization frequently reduces or, in some cases, eliminates the value of the data. Today, other techniques are being applied as well to better ensure data privacy, including federated machine learning, designed to train a centralized model via decentralized nodes (e.g. “training data locally on users’ mobile devices rather than logging it to a data center for training”), and differential privacy, which makes it possible to collect and share user information while maintaining the user’s privacy by adding “noise” to the user inputs.
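As a minimal illustration of the “noise” idea behind differential privacy, the sketch below answers a simple counting query with the Laplace mechanism: because adding or removing one person changes the true count by at most one, Laplace noise scaled by 1/epsilon yields epsilon-differential privacy. The data values and the epsilon parameter are illustrative only.

```python
import numpy as np

def dp_count(values, threshold, epsilon=1.0):
    """Return a differentially private count of values above a threshold.

    The true count has sensitivity 1 (one person changes it by at most 1),
    so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if v > threshold)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: privately count how many users spent more than 100 units.
spending = [23, 150, 87, 310, 44, 120]
print(dp_count(spending, threshold=100, epsilon=0.5))
```

Smaller epsilon values add more noise and therefore stronger privacy, at the cost of a less accurate answer; that trade-off is exactly the “without degrading model performance” challenge described above.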

Privacy-preserving machine learning is a complex and evolving field. Success in this area will help rebuild consumer trust in our digital economy and unleash untapped potential in advanced data analytics that is currently restricted due to privacy concerns.   

Digital Twins

Thanks to recent advances in AI, automation, IoT, cloud computing, and robotics, Industry 4.0 (the Fourth Industrial Revolution) has already begun. As the world of manufacturing and commerce expands and the demand for virtualization grows, digital twins find a footing. A digital twin is the virtual representation of a product, a process, or a product’s performance. It encompasses the designs, specifications, and quantifications of a product—essentially all the information required to describe what is produced, how it is produced, and how it is used.

As enterprises digitize, the concept of digital twins becomes ever more central. Anything that performs under tight specifications, has high capital value, and needs to perform at exceptional levels of consistency is a candidate for digital twinning. Digital twins give companies the ability to use virtual simulations as a faster, more effective way to solve real-world problems. Think of these simulations as a way to validate and test products before they exist—jet engines, water supply systems, advanced performance vehicles, anything sent into space—and of the opportunity to augment digital twins with advanced AI to foresee problems in the performance of products, factory operations, retail spaces, personalized health care, or even smart cities of the future.

In the case of products, we need to know that a product is produced to specifications, and we need to understand how it is performing in order to refine and improve its performance. Take wind turbines as an example. They are macro-engineered products with a high capital price tag, performing under harsh conditions, and engineered to very tight specifications. Anything that can be learned to improve performance and reduce wear is quite valuable. Sensing devices can report, in real time, wind strength, humidity, time of day, wind fitfulness, temperature and temperature gradients, number of bird strikes, turbine heat, and more.
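Here is a toy sketch of what the software side of such a twin might look like: a virtual record of one turbine that ingests real-time sensor readings and flags values that drift outside the design specification. The class, the field names, and the 95°C bearing-temperature limit are hypothetical, chosen only to illustrate the pattern.

```python
from dataclasses import dataclass, field

@dataclass
class TurbineTwin:
    """A toy digital twin of a wind turbine (illustrative field names only)."""
    turbine_id: str
    max_bearing_temp_c: float = 95.0        # design specification (assumed)
    readings: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Record a real-time sensor reading and flag out-of-spec conditions."""
        self.readings.append(reading)
        if reading.get("bearing_temp_c", 0) > self.max_bearing_temp_c:
            print(f"{self.turbine_id}: bearing temperature "
                  f"{reading['bearing_temp_c']}°C exceeds specification")

twin = TurbineTwin("WT-042")
twin.ingest({"wind_speed_ms": 14.2, "humidity_pct": 63, "bearing_temp_c": 97.5})
```

In a real deployment the twin would also carry the design geometry and maintenance history, and the simple threshold check would be replaced by simulation and predictive models.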

The global market for digital twins “is expected to expand at a compound annual growth rate (CAGR) of 42.7% from 2021 to 2028.” Digital twins provide an example of both the opportunity created by digitization and the complexity arising from that process. The volume of data is large and complex. Advanced analysis of that data with AI and machine learning, along with process automation, improves a company’s ability to better manage production risks, proactively identify system failures, reduce build and maintenance costs, and make data-driven decisions.

Metaverse

Lately, you may have heard a lot of buzz about the “metaverse” and the future of the digital world. Much of the recent hype can be attributed to the rebranding of Facebook back in October 2021, when the company changed its corporate name to Meta. At present, the metaverse is still largely conceptual, with no collective agreement on its definition. That said, there are core pieces already in place to enable a digital universe that will feel more immersive and three-dimensional in every aspect compared to what we experience via the Internet today.

Well-known large tech companies including Nvidia, Microsoft, Google, and Apple are already playing their part in making the metaverse a reality, and other companies and investors are piling on. Perhaps the “gaming companies like Roblox and Epic Games are the farthest ahead building metaverses.” Meta expects to spend $10 billion on its virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies this fiscal year in support of its metaverse vision.

Some of the building blocks of the metaverse include strong social media acceptance and use, strong gaming experiences, Extended Reality or XR (the umbrella term for VR, AR, and MR) hardware, blockchain, and cryptocurrencies. Even digital twin technology plays a role here. While there is no explicit agreement as to what constitutes a metaverse, the general idea is that at some point we should be able to integrate the already-existing Internet with a set of open, universally interoperable virtual worlds and technologies. The metaverse is expected to be much larger than what we’re used to in our physical world. We’ll be able to create an endless number of digital realms, unlimited digital things to own, buy, or sell, and all the services we can conjure to blend what we know about our physical world with fantasy. The metaverse will have its own citizens and avatar inhabitants as well as its own set of rules and economies. And it won’t be just for fun and games. Scientists, inventors, educators, designers, engineers, and businesses will all participate in the metaverse to solve technical, social, and environmental challenges and enable health and prosperity for more people in more places in our physical world.

Instead of clunky video chats or conference calls, imagine meetings that are fully immersive and feel natural. Training could be shifted from instruction to experience. Culture building within an enterprise could occur across a geographically distributed workforce. Retail could be drastically transformed with most transactions occurring virtually. For example, even the mundane task of grocery shopping could be almost entirely shifted into a metaverse where, from the comfort of your den, you can wander up and down the aisles, compare products and prices, and feel the ripeness of fruits and vegetables.

AI’s contribution to the metaverse will be significant. AIOps (Artificial Intelligence for IT Operations) will help manage the highly complex infrastructure. Generative AI will create digital content and assets. Autonomous agents will provide and trade all sorts of services. Smart contracts will keep track of digital currency and other transactions on decentralized ledgers in ways that will disintermediate the big tech companies. Deep reinforcement learning will help design better computer chips at unprecedented speed. A series of machine learning models will help personalize gaming and educational experiences. In short, the metaverse will be limited only by compute resources and our imagination.

To deliver on its promise, the metaverse will have to overcome certain challenges. Perhaps once again, our technology is leaping ahead of our social norms and our regulatory infrastructure. From data and security to laws and governing jurisdictions, inclusion and diversity, property and ownership, as well as ethics, we will need our best collective thinking and collaborative partnerships to create new worlds. As with the start of the Internet and our experiences thus far, we can expect many experiments, false starts, and delays before the metaverse lands on frameworks and applications that are truly useful and decentralized.

The metaverse market size is expected to reach $872 billion in 2028, representing a 44.1% CAGR between 2020-2028.

Blockchain

Blockchain is closely associated with cryptocurrency but is by no means restricted in its application to cryptocurrency. Blockchain is essentially a decentralized database that allows for simultaneous use and sharing of digital transactions via a distributed network. To track or trade anything of value, blockchain can create a secure record or ledger of transactions which cannot be later manipulated. In many ways, blockchain is a mechanism to create trust in the context of a digital environment. Participants gain the confidence that the data represented is indeed real because the ledgers and transactions are immutable. The records cannot be destroyed or altered.  
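The immutability described above comes from chaining cryptographic hashes: each block’s hash covers its own contents plus the previous block’s hash, so altering any recorded transaction breaks verification for that block and everything after it. Below is a minimal, illustrative sketch of that mechanism; it is not a production ledger and includes no consensus protocol or peer-to-peer networking.

```python
import hashlib, json, time

def make_block(data, prev_hash):
    """Create a block whose hash covers its contents and the previous hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash to confirm no block has been altered."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block({"sale": "123 Main St", "price": 450_000}, chain[-1]["hash"]))
print(verify(chain))           # True
chain[1]["data"]["price"] = 1  # tamper with a recorded transaction
print(verify(chain))           # False
```

Because every participant holds a copy of the chain and can run this kind of verification, no single party can quietly rewrite history, which is what creates trust without a central intermediary.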

The emergence of Bitcoin and alternative cryptocurrencies in recent years has put blockchain technology on a fast adoption curve, in and out of finance. Traditionally, recording and tracking transactions is handled separately by the participants involved. For example, it can take a village to complete a real estate transaction—buyers, sellers, brokers, escrow companies, lenders, appraisers, inspectors, insurance companies, government, and more—and that can lead to many inefficiencies, including record duplication, processing delays, and potential security vulnerabilities. Blockchain technology is collaborative, giving all users collective control and allowing transactions to be recorded in a decentralized manner via a peer-to-peer network. Each participant in the business network can record, receive, or send transactions to other participants, making the entire process safer, cheaper, and more efficient.

The use cases for blockchain are growing and include using the technology to explore medical research and improve record accuracy in health care, transfer money, create smart contracts that track the sale of and settle payments for goods and services, improve IoT security, and bring additional transparency to supply chains. By 2028, the market for blockchain technology is forecast to expand to approximately $400 billion, an 82.4% CAGR from 2021 to 2028.

Web3

Web3 or Web 3.0 is an example of real technology with real applications that may be suffering from definitional miasma. In short, Web3 is your everyday web minus the centralization that has evolved over the past two decades due to the remarkable success of a few big tech juggernauts.

The original web, Web 1.0, was pretty disorganized and dominated by mostly static pages. Web 2.0 brought a more interactive web built on user-generated content, which made social media, blogging (including microblogging), search, crowdsourcing, and online gaming snowball.

Web 2.0 allowed the web to grow much larger and more useful while, simultaneously, growing more risky, with advertising as a key business model. The wild west of social platforms, content, and democratized tools gave rise to a set of downsides—information overload, a host of Internet addictions, fake news, hate speech, forgeries and fraud, citizen journalism without guard rails, and more.

Web 3.0 (not to be confused with Tim Berners-Lee’s concept of the Semantic Web, which is also sometimes referred to as Web 3.0) aims to transform the current web, which is highly centralized under the control of a handful of very large tech companies. Web3 focuses on decentralization based on blockchains and is closely associated with cryptocurrencies. Advocates of Web3 see the semantic web and AI as critical elements to ensure better security, privacy, and data integrity, as well as to resolve the broader issue of decentralizing technology away from the dominance of existing technology companies. With Web3, your online data will remain your property, and you will be able to move or monetize that data freely without being dependent on any particular intermediary.

Since both Web3 and the Semantic Web involve an evolution from our current Web 2.0, it makes sense that both have arrived at Web 3.0 as a descriptor, but each describes related yet different aspects of improving the Web. Projections for Web3 are difficult to develop, but the issues Web3 addresses will be key to the emergence of a more open and versatile Internet.

Autonomous Cyber

Put malicious intent and software together and you’ll get a quick sense of what keeps information security (InfoSec) executives and cybersecurity professionals up at night. Or day, for that matter. Malware, short for malicious software, is an umbrella term for a host of cybersecurity threats facing us all, including spyware, ransomware, worms, viruses, trojan horses, cryptojacking, adware, and more. Cybercriminals (often referred to as “hackers”) use malware and other techniques, such as phishing and man-in-the-middle attacks, to wreak havoc on computers, applications, and networks. They do this to destroy or damage computer systems, steal or leak data, or collect ransom in exchange for returning control of assets to the original owner.

Cyber threats are on the rise globally and, well beyond individual or corporate interests, they are a matter of national and international security. The dangers they represent put at risk not only digital assets but critical physical infrastructure as well. In the United States, the Cybersecurity & Infrastructure Security Agency (CISA), a newer operational component of the Department of Homeland Security, “leads the national effort to understand, manage, and reduce risk to our cyber and physical infrastructure.” Its work helps “ensure a secure and resilient infrastructure for the American people.” Since its formation in 2018, CISA has issued a number of Emergency Directives to help protect information systems. A recent example is Emergency Directive 22-02, aimed at the Apache Log4j vulnerability, which has broad implications and threatens the global computer network. The vulnerability “poses unacceptable risk to Federal Civilian Executive Branch agencies and requires emergency action.”

Gone are the days when computer systems and networks could be protected via simple rules-based software tools or human monitoring efforts. In terms of risk assessment, “88% of boards now view cybersecurity as a business risk.” Traditional approaches to cybersecurity are failing the modern enterprise because the sheer volume of cyber activity, the ever-increasing number of undefended areas in code, systems, or processes (including the additional openings exposed by the massive proliferation of IoT devices in recent years), the growing number of cybercriminals, and the sophistication of attack techniques have, in combination, surpassed our ability to manage them effectively.

Enter Autonomous Cyber, powered by machine intelligence. This is a rapidly developing field, both technologically and in terms of state governance and international law. Autonomous Cyber is the story of digital technology in three acts.

Act I –  The application of AI to our digital world.

Act II – The use of AI to automate attacks by nefarious agents or State entities to penetrate information systems.

Act III – The use of AI, machine learning, and intelligent process automation against cyber-attacks.

Autonomous Cyber leverages AI to continuously monitor an enterprise’s computing infrastructure, applications, and data sources for unexpected changes in patterns of communication, navigation, and data flows. The idea is to use sophisticated algorithms that can distinguish what is normal from what might be abnormal (representing potential risk), intelligent orchestration, and other automation to take certain actions at speed and scale. These actions include creating alerts and notifications, rerouting requests, blocking access, or shutting off services altogether. And best of all, these actions can be designed to either augment human power, such as the capabilities of a cybersecurity professional, or be executed independently without any human control. This can help companies and governments build better defensibility and responsiveness to ensure critical resiliency.  
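Below is a highly simplified sketch of the “distinguish normal from abnormal, then act” loop described above. A z-score over a learned traffic baseline stands in for the far richer models a real platform would use, and the returned actions mirror the alert/block responses mentioned in the paragraph. All numbers, thresholds, and action names are illustrative assumptions.

```python
import statistics

def detect_and_respond(baseline_rates, current_rate, z_threshold=3.0):
    """Score current traffic against a learned baseline and choose a response.

    A minimal sketch: a real system would combine many signals and models,
    but the control flow (score, decide, act automatically) is the same.
    """
    mean = statistics.mean(baseline_rates)
    stdev = statistics.stdev(baseline_rates) or 1.0   # avoid divide-by-zero
    z_score = (current_rate - mean) / stdev
    if z_score > z_threshold:
        return "block_and_alert"      # autonomous action, no human in the loop
    if z_score > z_threshold / 2:
        return "alert_analyst"        # augment the cybersecurity professional
    return "allow"

normal = [120, 118, 125, 130, 122, 119, 127]   # requests/second, historical
print(detect_and_respond(normal, 128))         # allow
print(detect_and_respond(normal, 450))         # block_and_alert
```

The design choice that matters is the graduated response: low-risk deviations augment the human analyst, while extreme deviations trigger autonomous containment at machine speed.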

The global market for AI in cybersecurity “is projected to reach USD 38.2 billion by 2026 from USD 8.8 billion in 2019, at the highest CAGR of 23.3%.” The use of AI by hackers and state actors means that there will be less time between innovations. Enterprise security systems therefore cannot simply look for replications of old attack patterns. They have to be able to identify, in more places and systems, new schemes of deliberate attack or accidental and natural disruption as they emerge in real time. AI in the context of cybersecurity is becoming a critical line of defense and, with technology’s evolution cycles shortening, Autonomous Cyber has the capacity to continuously monitor and autonomously respond to evolving cyber threats.

Conclusion

At Entefy, we are passionate about breakthrough computing that can save people time so that they can live and work better.

Now more than ever, digital is dominating consumer and business behavior. Consumers’ 24/7 demand for products, services, and personalized experiences wherever they are is forcing businesses to optimize and, in many cases, reinvent the way they operate to ensure efficiency, ongoing customer loyalty, and employee satisfaction. This requires advanced technologies including AI and automation. To learn more about these technologies, be sure to read our previous blogs on multimodal AI, the enterprise AI journey, and the “18 important skills” you need to bring AI applications to life.