AI ethics and adding trust to the equation

The size and scope of increasingly global digital activity are well documented. Growing consumer and business demand, in essentially every vertical market, for productivity, knowledge, and communication is pushing the world into a new paradigm of digital interaction, much like mobility, the Internet, and the PC before it, but on a far more massive scale. The new paradigm brings machine intelligence to virtually every aspect of our digital world, from conversational interfaces to robot soldiers, autonomous vehicles, deepfakes, virtual tutoring systems, shopping and entertainment recommendations, and even minute-by-minute cryptocurrency price predictions.

The adoption of new technologies typically follows a fairly predictable S-curve during their lifecycles—from infancy to rapid growth, maturity, and ultimately decline. With machine learning, in many ways, we are still in the early stages. This is especially the case for artificial general intelligence (AGI), which aims at machine intelligence that is human-like in terms of self-reasoning, planning, learning, and conversing. In science fiction, AGI experiences consciousness and sentience. Although we may be many years away from the sci-fi version of human-like systems, today’s advanced machine learning is creating a step change in interaction between people, machines, and services. At Entefy, we refer to this new S-curve as universal interaction. It is important to note that this step change impacts nearly every facet of our society, making ethical development and practices in AI and machine learning a global imperative.

Ethics

Socrates’ teachings about ethics and standards of conduct nearly 2,500 years ago have had a lasting impact ever since. The debate about what constitutes “good” or “bad” and, at times, the blurry lines between them is frequent and ongoing. And, over millennia, disruptive technologies have influenced this debate with significant new inventions and discoveries.

Today, ethical standards are in place across many industries. In most cases, these principles exist to protect lives and liberty and to ensure integrity in business. Examples include the Law Enforcement Oath taken by local and state police officers, the oath required of attorneys as officers of the court, professional codes of ethics and conduct, and the mandatory oaths taken by federal employees and members of the armed forces.

The Hippocratic Oath in medicine is another well-known example. Although the specific pledges vary by medical school, in general there is recognition of the unique role doctors play in their patients’ lives and a code of ethics that guides the actions of physicians. The modern version of the Hippocratic Oath takes guidance from the Declaration of Geneva, adopted by the World Medical Association (WMA). Among other ethical commitments, the Physician’s Pledge states:

“I will not permit considerations of age, disease or disability, creed, ethnic origin, gender, nationality, political affiliation, race, sexual orientation, social standing, or any other factor to intervene between my duty and my patient.”

This clause is striking for its relevance to a different set of technology fields that today are still in their early phases: AI, machine learning, hyperautomation, cryptocurrency. With AI systems already at work in many areas of life and business, from medicine to criminal justice to surveillance, perhaps it’s not surprising that members of the AI and data science community have proposed an algorithm-focused version of a Hippocratic oath. “We have to empower the people working on technology to say ‘Hold on, this isn’t right,’” said DJ Patil, the U.S. Chief Data Scientist in President Obama’s administration. The group’s 20 core principles include ideas such as “Bias will exist. Measure it. Plan for it.” and “Exercise ethical imagination.”

The case for trustworthy and ethical AI

The need for professional responsibility in the field of artificial intelligence cannot be overstated. There are many high-profile cases of algorithms exhibiting biased behavior resulting from the data used in their training. The examples that follow add weight to the argument that AI ethics are not just beneficial, but essential:

  1. Data challenges in predictive policing. AI-powered predictive policing systems are already in use in cities including Atlanta and Los Angeles. These systems leverage historic demographic, economic, and crime data to predict specific locations where crime is likely to occur. So far so good. The ethical challenges of these systems became clear in a study of one popular crime prediction tool. The predictive policing system, developed by the Los Angeles Police Department in conjunction with university researchers, was shown to worsen the already problematic feedback loop present in policing and arrests in certain neighborhoods. An attorney from the Electronic Frontier Foundation said, “If predictive policing means some individuals are going to have more police involvement in their life, there needs to be a minimum of transparency.”
  2. Unfair credit scoring and lending. Operating on the premise that “all data is credit data,” machine learning systems are being designed across the financial services industry in order to determine creditworthiness using not only traditional credit data, but social media profiles, browsing behaviors, and purchase histories. The goal on the part of a bank or other lender is to reduce risk by identifying individuals or businesses most likely to default. Research into the results of these systems has identified cases of bias such as two businesses of similar creditworthiness receiving different scores due to the neighborhood in which each business is located.
  3. Biases introduced into natural language processing. Computer vision and natural language processing (NLP) are subfields of artificial intelligence that give computer systems digital eyes, ears, and voices. Keeping human bias out of those systems is proving to be challenging. One Princeton study into AI systems that leverage information found online showed that the biases people exhibit can make their way into AI algorithms via the systems’ use of Internet content. The researchers observed “that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.” This matters because other solutions and systems often use machine learning models that were trained on similar types of datasets.
  4. Limited effectiveness of health care diagnosis. There is limitless potential for trustworthy, ethical AI-powered systems to improve patients’ lives. Entefy has written extensively on the topic, including the analysis of 9 paths to AI-powered affordable health care, how machine learning can outsmart coronavirus, improving the relationship between patients and doctors, and AI-enabled drug discovery and development.

    The ethical AI considerations in the health care industry emerge from the data and whether the data includes biases tied to variability in the general population’s access to and quality of health care. Data from past clinical trials, for instance, is likely to be far less diverse than the face of today’s patient population. Said one researcher, “At its core, this is not a problem with AI, but a broader problem with medical research and healthcare inequalities as a whole. But if these biases aren’t accounted for in future technological models, we will continue to build an even more uneven healthcare system than what we have today.”
  5. Impaired judgment in the criminal justice system. AI is performing a number of tasks for courts, such as supporting judges in bail hearings and sentencing. One study of algorithmic risk assessment in criminal sentencing revealed the need to remove bias from these systems. Examining the risk scores of more than 7,000 people arrested in Broward County, Florida, the study concluded that the system was not only inaccurate but plagued with biases. For example, it was only 20% accurate in predicting future violent crimes and twice as likely to inaccurately flag African-American defendants as likely to commit future crimes. Yet these systems contribute to sentencing and parole decisions. A minimal sketch of measuring this kind of disparity appears after this list.
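
Measuring this kind of gap is straightforward once predictions and outcomes can be compared by group. Below is a minimal, hypothetical sketch of computing false positive rates per demographic group; the data, group labels, and field names are illustrative only and are not drawn from the Broward County study.

```python
# Hypothetical records: (group, predicted_high_risk, reoffended).
# A large gap in false positive rates across groups is one signal of bias.
from collections import defaultdict

records = [
    ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

false_positives = defaultdict(int)
actual_negatives = defaultdict(int)

for group, predicted_high_risk, reoffended in records:
    if not reoffended:                  # look only at people who did not reoffend
        actual_negatives[group] += 1
        if predicted_high_risk:         # flagged high risk despite not reoffending
            false_positives[group] += 1

for group in sorted(actual_negatives):
    rate = false_positives[group] / actual_negatives[group]
    print(f"group {group}: false positive rate = {rate:.2f}")
```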

Ensuring AI ethics in your organization

The topic of ethical AI is no longer purely philosophical. It is being shaped and legislated to protect consumers, business, and international relations. According to a White House fact sheet, “the United States and European Union will develop and implement AI systems that are innovative and trustworthy and that respect universal human rights and shared democratic values, explore cooperation on AI technologies designed to enhance privacy protections, and undertake an economic study examining the impact of AI on the future of our workforces.”

The European Commission has already drafted “The Ethics Guidelines for Trustworthy Artificial Intelligence (AI)” to promote ethical principles for organizations, developers, and society at large. To bring trustworthy AI to your organization, begin with structuring AI initiatives in ways that are lawful, ethical, and robust. Ethical AI demands respect for applicable laws and regulations, as well as ethical principles and values, while making sure the AI system itself is resilient and safe. In its guidance, the European Commission presents the following 7 requirements for trustworthy AI:

  1. Human agency and oversight
    Including fundamental rights, human agency and human oversight
  2. Technical robustness and safety
    Including resilience to attack and security, fall back plan and general safety, accuracy, reliability and reproducibility
  3. Privacy and data governance
    Including respect for privacy, quality and integrity of data, and access to data
  4. Transparency
    Including traceability, explainability and communication
  5. Diversity, non-discrimination and fairness
    Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation
  6. Societal and environmental wellbeing
    Including sustainability and environmental friendliness, social impact, society and democracy
  7. Accountability
    Including auditability, minimisation and reporting of negative impact, trade-offs and redress

Be sure to read Entefy’s previous article on actionable steps to include AI ethics in your digital and intelligence transformation initiatives. Also, brush up on AI and machine learning terminology with our 53 Useful terms for anyone interested in artificial intelligence.

Can AI be officially recognized as an inventor?

Invention is generally considered a pinnacle of the creative mind. Since the dawn of the computer age, the question of whether an artificial intelligence (“AI”) can be such a creative mind has been the subject of philosophical debate. More recently, with advancements in computing and deep learning, in particular multimodal machine learning, this question is no longer purely philosophical. It has reached the realm of practical consideration. While still limited, advanced AI systems are now able to assimilate vast amounts of data and synthesize learning to produce discovery at scales far beyond human capability.

Today, machine intelligence is used to decode genetic sequences for vaccine development, determine how to fold protein structures, accurately detect human emotions, compose music, operate autonomous vehicles, identify fraud in finance, generate smart content in education, create smarter robots, and much more. These are all made possible through a series of machine learning techniques. In short, these techniques teach machines how to make sense of the data and signals exposed to them, often in very human ways. At times, generative machine learning models are even capable of creating content that is equivalent to or better than what people create. A recent study found that AI-generated images of faces “are indistinguishable from real faces and more trustworthy.” From the perspective of practical application, there is little debate that AI systems can learn and engage in creative discovery. More recently, however, the creative potential of artificial intelligence has posed an intriguing legal question: Can an AI system be an “inventor” and be issued a patent?

This legal question was put to the test by physicist Dr. Stephen Thaler, inventor of DABUS (Device for Autonomous Boot-strapping of Unified Sentience). In 2019, Dr. Thaler filed two patent applications in a number of countries naming DABUS as the sole inventor. One patent application related to the making of containers using fractal geometry, and the other to producing light that flickers rhythmically in patterns mimicking human neural activity.

While the actual invention claims were found allowable, the United States Patent and Trademark Office (USPTO) decided that only a natural person could be an inventor. Since DABUS is a machine and not a natural person, a patent could not be issued. The matter was appealed to the U.S. District Court for the Eastern District of Virginia, which agreed with the USPTO based on the plain language of the law describing the inventor as an “individual.” The Court noted that while the term “person” can include legal entities, such as corporations, the term “individual” refers to a “natural person” or a human being. According to the Court, whether an artificial intelligence system can be considered an inventor is a policy question best left to the legislature.

This same question came before the Federal Court of Australia, when the Australian patent agency refused to award a patent to DABUS. The Australian Federal Court, however, did not limit its consideration to a narrow discussion on the meaning of a word, but asked a more fundamental question as to the nature and source of human versus AI creativity: “We are both created and create. Why cannot our own creations also create?” The Australian Federal Court found that, while DABUS did not have property rights to ‘own’ a patent, it was nonetheless legally able to be an ‘inventor.’

Both this debate and its practical applications will have global implications for how people interact with AI. International approaches will continue to vary and perhaps further complicate matters. In the case of Saudi Arabia, and later the United Arab Emirates, a robot named Sophia was granted citizenship. So, could this robot be officially recognized as an inventor?

In the United States, the question of whether an AI system can be an inventor is pending appeal before the Federal Circuit and may come before the Supreme Court, which has found that non-natural persons (such as corporations) can have a right to free exercise of religion. So far, the American legal response has been to interpret the wording of the patent statutes narrowly and pass the question of whether an AI system can be an inventor to policymakers.

As a society, the question of whether we will grant AI personhood or treat an AI system as an individual—able, for example, to make a contract, swear an oath, sign a declaration, have agency, or be held accountable—can be confounding. As AI systems make more independent decisions that impact our society, these questions rise in practical importance.

Learn more about AI and machine learning in our previous blogs. Start here with key AI terms and concepts, differences between machine learning and traditional data analytics, and the “18 important skills” needed to bring enterprise-level AI projects to life. 


Entefy closes latest funding round and adds Paul Ross, former CFO of The Trade Desk, to its Board of Directors

PALO ALTO, Calif., February 1, 2022— Entefy Inc. announced today the appointment of Paul Ross to its Board of Directors. As a multi-time public company CFO, Ross brings to Entefy a wealth of finance and scale leadership experience. Ross’ expertise in public company finance, IPO preparation, human resources, and rapid scaling of corporate infrastructure make him a valuable resource for the executive management team and Board.

The global disruptions to supply chains, the workforce, and consumer behavior brought on by the pandemic have resulted in a significant rise in enterprise demand for AI and process automation. Over the past year, Entefy has been experiencing growth in multiple key areas including its AI technology footprint, company valuation, customer orders, data center operations, and intellectual property (now spanning hundreds of trade secrets and patents combined). The company also recently closed its Series A-1 round which, combined with its previous funding rounds, totals more than $25M in capital raised to date.

As the first CFO of The Trade Desk, Inc. (NASDAQ: TTD), Ross rapidly prepared the company for its highly successful IPO while helping grow revenue more than 20x over five years. Ross and the rest of the leadership team created more than $20 billion in market valuation over the same period as the company transitioned from a private enterprise to a profitable public company. Prior to The Trade Desk, Ross held several CFO roles in technology and related industries. Ross holds an MBA from the University of Southern California and a bachelor’s degree from the University of California, Los Angeles. He earned his Certified Public Accountant (CPA) certification while at PwC.

Entefy Chairman and CEO, Alston Ghafourifar, said, “I’m delighted to welcome Paul to Entefy’s Board of Directors. He’s a world-class executive with the relevant financial leadership and growth experience to support our journey toward the future of AI and automation for organizations everywhere.” “I’m thrilled about this opportunity and proud to have joined Entefy during this exciting phase,” said Ross. “Entefy’s team has done an amazing job building a highly differentiated AI and automation technology to help businesses achieve growth and resiliency, especially in times like these.”

ABOUT ENTEFY 

Entefy is an advanced AI software and process automation company, serving enterprise customers across diverse industries including financial services, health care, retail, and manufacturing.

www.entefy.com

pr@entefy.com


Big digital technology trends to watch in 2022

With everything that has been going on in the world over the past two years, it is not surprising to see so many coping with increased levels of stress and anxiety. In the past, we, as a global community, have had to overcome extreme challenges in order to make lives better for ourselves and those around us. From famines to wars to ecological disasters and economic woes, we have discovered and, in countless ways, invented new solutions to help our society evolve.

What we’ve learned from our rich, problem-laden history is that while the past provides much needed perspective, it is the future that can fill us with hope and purpose. And, from our lens, the future will be increasingly driven by innovation in digital technologies. Here are the big digital trends in 2022 and beyond.

Hyperautomation 

From early mechanical clocks to the self-driven machines that ushered in the industrial revolution to process robotization driven by data and software, automation has helped people produce more in less time. Emerging and maturing technologies such as robotic process automation (RPA), chatbots, artificial intelligence (AI), and low-code/no-code platforms have been delivering a new level of efficiency to organizations worldwide.

Historically, automation was born out of convenience or luxury but, in today’s volatile world, it is quickly becoming a business necessity. Hyperautomation is an emerging phenomenon that uses multiple technologies such as machine learning and business process management systems to expand the depth and breadth of the traditional, narrowly-focused automation.

Think of hyperautomation as intelligent automation and orchestration of multiple processes and tools. So, whether your charter is to build resiliency in supply chain operations, create more personalized experiences for customers, speed up loan processing, save time and money on regulatory compliance, or shrink time to answers or insights, well-designed automation can get you there.     

Gartner predicts that the market for hyperautomation-enabling software will reach nearly $600 billion in 2022. Further, “Gartner expects that by 2024, organizations will lower operational costs by 30% by combining hyperautomation technologies with redesigned operational processes.”

Hybrid Cloud

Moore’s Law, the growing demand for compute availability anywhere, anytime, and the rising costs of hardware, software, and talent, together gave rise to the Public Cloud as an alternative to on-premises or “on-prem” infrastructure. From there, add cybersecurity and data privacy concerns and you can see why Private Clouds provide value. Now mix in the unavoidable need for business and IT agility and you can see the push toward the Hybrid Cloud.

Enterprises recognize that owning and managing their own on-prem infrastructure is expensive in terms of initial capital and in terms of the scarce technical talent required to maintain and improve it over time. An approach to addressing that challenge is to off-load as much non-critical computing activity into the cloud as possible. A third-party provider can offer the compute infrastructure, system architecture, and ongoing maintenance to address the needs of many. This approach reflects the benefits of specialization. No need to maintain holistic systems on-premises when so much can be off-loaded to specialists that offer IaaS (Infrastructure-as-a-Service), PaaS (Platform-as-a-Service), and SaaS (Software-as-a-Service).

The challenge for enterprises, though, is in striking the right balance between on-premises and cloud services. Hybrid Cloud combines private and public cloud computing to provide organizations the scale, security, flexibility, and resiliency they require for their digital infrastructure in today’s business environment. “The global hybrid cloud market exhibited strong growth during 2015-2020. Looking forward, the market is expected to grow at a CAGR of 17.6% during 2021-2026.”

As companies and their data become ever more intermeshed with one another, the complexity, along with the size of the market, will increase even further.

Privacy-preserving machine learning

The digital universe is facing a global problem that isn’t easy to fix—ensuring data privacy in a time when virtually every commercial or governmental service we use in our daily lives revolves around data. With growing public awareness of frequent data breaches and mistreatment of consumer data (thanks in part to the Facebook-Cambridge Analytica data fiasco, Yahoo’s data breach impacting 3 billion accounts, and the Equifax system breach in 2017, to name a few), companies and governments are taking additional steps to rebuild trust with their customers and constituents. In 2016, Europe introduced GDPR as a consolidated set of privacy laws to ensure a safer digital economy. However, “the United States doesn’t have a singular law that covers the privacy of all types of data.” Here, we take a more patchwork approach to data protection to address specific circumstances—HIPAA covering health information, FCRA for credit reports, or ECPA for wiretapping restrictions.

With the explosion of both edge computing (expected to reach $250.6 billion in 2024) and an ever greater number and capacity of IoT (Internet of Things) devices and smart machines, the volume of data available for machine learning is vast in quantity and increasingly diverse in its sources. One of the central challenges is how to extract the value of machine learning applied to these data sources while maintaining the privacy of personally identifiable or otherwise sensitive data.

Even with the best security, unless systems are properly designed, the risk that trained AI models incidentally breach the privacy of the underlying datasets is real and increasing. Any company that deals with potentially sensitive data, uses machine learning to extract value, or works in close data alliance with one or more other companies has to be concerned about where machine learning can take that data and about possible breaches of underlying data privacy.

The purpose of privacy-preserving machine learning is to train models in ways that protect sensitive data without degrading model performance. Historically this has been addressed by data anonymization or obfuscation techniques, but anonymization frequently reduces or, in some cases, eliminates the value of the data. Today, other techniques are being applied as well to better ensure data privacy, including federated machine learning, designed to train a centralized model via decentralized nodes (e.g., “training data locally on users’ mobile devices rather than logging it to a data center for training”), and differential privacy, which makes it possible to collect and share user information while maintaining the user’s privacy by adding “noise” to the user inputs.
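
To make the “noise” idea concrete, here is a minimal, hypothetical sketch of the Laplace mechanism often used in differential privacy: calibrated noise is added to an aggregate statistic before it is shared, so any single user’s contribution is obscured. The function name, epsilon value, and input data are illustrative assumptions, not a production recipe.

```python
import numpy as np

def private_mean(values, epsilon=0.5, value_range=1.0):
    """Differentially private estimate of the mean of bounded values."""
    true_mean = float(np.mean(values))
    # Sensitivity of the mean: how much one person's record can shift it.
    sensitivity = value_range / len(values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

user_inputs = [0.2, 0.9, 0.4, 0.7, 0.1]   # hypothetical per-user values in [0, 1]
print(private_mean(user_inputs))
```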

Privacy-preserving machine learning is a complex and evolving field. Success in this area will help rebuild consumer trust in our digital economy and unleash untapped potential in advanced data analytics that is currently restricted due to privacy concerns.   

Digital Twins

Thanks to recent advances in AI, automation, IoT, cloud computing, and robotics, Industry 4.0 (the Fourth Industrial Revolution) has already begun. As the world of manufacturing and commerce expands and the demand for virtualization grows, digital twins find a footing. A digital twin is the virtual representation of a product, a process, or a product performance. It encompasses the designs, specifications, and quantifications of a product—essentially all the information required to describe what is produced, how it is produced, and how it is used.

As enterprises digitize, the concept of digital twins becomes ever more central. Anything that performs under tight specifications, has high capital value, and needs to perform at exceptional levels of consistency is a candidate for digital twinning. This allows companies to use virtual simulations as a faster, more effective way to solve real-world problems. Think of these simulations as a way to validate and test products before they exist—jet engines, water supply systems, advanced performance vehicles, anything sent into space—and of the opportunity to augment digital twins with advanced AI to foresee problems in the performance of products, factory operations, retail spaces, personalized health care, or even smart cities of the future.

In the case of products, we need to know that a product is produced to specification and to understand how it is performing in order to refine and improve it. Take wind turbines as an example. They are macro-engineered products, with a high capital price tag, performing under harsh conditions, and engineered to very tight specifications. Anything that can be learned to improve performance and reduce wear is quite valuable. Sensing devices can report, in real time, wind strength and gustiness, humidity, time of day, temperature and temperature gradients, number of bird strikes, turbine heat, and more.
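
As a simplified illustration of the concept, the sketch below pairs a turbine’s engineering specification with incoming sensor readings and flags deviations; the field names and threshold values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TurbineTwin:
    # Illustrative specification limits for one turbine.
    max_blade_temp_c: float = 80.0
    max_vibration_mm_s: float = 7.0

    def check(self, reading: dict) -> list:
        """Compare one real-time sensor reading against the specification."""
        alerts = []
        if reading["blade_temp_c"] > self.max_blade_temp_c:
            alerts.append("blade temperature above specification")
        if reading["vibration_mm_s"] > self.max_vibration_mm_s:
            alerts.append("vibration above specification")
        return alerts

twin = TurbineTwin()
print(twin.check({"blade_temp_c": 85.2, "vibration_mm_s": 4.1}))
```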

The global market for digital twins “is expected to expand at a compound annual growth rate (CAGR) of 42.7% from 2021 to 2028.” Digital twins provide an example of both the opportunity created by digitization as well as the complexity arising from that process. The volume of data is large and complex. Advanced analysis of that data with AI and machine learning along with process automation improves a company’s ability to better manage production risks, proactively identify system failures, reduce build and maintenance costs, and make data-driven decisions.   

Metaverse

Lately, you may have heard a lot of buzz about the “metaverse” and the future of the digital world. Much of the recent hype can be attributed to the rebranding of Facebook back in October 2021, when it changed its corporate name to Meta. At present, the metaverse is still largely conceptual, without collective agreement on its definition. That said, core pieces are already in place to enable a digital universe that will feel more immersive and three-dimensional in every aspect, compared to what we experience via the Internet today.

Well-known large tech companies including Nvidia, Microsoft, Google, and Apple are already playing their roles in making the metaverse a reality, and other companies and investors are piling on. Perhaps the “gaming companies like Roblox and Epic Games are the farthest ahead building metaverses.” Meta expects to spend $10 billion on its virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies this fiscal year in support of its metaverse vision.

Some of the building blocks of the metaverse include strong social media acceptance and use, strong gaming experience, Extended Reality or XR (the umbrella term for VR, AR, and MR) hardware, blockchain, and cryptocurrencies. Even digital twin technology plays a role here. While there is no explicit agreement as to what constitutes a metaverse, the general idea is that at some point we should be able to integrate the already-existing Internet with a set of open, universally interoperable virtual worlds and technologies. The metaverse is expected to be much larger than what we’re used to in our physical world. We’ll be able to create an endless number of digital realms, unlimited digital things to own, buy, or sell, and all the services we can conjure to blend what we know about our physical world with fantasy. The metaverse will have its own citizens and avatar inhabitants as well as its own set of rules and economies. And it won’t be just for fun and games. Scientists, inventors, educators, designers, engineers, and businesses will all participate in the metaverse to solve technical, social, and environmental challenges to enable health and prosperity for more people in more places in our physical world.

Instead of clunky video chats or conference calls, imagine meetings that are fully immersive and feel natural. Training could be shifted from instruction to experience. Culture building within an enterprise could occur across a geographically distributed workforce. Retail could be drastically transformed with most transactions occurring virtually. For example, even the mundane task of grocery shopping could be almost entirely shifted into a metaverse where, from the comfort of your den, you can wander up and down the aisles, compare products and prices, and feel the ripeness of fruits and vegetables.

AI’s contribution to the metaverse will be significant. AIOps (Artificial Intelligence for IT Operations) will help manage the highly complex infrastructure. Generative AI will create digital content and assets. Autonomous agents will provide and trade all sorts of services. Smart contracts will keep track of digital currency and other transactions over decentralized assets in ways that disintermediate the big tech companies. Deep reinforcement learning will help design better computer chips at unprecedented speed. A series of machine learning models will help personalize gaming and educational experiences. In short, the metaverse will be limited only by compute resources and our imagination.

To meet its promise, the metaverse will face certain challenges. Perhaps once again, our technology is leaping ahead of our social norms and our regulatory infrastructure. From data and security to laws and governing jurisdictions, inclusion and diversity, property and ownership, as well as ethics, we will need our best collective thinking and collaborative partnerships to create new worlds. Similar to the start of the Internet and our experiences thus far, we can expect many experiments, false starts, and delays associated with the metaverse, before landing on the right frameworks and applications that are truly useful and decentralized.

The metaverse market size is expected to reach $872 billion in 2028, representing a 44.1% CAGR between 2020-2028.

Blockchain

Blockchain is closely associated with cryptocurrency but is by no means restricted in its application to cryptocurrency. Blockchain is essentially a decentralized database that allows for simultaneous use and sharing of digital transactions via a distributed network. To track or trade anything of value, blockchain can create a secure record or ledger of transactions which cannot be later manipulated. In many ways, blockchain is a mechanism to create trust in the context of a digital environment. Participants gain the confidence that the data represented is indeed real because the ledgers and transactions are immutable. The records cannot be destroyed or altered.  
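
The tamper-evidence that creates this trust can be illustrated in a few lines: each block stores the hash of the block before it, so changing any past record breaks every link that follows. This is a toy sketch of the idea, not a production ledger.

```python
import hashlib
import json

def block_hash(block):
    # Hash a block's contents deterministically.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev_hash = "0" * 64                              # placeholder for the genesis block
for tx in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    block = {"transaction": tx, "prev_hash": prev_hash}
    chain.append(block)
    prev_hash = block_hash(block)

# Verification: every block must reference the hash of the one before it.
intact = all(
    chain[i + 1]["prev_hash"] == block_hash(chain[i])
    for i in range(len(chain) - 1)
)
print("ledger intact:", intact)
```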

The emergence of Bitcoin and alternative cryptocurrencies in recent years has put blockchain technology on a fast adoption curve, in and out of finance. Traditionally, recording and tracking transactions is handled separately by participants involved. For example, it can take a village to complete a real estate transaction—buyers, sellers, brokers, escrow companies, lenders, appraisers, inspectors, insurance companies, government, and more—and that can lead to many inefficiencies including record duplications, processing delays, and potential security vulnerabilities. Blockchain technology is collaborative, giving all users collective control and allowing transactions to be recorded in a decentralized manner, via a peer-to-peer network. Each participant in the business network can now record, receive, or send transactions to other participants and make the entire process safer, cheaper, and more efficient.   

The use cases for blockchain are growing, including the use of the technology to explore medical research and improve record accuracy in health care, transfer money, create smart contracts to track the sale of and settle payments for goods and services, improve IoT security, and bring additional transparency to supply chains. By 2028, the market for blockchain technology is forecast to expand to approximately $400 billion, an 82.4% CAGR from 2021 to 2028.

Web3

Web3 or Web 3.0 is an example of real technology with real application that may be suffering from definitional miasma. In short, Web3 is your everyday web minus the centralization that has evolved over the past two decades due to the remarkable success of a few big tech juggernauts.

The original web, Web 1.0, was pretty disorganized and dominated by mostly static pages. Web 2.0 became a more interactive web built on user-generated content, which is what made social media, blogging (including microblogging), search, crowdsourcing, and online gaming snowball.

Web 2.0 allowed the web to grow much larger and more useful while simultaneously growing riskier, with advertising as a key business model. The wild west of social media, content, and democratized tools gave rise to a set of downsides—information overload, Internet addictions, fake news, hate speech, forgeries and fraud, citizen journalism without guardrails, and more.

Web 3.0 (not to be confused with Tim Berners-Lee’s concept of the Semantic Web, which is sometimes referred to as Web 3.0 as well) aims at transforming the current web, which is highly centralized under the control of a handful of very large tech companies. Web3 focuses on decentralization based on blockchains and is closely associated with cryptocurrencies. Advocates of Web3 see the semantic web and AI as critical elements to ensure better security, privacy, and data integrity, as well as to resolve the broader issue of decentralizing technology away from the dominance of existing technology companies. With Web3, your online data will remain your property, and you will be able to move or monetize that data freely without being dependent on any particular intermediary.

Since both Web3 and Semantic Web involve an evolution from our current Web 2.0, it makes sense that both have arrived at Web 3.0 as a descriptor, but each is describing related, but different aspects of improving the Web. Projections for Web3 are difficult to develop but the issues addressed by Web3 will be key to the emergence of a more open and versatile Internet. 

Autonomous Cyber

Put malicious intent and software together and you’ll get a quick sense of what keeps information security (InfoSec) executives and cybersecurity professionals up at night, or day for that matter. Malware, short for malicious software, is an umbrella term for a host of cybersecurity threats facing us all, including spyware, ransomware, worms, viruses, trojan horses, cryptojacking, adware, and more. Cybercriminals (often referred to as “hackers”) use malware and other techniques, such as phishing and man-in-the-middle attacks, to wreak havoc on computers, applications, and networks. And they do this to destroy or damage computer systems, steal or leak data, or collect ransom in exchange for giving back control of assets to the original owner.

Cyber threats are on the rise globally and, well beyond individual or corporate interest, they are a matter of national and international security. The dangers they represent pose risk not only to digital assets but to critical physical infrastructure as well. In the United States, the Cybersecurity & Infrastructure Security Agency (CISA), a newer operational component of the Department of Homeland Security, “leads the national effort to understand, manage, and reduce risk to our cyber and physical infrastructure.” Its work helps “ensure a secure and resilient infrastructure for the American people.” Since its formation in 2018, CISA has issued a number of Emergency Directives to help protect information systems. A recent example is Emergency Directive 22-02, aimed at the Apache Log4j vulnerability, which has broad implications, threatening the global computer network. It “poses unacceptable risk to Federal Civilian Executive Branch agencies and requires emergency action.”

Gone are the days when computer systems and networks could be protected via simple rules-based software tools or human monitoring efforts. In terms of risk assessment, “88% of boards now view cybersecurity as a business risk.” Traditional approaches to cybersecurity are failing the modern enterprise because the sheer volume of cyber activity, the ever-increasing number of undefended areas in code, systems, and processes (including the additional openings exposed by the massive proliferation of IoT devices in recent years), the growing number of cybercriminals, and the sophistication of attack techniques have, in combination, surpassed our ability to manage them effectively.

Enter Autonomous Cyber, powered by machine intelligence. This is a rapidly developing field, both technologically and in terms of state governance and international law. Autonomous Cyber is the story of digital technology in three acts.

Act I –  The application of AI to our digital world.

Act II – The use of AI to automate attacks by nefarious agents or State entities to penetrate information systems.

Act III – The use of AI, machine learning, and intelligent process automation against cyber-attacks.

Autonomous Cyber leverages AI to continuously monitor an enterprise’s computing infrastructure, applications, and data sources for unexpected changes in patterns of communication, navigation, and data flows. The idea is to use sophisticated algorithms that can distinguish what is normal from what might be abnormal (representing potential risk), intelligent orchestration, and other automation to take certain actions at speed and scale. These actions include creating alerts and notifications, rerouting requests, blocking access, or shutting off services altogether. And best of all, these actions can be designed to either augment human power, such as the capabilities of a cybersecurity professional, or be executed independently without any human control. This can help companies and governments build better defensibility and responsiveness to ensure critical resiliency.  
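
A tiny sketch of the “normal versus abnormal” idea is shown below, using an off-the-shelf unsupervised model (scikit-learn’s IsolationForest) fit on typical traffic features; the feature values and response actions are illustrative only.

```python
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features: [requests_per_minute, megabytes_sent]
normal_traffic = [[12, 0.4], [15, 0.6], [10, 0.3], [14, 0.5], [11, 0.4]]
detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_traffic)

new_events = [[13, 0.5], [400, 55.0]]     # the second looks like data exfiltration
for event, label in zip(new_events, detector.predict(new_events)):
    if label == -1:                       # IsolationForest marks anomalies as -1
        print("alert: block traffic and notify the security team", event)
    else:
        print("normal activity", event)
```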

The global market for AI in cybersecurity “is projected to reach USD 38.2 billion by 2026 from USD 8.8 billion in 2019, at the highest CAGR of 23.3%.” Use of AI by hackers and state actors means that there will be less time between innovations. Therefore, enterprise security systems cannot simply look for replications of old attack patterns. They have to be able to identify, in more places and systems, new schemes of deliberate attack or accidental and natural disruption as they emerge in real time. AI in the context of cybersecurity is becoming a critical line of defense and, with technology evolving ever faster, Autonomous Cyber has the capacity to continuously monitor and autonomously respond to evolving cyber threats.

Conclusion

At Entefy we are passionate about breakthrough computing that can save people time so that they live and work better.

Now more than ever, digital is dominating consumer and business behavior. Consumers’ 24/7 demand for products, services, and personalized experiences wherever they are is forcing businesses to optimize and, in many cases, reinvent the way they operate to ensure efficiency, ongoing customer loyalty, and employee satisfaction. This requires advanced technologies including AI and automation. To learn more about these technologies, be sure to read our previous blogs on multimodal AI, the enterprise AI journey, and the “18 important skills” you need to bring AI applications to life.


Crypto trading, AI style

The dizzying rise of cryptocurrencies, from Bitcoin to Ethereum to Dogecoin, and their underlying technologies are signaling potential disruptions to the traditional world of finance. Cryptos and blockchain are decentralizing finance in ways that were difficult to imagine only a few years ago. Today, the crypto market itself is about to experience a new set of disruptions and, this time, by a different type of technology: artificial intelligence (AI). AI and machine learning (ML) can dramatically improve predictions and decision-making, supporting traders in developing superior strategies that can, in turn, help generate more alpha.

The Current State of Crypto

Bitcoin was first introduced to the world in 2009 and, as of the time of this writing, more than 6,700 alternative coins (“altcoins”) have followed. Over the past 12 years, the crypto market has grown substantially. Recently, the Chairman of the U.S. Securities and Exchange Commission, Gary Gensler, publicly stated that “this asset class purportedly is worth about $1.6 trillion, with 77 tokens worth at least $1 billion each and 1,600 with at least a $1 million market capitalization.”

According to a 2021 study by Fidelity Digital Assets, seven out of ten institutional investors surveyed already hold investments in digital assets, with investors in Asia and Europe outpacing the U.S. in terms of investment rate. Bitcoin and Ethereum both have been key drivers in growing adoption in Europe. However, more than half of the investors surveyed globally consider “price volatility” as one of the greatest barriers to investment in this area.

The popularity of cryptocurrencies has grown to such an extent that certain governments are considering adopting them officially. In fact, in September 2021, El Salvador became the first country in the world to make Bitcoin legal tender. And, other governments may be following suit. Even countries such as China are considering creating their own digital currencies.

AI in Finance

Over the years, AI has become a powerful force rapidly transforming the global financial services industry. Its wide-ranging applications span from fraud detection and compliance to robo-advisors, algorithmic trading, and much more. In the investment sector, firms that have effectively leveraged AI have been able to generate superior returns, far outperforming the S&P 500. The Medallion Fund, for example, “has returned an astonishing 71.8% per annum (or around 38% net of fees) over its [first] 29 years, versus about 10% for the S&P 500.” Machine learning has the power to help companies quickly analyze large quantities of data to uncover valuable patterns, trends, and other insights that would otherwise be impractical, either too time consuming or too laborious, for people to extract on their own.

Predicting trading patterns is exceptionally difficult because there are myriad explanations for why a particular stock price can fluctuate on any given day. For example, in the field of behavioral finance, research has shown how psychological factors, like sunshine and weather, can influence a human trader’s investment decisions. Additionally, global catastrophes, new regulations, political changes, inflation, and other factors impact stock prices. All of this showcases the butterfly effect, where small changes in initial conditions can lead to drastic changes in the results. In stock trading, the results are stock prices, and even a change as simple as sunny weather in New York on a particular day can have real implications for the market. With the diversity and abundance of data available today, AI can be utilized to make advanced predictions.

Crypto trading with machine learning

The crypto ecosystem is transforming rapidly as digital adoption increases worldwide. As more people buy into these digital assets, more channels are created for individuals to trade them. Now, in many places, you can purchase a cryptocurrency almost as readily as you could a public stock. It’s as simple as reaching into your pocket, grabbing your smart phone, downloading a cryptocurrency investment app, and making your first purchase. 

In the U.S., it is estimated that 15% of consumers already own either Bitcoin or an altcoin, and 70% of institutional investors also own crypto assets. However, for an investor, whether individual or institutional, it is difficult to keep pace with the volatility and the volume of information created in the world of crypto every minute. And keep in mind that, unlike traditional stock markets, which are highly regulated and active only during part of each weekday, the crypto market is largely unregulated and active 24/7, 365 days a year. This persistent activity places unrelenting pressure on traders, leaving them overwhelmed and causing tremendous stress and fatigue. A mere tweet from a celebrity can send crypto prices to the moon or bring them crashing down to earth.

Here is where machine learning can be of value when properly designed and implemented. By continually analyzing voluminous and diverse data, using large computing clusters, advanced ML models can forecast changes to cryptocurrency prices in ways unfeasible for traditional human analysts or traders. To generate better returns in the crypto market, you need a solid investment strategy, suitable for a volatile and speculative asset class, augmented with sophisticated AI capable of analyzing pricing trends, news articles, social posts, and diverse technical indicators. At Entefy, our team has added new machine learning models and capabilities to our multimodal AI platform specifically for the crypto market, providing machine intelligence including price forecasting, volatility analysis, market sentiment analysis, and more to help traders develop superior strategies.
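
As a simplified illustration of the forecasting piece, the sketch below trains a regression model to predict the next price from a short window of previous prices; the data is toy data, and a real system would also incorporate sentiment, volume, and other technical indicators.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

prices = np.array([100, 102, 101, 105, 107, 104, 108, 112, 110, 115], dtype=float)

window = 3
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]                      # the price that followed each window

model = LinearRegression().fit(X, y)
forecast = model.predict(prices[-window:].reshape(1, -1))[0]
print(f"forecast for the next period: {forecast:.2f}")
```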

Conclusion

In little more than a decade, cryptocurrency has evolved from a single nascent digital coin into a vast cohort of global altcoins that is collectively disrupting the traditional world of finance. Making sense of the rapidly moving market and forecasting prices is a daunting challenge for any investor. Fortunately, technology is evolving quickly, with AI at the forefront to meet this challenge and support this decentralized community.

Be sure to check out our previous blogs on 6 other creative uses of AI and the differences between traditional data analytics and machine learning.


Shorten your project implementation time and watch ROI soar

For an enterprise, a faster implementation unlocks dramatic hidden benefits beyond the mere reduction in man-hours. Shorter or smaller projects tend to have a higher probability of success. It is in this way that quicker project execution can increase your yield of success and overall ROI.

Typical project measurements include delivery on time, on budget, and achieving the stated business objectives. The sooner a project can be completed and with fewer hours committed, the greater the probability that these measures will be achieved. In fact, the odds of success increase multifold. This is one of the few instances where better, cheaper, and faster can be achieved without the typical disappointing trade-offs.

Why is this important? Because the cost of failure to deliver is high and frequent. A global research project between KPMG, IPMA (International Project Management Association) and AIPM (Australian Institute of Project Management) in 2019 gives us some insights into parameters for costs and the nature of project failures. From the research, a key theme emerged indicating that “the overall sense of success rates of projects continues to be low when viewed through the lens of cost, time, scope and stakeholder satisfaction.”

 Key findings include:

  • 81% of organizations fail to deliver successful projects, at least most of the time
  • 70% of projects fail to deliver on time
  • 64% of projects fail to deliver on budget
  • 56% of projects fail to deliver the intended business value

Based on these numbers, a senior executive can logically expect that, for any randomly selected project, hitting the trifecta of on time, on budget, and with full business value achieved would be rare. If those failure rates were independent, only about 0.30 × 0.36 × 0.44, or roughly 5%, of projects would hit all three. Further, projects are frequently rescoped once they have been approved (usually with longer delivery dates and higher budgets), and a portion of projects are cancelled before completion altogether.

Of the majority of projects which do not go according to plan, the sources of failure are legion—business assumptions did not materialize, market demand collapsed or shifted, new competitors entered the market, the regulatory environment changed, internal project support was absent, resources for delivery were insufficient or unavailable, and communication between stakeholder groups was weak.

Large projects in particular share common attributes not inherent to smaller projects, namely complexity that stems from intricate and rapidly evolving contexts. Sometimes it is the sheer duration of implementation that creates failure, and at other times it is the number of moving parts required to ensure project success. Regardless, implement your projects much faster and you’ll more likely increase the yield of successful projects simply because you have so sharply reduced the window of time in which project planning assumptions age out.

According to a study by The Standish Group, which has been collecting and reporting on IT project management statistics for major corporations around the world for nearly four decades, there is a marked success inflection point between smaller projects and larger projects. Compared with larger projects, smaller projects tend to have shorter durations. In the report, small projects, defined as those with $1 million in labor content or less, have a 76% success rate, whereas large projects, defined as those with labor content greater than $10 million, have only a 10% success rate.

More recently, in their 2018 CHAOS Report, the Standish Group has become even more explicit about the importance of project duration. They are focusing increasingly on Decision Latency Theory which asserts ‘The value of the interval is greater than the quality of the decision.’ In other words, the longer it takes to reach a decision or achieve an action, the more expensive it becomes and the more probable that the project will fail.

Another way of looking at decision latency is to consider the amount of time it takes for a team to make a decision in response to a business change. You start the implementation of a one-year project and very soon you are noticing a cascade of changes in assumptions (including costs, supply, skill availability, demand, and regulatory constraints) and with each of these changes you need to consider the impact on resourcing, approach, or solution structure of the project. And each of these decisions can strain deadlines, increasing the probability of facing even more decisions and further delays.

Key steps to successful project delivery

  • Know your objectives (and how they are measured)
  • Ensure that the project objectives are linked with organizational goals
  • Assemble a skilled and experienced project leadership team
  • Gain executive buy-in and sponsorship
  • Confirm stakeholder needs
  • Use quality project management and resource planning tools
  • Focus on agile project management and minimize the old-style big bang project approach
  • Establish proper communication protocols
  • Include change management as part of the project plan
  • Document and communicate changing circumstances which are likely to impact project budgets, schedule, or deliverables
  • Tightly manage scope change

At Entefy, our team deals with the complexity that accompanies enterprise-level multimodal AI software and automation projects. The Entefy 5-layer AI platform is fully configurable, making it possible to structure a project quite differently than in the past, with the biggest change being the capacity to deliver such projects on a dramatically expedited schedule—10x faster than competing options. And we understand that with faster delivery comes a much higher project success rate and a much higher ROI.

Be sure to read our previous blogs on starting the enterprise AI journey and the “18 important skills” you need to bring AI projects to production.  


Bringing AI projects to life and avoiding 5 common missteps

When it comes to artificial intelligence (AI), the path from ideation to implementation can be a long and technical one. Hiring the right talent. Wrangling data. Setting up the right infrastructure. Experimenting with models. Navigating the IP landscape. Waiting for results. More experimenting. More waiting. Enterprise AI implementation can be one of the most complex initiatives for IT and business teams today. And that’s before acknowledging the time pressure created by fast-moving markets and competitors.

As an AI software and automation company, Entefy works with private and public companies both as an advanced technology provider and an innovation advisor. This has given our team a unique perspective on what it takes to successfully bring AI to life. Our experience has taught us how to avoid major missteps organizations often make when developing or adopting AI and machine learning (ML) capabilities. These 5 missteps can derail productivity and growth at any organization:

Unrealistic expectations

“I hear that AI is the future. So let’s build some algorithms, point them at our data, and we should be awash in new insights by the end of the week.” Success with AI begins with adopting a more nuanced understanding of what artificial intelligence can and can’t do. AI and machine learning can indeed create remarkable new insights and capabilities. But you need to align your expectations with reality. This isn’t a new lesson. Enterprise-scale software deployments demonstrate this idea time and again. Software itself isn’t magic. The magic emerges from forward thinking application design and the effective integration of new capabilities into business processes. Think about your AI program in the same way, and you’ll be on the path to long-term success.

Expecting AI to instantly deliver transformative capabilities across your entire organization is unreasonable. Over the short term, narrowly defined projects can indeed be quickly deployed to deliver impressive impact. But to expect spontaneous intelligence to spring to life immediately can lead to missed expectations. The best way to view AI today is to focus on the “learning” in machine learning, not the “intelligence” in artificial intelligence. 

Development of advanced AI/ML systems is experimental in nature. Algorithmic tuning (iterative improvement) can be time-intensive and can cause unanticipated delays along the way. Equally time intensive is the process of data curation, a key pre-requisite step to prepare for training. Because of this, precise cost and ROI projections are difficult to ascertain upfront.

Short-term tactics without long-term planning 

It is easy to fall in love with AI. So it’s a common mistake to prioritize technology over goals and outcomes. Symptoms include approaching every problem with a popular technique or over-relying on one tool or framework. Instead, invest time in identifying needs and priorities before moving on to AI vendor selection or development planning.

Approach an AI project with a defined end goal. What problem are we solving? Ensure you have a clear understanding of the potential benefits as well as the impact to the existing business process. Turn to technology considerations only after you have a clear understanding of the problem you want to solve.

To better understand the limitations of AI, start by looking at information silos: the kind that result from teams, departments, and divisions storing information in isolation from one another. These silos limit access to critical knowledge and create issues around data availability and integrity. The root cause is prioritizing short-term needs over long-term interoperability. With AI, this happens when companies develop multiple narrowly scoped expert systems that can’t be leveraged to solve other business problems. Working with AI providers who offer diverse intelligence capabilities can go a long way toward avoiding AI silos and, over time, increasing ROI.

Long-term planning should also consider compliance and legislation, especially as they pertain to data privacy. Without guidelines for sourcing, training, and using data, organizations risk violating privacy rules and regulations. The global reach of the EU General Data Protection Regulation (GDPR) law, combined with the growing trend toward data privacy legislation in the U.S., makes the treatment of data more complex and important than ever. Don’t let short-term considerations impede your long-term compliance obligations under these laws. 

Model mania 

The first rule of AI modeling is to resist the urge to jump straight into code and algorithms before identifying your goals and intended results. Only then can you begin evaluating how to leverage specific models and frameworks with purpose. For example, starting with the idea that deep learning is going to “work magic” is the proverbial cart before the horse. You need a destination before any effective decisions can be made.

There is a lot of misinformation around which AI methods work best for specific use cases or industries. Deep learning doesn’t always outperform classical machine learning, and industry-specific AI will not necessarily give you the best results. So try your best to be results driven, not method driven, and don’t let trends sway that decision. For example, neural networks have been going in and out of style for decades, ever since researchers first proposed them in 1944.

When making decisions about model selection, it’s necessary to weigh 3 key factors—time, compute, and performance. A sophisticated deep learning approach may yield highly accurate results, but it often does so by relying on costly CPU/GPU horsepower. Different algorithms have different costs and benefits in these areas.

The lesson is simple: A specific machine learning technique is either effective for achieving your specific goal, or it is not. When a particular approach works in the context of one problem, it’s natural to prioritize that approach when tackling the next problem. Random decision forests, for instance, are powerful and flexible algorithms that can be broadly applied to many problems. But resist settling into a comfort zone and know that AI success comes from frequent and ongoing experimentation.
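To make the time, compute, and performance trade-offs concrete, here is a minimal sketch, using scikit-learn and a synthetic dataset (both assumptions for illustration, not part of any specific project), that times two candidate models and compares their accuracy. The point is not which model wins, but that the comparison itself should drive the choice.

```python
# Compare two candidate models on training time and accuracy (a simple proxy
# for the time/compute/performance trade-off) using a synthetic dataset.
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1_000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
}

for name, model in candidates.items():
    start = time.perf_counter()
    model.fit(X_train, y_train)
    elapsed = time.perf_counter() - start
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy={acc:.3f}, training_time={elapsed:.2f}s")
```

Swapping in other candidates, such as a gradient-boosted model or a small neural network, only changes the entries in the dictionary, which keeps the evaluation results driven rather than method driven.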

Data considerations

Data matters. Good data can’t fix a bad model, but bad data can ruin a good model. It is a myth that AI/ML success requires massive datasets. In practice, data quality is often more likely to determine the success of your project. The challenge is two-fold. First, it’s necessary to understand how the structure of your data relates to your overarching goal. Second, the value of proper modeling can’t be overstated, no matter how large the dataset.

Be sure to consider the 4 Vs of data to ensure success in advanced AI initiatives. The 4 Vs include data volume, variety, velocity, and veracity. The road from data to insights can be patchy and long, requiring many types of expertise. Dealing with the 4 Vs early in the exploration process can help accelerate discovery and unlock otherwise hidden value.

The successful preparation and processing of data is a highly complex exercise in multi-dimensional chess—every consideration is connected to multiple other considerations. Common issues include ineffective pre-processing of data, overly aggressive dimensionality reduction, excessive data wrangling, and poorly annotated datasets. And there’s no single best practice. Data curation at its core is problem-specific.
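As one illustration of how these steps connect, the sketch below (using scikit-learn on synthetic data, purely as an assumption for demonstration) chains imputation, scaling, and dimensionality reduction into a single pipeline, keeping enough principal components to preserve 95% of the variance rather than cutting dimensions arbitrarily.

```python
# A minimal data-preparation pipeline: impute missing values, scale features,
# reduce dimensions while retaining 95% of the variance, then fit a model.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2_000, n_features=60, random_state=0)

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # fill any missing values
    ("scale", StandardScaler()),                    # normalize feature ranges
    ("reduce", PCA(n_components=0.95)),             # keep 95% of the variance
    ("model", LogisticRegression(max_iter=1_000)),
])

pipeline.fit(X, y)
print("Principal components kept:", pipeline.named_steps["reduce"].n_components_)
```

The right steps and thresholds will differ by problem; the value of the pipeline is that every pre-processing decision is explicit, repeatable, and easy to revisit.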

Underestimating the human element

Large-scale rollouts often fail due to a range of human factors. Users aren’t properly trained. Features don’t enhance existing workflows. UI/UX is confusing. The company culture isn’t AI-forward.

Full realization of the benefits of AI starts with an empowered, educated workforce. Best practices include organization-wide technical training focused on continuous improvement, as well as leadership development for key champions of the project.

When it comes to hiring AI/ML talent, the situation on the ground is sobering for any organization with ambitions to rapidly scale internal AI capabilities. Stories of newly minted machine learning graduates fetching steep salaries are real, as practically every large company on the planet drives up demand for a limited pool of qualified candidates. Then there’s the reality of ML resume inflation, where some job seekers add machine learning credentials without the necessary skills or experience to deliver real value.

Traditional software system development follows the plan-design-build-test-deploy framework. AI development follows a slightly different path due to its experimental nature. Much time and effort is needed to identify and curate the right datasets, train models, and optimize model performance. Ensure your technical and business teams align on these differences and that they have the required skills to remain productive in this new environment.

Conclusion

There are countless parallels between the early adoption of enterprise software decades ago and the rollout of AI and machine learning today. In both cases, organizations have faced pressure to leverage the power of new capabilities quickly and effectively. The path to success is complex and fraught with pitfalls, covering everything from personnel training to thoughtfully scripted rollouts.

The lesson of that earlier time was this: After all of the strategic and tactical wrinkles of software implementation were addressed, the new solutions did indeed make a significant impact on people and their organizations. The AI story is no different. Deploying intelligence capabilities can be challenging but the competitive advantages they confer are transformational.

AI Globe

The intelligent enterprise leaps forward in 2021

The many disruptions in 2020 effectively illustrated the fragility of life, work, and society at large. People and organizations alike scrambled to solve problems in ways not considered before the COVID-19 pandemic. Many businesses suffered hefty losses while others survived, and even thrived in some cases, with increased agility and a move toward modern technologies.

In reflecting upon last year’s events and the use of advanced technologies, including artificial intelligence (AI) and machine learning, we observed promising activity in several key sectors such as healthcare, manufacturing, retail, finance, and education.

In healthcare for instance, while COVID-19 drastically shifted life for everyone, many essential healthcare workers were on the frontlines to help overcome the global pandemic with assistance from machine learning. One particularly promising example came from MIT researchers who developed an AI model to help them diagnose asymptomatic COVID-19 patients through the sound of their coughs. The difference between a healthy cough and an asymptomatic cough cannot be heard by the human ear, but when “the researchers trained the model on tens of thousands of samples of coughs,” the AI system discerned asymptomatic coughs with 100% accuracy. As we continue to keep physical distance from each other, a widely available test like this for asymptomatic patients could help the world flatten the curve.

Across the pond in the UK, a research project is being funded to track side effects related to COVID-19 vaccines as they are distributed. With several companies vying to bring their vaccines to market as quickly as possible, this tool is designed to catch adverse reactions early. The awarded government contract for this purpose indicates “that the AI tool will ‘process the expected high volume of Covid-19 vaccine adverse drug reaction (ADRs) and ensure that no details . . . are missed.’” Other use cases that leverage AI to combat the pandemic can be found on our previous blog, “How machine learning will help us outsmart the coronavirus.”

Companies outside of healthcare also took advantage of machine intelligence to showcase new capabilities or streamline their operations. For example, in select major cities across the U.S., driverless cars performed additional road testing. Case in point: last quarter, Cruise put its very first driverless car on the asphalt in San Francisco. While there was a human in the passenger seat to experience the ride, this was the company’s first step toward securing permits to launch a commercial service using its autonomous vehicles.

For many who had never heard of Zoom prior to the pandemic, virtual video communication technology became nearly ubiquitous once they could no longer communicate or collaborate in person—at home, at work, in education. This means more people relied on these types of technologies to perform functions they would normally handle face-to-face. People began to think of video communication as the virtual water cooler for happy hours, birthday celebrations, and other meet ups. Even visits to Santa went virtual. AI models are taking virtual communication to the next level with chatbots, improved personalization, smart replies, and more.

While much was accomplished with AI last year, 2021 promises to do even more. Here are some of the trends we expect to unfold this year:

AI spend will break through previous records

To adapt to the major disruptions caused by the pandemic and the ensuing social and economic shifts, businesses and governments worldwide have begun increasing technology spending while lowering budgets in other departments such as HR and marketing. According to Gartner, 67% of boards of directors (BoDs) surveyed foresee expansion of the technology budget, and part of that budget belongs to advanced technologies, with AI and analytics “expected to emerge stronger as game-changer technologies.”

Competition in the coming years will require organizations to adopt AI at a faster rate. Machine learning will help augment human capabilities by unearthing new insights otherwise hidden in data and by automating workflows, tasks, and processes that consume too much human time and effort. This can be consequential in many areas of operations including finance, sales, product development and delivery, security, and IT. These needs will push technology spending to new heights. Over the next four years alone, global AI spending is forecast to double from “$50.1 billion in 2020 to more than $110 billion in 2024.”

CIOs will help lead the productivity revolution

More enterprises will implement AI strategies by leaning on their CIOs to achieve real business results. This year, experimentation with machine learning will accelerate, but that alone will not be sufficient. Enterprise CIOs will be under increasing pressure to explore, select, and implement suitable technologies that can power the intelligent enterprise. Their focus will remain on maximizing productivity by streamlining the many facets of internal operations.

As of 2019, “only 8% of firms engage in core practices that support widespread adoption. Most firms have run only ad hoc pilots or are applying AI in just a single business process.” Focusing on AI core practices as opposed to ad hoc implementations will not only enable stronger adoption within these organizations, but will also foster additional cross-team collaboration for better results. CIOs encouraging adoption of these new technologies will empower employees to explore and test AI projects so that they are used as efficiently as possible. This will help drive business success as in-person workflows remain disrupted by the pandemic, with an accelerated secular push toward remote work.

AI will become more widespread  

A natural byproduct of increased C-suite adoption of AI deployments within the enterprise is efficiency via automation, speed, and scale. Widespread adoption of intelligent applications and process automation translates into cost reductions and time savings. According to Gartner, “organizations want to reach the next level by delivering AI value to more people.” As more internal stakeholders are exposed to a company’s AI initiatives, the benefits will eventually spread to other areas of the business, internally and externally. “In the enterprise, the target for democratization of AI may include customers, business partners, business executives, salespeople, assembly line workers, application developers and IT operations professionals.” With more people realizing the benefits of machine learning in particular, we can expect more AI-related learning, problem-solving, and even jobs.

Cybersecurity will enhance the remote workforce

Last year, many organizations were forced into a decentralized workforce in a matter of days. This unanticipated shift pushed these organizations toward new technology implementations that could ensure information security in a very short time. More than ever, security protections are essential for on-site employees as well as remote operations. McKinsey notes that “as employees became comfortable working from home, companies began standardizing procedures for remote work environments and explored technologies to reduce long-term risk.” This year, enterprises will further strengthen their cybersecurity efforts in response to the increased vulnerabilities created by a growing virtual workforce relying on non-secure networks and devices.

Ethical and responsible AI gain attention

As machine learning becomes more prevalent in day-to-day business, the conversation around data privacy and ethical uses of AI gains momentum. The topic of AI ethics is no longer a subject of discussion for only major universities or nonprofit organizations. Enterprises are fast becoming aware of the issues pertaining to mass aggregation and analysis of personal and sensitive data. The benefits of unlocking data to make smarter business decisions or reduce errors in operations come with the added responsibility of protecting data in ways that do not cause reputational, moral, or regulatory harm. Major companies have already faced backlash for not providing a clear outline of their data collection and processing standards. “Companies need a plan for mitigating risk — how to use data and develop AI products without falling into ethical pitfalls along the way.”

At Entefy, we are bullish about AI and how it will transform the way we work and live. 2021 promises to be an important year in our collective journey toward the intelligent enterprise. Be sure to read our previous blogs on enterprise AI and the “18 important skills” you need to bring it to life. 

Coffee

Tech advances, coffee talk, and the new case for Enlightenment

For any student of history, economics, or innovation, there are a couple of truly astounding facts. One is the dawn of stone tool use about 3.3 million years ago, deep in our ancestral tree. And that was about it for the next three million plus years. 

Eventually the pace changed and accelerated about ten thousand years ago. First came agriculture, metal work, then towns and cities, and then coffee shops. The sudden lift during the Enlightenment Era, not more than 250 years ago, is the second astounding fact. Out of nowhere, people unlocked unprecedented levels of productivity and human well-being. At its core, the Enlightenment Era was a belief in humankind’s ability to craft a new and better future based on ideas – ideas debated openly, tested scientifically, applied universally, and for all to ultimately benefit.

The story of slow technological change is reflected in slow economic development. For most of our history, we have been stuck in a cycle where a few steps toward plenty have led to overpopulation and starvation. The graph below, from Clark (2008), illustrates the episodic nature of our technological (and economic) leaps forward.

Clark, G. (2008). A Farewell to Alms: A Brief Economic History of the World. United Kingdom: Princeton University Press.

What changed during the Age of Enlightenment? The world mind changed. Between 1600 and 1800, a new way of thinking about human existence emerged. It began in Europe, but the ideas were universal and soon spread to every continent where they were further adapted and evolved.

What were those ideas? Steven Pinker outlines them in his book, “Enlightenment Now”:

Provoked by challenges to conventional wisdom from science and exploration, mindful of the bloodshed of recent wars of religion, and abetted by the easy movement of ideas and people, the thinkers of the Enlightenment sought a new understanding of the human condition. The era was a cornucopia of ideas, some of them contradictory, but four themes tie them together: reason, science, humanism, and progress.

He then elaborates, identifying the behaviors which enabled and supported reason, science, humanism, and progress:

Among those norms are free speech, nonviolence, cooperation, cosmopolitanism, human rights, and an acknowledgment of human fallibility, and among the institutions are science, education, media, democratic government, international organizations, and markets. Not coincidentally, these were the major brainchildren of the Enlightenment.

Several of the clearest examples of Enlightenment thinking and behavior emerged in the coffeehouses of London in the 17th and 18th centuries. London was a global trading center during that time where people from many regions, from many classes, from many belief systems, connected and conversed, building relationships and wisdom in the warmth and welcome of coffee shops. New ideas were tested, new businesses launched, new interpretations of the world discussed among people from many walks of life.

That tradition has continued, although we now exchange ideas well beyond just the coffee shop – in online forums, in conventions, in think tanks, in research institutions, in corporate R&D labs, in papers, books, TV, and of course social media. Although the spirit of the Enlightenment also lives in those places, it continues to percolate in the neighborhood coffee shop. Places where people meet as equals, with shared interests, ideas, complaints, suggestions, and daring thoughts. Where a conversation can drift on camaraderie and then turn sharply at an inspired thought. Where laughter is bonding and where thoughtful silences can be comfortable. Where the human person and human relationships are still at the heart of all that is important.

It is that spirit which inspires Entefy. A spontaneous conversation in a coffee shop led to the launch of a venture. A venture which in turn could only exist on the basis of Age of Enlightenment ideas, norms, and institutions. The idea of advanced technology and smart machines helping all people communicate universally and gain global access to information in order to build their own understanding of the world, create new ideas and innovation, dramatically improving productivity and human well-being. For everyone. 

For it to work, the animating energy of the Age of Enlightenment had to go beyond mere ideas and include the human element. Conversations transform ideas into progress where there is shared respect for open dialog, nonviolence, cooperation, cosmopolitanism, human rights, reciprocity, and certainly an acknowledgment of human fallibility and the need for the grace of forgiveness.

Entefy was born in a coffee shop. We hope to cultivate the ethos of the Age of Enlightenment and foster those norms which make progress and improvement a continuing opportunity.

Enterprise AI

Enterprise AI? Begin your journey here

AI is transforming the way we operate business around the world. Everything from the ads we see online to how we choose shows to watch while sheltering in place is now influenced by AI.

As popular as it has become, getting started with AI within a company can seem like a monumental task, especially while competitors are moving at a quicker pace with their AI initiatives. But AI doesn’t have to be complicated and you don’t need to reinvent the wheel to introduce it to your operations.

The key to getting started is to focus on the business problems you would like to solve, especially the business problems that can most benefit from advanced analysis of data. Look for areas where meaningful human effort is needed to complete routine tasks or make better decisions while large volumes of data sit idle or are underutilized. Here are three areas where AI can help optimize your operations:

1. Decisions and insights

How can we gain data-driven insights to help make better decisions? The type of decisions that can unlock areas of improvement at your organization by:

  • Lowering costs
  • Improving customer experience and engagement
  • Reducing customer churn
  • Detecting fraud

It is estimated that only 4% of businesses effectively capture value from their data. This is partly due to the fact that 90% of digital data generated is dark or unstructured. Advanced analysis of both structured and unstructured data can reveal hidden, and often surprising, insights. Companies worldwide are already using such insights for countless applications, including those that create better customer experiences, more efficient manufacturing and supply chain management, personalized shopping, AI-generated podcasts and entertainment content, and protection against cybersecurity threats.

2. Processes and workflows

How can we leverage the power of process automation to help free up time and resources? Throughout the workday, employees are often engaged in time-consuming rote tasks and workflows that could be performed exponentially faster by intelligent machines. These types of tasks are often repetitive, create bottlenecks, and don’t require the human touch. By implementing automation in this area, your team can take back the precious time needed to focus on high-value work that grows the business.

In this regard, something even simpler than advanced AI and machine learning, such as robotic process automation (RPA), can be used to send out invoices or process credit card payments. Automating tasks like these not only saves time but can reduce human errors and costs. According to Gartner, the COVID-19 “pandemic and ensuing recession increased interest in RPA for many enterprises” due to increased pressures on businesses to better manage operations and costs.
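As a rough illustration only, the sketch below shows the flavor of this kind of rule-based automation in Python: scanning an exported list of invoices and generating payment reminders. The file name, column names, and 30-day rule are hypothetical, and real RPA platforms typically provide this through visual workflows rather than hand-written scripts.

```python
# Hypothetical rule-based automation: read exported invoice records from a CSV
# and generate reminder messages for invoices unpaid past a threshold.
import csv
from datetime import date

REMINDER_AFTER_DAYS = 30  # illustrative business rule: remind after 30 days unpaid

def build_reminders(path: str) -> list[str]:
    reminders = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            days_open = (date.today() - date.fromisoformat(row["issued_on"])).days
            if row["status"] == "unpaid" and days_open >= REMINDER_AFTER_DAYS:
                reminders.append(
                    f"Reminder: invoice {row['invoice_id']} for {row['customer']} "
                    f"has been open for {days_open} days."
                )
    return reminders

if __name__ == "__main__":
    for message in build_reminders("invoices.csv"):
        print(message)
```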

3. Teamwork and communication

How can we make team collaboration more efficient to raise productivity? Here’s where next-generation communication tools and knowledge management systems play important roles.

These days, information is the new gold. By making it easily accessible, searchable, and sharable, information can help each of us perform better at our jobs. At enterprises, information is typically stored in and retrieved from complex knowledge management systems that sit at the core of decision-making. Employees depend on these systems to access the organization’s knowledge base and improve communication and collaboration across teams.

Powering communication and knowledge management systems with AI can usher in an entirely new level of productivity. For instance, machine learning can be used to map diverse data types housed in disparate storage repositories and enable universal search capabilities across everything from PDFs to spreadsheets, images, text, audio packets, and even videos. Natural language processing (NLP) can be used to summarize documents or help communication tools improve grammar or tone.
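A bare-bones version of cross-repository search might look like the sketch below, which assumes the text has already been extracted from each source file (the filenames and snippets are made up for illustration) and uses classical TF-IDF scoring from scikit-learn; production systems typically layer learned embeddings and access controls on top.

```python
# Rank heterogeneous documents against a free-text query using TF-IDF
# vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "q3_report.pdf": "Quarterly revenue grew 12% driven by enterprise renewals.",
    "budget.xlsx": "Headcount plan and budget allocation for the security team.",
    "standup.mp3": "Discussed the renewal pipeline and two at-risk accounts.",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(documents.values())

def search(query: str, top_k: int = 2):
    scores = cosine_similarity(vectorizer.transform([query]), matrix).ravel()
    ranked = sorted(zip(documents.keys(), scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]

print(search("customer renewals"))
```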

AI is also used to detect sentiment and emotion contained in digital conversations and social media chatter. Companies use sentiment analysis to identify key emotional triggers for a variety of use cases, not only to improve employee relations but also relationships with customers—de-escalating situations where frustration or threats are observed, understanding brand loyalty, or identifying early signs of happiness or dissatisfaction within teams or customer bases.
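A minimal sentiment-scoring sketch might look like the following, assuming the Hugging Face transformers library is installed and allowed to download its default English sentiment model; the escalation threshold is an illustrative rule, not a recommendation.

```python
# Score short messages for sentiment and flag strongly negative ones for
# human follow-up.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

messages = [
    "This is the third time my order has been delayed. I'm done.",
    "Thanks for the quick fix, the team really appreciates it!",
]

for message, result in zip(messages, classifier(messages)):
    # Illustrative escalation rule: very confident negative predictions get flagged.
    flag = "ESCALATE" if result["label"] == "NEGATIVE" and result["score"] > 0.9 else "ok"
    print(f"[{flag}] {result['label']} ({result['score']:.2f}): {message}")
```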

Additional questions to help kick off your AI initiative

After landing on the key areas where AI can add value, consider the following questions:  

  • Which use cases can quickly prove value? 
  • Is there sufficient managerial or executive support for the intended use cases?
  • Are ROI expectations realistic among the stakeholders?
  • Are changes required to the compute infrastructure for this purpose? This involves hardware and IT capacity planning.
  • Do we have access to the right data for the intended use case? In the exploration phase, considering the 4 Vs of data can help accelerate discovery and unlock hidden value.
  • What are the common missteps to avoid in implementing AI?
  • Can we find the right skills in-house or via external vendors to make this a reality? It takes 18 separate skills to bring an AI solution to life from ideation to production level implementation.

Get informed and learn from the experts

AI is a vast and complex field. So, it would be helpful to get started with key AI terms and concepts. You may also enjoy learning how machine learning differs from traditional data analytics.

Of course, if time is at a premium, you can form a partnership with an AI firm. The data in your business tells a story. Finding these stories is what AI professionals do best. And, perhaps more importantly, they can help you avoid costly mistakes.