
USPTO awards Entefy 5 new patents

Entefy has been issued new patents by the USPTO for core inventions in intelligent communication and data privacy

PALO ALTO, Calif. June 28, 2019. Entefy Inc. is announcing a series of patents newly awarded by the U.S. Patent and Trademark Office (USPTO). In addition to a number of trade secrets, the company’s IP portfolio now includes 51 combined issued and pending patents. Entefy’s 5 latest patents cover company innovations in the areas of intelligent communication and data privacy.

Patent No. 10,135,764 describes the company’s “universal interaction platform for people, services, and devices.” This invention enables uniform communication between users, their devices, and popular services and is an important component of Entefy’s unique multimodal intelligence platform as well as its universal communicator application. The invention includes a modular message format and delivery mechanism for translating communication packets seamlessly from one format to another. At the core of the system is a communication intelligence that can receive a message in one format and automatically transform that message into a different format as required by the receiving user, service, or device. This technology can power any number of AI-based use cases involving people-to-machine and machine-to-machine communication.
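
To make the general idea of format-agnostic delivery concrete, here is a minimal, hypothetical sketch in Python. It is emphatically not the patented implementation; the `Message` type, the renderers, and the protocols below are invented purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical, simplified illustration of delivering one normalized message
# over different protocols. No names here come from Entefy's patent.

@dataclass
class Message:
    sender: str
    recipient: str
    body: str

def to_email(msg: Message) -> dict:
    # Render the normalized message as an email-style payload.
    return {"from": msg.sender, "to": msg.recipient,
            "subject": msg.body[:60], "text": msg.body}

def to_sms(msg: Message) -> dict:
    # SMS payloads are length-limited, so truncate the body.
    return {"to": msg.recipient, "text": msg.body[:160]}

RENDERERS = {"email": to_email, "sms": to_sms}

def deliver(msg: Message, target_protocol: str) -> dict:
    # One normalized message, rendered per the recipient's protocol.
    return RENDERERS[target_protocol](msg)

msg = Message("ada@example.com", "+15551234567", "Meeting moved to 3 PM.")
print(deliver(msg, "sms"))  # {'to': '+15551234567', 'text': 'Meeting moved to 3 PM.'}
```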

Patent No. 10,169,300, “Advanced zero-knowledge document processing and synchronization,” explains additional methods by which live, multi-user document collaboration products can provide users the same level of convenience as conventional tools without sacrificing security or privacy.

Continuing with the company’s history of key inventions in multi-protocol messaging, USPTO awarded Entefy a new patent, No. 10,169,447, dealing with a “system and method of message threading for a multi-format, multi-protocol communication system.” This patent covers methods that enable threading of messages in a conversation between one or more users involving multiple protocols—for example, threading user-to-user conversations taking place across email, SMS, and instant message services.

When it comes to sharing digital assets such as documents, photos, videos, and more, Entefy’s Adaptive Privacy Control (APC) technology enables unprecedented levels of data protection. This technology works by allowing users to encrypt even small bits of information within a larger file, such as a name in a document, important values in a spreadsheet, or even a small region of pixels in a photograph. With Patent No. 10,169,597 and Patent No. 10,305,683, APC extends to multi-layered protection in video files and multichannel audio files as well. Solutions using this technology provide users and content creators alike with advanced control over the viewing, distribution, and modification of their digital assets.
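
As an illustration of the underlying concept of field-level protection (and not Entefy’s APC implementation), the following sketch encrypts a single sensitive span inside a larger document using the open-source cryptography package. The document, field, and marker format are invented for this example.

```python
# Minimal sketch of field-level ("small bits within a file") encryption.
# This is NOT Entefy's APC; it only illustrates the general concept.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # only a key holder can reveal the field
cipher = Fernet(key)

document = "Q3 revenue: $4.2M. Contact: Jane Doe."
sensitive = "Jane Doe"               # the one field we want to protect

token = cipher.encrypt(sensitive.encode()).decode()
redacted = document.replace(sensitive, f"[ENC:{token}]")
print(redacted)                      # everyone sees the document, not the name

# A key holder can restore the protected field in place.
restored = redacted.replace(f"[ENC:{token}]",
                            cipher.decrypt(token.encode()).decode())
assert restored == document
```

The rest of the file stays readable to everyone, while the protected span is meaningful only to those holding the key.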

“Invention has always been and will always remain a major part of Entefy’s culture. I’m very proud of the team’s creativity and commitment to solving problems that are not just technically challenging, but also have potential for broad impact,” said Entefy’s CEO, Alston Ghafourifar. “As a team, we feel fortunate to be developing cutting edge capabilities in rapidly evolving areas of digital communication, security, data privacy, and machine intelligence.”

Today’s release is the latest in a series of patent announcements, including earlier Entefy patents that enable new forms of zero-trust system authentication as well as secure document collaboration.

ABOUT ENTEFY 

Entefy is an AI software company with multimodal machine learning technology (on-premise and SaaS solutions) designed to redefine automation and power the intelligent enterprise. Entefy’s multimodal AI platform encapsulates advanced capabilities in machine cognition, computer vision, natural language processing, audio analysis, and other data intelligence. Organizations use Entefy solutions to accelerate their digital transformation and dramatically improve existing systems—knowledge management, search, communication, intelligent process automation, cybersecurity, data privacy, and much more. Get started at www.entefy.com.


Building a healthy global population with AI

The World Health Organization (WHO) has emphasized the importance of universal health coverage, declaring that health is not a privilege, but a right. According to WHO, full coverage of essential health services is still unavailable to more than half of the world’s population. And now, as part of the Sustainable Development Goals, all UN Member States are working toward the ambitious goal of universal health coverage (UHC) by 2030.

Although universal healthcare is a hot-button topic here in the United States, it’s important to remember that impoverished or underdeveloped nations around the world still lack access to even the most basic forms of healthcare. Limited educational resources also translate into limited opportunities for doctors to be educated and trained in their native countries. AI can bring comprehensive healthcare resources to these areas and help provide half of the world’s population with this all-important human right.

When it comes to solving the healthcare shortage, it isn’t as easy as shipping doctors, medical equipment, or computers across the globe. Many places still lack the necessary resources and infrastructure, such as clean water, electricity, or Internet access, to sufficiently run clinics. So smarter and more comprehensive solutions are needed to tackle this challenge. The introduction of the Early Detection and Prevention System (EDPS) in India in 1998 is an example of such a solution. A study by Kempegowda Institute of Medical Sciences involving 933 patients showed an overall consistency rate of 94% between the EDPS and physicians.

AI has already shown the potential for improving patient-doctor relationships, but it can also assist doctors in diagnosing medical conditions. This makes it invaluable in rural areas where doctors are scarce and often operate without the support of specialists and peers to help make more complex diagnoses. This will allow for more effective treatment plans that can be reasonably accomplished within the limits of those regions.

AI can be utilized in areas where access to doctors is either limited or nonexistent. The spread of mobile and cloud technologies, especially in resource-poor areas, has made this easier. For example, apps have been developed and launched in rural Rwanda where blood for transfusions can be ordered and delivered by drone within minutes. In certain regions within Thailand, India, and China, machine learning and natural language processing (NLP) are being leveraged to guide cancer treatments. “Researchers trained an AI application to provide appropriate cancer treatment recommendations by giving it descriptions of patients and telling the application the best treatment options. The AI application uses NLP to mine the medical literature and patient records—including doctor notes and lab results—to provide treatment advice. When examining different patients, this application agreed with experts in more than 90% of patients in one study and 50% in another.”

Of course, AI’s value isn’t limited to only a handful of use cases in healthcare. It can also provide invaluable support to countries struck by natural disasters. One of the best examples of this is Nepal after the devastating 2015 earthquake. Entire villages were flattened by the quake, leaving many survivors destitute and without immediate shelter or aid. The United Nations Office for the Coordination of Humanitarian Affairs utilized AI in its recovery efforts. It mapped key information pertaining to the disaster from satellites, cell phones, and social media posts to learn what was needed and where, expediting the delivery of aid and supplies. The system also took into consideration structural damage to generate digital maps and ensure that relief workers could move safely. Doing so helped prevent the further death and injury that commonly occur in relief efforts reliant on traditional means.

Disease prevention and management can be critical to a developing country’s economy. Advanced machine learning can help by keeping track of new cases of particularly contagious pathogens to determine the risks and predict outbreak patterns. It’s simpler, more effective, and often cheaper to prevent an epidemic than it is to treat it. AI is already used to model and predict epidemics, based on how the disease is transmitted and the natural occurrences that can have an impact. It’s had success in predicting and mitigating the transmission of dengue fever in Manila, prompting researchers to work closely with the Philippine government in expanding the AI initiative. With half the world’s population at risk of developing dengue, this is no small milestone.

With advances in computing and software, healthcare globally is beginning to feel the positive impact of AI on a number of areas including patient care, drug manufacturing, and disaster recovery. With mobile and cloud technologies growing more widespread and becoming smarter with machine learning, universal health coverage for all no longer seems like a distant dream. 


What the Hawaii missile alert fiasco teaches us about UX

When people hear the term “user experience,” many assume it refers to some sort of technical design or development. But user experience (UX) is much more than that. UX serves as the conduit between an organization and its audience. User experience encompasses every aspect of how someone interacts with your company. The usability of your website, the functionality of your product, how adept your chatbots are at answering support questions – all of these fall under UX.

But building a logical, intuitive UX matters for your employees as well. As industries are transformed by artificial intelligence, you must make sure your team members know how to use new platforms safely and efficiently. If your processes are built on clunky, difficult-to-use systems or user interfaces (UI), bad things can happen.

You may recall that back in January, residents of Hawaii received a terrifying alert that a ballistic missile was headed straight for them. Fortunately, it was a false alarm, though it left people deeply shaken. Many were outraged that such an error could even happen.

It turns out that bad UX design played a role in the Hawaii missile warning mistake. One local paper published a photo recreation of the state’s alert notification interface. The option for sending drill alerts was almost indistinguishable from the one that would sound an actual alarm. The idea that someone could click the wrong link wasn’t at all far-fetched.

Thankfully, no one was hurt as a result of that mistake. But the error highlighted the importance of great UX.

How will you treat your guests?

In an earlier piece on the importance of user experience, Entefy referenced the words of designer Charles Eames, who said, “The role of the designer is that of a very good, thoughtful host, anticipating the needs of his guests.” We find that Eames’ design wisdom translates well to UX, as the concept of user-as-guest can be a useful starting point for shaping the experience.

What is the tone of your messaging when someone discovers your organization online or uses your products? Is it inviting or is it impersonal? When they arrive at your homepage, do your aesthetics, content, and navigation make them want to stay awhile? Or will they find a warmer reception somewhere else? No matter how impressive your décor, gourmet your food, or prominent your guest list, you will fail as a host if you’re not engaging.

You can apply the same logic to the UX of software your organization uses internally. Examine the technology your team uses from their perspectives. Does it make their jobs easier? Do the tools they rely on facilitate cooperation and dialogue? Or is every day a grind because they’re forced to interact with confusing or outdated software? As we saw with the Hawaii missile mishap, the quality of your UX has real consequences.

Three pillars of UX design

Your UX design should always be evolving. New tech platforms and industry trends will change how your audience interacts with your company and how your employees serve your customers and clients. There is no “done” with UX. You’ll always be revising your site, your messaging, your marketing channels, and your customer service workflows.

But there are core principles around which your UX should be designed. Every decision should begin and end with these in mind:

1. UX is all about the end user

Great UX design is rooted in empathy. As the creator of a product or service, you’re naturally close to what you’ve built. You understand everything about how it works, and you know what your company stands for. But you need to take a step back and consider the experience from the perspective of someone who’s never used it before. What are the stumbling blocks? What are their workflows? How could you make their interactions with your company or products easier, more valuable, or more enjoyable?

You can ask your end users for feedback through surveys and in-person discussion groups. But those aren’t always feasible. Besides, what people say and what they do are often very different. Someone might say they feel confident using your platform, but they might struggle more than they let on.

That’s where usability tests prove helpful. Just a handful of tests can reveal substantial gaps in the user experience, and closing those gaps could be a game-changer. Just remember that, like your broader UX strategy, usability testing has no end point. You always want to be measuring performance and seeing where you can make your UX that much better.

2. Design for your entire audience

One size rarely fits all. Everyone who interacts with your company brings different needs, expectations, and comfort levels to the table. Some may be quite tech-savvy, while others will face a learning curve. The best user experiences are built with all of these people in mind. They’re inclusive and responsive, and they come with a high level of user support.

Previously, Entefy examined the issue of ageist design in technology and how it excludes people from accessing goods and services. When you don’t consider the full spectrum of users’ needs, that all-important dialogue between you and your audience breaks down. Someone who can’t easily navigate your site or is greeted with radio silence on your support channels isn’t going to feel heard. And you can be sure that they will shift their attention – and their business – to a company that’s more responsive or accommodating.

3. Be thoughtful about how you use technology

Tools such as automation platforms and chatbots can improve your UX, if they make sense for your employees and your audience. In the rush to prepare their companies for the age of AI, some leaders believe they must use every new tech tool at their disposal. But automated workflows, chatbots, and analytics programs are only effective if they support your end users.

Before implementing any new program, view it through the lens of your employees or audience. Will this feature directly impact their productivity or satisfaction? If the answer is yes, integrate it into your UX, but make sure to educate them about the change. Even if a tool or feature seems intuitive to you, it may be more difficult for your audience to grasp. But if you explain its purpose and help them through the initial adjustment period, they’ll likely be willing to follow your lead.

We live in an age of more content and more sophisticated technology than the world has ever seen. But communication can become a lost art if we’re not careful about how we apply those riches. Good UX design helps you maximize your resources and engage users in meaningful conversations for years to come. 


Locking out cybersecurity threats with advanced machine learning

In the early days of the Internet, being hacked wasn’t usually newsworthy. It was more of a nuisance or an inconvenient prank. Threats and malware could be easily mitigated with basic tools or system upgrades. But as technology became smarter and more dependent on the Internet, it also became more vulnerable. Today, cloud and mobile technologies serve as a veritable buffet of access points for hackers to exploit, compared to the slimmer pickings of a decade or so ago. As a result, cyberattacks are now significantly more widespread and can damage more than just a computer. Being hacked nowadays is much more than a mere annoyance. What started out historically as simple pranks now has the muscle to cripple hospitals, banks, power grids, and even governments. But as hacking has grown more sophisticated, so has cybersecurity, with the help of advanced machine learning.

Cybersecurity is the top concern of an economic, social, and political world that depends almost entirely on the Internet. But most systems aren’t yet capable of catching all the threats, which are multiplying at an incredible rate—an average of 350,000 new varieties per day. Within the past two years, there has been a surge of significant cyberattacks, ranging from DDoS attacks to ransomware infections, targeting hospital patient databases, banking networks, and military communications. All manner of information is at risk in these sorts of attacks, including personal health and banking information, classified data, and government services. The city of Atlanta recently spent over $2.6 million recovering from a ransomware attack that crippled the city’s online services. A major search engine paid out $50 million in a settlement after a massive customer database breach. So it’s easy to glean that hacking has moved on from inconvenient to downright costly.

Security analysts and experts are responsible for hunting down and eliminating potential security threats. But this is tedious and often strenuous work. It involves massive sets of complex data, with plenty of opportunity for false flags and threats that can go undetected. And when breaches are found, the time it takes for a fix to be built and implemented varies by industry between 100 and 245 days. This is more than enough time for a hacker to cause serious and costly damage. The shortcomings of current cybersecurity, coupled with the dramatic rise in cyberattacks, mean that by 2021, 3.5 million cybersecurity jobs will be needed, and cyberattacks will cost an estimated $6 trillion per year. But because burnout is so common in the cybersecurity industry, many of those jobs are expected to go unfilled or not stay filled.

The key to effective cybersecurity is to work smarter, not harder. Supplementing cybersecurity systems with AI can be an intelligent countermeasure to cyberattacks. AI as a security measure has already been implemented in some small ways in current technology. Mainstream smartphones, for example, already use its ability to scan and process biometrics in real time. Large tech firms have likewise begun to use AI protection in their networks and cloud operations.

AI’s most significant capability is robust, lightning-fast data analysis that can learn from far greater volumes of data in far less time than human security analysts ever could. White hat hackers who develop malware for security firms can train it to recognize malicious patterns and behaviors in software before conventional antivirus programs and firewalls learn to identify them. Natural language processing can prepare a system for incoming threats by scanning news reports and articles about cyberattacks. Artificial intelligence can be at work around the clock without fatigue, analyzing trends and learning new patterns. Perhaps most important, AI can actively spot errors in security systems and “self-heal” by patching them in real time, cutting back significantly on remediation time.
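
As a toy example of the kind of pattern learning described above, the sketch below trains an unsupervised anomaly detector on features of normal network traffic and flags events that deviate sharply. The features, data, and thresholds are invented for illustration; real security systems combine many models and signals.

```python
# Minimal sketch of ML-based threat flagging over two assumed features per
# network event: bytes transferred and requests per minute. Illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[500, 10], scale=[100, 2], size=(1000, 2))   # typical traffic
attack = rng.normal(loc=[5000, 200], scale=[500, 20], size=(5, 2))   # exfiltration-like spikes

# Learn what "normal" looks like, without any labeled attacks.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

events = np.vstack([normal[:3], attack])
flags = model.predict(events)        # +1 = looks normal, -1 = anomalous
for event, flag in zip(events, flags):
    print(event, "ALERT" if flag == -1 else "ok")
```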

Cyberattacks may now be newsworthy for the damage they cause, but implementing AI into cybersecurity systems can greatly diminish the risk. Machine learning can speed up an otherwise time-consuming and costly process by identifying breaches in security before the damage becomes too widespread.


AI earning a seat in politics

Alexandria Ocasio-Cortez is no stranger to making headlines. She made quite a few after her election to Congress in 2018, namely as a woman, a millennial, and a democratic socialist who had just upended a ten-term incumbent to secure a seat in one of the highest branches of government. And in 2019, she shows no signs of slowing down, particularly after her comments at SXSW about automation and the future of the American economy. She is not the first to remark on AI’s place in the world, but she is among the most high-profile politicians to do so. Her comments, and the subsequent public response, brought AI to the American political platform more pointedly than before. It’s clear that automation is going to become a hot-button topic in the future of politics, not just in America but around the globe, with opinions as divided as politics itself.

In recent years, AI has been making its way into politics worldwide in ways both blatant and subtle. In the Tama city area of Tokyo, Japan, Michihito Matsuda ran a “fair and balanced” campaign for mayor that garnered 4,000 votes. That may not sound particularly impressive on its own, but it is notable considering that Michihito-san is a robot. On a larger scale, in Russia, an AI named Alisa not only managed to secure a nomination for president, she garnered more than 25,000 votes. She ultimately lost to Putin, but her nomination and the subsequent base of support highlight growing confidence in AI systems in governance and politics.

So far, AI has primarily been used to support the campaigns of flesh-and-blood politicians, and machine learning algorithms have been instrumental in targeted political advertising and Internet-based opinion polls. But the appeal of AI politicians seems to lie in how they can compensate for traits that often cost human politicians support. AI has unending stamina, can quickly react to new events, and can multitask in ways previously unimaginable. But is that enough to make an AI system a worthy policymaker and a viable threat to human politicians?

Ocasio-Cortez has called for the public to embrace automation in ways that free up our time to create and live as we choose. This was her response to concerns raised by some of her constituents and perhaps much of the country: will AI take over jobs? This concern exemplifies a powerful dichotomy when it comes to public opinion of AI. A recent study in Europe has uncovered that while the public is still fearful of what automation could do to the job market, “one in four Europeans would prefer artificial intelligence to make important decisions about the running of their country.” This may be in response to widespread frustration with government. However, what seems to be gaining ground is the belief that AI’s logic-based framework may be better suited to policymaking. Although Alisa and Mitchido-san didn’t win their political races, they’ve opened our eyes to new opportunities for AI in politics.


Other “F” words: fraud in finance

Of all the “F” words that raise our collective blood pressure, none does so quite as effectively as “fraud.” It’s a crime that predates currency and, alongside society and technology, has only grown more sophisticated with time. The rise of online banking and ecommerce has only multiplied the opportunities for fraud. It has become a pervasive and costly problem whose depth and complexity make it almost impossible to prevent. Fortunately, with artificial intelligence and its potent data intelligence capabilities, early detection of fraud is becoming a reality.

Needless to say, wherever there’s money, there’s usually a system to monitor its activity and check for fraud. These types of systems often track and gauge transaction activities, spending habits, or react to reported cases of fraud by those impacted. There’s already a certain degree of automation integrated in anti-fraud systems, but overall, manual human reviews remain the primary line of defense. This is a challenge because manual reviews of voluminous financial data are error-prone, repetitive, and time consuming, not to mention costly due to wages and ongoing mandatory training of staff. In fact, according to a North American business survey, “manual review staff account for the largest single slice of [the] fraud management budget.”

Historically, fraud detection has focused on discovering fraud rather than preventing it. This practice often results in higher rates of false positives, or falsely-flagged transactions, which can erode or even destroy consumer trust. Merchants and banks also lose revenue as well as customers who, as a result of false flags, are hesitant to use their services or trust in their ability to protect data. In financial services, the technology already deployed to mitigate fraud leaves much to be desired. The traditional paradigm for fraud detection lies in customer history, where rule-based automated systems react to certain preset parameters. This can be a challenge because “false positives occur regularly with traditional rule-based anti-fraud measures, where the system flags anything that falls outside a given set of parameters.” Anti-fraud measures based purely on rules are devoid of the human sensibility that is often essential to understanding and mitigating fraud.

What sets machine learning systems apart is their ability to learn from massively diverse sets of data points, rather than merely focusing on customer history or a set of predetermined parameters. This way, sophisticated algorithmic models can build a more comprehensive profile for each customer and provide the right context to their purchasing history. Financial service juggernauts have already begun utilizing deep learning in their fraud detection systems, and by doing so, one company was able to cut its number of false positives in half.
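
A minimal sketch of this learned approach, as opposed to fixed rules, might look like the following. All feature names, data, and thresholds here are synthetic, and the model is a generic gradient-boosted classifier rather than any particular institution’s system.

```python
# Sketch: score transactions from diverse features instead of hard-coded rules.
# Features (all invented): amount, hour of day, distance from home, merchant risk.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.exponential(80, n),      # transaction amount ($)
    rng.integers(0, 24, n),      # hour of day
    rng.exponential(10, n),      # km from usual location
    rng.random(n),               # merchant risk score
])
# Synthetic labels: fraud is rare and skews toward large, distant, risky purchases.
y = ((X[:, 0] > 150) & (X[:, 2] > 15) & (X[:, 3] > 0.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# Probabilities instead of hard rule hits let the business tune the alert
# threshold to trade off missed fraud against false positives.
probs = clf.predict_proba(X_te)[:, 1]
print("flagged:", (probs > 0.5).sum(), "of", len(probs))
```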

The implementation of AI in anti-fraud processes doesn’t entirely eliminate the need for the human touch. The human-AI feedback cycle necessitates a strong collaboration between people and machines. Many AI systems remain dependent on human input, especially in their early phases, both for integrating diversity into their models and for understanding data. Highly skilled analysts retain the ability to think like fraudsters and understand the complexity of human emotions that can affect how and why fraud is committed. In return, AI can help process vast, complex sets of data in a fraction of the time and do so around the clock.

These sorts of advantages are exactly what the financial industry has been looking for, especially in the wake of notorious financial fraud cases making headlines around the world. We’ve previously discussed how advanced AI can help streamline compliance processes, but at a business level, it provides even more. Leveraging the power of AI is a smart way to combat fraud and give peace of mind to concerned customers who are tired of hearing the “F” word.


Advanced automation and where employment is moving

Type “Will AI…” into a search engine, and the first suggestion you’ll likely see is “Will AI take over?” The second? “Will AI take over jobs?” So it’s easy to glean the public’s biggest concern when it comes to AI. But even though AI is quickly becoming a technological juggernaut, this type of concern isn’t entirely well-founded. After all, the job market will do as it has always done when faced with new technology – it will adapt, it will evolve, and it will ultimately thrive in spite of our fears.

The job market has always been dynamic, not static, with obsolete jobs often replaced with better jobs to reflect the change in times. And AI isn’t about to alter any of that. Jobs evolve and change with every generation, and panic is often the first response to this phenomenon. Popular perception is that AI technology will displace humans in their jobs, but this is the same perception of every major technological milestone that historically was believed to do the same.

It’s important to remember that a job is not a singular activity, but rather a collection of different tasks. When automation is proposed, it’s immediately assumed that the entire job will be at risk of automation rather than just one of the tasks. Additionally, such assumptions typically disregard the potential for job creation. Even the Oxford paper claiming that up to 47% of US employment is at risk of automation cautions that predicting technological progress and the job creation that comes with it is difficult. And historically speaking, automation of certain trades actually created more jobs and lowered the price of goods that became easier to produce.

A chief example is Henry Ford’s introduction of the moving-chassis assembly line to the automotive industry. At the time, this too was met with strong skepticism. But the newly-automated system reduced the production time of the Model T from 12 hours per car to merely 2.5 hours. This made the cars less expensive to make, which in turn made them more affordable for the average worker. Meanwhile, more jobs, at highly competitive wages, were now available at Ford’s factory for workers who could maintain the new machinery. These two outcomes effectively gave birth to a thriving American middle class. The continued success of Henry Ford and his employees depended on the automotive industry’s brave decision to embrace new technology instead of avoiding it.

Following Oxford’s earlier paper, McKinsey conducted a new study in 2017 that delved deeper into historical context and the nuance of the job market. By taking into account the fact that a job is a collection of tasks rather than just one, the study determined that less than 5% of jobs are at risk of being fully automated, rather than the more daunting 47% estimated in Oxford’s study. Automating a job, by and large, means automating some of its tasks.

AI is meant to augment human capabilities, not usurp them. Machine learning already shows tremendous promise in improving the healthcare industry. For instance, AI systems are streamlining processes bogged down by obsolete traditional methods and providing tools to assist doctors with diagnoses and other tasks. And all of this is being achieved without replacing a single doctor.

Historically speaking, transformative change is rarely met with broad-based enthusiasm, and this has been a trend since time immemorial. But in spite of our fears, advanced technology has helped reshape our way of life into greater health, wealth, and prosperity for more people around the globe. Keep that in mind, and our search suggestions may soon change from “Will AI take over jobs?” to “How will AI make jobs better?”


Patients and doctors, now better together with AI

It’s no secret that healthcare in the United States is in crisis. With skyrocketing costs, dysfunctional health coverage, and overburdened doctors, hospitals, and staff, it’s hardly surprising that only 39% of patients have ‘a great deal’ of confidence in the medical system. This erosion of trust between patients and doctors could have a lasting impact on our collective health (as recent resurgences in measles and other preventable diseases have shown). But artificial intelligence’s significant impact on the healthcare industry, and its potential for more, may be just the tool we need to repair the patient experience.

Machine learning has already gained a foothold in modern medicine as a powerful tool to help doctors diagnose, perform medical procedures, and prescribe medication more quickly and with greater accuracy. The boon for the healthcare economy is enormous, with the healthcare AI market expected to reach $6.6 billion by 2021 and an estimated $150 billion in annual savings by 2026 in the United States alone. For patients who fear that any unforeseen medical bill could spell bankruptcy, this should come as somewhat of a reprieve. But saving money alone may not be enough to build patient confidence; patient-doctor interaction suffers from critical symptoms as well.

With their doctors spending more time doing paperwork than interfacing with them, patients often feel neglected or outright ignored in the doctor’s office. With little knowledge shared between the two to build a relationship, patient perception of doctors has become increasingly negative. Of course, doctors have their reasons, although those reasons do little to appease anxious patients: decreased insurance payouts mean that doctors, particularly specialists, need to take on as many patients as possible within increasingly shorter time spans in order to make enough money to run their offices. Add to this the need to spend more than “two-thirds of their time doing paperwork” on administrative tasks, including documentation, follow-up, and dealing with insurance companies and several other layers of bureaucracy. The result is physicians who interact more with paperwork than with patients, and job dissatisfaction that can carry over into doctor-patient interactions. Research suggests that an important correlation may exist between a patient’s satisfaction and the professional satisfaction of their physicians. By improving physician satisfaction, frustration with the medical process may finally be curable.

AI optimization has already made tremendous headway in terms of how quickly and accurately physicians can make diagnoses. But if AI could also make a dent in the paperwork problem, physicians could be afforded the time to take on more patients every week and spend more than the current average of 22 minutes per encounter with their existing patients. 

Additionally, AI technology that already exists could expedite the diagnostic process even further, allowing doctors to forge the deeper connections their patients often long for. Wearables like the Apple Watch and Fitbit already track metrics such as heart rate and steps taken per day. Even cell phones have apps that can track the amount of REM sleep you achieve each night. If the technology continues to develop in this way, including the ability to track other metrics such as body temperature or even blood content levels, wearables can potentially work like a car’s computer, tracking and keeping record of irregularities in our bodies for doctors to easily access later. These recorded symptoms could further lend confidence to patients’ perception of their own health and help them put their ambiguous symptoms into words – the end of the era of “Well Doctor, I had this weird, kinda stabby pain in my stomach,” and the beginning of accurately described symptoms and expedited diagnoses.
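
As a simple illustration of how such a recorded log might surface irregularities for a doctor’s later review, consider this hypothetical sketch that compares daily resting heart rate against a personal baseline. The data and the three-sigma threshold are invented for illustration and are not a medical method.

```python
# Sketch: flag days whose resting heart rate deviates sharply from a
# personal baseline, producing a record a physician could review later.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
hr = pd.Series(rng.normal(62, 3, 90), name="resting_hr")  # 90 days of readings
hr.iloc[70:73] = [95, 98, 96]        # a brief, unexplained elevation

baseline = hr.iloc[:60]              # personal baseline from the first 60 days
z = (hr - baseline.mean()) / baseline.std()

irregular = hr[z.abs() > 3]          # days that deviate sharply from baseline
print(irregular)                     # days 70-72 stand out for later review
```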

AI in healthcare is also poised to impact how patients choose their physicians. Studies have shown that while popular perception of AI is mixed, hope for AI in healthcare continues to trend positively, particularly for systems that provide patient care. With that knowledge in hand, utilizing AI in the practice could make rebuilding patient trust in doctors easier. This would give forward-thinking doctors a reputation for using efficient, accurate, and cutting-edge technology, propelling them ahead of their peers.

Ultimately, there is no substitute for doctors. AI has the capacity to suggest a diagnosis with accuracy, but doctors have the experience and a human approach, such as banter, thoughtfulness, and empathy, that cannot yet be replicated by machines. It was the loss of that human approach through our overburdened healthcare system that created the gap between doctors and their patients in the first place. By utilizing advanced AI to the best of its potential, doctors can bridge that gap with patients, and restore the patient experience to one of mutual trust, faith, and respect.


Entefy CEO Alston Ghafourifar speaks on enterprise AI readiness at event hosted by Franklin Templeton Investments

Entefy CEO Alston Ghafourifar spoke to a group of international institutional investors hosted by Franklin Templeton Investments in San Francisco, California. The event focused on the future of AI, its impact and value creation, as well as the importance of data optimization for organizations. 

In an in-depth conversation about organizational bottlenecks for building effective AI solutions, Alston expressed his view on the importance of a company’s commitment—especially at the executive level—to digital transformation and the intelligent enterprise. Realizing the full potential of AI and machine intelligence within the enterprise closely resembles any other type of organizational change. “If you don’t have the organizational will to address the data readiness problem from a system’s perspective then you’re not going to actually be leading the stack,” Alston stated.

In an effort to transform idle, dark data into valuable insights, enterprises worldwide are looking into ways AI can make their operations more efficient, enhance their customer experience, and ultimately become more competitive. This pursuit is occurring across a broad spectrum of industries as well as functional departments within organizations. Underlying this activity is a paradigm shift in how companies perceive the role of data and AI to optimize productivity and business processes.

When it comes to AI and data, every organization is at a different phase of readiness. Alston shared that, at Entefy, we spend time understanding where an organization is in its AI journey. “Are they just talking about it?” Or do they have “at least a process for how they aggregate, transform, and move information throughout the system?” Entefy’s advanced multimodal AI is designed to solve the information overload problem for organizations large and small. And to do this, we, along with our customers, are reimagining knowledge management, workflow orchestration, and process automation.

We thank the event organizers, speakers, and attendees for making this event possible. Special thanks to Ryan Biggs from Franklin Templeton Investments who moderated the discussion. 


Is your enterprise ready for AI? Here are 18 important skills you need to make it happen

The artificial intelligence (AI) revolution is unfolding right before our eyes, and organizations of all types have already begun taking advantage of this amazing technology. Today, machine learning is transforming industries and disrupting software and automation across the globe.

AI/machine learning as a discipline is highly dynamic, relying on a unique blend of science and engineering that requires training models and adjusting algorithms to make useful predictions based on a dataset. Building algorithmic models and thoughtful process orchestration are central to work in this field, and extensive experimentation is simply par for the course. And high precision in AI and machine learning is achieved where science meets art.

At Entefy, we’re obsessed with advanced computing and its potential to change lives for the better. Given our work in AI and machine learning over the years, we’ve had the good fortune of working with amazing people, inventing new things, and building systems that bring unprecedented efficiency to operations, processes, and workflows. This includes our work in the emerging, complex field of multimodal AI.

So, what does it take to make it all work and work well? What are the proficiencies and competencies required to bring AI applications to life? Here are the 18 skills needed for the successful development of a single AI solution from ideation to commercial delivery:

Architecture: Design and specification of software subsystems, supporting technologies, and associated orchestration services to ensure scalable and performant delivery.

Infrastructure Engineering: Implementation and maintenance of physical and virtual resources, networks, and controls that support the flow, storage, and analysis of data.

DataOps: Management, optimization, and monitoring of data retrieval, storage, transformation, and distribution throughout the data life cycle including preparation, pipelines, and reporting. 

Machine Learning Science: Research and development of machine learning models including designing, training, validation, and testing of algorithmic models.

Machine Learning Engineering: Development and software optimization of machine learning models for scalable deployment.

DevOps: Building, packaging, releasing, configuring, and monitoring of software to streamline the development process.

Backend Engineering: Development of server-side programs and services, including implementation of core application logic, data storage, and APIs.

Frontend Engineering: Development of client-side applications and user interfaces such as those present in websites, desktop software, and mobile applications.

Security Engineering: Implementation and management of policies and technologies that protect software systems from threats.

Quality Assurance: Quality and performance testing that validates the business logic and ensures the proper product function.

Release Management: Planning, scheduling, automating, and managing the testing and deployment of software releases throughout a product lifecycle.

UI Design: Wireframing, illustrations, typography, image and color specifications that help visually bring user interfaces to life. 

UX Design: Specification of the interaction patterns, flows, features, and interface behaviors that enhance accessibility, usability, and overall experience of the user interaction.  

Project Management: Coordination of human resources, resolution of dependencies, and procurement of tools to meet project goals, budget, and delivery timeline.

Product Management: Scoping user, business, and market requirements to define product features and manage plans for achieving business goals.

Technical Writing and Documentation: Authoring solution and software specifications, usage documentation, technical blogs, and data sheets. 

Compliance and Legal Operations: Creation and monitoring of policies and procedures to ensure a solution’s adherence to corporate and governmental regulations through development and deployment cycles, including IP rights, data handling, and export controls. 

Business Leadership: Strategic decision making, risk assessment, budgeting, staffing, vendor sourcing, and coordination of resources all in service of the core business and solution objectives.