Enabling comprehensive location-based data retrieval through knowledge graph augmentation

U.S. Patent Number: 12,067,063
Patent Title: Temporal transformation of location-based queries
Issue Date: August 20, 2024
Inventors: Ghafourifar, et al.
Assignee: Entefy Inc.

Patent Abstract

A system and method for transforming location-based data queries into the temporal domain by leveraging a location-to-time knowledge conversion graph. In some systems that contain diverse sets of data objects, only certain objects may contain explicit location data, while others may not. Querying this diverse data by location properties would therefore likely yield incomplete results. In some embodiments, this method allows for the transformation and augmentation of a given data query containing location-based filtering properties into a time-region-based lookup, wherein a given location has been assigned to a time region in the given data graph and all data events within that time region may be augmented with location metadata automatically in the knowledge graph. Over time, a system utilizing these embodiments can offer comprehensive location-based data services and insights across a diverse set of data objects, even when not all objects contain explicit location information.

USPTO Technical Field

This disclosure relates generally to converting a location-based query to a time-based query.

Background

Many data items are generated with location information embedded as metadata. For example, an image file may include global positioning system (GPS) data indicating where the image file was created. Location data associated with data items may be used for a variety of purposes. For example, an individual may query a system that tracks data items to determine which data items the individual generated while on a trip to Europe. However, some data items may not be associated with location data. Such data items may not be considered by systems and applications that operate on such data based on location, thereby reducing the scope or efficacy of location-based queries with regard to retrieval of diverse data in a given system.
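To make the idea concrete, here is a minimal Python sketch of the transformation described above. All data structures, names, and values are illustrative assumptions, not the patent's implementation: a location query is converted into a time-range lookup via a location-to-time mapping, and items created in that time region are augmented with the inferred location metadata.

```python
from datetime import datetime

# Hypothetical location-to-time knowledge graph: each location label maps to
# a time region during which the user is known to have been at that location.
LOCATION_TIME_REGIONS = {
    "europe": (datetime(2023, 6, 1), datetime(2023, 6, 14)),
}

# Data items: only some carry explicit location metadata.
ITEMS = [
    {"id": 1, "created": datetime(2023, 6, 3), "location": "europe"},
    {"id": 2, "created": datetime(2023, 6, 5), "location": None},  # no GPS data
    {"id": 3, "created": datetime(2023, 7, 1), "location": None},
]

def query_by_location(location):
    """Transform a location query into a time-range lookup, then augment
    matching items with the inferred location metadata."""
    start, end = LOCATION_TIME_REGIONS[location]
    results = []
    for item in ITEMS:
        if start <= item["created"] <= end:
            if item["location"] is None:
                item["location"] = location  # augment the knowledge graph
            results.append(item)
    return results

hits = query_by_location("europe")  # returns items 1 and 2; item 2 is augmented
```

Note that item 2 has no explicit location data yet is still returned, which is the incompleteness problem the time-region transformation addresses.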

Read the full patent here.

ABOUT ENTEFY

Entefy is an enterprise AI software company. Entefy’s patented, multisensory AI technology delivers on the promise of the intelligent enterprise, at unprecedented speed and scale.

Entefy products and services help organizations transform their legacy systems and business processes—everything from knowledge management to workflows, supply chain logistics, cybersecurity, data privacy, customer engagement, quality assurance, forecasting, and more. Entefy’s customers vary in size from SMEs to large global public companies across multiple industries including financial services, healthcare, retail, and manufacturing.

To leap ahead and future-proof your business with Entefy’s breakthrough AI technologies, visit www.entefy.com or contact us at contact@entefy.com.

Searchable tag clouds, associations, and correlations within encrypted data files in zero-knowledge systems

U.S. Patent Number: 11,755,629
Patent Title: System and method of context-based predictive content tagging for encrypted data
Issue Date: September 12, 2023
Inventors: Ghafourifar, et al.
Assignee: Entefy Inc.

Patent Abstract

This disclosure relates to systems, methods, and computer readable media for performing multi-format, multi-protocol message threading in a way that is most beneficial for the individual user. Users desire a system that will provide for ease of message threading by “stitching” together related communications in a manner that is seamless from the user’s perspective. Such stitching together of communications across multiple formats and protocols may occur, e.g., by: 1) direct user action in a centralized communications application (e.g., by a user clicking ‘Reply’ on a particular message); 2) using semantic matching (or other search-style message association techniques); 3) element-matching (e.g., matching on subject lines or senders/recipients/similar quoted text, etc.); and 4) “state-matching” (e.g., associating messages if they are specifically tagged as being related to another message, sender, etc. by a third-party service, e.g., a webmail provider or Instant Messaging (IM) service).

USPTO Technical Field

This disclosure relates generally to systems, methods, and computer readable media for message threading across multiple communications formats and protocols.

Background

The proliferation of personal computing devices in recent years, especially mobile personal computing devices, combined with a growth in the number of widely-used communications formats (e.g., text, voice, video, image) and protocols (e.g., SMTP, IMAP/POP, SMS/MMS, XMPP, YMSG, etc.) has led to a communications experience that many users find fragmented and difficult to search for relevant information. Users desire a system that will provide for ease of message threading by “stitching” together related communications across multiple formats and protocols—all seamlessly from the user’s perspective. Such stitching together of communications across multiple formats and protocols may occur, e.g., by: 1) direct user action in a centralized communications application (e.g., by a user clicking ‘Reply’ on a particular message); 2) using semantic matching (or other search-style message association techniques); 3) element-matching (e.g., matching on subject lines or senders/recipients/similar quoted text, etc.); and 4) “state-matching” (e.g., associating messages if they are specifically tagged as being related to another message, sender, etc. by a third-party service, e.g., a webmail provider or Instant Messaging (IM) service).

With current communications technologies, conversations remain “siloed” within particular communication formats or protocols, leading to users being unable to search across multiple communications in multiple formats or protocols and across multiple applications on their computing devices to find relevant communications (or even communications that a messaging system may predict to be relevant), often resulting in inefficient communication workflows—and even lost business or personal opportunities. For example, a conversation between two people may begin over text messages (e.g., SMS) and then transition to email. When such a transition happens, the entire conversation can no longer be tracked, reviewed, searched, or archived by a single source once it has ‘crossed over’ protocols. For example, if the user ran a search on their email search system for a particular topic that had come up only in the user’s SMS conversations, such a search may not turn up optimally relevant results.
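The "element-matching" strategy listed above can be sketched in a few lines of Python. The message fields and key function here are illustrative assumptions, not the patent's implementation: messages from different protocols are grouped into a single thread when their normalized subject and participant set match.

```python
def thread_key(msg):
    """Derive a protocol-agnostic threading key from message elements:
    a normalized subject line plus the set of participants."""
    subject = msg.get("subject", "").lower()
    for prefix in ("re:", "fwd:"):  # strip common reply/forward prefixes
        if subject.startswith(prefix):
            subject = subject[len(prefix):].strip()
    participants = frozenset(p.lower() for p in msg["participants"])
    return (subject, participants)

def stitch(messages):
    """Group messages from different formats/protocols into unified threads."""
    threads = {}
    for msg in messages:
        threads.setdefault(thread_key(msg), []).append(msg)
    return threads

msgs = [
    {"protocol": "sms",   "subject": "",                "participants": ["alice", "bob"]},
    {"protocol": "email", "subject": "Dinner plan",     "participants": ["alice", "bob"]},
    {"protocol": "email", "subject": "Re: Dinner plan", "participants": ["alice", "bob"]},
]
threads = stitch(msgs)
```

In this toy run the two emails stitch into one thread (the "Re:" prefix is normalized away), but the SMS lands in its own thread because element-matching alone cannot bridge an empty subject line, which is exactly why the disclosure also describes semantic matching and state-matching.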

Further, a multi-format, multi-protocol, communication threading system, such as is disclosed herein, may also provide for the semantic analysis of conversations. For example, for a given set of communications between two users, there may be only a dozen or so keywords that are relevant and related to the subject matter of the communications. These dozen or so keywords may be used to generate an “initial tag cloud” to associate with the communication(s) being indexed. The initial tag cloud can be created based on multiple factors, such as the uniqueness of the word, the number of times a word is repeated, phrase detection, etc. These initial tag clouds may then themselves be used to generate an expanded “predictive tag cloud,” based on the use of Markov chains or other predictive analytics based on established language theory techniques and data derived from existing communications data in a centralized communications server. These initial tag clouds and predictive tag clouds may be used to improve message indexing and provide enhanced relevancy in search results. In doing so, the centralized communications server may establish connections between individual messages that were sent/received using one or multiple communication formats or protocols and that may contain information relevant to the user’s initial search query.
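The two-stage tag-cloud flow above can be sketched as follows. This is a minimal illustration, not the patented method: the "uniqueness" and phrase-detection factors are stood in for by simple frequency and word-length filters, and the predictive expansion uses a first-order Markov chain (bigram successor counts) learned from a stand-in corpus.

```python
import re
from collections import Counter, defaultdict

def initial_tag_cloud(texts, top_n=5, min_len=4):
    """Build an initial tag cloud from word repetition, keeping only
    longer words as a crude stand-in for a uniqueness score."""
    words = re.findall(r"[a-z]+", " ".join(texts).lower())
    counts = Counter(w for w in words if len(w) >= min_len)
    return [w for w, _ in counts.most_common(top_n)]

def predictive_tag_cloud(tags, corpus, per_tag=2):
    """Expand the initial cloud via a first-order Markov chain: for each
    tag, add its most frequent successors in the existing corpus."""
    successors = defaultdict(Counter)
    for text in corpus:
        tokens = re.findall(r"[a-z]+", text.lower())
        for a, b in zip(tokens, tokens[1:]):
            successors[a][b] += 1
    expanded = set(tags)
    for tag in tags:
        for nxt, _ in successors[tag].most_common(per_tag):
            expanded.add(nxt)
    return expanded

conversation = ["quarterly budget review on friday", "send the budget spreadsheet"]
corpus = ["budget forecast meeting", "budget forecast slides", "review notes"]
tags = initial_tag_cloud(conversation)       # includes "budget"
cloud = predictive_tag_cloud(tags, corpus)   # also includes "forecast"
```

Here "forecast" enters the predictive cloud even though it never appears in the conversation, which is how the expanded cloud can surface relevant messages an exact-keyword index would miss.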

The subject matter of the present disclosure is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above. To address these and other issues, techniques that enable seamless, multi-format, multi-protocol communication threading are described herein.

Read the full patent here.


A new Intent Determination Service (IDS) architecture to reduce computational costs and improve intent recognition for textual model inputs

U.S. Patent Number: 11,914,625
Patent Title: Search-based natural language intent determination
Issue Date: February 27, 2024
Inventors: Ghafourifar, et al.
Assignee: Entefy Inc.

Patent Abstract

Improved intelligent personal assistant (IPA) software agents are disclosed that are configured to interact with various people, service providers, files, and/or smart devices. More particularly, this disclosure relates to an improved Natural Language Processing (NLP) Intent Determination Service (IDS) that is able to determine the likely best action to take in response to generic user commands and queries. The improved NLP IDS disclosed is said to be ‘search-based’ because, rather than attempt to parse incoming user commands and queries up front, the incoming user commands and queries are searched against a pre-generated database of exemplary user commands (e.g., having associated action or parsing identifiers) to determine the most relevant search result(s). The associated system actions and known grammar/parsing rules of the most relevant search result(s) may then be used to process the incoming user command or query—without having to actually parse the incoming user command or query from scratch.
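The 'search-based' flow in the abstract can be sketched as follows. Everything here is illustrative: the exemplar database, action identifiers, and Jaccard token-overlap scoring are stand-ins for the pre-generated command database and whatever search/ranking machinery a real IDS would use.

```python
# Pre-generated database of exemplary user commands, each tagged with an
# action identifier (all entries hypothetical).
EXEMPLARS = [
    ("send an email to {contact}",       "email.compose"),
    ("send a text message to {contact}", "sms.compose"),
    ("order flowers for {contact}",      "flowers.order"),
    ("play some {genre} music",          "music.play"),
]

def tokenize(s):
    return set(s.lower().replace("{", "").replace("}", "").split())

def determine_intent(command):
    """Return the action id of the most relevant exemplar by Jaccard token
    overlap; a real system would use a full-text or vector search index.
    The incoming command is never parsed from scratch."""
    query = tokenize(command)
    def score(exemplar):
        tokens = tokenize(exemplar[0])
        return len(query & tokens) / len(query | tokens)
    return max(EXEMPLARS, key=score)[1]

action = determine_intent("send an email to Bob")  # -> "email.compose"
```

The ambiguity discussed in the background is visible even in this toy version: "send" alone matches both the email and SMS exemplars, and it is the remaining tokens of the retrieved exemplar that disambiguate, rather than an up-front grammar parse.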

USPTO Technical Field

This disclosure relates generally to apparatuses, methods, and computer readable media for improved natural language processing (NLP) intent determination, e.g., for use with intelligent personal assistant software agents that are configured to interact with people, services, and devices across multiple communications formats and protocols.

Background

Intelligent personal assistant (IPA) software systems comprise software agents that can perform various tasks or services on behalf of an individual user. These tasks or services may be based on a number of factors, including: spoken word or verbal input from a user, textual input from a user, gesture input from a user, a user’s geolocation, a user’s preferences, a user’s social contacts, and an ability to access information from a variety of online sources, such as via the World Wide Web. However, current IPA software systems have fundamental limitations in natural language processing, natural language understanding (NLU), and so-called “intent determination” in practical applications.

For example, in some systems, language context and action possibilities gleaned from user commands may be constrained ‘up front’ by identifying the specific service that the user is sending the command to before attempting to perform any NLP/NLU—thus increasing the accuracy of results and significantly reducing the amount of processing work needed to understand the commands. However, this strategy may not provide a satisfactory user experience in the context of AI-enabled IPAs, wherein the user may often engage in macro-level ‘conversations’ with his or her device via a generic query to a single IPA ‘persona’ that is capable of interacting with many third-party services, APIs, files, documents, and/or systems. In such situations, it becomes more complex and challenging for the IPA to reliably direct the user’s commands to the appropriate data, interface, third-party service, etc.—especially when a given command may seemingly apply with equal validity to two or more known third-party interfaces or services that the IPA software agent is capable of interfacing with. For example, the command, “Send {item}.” may apply with seemingly equal validity to a native text messaging interface, a native email client, a third-party messaging interface, a flower delivery service, etc.

Moreover, it is quite computationally expensive to attempt to parse the grammar of each incoming user command or query ‘up front,’ i.e., to attempt to determine the intent of the user’s command and/or which specific services, APIs, files, documents, or systems the user intends his or her command to be directed to. Computationally-expensive parsing may also be used to determine how certain words or phrases in the user’s command depend on, relate to, or modify other words or phrases in the user’s command, thereby giving the system a greater understanding of the user’s actual intent.

NLP systems may be used to attempt to glean the true intent of a user’s commands, but the success of such systems is largely dependent upon the training set of data which has been used to train the NLP system. NLP also requires computationally-intensive parsing to determine what parts of the user’s command refer to intents, which parts refer to entities, which parts refer to attributes, etc., as well as which entities and attributes are dependent upon (or are modifying) which intents.

The subject matter of the present disclosure is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above. To address these and other issues, techniques that enable a more computationally-efficient, so-called ‘search-based,’ NLP intent determination system are described herein.

Read the full patent here.


A novel system for dynamically allocating GPU resources in diverse AI workflows to improve throughput and optimize utilization of GPUs within a multi-node infrastructure

U.S. Patent Number: 11,645,123
Patent Title: Dynamic distribution of a workload processing pipeline on a computing infrastructure
Issue Date: May 09, 2023
Inventors: Alston Ghafourifar
Assignee: Entefy Inc.

Patent Abstract

Disclosed are systems, methods, and computer readable media for automatically assessing and allocating virtualized resources (such as CPU and GPU resources). In some embodiments, this method involves a computing infrastructure receiving a request to perform a workload, determining one or more workflows for performing the workload, selecting a virtualized resource, from a plurality of virtualized resources, wherein the virtualized resource is associated with a hardware configuration, and wherein selecting the virtualized resource is based on a suitability score determined based on benchmark scores of the one or more workflows on the hardware configuration, scheduling performance of at least part of the workload on the selected virtualized resource, and outputting results of the at least part of the workload.
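The benchmark-driven selection step can be sketched as follows. The node names, workflow names, scores, and the mean-of-benchmarks suitability formula are all illustrative assumptions, not the patented scoring method.

```python
# Per-workflow benchmark scores of each virtualized resource's hardware
# configuration (higher is better; values hypothetical).
BENCHMARKS = {
    "gpu-node-a": {"ocr": 0.9, "speech": 0.3},
    "gpu-node-b": {"ocr": 0.5, "speech": 0.8},
    "cpu-node-c": {"ocr": 0.2, "speech": 0.1},
}

def suitability(vr, workflows):
    """Suitability score of a VR for a workload: here, simply the mean
    benchmark score of its hardware config over the workload's workflows."""
    scores = BENCHMARKS[vr]
    return sum(scores.get(w, 0.0) for w in workflows) / len(workflows)

def select_vr(workflows, available):
    """Schedule the workload on the available VR with the highest score."""
    return max(available, key=lambda vr: suitability(vr, workflows))

chosen = select_vr(["ocr", "speech"],
                   ["gpu-node-a", "gpu-node-b", "cpu-node-c"])
```

With these numbers the balanced node wins (0.65 vs. 0.6 and 0.15), illustrating the point made later in the background: hardware optimal for one AI workload may not be right-sized for another, so selection must consider the whole workflow mix.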

USPTO Technical Field

This disclosure relates generally to apparatuses, methods, and computer readable media for predicting and allocating computing resources for workloads.

Background

Modern computing infrastructures allow computational resources to be shared through one or more networks, such as the internet. For example, a cloud computing infrastructure may enable users, such as individuals and/or organizations, to access shared pools of computing resources, such as servers, both virtual and real, storage devices, networks, applications, and/or other computing based services. Remote services allow users to access computing resources on demand remotely in order to perform a variety of computing functions, such as processing and storing data. For example, cloud computing may provide flexible access to computing resources without accruing up-front costs, such as purchasing computing devices, networking equipment, etc. and investing time in establishing a private network infrastructure. Utilizing remote computing resources, users are able to focus on their core functionality rather than optimizing data center operations.

With today’s communications networks, examples of cloud computing services a user may access include software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) technologies. SaaS is a delivery model that provides software as a service rather than an end product, while PaaS acts as an extension of SaaS that goes beyond providing software services by offering customizability and expandability features to meet a user’s needs. IaaS, in turn, provides APIs to access various computing resources, such as raw block storage, file or object level storage, virtual local area networks, firewalls, load balancers, etc. Service systems may handle requests for various resources using virtualized resources (VRs). VRs allow hardware resources, such as servers, to be pooled for use by the service systems. These VRs may be configured using pools of hypervisors for virtual machines (VMs) or through containerization.

Containerization refers to a logical packaging mechanism in which the resources for running an application are abstracted away from the environment in which they actually run. Multiple containers may be run directly on top of a host OS kernel, and each container generally contains the resources, such as storage, memory, and APIs, needed to run the particular application the container is set up to run. In certain cases, containers may be resized by adding or removing resources dynamically to account for workloads, or a generic set of resources may be provided to handle different applications. Because containers are created on and managed by a host system at a low level, they can be spawned very quickly. Containers may be configured to allow access to host hardware, such as central processing units (CPUs) or graphics processing units (GPUs), for example, through low-level APIs included with the container. Generally, containers may be run in any suitable host system and may be migrated from one host system to another, as hardware and software compatibility is handled by the host and container layers. This allows containers to be grouped to optimize use of the underlying host system. A host controller may also be provided to optimize distribution of containers across hosts.

Modern CPUs may be configured to help distribute CPU processing load across multiple processing cores, therefore allowing multiple computing tasks to execute simultaneously and reduce overall real or perceived processing time. For example, many CPUs include multiple independent and asynchronous cores, each capable of handling different tasks simultaneously. Generally, GPUs, while having multiple cores, can be limited in their ability to handle multiple different tasks simultaneously. A typical GPU can be characterized as a processor which can handle a Single Instruction stream with Multiple Data streams (SIMD) whereas a typical multi-core CPU can be characterized as a processor which can handle Multiple Instruction streams with Multiple Data streams (MIMD). A multi-core CPU or a cluster of multiple CPUs can also be characterized as parallelized SIMD processor(s), thereby in effect simulating a MIMD architecture.

A SIMD architecture is generally optimized to perform processing operations for simultaneous execution of the same computing instruction on multiple pieces of data, each processed using a different core. A MIMD architecture is generally optimized to perform processing operations which require simultaneous execution of different computing instructions on multiple pieces of data, regardless of whether executing processes are synchronized. As such, SIMD processors, such as GPUs, typically perform well with discrete, highly parallel computational tasks spread across as many of the GPU cores as possible, making use of a single instruction stream. Many GPUs have specific hardware and firmware limitations in place that limit the ability of GPU cores to be separated, or otherwise virtualized, thereby reinforcing the SIMD architecture paradigm. CPUs typically have little or no such limitation, so dividing GPU processing time across multiple tasks is difficult by comparison. Rather than attempting this, IaaS providers with GPU resources may need to provision more physical GPUs to handle GPU processing requests, and possibly even dedicated GPUs for certain processes, for example, artificial intelligence (AI) workloads, even if the actual computational capacity of that infrastructure far outstrips the GPU compute demand, leading to inflated capital and operating costs associated with offering GPU resources in an IaaS, PaaS, SaaS, or other product or cloud infrastructure offering.

In the case of GPU-heavy workloads, such as those demanded by certain AI-enabled offerings, not all AI workloads are the same, and hardware optimal for running one AI workload may not be right-sized for another.

Virtualization techniques have emerged over the past decades to optimize the utilization of hardware resources such as CPUs by efficiently allowing computing tasks to be spread across multiple cores, CPUs, clusters, etc. However, such virtualization is generally unavailable or non-performant for GPUs, which can lead to higher operating costs and increased application or platform latency. What is needed is a technique for appropriately scaling a workflow pipeline to handle high-density processing operations (such as AI operations) which require frequent utilization of GPUs during processing.

Read the full patent here.


A distributed AI agent network architecture for optimizing collaboration and tool execution between two intelligent agents with federated ledger transactions

U.S. Patent Number: 11,367,068
Patent Title: Decentralized blockchain for artificial intelligence-enabled skills exchanges over a network
Issue Date: June 21, 2022
Inventors: Ghafourifar, et al.
Assignee: Entefy Inc.

Patent Abstract

An improved decentralized, blockchain-driven network for artificial intelligence (AI)-enabled skills exchange between Intelligent Personal Assistants (IPAs) in a network is disclosed that is configured to perform computational tasks or services (also referred to herein as “skills”) in an optimally-efficient fashion. In some embodiments, this may comprise a first IPA paying an agreed cost to a second IPA to perform a particular skill in a more optimally-efficient fashion. In some embodiments, a skills registry is published, comprising benchmark analyses and costs for the skills offered by the various nodes on the skills exchange network. In other embodiments, a transaction ledger is maintained that provides a record of all transactions performed across the network in a tamper-proof and auditable fashion, e.g., via the use of blockchain technology. Over time, the AI-enabled nodes in the system may learn to scale, replicate, and transact with each other in an optimized—and fully autonomous—fashion.

USPTO Technical Field

This disclosure relates generally to apparatuses, methods, and computer readable media for a decentralized, secure network for artificial intelligence (AI)-enabled performance and exchange of computational tasks and services between network nodes.

Background

Intelligent personal assistant (IPA) software systems comprise software agents that can perform various functions, e.g., computational tasks or services, on behalf of an individual user or users. IPAs, as used herein, may simply be thought of as computational “containers” for certain functionalities. The functionalities that are able to be performed by a given IPA at a particular moment in time may be based on a number of factors, including: a user’s geolocation, a user’s preferences, an ability to access information from a variety of online sources, the processing power and/or current performance load of a physical instance that the IPA is currently being executed on, and the historical training/modification/customization that has been performed on the IPA. As such, current IPA software systems have fundamental limitations in terms of their capabilities and abilities to perform certain computational tasks.

For example, in some instances, a first IPA executing on a first device on a network may be able to perform a particular first computational task or service (also referred to herein as a “skill”) with a very high degree of accuracy, but may be executing on a physical instance that lacks the necessary computational power or capacity to perform the particular first computational task or service in a reasonable amount of time. Likewise, a second IPA, e.g., being executed on a device belonging to another user on the same network, may have excellent computational power and capacity, but not have been trained to perform the first computational task or service with a high degree of accuracy. As such, the particular first computational task or service is unlikely to be performed efficiently by either the first IPA or the second IPA, causing, in effect, an inevitable marketplace inefficiency in the overall skills network.

Such a scenario may not provide for a satisfactory (or efficient) user experience across the many users and/or nodes of the network. In the context of AI-enabled IPAs, the IPAs may be able to “learn” and improve their performance of certain computational tasks or services over time. AI-enabled IPAs may also be able to determine, over time, more efficient usages of the network’s overall computational capacity to perform computational tasks or services at a high level of performance and at a low operational cost, e.g., by ‘farming out’ certain computational tasks to other IPAs and/or nodes in the network that can perform the task in a more optimal manner.

However, in order to be able to act, react, and interoperate in an efficient manner, the various IPAs distributed across a network must have accurate information as to the current status of the various skills that the nodes on the network are able to perform (e.g., in terms of benchmarking scores, availability, and/or costs)—as well as the ability to determine the most optimal nodes that could be used to perform such skills, given computational and cost constraints.
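The registry lookup described above can be sketched as follows. The registry entries, constraint values, and cheapest-qualifying-node policy are illustrative assumptions, not the patented selection logic.

```python
# Published skills registry: each node advertises a benchmark score,
# availability, and cost for each skill it offers (entries hypothetical).
REGISTRY = [
    {"node": "ipa-1", "skill": "translate", "benchmark": 0.95, "available": True,  "cost": 8},
    {"node": "ipa-2", "skill": "translate", "benchmark": 0.90, "available": True,  "cost": 3},
    {"node": "ipa-3", "skill": "translate", "benchmark": 0.99, "available": False, "cost": 5},
]

def best_node(skill, min_benchmark, max_cost):
    """Return the cheapest available node meeting the requesting IPA's
    accuracy and cost constraints, or None if no node qualifies."""
    candidates = [
        e for e in REGISTRY
        if e["skill"] == skill
        and e["available"]
        and e["benchmark"] >= min_benchmark
        and e["cost"] <= max_cost
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda e: e["cost"])["node"]

choice = best_node("translate", min_benchmark=0.85, max_cost=10)
```

Note that the highest-benchmark node loses here because it is unavailable. In a full system, each completed exchange (requesting node, performing node, skill, and agreed cost) would then be recorded on the tamper-proof transaction ledger discussed below.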

Moreover, in order to reliably provide “value,” i.e., payment for services rendered, to other nodes in the aforementioned network for the performance of skills in an optimized manner, it is important that a secure ledger of transactions performed across the network be maintained in a tamper-proof and auditable fashion.

The subject matter of the present disclosure is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above. To address these and other issues, techniques that enable a decentralized, secure network for the AI-enabled performance and exchange of computational tasks and services between nodes on a network are described herein.

Read the full patent here.


A method for unifying feature vectors using chained classifiers and facilitating cross-modal information retrieval through knowledge correlation and extrapolation

U.S. Patent Number: 11,366,849
Patent Title: System and method for unifying feature vectors in a knowledge graph
Issue Date: June 21, 2022
Inventors: Ghafourifar, et al.
Assignee: Entefy Inc.

Patent Abstract

Disclosed are apparatuses, methods, and computer readable media for improved multi-datatype searching comprising receiving a search query of a first datatype, generating a vector of the first datatype describing the search query, expanding the vector of the first datatype to include a second datatype vector, wherein the second datatype vector is of a different datatype than the first but may be conceptually equivalent, and wherein the second datatype vector is associated with the vector of the first datatype, and performing a search based on the first datatype and a search on the second datatype.
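The abstract's flow can be sketched as follows, under heavy simplification: the vectors, the cross-datatype link table, and the two-dimensional toy embeddings are all invented for illustration. A text query's vector is expanded with an associated image-space vector, and each space is then searched separately.

```python
# Toy per-datatype vector stores (values hypothetical).
TEXT_VECTORS  = {"intersection": [0.9, 0.1], "crossing": [0.85, 0.2]}
IMAGE_VECTORS = {"road-junction.jpg": [0.2, 0.9], "curve.jpg": [0.25, 0.8]}

# Knowledge-graph link: the image-space vector associated with a text concept.
CROSS_LINKS = {"intersection": [0.2, 0.88]}

def dist(a, b):
    """Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest(vec, space):
    """Nearest-neighbor lookup within one datatype's vector space."""
    return min(space, key=lambda k: dist(vec, space[k]))

def multi_datatype_search(text_query):
    text_vec = TEXT_VECTORS[text_query]   # vector of the first datatype
    image_vec = CROSS_LINKS[text_query]   # expanded second-datatype vector
    return nearest(text_vec, TEXT_VECTORS), nearest(image_vec, IMAGE_VECTORS)

text_hit, image_hit = multi_datatype_search("intersection")
```

The key point is that the image search runs against an image-space vector taken from the cross-datatype link, not against the text vector itself, sidestepping the mismatch problem described in the background below.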

USPTO Technical Field

This disclosure relates generally to apparatuses, methods, and computer readable media for a unified knowledge vector for improved multi-format search.

Background

Machine learning and pattern recognition software systems can be harnessed to perform various artificial intelligence tasks or services, such as object recognition, translation, and autonomous driving. These tasks or services may be based on a number of types of input, including speech, text, images, video, light detection and ranging (LIDAR), etc. Patterns may be determined to analyze and make inferences about the input. In certain cases, classifiers may be used to recognize various aspects of the input and the output of these classifiers may be organized as a vector. Vectors generally represent an item or concept as described by the classifiers. As an example, for a picture of a face, various classifiers trained to recognize specific facial features may be run to help identify a person in the picture. These classifiers may each output scores indicating how closely the face matches the specific facial feature each classifier is trained to identify. These scores may be collected into a facial image vector, and the facial image vector compared to a database of facial image vectors describing other face images to find a match. This comparison may be, for example, based on the output of clustering algorithms such as K-Nearest Neighbor (KNN) and other forms of analysis of vectors representing attributes detected between different facial images.
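The classifier-to-vector flow in the face example can be sketched as follows. The "classifiers" here are stand-in functions reading scores from a dict, and the face database is invented; the point is only the shape of the pipeline: per-feature scores are collected into a vector, then matched by nearest-neighbor search.

```python
import math

# Stand-in feature classifiers: each returns a score in [0, 1] for one
# facial feature (a real system would run trained models on pixel data).
CLASSIFIERS = {
    "beard":   lambda img: img.get("beard", 0.0),
    "glasses": lambda img: img.get("glasses", 0.0),
    "smiling": lambda img: img.get("smiling", 0.0),
}

# Database of known facial image vectors (values hypothetical).
KNOWN_FACES = {
    "alice": [0.0, 1.0, 0.9],  # glasses, smiling
    "bob":   [0.9, 0.0, 0.2],  # beard
}

def face_vector(image):
    """Run each feature classifier and collect its score into a vector."""
    return [clf(image) for clf in CLASSIFIERS.values()]

def identify(image):
    """1-nearest-neighbor match against the face database (Euclidean)."""
    vec = face_vector(image)
    return min(KNOWN_FACES, key=lambda name: math.dist(vec, KNOWN_FACES[name]))

who = identify({"beard": 0.8, "smiling": 0.3})  # -> "bob"
```

This works because both the query and the database live in the same feature space; the cross-datatype breakdown described next occurs when the same kind of comparison is attempted across spaces built from different data types.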

These comparisons work within a single data type but break down across different types of data, since a vector describing a concept associated with a first data type may not accurately describe that same concept in a second data type. Additionally, the first data concept vector may represent relationship X with various other concept vectors in the first data space, whereas that same concept vector may not exist or may represent a different relationship Y with other concepts in the second data space. For example, a vector representing an image of an intersection of two roads may be more closely related, for example as part of a KNN analysis, to a curved road than to two lines or objects intersecting each other. However, in text, a vector for an intersection may be more closely related to a crossing point or line than to anything related to roads. Moreover, for the image data type, the physical location or angle of the image may influence the resulting vector describing the image, which in turn may influence the KNN analysis. For example, a particular image of an intersection may be partially occluded by a traffic sign for the intersection, which may result in the vector being more closely related to a merge traffic sign. Attempting to map this vector across data types into text may then point to a completely different concept than expected.

The subject matter of the present disclosure is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above. To address these and other issues, techniques for improved cross data search by enabling comparisons of feature vectors across data types are described herein.
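The expanded-vector search the abstract describes can be sketched roughly as follows: a query vector from one data space is projected into a second space via a hypothetical learned mapping, and both single-datatype indices are searched. The toy vectors, index entries, and projection matrix below are illustrative assumptions, not the patent's actual method.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search_index(query_vec, index, top_k=1):
    # Rank entries of a single-datatype index by similarity to the query.
    ranked = sorted(index, key=lambda name: cosine(query_vec, index[name]), reverse=True)
    return ranked[:top_k]

def cross_map(vec, matrix):
    # Stand-in for a learned linear map from the image space to the text space.
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

# Toy two-dimensional indices for two datatypes.
image_index = {"road-intersection.jpg": [0.9, 0.1], "curved-road.jpg": [0.6, 0.5]}
text_index = {"crossing point": [0.2, 0.9], "roads": [0.7, 0.4]}
M = [[0.0, 1.0], [1.0, 0.0]]  # hypothetical trained projection (here: axis swap)

query = [0.85, 0.2]                      # image-space query vector
expanded = cross_map(query, M)           # its text-space counterpart
results = search_index(query, image_index) + search_index(expanded, text_index)
print(results)  # → ['road-intersection.jpg', 'crossing point']
```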

Read the full patent here.

ABOUT ENTEFY

Entefy is an enterprise AI software company. Entefy’s patented, multisensory AI technology delivers on the promise of the intelligent enterprise, at unprecedented speed and scale.

Entefy products and services help organizations transform their legacy systems and business processes—everything from knowledge management to workflows, supply chain logistics, cybersecurity, data privacy, customer engagement, quality assurance, forecasting, and more. Entefy’s customers vary in size from SMEs to large global public companies across multiple industries including financial services, healthcare, retail, and manufacturing.

To leap ahead and future proof your business with Entefy’s breakthrough AI technologies, visit www.entefy.com or contact us at contact@entefy.com.

Constructing personalized knowledge graphs to enhance AI context resolution in multi-agent systems

U.S. Patent Number: 12,461,762
Patent Title: Apparatus and method for detecting, analyzing, and mapping transaction events for improved context understanding in artificial intelligence systems exchanges between agents over a network
Issue Date: November 4, 2025
Inventors: Ghafourifar, et al.
Assignee: Entefy Inc.

Patent Abstract

Techniques for resolving multiple user requests from multiple user accounts by an interactive interface are described. An interactive interface can obtain a first multi-dimensional context graph for a first user account and a second context graph for a second user account. Each graph comprises correlated contexts related to the user account. The interface can also receive a first user request associated with the first user account and a second user request associated with the second user account; determine, based on the first graph, a first current context and one or more first previous contexts for the first user request; determine, based on the second graph, a second current context and one or more second previous contexts for the second user request; determine one or more interrelationships between the first and the second graphs; and resolve the user requests based on the contexts and the interrelationships.
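The resolution flow in the abstract can be sketched roughly as follows. The keyword-overlap matching here is a deliberately crude stand-in for the patent's multi-dimensional context graphs; the context names and keywords are hypothetical.

```python
class ContextGraph:
    """Toy per-account context graph: context name -> set of related keywords."""

    def __init__(self, contexts):
        self.contexts = contexts

    def current_context(self, request):
        # Pick the context whose keywords best overlap the request terms.
        terms = set(request.lower().split())
        return max(self.contexts, key=lambda c: len(self.contexts[c] & terms))

def resolve(requests):
    # requests: list of (request_text, ContextGraph) pairs, one per account.
    # Contexts are determined per graph, then both requests are resolved
    # without either blocking the other.
    return [(req, graph.current_context(req)) for req, graph in requests]

g1 = ContextGraph({"dining": {"restaurants", "food"}, "travel": {"flights"}})
g2 = ContextGraph({"shopping": {"bookstores", "buy"}, "work": {"meeting"}})
plan = resolve([("find nearby restaurants", g1), ("find nearby bookstores", g2)])
print(plan)
```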

USPTO Technical Field

Embodiments described herein relate to interactive interfaces (e.g., intelligent personal assistants (IPAs), virtual assistants, knowledge navigators, chatbots, command-response engines, other software/hardware agents capable of performing actions on behalf of or for an entity, etc.). More particularly, embodiments described herein relate to one or more techniques of correlating clusters of contexts (“context clusters”) of a user account that corresponds to an entity for use by an intelligent interactive interface (“intelli-interface”) to perform actions on behalf of or for the user account.

Background

Modern consumer electronics are capable of enabling interactive interfaces (e.g., intelligent personal assistants (IPAs), virtual assistants, knowledge navigators, chatbots, command-response engines, other software/hardware agents capable of performing actions on behalf of or for an entity, etc.) to perform actions on behalf of or for user accounts that correspond to entities. That is, these interfaces can receive requests (in the form of inputs) from an entity (e.g., a person, a service, a smart device, etc.) and respond to the requests accordingly. For example, at least one currently available interactive interface can respond to a user’s request received via input (e.g., text input, voice input, gesture input, etc.) for nearby restaurants with a list of establishments within a predetermined location of the user. The output can be provided to the user as textual output, image output (e.g., graphics, video, etc.), audio output, haptic output, tactile output, any combination thereof, or any other known output.

One problem associated with some interactive interfaces is their inability to multi-task—that is, some interactive interfaces cannot receive multiple user requests that are ambiguous or contextually unrelated, manage the multiple user requests concurrently, and resolve the multiple user requests. For example, some typical interactive interfaces cannot receive a first user request to “find nearby restaurants” and a second user request to “find nearby bookstores”, manage the requests concurrently, and resolve both user requests. In this example, neither user request is resolved before the other is received. Consequently, these types of interactive interfaces can only receive and resolve a single request before being able to receive (and resolve) another request. This leads to one-purpose-one-action interactive interfaces that require users to follow restrictive patterns of usage in order to migrate from one task to another, which can contribute to or cause user dissatisfaction.

Another problem associated with some interactive interfaces is their relative inability to provide relevant predictive and reactive solutions to a user’s requests based on the user’s context. This may be because traditional techniques of context derivation are not precise enough. For example, at least one typical context derivation technique relies on time-based principles. Generally, these time-based approaches can be based on temporal locality principles or spatial locality principles. Stated differently, at least one typical context derivation technique bases its context determinations exclusively on time-based data, such as recent locations or recent interactions, as a way of developing an insight into a user’s context. Such a technique can yield inaccurate predictions, which can cause interactive interfaces relying on this context derivation technique to generate irrelevant solutions to user requests. Irrelevant solutions can contribute to or cause user dissatisfaction.

Yet another problem associated with some interactive interfaces is their inability to partition knowledge used for servicing user requests into manageable data sets. This is exemplified when user context determinations are considered at either a fine-grained context level (e.g., the user is currently at a location with a latitude and longitude of 48.869701, 2.307909, etc.) or a more broadly defined level (e.g., the user is currently on planet Earth, etc.). An incorrect context determination can limit the functionality of an interactive interface that is designed to provide relevant predictive and reactive solutions to a user’s requests. Too fine-grained or narrow a context, and the interactive interface will lack enough data to provide relevant and/or reliable solutions to a user’s requests. Too broadly defined or high-level a context, and the interactive interface will also lack enough data to accurately provide relevant and/or reliable solutions to a user’s requests. For example, suppose a user asks his interactive interface to suggest items to buy during a trip to a local grocery store, and the user has provided the assistant with the following data: underwear, paper towels, and a flashlight. Without a technique for determining the user’s proper context and feeding the determined context to the interactive interface, irrelevant suggestions may be output to the user by the interactive interface.

The problems discussed above can cause an interactive interface to operate inefficiently because it has to perform multiple attempts in order to resolve a single user request. This inefficient operation can, in turn, result in wasted computational resources. For example, computational resources that would otherwise not be necessary may be needed by an interactive assistant to service a single user request due to errors. Waste includes, but is not limited to, processing power for performing and/or repeating the performance of queries or transactions associated with resolving user requests and storage memory space for storing data about the incorrect or improper resolutions of user requests.

For at least the reasons set forth in this section of the present disclosure, some interactive interfaces remain sub-optimal.

Read the full patent here.


New AI framework for generating standardized tool interfaces to API endpoints and smart services in a multi-agent Universal Interaction Platform

U.S. Patent Number: 11,740,950
Patent Title: Application program interface analyzer for a universal interaction platform
Issue Date: August 29, 2023
Inventors: Ghafourifar, et al.
Assignee: Entefy Inc.

Patent Abstract

An application program interface (API) analyzer that determines protocols and formats to interact with a service provider or smart device. The API analyzer identifies an API endpoint or websites for the service provider or smart device, determines a service category or device category, selects a category-specific corpus, forms a service-specific or device-specific corpus by appending information regarding the service provider or smart device to the category-specific corpus, and parses the API documentation or the websites.
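The analyzer pipeline named in the abstract can be sketched as a few small steps: categorize the service, build a service-specific corpus, then use it to filter the API documentation. The categories, corpus terms, and documentation lines below are hypothetical, and the keyword matching is a crude stand-in for the patent's parsing.

```python
# Hypothetical category-specific corpora.
CATEGORY_CORPORA = {
    "weather": ["forecast", "temperature", "humidity"],
    "lighting": ["brightness", "hue", "power"],
}

def determine_category(endpoint_description):
    # Pick the category whose corpus terms appear most often in the description.
    words = endpoint_description.lower().split()
    return max(CATEGORY_CORPORA,
               key=lambda cat: sum(words.count(t) for t in CATEGORY_CORPORA[cat]))

def build_service_corpus(category, service_terms):
    # Form the service-specific corpus by appending service details
    # to the category-specific corpus.
    return CATEGORY_CORPORA[category] + service_terms

def parse_api_docs(doc_text, corpus):
    # Keep documentation lines that mention corpus terms.
    return [line for line in doc_text.splitlines()
            if any(term in line.lower() for term in corpus)]

category = determine_category("returns the temperature forecast for a city")
corpus = build_service_corpus(category, ["city", "units"])
fields = parse_api_docs("GET /forecast?city=...\nPOST /login", corpus)
print(category, fields)
```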

USPTO Technical Field

This disclosure relates generally to apparatuses, methods, and computer readable media for interacting with people, services, and devices across multiple communications formats and protocols.

Background

A growing number of service providers allow users to request information or services from those service providers via third-party software applications. Additionally, a growing number of smart devices allow users to obtain information from and control those smart devices via a third-party software application. Meanwhile, individuals communicate with each other using a variety of protocols such as email, text, social messaging, etc. In an increasingly chaotic digital world, it is becoming increasingly difficult for users to manage their digital interactions with service providers, smart devices, and individuals. A user may have separate software applications for requesting services from a number of service providers, for controlling a number of smart devices, and for communicating with individuals. Each of these separate software applications may have different user interfaces and barriers to entry.

The subject matter of the present disclosure is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above. To address these and other issues, techniques that enable seamless, multi-format, multi-protocol communications are described herein.

Read the full patent here.


A context-driven Optimal Decision Engine (ODE) for automated protocol selection in people-centric digital communication systems

U.S. Patent Number: 11,831,590
Patent Title: Apparatus and method for context-driven determination of optimal cross-protocol communication delivery
Issue Date: November 28, 2023
Inventors: Ghafourifar, et al.
Assignee: Entefy Inc.

Patent Abstract

This disclosure relates generally to apparatus, methods, and computer readable media for composing communications for computing devices across multiple formats and multiple protocols. More particularly, but not by way of limitation, this disclosure relates to apparatus, methods, and computer readable media to permit computing devices, e.g., smartphones, tablets, laptops, and the like, to send communications in a number of pre-determined and/or ‘determined-on-the-fly’ optimal communications formats and/or protocols. Determinations of optimal delivery methods may be intelligently based on the sender individually or the relationship with the sender in the context of a group of recipients—including the format of the incoming communication, the preferred format of the recipient and/or sender, and an optimal format for a given communication message. The techniques disclosed herein allow communications systems to become ‘message-centric’ or ‘people-centric,’ as opposed to ‘protocol-centric,’ eventually allowing consideration of message protocol to fall away entirely for the sender of the communication.

USPTO Technical Field

This disclosure relates generally to apparatuses, methods, and computer readable media for composing communications for computing devices across multiple communications formats and protocols as intelligently determined using one or more context factors to determine the optimal delivery method for the communications.

Background

The proliferation of personal computing devices in recent years, especially mobile personal computing devices, combined with a growth in the number of widely-used communications formats (e.g., text, voice, video, image) and protocols (e.g., SMTP, IMAP/POP, SMS/MMS, XMPP, etc.) has led to a communications experience that many users find fragmented and restrictive. Users desire a system that will provide ease of communication by sending an outgoing message created in whatever format was convenient to the composer, with delivery options to one or more receivers in whatever format or protocol that works best for them—all seamlessly from the composer’s and recipient(s)’s perspective. With current communications technologies that remain “protocol-centric”—as opposed to “message-centric” or “people-centric”—such ease of communication is not possible.

In the past, users of communications systems first had to choose a communication format and activate a corresponding application or system prior to composing a message or selecting desired recipient(s). For example, if a person wanted to call someone, then he or she would need to pick up a telephone and enter the required phone number or directory in order to connect. If a person wanted to email a colleague, that person would be required to launch an email application before composing and sending the email. Further, while long-form text might be the most convenient format at the time for the composer, long-form text may not be convenient for the receiver—resulting in a delayed receipt of and/or response to the message by the receiver. With the multi-format communication composition techniques described herein, however, the user flow is much more natural and intuitive. First, the ‘Sender’ (e.g., a registered user of the multi-format, multi-protocol communication system), can select the desired recipient(s). Then, the Sender may compose the outgoing message (in any format, such as text, video recording, or audio recording). Next, the system (or the Sender, in some embodiments) intelligently chooses the delivery protocol for the communication, e.g., whether the communication is going to be sent via email, SMS, IM, or social media, etc. Finally, the outgoing message is converted into the desired outgoing message format (either by the Sender’s client device or a central communications system server) and sent to the desired recipient(s) via the chosen delivery protocol(s).

According to the multi-format communication composition techniques described herein, the emphasis in the communication interface is on the “who” and the “what” of the communication—but not the “how.” The multi-format communication composition system described herein takes care of the “how”—including an ‘Optimal’ option, as determined by a dedicated service in the central communication server, such as a service referred to herein as the ‘Optimal Decision Engine,’ which may be employed to deliver the outgoing communication to the desired recipient(s) in the most preferred way, e.g., either through preferences that the recipient(s) has specified via his or her profile in a multi-format communications network or through the communication protocol information regarding the desired recipient that is stored in the Sender’s contact list.
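The Optimal Decision Engine's delivery choice can be sketched as a simple preference lookup. This assumes recipient profile preferences and the sender's contact list are available as plain dictionaries; the patent's actual engine would weigh many more context factors (message format, relationship, group context, etc.).

```python
def choose_protocol(recipient, profile_prefs, sender_contacts, default="email"):
    # Prefer the recipient's stated preference from their network profile;
    # fall back to the protocol stored for them in the sender's contact
    # list, then to a default protocol.
    if recipient in profile_prefs:
        return profile_prefs[recipient]
    return sender_contacts.get(recipient, default)

profile_prefs = {"dana": "sms"}      # hypothetical recipient preferences
sender_contacts = {"lee": "xmpp"}    # hypothetical sender contact list

print(choose_protocol("dana", profile_prefs, sender_contacts))  # → sms
print(choose_protocol("lee", profile_prefs, sender_contacts))   # → xmpp
print(choose_protocol("sam", profile_prefs, sender_contacts))   # → email
```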

The subject matter of the present disclosure is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above. To address these and other issues, techniques that enable seamless, multi-format communications via a single user interface are described herein.

Read the full patent here.


Context-aware AI to power privacy-preserving content indexing and retrieval of encrypted data within zero-knowledge systems

U.S. Patent Number: 11,669,554
Patent Title: System and method of information retrieval from encrypted data files through a context-aware AI engine
Issue Date: June 06, 2023
Inventors: Ghafourifar, et al.
Assignee: Entefy Inc.

Patent Abstract

This disclosure relates to personalized and dynamic server-side searching techniques for encrypted data. Current so-called ‘zero-knowledge’ privacy systems (i.e., systems where the server has ‘zero-knowledge’ about the client data that it is storing) utilize servers that hold encrypted data without the decryption keys necessary to decrypt, index, and/or re-encrypt the data. As such, the servers are not able to perform any kind of meaningful server-side search process, as it would require access to the underlying decrypted data. Therefore, such prior art ‘zero-knowledge’ privacy systems provide a limited ability for a user to search through a large dataset of encrypted documents to find critical information. Disclosed herein are communications systems that offer the increased security and privacy of client-side encryption to content owners, while still providing for highly relevant server-side search-based results via the use of content correlation, predictive analysis, and augmented semantic tag clouds for the indexing of encrypted data.

USPTO Technical Field

This disclosure relates generally to systems, methods, and computer readable media for performing highly relevant, dynamic, server-side searching on encrypted data that the server does not have the ability to decrypt.

Background

The proliferation of personal computing devices in recent years, especially mobile personal computing devices, combined with a growth in the number of widely-used communications formats (e.g., text, voice, video, image) and protocols (e.g., SMTP, IMAP/POP, SMS/MMS, XMPP, etc.), has led to a communications experience that many users find fragmented and difficult to search for relevant information. Users desire a system that will provide for ease of message threading by “stitching” together related communications and documents across multiple formats and protocols—all seamlessly from the user’s perspective. Such stitching together of communications and documents across multiple formats and protocols may occur, e.g., by: 1) direct user action in a centralized communications application (e.g., by a user clicking ‘Reply’ on a particular message); 2) using semantic matching (or other search-style message association techniques); 3) element-matching (e.g., matching on subject lines or senders/recipients/similar quoted text, etc.); and/or 4) “state-matching” (e.g., associating messages if they are specifically tagged as being related to another message, sender, etc. by a third-party service, e.g., a webmail provider or Instant Messaging (IM) service). These techniques may be employed in order to provide a more relevant “search-based threading” experience for users.
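The element-matching approach (item 3 above) can be sketched as follows: two messages stitch into one thread when their normalized subjects match and their participants overlap. The message fields and normalization rules here are simplified assumptions for illustration.

```python
def normalize_subject(subject):
    # Strip common reply/forward prefixes so "Re: Budget" matches "Budget".
    s = subject.strip()
    for prefix in ("re:", "fw:", "fwd:"):
        while s.lower().startswith(prefix):
            s = s[len(prefix):].strip()
    return s.lower()

def same_thread(msg_a, msg_b):
    # msg: {"subject": str, "participants": set, "protocol": str}
    # Truthy when subjects match after normalization AND at least one
    # participant is shared, regardless of delivery protocol.
    return (normalize_subject(msg_a["subject"]) == normalize_subject(msg_b["subject"])
            and msg_a["participants"] & msg_b["participants"])

email = {"subject": "Re: Q3 budget", "participants": {"ana", "raj"}, "protocol": "email"}
sms = {"subject": "Q3 budget", "participants": {"ana", "raj"}, "protocol": "sms"}
print(bool(same_thread(email, sms)))  # → True: stitched across protocols
```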

With current communications technologies, conversations remain “siloed” within particular communication formats or protocols, leaving users unable to search uniformly across multiple communications in multiple formats or protocols—and across multiple applications and computing devices—to find relevant communications (or even communications that a messaging system may predict to be relevant). This often results in inefficient communication workflows—and even lost business or personal opportunities. For example, a conversation between two people may begin over text messages (e.g., SMS) and then transition to email. When such a transition happens, the entire conversation can no longer be tracked, reviewed, searched, or archived by a single source, since it has ‘crossed over’ protocols. For example, if the user ran a search on their email search system for a particular topic that had come up only in the user’s SMS conversations, even when pertaining to the same subject matter and “conversation,” such a search may not turn up optimally relevant results.

Users also desire a communications system with increased security and privacy with respect to their communications and documents—for example, systems wherein highly relevant search-based results may still be provided to the user by the system, even without the system actually having the ability to decrypt and/or otherwise access the underlying content of the user’s encrypted communications and documents. However, current so-called ‘zero-knowledge’ privacy systems (i.e., systems where the server has ‘zero-knowledge’ about the data that it is storing) utilize servers that hold encrypted data without the decryption keys necessary to decrypt, index, and/or re-encrypt the data. As such, these systems disallow any meaningful server-side search process, since such a process would require access to the underlying data (e.g., in order for the data to be indexed) so that the encrypted data could be returned in viable query result sets. Therefore, such prior art ‘zero-knowledge’ systems provide a limited ability for a user to search through a large dataset of encrypted documents to find critical information.

It should be noted that attempts (both practical and theoretical) have been made to design proper ‘zero-knowledge’ databases and systems that can support complex query operations on fully encrypted data. Such approaches include, among others, homomorphic encryption techniques which have been used to support numerical calculations and other simple aggregations, as well as somewhat accurate retrieval of private information. However, no solution currently known to the inventors enables a system or database to perform complex operations on fully-encrypted data, such as index creation for the purpose of advanced search queries. Thus, the systems and methods disclosed herein aim to provide a user with the ability to leverage truly private, advanced server-side search capabilities from any connected client interface without relying on a ‘trusted’ server authority to authenticate identity or store the necessary key(s) to decrypt the content at any time.
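One standard way to approximate server-side search over data the server cannot decrypt is searchable symmetric encryption using keyed “blind tokens”: the client derives deterministic keyed hashes of content tags and uploads only those tokens alongside the ciphertext, so the server can match queries without learning the tags. The sketch below illustrates that general idea, not the patent's specific semantic tag-cloud indexing; key handling is greatly simplified.

```python
import hashlib
import hmac

def blind_token(key, tag):
    # Deterministic keyed hash: the server can match equal tokens but
    # cannot recover the underlying tag without the client's key.
    return hmac.new(key, tag.encode(), hashlib.sha256).hexdigest()

def index_document(key, doc_id, tags, server_index):
    # Client-side: blind each tag, then upload token -> doc_id mappings.
    for tag in tags:
        server_index.setdefault(blind_token(key, tag), set()).add(doc_id)

def search(key, query_tag, server_index):
    # The client blinds the query; the server does an exact token lookup
    # without ever seeing the plaintext tag.
    return server_index.get(blind_token(key, query_tag), set())

key = b"client-side-secret"  # never leaves the client
index = {}
index_document(key, "doc1", ["merger", "q3"], index)
index_document(key, "doc2", ["q3"], index)
print(search(key, "q3", index))  # both documents match
```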

Read the full patent here.
