Is My AI Working? Understanding AI Accuracy in Knowledge Management

In today's digital landscape, Artificial Intelligence (AI) is all the rage, boasting unparalleled problem-solving capabilities at lightning speed. Among its many applications, one area witnessing rapid evolution is knowledge management – the ability to extract answers from vast pools of data owned or licensed by businesses.

Picture this: You invest in cutting-edge AI-powered knowledge management, expecting precise answers to your queries. Yet, sometimes, the results fall short. How can this happen? Just how accurate is AI, and is it even functioning properly?

For our development and product teams, accuracy is akin to solving a math problem. We have an exceptional team continually optimizing algorithms to enhance accuracy. However, before algorithms or models come into play, there's a profoundly human aspect to consider.

Humans introduce an incredible variable into AI systems. They can pose questions in myriad ways, challenging the AI to decipher intent and provide the best possible answers. For the most part, this is what we should expect; it is the AI's job. There are instances, however, where this simply cannot happen, leaving users dissatisfied or questioning the system's intelligence.

Consider this scenario: I query the world's smartest AI for the price of lumber in Puerto Rico, yet the AI lacks access to pertinent data sources that could answer the question. The result is predictable: zero accuracy, a disappointed user and a perceived failure of the AI.

AI results depend heavily on access to content and on the quality of the available data. In a business context, this data comprises proprietary or licensed information. To ensure successful queries, users must align their questions with the system's available knowledge, which is contingent upon their access privileges.

For instance, a pilot project conducted a few years back involved 150 users situated in Western Europe. The project focused on harnessing domain-specific knowledge tailored to a particular business unit. With an extensive repository of hundreds of thousands of pages of content, the user feedback regarding response quality was overwhelmingly positive. However, the pilot encountered a setback when two users based in the US expressed dissatisfaction with the system. Despite the majority of users rating the system highly, these outliers highlighted a significant issue: the AI lacked access to relevant data sources from the US, rendering it unable to address their inquiries effectively. Interestingly, these two users were not originally intended to participate in the pilot but somehow joined, underscoring a crucial lesson: for AI to thrive, it must be seamlessly connected to data that aligns with the diverse needs of its users.

Furthermore, following a couple of decades with corporate systems that predominantly rely on keyword search, we still see approximately 20% of all queries employing this method. Yet it's an imprecise approach for advanced AI systems.

Consider a scenario where a data security company possesses vast repositories of content covering various aspects of data security, including technical documentation, training materials, government policies, case studies, and promotional materials. When a user poses a query such as "security" to our advanced AI, it faces the daunting task of sifting through potentially millions of pages containing that keyword. Yet, the crucial question remains: what specific aspect of security does the user seek? How can the AI discern that the user is actually interested in "cloud security requirements when deploying their apps to the Azure Cloud"? Once again, relying solely on a keyword-based search system would yield a zero accuracy score in this context.
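As a rough illustration of the gap between these two approaches (the snippet below is a simplified sketch with invented documents, not Lucy's actual retrieval logic), a naive keyword match treats every document containing "security" as equally relevant, while even a toy model of the user's intent narrows the results dramatically:

```python
# A simplified sketch contrasting keyword matching with intent-aware
# retrieval. The documents and the scoring rule are illustrative only.

documents = [
    "Azure cloud security requirements for application deployment",
    "Annual security awareness training for new employees",
    "Case study: data security in the healthcare sector",
    "Press release: our new security product line",
]

def keyword_search(query, docs):
    """Return every document containing the query term -- no notion of intent."""
    return [d for d in docs if query.lower() in d.lower()]

def intent_search(intent_terms, docs):
    """Toy 'semantic' search: rank documents by how many intent terms they share."""
    scored = [(sum(t in d.lower() for t in intent_terms), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True) if score > 1]

# A bare keyword matches everything, telling us nothing about what the user wants.
print(keyword_search("security", documents))  # all four documents match

# Modeling the user's actual intent surfaces only the relevant document.
print(intent_search(["cloud", "security", "azure", "deploy"], documents))
```

Real semantic retrieval uses embeddings rather than term counts, but the contrast is the same: the bare keyword gives the system nothing to discriminate on.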

In thinking about this human equation and its impact on accuracy, the key point we emphasize during Lucy's onboarding is that users should pose genuine questions, akin to human interactions, and be cognizant of the available content. If a human were capable of reading and retaining all of the available content, could they reasonably formulate an answer to the question?

Navigating the intricacies of AI accuracy in knowledge management requires a holistic approach. To measure accuracy meaningfully, pose real-world questions to a system equipped with access to the relevant data it needs to provide informed answers. This allows developers and data scientists to precisely evaluate algorithm effectiveness, iteratively enhance algorithms, and tune models, paving the path for a more precise and impactful solution.
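In practice, that evaluation loop can be as simple as scoring a system's answers against a curated set of real-world questions. The sketch below is a generic illustration: the Q&A pairs, the toy system, and the containment-based match rule are all invented for the example, not a description of any particular product's test suite.

```python
# A minimal sketch of measuring answer accuracy against a curated
# question set. The Q&A pairs and the match rule are illustrative only.

def evaluate(system_answer_fn, test_set):
    """Fraction of questions whose answer contains the expected key fact."""
    hits = sum(
        expected.lower() in system_answer_fn(question).lower()
        for question, expected in test_set
    )
    return hits / len(test_set)

# Hypothetical knowledge base the system has access to.
knowledge = {
    "What are the cloud security requirements for Azure deployments?":
        "Deployments must enable encryption at rest and role-based access.",
    "Where is our data residency policy documented?":
        "The data residency policy lives in the compliance handbook.",
}

def toy_system(question):
    # Returns a stored answer only when the content actually covers the question.
    return knowledge.get(question, "No relevant content available.")

test_set = [
    ("What are the cloud security requirements for Azure deployments?", "encryption"),
    ("Where is our data residency policy documented?", "compliance handbook"),
    ("What is the price of lumber in Puerto Rico?", "lumber price"),  # no source data
]

print(f"accuracy: {evaluate(toy_system, test_set):.2f}")
```

Note how the lumber question from earlier drags the score down for reasons that have nothing to do with the algorithm: the system simply has no content that could answer it. Separating "the data wasn't there" from "the model got it wrong" is exactly what a curated test set makes possible.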

Uploading is Obsolete! Automated Knowledge Management is the New Standard

AI-powered knowledge management systems now have the ability to remove time-consuming, manual processes of uploading, tagging and curation using automation. See what's possible in our latest white paper.

Get Your Copy