
How Atomicwork and ragas bring precision to AI-powered service management

Find out how Atomicwork uses ragas and Weaviate to improve the accuracy and performance of its intelligent service management platform.

Cutting-edge AI tools require cutting-edge reliability measures. As the developers of modern service management and workflow automation software, we need to be extra sure that our solution holds up to the highest ethical and legal standards.

We have been fortunate to find a strong collaboration partner for this endeavour in ragas, whose sophisticated evaluation and synthetic data generation techniques have helped us achieve significant AI accuracy and performance improvements. Let's dive in.

The ragas experiment

The primary goal of the ragas experiment was to improve our system's ability to correctly identify user intent and retrieve accurate information for employee queries.

When employees message our AI assistant, Atom, on Slack, Teams, or email, it needs to identify the intent and retrieve relevant information, a service item, or a form. We have previously written about our experiments with Loaders and knowledge support using LlamaIndex; this is the next step in our effort to get fast answers to employees. It involved enhancing intent recognition to categorize user queries into three distinct types (a rough illustration of this kind of routing follows the list):

  • Knowledge Base (KB) inquiries - all queries that can be answered by snippets from knowledge documents uploaded or connected by IT/HR teams
  • Service requests - all queries for a service, drawn from a team's service catalog, which might contain anything from a laptop to a security service.
  • Small talk interactions - hi, hello and how are you?
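
To make the routing step concrete, here is one minimal, hypothetical sketch of classifying a message into these three intents with a single LLM call. The prompt, labels, model name, and use of the OpenAI client are illustrative assumptions, not our actual implementation.

```python
from enum import Enum
from openai import OpenAI  # assumes the openai>=1.x Python client

class Intent(str, Enum):
    KNOWLEDGE_BASE = "knowledge_base"
    SERVICE_REQUEST = "service_request"
    SMALL_TALK = "small_talk"

CLASSIFY_PROMPT = (
    "Classify the employee message into exactly one label: "
    "knowledge_base, service_request, or small_talk.\n"
    "Message: {message}\n"
    "Label:"
)

def classify_intent(message: str, client: OpenAI, model: str = "gpt-4o-mini") -> Intent:
    """Hypothetical intent router: asks an LLM to pick one of the three labels."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": CLASSIFY_PROMPT.format(message=message)}],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    return Intent(label)  # raises ValueError if the model returns an unexpected label

# Example: classify_intent("My laptop screen is flickering, can I get a replacement?", OpenAI())
```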

To optimize Atom’s responses, we adopted Weaviate’s hybrid search feature.

While vector search is fundamental for an AI assistant, traditional keyword search is still important for use cases where precision matters, such as critical HR scenarios or legal documents.

Hybrid search in Weaviate combines keyword (BM25) and vector search to leverage both exact term matching and semantic context. By merging results within the same system, developers can build intuitive search applications faster.

Figure: Traditional keyword search vs. vector search

But implementing hybrid search in Weaviate involves specifying parameters such as the alpha value and fusion type. For example, to balance keyword and vector searches equally, you can set alpha to 0.5. Additionally, you can use the relativeScoreFusion method to combine the scores from both search techniques. This approach ensures that the search results are both contextually relevant and accurate in terms of keyword matching.
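
As a concrete illustration, here is a minimal sketch of such a query using the Weaviate Python client (v4). The collection name, query text, and connection method are illustrative assumptions, not our production setup.

```python
import weaviate
from weaviate.classes.query import HybridFusion

# Connect to a Weaviate instance (a local one here, purely for illustration).
client = weaviate.connect_to_local()
try:
    # "KnowledgeBase" is a placeholder collection name.
    kb = client.collections.get("KnowledgeBase")

    # alpha=0.5 weights BM25 and vector search equally;
    # relative score fusion merges the two ranked result lists.
    response = kb.query.hybrid(
        query="How do I request a replacement laptop?",
        alpha=0.5,
        fusion_type=HybridFusion.RELATIVE_SCORE,
        limit=5,
    )

    for obj in response.objects:
        print(obj.properties)
finally:
    client.close()
```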

Figure: Weaviate hybrid search (Source: Weaviate)

💡 The alpha value determines the weight given to each search method. An alpha value of 0 uses only BM25 (keyword search), while a value of 1 relies solely on vector search. An alpha value of 0.5 balances both methods equally. Extensive experimentation with different alpha values allowed us to find the optimal balance for each dataset, enhancing the accuracy of user query responses.

Developing specialized datasets

To implement alpha tuning for our hybrid search method, we developed specialized datasets tailored to different user engagement facets:

  • A service catalog dataset
  • A knowledge base (KB) dataset built with unstructured data
  • An FAQ KB dataset optimized for efficient FAQ interactions

We used ragas both to create a synthetic knowledge base dataset and to evaluate retrieval quality.

An ideal evaluation dataset should encompass the various types of questions encountered in production, including questions of varying difficulty. LLMs are not good at creating diverse samples by default, as they tend to follow common paths. ragas test data generation employs state-of-the-art methods such as evolve-instruct to generate a high-quality, diverse test dataset from any given list of documents.

This ensures comprehensive coverage of potential user queries, enhancing the robustness of our AI evaluation.

The process begins with loading a collection of documents, which are then used to generate synthetic Question/Context/Answer samples. Techniques like multi-context rephrasing and conditional modification add complexity to the questions, creating a more challenging and representative dataset for evaluation. This approach not only saves significant time compared to manual dataset creation but also ensures a higher quality of evaluation through diversity and complexity in the generated questions.
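
Here is a rough sketch of what that pipeline can look like with the ragas 0.1.x-style test set generation API; the document path, test size, and evolution mix below are illustrative assumptions.

```python
from langchain_community.document_loaders import DirectoryLoader
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context, conditional

# Load the knowledge documents the synthetic test set should be grounded in.
documents = DirectoryLoader("./kb_docs").load()

# Uses OpenAI models for question generation and critique by default.
generator = TestsetGenerator.with_openai()

# Mix simple questions with evolved ones (reasoning, multi-context, conditional)
# to produce a harder, more representative evaluation set.
testset = generator.generate_with_langchain_docs(
    documents,
    test_size=50,
    distributions={simple: 0.5, reasoning: 0.2, multi_context: 0.2, conditional: 0.1},
)

# question / contexts / ground_truth columns, ready for retrieval evaluation.
testset_df = testset.to_pandas()
```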

We achieved the following through the ragas experiment:

  • A suite of specialized datasets, each tailored for a unique facet of user engagement.
  • Customized alpha values for individual tenants, optimizing user query responses (a rough sketch of this tuning loop follows below).
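
To show how these pieces can fit together, here is a rough sketch of a per-tenant alpha sweep that reuses the Weaviate collection and the ragas test set from the earlier sketches (assuming that client is still connected) and scores each candidate alpha with ragas retrieval metrics. The candidate grid, the `content` property name, and the averaging of metrics are illustrative assumptions, not our production pipeline.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import context_precision, context_recall

CANDIDATE_ALPHAS = [0.0, 0.25, 0.5, 0.75, 1.0]

def retrieve_contexts(question: str, alpha: float, k: int = 5) -> list[str]:
    """Hybrid retrieval at a given alpha; 'content' is a placeholder property name."""
    result = kb.query.hybrid(query=question, alpha=alpha, limit=k)
    return [str(obj.properties["content"]) for obj in result.objects]

best_alpha, best_score = None, float("-inf")
for alpha in CANDIDATE_ALPHAS:
    # Build a ragas-style evaluation dataset: question, retrieved contexts, ground truth.
    eval_ds = Dataset.from_dict({
        "question": list(testset_df["question"]),
        "contexts": [retrieve_contexts(q, alpha) for q in testset_df["question"]],
        "ground_truth": list(testset_df["ground_truth"]),
    })
    scores = evaluate(eval_ds, metrics=[context_precision, context_recall]).to_pandas()
    score = (scores["context_precision"].mean() + scores["context_recall"].mean()) / 2
    if score > best_score:
        best_alpha, best_score = alpha, score

print(f"Best alpha for this tenant: {best_alpha} (avg retrieval score={best_score:.3f})")
```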

The improvements in accuracy have been particularly noteworthy.

For instance, our AI's accuracy in identifying user intent has increased by 15%, and the precision of its responses has improved by 20%.

These enhancements have had a direct impact on the quality of service we provide to our users, making our AI systems more reliable and effective. When it comes to enterprise IT service management, a 20% improvement in precision can make a world of difference to end-user satisfaction and an agent's workload. We are eager to continue building on this success and exploring new opportunities for innovation.

In conclusion

Our partnership with ragas has not only enhanced our AI capabilities but also paved the way for future innovations in synthetic data generation and evaluation methods. Our enterprise AI systems are now more capable than ever, providing precise and reliable responses that enhance the user experience.

To see our assistant in action, sign up for a demo today or connect with us on LinkedIn.

