
Every ITSM tool ships with analytics. You've seen them: pie charts for ticket categories, bar charts for workload, volume trends over time. Deflection rate. MTTR. ESAT.
They all answer one question reasonably well: Is the system alive?
But when you’re running an IT (or HR, or Finance) service org day-to-day, the questions you need answered are more practical: What should we automate first? Which problems keep bouncing between teams? Where is our knowledge falling short?
Most analytics can't answer these. Not because the data isn't there, but because of how it's structured.
I sat down with Gautham Menon, our Product Manager leading Atomicwork’s AI charter, to talk about how this gap drove us to rethink what our Insights report should actually do.
Here’s an underappreciated truth we’ve learned working with enterprises of all sizes:
Most service teams were never designed with automation in mind.
They were staffed for ticket volume, not for identifying what shouldn’t become a ticket in the first place. They built processes to handle inflow — not to continuously eliminate it. Automation is a muscle most organizations are building for the first time.
So, when an admin wants to improve things, their question usually isn't “Can we automate?” It’s “What should I automate?”
When you bolt an AI assistant onto an existing service desk, you typically end up with two separate analytics views: one for the Assistant’s conversations and one for the service desk’s tickets.
They don’t talk to each other.
So when someone asks "How are we handling VPN issues?", you have to mentally stitch together the Assistant data (how many VPN conversations happened, what percentage was deflected) with the ticket data (how many VPN tickets were created, who handled them, were SLAs met).
You end up running the numbers in a spreadsheet, trying to connect dots your tools should be connecting for you.
Employees don’t experience your service org in layers; they experience it as one journey. Your analytics need to reflect that.
Instead of treating conversations and tickets as separate data sets, we model the entire lifecycle of a request. A conversation with Atom might surface knowledge articles, present catalog options, and eventually create a ticket that gets routed, reassigned, and resolved.
That’s not multiple reports. That's one story. And we analyze it as such.
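To make that concrete, here is a minimal sketch of what a unified request lifecycle can look like as a data model. The class and field names below are illustrative assumptions, not Atomicwork's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Illustrative schema only; names are assumptions, not Atomicwork's data model.

@dataclass
class Conversation:
    """A single employee interaction with the AI assistant."""
    id: str
    channel: str                                   # e.g. "slack", "teams", "portal"
    opened_at: datetime
    knowledge_articles_surfaced: list[str] = field(default_factory=list)
    catalog_items_presented: list[str] = field(default_factory=list)

@dataclass
class Ticket:
    """The request a human team handles, linked back to the conversation that created it."""
    id: str
    theme: str                                     # e.g. "VPN connection issues"
    sub_theme: str                                 # e.g. "Certificate problems"
    reassignments: int = 0
    sla_breached: bool = False
    resolved_at: Optional[datetime] = None

@dataclass
class RequestJourney:
    """One story: the conversation and, optionally, the ticket it produced."""
    conversation: Conversation
    ticket: Optional[Ticket] = None                # None means the request was deflected

    @property
    def deflected(self) -> bool:
        return self.ticket is None
```

The important part is the link between the two objects: every downstream metric, from deflection to reassignments to SLA outcomes, can be traced back to the conversation that started it, which is what makes theme-level analysis possible.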
So when you look at "VPN connection issues", you should see:
All in one view. All for that specific issue. But unified data is just the foundation. What matters is what kind of questions you can now ask.

Traditional quantitative dashboards are great for pulse checks and quarterly business reviews. Is the AI agent being used? Is adoption going up quarter-over-quarter? They tell leadership whether the investment in AI is paying off.
But they're the wrong tool for the IT admin who's trying to figure out what to improve this week.
They don't need to know that deflection went from 62% to 64% last month. They need to know which issues are driving volume, where deflection is breaking down, and which problems keep bouncing between teams.
That's qualitative insight. We made a deliberate decision to build Insights around this: not as an afterthought, but as the core.
Atom reads what people are saying across every conversation and request. Using Claude's language models, it then clusters similar issues together and organizes them into themes and sub-themes, ranked by volume.
As Gautham put it in the webinar:
“The inside joke is that Atom is a forward deployed analyst — always on, analyzing every request and every conversation 24/7. Except it’s an AI agent doing it behind the scenes.”
The themes you see aren't predefined categories configured months ago. They're generated by analyzing how people describe their problems and how those conversations unfold.
This matters because real problems don't fit neatly into the category structures we set up months or years ago. "VPN issues" might break down into certificate problems, multi-client authentication failures, and connectivity on specific networks. Insights surfaces those sub-themes automatically, so you’re always looking at what matters most.
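For readers who want a feel for the mechanics, here is a rough sketch of how LLM-based theme clustering can work in principle, using the Anthropic Python SDK. The prompt, model name, and output shape are assumptions for illustration; this is not Atomicwork's production pipeline.

```python
import json
import anthropic

# Illustrative only: the prompt, model name, and output shape are assumptions,
# not Atomicwork's production pipeline.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def cluster_into_themes(issue_summaries: list[str]) -> list[dict]:
    """Group free-text issue descriptions into themes and sub-themes, ranked by volume."""
    prompt = (
        "Group these IT issue descriptions into themes and sub-themes. "
        "Return only JSON: a list of objects with 'theme', 'sub_theme', and 'count', "
        "sorted by count descending.\n\n"
        + "\n".join(f"- {s}" for s in issue_summaries)
    )
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model; any capable Claude model works
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model returns bare JSON; real code would validate and retry.
    return json.loads(response.content[0].text)
```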
Once you drill into a sub-theme, Insights shows you a focused set of signals designed to answer a simple question: what should I fix or automate next?
For every sub-theme, you see the total conversations and total requests created, across all channels.
Not every problem starts in a chat. By pulling all entry points into one view, Insights shows the true demand signal, not just conversational traffic. It helps you answer: is this really high-volume or are employees bypassing the Assistant entirely?
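As a sketch, that bypass question can be answered directly from unified data, assuming each request carries a sub-theme label and a source channel. The column names below are hypothetical.

```python
import pandas as pd

# Hypothetical columns on unified request data: a sub-theme label and the
# channel each request originated from ("assistant", "email", "portal", ...).
requests = pd.DataFrame({
    "sub_theme": ["SSO Authentication", "SSO Authentication", "Certificate problems",
                  "Certificate problems", "Certificate problems"],
    "source":    ["assistant", "email", "portal", "assistant", "email"],
})

demand = requests.groupby("sub_theme").agg(
    total=("source", "size"),                                     # true demand, all channels
    bypass_rate=("source", lambda s: (s != "assistant").mean()),  # share that skipped the Assistant
).sort_values("total", ascending=False)
print(demand)
```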
You also see conversation-level signals for each sub-theme: which knowledge articles were surfaced, which catalog items were presented, and how often conversations were deflected versus escalated into tickets.
From these, you understand whether it's a knowledge problem, whether the issue simply can't be resolved with information alone, or whether users are being overwhelmed with choices that lead to incorrect routing downstream.
You also see request metrics like reassignment counts, resolution times, and SLA compliance.
These show how efficiently requests are handled once they reach a human. They reveal where ownership is unclear, where tickets bounce between teams, and where certain problem types consistently breach SLAs and need different routing or staffing.
In a traditional ITSM dashboard, you'd see "average reassignments per ticket" as an org-wide metric. Maybe you can filter by team. But you don't know which problems are causing the ping-pong.
In Insights: Reassignment rates per theme.
You discover that VPN-related tickets have an 88% reassignment rate, averaging 2 reassignments each. Meanwhile, hardware requests are fine.
Now you have a clear next step: Fix routing for VPN issues. Create a dedicated incident catalog with clear assignment rules. The problem is scoped, not buried in an average.
Traditional view: "Deflection rate is 65%. Goal is 75%."
In Insights: Top 10 themes by volume.
For “Network Connectivity,” deflection is 71%; for “Application Access,” it’s 48%.
Within Application Access, the sub-theme “SSO Authentication” has almost no deflection: Atom surfaces articles, but users still create tickets.
You now know SSO authentication needs better knowledge content, or a workflow that actually resolves the issue instead of just explaining it.
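Conceptually, this is the aggregation Insights does for you: the same metrics you would compute org-wide, grouped by theme instead. Here is a rough sketch with pandas; the data and column names are made up for illustration.

```python
import pandas as pd

# Illustrative journey data; column names are assumptions, not Atomicwork's schema.
journeys = pd.DataFrame({
    "theme":          ["VPN", "VPN", "VPN", "Hardware", "Application Access"],
    "ticket_created": [True,  True,  False, True,       True],
    "reassignments":  [2,     3,     0,     0,          1],
})

per_theme = journeys.groupby("theme").agg(
    volume=("ticket_created", "size"),                            # total journeys for the theme
    deflection_rate=("ticket_created", lambda t: 1 - t.mean()),   # share resolved without a ticket
    reassigned_share=("reassignments", lambda r: (r > 0).mean()), # share of tickets that bounced
    avg_reassignments=("reassignments", "mean"),
)
print(per_theme.sort_values("volume", ascending=False))
```

The numbers above are fabricated, but the shape of the output is the point: per-theme rows are what turn "our average looks fine" into "VPN is the outlier worth fixing."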
Service desks accumulate catalogs over years. Mergers, team changes, new tools – every process change adds to the existing sprawl. Eventually you have six different ways to report a VPN problem.
Insights shows which catalogs are being surfaced for each theme. If "General IT Request," "Network Issue," "VPN Troubleshooting," and "Zscaler VPN Issues" all appear for the same sub-theme, that's a signal. Users (and Atom) are confused. Time to consolidate.
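One way to spot that signal programmatically, assuming you can export which catalog items were surfaced per sub-theme. The input pairs and the threshold below are hypothetical.

```python
from collections import defaultdict

# Hypothetical input: (sub_theme, catalog_item_surfaced) pairs from conversation logs.
surfaced = [
    ("VPN connection issues", "General IT Request"),
    ("VPN connection issues", "Network Issue"),
    ("VPN connection issues", "VPN Troubleshooting"),
    ("VPN connection issues", "Zscaler VPN Issues"),
    ("Password reset", "Account & Access"),
]

catalogs_by_subtheme: dict[str, set[str]] = defaultdict(set)
for sub_theme, catalog_item in surfaced:
    catalogs_by_subtheme[sub_theme].add(catalog_item)

# Flag sub-themes where several catalog items compete for the same problem.
SPRAWL_THRESHOLD = 3  # arbitrary cutoff for this sketch
for sub_theme, catalogs in sorted(catalogs_by_subtheme.items()):
    if len(catalogs) >= SPRAWL_THRESHOLD:
        print(f"Consolidation candidate: {sub_theme} -> {sorted(catalogs)}")
```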
The current Insights report answers "what's happening." The next phase is "what should I do about it."
We're working on surfacing specific recommendations: the knowledge gaps to fill, the workflows worth building, the catalog items to consolidate.
The goal: Close the loop. Not just show you data, but tell you the three things to do this week to make Atom smarter, without turning your job into full-time forensics.
We do get asked about ROI constantly. It's important, but it's a different question than what Insights is designed to answer.
ROI is an output metric. You measure it quarterly or annually. It tells you whether the investment was worth it.
Insights focuses on the input metrics that drive ROI. Fix your reassignment problems, improve deflection for your top five themes, clean up your catalog structure, and ROI will follow.
Traditional ITSM analytics were designed for a world where humans handled most requests, and AI was a bolt-on. The data models reflect that: tickets are the primary object, conversations are a separate add-on.
We built Atomicwork differently. Conversations are first-class objects. Every interaction is tracked and linked - to its knowledge sources, to any tickets created, to the catalogs surfaced. This is what makes Atomicwork Insights possible. It's also what makes this kind of analysis impossible to retrofit onto legacy platforms.
If you’d like to see Insights in action or talk through what qualitative ops intelligence could look like for your team, reach out to us for a demo.