You’ve built an agent. Now let’s make it great.
Mitra Abrahams and Jason Cole share how to monitor, debug, and improve live AI agents using real telemetry so quality doesn’t quietly degrade over time.



What to expect, and why it’s worth your time.
Your agent is in the wild. Real users are asking real questions. Some are getting good answers. Some aren’t. How do you know which is which - and how do you systematically make it better?
This session is about the operational reality of running a self-serve agent at scale. Mitra and Jason walk through how to monitor agent quality using telemetry data, spot the patterns that signal something’s wrong, and update context, schema, and data models based on real usage rather than guesswork.
Most teams’ agents quietly degrade over the first few months because no one’s watching the right signals. By the end of this session, you’ll know exactly what to watch and what to do about it.
This is Part 2 of a three-part series. It builds naturally on Part 1’s ‘how to build it’ - but stands alone if your agent is already live.
Frequently asked questions
- I haven’t watched Part 1 - can I still get value from this? Absolutely. Part 2 stands alone. We’ll briefly recap the building blocks at the start so no one’s lost.
- Do I need an agent already running? No, but you’ll get the most out of it if you’re either running one or about to launch.
- Will this work for non-Count agents? Yes. The monitoring framework is tool-agnostic. We’ll demo with Count, but the principles apply universally.
- What’s the difference between this and Part 1? Part 1 is ‘how do I build one.’ Part 2 is ‘how do I keep it good once it’s live.’ Different problems, different stages.
Concrete takeaways, not a think piece.
Led by people who’ve actually done the work.

Mitra Abrahams
Mitra has helped hundreds of companies move their data team from a support function into a driver of business improvement. As Head of Customer Success at Count, she works directly with data leaders to unlock value from their analytics investment. Before joining Count, Mitra ran her own analytics consultancy.

Jason Cole
Jason is a Senior Software Engineer at Count, focused on the infrastructure that powers self-serve analytics agents. He works at the intersection of data engineering and product.
Who this is for
- Data and analytics teams with an agent in production - or weeks away - who need to know how to keep it good over time.
- Heads of Data thinking about longevity, who want a plan for maintaining agent quality, not just launching it.
- Data engineers responsible for agent infrastructure, who need a monitoring and governance framework they can actually implement.
- Anyone whose agent has gone from ‘demo magic’ to ‘user complaints’ and needs a systematic path back to trust.
Secure your spot.
It’s free.
3:00 PM BST / 10:00 AM EDT

