Webinar · The Self-Serve Agent Series · Part 2 of 3
You’ve built an agent. Now let’s make it great.

Mitra Abrahams and Jason Cole share how to monitor, debug, and improve live AI agents using real telemetry so quality doesn’t quietly degrade over time.

Join 4,000+ data pros who already learn with us
Accenture · Bumble · Cleo · Docebo · Insurello · JustPark · Kit · LeadIQ · Moneybox · MUBI · Writer
About this session

What to expect, and why it’s worth your time.

Your agent is in the wild. Real users are asking real questions. Some are getting good answers. Some aren’t. How do you know which is which, and how do you systematically make it better?

This session is about the operational reality of running a self-serve agent at scale. Mitra and Jason walk through how to monitor agent quality using telemetry data, spot the patterns that signal something’s wrong, and update context, schema, and data models based on real usage rather than guesswork.

Most teams’ agents quietly degrade over the first few months because no one’s watching the right signals. By the end of this session, you’ll know exactly what to watch and what to do about it.

This is Part 2 of a three-part series. It builds naturally on Part 1’s ‘how to build it’, but stands alone if your agent is already live.

Frequently asked questions

  • I haven’t watched Part 1 - can I still get value from this? Absolutely. Part 2 stands alone. We’ll briefly recap the building blocks at the start so no one’s lost.
  • Do I need an agent already running? No, but you’ll get the most out of it if you’re either running one or about to launch.
  • Will this work for non-Count agents? Yes. The monitoring framework is tool-agnostic. We’ll demo with Count, but the principles apply universally.
  • What’s the difference between this and Part 1? Part 1 is ‘how do I build one.’ Part 2 is ‘how do I keep it good once it’s live.’ Different problems, different stages.
What you’ll leave with

Concrete takeaways, not a think piece.

01
What to monitor
Questions asked, answer quality, rage signals, drop-off, and where users got the wrong answer and didn’t trust it.
02
Spotting themes in telemetry
What is the business actually asking about, and what are you missing?
03
Updating based on real usage
How to improve context, schema, and data models from patterns rather than guesses.
04
Governance and ownership
Who owns the agent’s accuracy, and how to stop blame-fear from killing momentum.
05
Three ways to use the same telemetry data
The same signal can drive three different types of improvement; we walk through all of them.
Your hosts & who this is for

Led by people who’ve actually done the work.

Mitra Abrahams

Head of Customer Success · Count

Mitra has helped hundreds of companies move their data teams from a support function into a driver of business improvement. As Head of Customer Success at Count, she works directly with data leaders to unlock value from their analytics investment. Before joining Count, Mitra ran her own analytics consultancy.

Jason Cole

Senior Software Engineer

Jason is a Senior Software Engineer at Count, focused on the infrastructure that powers self-serve analytics agents. He works at the intersection of data engineering and product.

Who this is for

  • Data and analytics teams with an agent in production
Or weeks away, and need to know how to keep it good over time.
  • Heads of Data thinking about longevity
    Who want a plan for maintaining agent quality, not just launching it.
  • Data engineers responsible for agent infrastructure
    Who need a monitoring and governance framework they can actually implement.
  • Anyone whose agent has gone from ‘demo magic’ to ‘user complaints’
    And needs a systematic path back to trust.

Secure your spot.
It’s free.

Registration open
3 June 2026
3:00 PM BST / 10:00 AM EDT
Mitra Abrahams
Head of Customer Success
Jason Cole
Senior Software Engineer