Webinar

Evaluating AI Across the Research Pipeline

Decoding the Risk Cascade (& What to Do About It)

Tuesday, March 31, 2026
11:00am - 12:00pm ET
Remote
Watch on-demand
Speaker
Lindsey DeWitt Prat, PhD
Director
Bold Insight
Speaker
Kaleb Loosbrock
Senior UX Researcher
Our 2026 UXR Tools Map showed that most tools are now AI-native or AI-augmented. But AI research tools don't make a single decision; they chain many decisions together, and each step inherits what the last one got wrong.

Research leaders Lindsey DeWitt Prat and Kaleb Loosbrock team up to tackle how we evaluate AI tools across the research pipeline. Lindsey introduces the “research risk cascade” and walks us through what she found tracking every divergence across a real pipeline. Kaleb joins for discussion on context engineering and what practitioners can start doing now.

You’ll learn:

  • How errors compound across a research pipeline, and why “90% accurate” doesn’t mean what you think
  • A practical framework for evaluating AI tools against what your research needs to preserve
  • How context engineering can increase confidence in outcomes


Your Hotline Host

Ben Wiedmaier
Senior Content Marketing Manager
User Interviews

With User Interviews, it's simple to run high-quality research with your target audience.

User Interviews is the only tool that lets you source, screen, track, and pay participants from your own panel, or from our network of 6M+ participants.

Sign Up Free

Register Now
