UX Research & Design

  • (2025-Ongoing)

At ServiceNow, I lead UX research for AI Agent Studio, shaping how medium and large enterprises build, test, and govern agentic AI in their organizations. I’ve conducted end-to-end research across onboarding, prompting, versioning, and governance workflows, uncovering what teams need to feel confident handing work over to AI.

    My research has directly informed product strategy, clarified complex concepts, and grounded early designs in the real-world expectations of admins, developers, and business users.

    Product Page - AI Agent Studio

  • (2023-2024)

I conducted foundational user interviews and surveys to understand how service desk agents summarize support cases and incidents, covering which details they capture, how they describe the problem, and what makes a summary useful.

    These insights directly shaped the design of the case summarization feature in Now Assist, helping ensure it reflects real user needs and language.

    Product Page - Case Summarization

  • Task Intelligence is a ServiceNow AI capability that streamlines service operations by automatically creating, triaging, and investigating tasks. It uses AI to extract key information, detect sentiment and language, and intelligently route cases—freeing up agents to focus on high-value work.

    I conducted UX research across both the end-user service agent experience and the admin setup flow, including interviews and usability testing. My work focused on how agents interact with AI-generated suggestions and how admins define, train, and evaluate models through a guided setup. Insights from this research informed improvements in explainability, confidence score displays, and overall usability for both roles.

    Product Page - Task Intelligence

  • I led user interviews and usability testing for ServiceNow’s Document Intelligence, an AI-driven system for extracting data from text-based documents like receipts and PDFs, with a focus on how users interpret confidence scores and understand automated outputs.

    My research surfaced key trust and usability issues around explainability, directly informing design changes that improved review accuracy, user control, and adoption.

  • I conducted user interviews and literature reviews on collaborative internal data labeling tools, focusing on how teams navigate ambiguous inputs and disagreement during annotation.

    My research surfaced opportunities to improve accuracy, reduce friction, and support shared decision-making in collaborative labeling workflows.

Writing & Publications

  • Presented at SIGHCI 2024, a Special Interest Group on Human-Computer Interaction conference, and published in the AIS Electronic Library (AISeL).

By Pauline Malaguti, Alexander J. Karran, Di Le, Hayley Mortin, and Constantinos K. Coursaris. SIGHCI 2024 Proceedings, Association for Information Systems.

    Full paper here.

  • Written for UXMatters

  • Written for UXMatters

  • Written for UXMatters

  • Published with Data Science Alliance

  • Written for UXMatters

  • Written for UXMatters

  • Written for UXMatters

Speaking Engagements

This talk introduces a persona-based framework that helps product teams design AI agents by treating them as unique worker personas with defined goals, constraints, and success metrics rather than just reactive tools.

    We explored the shift from traditional task execution to intentional, proactive AI systems and provided criteria for identifying where Agentic AI makes sense alongside clear definitions of how these systems differ from standard automation.

    Presentation recording (38 mins)

If you're building generative AI applications for everyday consumers, there's room for error. Shipping something that's a bit half-baked, listening to feedback, and launching a better version is A-ok.

But when it comes to generative AI in enterprise applications, it's a whole different story. How do we build great experiences for business customers with this early technology, without compromising the consistency they expect from their tools?

Presentation recording (34 mins)

  • This presentation explored how UX research can bridge the gap between data annotation teams and user needs by designing meaningful feedback mechanisms that improve AI model training.

    Working alongside a Senior Linguist from our internal data annotation team, we examined how traditional annotation workflows often miss crucial user context and demonstrated research methods for capturing nuanced feedback that leads to more human-centered AI outputs.

  • At UXC23, we explored how generative AI and slot machines, despite operating on different technological spectrums, share a common thread of captivating users through uncertainty and variable outcomes.

    We introduced "controlled friction" as a necessary shift in UX design, moving away from traditional streamlined experiences toward empowering users to understand and navigate AI's complexities while addressing the ethical concerns and user expectations that come with uncertain outcomes.

  • Confidence scores