I've spent the last six years working as a UX researcher, mostly with artificial intelligence/machine learning (AI/ML) platforms, enterprise software, and complex system design. I'm drawn to the early-stage, unwieldy questions: How can human users and AI systems work together in meaningful ways? What kinds of interfaces make the invisible workings of AI feel legible rather than mystifying? How do we design for trust when the systems themselves are still figuring out what they're supposed to do?
Before I was asking these questions professionally, I was a data annotator, spending my days teaching algorithms to recognize patterns. That work gave me a particular vantage point on AI: not just as a user or researcher, but as someone who's been inside the machinery, watching it learn. It also made me something of a subject matter expert on how AI actually functions, which turns out to be essential for separating what these systems can genuinely do from the noise of hype cycles. Most importantly, it taught me to recognize the kinds of human problems AI is actually suited to solve versus the ones where it creates more friction than it resolves.
What it's like to work with me on a UX project:
Mixed methods with intention – I work across interviews, surveys, usability testing, and concept validation, with a lean toward qualitative approaches. I do my best work when I can triangulate across sources, and I thrive in projects where I get to be face-to-face with people.
Enterprise fluency, human focus – I've worked with system admins, data analysts, data scientists, platform owners, service desk agents, and executives at companies of every scale. I'm committed to finding patterns in these conversations and turning them into actionable insights that map onto product decisions and roadmap priorities.
Deep AI/ML exposure – Particularly around Agentic AI and Generative AI systems. I've led research on everything from prompting behaviors to trust boundaries, from onboarding flows to security and governance frameworks. For me, UX research and AI research aren't separate practices; they work in tandem, each revealing how people and systems shape each other.
Synthesis and storytelling – Whether it's distilling insights for a five-slide executive presentation or facilitating workshops that help product teams see around corners, I'm interested in keeping the human stakes visible in technical conversations.
Comfort with ambiguity – I often work on problems before they're fully defined, helping teams figure out not just what to build, but what questions to ask and what success might look like in the first place.
Collaborative practice – I've worked embedded with design and product teams, led research initiatives across business units, and partnered with stakeholders who are still discovering what research can and can't do. I'm more interested in building shared understanding than protecting disciplinary boundaries.
Tools – Figma, Miro, Dovetail, Airtable, UserTesting, Maze, Qualtrics, Lookback, SPSS. But also spreadsheets, Post-its, Sharpies, and whatever else gets us closer to understanding.
I believe research should be rigorous without being rigid. Especially when it comes to AI, we need to make these systems more legible, more accountable, and more aligned with the people whose lives they impact.