Apple_project_tracker.png
Client

At Apple, I was part of the Siri team, where I focused on improving the AI/ML and voice recognition systems behind the “Hey Siri” experience. My work helped boost accuracy and reduce false activations across 37 languages, making Siri more responsive and reliable in all kinds of real-world environments. I partnered closely with data engineers, natural language analysts, and product managers to refine wake-word detection and design five internal tools that kept clean, high-quality data flowing through the system. From client ingestion to data QA, I streamlined the entire project lifecycle to support continuous model improvement. The end result? A more intuitive, consistent voice interface that supports Apple’s goal of making Siri effortlessly helpful for users around the world.

Overview

CLIENT | Apple | Siri Team 

ROLE | Lead Interaction Designer 

PLATFORM | Web 

PRODUCT | AI/ML Internal Tools & Project Management Software

TIMELINE | 8 Months

Responsibilities

Human-centered design lead: Focused on streamlining the workflows of linguists, model trainers, and data scientists.

UX & Design Execution: Led UX research and system redesign efforts to improve the human-in-the-loop (HITL) feedback loop for AI/ML labeling.

Collaboration & Implementation: Worked with both internal stakeholders (like Siri language leads and data scientists) and external partners to optimize pipeline touchpoints.

Goal

Reduce the number of third-party tools while increasing the efficiency of Siri projects by improving workflow transparency, project tracking, and data accuracy for the global Siri AI/ML team.

Data-Driven Outcomes

85% - Reduction of third-party tooling (7 to 1) 

4X - Faster project creation

20% - Increase in eNPS for managers using the new software

Design Process

By following the six-step Design Thinking process—Empathize, Define, Ideate, Prototype, Test, and Implement—I developed a structured approach to solving key challenges faced by enterprise data teams. This process ensured that our solution was user-centered, technically feasible, and aligned with business goals.

Homepage_Apple_Hey Siri.png
Research

EMPATHIZE

QUANTITATIVE

Collaborating with multiple Siri team Product Managers, I analyzed each team's key metrics and aligned them with a global approach. We conducted a time and tooling audit across 12 global locales to understand the operational impact of manual tracking. Key findings:

  • 100% of participants expressed frustration with the multiple tools required to manage a project's life cycle.

  • One manager spent 40 hours/week on manual tracking tasks.

  • Across 12 managers, that totaled 480 hours/week—the equivalent of 12 full-time workweeks lost weekly.

  • Manual processes relied on 7 disconnected tools to manage end-to-end project tracking.

  • Estimated productivity loss: $36,000/week or ~$1.8M/year.
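The audit numbers above can be reproduced with simple arithmetic. A quick sketch follows; note the $75/hour blended cost is an illustrative assumption (implied by the totals, but not stated explicitly in the audit):

```python
# Back-of-the-envelope check of the tooling-audit estimates.
managers = 12
hours_per_manager = 40        # hours/week each manager spent on manual tracking
hourly_rate = 75              # assumed blended cost in USD/hour (illustrative)

weekly_hours = managers * hours_per_manager   # 480 hours/week
weekly_cost = weekly_hours * hourly_rate      # $36,000/week
annual_cost = weekly_cost * 52                # ~$1.87M/year, i.e., ~$1.8M

print(weekly_hours, weekly_cost, annual_cost)
```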

QUALITATIVE

Tracking success with hard numbers was important, but at the heart of the Siri team’s work were the Natural Language Analysts—the real MVPs behind the scenes. These global teams spend hours parsing through data, fine-tuning it for ML model training, and ensuring everything runs smoothly. They know how critical their role is, and they feel the pressure to get it right. So, making their workflows easier, more efficient, and even a little more fulfilling was a top priority.
To make sure we were solving the right problems, I went straight to the source—talking to analysts, AI/ML engineers, and product managers to understand what wasn’t working.

  • Conducted over 20 hours of user interviews with Natural Language Analysts, AI/ML Engineers, and Product Managers.

  • Used "5 Whys" methodology to uncover root causes of workflow pain points.

  • Identified Jobs to Be Done (JTBD) for key stakeholders.

KEY FINDINGS

Our research uncovered two primary personas—managers overseeing project lifecycles and analysts handling data labeling—that highlighted critical workflow gaps. We found that managers lacked visibility into team progress, analysts were burdened with repetitive tasks that compromised data quality, and Data Ops teams were losing valuable time to manual validation. These inefficiencies collectively slowed down AI/ML development and pointed to clear opportunities for automation and workflow redesign.

  • Managers lacked visibility into team progress, causing workflow slowdowns

  • Analysts were bogged down by repetitive tasks, hurting data quality

  • Data Ops teams spent too much time manually validating data, delaying model improvements

Personas

DEFINE

Manny_Persona.png
Jenny_Persona.png
Chris_Persona.png
PROBLEM STATEMENTS

Talking to the global team gave me a chance to uncover key challenges that were holding them back and shape a clear direction for improving their workflows. By listening to their experiences, I was able to identify the biggest pain points and define the "Jobs to Be Done" that would make their day-to-day work smoother and more efficient.
Key Problem Statements:

"I need a quick way to see my team’s overall performance with the option to zoom in on specific people or locations, so I can coach effectively and keep projects on track."
— Manager Persona

"It’s stressful not knowing if my work is properly recorded or tracked. I need real-time confirmation so I can focus on my job without distractions."
— Analyst Persona

"I rely entirely on our internal ticketing system, but Siri data projects aren’t properly tracked there. I need them integrated into Radar so I don’t have to jump between multiple tools just to fix an issue."
— Data Engineer Persona

Project Goals Venn.png
User Flows

IDEATE

One big hurdle was that teams working on the same project often had no visibility into each other’s work due to strict privacy policies. With up to nine teams working on a single Siri initiative, it was tough to track dependencies or coordinate efforts. On top of that, the internal tool ecosystem was fragmented, with teams relying on a mix of homegrown solutions and separate tracking tools, leading to inefficiencies and duplicated work.

Solving for project creation in a way that reduced redundancies and allowed for transparency across teams was the first workflow I tackled.

Projects_Sitemap_Siri.png
Wireframes

IDEATE

Wireframes_Siri.png
Prototype

TEST

Usability Testing

LEARN

By facilitating user testing sessions with managers and analysts in Japan, Seattle, and Ireland, I was able to confirm and refine the overall workflows for key user roles. These sessions provided valuable insights into how different teams interact with the system and helped ensure that the new workflow improvements met their needs.

SCENARIOS

The user testing scenarios aligned with the qualitative research findings from the user interviews I conducted earlier in the project, ensuring we were validating solutions against real user needs.

  • Scenario 1: As a QA Admin, I need to create a new project, add reviewers, specify the locales it covers, upload a CSV, and link it to Radar. I need to seamlessly transition my project from draft to running.

  • Scenario 2: As a manager, I need to access my project dashboard, view my team's performance at a global level, and drill down by locale to analyze trends and improve decision-making.

SUCCESS METRICS

I made sure to capture the success metrics called out in the quantitative research findings as a way to align with the various Siri team KPIs and ensure team adoption.

  • Task completion time (create a new project) was reduced by 75%, making workflows 4x faster and freeing up valuable time for higher-priority tasks.

  • 7 external tools were consolidated into a single integrated system, eliminating inefficiencies, reducing context switching, and improving overall workflow consistency and visibility.

These improvements resulted in smoother operations, better team alignment, and a more efficient user experience across global teams.
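The two headline metrics are consistent with each other: a 75% reduction in task completion time is, by definition, a 4x speedup. A quick check, using an illustrative baseline time:

```python
# Relationship between a percent reduction in task time and the speedup factor.
old_minutes = 60              # illustrative baseline task time (not a measured value)
reduction = 0.75              # 75% reduction in completion time

new_minutes = old_minutes * (1 - reduction)   # 15 minutes
speedup = old_minutes / new_minutes           # 4.0x faster

print(new_minutes, speedup)
```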

Faster Project Creation
4X
Reduction in Tools
7 to 1
Feedback

LISTEN

Right after the user tests, I started getting messages from participants who were excited to see their feedback come to life in a real, tangible way. Seeing their ideas and needs reflected in the product made them feel heard, and that was incredibly rewarding. Tackling such a complex web of workflows and turning it into something clear and usable was not just a win for the team—it was genuinely satisfying to see it all come together.

High Fidelity

IMPLEMENT

Reworking Apple’s internal operations wasn’t an option, but reducing friction and simplifying manual processes was key to making this project a success. The final designs, while intuitive and straightforward, helped solve some of the most complex workflows across multiple teams and platforms. Below are some of the main screens that brought everything together, ensuring efficiency, usability, and seamless integration.

REQUEST INTAKE

Issue: No integration with Apple’s internal systems, causing manual tracking inefficiencies.

Solution: Integrated request portal with syncing to Siri projects, and auto-population of teams and projects for faster assignments.

PROJECT CREATION

Issue: No centralized way to assign teams, managers, and training.

Solution: Intelligent matching system with syncing to Radar (Apple's internal ticketing software), Siri projects, and auto-population of teams and projects for faster assignments.

TASK MANAGEMENT

Issue: No visibility into task progress, dependencies, or blockers.

Solution: Task management system integrated with Apple’s tools, plus real-time project working sessions and multi-granular dashboards for tracking.

DATA VALIDATION

Issue: Slow, manual QA processes increased errors.

Solution: A centralized QA system cross-checks data, with a visual review interface and direct ingestion into Siri’s training model.
