I learned firsthand that environment is part of the interface. While managing our pilot tests in the classroom, I noticed that participants struggled to feel immersed, but moving our "Wizard of Oz" setup into a real vehicle immediately exposed physical constraints we had missed—like the plushie blocking the windshield. This experience taught me that in-situ testing is non-negotiable for validating safety and ergonomics in hardware design.
Working on the initial research challenged my assumption that conversation is always a distraction. Through interviews and social listening, I discovered that users actually rely on "social facilitation" to stay alert. This reinforced for me that user psychology often defies surface-level logic, highlighting the importance of validating how users perceive cognitive load through qualitative research rather than relying on general heuristics.
Finally, watching users repurpose our "Diary" feature for grocery lists during testing was a pivotal moment for me. It demonstrated that users will often bend tools to fit their immediate needs regardless of the original design intent. This taught me that successful UX requires adaptability, and it led me to advocate for the pivot to the "Voice Notebook" to align with the productivity behaviors I observed in the car.
Our team of four divided responsibilities in a way that matched individual strengths while ensuring everyone had full context on the project at each phase.
During research, I took ownership of the preliminary survey design and the social media listening effort, while teammates led the interview protocol and moderation. We debriefed after every interview as a group, which meant that even when only one person was in the room, the synthesis was collective. This mattered — the "spotlight effect" insight came out of a group debrief conversation, not any single interview.
During ideation, we each independently produced eight sketches to avoid anchoring on any one person's ideas early. We then came together for affinity mapping as a group, which kept the concept selection process democratic. The final three concepts were genuinely co-authored, not owned by any one person.
During prototyping and testing, roles were clearly divided: I managed the technical execution of the Wizard-of-Oz setup — operating the AI simulation, sound effects, and phone call triggers in real time — while teammates handled moderation and note-taking. Having dedicated roles reduced cognitive load during sessions and kept the simulation running smoothly for participants. After each session, we immediately convened to decide whether a pivot was warranted before the next test, which is how the RITE method worked in practice for us.
Design decisions were made collaboratively, with team members bringing evidence from their respective research areas to support recommendations. When we disagreed (for example, on the right form factor for the physical companion), we defaulted to what the user data suggested rather than personal preference.