Overview

Designed and developed novel museum navigation device and software to promote community-led curation.

Role

UX Researcher & Physical Prototyper

Timeline

January 2025 - May 2025

Tools

Figma, Python, SQL, Arduino, Autodesk Fusion, Prusa, Blender, Videography

Teammates

Jiawen Chen - Architect ①
Omar Mohammad - Designer ①
Nile Tan - Designer/Filmmaker ①

Client

The Asian Art Museum

The Problem

Museum curation is often divorced from community input, resulting in a museum experience that primarily reflects the intentions of museum authorities. This undercuts the benefits community members and museum visitors could gain from more community-informed modes of curation.

Results and Impact

We designed and prototyped a museum navigation algorithm that is built on voice-based community-art interactions at the Asian Art Museum in San Francisco. The system includes a screenless device to foster inter-visitor engagement and deepen art understanding. It also opens up responsive and active models of museum-visitor interaction.

Problem Statement

How might we make curation more community-centered at the Asian Art Museum?

Results
The resulting system consists of a design artefact physically modeled after the Japanese inrō. The device collects visitor voice input when one of its ends is pointed at an artwork. The audio is added to a database entry for the selected artwork, which in turn feeds an algorithm that generates custom museum tours from approved, combined visitor audio.
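To make the tour-generation idea concrete, here is a minimal sketch of how approved voice clips could be grouped per artwork and ranked into a tour. All names and the ranking rule (favoring artworks with the most approved community audio) are illustrative assumptions, not the museum system's actual implementation.

```python
from collections import defaultdict

def build_tour(approved_clips, tour_length=5):
    """approved_clips: list of (artwork_id, clip_id) pairs that passed review."""
    clips_per_artwork = defaultdict(list)
    for artwork_id, clip_id in approved_clips:
        clips_per_artwork[artwork_id].append(clip_id)
    # Rank artworks by how much approved community audio they carry,
    # then assemble a tour as an ordered list of stops with their clips.
    ranked = sorted(clips_per_artwork.items(),
                    key=lambda item: len(item[1]), reverse=True)
    return [{"artwork": a, "clips": clips} for a, clips in ranked[:tour_length]]
```

A real tour generator would also weigh gallery layout and walking order; this sketch only captures the community-input ranking idea.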

We premiered this technology at the Asian Art Museum and have since been working on an expanded and adaptable version of the system that operates in augmented reality.

Working on this project was a valuable exercise and learning experience in contextual design.

Because our client is a cultural institution, what we built needed to thematically match its environment. Achieving this required extensive cultural learning, active engagement with the Asian and Asian American community at the museum, and building directly on their insights.

This project went on to receive an honorable mention at the 2025 San Francisco Design Week. To learn more about this work, please click the button below.
Overview

Designed foundational enterprise SaaS platform for validating complex environmental performance data.

Role

UI/UX Product Designer (Sole Designer)

Timeline

January 2026 - Ongoing

Tools

Figma, Google NotebookLM, Python, V0 by Vercel, ZeroWidth LLM, Postman, BeautifulSoup, Selenium

Teammates

Product Managers ②
Front-end Engineer ①

The Problem

Environmental Product Declaration (EPD) data is essential for compliance and sustainability reporting, yet for this enterprise client, validation is often manual, fragmented, and hard to scale. Internal teams rely on disconnected tools and informal processes, making accuracy, consistency, and audit readiness difficult to maintain as operations grow.

Results and Impact

Replaced fragmented environmental data reviews with structured, system-driven validation workflows, introducing clear stages, ownership, and feedback loops across facilities and materials. I centralized issues, flags, and approvals to improve EPD readiness, reduced ambiguity around review states, and established scalable design and interaction patterns intended to serve as the foundation for future enterprise sustainability and reporting products.

Problem Statement

How might we enable enterprise industrial teams to validate and operationalize environmental data with confidence, at scale?

Constraints

I worked with incomplete user access and high regulatory risk, needed to design a scalable system that could support multiple roles and future products, and had to deliver a confident, tested system within a very short timeframe.

Approach

I grounded my design decisions in direct user research (interviews and focus groups), desk research, competitive analysis, and established patterns from enterprise and project management software. Where I felt uncertain, I treated my decisions as assumptions, designing a system that could flex as those assumptions are tested and refined.

Design & Iteration
From the start, I designed the platform as a coherent system rather than a collection of isolated screens. The goal was to ensure validation, collaboration, navigation, and feedback reinforce one another instead of competing for attention.

I used a system diagram to reason about how users move between states, how data and decisions flow across surfaces, and where handoffs or errors are most likely to occur. This helped me design for consistency, reuse, and scalability, rather than optimizing individual screens in isolation.


Key principles guiding the system design included:
  • Reusable components and shared patterns to reduce cognitive load and speed up learning
  • Consistent signaling for status, risk, progress, and ownership
  • Clear state transitions between review, flagging, escalation, and completion
  • Predictable interaction models across facilities, materials, and validation stages
  • Shared primitives (cards, flags, progress indicators, notes) used across workflows
  • Clear boundaries between manual review and automated checks in compliance contexts
Rather than treating the system diagram as a static artifact, I used it to surface assumptions to test and areas likely to evolve. This allowed the system to support iteration, where individual elements can change through testing while the underlying structure remains stable enough to scale to future enterprise tools.
Validation reframed as structured work
I reframed validation as a bounded task with a clear beginning, defined stages, and an explicit end, rather than an open-ended data exploration experience. This was based on the assumption that sustainability managers enter the system with work to complete under time and regulatory pressure.

This shows up through facility-scoped entry points, visible progress, and completion states that signal when work is done. These patterns borrow from task-completion models in project management tools to support momentum and orientation.
Validation screen showing data segmentation and action progression.
Validation action screen showing how validation would be carried out.
Task progression nuance and iteration
Within the validation flow itself, I iterated on how progression and completion should be communicated. Early versions framed the primary action as “Review complete,” but testing showed this language was ambiguous and did not clearly signal forward momentum.

Based on this, I changed the primary action to “Next” to reinforce task progression, and added a “Save and exit” option to support interruption without loss of work. I also introduced gating that prevents progression until all items in a stage are reviewed, reinforcing confidence and completeness.

The remaining open question is how to establish hierarchy and priority within individual validation cards. This is an area I plan to test further to determine whether visual emphasis, ordering, or risk indicators best guide attention without overwhelming users.
Validation scene showing blocking progression to help validators not miss any important reviews.
Validation scene showing "Save and Exit" function.
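The gating behavior described above can be sketched in a few lines. The field names are hypothetical; this is only an illustration of the rule that "Next" stays blocked until every item in the stage has a review decision.

```python
def can_advance(stage_items):
    """stage_items: list of dicts, each with a boolean 'reviewed' flag."""
    return all(item["reviewed"] for item in stage_items)

def primary_action(stage_items):
    # "Next" reinforces forward momentum; the disabled state blocks
    # progression until the whole stage has been reviewed.
    return "Next" if can_advance(stage_items) else "Next (disabled)"
```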
Tradeoff: validation scoped to a facility context
One key tradeoff of reframing validation as structured work is that validation actions are only surfaced after a user selects a facility or plant, rather than being globally available on the landing view.

This decision prioritizes accuracy and contextual safety over speed. By forcing users to explicitly choose a facility, the system reduces the risk of validating data in the wrong context. The tradeoff is that users who want to start with high-level analysis take an extra step, but analytical views remain accessible through previews, side panels, and dedicated visualization surfaces.

This reflects an intentional bias toward doing the right work in the right scope, rather than optimizing for the fastest possible entry into action.
Scene showing the primary validation action embedded within a facility card - a tradeoff that helps with organization.
Designing for mixed expertise: Progressive disclosure
To reduce cognitive load, I applied progressive disclosure, showing summary information by default while keeping deeper data accessible when needed. The split between summary data and in-depth data was defined directly from user research.

This allows users to move quickly through validation without losing access to detail, avoiding separate “basic” and “advanced” modes.
Various scenes showing progressive/elective disclosure.
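As a sketch of the progressive-disclosure split, a record can render its summary fields by default and expose detail fields only on request. Which fields fall on each side of the split is an assumption here; in the product it was defined directly from user research.

```python
# Hypothetical field split for an EPD validation card.
SUMMARY_FIELDS = ("material", "status", "risk")        # shown by default
DETAIL_FIELDS = ("methodology", "raw_measurements")    # shown on expand

def render(record, expanded=False):
    """Return only the fields the current disclosure level should show."""
    fields = SUMMARY_FIELDS + (DETAIL_FIELDS if expanded else ())
    return {f: record[f] for f in fields if f in record}
```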
Collaboration anchored to decisions
Collaboration tools are embedded directly into the validation flow, anchoring discussion to specific data and decisions.

This keeps context intact, improves traceability, and reduces reliance on external coordination tools. It also reflects how designing for mixed expertise guides my design thinking. I plan to test whether this becomes the primary collaboration space or remains a secondary reference.
Scenes showing collaboration through notes and shared issue review workspace.
Learning-oriented system design
From user observation and desk research I learned that users typically approach complex tools like these through ongoing use and experimentation, not one-time onboarding or extensive manuals.

Additionally, in similar enterprise tools, I observed that users often plateau early and rely on only a small subset of features.

To counter this, I added contextual tooltips and lightweight guidance that appear during real work. These prompts surface more efficient workflows, explain why certain actions matter, and encourage users to gradually expand how they use the system.

I plan to test placement and frequency to ensure guidance drives discovery without becoming noise.
Results
Designing this product required balancing multiple layers of expectation. I was working with an enterprise client whose immediate needs shaped the validation workflow, while also designing a system intended to serve as foundational infrastructure for future enterprise offerings at the company I was working for. This meant making decisions that solved real, near-term problems without overfitting the product to a single client.

Working with an enterprise client highlighted how much alignment, clarity, and restraint are required in complex environments. I often had to navigate differing priorities, push back on requests that added surface complexity without improving outcomes, and revise my own assumptions when feedback or testing showed they were off. Being the sole designer meant taking ownership of those decisions while staying open to correction.

I also collaborated closely with AI-assisted tools to rapidly prototype and explore interaction ideas, using them to pressure-test concepts and bring early flows to life. At the same time, this reinforced the importance of strong fundamentals. Clear wireframing, system thinking, and interaction design were essential before any high-fidelity or AI-generated prototypes could be useful.

There are still areas I would explore next, including deeper accessibility considerations, performance tradeoffs at scale, and more robust testing of hierarchy and prioritization within validation cards. These are intentional next steps rather than gaps, and they reflect where further user research and usage data would most meaningfully inform iteration.

Overall, this project reinforced the value of designing with humility, making assumptions explicit, and treating complex systems as something to be learned and refined over time rather than solved in one pass.
Overview

Reducing plastic bottle use with UV-sanitized, accessible drinking fountains for Abu Dhabi’s Corniche.

Role

Lead Product Designer

Timeline

March 2021 - May 2021

Tools

Nomad Sculpt, Fusion 360, Rhino 3D, Adobe Illustrator, KeyShot

Teammates

Mechanical Engineers ②
Business Strategists ③
Product Manager ①
Architect ①

The Problem

Abu Dhabi’s Corniche hosts thousands of daily visitors: walkers, joggers, cyclists, and families. Yet the absence of safe, inclusive drinking fountains forces reliance on bottled water, contributing to waste and excluding wheelchair users. Traditional fountains often go unused due to hygiene concerns and outdated design.

Results and Impact

We designed a UV-sanitized, foot-pedal fountain for Abu Dhabi’s Corniche and delivered it to the UAE Ministry of Urban Planning. Fully accessible and solar-powered, the design inspired plans to install public fountains along the Corniche, offering a scalable solution to eliminate 180,000 plastic bottles annually.

Research Question

Why are Abu Dhabi residents less inclined to use public outdoor drinking fountains?

FINAL DESIGN & REFLECTIONS
Innovative nozzle sanitation system

Fully accessible fountain design

The final design is a fully self-sanitizing, hands-free public drinking fountain created specifically for Abu Dhabi’s Corniche: accessible, sustainable, and grounded in local visual language. With its dual-height sinks, foot-pedal activation, and solar-powered UV sanitation, it offers a safer, more inclusive alternative to bottled water in one of the city’s most visited public spaces.

The design was presented to the UAE Ministry of Urban Planning, where it contributed to early-stage efforts to reintroduce drinking fountains in Abu Dhabi. Our proposal helped shape conversations around sustainable urban hydration and influenced plans to bring more accessible water infrastructure to the Corniche.

I'm incredibly proud of what we achieved, not just as a functional product, but as a rethinking of how public design can serve health, sustainability, and equity all at once.

Massive thanks to Eunseo Bong, Samantha Lau, and Zak Saeed for being thoughtful, talented, and tireless collaborators throughout the process.

Overview

Designed an AI networking application that reduces contact management time by 70%, with 3x follow-up efficiency.

Role

Lead UI/UX Product Designer

Timeline

Ongoing (Prototype completed)

Tools

Figma, Python, BeautifulSoup, Selenium, OpenAI API, ZeroWidth LLM, Lucidchart, Postman

Teammates

Software Engineers ②
AI/ML Engineers ②
Business Strategists ③
Product Manager ①

The Problem

Professionals often collect contacts at events, through LinkedIn, or referrals, but struggle to keep them organized, remember context, and follow up in a timely way. Existing tools store information but don’t support strategic, actionable networking.

Results and Impact

I designed and helped build an AI-powered tool that helps users collect, organize, and follow up with contacts more efficiently, turning scattered information into clear next steps and creating a 70% reduction in manual effort and a 3x increase in follow-up effectiveness.

Opportunities Addressed → Contacty's Design Solutions

• Users meet people through various channels and need one unified way to capture their info → Multi-channel contact capture
• Users often forget context or lack complete info after meeting someone → AI-powered profile enrichment
• Users waste time organizing contacts manually → Automatic tagging & categorization
• Users struggle to find contacts later due to vague memory or poor structure → Smart search (contextual + tag-based)
• Users don’t know who to reach out to or when to follow up → AI-generated follow-up suggestions
• Users feel unsure about how to reinitiate conversations meaningfully → Personalized outreach drafting
• Users forget when and where they met someone and what was discussed → Contact timeline & notes
• Users want networking to be strategic, not random → Goal-based contact prioritization
Design & Testing
Core User Journey

The user journey starts when a contact is captured through a scan, tap, or manual entry and Contacty immediately enriches the profile with relevant context. From there, the user can easily search, tag, and organize connections without any manual sorting. As career goals evolve, Contacty suggests who to follow up with and how, helping users move from collecting contacts to building meaningful, strategic relationships over time.

How Does The System Work?

The system begins when two users initiate a connection via any of the three modes available on the application. That request is sent to AWS, which relays the task to a central database server. The database interfaces with external APIs like LinkedIn, Twitter, and others to fetch relevant contact and contextual information. This enriched data is compiled and returned through AWS in a simplified, digestible format, which is then displayed on the user’s device for seamless, informed networking.
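A simplified sketch of that enrichment step is below. The fetcher is a stand-in for the external-API lookups (LinkedIn, Twitter, and others); authentication, the AWS relay, and the real integrations are omitted, and all names here are illustrative.

```python
def enrich_contact(raw_contact, fetchers):
    """Merge fields from each external source into one digestible profile.
    Data the user captured directly always wins over fetched data."""
    profile = dict(raw_contact)
    for fetch in fetchers:                 # each fetcher returns a partial dict
        for key, value in fetch(raw_contact).items():
            profile.setdefault(key, value)
    return profile

# Hypothetical stand-in for a LinkedIn-style lookup.
def fake_linkedin(contact):
    return {"title": "Product Designer", "company": "Acme Co"}
```

For example, enriching a contact that only has a name and company with `fake_linkedin` would fill in the missing title without overwriting anything the user already entered.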

Testing the Technology

While developing the system, I focused on integrating AI with a custom social media scraping algorithm I developed to surface relevant connections. Using ZeroWidth, I tested how large language models could interpret user goals through prompt engineering, relevance scoring, and AI-generated messaging, creating a smarter, more personalized networking experience.

Mid-Fidelity Wireframes

I started with low-fidelity wireframes to lay out the core flows: capturing a contact, viewing enriched profiles, and receiving AI-driven follow-up suggestions. I focused on minimizing user effort, ensuring that adding a contact took no more than three taps and that smart recommendations felt accessible, not intrusive. These early wireframes helped test navigation logic, screen hierarchy, and how users might search or filter contacts before moving into more detailed visual and interaction design.

Iteration Journey

My interaction and design iteration journey was guided primarily by user testing, in which I observed users completing tasks. By noting moments of hesitation, confusion, and delay, I mapped out where users instinctively look for certain features and which actions they completed most often in the app. With this information, I re-oriented the hierarchy to favor organization over instant contact collection, resulting in a layout that prioritizes organization and strategy tasks.

High-Fidelity wireframes
Home Screen

Easily navigate between the application’s core functionalities through vibrant interactive elements.

Scan ID Camera

Providing NFC, contact-form, and ID OCR technology, Contacty gives you a one-stop shop for however you want to network.

Apple Wallet Integration

Conveniently store your customized ID in your Apple Wallet for easy custom information sharing, supplementing Apple’s existing information-sharing mechanisms with more custom sharing options.

Chat with Contacty AI

Powered by a custom scraping algorithm and ZeroWidth LLM, receive actionable recommendations for achieving your networking goals most efficiently.  

Manage Contacts

Using our "infinite tag" feature, find any contact you have made in the past through notes, emojis, tags, pictures, date, profession or event.

Customize Your ID

Customize both the information shared through your virtual ID as well as what your ID looks like to give you a dynamic experience.

Visual Architecture
Iterations & Insights
User Feedback

We ran usability tests using high-fidelity Figma prototypes with 186 users from our target audience. Each participant was asked to complete tasks such as capturing a new contact, searching for a past connection, and acting on an AI-generated follow-up suggestion. Key insights:

• 100% independent task completion
• 42% less task completion time after UI refinement
• 60% search accuracy improvement through auto-tagging
• 70% less networking management time overall
Reflections

Building Contacty has been one of the most rewarding challenges I’ve taken on. Designing a tool that turns something as messy and human as networking into a clear, strategic process forced me to balance technical feasibility with real-world needs. Working at the intersection of product design, AI, and systems thinking pushed me to grow quickly, and made every iteration feel meaningful.

*Contacty was formerly known as "Linky".

I’m incredibly grateful to have worked alongside an ambitious and thoughtful team. Thank you to Jasmine Meziou, Javier Araiza, Koka Gugunava, Carmen Rodríguez, Facundo Kim, Patrick Jun, and Daniela Guerra for your insight, late nights, and commitment to making Contacty real. Presenting our work at the UC Berkeley Haas School of Business as part of a startup incubator was a full-circle moment, it helped validate what we were building and reminded us of the impact this could have beyond the classroom.

As we continue developing Contacty, we’re focused on:

1. Expanding AI recommendations with broader datasets

2. Integrating with calendar tools

3. Launching a public beta to gather deeper user feedback


Grateful for how far we’ve come, and excited about what’s ahead!

Overview

Designed a replicable low-tech migration-communication model for 300,000+ Tanzania–Burundi refugees, achieving 92% SMS delivery success using geofencing.

Role

Product Designer, System Architect, Field Research Lead

Timeline

August 2022 - December 2024

Tools

KoboToolbox, Garmin eTrex GPS, OpenStreetMap (OSM), QGIS, PostGIS, Twilio SMS API, RapidPro

Teammates

Community volunteers ③
Refugee informants ②
Humanitarian workers ②
Telecom SMS engineers ①

The Problem

Burundian refugees moving between Tanzania and Burundi often travel without maps, internet, or verified information, relying on word-of-mouth while navigating dangerous, unmarked routes. This reflects a broader global migration challenge: the absence of reliable, low-tech systems to communicate real-time safety along human migration paths.

Results and Impact

We built a geofencing-based SMS alert system for Burundian refugees moving between Tanzania and Burundi, achieving 92% delivery success on feature phones. It serves as a potential replicable model for low-tech, migration-focused communication systems globally.

Content Note
Because this project involves cross-border technology deployment between Tanzania and Burundi, detailed disclosures are currently limited. Mapping Nyumbani is undergoing formal regulatory review by relevant government bodies. As a result, this documentation does not include sensitive implementation details such as the names of contributing partners, telecom providers, or full architecture workflows. Specific field locations and humanitarian collaborator identities have also been withheld in accordance with compliance standards. That being said, let's dive in!
Defining the problem
Journey Map
Problem statement
Refugees migrating between Burundi and Tanzania face life-threatening uncertainty, navigating volatile routes with no real-time safety guidance. Dependent on word-of-mouth in fast-changing environments, and cut off from digital tools due to limited connectivity and widespread feature phone use, they lack access to reliable information. Mapping Nyumbani seeks to solve this core problem: how might we deliver accurate, real-time route safety alerts using only the low-tech devices refugees already carry?
Understanding the System
70% of the East African refugees we interviewed own basic feature phones, not smartphones. These phones work over 2G SMS with no internet required. Refugees use them to get urgent information from family or aid groups, often without airtime. Mapping Nyumbani taps into this ubiquitous, low-cost infrastructure to deliver life-saving updates via SMS.
Scoping a Solution
System Flow Diagram
I led the scoping process, translating insights from field research into a technically feasible system. This included selecting the architecture (feature phones, SMS-based geofencing), identifying integration options (Twilio, PostGIS), and validating compatibility with existing mobile networks. I designed the zone logic and built early system flow prototypes in Figma.
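The zone logic can be illustrated with a minimal geofence check: a reported GPS fix is tested against circular danger or support zones, and the matching alert messages are queued for SMS delivery. Zone shapes, radii, and message text are hypothetical; the deployed system uses PostGIS for spatial queries and Twilio/RapidPro for delivery, neither of which is shown here.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def zone_alerts(lat, lon, zones):
    """zones: list of dicts with 'lat', 'lon', 'radius_km', and 'message'.
    Returns the messages for every zone the position falls inside."""
    return [z["message"] for z in zones
            if haversine_km(lat, lon, z["lat"], z["lon"]) <= z["radius_km"]]
```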
Building Collaboratively
Contextualizing Our framework
We invited refugees to co-design features and messaging, but early feedback was superficial. I realized participants lacked full visibility into the system’s purpose and assumptions. In response, we began sharing our own methodology including mapping logic, design rationale, and constraints. This transformed the process: refugees began identifying not just surface-level changes but cultural gaps and deeper misalignments. They proposed workflow changes, rephrased alerts for clarity, and even influenced the structure of message logic. This mutual transparency led to stronger uptake and grounded the system in real community logic.
TESTING
Successful tests on no-internet feature phones entering remote geofenced areas.
Successful tests on smartphones entering geofenced areas with internet access.
We tested 100+ SMS alerts along 15 km routes with varying internet and cell-service consistency, using 10 feature phones and smartphones across multiple network providers and brands. 92% of messages were received within actionable time. These alerts will directly inform refugee movement behavior, helping people avoid danger zones and reach aid more confidently.
Reflections & next steps
Mapping Nyumbani gave me the opportunity to step beyond traditional design execution into strategic leadership and community-led innovation. Leading this project pushed me to consider how system thinking, ethics, and technology intersect in humanitarian contexts. A major turning point was realizing that early-stage co-design lacked meaningful insight, not because of disinterest, but because we had not opened up our own logic and process. Shifting to a more transparent model where we shared our rationale and invited critique created the conditions for real co-ownership. This experience continues to shape my approach to collaborative, community-informed design.

Currently, the project is undergoing cross-border regulatory review by Tanzanian and Burundian authorities. This phase has introduced a new layer of learning: how to ethically and legally manage inter-country data flows, telecom collaboration, and humanitarian system integration.

Once approved, we intend to revisit our framing for replication to include learnings from this process, specifically around regulatory compliance, sovereignty, and local control of technology-driven interventions.


Project Impact:
126,000+ Burundian refugees can benefit
50 alerts tested
92% delivered
30+ danger or support zones mapped
Pilot sparked policy conversations and NGO interest

Next Steps:
Secure telecom/government approval
Develop real-time location mapping and editing
Scale pilot geographically
Publish replication toolkit

Limitations & Ethics:
No personal data stored
Alerts depend on coverage
Risks mitigated via human verification
Clear opt-in

Read more about this project here:
Overview

Boosted AI engagement by 75% and introduced 300+ users to Google’s Chimera Painter at the Mozilla Festival.

Role

AI Education Facilitator – User Research & Engagement Lead

Timeline

March 2021 — April 2021

Tools

Figma, Chimera Painter, Photoshop, Google Slides, Miro

Teammates

ML Engineer (Google AI)
Festival Participants (300+)
Creative Technologists

Context & Opportunity
Machine learning tools like Google AI’s Chimera Painter offer powerful creative potential, but their interfaces and workflows often alienate beginner artists and non-technical users. At Mozilla Festival 2021, we saw an opportunity to bridge this gap and make AI-assisted art more inviting, intuitive, and accessible to a wider creative audience.
Animated GIF
What I Did
I conducted targeted user research with beginner artists and game designers to uncover where the AI tool created confusion or creative block. Based on these insights, I developed 20+ custom sketches and modular sketching guides designed to align with how users naturally think and create. I facilitated live onboarding sessions at Mozilla Festival, gathered real-time feedback, and delivered usability insights back to the development team to support further refinement of the tool.
Outcome
• 75% increase in engagement
• 300+ first-time users onboarded
• 20+ custom sketches created
• 10+ usability insights delivered to the Google AI team
NOTE
Due to confidentiality agreements, I’m unable to share further project details publicly. However, you can find more insights below.
Image Gallery
Using an image imported to Chimera Painter or generated with the tools provided, an artist can iteratively construct or modify a creature outline and use the ML model to generate realistic-looking surface textures. In the examples below, I demonstrate how this is done.
Lessons & Reflections
Working on Chimera Painter in such a compressed timeline, and with such a globally diverse audience, deepened my belief that accessibility in design isn't just about usability; it's about translation. I learned how to rapidly distill complex technologies like AI into visual languages that resonate with beginners, creatives, and non-technical minds.

It pushed me to design not just for clarity, but for curiosity, to invite users into experimentation rather than just guide them through functionality. Collaborating with Google AI and engaging 300+ users from around the world also reminded me how powerful design can be when it connects people across skill levels, languages, and tools, not by simplifying complexity, but by scaffolding exploration.

Most of all, I learned that designing for unfamiliar tools requires deep empathy, procedural creativity, and a readiness to adapt fast. This experience sharpened my ability to design systems that teach as they guide, and to center the user even when the system itself is unfamiliar, invisible, or unpredictable.
Overview

Designed and delivered a hands-free AR shopping experience for Sephora stores in collaboration with Snap Inc. to reduce product returns.

Role

UX & Interaction Designer
• Designed scan-to-add feature, store mapping, and gesture recognition

• Drove user interviews, in-store testing, and iteration

• Partnered on backend product database integration and gesture workflows

Timeline

August 2025 - January 2026

Tools

Figma, Snap Lens Studio, OpenAI APIs, Gemini Live (vision), MongoDB, Notion

Teammates

• Isabella Wang
• Alistar Xiao
• Cody Qiushi Chen
• Edna Ho
• Aarya Harkare
• Katherin Velazquez
(Software Engineers, UX Researchers, Product Managers)

Clients

Snap Inc.
Sephora

The Problem

Retail shoppers struggle to get quick, trustworthy product context while browsing shelves, especially for items with subtle differences (e.g., fragrances, cosmetics). Phone-based tools interrupt the physical experience and require cognitive switching, and existing in-store signage is static and generic. Ultimately, purchase confidence is low and product returns at Sephora stores are high.

Results and Impact

We delivered a wearable AR shopping assistant built on Snapchat Spectacles for Berkeley, CA Sephora stores. Backed by over $30k from Snap Inc., the system brings product information, reviews, and comparisons directly to the shelf using vision, voice, and gestures, helping shoppers make decisions with confidence, without pulling out their phones.

Problem Statement

How might we support confident, in-the-moment purchase decisions in Sephora stores using Snapchat Spectacles, without disrupting the embodied, social, and exploratory nature of shopping?

Research and Grounding
Understanding the User

The project began by visiting Sephora retail stores in Berkeley, CA, observing shopper behavior, and speaking directly with people while they browsed. I focused on moments of hesitation, comparison, and uncertainty.

Key insights:

1. Shoppers often want confirmation rather than deep research.

2. Phones interrupt the browsing rhythm.

3. Wearable interactions must feel lightweight and optional.

These insights pushed the design away from dense overlays and toward glanceable, on‑demand interactions.

Journey Mapping

After identifying Sephora shopper needs and pain points, I worked with fellow experience designers to pinpoint where wearable AR could add value in the in-store journey. We evaluated the affordances of Snapchat Spectacles alongside lessons from AR retail and wayfinding systems such as Amazon Go and Standard Cognition. This phase also included returning to stores to introduce shoppers to the Spectacles and gather feedback on comfort, social perception, and when AR support felt helpful versus intrusive.

I identified that the Spectacles could be a useful intervention if the interactions were grounded in three principles:

1. Keep interactions hands-free and glanceable.

2. Give users control over when information appears.

3. Design for imperfect conditions such as noise, lighting, and shelf clutter.

Client Alignment and Iterative Design
Proposed User Journey

With research insights in place, we moved into alignment and negotiation with Snapchat and Sephora to define what could realistically be built, tested, and deployed within platform and retail constraints. These conversations helped narrow the scope to interactions that were technically feasible, socially acceptable, and valuable to both the platform and the retail context. With the two clients, we developed the user journey seen below.

Testing the Technology

Testing was conducted through repeated in-store demos and short guided trials with shoppers using live Spectacles prototypes. We observed how quickly users understood each interaction, where they hesitated, and when they disengaged or reverted to their phones. Feedback from these sessions directly informed iteration cycles, including simplifying gesture sets, reducing on-screen text, adjusting overlay timing, and refining scan-to-add reliability under varied lighting and shelf conditions. Each iteration was validated back in the store, ensuring changes improved confidence and flow in real shopping environments rather than controlled lab settings.

Reflections & Insights

This project highlighted how much care is required to introduce new technology into everyday, shared spaces. The strongest outcomes came from designing with restraint, allowing AR to support existing shopping behaviors rather than compete with them.

I’m deeply grateful to my teammates on this project for their collaboration across research, design, and engineering, and to our partners at Snapchat and Sephora for their trust, support, and openness to experimentation. We are continuing to iterate on the experience through ongoing user testing, with a focus on improving spatial reliability, refining interaction comfort, and validating longer-term impact on shopping confidence and behavior.