Dylan Nathan · Principal Product Designer (XR / Spatial)




HTC VIVE Eagle — Human–AI Interaction for Wearable Computing






Role: Principal Designer — AI Interaction & Experience Design
Focus: Product strategy, prompt architecture, multimodal interaction (voice, vision, sound)

As Principal Designer for HTC’s VIVE Eagle AI glasses, I defined the prompts and interaction model for the device, shaping how users engage with the AI assistant through voice, vision, and sound. The work centered on human–AI interaction in a wearable, real-world context: designing system behavior, multimodal interactions, and conversational responses so the assistant operates naturally in everyday environments.

I designed the system’s conversational AI interaction framework, authoring the prompt architectures that power features such as chat, multilingual translation, contextual search, meeting summaries, accessibility tools, and camera-based visual queries. This included defining the assistant’s personality, response tone, and conversational structure. Prompt architectures were developed for Gemini and then adapted for other LLM platforms, including ChatGPT, DeepSeek, and Mistral, to ensure consistent behavior across models.
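To make the cross-model approach concrete, here is a minimal sketch of how a shared prompt architecture with per-backend adapters might be organized. All names, prompt text, and parameter values below are hypothetical illustrations, not HTC’s actual implementation:

```python
# Illustrative sketch: one shared persona and feature-prompt set, with small
# per-model adapters, so tone and behavior stay consistent across backends.

BASE_PERSONA = (
    "You are a hands-free assistant on AI glasses. "
    "Be concise, speakable, and context-aware; answers are read aloud."
)

FEATURE_PROMPTS = {
    "translate": "Translate the user's speech into {target_lang}. Return only the translation.",
    "visual_query": "Describe what the camera sees, focusing on what the user asked about.",
    "meeting_summary": "Summarize the transcript in three short, spoken-friendly points.",
}

# Backend-specific quirks live in one place instead of forked prompt copies.
MODEL_ADAPTERS = {
    "gemini": {"system_key": "system_instruction", "max_words": 60},
    "chatgpt": {"system_key": "system", "max_words": 60},
    "mistral": {"system_key": "system", "max_words": 45},
}

def build_prompt(model: str, feature: str, **slots) -> dict:
    """Assemble a backend-specific request from the shared architecture."""
    adapter = MODEL_ADAPTERS[model]
    system = f"{BASE_PERSONA} Keep replies under {adapter['max_words']} words."
    return {
        adapter["system_key"]: system,
        "user": FEATURE_PROMPTS[feature].format(**slots),
    }
```

The design intent this illustrates: the assistant’s personality is authored once, while per-model differences (field names, length budgets) are isolated in adapters, so a tone revision propagates to every backend.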

I collaborated closely with UX, engineering, and QA teams to refine the interaction model for the glasses, including wake word behavior, capture flows, and query responses. Through extensive QA testing and iterative prompt revisions, we improved reliability, handled edge cases, and tuned responses for clarity and usefulness in real-world scenarios.

In parallel, I designed the sonic identity and sound UX of the device, creating the complete system sound language for VIVE Eagle. This included interaction feedback, AI response cues, and notification sounds that communicate system state. I worked directly with hardware and audio engineers to tune the device’s speakers using EQ, compression, and psychoacoustic bass enhancement, so sounds remain clear above ambient noise without interfering with the user’s voice.
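The psychoacoustic bass enhancement mentioned above relies on the “missing fundamental” effect: small drivers cannot reproduce low frequencies, but the ear infers them from their harmonics. A minimal sketch of the idea follows; the filter, drive, and mix values are simplified assumptions for illustration, not the shipped DSP chain:

```python
import math

def lowpass(samples, alpha=0.1):
    """One-pole low-pass to roughly isolate the bass band (illustrative only)."""
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def enhance_bass(samples, drive=3.0, mix=0.6):
    """Psychoacoustic bass sketch: soft-clip the isolated low band to generate
    harmonics, then mix them back into the signal. The harmonics sit in a
    range the small speakers can reproduce, and the ear supplies the
    fundamental."""
    low = lowpass(samples)
    return [
        x + mix * math.tanh(drive * l)  # tanh waveshaping adds harmonics
        for x, l in zip(samples, low)
    ]
```

A production chain would also high-pass the original signal, band-limit the harmonic generator, and apply compression and EQ; this sketch shows only the harmonic-generation core.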

Key Contributions

• End-to-end AI experience design for wearable computing
• Prompt architecture for conversational and vision-based AI features
• Multimodal interaction design (voice, vision, sound feedback)
• Assistant personality and conversational tone design
• Cross-model AI integration (Gemini, ChatGPT, DeepSeek, Mistral)
• System sound design and sonic UX for wearable interaction feedback
• Hardware + software collaboration on speaker tuning and AI interaction delivery
• Iterative UX refinement through QA testing and edge-case resolution



© Dylan Nathan