
COPILOT

2025
Project: Voice-based driving assistant (Interface Design), 5 weeks
Team: 1 Designer (Me), 1 Researcher

Copilot is a voice-based driving companion that proactively assists by learning from user behavior and providing context-aware recommendations (think JARVIS from Iron Man).

Copilot hero screens

The Problem

Most assistants today feel clunky, requiring memorized phrases, repetitive confirmations, and separate screens that interrupt focus. Drivers wanted something that understood them, not something they had to teach.

Talking to drivers & racers

We conducted 10 user interviews to uncover drivers' pain points and identify design opportunities.

Drivers found existing assistants slow, rigid, and impractical. Through interviews and surveys, two key frustrations emerged.

“Most in-car voice assistants are too slow and impractical. Something like ChatGPT would be more useful if it learned user behavior.”
Max, the Professional Racer

“The current assistant I use is still not great — I can only see messages through text-to-speech, and sometimes it just can’t answer me.”
Robert, the Avid Driver

Frustration #1

Drivers adapted to the assistant instead of the other way around. Anything off-script broke.

Frustration #2

Responses were “correct” but cold, with poor confirmation timing and little empathy.

Visual design

I designed the logo to balance precision and personality, using a stylized steering wheel.

The neuron at the centre of the wheel symbolizes the AI's intelligence and smart decision making.

Copilot logo and type style guide

I picked two colour palettes to keep the interface clear and readable in different environments. Blue is the main accent because it signals trust in automotive contexts and stays readable against light surfaces. The grey tones in both palettes support hierarchy without pulling attention from key data.

Copilot light and dark mode colour palettes and screens

Designing

I built a clickable prototype in ProtoPie to validate flows before wiring live speech. For each conversation hypothesis I tracked a trigger, an expected reply, a fallback, and a repair strategy.
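
As a rough illustration of how each hypothesis was logged (the shape and field names below are my own, not part of ProtoPie):

```typescript
// Hypothetical shape for tracking a conversation hypothesis while prototyping.
// Field names are illustrative; ProtoPie itself has no notion of these.
interface ConversationHypothesis {
  trigger: string;        // what the driver says, e.g. "play my driving playlist"
  expectedReply: string;  // what Miles should answer when everything goes right
  fallback: string;       // reply when the intent is recognized but data is missing
  repairStrategy: string; // how Miles recovers when the utterance isn't understood
}

const mediaHypothesis: ConversationHypothesis = {
  trigger: "Play something for the drive",
  expectedReply: "Starting your usual driving playlist.",
  fallback: "I couldn't find a saved playlist. Want me to pick something upbeat?",
  repairStrategy: "Ask the driver to rephrase, keeping the mic open for five seconds.",
};
```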

User Testing

We conducted 10 user tests with the initial prototype to evaluate the concept and learn what not to do.

“Even the most modern AI systems struggle with understanding natural speech, and this app is no exception.”
— User B

“It’s annoying that when you’re on the main page (Miles), you can see live subtitles of what you’re saying, but when you switch pages, those disappear. If the app doesn’t respond, you don’t know if it’s because it didn’t hear you or if it’s stuck.”
— User G

User Testing Insights

Early versions of Copilot lacked clarity. Users didn’t realize the app was built around widgets, making voice interactions feel isolated instead of connected. A clearer introduction was needed to explain how everything worked together.

The assistant, Miles, also felt too far removed. Sending users to a separate page broke the flow. So we brought him home, integrating voice feedback directly into the main screen, with persistent subtitles to show he’s always listening.

And not everything spoke the user’s language. Terms like “brake score” lacked context and left some drivers unsure what they meant. We refined our wording and added explanations to make insights feel intuitive, not intimidating.

The Handoff

Despite it being only a prototype, I wanted to emulate real-time, adaptive responses and have Miles interpret user input, context, and preferences to deliver natural, flowing conversation.

Using ProtoPie Connect, I parsed transcripts into structured messages, sent them through the language model, and returned personalized dialogue on screen, giving Miles a smarter brain.

Miles’ OpenAI API integration
ProtoPie Connect setup and Miles' OpenAI JSON prompt
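
A minimal sketch of that bridge, assuming ProtoPie Connect's default local socket bridge (port 9981, "ppMessage" events); the message IDs "transcript" and "milesReply", and the model name, are placeholders rather than the exact values from the project:

```typescript
// Bridge sketch: relay transcripts from ProtoPie Connect to OpenAI and send replies back.
import { io } from "socket.io-client";

const socket = io("http://localhost:9981"); // ProtoPie Connect's default local address

socket.on("ppMessage", async (msg: { messageId: string; value: string }) => {
  if (msg.messageId !== "transcript") return;

  // Forward the driver's utterance to the OpenAI chat completions endpoint.
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: "You are Miles, a concise in-car driving companion." },
        { role: "user", content: msg.value },
      ],
    }),
  });

  const data = await res.json();
  const reply = data.choices?.[0]?.message?.content ?? "Sorry, I didn't catch that.";

  // Send the generated dialogue back to the prototype to display as subtitles.
  socket.emit("ppMessage", { messageId: "milesReply", value: reply });
});
```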

After 5 weeks, my handoff included:

3 user flows, 10 wireframes, and 10 mockups of the design system in Figma.

A fully functioning conversational prototype in ProtoPie.

An integrated OpenAI API setup for real-time dialogue testing.

What I learned

Learning to code unlocks richer implementations.

Diving into prototyping with JS/Web APIs clarified which parts of the conversation needed deterministic rules vs. model reasoning. Bridging UX and code made the system feel more coherent, and getting the logic working myself made the result more impactful.
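
A simplified sketch of that split, with made-up command patterns (the real prototype's rules lived in ProtoPie logic rather than code like this):

```typescript
// Deterministic-vs-model split: predictable commands get hard-coded handling,
// everything else is deferred to the language model. Patterns are illustrative.
const rules: Array<{ pattern: RegExp; reply: string }> = [
  { pattern: /\b(navigate|directions) to\b/i, reply: "Starting navigation." },
  { pattern: /\bcall\b/i, reply: "Placing the call." },
];

async function route(utterance: string, askModel: (text: string) => Promise<string>) {
  const rule = rules.find((r) => r.pattern.test(utterance));
  // Deterministic path: time-critical commands never wait on the model.
  if (rule) return rule.reply;
  // Open-ended requests fall through to model reasoning.
  return askModel(utterance);
}
```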

Finding personality in existing AI.

Throughout my education, AI was often framed as just another route to academic dishonesty, yet given the world's trajectory it is one of the most impactful things to know and understand. Learning to leverage AI should be promoted in education, because it can genuinely enhance the UX process.