Conversational Interface

The mundane component that is the alert view: not much to it at first glance, but alerts appear at critical moments, and small details become important for such a fleeting part of the experience. I found it interesting that this is the moment where the user is actually being addressed by the system. A conversation, however short, is happening.


Android Alert Dialog

iOS Alert View

What I learned from alert views

  • Be brief! It seems like a no-brainer, but lately we’ve seen a trend toward more “human” language within alerts and other areas. The issue is that spoken language is fairly verbose compared to written language. You can be direct and still feel human, and it’s important to keep this in mind as conversation starts to take over the user experience.

  • Avoid using “Yes” or “No.” They seem like the most basic response options, but they actually create more complication than an “OK/Cancel.” Think back to a time when you were at a restaurant and the waiter asked if you needed anything else: “No, yeah, I’m all set,” or “Yeah, no, I’m satisfied.” Sound familiar? We humans can be confusing (see the sketch after this list).

  • Be objective about what you’re telling the user. Avoid “Are you sure?” situations; they undermine the user’s confidence and slow them down.

  • The smallest details have the largest impact.
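
The “OK/Cancel” point translates directly into code. Below is a minimal Kotlin sketch using Android’s standard AlertDialog.Builder; the dialog copy and the confirmDiscardOrder name are hypothetical, purely to illustrate stating the consequence objectively and letting the buttons confirm or cancel the action.

```kotlin
import android.content.Context
import androidx.appcompat.app.AlertDialog

// Hypothetical example: the message states the consequence objectively,
// and the buttons confirm or cancel the action instead of answering a
// yes/no question.
fun confirmDiscardOrder(context: Context, onDiscard: () -> Unit) {
    AlertDialog.Builder(context)
        .setTitle("Discard order?")
        .setMessage("Your unsaved order will be lost.")  // brief and objective
        .setPositiveButton("OK") { _, _ -> onDiscard() } // confirms the action
        .setNegativeButton("Cancel", null)               // backs out safely
        .show()
}
```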


Conversation beyond Alert Views

Chatbots, voice, and virtual assistants became an extension of these alert views: a place where the conversation can be more than “yes/no” or “OK/cancel” responses, and where the user can speak to and prompt the system. We started offering voice navigation within our Android experience. It was limited: you could get a quote, check your portfolio balance, or open a trade ticket, and the voice assistant would take you to the new screen and read its data aloud.
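
As a rough illustration of how closed that intent set was, the matching logic can be pictured as a simple dispatcher: a handful of intents, each tied to a screen, and an explicit fallback for everything else. This is a hypothetical Kotlin sketch, not the shipped implementation.

```kotlin
// Hypothetical sketch of a small, closed intent set. Each recognized
// utterance maps to one intent; anything else falls through so the
// assistant can say what it does support instead of guessing.
enum class VoiceIntent { GET_QUOTE, PORTFOLIO_BALANCE, OPEN_TRADE_TICKET }

fun matchIntent(utterance: String): VoiceIntent? {
    val text = utterance.lowercase()
    return when {
        "quote" in text -> VoiceIntent.GET_QUOTE
        "balance" in text || "portfolio" in text -> VoiceIntent.PORTFOLIO_BALANCE
        "trade" in text -> VoiceIntent.OPEN_TRADE_TICKET
        else -> null // unsupported: set expectations rather than mis-fire
    }
}
```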

Over time, the assistant came to support more and more intents and utterances, but at release there were not many, and it was important to make that clear to the user so as not to set the wrong expectations about the experience.

First iteration of our Android Voice Assistant. Active state.

First iteration of our Android Voice Assistant. “Recording” state.

First iteration of our Android Voice Assistant. Speaking state.


Alexa - Pairing sound with vision

I took the lead on the work done for the Echo Show skill. What I found most exciting about this project was the effort put toward finding harmony between Alexa’s voice and the information shown on the screen. You didn’t want the two to offer the same information, but it was difficult to find the balance and to define which was the primary carrier of the information. The split did, however, help us avoid the need for on-screen timestamps (an area where we leaned on the Alexa voice).
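
A schematic way to picture the split we settled on (this is not the ASK SDK, just a hypothetical Kotlin sketch with illustrative names and values): the voice carries the summary and the recency, while the screen carries the dense figures that the voice deliberately skips.

```kotlin
// Hypothetical sketch of the voice/screen split on the Echo Show.
// The voice carries the summary and the timestamp; the screen carries
// the detailed figures. All names and values are illustrative.
data class ShowResponse(val speech: String, val screenBody: String)

fun marketUpdateResponse(): ShowResponse = ShowResponse(
    // Voice owns recency, so the screen needs no timestamp.
    speech = "As of 4 p.m. Eastern, the major indexes closed slightly higher.",
    // Screen owns the dense figures the voice doesn't repeat.
    screenBody = """
        S&P 500    +0.3%
        Dow        +0.2%
        Nasdaq     +0.4%
    """.trimIndent()
)
```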


Alexa - Tweaking the sound

As a passion project, I was interested in working with the Echo skill. The market update as it stood seemed pretty dry, and I was interested in adding some life to it. It reminded me of the rush-hour traffic reports you hear on the radio, which always had these interesting backing tracks, so I put together an audio track to test that out.
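
Alexa’s SSML can’t mix a backing track underneath live text-to-speech, so the approach I’m assuming here is the usual one: pre-mix the narration over the music and play the finished clip through SSML’s audio tag. The URL and function name below are placeholders.

```kotlin
// Hypothetical sketch: play a pre-mixed clip (narration over a backing
// track) via SSML's <audio> tag, with a short spoken sign-off after it.
// clipUrl is a placeholder; Alexa requires hosted, correctly encoded MP3s.
fun marketUpdateSsml(clipUrl: String): String = """
    <speak>
        <audio src="$clipUrl"/>
        That's your market update.
    </speak>
""".trimIndent()
```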


Thanks to the Mobile Design Team

Andy Flinders | Evan Gerber | Sam Hong | Damon Jones | An Kang | Jonathan Kardos | Chris Lackey | Dan Murphy | Julia Paranay | Elizabeth Ryan | Marcy Regalado