Monday, January 13
11:00 AM - 6:00 PM Registration
12:00 PM - 1:00 PM
Pre-Conference Workshops (Slate A)
Session #111, 12:00-1:00 PM, Monday January 13
International VUX Design Best Practices
Kane Simms (Host and Producer, VUX World)
In this workshop, we walk through how to set yourself up for international success. We cover the foundational elements of successful international voice experiences and dive into design best practices that work across the board.
On every continent, voice is gaining traction. All of the big platforms and players have international strategies. Amazon Alexa is available in 89 countries and in 8 languages; Google Assistant, in 80 countries and 30 languages. As a brand or designer, how can you reach millions of new users and position yourself for success internationally? Join VUX World co-founder Kane Simms to find out.
Session #112, 12:00-1:00 PM, Monday January 13
Synthesizing Natural Sounding Speech for Local Languages
Daniel Whitenack (Data Scientist, SIL International)
What if the voice and/or language you need for your application isn’t available in existing platforms? This workshop will walk you through the process of building your own text-to-speech model for a local language using PyTorch and the latest deep learning architectures. After the workshop, you will have an understanding of how speech is synthesized with neural networks, and you will have practical hands-on experience synthesizing a new voice.
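For a concrete picture of what "synthesizing speech with neural networks" involves, here is a minimal, illustrative PyTorch sketch of the usual moving parts: a character encoder, an attention-based decoder that predicts mel-spectrogram frames, and a separate vocoder step that would turn those frames into audio. The architecture, dimensions, and character inventory are placeholders chosen for brevity, not the workshop's actual model.

```python
# Illustrative sketch only -- assumes PyTorch is installed; sizes and layers are placeholders.
import torch
import torch.nn as nn

class TinyTTS(nn.Module):
    """Character encoder + autoregressive decoder that predicts mel-spectrogram frames."""
    def __init__(self, vocab_size=60, emb_dim=128, hidden=256, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)            # characters -> vectors
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.enc_proj = nn.Linear(2 * hidden, hidden)
        self.decoder = nn.GRU(n_mels, hidden, batch_first=True)   # previous frame -> next state
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.to_mel = nn.Linear(2 * hidden, n_mels)               # state + context -> mel frame
        self.n_mels = n_mels

    def forward(self, char_ids, n_frames=200):
        enc_out, _ = self.encoder(self.embed(char_ids))
        memory = self.enc_proj(enc_out)                          # (batch, chars, hidden)
        frame = torch.zeros(char_ids.size(0), 1, self.n_mels)   # "go" frame
        state, frames = None, []
        for _ in range(n_frames):                                # autoregressive decoding
            dec_out, state = self.decoder(frame, state)
            context, _ = self.attn(dec_out, memory, memory)      # attend over the text
            frame = self.to_mel(torch.cat([dec_out, context], dim=-1))
            frames.append(frame)
        return torch.cat(frames, dim=1)                          # (batch, n_frames, n_mels)

# Untrained example run: a real model would be trained on (text, audio) pairs from the
# target language, and the predicted mel frames would be passed to a vocoder
# (e.g., Griffin-Lim or a neural vocoder) to produce a waveform.
char_ids = torch.randint(0, 60, (1, 32))   # stand-in for an encoded sentence
mel = TinyTTS()(char_ids)
print(mel.shape)                            # torch.Size([1, 200, 80])
```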
Session #113, 12:00-1:00 PM, Monday January 13
User Testing, Without the Users [Part 1]
Jacob Soendergaard and Luis Arango (Account Managers, HEAD Acoustics)
How do you ensure the product you are developing works as intended? And how do you avoid subjecting your users to beta testing? This workshop reveals how rigorous speech recognition testing is accomplished, covering topics such as head and torso simulators, proper equalization of the equipment, and creating a multi-dimensional background-noise sound field for accurate simulation of user scenarios. The workshop also examines the relationship between speech and noise, how speech is affected by background noise and distance, how a device behaves when reverberation is present in the speech, and how to interface with various ASR engines for automated closed-loop testing. Regardless of your intended use case, all scenarios need to be accounted for during product development, before the customer experience begins.
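As a rough illustration of what automated closed-loop testing against an ASR engine can look like, here is a small Python sketch. The noise-field, playback, and recognition calls are hypothetical placeholders standing in for lab-equipment and ASR-engine APIs; only the word-error-rate scoring is concrete.

```python
# Illustrative closed-loop test harness; equipment/engine hooks are hypothetical placeholders.
from typing import Dict, List

def word_error_rate(ref: str, hyp: str) -> float:
    """Word-level Levenshtein distance, normalized by the reference length."""
    r, h = ref.lower().split(), hyp.lower().split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / max(len(r), 1)

# Hypothetical hooks -- replace with your lab-equipment and ASR SDK calls.
def set_noise_field(scenario: str) -> None:
    """Placeholder: configure the background-noise sound field for this scenario."""

def play_through_mouth_simulator(audio_path: str) -> None:
    """Placeholder: play the reference utterance via the head-and-torso simulator."""

def recognize() -> str:
    """Placeholder: return the transcription produced by the device / ASR engine."""
    return ""

def run_test_suite(utterances: List[Dict[str, str]], noise_scenarios: List[str]) -> None:
    for scenario in noise_scenarios:
        set_noise_field(scenario)
        for utt in utterances:
            play_through_mouth_simulator(utt["audio"])
            hyp = recognize()
            wer = word_error_rate(utt["text"], hyp)
            print(f"{scenario:>10s} | WER {wer:.0%} | {utt['text']!r} -> {hyp!r}")

run_test_suite(
    [{"audio": "turn_on_the_lights.wav", "text": "turn on the lights"}],
    ["quiet room", "cafeteria"],
)
```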
1:15 PM - 2:15 PM
Pre-Conference Workshops (Slate B)
Session #121, 1:15-2:15 PM, Monday January 13
Speech Markdown Workshop
Mark Tucker (Senior Architect, Voice Technology, Soar.com)
Get hands-on experience using Speech Markdown and see why it is a better alternative to SSML. Bring your laptop and visit speechmarkdown.org.
To achieve the best conversational experiences possible with voice assistants, you must control how the text-to-speech content is formatted. Most voice platforms support formatting through a subset of Speech Synthesis Markup Language (SSML). Developers and designers have been using SSML since 2004, but in the new voice-first era, it is time for improvements. Speech Markdown is an open-source project that takes the power of SSML and makes it available in a simplified syntax for all content authors. Speech Markdown is simple, progressive, and cross-platform. It is text-to-speech formatting for content authors, designers, and developers that converts to SSML while handling inconsistencies between Amazon Alexa and Google Assistant, with plans to support other voice platforms.
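As a rough illustration of the idea (a compact authoring syntax that expands into SSML), here is a toy Python converter covering just breaks and emphasis. It is not the Speech Markdown library, and the syntax shown is an approximation; see speechmarkdown.org for the real grammar and reference implementations.

```python
# Toy converter for illustration only -- covers two rules, not the real Speech Markdown grammar.
import re

def to_ssml(speech_markdown: str) -> str:
    ssml = speech_markdown
    # [500ms] or [2s]  ->  <break time="500ms"/>
    ssml = re.sub(r"\[(\d+(?:ms|s))\]", r'<break time="\1"/>', ssml)
    # (text)[emphasis:"strong"]  ->  <emphasis level="strong">text</emphasis>
    ssml = re.sub(r'\(([^)]+)\)\[emphasis:"(\w+)"\]',
                  r'<emphasis level="\2">\1</emphasis>', ssml)
    return f"<speak>{ssml}</speak>"

print(to_ssml('Welcome back. [500ms] This is (really)[emphasis:"strong"] easy.'))
# <speak>Welcome back. <break time="500ms"/> This is <emphasis level="strong">really</emphasis> easy.</speak>
```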
Session #123, 1:15-2:15 PM, Monday January 13
User Testing, Without the Users [Part 2]
Jacob Soendergaard and Luis Arango (Account Managers, HEAD Acoustics)
How do you ensure the product you are developing works as intended? And how do you avoid subjecting your users to beta testing? This workshop reveals how rigorous speech recognition testing is accomplished, covering topics such as head and torso simulators, proper equalization of the equipment, and creating a multi-dimensional background-noise sound field for accurate simulation of user scenarios. The workshop also examines the relationship between speech and noise, how speech is affected by background noise and distance, how a device behaves when reverberation is present in the speech, and how to interface with various ASR engines for automated closed-loop testing. Regardless of your intended use case, all scenarios need to be accounted for during product development, before the customer experience begins.
2:30 PM - 5:30 PM
Pre-Conference Workshops (Slate C)
Session #131, 2:30-5:30 PM, Monday January 13
The Google Conversation Design Workshop
Wally Brill (Head of Conversation Design Advocacy & Education, Google)
NOTE: this workshop is limited to 40 attendees.
Conversational, natural language interfaces are emerging as a powerful new way for people to interact with digital services. In order to design a natural user interface, we need to apply a human-centered design approach. Research by Stanford professor Clifford Nass shows that people converse with computers in much the same way as they do with humans: they're most successful when the interface is natural and conversational. There's a process for creating conversations, and this hands-on, three-hour workshop teaches that process. It shows how to design and prototype a conversational experience. Creatives, designers, PMs, developers, writers, marketers, and brand managers all benefit from understanding what goes into a great conversation.
The workshop presents:
* The theory and principles of social interaction
* The personality and dialog of the conversational agent
* The rapid (paper) prototyping of the conversation.
Session #132, 2:30-5:30 PM, Monday January 13
Bixby Developer Sessions: Natural Language Driven Development
John Alioto (Chief Evangelist, Viv Labs); Roger Kibbe (Senior Evangelist, Viv Labs); Jonathan Pan (Evangelist, Viv Labs)
Come code with engineers from Viv Labs and Samsung, and learn to build your first Bixby capsule at this Bixby Developer Workshop. We’ll go through the basics of using Bixby Developer Studio to build a voice experience, learn about the new and innovative Bixby Templates, and have some fun.
Session #133, 2:30-5:30 PM, Monday January 13
Crafting Cross-Platform Voice Experiences
Nick Laidlaw (Chief Technology Officer, Voicify); Ryan Tepperman (Strategist, Verndale)
Development is only a portion of executing a voice experience. In this workshop, participants will work in small teams through a tried-and-true process developed by Voicify, applied to a real-world use case. Teams will collaborate to identify the optimal MVP as well as a long-term maturity strategy within the context of a multi-endpoint business mandate. Participants should expect to leave with a strong sense of how to establish, document, and expand a cross-platform voice experience plan in their own organizations.
Session #134, 2:30-5:30 PM, Monday January 13
Voice in Healthcare Boot Camp 101
Harry P. Pappas (Founder & CEO, Intelligent Health Association); David Box (Director, US Healthcare, Macadamian); Teri Fisher, M.D. (Founder & Host, Voice First Health); Ilana Shalowitz Meir (Voice Design Mentor, Alexa, CareerFoundry)
Commercial voice-activated intelligent assistants from Amazon, Apple, Google, and Samsung, among others, are growing in popularity. As consumers become more accustomed to using voice assistants for search, healthcare services and information will naturally become integrated into that behavior. As a result, voice is poised to revolutionize healthcare for both patients and providers. This boot camp will enable you to develop an understanding of the current opportunities at the intersection of voice technology and healthcare. You’ll come away inspired and with the knowledge to leverage voice technology to improve healthcare for patients and maximize efficiencies for healthcare systems. By the end of this boot camp, you’ll be able to:
- identify the opportunities of voice technology in healthcare settings
- describe the current state of the voice industry with respect to healthcare
- identify examples of voice applications that increase patient engagement and improve outcomes
- describe use cases that are best suited to voice and the benefits of the technology
- identify strategies in voice implementation for healthcare organizations
- understand the initial steps required to build a voice application for healthcare settings