Sound, Search, and Semantics


Session Description:

It’s not breaking news that voice search is the emerging technology of greatest interest, but how it actually works has yet to be demystified. This session will uncover how the algorithm functions at a structural level by dissecting Google’s Automatic Speech Recognition and deciphering the nuances of the spoken word as they apply to semantic search.

My topic/session will not reiterate and reaffirm things we already know; it will pique curiosity and inspire greater knowledge seeking.

I will be sharing: the Google Voice Search Case Study; Information Architecture for the Web and Beyond

1. Understand the two fundamental parts of Google’s Automatic Speech Recognition (ASR): how sound is processed and how speech modelling is conducted
2. Know the four metrics Google uses to track the quality of the system
3. Apply the tactics given for targeting content by searcher need states to your own strategy

Date: November 8, 2019
