Adobe's New Brainchild Turns Alexa, Cortana, Siri Interactions into Rich Voice Analytics

Let's say your company sells wireless headphones. You've already invested heavily in your e-commerce and mobile shopping experience, but now you start noticing an increasing percentage of new Bluetooth earbud orders coming directly from voice commands via Alexa and Siri. How do you capitalize on that? What if you could aggregate that order data, run it through machine learning (ML) algorithms, and generate predictive analytics on order volumes and consumer behavior? Adobe's newly announced Voice Analytics capabilities give businesses a way to act on exactly that kind of voice data.

Voice-activated virtual assistants are everywhere. Amazon's Alexa, Apple's Siri, Microsoft's Cortana, and Google Assistant can conjure up conversational artificial intelligence (AI) responses at a moment's notice from your smartphone, car, wearable device, or connected speaker with a prompt as simple as "Hey Siri" or "Okay, Google." As more and more consumers turn to voice queries over typed searches, and even buy products with nothing but a voice command, businesses must ask: how do we lean into that shift, and how do we analyze and act on this new vector of voice data?

Adobe is beginning to answer that question. Today, the company announced new Voice Analytics capabilities within Adobe Analytics Cloud, making it the first software company to collect and analyze voice data from all of the major platforms, including Alexa, Cortana, Google Assistant, Siri, and Samsung's Bixby. Colin Morris, Director of Product Management for Adobe Analytics Cloud, explained how the platform ingests large volumes of voice data and then analyzes it with ML from the Adobe Sensei AI engine, a capability the company is branding Adobe Sensei for Voice.

"We're seeing a big shift from a user experience and interaction standpoint from touch to voice. In Mary Meeker's 2017 trends report, she said voice is beginning to replace typing and online queries," said Morris. "Voice query accuracy is higher now, whether it's interacting with an app, your in-car experience, or at home trying to make a purchase on your Amazon Echo. We want to make sure your brand can collect data from those interactions, whether it's Alexa, Google, Siri, or what have you."

"Then, with Adobe Sensei, we take that voice data and start to run behavioral analysis based on what people are saying across different assistants and channels, and cluster that based on what's valuable," Morris continued. "You might get all sorts of new insights based on what people are asking."

What Can You Do With Voice Analytics?

Adobe is a software company with eggs in a number of different baskets. Beyond its flagship creative and design products, such as Adobe Photoshop, Illustrator, and InDesign, the company also has a broad portfolio spanning document management and marketing automation. The latter is where voice analytics fits in. The company's marketing and analytics clouds help businesses run real-time web analytics and manage various forms of digital marketing and advertising campaigns.

Voice data is just like any other kind of data; you just need to know what to do with it to make it actionable. Morris said the business intelligence (BI) gathered from voice interactions could be applied in all sorts of business scenarios.

"We're doing the work of a data scientist and shorting the business time to getting insights on this virtual assistant data," said Morris. "Think about the voice interaction at the top of a customer funnel. Where do they go from there? You can use that data to boost email marketing, campaign personalization, and A/B testing, or bolster existing audience profiles to create more relevant, engaging ads. From a scientific standpoint, all of a sudden you have a totally new medium by which you can understand customer insight."

As for how you take the voice conversations users have with virtual assistants and turn them into data you can use, Morris explained that voice queries break down into two primary components: intent and parameter. Intent is what the user wants—what they ask the assistant to buy, do, or find—and parameter is the brand, location, or any other type of subject the assistant needs to interact with to complete that request.
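To make that breakdown concrete, here is a minimal sketch in Python of how a parsed voice query might be represented. The class, field names, and "BrandX" example are illustrative assumptions, not Adobe's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class VoiceQuery:
    """One parsed voice interaction, split into intent and parameters.

    Illustrative structure only; the names here are assumptions,
    not Adobe's actual data model.
    """
    raw_text: str  # what the user actually said
    intent: str    # what the user wants done
    parameters: dict = field(default_factory=dict)  # subjects the intent acts on

# "Order me wireless earbuds from BrandX" might parse as:
query = VoiceQuery(
    raw_text="order me wireless earbuds from BrandX",
    intent="order_product",
    parameters={"brand": "BrandX", "product": "wireless earbuds"},
)
```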

Image credit: Adobe Analytics Cloud

Once a voice query is broken down into intents and parameters, Morris said, Sensei can analyze it on a deeper level. Businesses can track the frequency of queries to chart user habits, examine the related actions or queries that follow an initial request, and more.
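As a rough illustration of that kind of frequency analysis (a sketch of the general idea, not Adobe's tooling), counting intents per day already yields both a habit chart and the raw daily series an anomaly check can run over:

```python
from collections import Counter
from datetime import date

# Illustrative parsed log of (day, intent) pairs; stand-in data,
# not an Adobe schema.
log = [
    (date(2017, 6, 1), "order_product"),
    (date(2017, 6, 1), "find_store"),
    (date(2017, 6, 2), "order_product"),
]

by_intent = Counter(intent for _, intent in log)         # overall habits
per_day = Counter((day, intent) for day, intent in log)  # daily volume per intent

print(by_intent.most_common(1))  # [('order_product', 2)]
```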

Using an example like the Wynn Las Vegas—which announced it's adding an Amazon Echo to every hotel room—Morris said the hotel could use the data gathered to anticipate guest needs, craft personalized promotions and offers, and plenty more, as part of an enhanced guest experience. As another example, he explained the data gathered behind a simple voice interaction: ordering a pizza.

"So, if a user said 'Hi Google, re-order me a large cheese pizza from Dominos,' re-ordering the pizza is the intent and Dominos is the parameter. Let's say intent spikes on a certain day for pepperoni, orders skyrocket, and that type of pizza sells out everywhere," said Morris. "If you're parsing that data and see the anomaly, Sensei's Virtual Analyst can trend that piece of data in a 90-day window, ping the analyst that something is off, and then take that data and run a conversion analysis. Once you find the correlation between those data points and what caused the spike, you can re-engage the user based on what they bought, and so on."

The key for Adobe, Morris said, is making sure the data gathered from voice assistants is processed into forms that plug right into your other systems. Adobe Sensei for Voice can detect and flag data anomalies, automatically segment different customer groups based on behavioral voice data, and use attribution modeling to map the user's journey from voice query to customer interaction.
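As a sketch of what segmentation on voice behavior could look like downstream, even a simple rule-based bucketing makes the idea concrete. The rules, thresholds, and names below are assumptions for illustration, not Adobe's ML-driven approach:

```python
def segment_user(voice_orders: int, voice_queries: int) -> str:
    """Bucket a user by voice behavior; a rule-based stand-in for
    the ML-driven segmentation described above."""
    if voice_orders >= 3:
        return "voice_purchaser"  # buys via assistant regularly
    if voice_queries >= 10:
        return "voice_browser"    # researches by voice, buys elsewhere
    return "voice_light"

# user_id -> (voice_orders, voice_queries); illustrative data
users = {"u1": (5, 20), "u2": (0, 14), "u3": (1, 2)}
segments = {uid: segment_user(o, q) for uid, (o, q) in users.items()}
print(segments)
# {'u1': 'voice_purchaser', 'u2': 'voice_browser', 'u3': 'voice_light'}
```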

"We don't care where the data comes from. If there is a digital touch point that's growing at scale in the market and you can apply machine learning to get insight, then we want to support the collection and aggregation of that data," said Morris. "We hope Voice Analytics helps democratize and enrich voice data so it's not in a silo, and speaks to the larger customer journey to help brands make better business decisions."
