Wednesday, 30 May 2018

New Research Shows Which Digital Assistants Actually Know Stuff


According to a report from Edison Research, more than 51 million Americans now own a “smart speaker” like the Amazon Echo or Google Home. These voice-activated devices are being adopted faster than smartphones were a decade ago. And speaking of smartphones, they have digital assistants onboard too, including Apple’s Siri, Google Assistant, and Microsoft’s Cortana (also accessible on Xbox and other devices).

All told, we are positively SURROUNDED by digital assistants, each practically begging to help us learn and be more productive. But can they actually accomplish that, consistently?

Digital assistants are as useful as a James Harden beard trimmer if they can’t answer the questions we want answered, right? This is why I am shocked, awed, and in love with the second-annual study from Stone Temple that exhaustively tested which digital assistants are best at answering questions.

I recently interviewed Stone Temple CEO Eric Enge about how the study was conducted and what the team learned. My full interview is below, and it’s worth the watch. Highlights follow.

How to Test Digital Assistants

Turns out, there are no shortcuts for figuring out which digital assistant can actually assist. Eric’s team at Stone Temple methodically asked 4,942 questions of Alexa, Siri, Google Assistant on a phone, Google Assistant on the Google Home, and Microsoft’s Cortana running on the Harman Kardon Invoke speaker. Yes, they asked 24,710 separate queries! This took a LOT of labor.

For each question, the team noted whether the response was accurate or inaccurate. They also noted when the assistant didn’t understand the inquiry, and whether the response was “verbal” from the device, pulled from a database, or sourced from the web.
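To make that cataloging concrete, here is a minimal sketch in Python of one way to record each response. The field names and categories are my own illustration of the kind of data described above, not Stone Temple’s actual tooling.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssistantResponse:
    """One logged answer from one assistant to one spoken question."""
    assistant: str            # e.g. "Google Assistant (phone)", "Alexa", "Siri"
    question: str             # the query as spoken to the device
    understood: bool          # did the assistant appear to understand the question?
    attempted: bool           # did it actually try to answer?
    correct: Optional[bool]   # None if no answer was attempted
    source: Optional[str]     # e.g. "device", "database", "web"

# Example log entry (hypothetical question and result):
entry = AssistantResponse(
    assistant="Alexa",
    question="Who wrote the novel Moby-Dick?",
    understood=True,
    attempted=True,
    correct=True,
    source="database",
)
```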

Which Is the Best Digital Assistant?

According to the research, the best performer in 2018 is Google Assistant on a smartphone. This may not be a huge shock, given Google has access to an unfathomable array of information and routinely handles billions of user queries. This digital assistant attempted to answer almost 80 percent of all questions, meaning there were very few of the frustrating “I don’t understand what you mean” replies.

And, among questions answered, Google’s accuracy rate exceeded 90 percent.

In comparison, Cortana attempted to answer slightly more than 60 percent of the questions, with Alexa at slightly more than half, and Siri at just over 40 percent.

When the assistants offered answers, accuracy rates were more closely grouped together. Google Assistant on a smartphone is best at more than 95 percent, but Google Assistant on the Home and Microsoft’s Cortana are right there as well. Alexa tops 80 percent, and even Siri gets it right 80 percent of the time (when it actually has an answer at all).
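It helps to keep “how often an assistant tries to answer” separate from “how often it is right when it tries.” A quick sketch of that arithmetic, using rounded, illustrative figures rather than the study’s exact counts:

```python
def overall_correct_rate(attempt_rate: float, accuracy_when_attempted: float) -> float:
    """Share of ALL questions answered correctly.

    attempt_rate: fraction of questions the assistant tried to answer
    accuracy_when_attempted: fraction of attempted answers that were correct
    """
    return attempt_rate * accuracy_when_attempted

# Rounded, illustrative numbers in the spirit of the study (not exact figures):
print(overall_correct_rate(0.80, 0.95))  # ~0.76 -> roughly 3 in 4 of all questions right
print(overall_correct_rate(0.40, 0.80))  # ~0.32 -> roughly 1 in 3 of all questions right
```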

[Chart from the Stone Temple study: which digital assistants know something]

Sometimes, the answers provided by the digital assistants are flat-out wrong. This is most likely to occur with Alexa and Siri. Each had more than 160 incorrect answers, compared to fewer than 40 for Google and Microsoft. Note, however, that Google and Microsoft own enormous search engines, which likely helps their data matching considerably.


We Ask Digital Assistants Pretty Dumb Stuff (Today)

Today, in these early days, the questions we ask our digital assistants are fairly basic and banal. (That was NOT the case in Stone Temple’s test, as many of the nearly 5,000 questions are tricky.) Most of us primarily use these devices to check the weather, learn sports scores, retrieve general knowledge, or set timers.

In our conversation, Eric and I discussed this situation, and we believe it to be temporary—a snapshot in time. As humans get more comfortable with voice-activated queries and replies, our use of these digital assistants will become more nuanced and complex.

To my eye, this mirrors the early days of search engines, when people typically typed very short search strings into Lycos, et al. As comfort with online search grew, and the quality of the results improved, we began using longer and longer queries.

Over time, these digital assistants will improve, and our use of them will correspondingly become more comprehensive.

[Chart from the Stone Temple study: Google Assistant 90 percent accurate]

Voice Is a Huge Content Marketing Opportunity

In addition to the digital assistants study, Eric and his team have also created “skills” for Alexa and Google Assistant that let you ask those assistants questions about search engine optimization and get answers served up from Stone Temple. On Alexa, there’s even an SEO quiz you can take instantly. Brilliant!
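To give a rough sense of how a question-and-answer voice skill like that can be wired up, here is a minimal sketch of an Alexa custom skill handler in Python. The slot name, the answer lookup, and the sample questions are hypothetical illustrations, not Stone Temple’s actual skill code.

```python
# Hypothetical mini knowledge base of SEO answers (illustration only).
SEO_ANSWERS = {
    "what is a 301 redirect": (
        "A 301 redirect is a permanent redirect that points an old URL at a new one."
    ),
    "how do you implement a nofollow tag": (
        'Add rel="nofollow" to the link\'s anchor tag.'
    ),
}


def build_response(text: str) -> dict:
    """Wrap plain text in the Alexa Skills Kit JSON response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }


def lambda_handler(event: dict, context=None) -> dict:
    """Entry point for an AWS Lambda-backed custom skill (sketch only)."""
    request = event.get("request", {})
    if request.get("type") == "IntentRequest":
        # The "Question" slot name is an assumption made for this illustration.
        slots = request.get("intent", {}).get("slots", {})
        question = slots.get("Question", {}).get("value", "").lower().rstrip("?")
        answer = SEO_ANSWERS.get(question, "I don't have an answer for that yet.")
        return build_response(answer)
    # LaunchRequest (or anything else): prompt the user for a question.
    return build_response("Ask me an SEO question, like: what is a 301 redirect?")
```

In a real skill, the interaction model (intents, slots, and sample utterances) is defined separately in the Alexa developer console, and the answers would come from a proper content store rather than a hard-coded dictionary.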

Eric reports that the company is getting visibility and usage from this voice-activated advice. He said:

“On the Google Assistant, they have a mode called implicit queries, and if you check the box when you set up your device that you want that, somebody can ask Google a question without invoking our specific actions. They might just say, ‘How do you implement a no-follow tag?’ Google might come back and say, ‘Stone Temple has an answer for that, do you want to hear it?'”

To date, Eric says more than 1,000 people have interacted with the Stone Temple SEO advice via implicit queries on Google Assistant.
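On the Google Assistant side, actions like this are commonly built with Dialogflow, and the invocation-free discovery Eric describes is a configuration choice on the action rather than code. What the answer-serving webhook itself might look like is sketched below; the Flask framework choice, endpoint path, and hard-coded answers are my own assumptions for illustration, not Stone Temple’s implementation.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Same kind of hypothetical SEO answer lookup as above (illustration only).
SEO_ANSWERS = {
    "nofollow": 'Add rel="nofollow" to the link\'s anchor tag.',
    "301 redirect": "A 301 redirect permanently points an old URL at a new one.",
}


@app.route("/dialogflow-webhook", methods=["POST"])  # endpoint path is arbitrary
def fulfill():
    """Dialogflow ES-style fulfillment: read the spoken query, return an answer."""
    body = request.get_json(force=True)
    query = body.get("queryResult", {}).get("queryText", "").lower()
    answer = next(
        (text for topic, text in SEO_ANSWERS.items() if topic in query),
        "I don't have an answer for that SEO question yet.",
    )
    # "fulfillmentText" is what the Assistant reads aloud for a simple response.
    return jsonify({"fulfillmentText": answer})


if __name__ == "__main__":
    app.run(port=8080)
```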

Impact of Digital Assistant Data on Traditional SEO Rankings

I’m fascinated by Eric’s foray into voice-activated SEO advice and want to work on some of my own. “Alexa: Ask Jay Baer about tequila”!

Given that Google and Microsoft have major stakes in the digital assistant battle, I wondered whether being a “source” of information for those devices—as Stone Temple is for SEO information—might “bleed over” and positively influence search rankings on Google and Bing. I asked Eric about that, and he replied:

“No evidence of benefits to date, and I think it’s too early for that to have happened at this point. But it definitely isn’t going to hurt, and if you’re delivering reliable information, and people are asking for you to give them answers, that’s a topic authority signal that search engines could mine.”

Grab a copy of Stone Temple’s digital personal assistants study, and start thinking about your own foray into voice-activated knowledge. And if you can, take a few minutes to watch my interview with Eric above, or read the transcript below. Good stuff in there.

Transcript

Jay Baer: Hey guys, it’s Jay Baer from Convince & Convert, and joined today by my friend Eric Enge who’s the CEO of Stone Temple Consulting, an unbelievably effective and famous SEO content organization. Eric, it’s great to talk to you. You and your team put together this new report recently that . . . it’s stunning to me that you even did this. I know it’s the second year you’ve done it, but I was still shocked. It’s called Rating the Smarts of the Digital Personal Assistants in 2018. You go through and figure out what’s the most accurate and actually useful version of Alexa, Siri, Google Home, and Microsoft’s Cortana. I still can’t believe this. You sort of lined up the devices and asked them a bunch of questions. Thanks so much for talking about this. How did this come together?

 

Eric Enge: Great question. First of all, thanks for having me, Jay. Thrilled to be doing this together with you and talking about this; we always have great fun chatting. We have a set of 5,000 questions that we developed, and that set of 5,000 questions is about informational topics, drawn from things that we happen to know Google provides featured snippets for, or that they might be likely to provide featured snippets for. Well, to correct that, these are questions we thought there was some possibility they might. That’s how those questions originally came together.

 

Jay Baer: But the array of questions is pretty broad. I mean, there are a lot of different types of questions, and intentionally so.

 

Eric Enge: Yes, it is intentionally so. It is meant to be a broad range across any manner of different topics, from history, to recipes, to . . . I don’t know how something is spelled, or kind of all over the map, really. Broad by intent, because we wanted to test a wide range of capabilities. Then what we did is we literally asked, using a human voice, these 5,000 different queries of each device. We did it for Google Assistant running on a smartphone, Google Assistant running on Google Home, Alexa running on the Amazon Echo, Cortana running on the Harman Kardon Invoke speaker, and then Siri running on an iPhone. That’s 25,000 questions that were manually asked. We took this set of questions and we did all this cataloging: Did you get a verbal response from the device or the personal assistant? Did the response indicate that the device thought it understood the question and therefore tried to answer it? If it did, did it answer the question correctly? If it got it wrong, what kind of wrong answer was it? It was an extensive amount of work and analysis, done on a query-by-query basis.

 

Jay Baer: I tell you what, I think you told me you had 10 people working on this, just asking questions and logging the responses. That is a tremendous amount of human capital put into this project.

 

Eric Enge: Yes, absolutely. I mean, for me, I’m an intensely curious person. I want to know the answers to questions like this. It turns out a lot of other people wanted to know the answers to these questions as well, because we’ve gotten a lot of visibility out of the study. And because we did it last year and did it again this year, we kind of have an index now that measures how these things are progressing.

 

Jay Baer: Yes, that was the fascinating thing. I think the conclusion this year is that Google is sort of the “best,” and obviously that’s circumstantial and things like that, but if you had to pick one, Google probably performs the best today. At one point Siri was maybe better, and now it’s not as good as it was. It’s not a static condition. That was the most interesting thing looking at last year’s report versus this year’s report: there really is quite a lot of variance year to year, which means some of these things are learning and getting better, as the name “machine learning” would make you think, but others are perhaps getting worse, and I’m not quite sure how that happens.

 

Eric Enge: Well, I don’t think anything actually got worse per se. In fact, the personal assistant that made the most progress was Alexa, so they made huge strides toward expanding the number of questions that they responded to and their overall accuracy. Cortana has expanded a lot and actually took a pretty good step forward as well, both in terms of the number of questions answered and the accuracy in answering them.

 

Siri used to be the leader, but they were the first out and that’s a few years back now. They just kind of didn’t push it the same way everybody else has. As for how something gets worse, I’ll give you an example. Alexa’s accuracy rate was actually down a bit from last year, but on the other hand, they were answering far more questions. The total number of questions . . .

 

Jay Baer: It almost stands to reason that your accuracy would go down a little bit.

 

Eric Enge: Yes that’s exactly how you might see a drop and that, in fact, happened with Alexa.

 

Jay Baer: Do you feel like there is a real advantage for Alexa because it has so much market share in the smart speaker category? And certainly Google has so many more installed Android devices, and even people who are not using Android are using Google search or Google Maps on their iPhone, and as we know, some 40% of local searches now are driven by voice search. Do you feel like those data points are helping them get better, that they’re ingesting more queries and therefore can build out better AI?

 

Eric Enge: Yes, I think there definitely is an advantage being able to leverage crawling the web. You get so much data available to you, but what comes with that is when you’re crawling websites, just because it’s published on the internet doesn’t mean-

 

Jay Baer: Garbage in, garbage out.

 

Eric Enge: Right, so you have to qualify that somehow and that’s a tough challenge. Google’s been working on that for years as we’ve also documented in some other studies that we do. Amazon is doing something and I can’t say what it is because I don’t know, but they’re clearly getting access to more information than just Wikipedia. You can see that based on the questions they’re answering today.

 

Jay Baer: Yes, it’s pretty interesting. If you had to buy a personal assistant for somebody as a Mother’s Day gift or something and you’re like, “All right, I can only buy one of these,” which one would you buy? Which one would you tell somebody to purchase?

 

Eric Enge: Well if I’m going to base it on how smart it is in answering questions, Google Assistant still does have the lead. On the other hand, I have both multiple Alexa units and multiple Google Home units at home and we use them for home control, so controlling lights and thermostats and stuff like that. Alexa is better at that, so the real nuance . . .

 

Jay Baer: Better recipes for now, a little bit of a head start on that side of it too.

 

Eric Enge: Yes exactly, so I think it depends on what you’re using it for. If you’re looking for home control I’d go with Alexa. If you’re looking for the raw intelligence, which is what our study focused on, then yes Google Assistant is still there.

 

Jay Baer: One of the things you have in the study (and again, it’s called Rating the Smarts of the Digital Personal Assistants in 2018; you can get it on the Stone Temple website, stonetemple.com) is a list of the question sets, not necessarily the ones you asked in the study, although you mention that as well, but what people ask of these assistants generally speaking. It shows that a lot of the questions today are somewhat banal. It’s “what’s the weather going to be tomorrow,” though I’m certainly guilty of that. I use my Alexa for that all the time, even though I’ve got multiple other ways to determine the weather tomorrow; it’s just easier. Do you feel like over time, as humans become more comfortable with this technology, perhaps more trusting of it, the types of questions we ask will change?

 

Eric Enge: I do, so we’re at very early stages and frankly for this whole space there is kind of a big thing that’s being sorted out right now, which is people getting comfortable speaking to devices, and those devices being able to have real conversations with people because people don’t always use the formulaic phrases that the device is expecting. This is a tricky process, getting that human-machine interaction to work.

 

Jay Baer: Right, because at some point it is our error, because we don’t phrase the question well. In fact, I probably shouldn’t record this, but my wife and I are always fighting about Alexa, because I’ve been in digital marketing and search for so long that I know how to phrase a question in a way that gives me a better chance of it getting returned. She doesn’t typically phrase it that way, and then she gets super frustrated. “This stupid Alexa doesn’t know anything,” and I’m like, “Well, but if you said it this way.” She’s like, “I don’t want to say it that way. I don’t want to have to change the way I speak because of some relational database.” So it’s sort of like, whose fault is it? Is it stupid, or is it us?

 

Eric Enge: No, it’s absolutely the case, and it is impacting how broad the usage of these things is. There’s no question that it’s having that impact. The whole thing about going to voice is: okay, we had decades where we learned to type the thing into Google using fewer words to have a better chance of getting what we want, and we all got trained to do that. When we’re using voice, we don’t want to have to do that, but maybe we will get trained to a certain degree, and maybe they’ll get better, and maybe both will happen and we’ll meet in the middle somewhere.

 

I really do think that’s very much going to happen. You just have to get to the big vision of this thing, and the big vision is that we’re already at a point where something like 75% of the world’s internet-connected devices are something other than a smartphone, PC, or tablet. That’s an incredible number of opportunities to interact with the internet, and if I’m going to use something like my watch here, I ain’t typing it in. If I could access my Google Assistant through this thing . . . Well, that’s a little unfair, it’s an iWatch, but that’s beside the point. Basically, I just want to use my voice, I want it to know it’s me, and go. The technology in the personal assistants already exists, it’s already out there, you can connect from every single device that you connect to, and you’re going to be reaching the exact same personal assistant.

 

It’s an integrated experience: I can start setting up a reservation on my phone, finish it when I hop in my car through the internet connectivity I have there, and it’s all one session. With that level of opportunity, it is just incredibly compelling, and I really firmly believe that’s the direction this is going to go. Right now there’s an awful lot of “call mom, call dad, set a timer, what’s the weather,” very basic stuff, but we’re getting used to it.

 

Jay Baer: You’ve been in SEO a really long time, as have I, and I feel like we’ve seen this movie before. If you look at early-day Google, Yahoo, and Ask Jeeves search queries, they were all two-, three-, four-word strings. Then over time, your average search query got longer, more detailed, and more specific as well. I feel like that’s a parallel to what we’re going to see in voice. You’re going to see more detailed, more nuanced questions.

 

Eric Enge: I agree, and just drawing your analogy out a little further, we also saw that the search engines’ capability to process queries evolved dramatically, and their ability to deal with different kinds of language constructs changed underneath our feet. Some of the algorithms we know about, things like RankBrain and other natural language search algorithms, were already dealing with this even separately from the whole voice conversation.

 

Jay Baer: Eric I wanted to ask you before we go about the Alexa skill that you have built to answer SEO questions, which I find hilarious and awesome and amazing, and I’m going to, when we’re done here, go upstairs and sit in front of my device and go to Eric Enge Stone Temple SEO school. Tell us about that process and what we can ask it, etc.

 

Eric Enge: We have a couple hundred, maybe about 250, SEO-related questions, so it might be something like: What is a noindex tag? How do you implement a nofollow? What is a 301 redirect? Very common questions every household person wants to ask.

 

Jay Baer: Everybody needs to know that. Most common questions are what’s the weather tomorrow and how do I do a 301? Those are the two questions.

 

Eric Enge: We actually have built that out for Alexa. We also have one for the Google Assistant, and on Alexa we have an SEO quiz where you can actually take a quiz and get your SEO skills graded. We developed it in-house. There are tools to help you do that. There’s a website you can go to called dialogflow.com, which will walk you through the entire process of building what they call an Actions on Google app for the personal assistant. It’s not easy, there are definitely some things to figure out, but it’s not terribly hard, and when you’re done you can actually export that code, which with very simple modifications can be used immediately on Alexa. You actually do it in one place and you get the . . . worked on for both.

 

One of the cool things about this is people are actually using them, not that it’s an enormously popular activity, as we joked about a moment ago. We’re getting visibility out of it. We actually got articles written about it, some press, which was cool. In addition, on the Google Assistant they have a mode called implicit queries, and if you check the box when you set up your app that you want that, somebody can ask Google a question without invoking our Actions on Google app. They might just say, “How do you implement a no-follow tag?” Google might come back and say, “Stone Temple has an answer for that, do you want to hear it?”

 

Jay Baer: Nice.

 

Eric Enge: Yes, which is nice. It’s free visibility.

 

Jay Baer: It’s a top-down funnel, yes I like it.

 

Eric Enge: Yes, and I know at this point we have something like 1,000 people who have been prompted that way and accepted it.

 

Jay Baer: I mean, that’s pretty strong. That’s a pretty tight target. No one’s asking about no-follow tags by accident.

 

Eric Enge: Right, and for this particular B2B application, which is kind of what our business is, it’s actually awesome. There’s a big opportunity here, because when you look at an Alexa skill or an Actions on Google app, what you have is the ability to get in on the ground floor of being an information provider to Google and to Amazon. In both cases, they’re looking for reputable sources of information to answer user questions. They’re going to have their Wikipedia relationships, Google might use crawling, Amazon is probably doing some other things to get data to people, and the people who provide these apps are another information source. They’ll draw on you if your app is getting good enough scores, however they’re scoring it. It’s another way to get visibility inside the digital marketing atmosphere.

 

Jay Baer: Do you think that being one of those information providers on the voice side would improve your topic authority on the regular web search side or have you seen the evidence of that?

 

Eric Enge: No evidence to date, and I think it’s too early for that to have happened at this point. I certainly think that some level of validation in a third party . . . well, it’s not a third party. I should say some level of validation on the Google Assistant or on Alexa, I think, could matter, absolutely.

 

Jay Baer: Yes it certainly can’t hurt is the way I look at it.

 

Eric Enge: It definitely isn’t going to hurt, and if you’re delivering reliable information and people are asking for you, or the assistant, to give them those answers, that’s a signal.

 

Jay Baer: Yes, I love it. Thanks so much for putting all the time and effort into this, doing the work that everybody is curious about but nobody else would put that kind of effort into. I appreciate that you and your team at Stone Temple, Eric, are willing to sit around and ask 5,000 questions times five devices, for a total, ladies and gentlemen, of 25,000 questions. That is a labor of love, that is for sure.

 

Eric Enge: No question about that. It was fun doing it.

 

Jay Baer: Grab yourself a copy of Rating the Smarts of the Digital Personal Assistants 2018, super interesting findings from Eric Enge and his team at Stone Temple Consulting. My friend thanks for being here. Great to talk to you as always.

 

Eric Enge: All right, thanks Jay.

 

Jay Baer: See you bud.

 

Eric Enge: Yes, bye.

 
