Google Assistant list selection on surfaces with no screen

google-assistant

#1

A List Selector for the Google Assistant integration is really useful for letting the user choose from multiple options. The ON_ELEMENT_SELECTED() handler lets you get the ID of the selected item, whether the user said the full name of the item or just an ordinal choice (e.g. "the 2nd one").

The ON_ELEMENT_SELECTED() handler works great for the Phone and Smart Display surfaces. However, when testing on a Speaker (Google Home), ON_ELEMENT_SELECTED() is not triggered; the Fallback intent is triggered instead.
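For reference, here is a trimmed-down sketch of my handler setup (the articles session key, the prompts, and mapping the Fallback intent to Unhandled are all specific to my project):

app.setHandler({
    ON_ELEMENT_SELECTED() {
        // getSelectedElementId() returns the key set via OptionItem.setKey(),
        // which in my case is the article's index
        const selectedIndex = this.getSelectedElementId();
        const article = this.$session.$data.articles[selectedIndex];
        this.tell(article.summary);
    },

    Unhandled() {
        // On the Google Home speaker, spoken selections never reach
        // ON_ELEMENT_SELECTED and end up here (my intentMap routes the
        // Fallback intent to Unhandled)
        this.ask('Sorry, which article would you like to hear?');
    },
});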

Is it possible to add something to the Option Item so that items can be selected on surfaces with no screen (Google Home)? Or do you have to manually add intents to handle the user's selection? That would be very hard to handle if, for example, the user reads back the title of an article without using the list selector.

The code I am currently using to construct a List of headlines:

// List and OptionItem come with Jovo's Google Assistant integration
const { List, OptionItem } = require('jovo-platform-googleassistant');

/**
 * Builds a Google Assistant visual list to show article headlines.
 * Each item's key is the article's index, so the selection handler
 * can look the article up again.
 */
exports.ArticleHeadlineListBuilder = (articles) => {
    const list = new List();
    articles.forEach((article, i) => {
        list.addItem(
            (new OptionItem())
                .setTitle(article.title)
                .setDescription(`${article.summary.substring(0, 80)}...`)
                .setImage({
                    url: article.img_url,
                    accessibilityText: 'Article Headline Image',
                })
                .setKey(`${i}`)
        );
    });

    return list;
};
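And this is roughly how the list gets shown (ShowHeadlinesIntent and fetchArticles are placeholders for my actual intent name and API call):

const { ArticleHeadlineListBuilder } = require('./list-builder'); // project-specific path

app.setHandler({
    async ShowHeadlinesIntent() {
        const articles = await fetchArticles(); // placeholder for the API request
        this.$session.$data.articles = articles; // kept for ON_ELEMENT_SELECTED
        this.$googleAction.showList(ArticleHeadlineListBuilder(articles));
        this.ask('Which article would you like to hear about?');
    },
});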

#2

Hi @Marko_Arezina, thanks for this suggestion, very interesting!

Right now, I believe it would be necessary to use ON_ELEMENT_SELECTED for touch input (on devices with screens) and to build your own handlers for voice-only devices. I agree that this can be quite tedious, though.
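To sketch what I mean: the handler that shows the list would route into a state with this.followUpState('ListSelectionState') before asking, and the state would catch an ordinal selection intent. OrdinalSelectionIntent and its position input below are example names that would have to exist in your own language model:

app.setHandler({
    ListSelectionState: {
        OrdinalSelectionIntent() {
            // e.g. "the second one" -> a position value of '2'
            // (depends on how the input is modeled)
            const index = parseInt(this.$inputs.position.value, 10) - 1;
            const article = this.$session.$data.articles[index];

            if (!article) {
                return this.ask('Sorry, which one would you like?');
            }
            this.tell(article.summary);
        },
    },
});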

Maybe a “list component” could be something for the feature idea we’re discussing here?


#3

Hey @jan,

I checked out the conversational components, which seem like a really promising idea. However, I'm surprised that there is no list selector that works by voice alone, for example when the user needs to choose from a series of song titles, articles, or podcasts. Do you know of anything in Jovo that can be used to achieve this?

If your content comes dynamically from an API, I don't think it is possible to train the language model to recognize when the user reads back the name of the selected item. While it is possible to train intents for ordinal selection (1st, 2nd, …), it seems like that would make for a less than optimal user experience.

If there is no voice-only method for selecting from a list of items, then I can suggest a list component in the other thread. It could take a list of choices, and the user could select an element using an ordinal (1st, 2nd, 3rd).


#4

Yes, this is one of the main challenges a lot of people have with Alexa right now. Typically, people default to ordinal selection, but I agree that this isn’t great!

The new dynamic entities feature could be helpful there: https://developer.amazon.com/docs/custom-skills/use-dynamic-entities-for-customized-interactions.html
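The idea would be to register the article titles as slot values at runtime, so the model can actually recognize them. A sketch of the raw directive (ArticleTitleType is an example slot type name, and how you attach the directive to the response depends on your setup):

// Builds an Alexa Dialog.UpdateDynamicEntities directive that registers
// the fetched article titles as slot values for the current session
const buildDynamicEntitiesDirective = (articles) => ({
    type: 'Dialog.UpdateDynamicEntities',
    updateBehavior: 'REPLACE',
    types: [
        {
            name: 'ArticleTitleType', // example slot type name
            values: articles.map((article, i) => ({
                id: `${i}`,
                name: {
                    value: article.title,
                    synonyms: [],
                },
            })),
        },
    ],
});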

We don't have anything out of the box yet; most people build it themselves with states and intent handlers. It's a great idea to think about an abstracted list element, though. Feel free to suggest it in the other thread; I'm curious what the others think. Thank you!