User response is being cut off and throwing an Unexpected Error. How can I get the error message that occurred?

amazon-alexa

#21

Hi Ben,

Looks like ON_ERROR is working OK in some cases. However, I would like to bring up one more thing here. My senior colleague in the US tried to record the following voice note, but it ended up with some weirdness. I am sharing the voice recording below.

https://drive.google.com/file/d/1RQug90eP1Z7KrdNl36sOVLvcDvyxAEi4/view

I looked into the CloudWatch logs, and it seems it was an EXCEEDED_MAX_REPROMPTS issue. Please listen to the voice recording above (it is only 49 seconds long). My senior's response (the slot value), after Alexa prompts "what is it", is about 10 seconds long.

In CloudWatch, I cannot see the response anywhere in the log. Does this mean Alexa completely ignored it and came up with an EXCEEDED_MAX_REPROMPTS error? The surprising thing is that after he said it once, Alexa asked the same question again, i.e. "what is it". And after he said it a second time, nothing happened and the interaction seemed to exit unexpectedly.

If it was an error, I was expecting ON_ERROR to handle it and store it in the database. But the slot value is nowhere in the log! I am not even sure whether ON_ERROR was called, because it produces an output. It rather looks to me like Unhandled was executed: I can see its output message in the log, but it was not in the voice recording, since Unhandled does not produce speech, as you said earlier.
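To show what I mean, here is a minimal Jovo v3-style sketch (not my exact code) of the two handlers, with the raw request logged so that whatever Alexa sends - including a SessionEndedRequest's reason and error fields - at least shows up in CloudWatch:

```javascript
app.setHandler({
    ON_ERROR() {
        // log the full raw request that triggered the error
        console.error('ON_ERROR raw request:', JSON.stringify(this.$request, null, 2));
        // storing the error into the database would happen here
    },

    Unhandled() {
        // a SessionEndedRequest (e.g. reason EXCEEDED_MAX_REPROMPTS) can land
        // here; its JSON carries a "reason" and sometimes an "error" object
        console.log('Unhandled raw request:', JSON.stringify(this.$request, null, 2));
    },
});
```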

However, he did the following test and it worked.

Alexa: …say add note
User: add note
Alexa: what is it
User: this is a test note
Alexa: anything else? say continue
User: add this message to my last note
Alexa: thank you. anything else?
User: no

The above notes were saved fine into the database, and the dialog completed successfully. I really need your help, @Ben_Hartman.

  • Subrata

#22

Hi Subrata,

What does your intent with the AMAZON.SearchQuery look like in the en-US.json file? It seems like Alexa is not detecting that you've triggered the intent and is therefore not sending anything to your backend to process. Since it doesn't trigger the intent, it times out after a retry, and I believe it eventually ends up sending you a SessionEndedRequest, which is being picked up by your Unhandled handler.

When I've used AMAZON.SearchQuery with long stretches (5-7 seconds) of speech before, I had to provide "trigger words" to alert Alexa that I was indeed in the correct intent. With this approach, however, one must be creative so that the conversation or interaction does not seem too unnatural. I'm not positive this is the problem, since "this is a test note" worked - maybe because that phrase is so short. I'd be curious whether things work as expected if you use phrases like these to fill the slot:

"My note is {SearchQuery}",
"Start recording {SearchQuery}",
"Continue with {SearchQuery}",  //this may not work since "continue" is used elsewhere
"Record {SearchQuery}"

Let me know what you find out…

-Ben


#23

Hi Ben,
Sorry for the delayed response. I was actually trying a few things based on your suggestion. Here are my observations:

  1. A trigger phrase is actually required for longer notes.
  2. The ideal length for each note is less than 7 seconds.
  3. Alexa wants clear speech (input) from me, and the tempo should not be too fast.
  4. There are times when Alexa might not work as desired.
  5. I have error handlers, but they do not always work properly either.

I have a question: can a custom slot come in handy in this situation?

Meanwhile, my senior has suggested that I try the following approach:

maybe you could adjust the interaction model to acknowledge that. Like "Please say a few quick words to prompt discussion. We will follow up for more details later." And then after the initial note, we could add some specific questions to add detail to the note. Like "How high a priority is this?" and "Where can we observe this, either on our project or somewhere else?".

Do I need different intents for the above dialog flow? Based on that suggestion, the user's replies to the latter two questions, i.e.

  • How high a priority is this?
  • Where can we observe this, either on our project or somewhere else?

might be something like

  • very high / high / low, etc., or even a short sentence
  • on our project / you can observe this blah blah blah… etc.

If those require multiple intents, again, I need to make sure that there are triggers (probably) to let Alexa know which intent to invoke.

Or would it be a good idea to have multiple (required) slots in the same intent, and then, once all the slots are filled, stitch them into one string and save it in the database?

Sorry for so many questions, Ben :slight_smile:

Regards,
Subrata


#24

Hi Subrata,

I don't have a lot of time to really think through my answer today, but I wanted to respond with my initial thoughts to help you brainstorm. I like using custom slot types for intents, and you could create custom slot types for the two questions that you mentioned.

For a GetPriorityIntent() you could define a slot called priority and create a type that contains the enumerated list of values (very high, high, low, etc.). For WhereObservedIntent() you could try the same thing if the list of responses is a defined range of values.
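A sketch of what that could look like in the Jovo model file (the type name and phrases are just placeholders; extend the values list as needed):

```json
{
    "intents": [
        {
            "name": "GetPriorityIntent",
            "phrases": [
                "{priority}",
                "it is {priority} priority"
            ],
            "inputs": [
                {
                    "name": "priority",
                    "type": "PriorityType"
                }
            ]
        }
    ],
    "inputTypes": [
        {
            "name": "PriorityType",
            "values": [
                { "value": "very high" },
                { "value": "high" },
                { "value": "low" }
            ]
        }
    ]
}
```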

One thing to keep in mind is that within Jovo you can have intent handlers that you call yourself, without an Alexa trigger phrase. this.ask(), this.toIntent(), this.toStatelessIntent(), and this.followUpState() are all powerful ways to take control of the dialog.
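For example, this.toIntent() lets LAUNCH hand control to another handler with no spoken trigger phrase at all (a tiny sketch; the intent name is made up):

```javascript
app.setHandler({
    LAUNCH() {
        // no utterance needed; control jumps straight to the handler below
        return this.toIntent('AddNoteIntent');
    },

    AddNoteIntent() {
        this.ask('What is your note?');
    },
});
```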

Here's a little of my mental brainstorming…

  1. Alexa starts by asking, 'Please say a few quick words to prompt discussion.'
  • Handler called something like FillDiscussionIntent()
  • This would fill a generic SearchQuery slot
  • In the handler you could add a this.$speech.addText('How high a priority is this?') and then either a this.ask(this.$speech) or a this.followUpState('GetNoteDetailState').ask(this.$speech).
  2. In the GetNoteDetailState you could then have two intent handlers: one to get the priority and the other to get the observed location. GetPriorityIntent() would be triggered first, and in that handler you could add a this.$speech.addText('Where is this observed?') and then a this.ask(this.$speech). I think (but am not positive) that Alexa will call the correct intent handler without a trigger word if the user's response is in the range of values that you defined in the custom type. There's a rough sketch of this flow below.
  3. OR… you could have one main discussion intent with a slot for the note (SearchQuery), the priority, and the observation. Make all three required to fill the intent, but only include the SearchQuery in your sample utterances.
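Here is a rough sketch of options 1 and 2 in Jovo v3 code. All intent, state, and slot names are placeholders, and the actual database call is only indicated as a comment:

```javascript
app.setHandler({
    FillDiscussionIntent() {
        // step 1: capture the free-form note from the SearchQuery slot
        this.$session.$data.note = this.$inputs.SearchQuery.value;
        this.$speech.addText('How high a priority is this?');
        // route the next answer to the handlers inside GetNoteDetailState
        this.followUpState('GetNoteDetailState').ask(this.$speech);
    },

    GetNoteDetailState: {
        GetPriorityIntent() {
            this.$session.$data.priority = this.$inputs.priority.value;
            this.$speech.addText('Where is this observed?');
            this.ask(this.$speech); // asking keeps us in GetNoteDetailState
        },

        WhereObservedIntent() {
            const { note, priority } = this.$session.$data;
            const observed = this.$inputs.observation.value;
            // stitch the pieces together and save them (hypothetical helper)
            // saveNote(`${note} | priority: ${priority} | observed: ${observed}`);
            this.tell('Thank you. Your note has been saved.');
        },
    },
});
```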

One thing I really like about Jovo is how quickly you can try out different approaches (multiple slots vs. multiple intents, etc.) right away with the Webhook. I don't know if I really answered your question, but hopefully you're getting closer to a solution that works for you.

-Ben


#25

It definitely helps me a lot. You are awesome and your help is extraordinary. Thank you so much for bearing with me for such a long time :slight_smile: