User response is being cut off and throwing an Unexpected Error. How to get the error message that happened?

amazon-alexa

#6

Hi @emfluenceindia,

Yes, I agree with you that this approach is not ideal - I just don’t know what other options you have given where the technology is right now. I think your best bet, if you’d like to continue moving forward with this skill, is to engineer the heck out of it to try to avoid, handle, and/or recover from errors. Essentially, turn those “Unexpected” errors into ones your code knows about and can process. One example may be to have an intent where you can ask Alexa, “What was the last note that you heard?” in order to let the user pick up at the right place. Maybe you could also set a flag at the top level of your user database called successfullyClosed which you set to true when you exit the skill cleanly. If you open the skill and this flag is false (and it’s not the first use), you know that you are probably in an error condition.
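
Just to illustrate, a rough sketch of that flag idea inside your app.setHandler() could look something like this (the handler and property names such as successfullyClosed, lastNote, and LastNoteIntent are only illustrative, and I'm assuming you persist user data through this.$user.$data):

LAUNCH() {
    // If the last session did not close cleanly (and this is not the first use),
    // we are probably recovering from an error.
    if (this.$user.$data.successfullyClosed === false) {
        return this.ask('It looks like we got cut off last time. Say "last note" to hear where we left off.');
    }
    this.$user.$data.successfullyClosed = false; // set back to true only on a clean exit
    return this.ask('Welcome back. Say continue to add a note.');
},

LastNoteIntent() {
    // Lets the user pick up at the right place after an unexpected stop.
    const lastNote = this.$user.$data.lastNote || 'nothing yet';
    return this.ask(`The last note I heard was: ${lastNote}. Say continue to add more.`);
},

END() {
    this.$user.$data.successfullyClosed = true; // clean exit
    return this.tell('Goodbye!');
},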

Do you have any idea why (and how often) the skill is stopping unexpectedly? I have found that many errors are avoidable and these are the ones I try to eliminate (e.g. intent handlers not firing because the skill is in a different state, slot values not being valid, etc.). However, when I test on a slow internet connection I have had skills just stop for no apparent reason. Just trying to figure out what you’re dealing with and if you’ve found the root cause of the errors.

Anyway, hope this helps…


#7

Hi Ben,
I don’t have a clear idea of how often the skill is hitting unexpected errors, since I am using the Alexa test console. So I have added myself as a beta tester and will now try to run the skill from my Android-based Alexa app. Maybe I will be able to reproduce the issue and get a better idea of what is going on.

I might come back to you for further help sooner or later, especially with the state handling and error handling parts. I am not an experienced Alexa developer yet, but I am trying my best to become one.
Hope you won’t mind.

Regards,
Subrata Sarkar


#8

Hi Subrata,

No problem, I don’t mind and I’m happy to help as long as I have the time (some days/weeks are busier than others!). Just curious - are you running with a Jovo webhook or have you uploaded your backend to Lambda?

-Ben


#9

Hi Ben,
I have deployed everything to Lambda.


#10

Are you using CloudWatch to look at your logs?


#11

I am using Monitoring. Is this what you are talking about? If not, please help me with the CloudWatch thing.


#12

Yes, Monitoring is using CloudWatch. You can either click the “View logs in CloudWatch” button at the top right or click on a LogStream in the list at the bottom of the Monitoring page. I wanted to make sure you knew how to view your console.log outputs and see the request/response messages.
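
If it helps, anything you console.log in your handlers ends up in those LogStreams, so you can dump the request and session data along the way. A rough sketch (NoteIntent is just a stand-in for one of your handlers; this.$request holds the incoming request object in Jovo):

NoteIntent() {
    // console.log output shows up in the Lambda's CloudWatch LogStream.
    console.log('Incoming request:', JSON.stringify(this.$request, null, 2));
    console.log('Session data:', JSON.stringify(this.$session.$data));
    // ...then handle the intent as usual
    return this.ask('What do you want me to know?');
}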


#13

I would like to ask you about one different thing. I am building this skill under the en-US locale, but I am based in India. So I added myself as a beta tester. The skill is under the Dev section of my Alexa App (Android); the skill is not live yet. The invocation phrase is “my quick notes”. But whenever I try to invoke it by saying Alexa, open my quick notes, rather than opening the skill, Alexa starts suggesting many other skills to me.

Initially the skill invocation was “emfluence project notes” (I am building this for emfluence), but the word emfluence is being treated as influence by Alexa! Maybe it is the dialect. So I used the above phrase “my quick notes”, but as I mentioned, the skill is not being opened and I am getting a lot of irrelevant suggestions, e.g. “my sticky notes”.

How can I get rid of this problem?


#14

When you click on the skill in your Alexa App, does it show that it is enabled? One thing you can try is to say, “Alexa, open my quick notes skill” and see if that works. I have an Alexa skill that will only work in the Android App if I say the word skill after my invocation name. I typically test on Echo devices, though, and I don’t have a lot of experience on the App.


#15

Interesting!! Let me try :slight_smile:


#16

Hi Ben,
Does a skill under beta test (not yet live) run on Echo devices?
I might be getting an Echo dot soon.


#17

Yes, it will as long as you enable Development testing on the Test tab in the Amazon developer console for your skill. If you want to try it on a device that is registered to a different Amazon account then just add that email address to the Beta testers group under the Distribution tab.


#18

Hi Ben,
I am trying to do the following additional stuff inside the Unhandled() method. This is the workflow:

User: Alexa, open virtual notes
Alexa: say continue
User: continue
Alexa: what do you want me to know
User: this is a test note (Alexa stores this in a session variable)
Alexa: ok. say continue to add more, stop to finish
User: exit

I am expecting the Unhandled() to be called since I don’t have any sample utterance called exit. And when Unhandled is called, I want Alexa to do the following:

Unhandled() {
    let confirmation = "";
    if (this.$session.$data.new_note !== undefined) {
      const agent_note = this.$session.$data.new_note;
      confirmation = `${agent_note}`;
    }

    const goodbye = `${confirmation} it was nice talking to you. goodbye!`;
    return this.tell(goodbye);
}

I am testing in the Alexa test console. When I type in exit, I am sure that Unhandled is called because the output log displays the following:

{
	"body": {
		"version": "1.0",
		"response": {
			"outputSpeech": {
				"type": "SSML",
				"ssml": "<speak>this is a test note.  it was nice talking to you. goodbye!</speak>"
			},
			"shouldEndSession": true,
			"type": "_DEFAULT_RESPONSE"
		},
		"sessionAttributes": {}
	}
}

But in the test area, it just shows exit. I was expecting Alexa to say this is a test note. it was nice talking to you. goodbye!

Although the speech is correctly generated, why is Alexa not speaking it?

The main objective is to put the user’s notes, whatever is in the session, back into the database in case the user suddenly stops the interaction or an error comes up, which I probably have to handle inside ON_ERROR or END?

One more thing: how can I generate an error from the Alexa test console so that I can see what is going on inside ON_ERROR or END?

Long question. Sorry about this!

Regards,
Subrata


#19

Hi Subrata,

It appears that you and I have had similar issues! Check out this post that I submitted about a month ago: Is it possible to have speech output on a SessionEndedRequest?.

When you say ‘Exit’ or ‘Quit’, Alexa will end the session and issue the SessionEndedRequest. I believe that you may be able to do some final processing at this point in the backend, but Alexa is done speaking/listening. If, however, you say ‘Stop’, Alexa will issue an AMAZON.StopIntent that can be handled normally and any output speech (like Goodbye) will be heard.

Have you tried writing to the database in your Unhandled state? I’m curious if it would still work even though you don’t hear the final it was nice talking to you. goodbye! message.

One more thing - do you have an END() intent handler? That’s probably where the SessionEndedRequest wants to go, but it is falling over to the Unhandled() method.
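
To be concrete, I mean something roughly like this (just a sketch - saveNoteToDatabase is a placeholder for whatever persistence call you end up using):

async END() {
    // A SessionEndedRequest lands here (and, if your config maps AMAZON.StopIntent to END
    // like the Jovo templates do, a Stop lands here as well).
    if (this.$session.$data.new_note !== undefined) {
        await saveNoteToDatabase(this.$session.$data.new_note); // placeholder helper
    }
    // Heard after a Stop, but not after a SessionEndedRequest - the session is already closed.
    return this.tell('Goodbye!');
}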

-Ben


#20

Hi Ben,

Nicely explained. I am now trying to do some database work inside both Unhandled and ON_ERROR. The good thing is that I can access the session values when they fire.

Since I have access to the session data, I believe I can make these handlers async and call another async method inside them to complete the database work. I shall get back to you with the result.

ON_ERROR, however, is able to speak. I should be able to put the user back on track from there.
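
Something roughly like this is what I have in mind for ON_ERROR (saveNoteToDatabase is just a placeholder for my persistence call):

async ON_ERROR() {
    // Unlike Unhandled on a SessionEndedRequest, ON_ERROR can still speak,
    // so I can save the partial note and try to put the user back on track.
    if (this.$session.$data.new_note !== undefined) {
        await saveNoteToDatabase(this.$session.$data.new_note); // placeholder helper
    }
    return this.ask('Sorry, something went wrong. Say continue to pick up where we left off.');
}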

By the way, I don’t have any END yet.


#21

Hi Ben,

Looks like ON_ERROR is working OK at some points. However, I would like to bring up one more thing here. My senior, who is based in the US, tried to record the following voice note, but it ended up with some weirdness. I am sharing the voice recording below.

https://drive.google.com/file/d/1RQug90eP1Z7KrdNl36sOVLvcDvyxAEi4/view

I looked into the CloudWatch log and it seems it was an EXCEEDED_MAX_REPROMPTS issue. Please listen to the voice recording above (it is only 49 seconds long). The response (slot value) from my senior, after Alexa prompts what is it, is about 10 seconds long.

In CloudWatch, I cannot even see the response anywhere in the log. Does this mean Alexa completely ignored it and came up with an EXCEEDED_MAX_REPROMPTS error? The surprising thing is, after he said it once, Alexa asked the same question, i.e. what is it. And after he said it for the second time, nothing happened and the interaction seemed to exit unexpectedly.

If it was an error, I was expecting ON_ERROR to handle it and store the note into the database. But the slot value is nowhere in the log! I am not even sure whether ON_ERROR was called, because it does produce output. It rather looks to me like Unhandled was executed: I can see the output message in the log, but it was not in the voice recording, since Unhandled does not produce speech, as you said earlier.

However, he did the following test and it worked.

Alexa: …say add note
User: add note
Alexa: what is it
User: this is a test note
Alexa: anything else? say continue
User: add this message to my last note
Alexa: thank you. anything else?
User: no

The above notes got saved fine into the database and the dialog was completed successfully. I really need your help @Ben_Hartman.

Subrata

#22

Hi Subrata,

What does your intent with the AMAZON.SearchQuery look like in the en-US.json file? It seems like Alexa is not detecting that you’ve triggered the intent and therefore not sending anything to your backend to process. Since it doesn’t trigger the intent, it times out after a retry and I believe it eventually ends up sending you a SessionEndedRequest which is being picked up by your Unhandled handler.

When I’ve used AMAZON.SearchQuery with long stretches (5-7 seconds) of speech before, I had to provide “trigger words” to alert Alexa that I was indeed in the correct intent. With this approach, however, one must be creative so that the conversation or interaction does not seem too unnatural. I’m not positive this is the problem, since “this is a test note” worked - maybe because that phrase is so short. I’d be curious if things worked as expected if you used phrases like these to fill the slot:

"My note is {SearchQuery}",
"Start recording {SearchQuery}",
"Continue with {SearchQuery}",  //this may not work since "continue" is used elsewhere
"Record {SearchQuery}"

Let me know what you find out…

-Ben


#23

Hi Ben,
Sorry for a delayed response. Actually I was trying a few things using your suggestion. Here is my observation:

  1. A trigger phrase is actually required for longer notes
  2. The ideal length for each note is less than 7 seconds
  3. Alexa wants clear speech (input) from me, and the tempo should not be very fast
  4. There are times where Alexa might not work as desired
  5. I have error handlers, but they do not always work properly either

I have a question.
Can a custom slot come in handy in this situation?

Meanwhile, my senior has told me to try the following way:

maybe you could adjust the interaction model to acknowledge that. Like “Please say a few quick words to prompt discussion. We will follow up for more details later.” And then after the initial note, we could add some specific questions to add detail to the note. Like “How high a priority is this?” and “Where can we observe this, either on our project or somewhere else?”.

Do I need different intents for the above dialog flow? Based on the above suggestion, the user’s replies to the latter two questions, i.e.

  • How high a priority is this?
  • Where can we observe this, either on our project or somewhere else?

might be something like

  • very high / high / low etc. or even a short sentence as well
  • on our project / you can observe this blah blah blah… etc.

If those require multiple intents, again, I need to make sure that there are triggers (probably) to let Alexa know which intent to invoke.

Or is it a good idea to have multiple (required) slots in the same intent, and then, once all the slots are filled, stitch them into one and save it to the database?

Sorry for so many questions Ben :slight_smile:

Regards,
Subrata


#24

Hi Subrata,

I don’t have a lot of time to really think out my answer today, but I wanted to respond with my initial thoughts to help you brainstorm. I like using custom slot types for intents, and you could create custom slot types for the two questions that you mentioned.

For a GetPriorityIntent() you could define a slot called priority and create a type that contains the enumerated list of values (very high, high, low, etc.). For WhereObservedIntent() you could try the same thing if the list of responses is a defined range of values.

One thing to keep in mind is that you can have intent handlers that you can call within Jovo that don’t necessarily have to have an Alexa trigger phrase. this.ask(), this.toIntent(), this.toStatelessIntent(), and this.followUpState() are all powerful ways to take control of the dialog.

Here’s a little of my mental brainstorming…

  1. Alexa starts by asking, ‘Please say a few quick words to prompt discussion’
  • Handler called something like FillDiscussionIntent()
  • this would fill a generic SearchQuery slot
  • In the handler you could add a this.$speech.addText('How high a priority is this?') and then either a this.ask(this.$speech) or a this.followUpState('GetNoteDetailState').ask(this.$speech).
  2. In the GetNoteDetailState you could then have two intent handlers: one to get the priority and the other to get the observed location. The GetPriorityIntent() would be triggered first, and in that intent handler you could add a this.$speech.addText('Where is this observed?') and then a this.ask(this.$speech). I think (but am not positive) that Alexa will call the correct intent handler without a trigger word if the user’s response is in the range of values that you defined in the custom type (see the sketch after this list).
  3. OR… you could try to have one main discussion intent that has a slot for the note (SearchQuery), the priority, and the observation. Make all three required to fill the intent, but only have the SearchQuery in your Sample Utterance.
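
To make the first two points a bit more concrete, the routing could look roughly like this (a sketch only - the handler, state, and slot names are placeholders, and it also shows reading the custom slot values mentioned above):

FillDiscussionIntent() {
    // Fills a generic SearchQuery slot with the user's free-form note.
    this.$session.$data.note = this.$inputs.discussion && this.$inputs.discussion.value;

    this.$speech.addText('How high a priority is this?');
    // Route the next response to the handlers inside GetNoteDetailState.
    return this.followUpState('GetNoteDetailState').ask(this.$speech);
},

GetNoteDetailState: {
    GetPriorityIntent() {
        // 'priority' is a slot backed by the custom type (very high, high, low, ...).
        this.$session.$data.priority = this.$inputs.priority && this.$inputs.priority.value;

        this.$speech.addText('Where is this observed?');
        return this.ask(this.$speech);
    },

    WhereObservedIntent() {
        this.$session.$data.observedAt = this.$inputs.location && this.$inputs.location.value;
        return this.tell('Thank you, I have everything I need.');
    },
},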

One thing I really like about Jovo is how quickly you can try out different approaches (multiple slots vs. multiple intents, etc.) right away with the Webhook. I don’t know if I really answered your question, but hopefully you’re getting closer to a solution that works for you.

-Ben


#25

It definitely helps me a lot. You are awesome and your help is extraordinary. Thank you so much for bearing with me for such a long time :slight_smile: