User response is being cut off and throwing an Unexpected Error. How do I get the error message that happened?

amazon-alexa

#1

Hi,

I am developing a skill using Jovo which takes notes from users and saves them in a database. The workflow is working OK, but when I say something, the data is being cut off and only a portion of it gets saved in the database, while the session still seems to be alive. When I try to say something again, Alexa says "Unexpected error". This is happening for one intent only, where I have used AMAZON.SearchQuery as the slot type.

I want to know the exact error that happened and have it sent to my email address for a better understanding. How can I do this?

According to this article, I can customize the error response. But I want to know the exact type of error that occurred and get the full error message in my email. How can I do this?

I checked my database field length and it is large enough to store long text. The portion being saved is nowhere near the field length; it is much shorter.

However, this is not happening when I test from the Alexa test console. Even a 500-character-long text gets saved fine in the database from there. I am using AMAZON.SearchQuery for this slot. Does Alexa restrict the character length that can be passed as a slot value?

I am not able to figure out the issue. Google could not help either :frowning:

Looking for your help.

Kind regards,
Subrata Sarkar


#2

I think the problem is that Alexa doesn't allow very long user input; I believe only a few seconds of speech work.


#3

Thanks Jan!

A few seconds means how many seconds? Is there a way to increase the time limit? Also, this cutoff is happening only with voice input; the test console works fine. So is there some kind of device dependency as well? If so, how do I deal with that?

A few more things I would like to get help with:

  1. How to handle unexpected errors
  2. How to bring a user back on track if invalid input, a no-intent match, or any other error is encountered
  3. Are there any options to still work with session values in case an error is encountered?
  4. Can the Fallback intent be used on errors so that we can still store the data in the database and respond with a friendly message?
  5. When is the END() handler called? Is the session accessible inside it? If yes, I will be able to do some tasks there as well.
  6. Does ON_ERROR kill the session?

Is there any way that I can still access my session values even after something went wrong?

Regards,
Subrata Sarkar


#4

Hi Subrata, I dealt with the time limit last year and I think you have about 8-10 seconds maximum for a response. Alexa needs to process the data that it is receiving, so it cannot absorb voice data beyond that time threshold. Would it be possible to give your notes one sentence at a time instead of a constant stream? Maybe you could then have Alexa give a random response while waiting for the next sentence (“Got it”, “Yeah”, “Ok”, “Good”).

The tricky part of this approach could be getting your intent Handler to fire. You would probably need a word or phrase that each note starts with to trigger the correct handler.

Here’s an example:
User: CAPTURE this is the first sentence of my note.
Alexa: Got it
User: CAPTURE this is the second sentence of my note.
Alexa: Ok
User: CAPTURE this is the third sentence…

Then your intent phrase would be "CAPTURE {note}", where note is an AMAZON.SearchQuery slot.
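Something like this could work as a rough sketch of that handler in Jovo (CaptureNoteIntent and the new_note session key are just example names, not from your actual skill):

// Hypothetical Jovo handler for an utterance like "CAPTURE {note}",
// where {note} is an AMAZON.SearchQuery slot.
CaptureNoteIntent() {
    const note = this.$inputs.note.value;

    // Stitch the new sentence onto whatever has been collected so far in the session.
    const soFar = this.$session.$data.new_note || '';
    this.$session.$data.new_note = `${soFar} ${note}`.trim();

    // Acknowledge with a random short response and keep the session open.
    const acks = ['Got it', 'Yeah', 'Ok', 'Good'];
    return this.ask(acks[Math.floor(Math.random() * acks.length)]);
},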

Just my thoughts… Good luck! Hopefully as time goes on these voice devices will be able to handle more natural conversational interactions.

One more lesson learned: I like to put an Unhandled() handler in all of my Jovo states with a simple console.log to capture situations where Alexa just stops without reporting an error. And yes, I believe as long as the session is still open you should be able to recover and get your user back on track. I haven’t had a lot of luck using Alexa’s validations in their auto dialog model, so I typically try to catch the bad responses in the back end and manage the dialog myself using Jovo states. Others may have had better luck with the auto validations.
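As a rough illustration (the state name and speech text are made up), that catch-all looks something like this inside a Jovo state:

NoteState: {
    // Catch anything that does not match an intent while we are in this state,
    // log it so it shows up in CloudWatch, and re-prompt instead of letting the skill die silently.
    Unhandled() {
        console.log('Unhandled request in NoteState:', JSON.stringify(this.$request));
        return this.ask('Sorry, I did not get that. Say continue to add more, or stop to finish.');
    },
},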

Finally, I’m going to defer to the Jovo experts for the rest of your questions. I just wanted to respond since I fought with the same thing last year…


#5

Thanks Ben! In fact I am taking the same approach as yours, i.e. collecting shorter notes from the user and saying ‘GOT IT… Continue’ for the next note. I am then stitching the notes together and putting them in a session variable. When the user terminates the interaction (e.g. by saying STOP), I retrieve the stitched note from the session and save it to the database.
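In rough outline, that save-on-stop step looks like this (saveNoteToDatabase() is just a placeholder for my actual database call, and depending on the project's intentMap this could live in END() instead):

async 'AMAZON.StopIntent'() {
    // Persist the stitched note only when the user finishes the interaction.
    const fullNote = this.$session.$data.new_note;
    if (fullNote) {
        await saveNoteToDatabase(fullNote); // placeholder for my database call
    }
    return this.tell('Your note has been saved. Goodbye!');
},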

But this, in my opinion, is not good practice, especially if the skill stops unexpectedly and I am unable to retrieve the session value, unless certain intents are still capable of doing this at that stage, which I don't know yet.


#6

Hi @emfluenceindia,

Yes, I agree with you that this approach is not ideal - I just don’t know what other options you have at this stage of the technology. I think your best bet, if you’d like to continue moving forward with this skill, is to engineer the heck out of it to try to avoid, handle, and/or recover from errors. Essentially, turn those “Unexpected” errors into ones your code knows about and can process. One example may be to have an intent where you can ask Alexa, “What was the last note that you heard?” in order to let the user pick up at the right place. Maybe you could also set a flag at the top level of your user database called successfullyClosed which you set to true when you exit the skill cleanly. If you open the skill and this flag is false (and it’s not the first use), you know that you are probably in an error condition.
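Here’s a rough sketch of that flag idea in Jovo, assuming you have a user database integration (e.g. DynamoDB) configured; the property name is of course just an example:

LAUNCH() {
    // successfullyClosed is false whenever a previous session did not end cleanly.
    if (this.$user.$data.successfullyClosed === false) {
        return this.ask('It looks like we got cut off last time. Do you want to continue your last note?');
    }
    this.$user.$data.successfullyClosed = false; // flipped back to true only on a clean exit
    return this.ask('Say continue to start a new note.');
},

END() {
    // Reached when the session ends cleanly: mark it as closed properly.
    this.$user.$data.successfullyClosed = true;
},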

Do you have any idea why (and how often) the skill is stopping unexpectedly? I have found that many errors are avoidable and these are the ones I try to eliminate (e.g. intent handlers not firing because the skill is in a different state, slot values not being valid, etc.). However, when I test on a slow internet connection I have had skills just stop for no apparent reason. Just trying to figure out what you’re dealing with and if you’ve found the root cause of the errors.

Anyway, hope this helps…


#7

Hi Ben,
I don’t have a clear idea of how often the skill is experiencing the unexpected error, since I am using the Alexa test console. So I have added myself as a beta tester and will now try to run the skill from my Android-based Alexa app. Maybe I will be able to reproduce the issue and get a better idea of what is going on.

I might come back to you for further help sooner or later, especially with the state handling and error handling parts. I am not an experienced Alexa developer yet, but I am trying my best to become one.
Hope you won’t mind.

Regards,
Subrata Sarkar


#8

Hi Subrata,

No problem, I don’t mind and I’m happy to help as long as I have the time (some days/weeks are busier than others!). Just curious - are you running with a Jovo webhook or have you uploaded your backend to Lambda?

-Ben


#9

Hi Ben,
I have deployed everything to Lambda.


#10

Are you using CloudWatch to look at your logs?


#11

I am using Monitoring. Is this what you are talking about? If not, please help me with the CloudWatch thing.


#12

Yes, Monitoring is using CloudWatch. You can either click the “View logs in CloudWatch” button at the top right or click on a LogStream in the list at the bottom of the Monitoring page. I wanted to make sure you knew how to view your console.log outputs and see the request/response messages.
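Anything you write with console.log in your Lambda ends up in those CloudWatch log streams, so something as simple as this (a sketch, placed in whichever handler you want to inspect) is enough to see the incoming request:

LAUNCH() {
    // This output shows up in the CloudWatch LogStream for the Lambda invocation.
    console.log('Incoming request:', JSON.stringify(this.$request, null, 2));
    return this.ask('Welcome back. Say continue to add a note.');
},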


#13

I would like to ask you about a different thing. I am building this skill under the en-US locale, but I am based in India, so I added myself as a beta tester. The skill is under the Dev section of my Alexa app (Android) and is not live yet. The invocation phrase is “my quick notes”. But whenever I try to invoke it by saying “Alexa, open my quick notes”, instead of opening the skill Alexa starts suggesting many other skills to me.

Initially the skill invocation was “emfluence project notes” (I am building this for emfluence), but the word emfluence is being treated as “influence” by Alexa! Maybe it is the dialect. So I switched to the phrase “my quick notes”, but as I mentioned, the skill is not being opened and I am getting a lot of irrelevant suggestions, e.g. “my sticky notes”.

How can I get rid of this problem?


#14

When you click on the skill in your Alexa App, does it show that it is enabled? One thing you can try is to say, “Alexa, open my quick notes skill” and see if that works. I have an Alexa skill that will only work in the Android App if I say the word skill after my invocation name. I typically test on Echo devices, though, and I don’t have a lot of experience on the App.


#15

Interesting!! Let me try :slight_smile:


#16

Hi Ben,
Does a skill under beta test (not yet live) run on Echo devices?
I might be getting an Echo dot soon.


#17

Yes, it will as long as you enable Development testing on the Test tab in the Amazon developer console for your skill. If you want to try it on a device that is registered to a different Amazon account then just add that email address to the Beta testers group under the Distribution tab.


#18

Hi Ben,
I am trying to do the following additional work inside the Unhandled() method. This is the workflow:

User: Alexa, open virtual notes
Alexa: say continue
User: continue
Alexa: what do you want me to know
User: this is a test note (Alexa stores this in a session variable)
Alexa: ok. say continue to add more, stop to finish
User: exit

I am expecting Unhandled() to be called since I don’t have any sample utterance for exit. And when Unhandled() is called, I want Alexa to do the following:

Unhandled() {
    let confirmation = "";

    // If a note was collected earlier in this session, read it back as confirmation.
    if (this.$session.$data.new_note !== undefined) {
        confirmation = this.$session.$data.new_note;
    }

    // End the session with the (optional) note followed by a goodbye message.
    const goodbye = `${confirmation} it was nice talking to you. goodbye!`;
    return this.tell(goodbye);
}

I am testing in the Alexa test console. When I type in exit, I am sure that Unhandled() is called because the output log displays the following:

{
	"body": {
		"version": "1.0",
		"response": {
			"outputSpeech": {
				"type": "SSML",
				"ssml": "<speak>this is a test note.  it was nice talking to you. goodbye!</speak>"
			},
			"shouldEndSession": true,
			"type": "_DEFAULT_RESPONSE"
		},
		"sessionAttributes": {}
	}
}

But in the test area, it just shows exit. I am expecting Alexa to say “this is a test note. it was nice talking to you. goodbye!”.

Although the speech is correctly generated, why is Alexa not speaking it?

The main objective is to put the user’s notes, whatever is in the session, back into the database in case the user suddenly stops the interaction or an error comes up, which I would have to handle inside ON_ERROR or END, maybe?

One more thing: how can I generate an error from the Alexa test console so that I can see what is going on inside ON_ERROR or END?

Long question. Sorry about this!

Regards,
Subrata


#19

Hi Subrata,

It appears that you and I have had similar issues! Check out this post that I submitted about a month ago: Is it possible to have speech output on a SessionEndedRequest?.

When you say ‘Exit’ or ‘Quit’, Alexa will end the session and issue a SessionEndedRequest. I believe you may be able to do some final processing in the backend at this point, but Alexa is done speaking and listening. If, however, you say ‘Stop’, Alexa will issue an AMAZON.StopIntent that can be handled normally, and any output speech (like Goodbye) will be heard.

Have you tried writing to the database in your Unhandled state? I’m curious if it would still work even though you don’t hear the final it was nice talking to you. goodbye! message.

One more thing - do you have an END() intent handler? That’s probably where the SessionEndedRequest wants to go, but it is falling through to the Unhandled() method instead.
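If it helps, a minimal END() handler along those lines might look like this (saveNoteToDatabase() is a hypothetical helper; there is deliberately no speech output, since Alexa is already done listening on a SessionEndedRequest):

async END() {
    // Final chance to persist whatever is still sitting in the session.
    const note = this.$session.$data.new_note;
    if (note) {
        await saveNoteToDatabase(note); // hypothetical database helper
    }
},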

-Ben


#20

Hi Ben,

Nicely explained. I am now trying to do some database work inside both Unhandled and ON_ERROR. The good thing is that I can access the session values when they fire.

Since I have access to the session, I believe I can make these handlers async and call another async method inside them to complete the database work. I shall get back to you with the result.

ON_ERROR, however, is able to speak. I should be able to put the user back on track from there.

By the way, I don’t have any END yet.
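For what it’s worth, the async version I have in mind looks roughly like this (saveNoteToDatabase() is again just a placeholder for my real database call):

async ON_ERROR() {
    // Save whatever has been collected so far before responding to the error.
    const note = this.$session.$data.new_note;
    if (note) {
        await saveNoteToDatabase(note); // placeholder for my database call
    }
    // ON_ERROR can still speak, so try to put the user back on track.
    return this.ask('Sorry, something went wrong. Your note so far has been saved. Say continue to keep going.');
},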