Google action testing with API Gateway

google-assistant

#1

I have a Google Action created in the Actions console. I'm able to deploy that action with the Jovo CLI. The webhook is enabled and points to a Lambda via API Gateway.
Everything works fine with Alexa, of course; it's the same code base.
When I test from the Actions simulator, the request reaches API Gateway and is passed on to the Lambda handler. The request is the following:
{
  "handler": {
    "name": "Jovo"
  },
  "intent": {
    "name": "actions.intent.MAIN",
    "params": {},
    "query": "Talk to project shell"
  },
  "scene": {
    "name": "actions.scene.START_CONVERSATION",
    "slotFillingStatus": "UNSPECIFIED",
    "slots": {}
  },
  "session": {
    "id": "ABwppHFfAOvTPuiiooXBnnA_r9R5ptvUiYmba1QrnVtcUe2Wc2mMhzAWqbpc9u6mjMiRd7LjYUj5qYofUamPzz0w7gtsNRFLcBQ",
    "params": {},
    "typeOverrides": [],
    "languageCode": ""
  },
  "user": {
    "locale": "en-US",
    "params": {
      "userId": "f72b6f69-d831-43b7-a816-b1448c17ea83"
    },
    "accountLinkingStatus": "ACCOUNT_LINKING_STATUS_UNSPECIFIED",
    "verificationStatus": "VERIFIED",
    "packageEntitlements": [],
    "gaiamint": "",
    "permissions": [],
    "lastSeenTime": "2021-05-31T17:15:32Z"
  },
  "home": {
    "params": {}
  },
  "device": {
    "capabilities": [
      "SPEECH",
      "RICH_RESPONSE",
      "LONG_FORM_AUDIO"
    ],
    "timeZone": {
      "id": "America/Guatemala",
      "version": ""
    }
  }
}
The error in the Lambda code is the following:
{
  "errorType": "Error",
  "errorMessage": "Can't handle request object.",
  "code": "ERR_NO_MATCHING_PLATFORM",
  "module": "jovo-core",
  "hint": "Please add an integration that handles that type of request.",
  "stack": [
    "Error: Can't handle request object.",
    "    at App.handle (/var/task/node_modules/jovo-core/dist/src/core/BaseApp.js:176:23)",
    "    at async App.handle (/var/task/node_modules/jovo-framework/dist/src/App.js:265:9)",
    "    at async Runtime.exports.handler (/var/task/index.js:26:3)"
  ]
}

That's what I'm able to find in the CloudWatch logs for my voice app's log group. Any idea what may be causing the issue?


#2

Hey @Mairon_Corrales

Does it work locally with the Jovo Webhook?


#3

Hey @AlexSwe, yes, this works with the Jovo Webhook, and with the Lambda function on AWS. What I find most interesting is that the error happens when testing against the Actions simulator, but it works when testing in Dialogflow. So the question now would be: how do I deploy to Dialogflow instead of to the action in Actions Builder?
Or at least, how do I make the action in Actions Builder work?


#4

Hey @Mairon_Corrales, as far as I know, Jovo checks whether your project.js contains a dialogflow configuration to decide whether deployment is done for Conversational Actions or for Dialogflow.
(screenshot: project.js with a dialogflow configuration)
If it contains one, as shown above, deployment is done for Dialogflow.
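For illustration, such a dialogflow block in project.js might look like the following sketch (the `nlu`/`dialogflow` keys follow the Jovo v3 docs as far as I know; all IDs and paths below are placeholders, not the poster's actual values):

```javascript
// project.js (sketch, not the actual config from the screenshot).
// The presence of a dialogflow section is what switches
// `jovo deploy` to Dialogflow instead of Conversational Actions.
module.exports = {
  googleAction: {
    nlu: 'dialogflow',
    dialogflow: {
      projectId: 'your-dialogflow-project-id',
      keyFile: './path/to/service-account.json',
    },
  },
};
```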
The "no matching platform" error occurred for me when I deployed a Conversational Action but instantiated the regular Google Action object (instead of the Conversational Action one). Could you check whether your app.ts imports the Google object from the "conv" Google Action package? The old googleAction package would not have "conv" at the end.
(screenshot: app.ts import statement)
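For comparison, the two imports differ only in the package name; a sketch:

```javascript
// Conversational Actions (Actions Builder) platform — note the "conv" suffix:
const { GoogleAssistant } = require('jovo-platform-googleassistantconv');

// Legacy Google Action via Dialogflow — no "conv" suffix:
// const { GoogleAssistant } = require('jovo-platform-googleassistant');
```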


#5

I have my project.js like this:
googleAction: {
  projectId: 'my-project-id',
  webhook: 'my-webhook-url'
}
So I need to modify it like the one you showed me to deploy to Dialogflow?
The thing is that I already have a Dialogflow agent manually set up and have been able to make requests to the backend, but when testing from a device it points to the action in Actions Builder.
Also, my app.js has these two lines:

const { GoogleAssistant } = require('jovo-platform-googleassistant');

const { Dialogflow } = require('jovo-platform-dialogflow');
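For reference, those two imports are typically registered on the app object; a minimal sketch assuming the standard Jovo v3 setup (not taken from the poster's code):

```javascript
const { App } = require('jovo-framework');
const { GoogleAssistant } = require('jovo-platform-googleassistant');
const { Dialogflow } = require('jovo-platform-dialogflow');

const app = new App();

// Register the Dialogflow-based Google Assistant platform.
app.use(new GoogleAssistant(), new Dialogflow());

module.exports = { app };
```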


#6

I would recommend using the new Actions Builder.

Here’s the sample project for the new Conversational Actions.

Hope it helps you add the missing parts to your code. Let me know if you need further help.


#7

Hello guys! Thanks for your answers. I have my code working now with an Action on Google and Dialogflow as the NLU engine. That is OK, but now I'm not able to test on smart display or mobile devices; it only works on speaker devices.
Do you happen to know why this is happening?


#8

This is weird.

Do you get any error messages? Is the webhook called?


#9

Yeah, the error message says the SSML is malformed. I think it's due to not having cards or simple text set as part of the response.


#10

Could you post the response JSON?


#11

Sure, this is the response:
{
  "responseMetadata": {
    "status": {
      "code": 10,
      "message": "Failed to parse Dialogflow response into AppResponse because of invalid platform response: Could not find a RichResponse or SystemIntent in the platform response for agentId: 68f524ae-0c63-4e90-a14e-b3977d81abbe and intentId: 8b1139dc-f4b6-4d99-b506-e0ca85e29bf3. WebhookStatus: ."
    }
  }
}
Also, this is the request:
{
  "user": {
    "locale": "en-US",
    "userStorage": "{\"userId\":\"d6eb62ee-0cef-43c8-820c-2e2ac0ae75d3\"}",
    "userVerificationStatus": "VERIFIED"
  },
  "conversation": {
    "conversationId": "ABwppHEkSO-8xhRUdoGJaUq0RcHXtEntedqAEfKoe6YR-rfyCIVvoh74KGAXPjQ5UsteX2WjkA",
    "type": "ACTIVE",
    "conversationToken": "[\"_jovo_session_wwphb\"]"
  },
  "inputs": [
    {
      "intent": "actions.intent.TEXT",
      "rawInputs": [
        {
          "inputType": "VOICE",
          "query": "no"
        }
      ],
      "arguments": [
        {
          "name": "text",
          "rawText": "no",
          "textValue": "no"
        }
      ]
    }
  ],
  "surface": {
    "capabilities": [
      { "name": "actions.capability.ACCOUNT_LINKING" },
      { "name": "actions.capability.MEDIA_RESPONSE_AUDIO" },
      { "name": "actions.capability.AUDIO_OUTPUT" }
    ]
  },
  "isInSandbox": true,
  "availableSurfaces": [
    {
      "capabilities": [
        { "name": "actions.capability.AUDIO_OUTPUT" },
        { "name": "actions.capability.SCREEN_OUTPUT" },
        { "name": "actions.capability.WEB_BROWSER" }
      ]
    }
  ],
  "requestType": "SIMULATOR"
}
On the speaker everything seems to work, but on the smart display it just doesn't.


#12

Thanks, could you also provide the response json (from the app) that causes the error?


#13

Sure. From the app on a smartphone, this is the request:
{
  "responseId": "55278a32-0715-404b-bba1-a821c482ea34-c1c4ad2e",
  "queryResult": {
    "queryText": "GOOGLE_ASSISTANT_WELCOME",
    "action": "input.welcome",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentText": "Good day! What can I do for you today?",
    "fulfillmentMessages": [
      {
        "text": {
          "text": [
            "Good day! What can I do for you today?"
          ]
        }
      }
    ],
    "outputContexts": [
      { "name": "projects/dgts-ms/agent/environments/__aog-1/users/-/sessions/ABwppHGJLdsyOfRWpPsiMCDIB4sdyXCNbxwL8F5I-4X6fkdatTTApjW67xXHhOSVIO1ICOPt8A/contexts/actions_capability_screen_output" },
      { "name": "projects/dgts-ms/agent/environments/__aog-1/users/-/sessions/ABwppHGJLdsyOfRWpPsiMCDIB4sdyXCNbxwL8F5I-4X6fkdatTTApjW67xXHhOSVIO1ICOPt8A/contexts/actions_capability_audio_output" },
      { "name": "projects/dgts-ms/agent/environments/__aog-1/users/-/sessions/ABwppHGJLdsyOfRWpPsiMCDIB4sdyXCNbxwL8F5I-4X6fkdatTTApjW67xXHhOSVIO1ICOPt8A/contexts/actions_capability_web_browser" },
      { "name": "projects/dgts-ms/agent/environments/__aog-1/users/-/sessions/ABwppHGJLdsyOfRWpPsiMCDIB4sdyXCNbxwL8F5I-4X6fkdatTTApjW67xXHhOSVIO1ICOPt8A/contexts/actions_capability_media_response_audio" },
      { "name": "projects/dgts-ms/agent/environments/__aog-1/users/-/sessions/ABwppHGJLdsyOfRWpPsiMCDIB4sdyXCNbxwL8F5I-4X6fkdatTTApjW67xXHhOSVIO1ICOPt8A/contexts/actions_capability_account_linking" },
      { "name": "projects/dgts-ms/agent/environments/__aog-1/users/-/sessions/ABwppHGJLdsyOfRWpPsiMCDIB4sdyXCNbxwL8F5I-4X6fkdatTTApjW67xXHhOSVIO1ICOPt8A/contexts/google_assistant_input_type_keyboard" },
      { "name": "projects/dgts-ms/agent/environments/__aog-1/users/-/sessions/ABwppHGJLdsyOfRWpPsiMCDIB4sdyXCNbxwL8F5I-4X6fkdatTTApjW67xXHhOSVIO1ICOPt8A/contexts/google_assistant_welcome" },
      {
        "name": "projects/dgts-ms/agent/environments/__aog-1/users/-/sessions/ABwppHGJLdsyOfRWpPsiMCDIB4sdyXCNbxwL8F5I-4X6fkdatTTApjW67xXHhOSVIO1ICOPt8A/contexts/system_counters",
        "parameters": {
          "no-input": 0,
          "no-match": 0
        }
      }
    ],
    "intent": {
      "name": "projects/dgts-ms/agent/intents/59bc7c3e-b1fb-4472-83fd-8d29de6efb95",
      "displayName": "Default Welcome Intent"
    },
    "intentDetectionConfidence": 1,
    "languageCode": "en"
  },
  "originalDetectIntentRequest": {
    "source": "google",
    "version": "2",
    "payload": {
      "user": {
        "locale": "en-US",
        "userStorage": "{\"userId\":\"d6eb62ee-0cef-43c8-820c-2e2ac0ae75d3\"}",
        "userVerificationStatus": "VERIFIED"
      },
      "conversation": {
        "conversationId": "ABwppHGJLdsyOfRWpPsiMCDIB4sdyXCNbxwL8F5I-4X6fkdatTTApjW67xXHhOSVIO1ICOPt8A",
        "type": "NEW"
      },
      "inputs": [
        {
          "intent": "actions.intent.MAIN",
          "rawInputs": [
            {
              "inputType": "KEYBOARD",
              "query": "talk to project shell test"
            }
          ]
        }
      ],
      "surface": {
        "capabilities": [
          { "name": "actions.capability.SCREEN_OUTPUT" },
          { "name": "actions.capability.AUDIO_OUTPUT" },
          { "name": "actions.capability.WEB_BROWSER" },
          { "name": "actions.capability.MEDIA_RESPONSE_AUDIO" },
          { "name": "actions.capability.ACCOUNT_LINKING" }
        ]
      },
      "isInSandbox": true,
      "availableSurfaces": [
        {
          "capabilities": [
            { "name": "actions.capability.WEB_BROWSER" },
            { "name": "actions.capability.AUDIO_OUTPUT" },
            { "name": "actions.capability.SCREEN_OUTPUT" }
          ]
        }
      ]
    }
  },
  "session": "projects/dgts-ms/agent/environments/__aog-1/users/-/sessions/ABwppHGJLdsyOfRWpPsiMCDIB4sdyXCNbxwL8F5I-4X6fkdatTTApjW67xXHhOSVIO1ICOPt8A"
}

And the response is this one:
{
  "fulfillmentText": "<audio src=\"https://taps-voice-app-bucket.s3.amazonaws.com/ms_audios/ms_intro.mp3\"/> <audio src=\"https://taps-voice-app-bucket.s3.amazonaws.com/static_responses/welcome_welcome_message_smart_display_option_1.mp3\"/>",
  "end_interaction": false,
  "outputContexts": [
    {
      "name": "projects/dgts-ms/agent/environments/__aog-1/users/-/sessions/ABwppHGJLdsyOfRWpPsiMCDIB4sdyXCNbxwL8F5I-4X6fkdatTTApjW67xXHhOSVIO1ICOPt8A/contexts/_jovo_session_rriff",
      "lifespanCount": 1,
      "parameters": {
        "_JOVO_STATE_": "WelcomeFollowUpState"
      }
    }
  ],
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "ssml": "<audio src=\"https://taps-voice-app-bucket.s3.amazonaws.com/ms_audios/ms_intro.mp3\"/> <audio src=\"https://taps-voice-app-bucket.s3.amazonaws.com/static_responses/welcome_welcome_message_smart_display_option_1.mp3\"/>"
            }
          }
        ]
      },
      "noInputPrompts": [
        {
          "ssml": "<audio src=\"https://taps-voice-app-bucket.s3.amazonaws.com/static_responses/welcome_no_input_smart_display.mp3\"/>"
        }
      ],
      "userStorage": "{\"userId\":\"d6eb62ee-0cef-43c8-820c-2e2ac0ae75d3\"}"
    }
  }
}

That's the part from AWS CloudWatch. From Google Cloud the response is quite different:
{
  insertId: "1js75mhfxsms2s"
  labels: {
    channel: "preview"
    querystream: "GOOGLE_USER"
    source: "JSON_RESPONSE_VALIDATION"
  }
  logName: "projects/dgts-ms/logs/actions.googleapis.com%2Factions"
  receiveTimestamp: "2021-06-23T16:30:34.716623120Z"
  resource: {
    labels: {
      action_id: "actions.intent.MAIN"
      project_id: "dgts-ms"
      version_id: ""
    }
    type: "assistant_action"
  }
  severity: "ERROR"
  textPayload: "MalformedResponse at expected_inputs[0].input_prompt.rich_initial_prompt.items[0].simple_response: 'display_text' must be set or 'ssml' must have a valid display rendering"
  timestamp: "2021-06-23T16:30:34.697658487Z"
  trace: "projects/673240905465/traces/ABwppHGJLdsyOfRWpPsiMCDIB4sdyXCNbxwL8F5I-4X6fkdatTTApjW67xXHhOSVIO1ICOPt8A"
}


#14

Are you using audio files? If yes, make sure to set a display text before returning the response:

this.$googleAction.displayText('some text');
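For context, in a handler this could look like the following sketch (the intent, prompt text, and audio URL are made up, not from the poster's app):

```javascript
// Jovo v3 handler fragment (sketch): pair an SSML-audio response with a
// display text so screen surfaces (smart displays, phones) can render it.
LAUNCH() {
  const ssml = '<audio src="https://example.com/audio/welcome.mp3"/>';
  this.$googleAction.displayText('Welcome! What can I do for you today?');
  this.ask(ssml);
},
```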

#15

Yeah, this was the problem. Now it works. We are using audio in SSML for every response. My question now is: does this display text appear everywhere? Because we are using APL for Alexa, and we will use image cards or basic cards for the Google Action. So if I use display text, can I also use image cards? Or basic cards?