Google Assistant request is INTENT, so NLP.js is not called and nluData is not created

v4
google-assistant

#1

Take a Google Assistant action with a single intent and a single Type that collects all text. This sends a Request of type INTENT that already contains intent and entities properties.

As a result, the Jovo interpretation module does not call NlpjsNlu.processText:
this.$input.entities contains the full utterance (the GA Type collects all/any text).
this.$input does not contain an nluData property.
this.$input contains the intent and entities from GA, not from NLP.js.

  • Adding INTENT to supportedTypes does not make NlpjsNlu run.
  • Calling NlpjsNlu.processText directly fails.
  • NlpjsNlu is not added to GoogleAssistantPlatform.plugins; it is {}, even when added in the config.

How can we call NlpjsNlu to parse the complete utterance supplied by GA, and thus get an nluData property containing the NLP.js intent and entities?
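
For context, my understanding is that an NLU plugin only runs when the input carries text and its type is in supportedTypes, roughly like this (paraphrased pseudologic, not the actual Jovo source):

// Paraphrased: how I understand the interpretation step decides to run NLU.
const text = jovo.$input.text;
const supported = this.config.input.supportedTypes.includes(jovo.$input.type);
if (text && supported) {
  // This would populate jovo.$input.nlu (the missing nluData).
  jovo.$input.nlu = await this.processText(jovo, text);
}

The GA request has type INTENT and does carry text, yet adding 'INTENT' to supportedTypes changes nothing, so something else must be short-circuiting this step.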


#2

Hey @David_MacDougall,

Thank you for the details! Could you show how you’re adding NLPjs to the Google Assistant platform? Pasting the relevant part of your app.ts code would be ideal.


#3

Hi Jan,

First I declare nlpjs. I've added INTENT here, but it makes no difference to GoogleAssistantPlatform whether it is included or not.

import { NlpjsNlu } from '@jovotech/nlu-nlpjs';
import { LangEn } from '@nlpjs/lang-en';

const nlpjs = new NlpjsNlu({
  input: {
    supportedTypes: ['INTENT', 'TEXT', 'TRANSCRIBED_SPEECH', 'SPEECH'],
  },
  languageMap: {
    en: LangEn,
  },
  preTrainedModelFilePath: './model.nlp',
  useModel: false,
  modelsPath: './models',
});

Then I use it in both GoogleAssistantPlatform and CorePlatform. I've since found out NLP.js is the default NLU for CorePlatform, so I'm not sure it's required there, but I've tried it in one, then the other, then both: it never shows up in the GoogleAssistantPlatform plugins property, and nluData never appears in this.$input.

const app = new App({

  components: [EchoComponent, GlobalComponent],

  plugins: [
    // Add Jovo plugins here
    new GoogleAssistantPlatform({
      plugins: [
        nlpjs
      ],
    }),
    new CorePlatform({
      plugins: [
        nlpjs
      ],
    }),
    new GoogleSheetsCms({
      caching: false,
      serviceAccount: ServiceAccount,
      spreadsheetId: '1Oe_Z.............G3NcUY13Q',
      sheets: {
        translations: new TranslationsSheet(),
      },
    }),
  ],
});

Does this help?


#4

To the second point, here is my code attempting to call processText() directly.

I can see that the first argument (jovo) is only used to get the locale, which is never where it's looking, because the Google request is formatted differently, so it always defaults to 'en'. In this case that is fine and should not cause an error, but the call still fails, even though it's the same function you call in the interpretation module.

I realise this is an attempted workaround, but why can’t I call it directly?

app.hook('after.interpretation.end', async (jovo) => {
  const utterance = jovo.$input.text;

  // all references are good entering this next line
  const nluProcessResult = await jovo.$plugins.CorePlatform.plugins.NlpjsNlu.processText(jovo, utterance);
  // fails with: Cannot read properties of undefined (reading 'toJSON')
  if (nluProcessResult) {
    jovo.$input.nlu = nluProcessResult;
    jovo.$entities = nluProcessResult.entities || {};
  }
});
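
For reference, here is a more defensive version of the same hook that at least surfaces an empty or uninitialized plugin object instead of throwing (a sketch; the guard and warning are my additions, all other names are as above):

app.hook('after.interpretation.end', async (jovo) => {
  const utterance = jovo.$input.text;
  if (!utterance) return;

  const nlpjsNlu = jovo.$plugins?.CorePlatform?.plugins?.NlpjsNlu;
  // Guard against the plugin object being empty or uninitialized.
  // This won't catch failures inside processText itself, but it narrows
  // down where things go wrong.
  if (typeof nlpjsNlu?.processText !== 'function') {
    console.warn('NlpjsNlu is not initialized on this platform');
    return;
  }

  const nluProcessResult = await nlpjsNlu.processText(jovo, utterance);
  if (nluProcessResult) {
    jovo.$input.nlu = nluProcessResult;
    jovo.$entities = nluProcessResult.entities || {};
  }
});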

#5

@jan can you provide a hint here? Am I going about this all wrong? Should I try to implement nlp.js as a separate npm module and use its NLU outside of Jovo and the Jovo plugins? I've been working on other sections, but I want to implement the NLU side of things in the coming week. Do you have any pointers? Thanks
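
For example, the standalone route would look roughly like this (a sketch based on the NLP.js quickstart for @nlpjs/basic, untested in this project; the hook and $input names are the same as in my earlier snippet):

import { dockStart } from '@nlpjs/basic';

// Boot NLP.js on its own, outside the Jovo plugin system.
// (Top-level await; otherwise wrap this setup in an async function.)
const dock = await dockStart({ use: ['Basic'] });
const nlp = dock.get('nlp');
nlp.addLanguage('en');
// Register the same training data as in the Jovo model, e.g.:
nlp.addDocument('en', 'yes', 'YesIntent');
nlp.addDocument('en', 'no', 'NoIntent');
await nlp.train();

app.hook('after.interpretation.end', async (jovo) => {
  const utterance = jovo.$input.text;
  if (!utterance) return;
  const result = await nlp.process('en', utterance);
  // Map the NLP.js result onto $input so handlers can read it;
  // note that NLP.js returns entities as an array, not a keyed map.
  jovo.$input.nlu = {
    intent: { name: result.intent },
    entities: result.entities,
    raw: result,
  };
});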


#6

Hey there! I just tried the following:

// app.ts

const nlpjs = new NlpjsNlu({
  input: {
    supportedTypes: ['INTENT', 'TEXT', 'TRANSCRIBED_SPEECH', 'SPEECH'],
  },
  languageMap: {
    en: LangEn,
  },
});

// ...
plugins: [
    new GoogleAssistantPlatform({ plugins: [nlpjs] }),
    // ...
  ],

When testing this in the Actions on Google Simulator and monitoring it in the Jovo Debugger, I can see that $input is updated in the Lifecycle view.

This is what the input looks like in my case:

{
  "intent": {
    "name": "YesIntent"
  },
  "entities": {},
  "raw": {
    "locale": "en",
    "utterance": "yes",
    "languageGuessed": false,
    "localeIso2": "en",
    "language": "English",
    "explanation": [
      {
        "token": "",
        "stem": "##exact",
        "weight": 1
      }
    ],
    "classifications": [
      {
        "intent": "YesIntent",
        "score": 1
      },
      {
        "intent": "NoIntent",
        "score": 0
      }
    ],
    "intent": "YesIntent",
    "score": 1,
    "domain": "default",
    "entities": [],
    "sourceEntities": [],
    "answers": [],
    "actions": [],
    "sentiment": {
      "score": -0.25,
      "numWords": 1,
      "numHits": 1,
      "average": -0.25,
      "type": "senticon",
      "locale": "en",
      "vote": "negative"
    }
  }
}

Also, logging this.$plugins.GoogleAssistantPlatform?.plugins shows that NLP.js was added to the Google Assistant config.

If this doesn’t work for you, could you create a minimal reproducible repository for us to test?


#7

Yes, but when it is called by the Google Action, not the Debugger, the NLU has already happened in GA, so NLP.js is not called and the input does not have this information.


#8

I did call it using the Google Action. If you keep the Debugger window open, you can still see the requests coming in from other platforms.


#9

Okay, let me double-check, because at first glance your settings etc. look the same as mine, and I have yet to see NLP.js info in this.$input. If it still doesn't work, I will set up a shared repo. Thanks


#10

Creating and adding NlpjsNlu to GA is the same as yours.

Inputting into the GA test console I get this, which uses the GA NLU. There is only one input section:

{
  "type": "INTENT",
  "intent": "matchAny",
  "entities": {
    "allText": {
      "native": {
        "original": "let's build a doorman model",
        "resolved": "let's build a doorman model"
      },
      "id": "let's build a doorman model",
      "value": "let's build a doorman model",
      "resolved": "let's build a doorman model"
    }
  },
  "text": "let's build a doorman model"
}

Inputting into the Debugger directly I get this, which is NLP.js. This is what I was expecting and am trying to get from the GA-supplied free text, and it is what you are seeing:

{
  "intent": {
    "name": "matchAny"
  },
  "entities": {
    "command": {
      "id": "create",
      "resolved": "create",
      "value": "build",
      "native": {
        "start": 6,
        "end": 10,
        "len": 5,
        "levenshtein": 0,
        "accuracy": 1,
        "entity": "command",
        "type": "enum",
        "option": "create",
        "sourceText": "build",
        "utteranceText": "build"
      }
    },
    "model": {
      "id": "door",
      "resolved": "door",
      "value": "doorman",
      "native": {
        "start": 14,
        "end": 20,
        "len": 7,
        "levenshtein": 0,
        "accuracy": 1,
        "entity": "model",
        "type": "enum",
        "option": "door",
        "sourceText": "doorman",
        "utteranceText": "doorman"
      }
    }
  },
  "raw": {
    "locale": "en",
    "utterance": "let's build a doorman model",
    "languageGuessed": false,
    "localeIso2": "en",
    "language": "English",
    "nluAnswer": {
      "classifications": [
        {
          "intent": "matchAny",
          "score": 1
        },
        {
          "intent": "YesIntent",
          "score": 0
        },
        {
          "intent": "CancelIntent",
          "score": 0
        },
        {
          "intent": "NoIntent",
          "score": 0
        }
      ]
    },
    "classifications": [
      {
        "intent": "matchAny",
        "score": 1
      },
      {
        "intent": "YesIntent",
        "score": 0
      },
      {
        "intent": "CancelIntent",
        "score": 0
      },
      {
        "intent": "NoIntent",
        "score": 0
      }
    ],
    "intent": "matchAny",
    "score": 1,
    "domain": "default",
    "sourceEntities": [],
    "entities": [
      {
        "start": 6,
        "end": 10,
        "len": 5,
        "levenshtein": 0,
        "accuracy": 1,
        "entity": "command",
        "type": "enum",
        "option": "create",
        "sourceText": "build",
        "utteranceText": "build"
      },
      {
        "start": 14,
        "end": 20,
        "len": 7,
        "levenshtein": 0,
        "accuracy": 1,
        "entity": "model",
        "type": "enum",
        "option": "door",
        "sourceText": "doorman",
        "utteranceText": "doorman"
      }
    ],
    "answers": [],
    "actions": [],
    "sentiment": {
      "score": 0.375,
      "numWords": 6,
      "numHits": 1,
      "average": 0.0625,
      "type": "senticon",
      "locale": "en",
      "vote": "positive"
    }
  }
}

#11

Okay, hopefully it's sorted now. The project is here: https://github.com/macasas/jason


#12

I found the problem: It’s because you define GoogleAssistantPlatform again in your app.dev.ts. If you comment that out, it works.
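
For reference, the relevant part of app.dev.ts would look something like this (a sketch based on the standard Jovo v4 template; only the commented-out platform line matters):

// app.dev.ts
import { FileDb } from '@jovotech/db-filedb';
import { app } from './app';

app.configure({
  plugins: [
    new FileDb(),
    // new GoogleAssistantPlatform(), // <- the duplicate; commenting it
    // out is what fixes the NLP.js issue, since app.ts already configures
    // the platform with NlpjsNlu attached
  ],
});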

Here’s an example (the requests come from Google Assistant; the Debugger is just used to observe the data). Notice how $input changes twice in the lifecycle.


#13

Thanks very much for sorting this out, it’s a big step forward.

Yeah, I wondered why there were multiples of a lot of items in the Debugger, especially $user and $request. I just accepted that was how it worked 🙂 Never realised it might show I was doing something wrong 🙁