Unit Testing Hacks and Best Practices


Hi all! I know that a lot of people here use unit tests pretty heavily and I’d love to discuss a few things:

  • How do you structure your tests (and test groups)?
  • How much are you trying to cover?
  • Are there any things/hacks that you’ve been using lately that really improved your unit testing process?

I personally think that the conversation.$user.$data feature that @AlexSwe built a few weeks ago is great and saves a lot of time: https://www.jovo.tech/docs/unit-testing#user-data

Unit testing Jovo 4 Components

We use the conversation.$user.$data attributes so we don’t have to create MASSIVE tests that route users through an entire conversation just to set the attributes. Instead, we now set the conversation.$user.$data attributes at the start of each test, which saves ~50 lines of code per test.

Here’s an example of what we did before:

test('should update note and repeat order summary', async () => {
      const conversation = testSuite.conversation();

      const launchRequest = await testSuite.requestBuilder.launch();
      await conversation.send(launchRequest);

      const firstNameRequest = await testSuite.requestBuilder.intent('FirstNameIntent', { name });
      await conversation.send(firstNameRequest);

      const orderTypeIntentRequest = await testSuite.requestBuilder.intent('OrderTypeIntent', { orderType });
      await conversation.send(orderTypeIntentRequest);

      const addressEntryRequest = await testSuite.requestBuilder.intent('AddressEntryIntent', { addressLine: address, city });
      await conversation.send(addressEntryRequest);

      const phoneIntentRequest = await testSuite.requestBuilder.intent('PhoneNumberIntent', { phoneNumber: phone });
      await conversation.send(phoneIntentRequest);

      const specialNoteYesIntentRequest = await testSuite.requestBuilder.intent('YesIntent');
      await conversation.send(specialNoteYesIntentRequest);

      const specialNoteEntryIntentRequest = await testSuite.requestBuilder.intent('SpecialNoteIntent', { note: specialNote });
      await conversation.send(specialNoteEntryIntentRequest);

      // ...assertions on the final response...
      await conversation.clearDb();
});

With the new conversation.$user.$data attributes, here’s how the same test looks:

test('should update note and repeat order summary', async () => {
        const conversation = testSuite.conversation();
        conversation.$user.$data.name = 'Pat';
        conversation.$user.$data.orderType = 'deliver';
        conversation.$user.$data.addressLine1 = '10 lane avenue';
        conversation.$user.$data.city = 'columbus';
        conversation.$user.$data.phone = '1234567890';
        conversation.$user.$data.specialNote = 'this is a test';
        conversation.$user.$data.businessName = 'voice dry cleaner dot com';
        conversation.$user.$data.NeedsWelcomed = false;
        conversation.$user.$data.repeat = true;

        // ...send the note-update request and assert on the response...
        await conversation.clearDb();
});

Thank you, Jovo, for the massive time savings you add to our test-driven development lifecycle!


Great question @jan, and thanks to @Voice_First for sharing how you do it!

I absolutely see that your approach (“The app is in state X; now do one thing and assert that it arrives in state Y”) is in the true spirit of a unit test, and makes sense if

  • you can be sure that X is a state that your voice app will naturally arrive at, and
  • it has no practical value to assert that the voice app arrives in state X through a sequence of requests.

When I developed ‘Mau-Mau’, I used a similar approach, but back then with session attributes and in Postman, because I didn’t know about Jovo and the testing functionality wasn’t as mature as it is today.
For the next deep voice app I develop, I’ll also manipulate the user object directly.

Recently I’ve been working more with relatively flat voice apps that have few states to go through before you get a result. For these cases, my approach is to use unit testing to assert that the entire dialog happens as planned. In this sense, it’s more of an end-to-end test.
To allow for variation in the response texts, I test against the translation keys from i18n: I set the locale of my unit tests to a dummy value so that i18n doesn’t resolve the keys, and then assert that the response string contains the respective key(s):
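Here’s a minimal sketch of what such an assertion can look like. The key-matching helper is plain string logic; the commented Jovo usage below it mirrors the snippets earlier in this thread, but the ‘keys-only’ dummy locale, the `welcome.greeting` key, and the `getSpeech()` call are assumptions for illustration, not confirmed API details:

```javascript
// Plain string helper: does the (unresolved) speech output contain
// every expected i18n translation key?
function speechContainsKeys(speech, keys) {
  return keys.every((key) => speech.includes(key));
}

// Hypothetical usage inside a Jest test, following the TestSuite
// pattern from the examples above:
//
// const conversation = testSuite.conversation({ locale: 'keys-only' });
// const launchRequest = await testSuite.requestBuilder.launch();
// const response = await conversation.send(launchRequest);
// expect(speechContainsKeys(response.getSpeech(), ['welcome.greeting'])).toBe(true);
```

Because the dummy locale has no translation resources, i18n falls back to returning the key itself, so the speech output is just a string of keys that the helper can check.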

In the case of a dialog with multiple turns, I include multiple assertions in one test:
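A sketch of that multi-turn shape, with one assertion per turn. The response map and intent names below are made up for illustration, and a synchronous stub stands in for `conversation.send()` (which is awaited in a real Jovo test):

```javascript
// Stand-in for conversation.send(): maps a request to the (unresolved)
// i18n keys the app would speak. In a real test these come from Jovo.
const stubResponses = {
  LAUNCH: 'welcome.greeting ask.orderType',
  OrderTypeIntent: 'ask.address',
  AddressEntryIntent: 'order.summary',
};

function send(requestName) {
  return stubResponses[requestName];
}

// One test, several turns, one assertion per turn (Jest-style shape):
function dialogTest() {
  const turns = [
    ['LAUNCH', 'ask.orderType'],
    ['OrderTypeIntent', 'ask.address'],
    ['AddressEntryIntent', 'order.summary'],
  ];
  for (const [request, expectedKey] of turns) {
    const speech = send(request);
    if (!speech.includes(expectedKey)) {
      throw new Error(`turn ${request} did not contain ${expectedKey}`);
    }
  }
  return 'all turns passed';
}
```

In a real test each `send()` is an awaited `conversation.send(request)` built with `testSuite.requestBuilder`, and each check is an `expect(...)` on the response.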

This might seem inefficient, but with the number of tests I’ve written so far, it hasn’t been particularly slow. And I’ve found it a great way to make sure that certain paths through my voice apps work as intended.

One challenge with this approach is asserting the correct behavior in the case of APL, where parts of what the app says to the user come from the SpeakItem command.

Looking forward to hearing your thoughts on this, and/or other approaches! :smiley: