Jovo v4: Feature Request Thread


#1

Hey all!

We’re currently planning our next major release: Jovo v4. :sparkles:

We want to involve the community as much as possible in this process, so please share some thoughts on how you’d like to see Jovo improved. Wild ideas appreciated!

  • If you could wave a magic wand, what would you like to see us build?
  • What feature is currently missing where you had to build workarounds?
  • What would make your life as a Jovo developer more efficient (and fun!)?

Can’t wait to start brainstorming with all of you :sunny:



#3

Here are a few things we’re working on:

  • Improved project structure for more complex projects
  • Jovo CLI refactoring and the ability to hook into its commands
  • Improved config for both project.js and config.js
  • Improved deployment processes
  • Easier ways to use components

#4
  • Support for Dialogflow Input & Output Contexts
  • Support for Dialogflow Events

#5
  • Language Model Versions and diffs
  • NLU Testing Support

#6

Component

Model merging:
The models of all used components should be merged with each other, with more care than a plain lodash merge. Two cases:

  1. enhance an intent if only utterances were added
  2. mark conflicting intents (for example, if two intents could be triggered by the same phrase)
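The two merge cases above could be sketched roughly like this. This is a hypothetical illustration, assuming a simplified model shape (`{ intents: [{ name, phrases }] }`) rather than the real Jovo model format:

```typescript
interface Intent {
  name: string;
  phrases: string[];
}

interface Model {
  intents: Intent[];
}

interface MergeResult {
  merged: Model;
  conflicts: string[]; // human-readable conflict descriptions
}

function mergeModels(base: Model, component: Model): MergeResult {
  const conflicts: string[] = [];
  // Clone the base model's intents, indexed by name, so inputs stay untouched.
  const byName = new Map(
    base.intents.map((i) => [i.name, { ...i, phrases: [...i.phrases] }])
  );
  // Index every phrase to the intent that owns it, to detect collisions.
  const phraseOwner = new Map<string, string>();
  for (const intent of base.intents) {
    for (const phrase of intent.phrases) {
      phraseOwner.set(phrase.toLowerCase(), intent.name);
    }
  }

  for (const intent of component.intents) {
    let target = byName.get(intent.name);
    if (!target) {
      target = { name: intent.name, phrases: [] };
      byName.set(intent.name, target);
    }
    for (const phrase of intent.phrases) {
      const owner = phraseOwner.get(phrase.toLowerCase());
      if (owner && owner !== intent.name) {
        // Case 2: two different intents share a phrase -> flag, don't merge.
        conflicts.push(
          `"${phrase}" is used by both "${owner}" and "${intent.name}"`
        );
        continue;
      }
      // Case 1: same intent, only new utterances -> enhance it.
      phraseOwner.set(phrase.toLowerCase(), intent.name);
      if (!target.phrases.includes(phrase)) {
        target.phrases.push(phrase);
      }
    }
  }

  return { merged: { intents: [...byName.values()] }, conflicts };
}
```

A real implementation would also have to merge entities/slots and decide how conflicts are surfaced (build warning vs. hard error).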

Overwrite i18n keys without losing them when updating a component

Jovo for Flutter

Mycroft Platform


#7

Increased support for Actions on Google Conversations:

  • Unit testing for AoG Conversations.
  • jovo deploy updates the Google Conversations No_Match handler to Jovo’s Unhandled() handler.

#8

The Jovo model supports creating a Twilio language model


#9

A better integration with Google Cloud Functions:

  • Some templates & examples that do not deploy to AWS Lambda
  • and use a GCP database
  • Make jovo deploy able to deploy to Google Cloud

#10

Hi! I recently built my very first Jovo app; before that, I had always used the Alexa SDK for Node.js.
We made a skill and an action for kids, and it was dubbed by a kid too, so we don’t send any text back to the user, only a sequence of audio files.
In order to do that, we “created” a special syntax for our i18n JSON file and had to parse each string manually. This is an example:
"speech": "lets_start _{{audioInfo.name}} <1.5s> *listenState.shall_we_play_it"

  • lets_start is an audio file stored in S3; lets_start is appended to the base URL of the bucket
  • _{{audioInfo.name}} is replaced by i18next with a valid link to an audio file. The _ states that the resulting string is already a URL and doesn’t need to be appended to the base URL
  • <1.5s> adds a break which lasts for 1.5s
  • *listenState.shall_we_play_it is a key contained in the JSON, and it represents an array. The * states that the speech builder has to fetch a random entry from that array. Each entry of the array contains a string, like lets_start, which will be appended to the base URL of the bucket.

I don’t know if this is a common use case, but I’ve never seen this kind of feature implemented at the framework level, and it would be nice if it were added.
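For reference, the token rules above could be resolved into SSML with something like the following sketch. All names here (resolveSpeech, lookup, baseUrl) are hypothetical, and it assumes i18next has already interpolated {{…}} placeholders before this step; the real app may also append a file extension to each audio token:

```typescript
type I18nLookup = (key: string) => string | string[];

function resolveSpeech(
  template: string,
  lookup: I18nLookup,
  baseUrl: string
): string {
  const parts = template.split(/\s+/).map((token) => {
    const breakMatch = token.match(/^<([\d.]+)s>$/);
    if (breakMatch) {
      // <1.5s> becomes an SSML break of that duration.
      return `<break time="${breakMatch[1]}s"/>`;
    }
    if (token.startsWith('_')) {
      // _... is already a full URL (placeholders were resolved earlier).
      return `<audio src="${token.slice(1)}"/>`;
    }
    if (token.startsWith('*')) {
      // *key points at an array in the i18n JSON; pick one entry at random
      // and append it to the bucket's base URL.
      const entries = lookup(token.slice(1));
      const arr = Array.isArray(entries) ? entries : [entries];
      const pick = arr[Math.floor(Math.random() * arr.length)];
      return `<audio src="${baseUrl}/${pick}"/>`;
    }
    // Plain token: append to the bucket's base URL.
    return `<audio src="${baseUrl}/${token}"/>`;
  });
  return parts.join('');
}
```

For example, resolveSpeech('lets_start <1.5s> *keys', lookup, 'https://bucket.example.com') would yield one audio tag per token with a break in between.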