ShowImageCard -- updating for ongoing playback?



As noted elsewhere I’m starting to play with the showCard mechanism for devices with screens. The simplest-but-least-pretty solution is to update the card only when I start a new track or when the user explicitly invokes one of the “tell me about this track” intents, which I’ve already implemented.

But it’d be nice to update the screen dynamically when a new track comes up. That’s two separate issues, depending on which playback mode I’m in.

For normal playback, where I’m in control of the tracks, it should theoretically be as simple as calling this.showImageCard() from either the playback-started or playback-finished handler, passing the description of the track we are starting. I haven’t made that work yet, but I’m playing around with it.
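To make the idea concrete, here is a minimal sketch, assuming Jovo v1’s 'AUDIOPLAYER' handler section, its 'AlexaSkill.PlaybackStarted' key, and the this.showImageCard(title, content, imageUrl) signature. The cardFor() helper and this.currentTrack are hypothetical stand-ins for however your skill records the track it last enqueued:

```javascript
// cardFor() is hypothetical: shape your stored track metadata into
// the three arguments showImageCard() expects.
function cardFor(track) {
    return {
        title: track.title,
        content: track.artist + ' - ' + track.album,
        imageUrl: track.artUrl,
    };
}

const handlers = {
    'AUDIOPLAYER': {
        'AlexaSkill.PlaybackStarted': function () {
            const card = cardFor(this.currentTrack);
            // Whether Alexa honors a card returned from this event is the
            // open question in this thread; this only shows where the
            // call would go.
            this.showImageCard(card.title, card.content, card.imageUrl);
        },
    },
};
```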

For the live stream, I get the descriptive info by calling a REST routine, and there’s no event when the stream moves from track to track. A solution might be a periodic event that wakes my code up to issue the showImageCard() call. But I’d need a Jovo object to call it against, and it would have to be called via a path that will process the new card request… and that’s starting to sound like it needs deeper knowledge of Jovo than I have.
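The framework-independent piece of that polling idea can at least be sketched: detect when the stream’s now-playing metadata changes between polls, and only then try to refresh the card. The REST endpoint and the Jovo plumbing are exactly the unresolved parts, so they stay as placeholders here:

```javascript
// Returns a closure that remembers the last-seen track id and reports
// whether a newly fetched metadata object represents a track change.
function makeChangeDetector() {
    let lastTrackId = null;
    return function (meta) {
        // meta is whatever your REST routine returns; assume it carries
        // some id (or title) that identifies the current track.
        if (meta.id === lastTrackId) {
            return false; // same track, nothing to update
        }
        lastTrackId = meta.id;
        return true; // new track: time to issue showImageCard()
    };
}

// Hypothetical usage: poll every 30 s, refresh only on a change.
// const changed = makeChangeDetector();
// setInterval(async () => {
//     const meta = await fetchNowPlaying(); // your REST call
//     if (changed(meta)) refreshCard(meta); // needs a Jovo context
// }, 30000);
```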

Has anyone already done this kind of automatic device-screen update in Jovo, and if so do you have any tips on how to approach it?


Interesting, I didn’t know this works. I always thought the media player only supported album art.


Cards can certainly be changed any time you start a new MP3 file. And I’ve been able to put up new cards as part of my response to intents which then return to playing music. (In fact if I do that while Amazon’s music player is doing its scrolling-lyrics thing, my card stays up until the end of the song, “on top of” the lyrics.)

But I haven’t yet figured out how to make it work from Jovo’s handling of the AudioPlayer events – PlaybackStarted, PlaybackNearlyFinished (which, as you know, now occurs right after PlaybackStarted, but which still seems to be the right place to add the next track to the queue), or PlaybackFinished. It may be that Jovo isn’t reading this part of the response structure when passing those events through… or there may be other reasons, including errors in my own code.

But “if it happens, it must be possible.” At least on the Alexa family. I haven’t seriously started trying to get the Google Home side of my skill running and approved.

It’s the sort of thing I would design into any audio player that had a screen – PC-based players have been doing it for decades now, when not displaying “visualizations” based on the audio data – so I have some slight hope it will be a general capability. And some hope that Jovo can, or will, make it possible (and preferably straightforward).


I just did a fast check, dropping debugging calls to this.showSimpleCard() into AlexaSkill.PlaybackStarted, .PlaybackNearlyFinished, .PlaybackFinished, and .PlaybackStopped.

None of them appeared on screen.

Not having a good Alexa-native app handy to try the equivalent experiment on, I can’t tell whether this is because Alexa doesn’t use that data when returned from these intents, or because Jovo doesn’t do so.


My understanding of the Alexa playback events is that you cannot return any voice or display commands from them. This is not a Jovo issue but an Alexa issue - I’ve coded this both ‘raw’ and with Jovo.


When you say you show a card any time you start an MP3, I assume you are launching the MP3 based on a voice command - i.e., your skill has the context. Once you send the command to play the audio and display the card, your skill loses the context and the audio starts playing ‘in the background’. What I have never figured out is how to keep the skill in the foreground. An observation I have made is that if you respond with speech and a display card, you lose the card within seconds. If you just respond with a display card, the display card will stay up a lot longer.

You can see this in action with ‘The KEXP Archive’ - start a show playing, then say ‘Alexa, ask The KEXP Archive what’s playing’. Alexa will announce the track playing and display it, but once she is done announcing it, the card disappears. Then say ‘Alexa, ask The KEXP Archive to display the playlist’. If the Alexa gods then smile, you will get a list of the last ten songs played, and it will stay up for 30+ seconds, longer if you scroll through it with your touchscreen.


One more thing. APL has a timer event that you can set up and respond to. What I haven’t tried yet is using this as a way to poll an API to update the Now Playing information. APL also appears to have a lot of capabilities for letting the ‘front end’ handle interactions. The issue is that you almost end up having two versions of your skill: one for devices that support APL, and one for devices that do not. One of these days…
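For reference, one reading of that “timer” is APL’s document-level onMount handler running a repeated, delayed SendEvent command, which comes back to the skill as an Alexa.Presentation.APL.UserEvent request the handler can answer with fresh Now Playing data. This is an untested sketch; the 30-second interval, the repeat count, and the ‘refreshNowPlaying’ argument are made up for illustration:

```json
{
  "type": "APL",
  "version": "1.4",
  "onMount": [
    {
      "type": "Sequential",
      "repeatCount": 120,
      "commands": [
        {
          "type": "SendEvent",
          "delay": 30000,
          "arguments": ["refreshNowPlaying"]
        }
      ]
    }
  ],
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "Text",
        "id": "nowPlayingText",
        "text": "${payload.nowPlaying.track}"
      }
    ]
  }
}
```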


Thanks for the info, @CraigH.

I’ve been putting both audio and spoken output on the same intent response for a while, and it seems to be working now. (When I started, one tended to step on the other, but apparently someone fixed that.)

Most of my cards do get speech responses at the same time, and some of them stay up longer than others. I’m not sure why. There’s some evidence that it’s related to whether the intent was invoked via named or name-free interaction, since the built-in name-free next and previous stay up longer.

Keeping the skill in the foreground… Hm. In theory, making it an Ask ought to keep it in control until that times out. You might have to turn off the reprompt, or prompt with an empty string… I may try that when I get a chance; as you say, it would at least give the user more chance to read the card.
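For what it’s worth, that untried experiment would look something like this in Jovo v1 terms, assuming this.ask(speech, reprompt); whether Alexa accepts an effectively empty reprompt is exactly the part that would need testing:

```javascript
// Untested sketch: show the card, then keep the session open with an
// ask() so the user has time to read it. showCardAndHold() is a
// hypothetical helper, not a Jovo API.
function showCardAndHold(jovo, track) {
    jovo.showImageCard(track.title, track.artist, track.artUrl);
    // The ask() keeps the mic session alive until it times out; the
    // near-empty reprompt is the part that may or may not be allowed.
    jovo.ask('Now playing ' + track.title + '.', ' ');
}
```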

Of course, what we really want is to just improve Alexa’s native sound-player card, which has the touch controls. Since that has a space for a large image (though it defaults to a greyed-out icon), I presume there is a way to do so in native Alexa… but it may not be exposed for easy access.

APL timers: Well, we already have places where Jovo code needs to test which platform it’s running on (though I’ve started trying to use subroutines to encapsulate those so my main-line code doesn’t have to deal with them). One could conditionally add or remove APL markup, I suppose. But yeah, it might be better to have our own APL parser to be used for non-Amazon device support… even if it can’t support everything in APL and winds up just stripping some of it out. Somewhat similar to how Jovo has handled the model files…
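Encapsulating that platform test in a subroutine might look like the following sketch. The isAlexaSkill() check follows Jovo v1’s API, and the directive shape is Alexa’s real Alexa.Presentation.APL.RenderDocument; the helper itself (and its token value) is hypothetical, and how the directive then gets attached to the response is left to the caller, since that varies by Jovo version:

```javascript
// Return an APL RenderDocument directive on Alexa, or null on
// platforms (e.g. Google Action) where no APL markup should be sent.
function aplDirectiveIfSupported(jovo, aplDocument, datasources) {
    if (typeof jovo.isAlexaSkill === 'function' && jovo.isAlexaSkill()) {
        return {
            type: 'Alexa.Presentation.APL.RenderDocument',
            token: 'nowPlayingToken', // illustrative token value
            document: aplDocument,
            datasources: datasources,
        };
    }
    return null; // caller skips APL entirely for this platform
}
```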