Request for new functionality
Description
What you would like to accomplish: I want to generate an adaptive bitrate HLS stream with variations of fMP4 video and audio streams.

How this might work: The Transcoder API job settings need to be modified to allow for the pairing of mux streams (each mux stream containing a single elementary stream). For example: if I produce three video streams ("video-low", "video-medium", "video-high") and two audio streams ("audio-low" and "audio-high"), then I should be able to pair "video-low" with "audio-low", "video-medium" with "audio-high", and "video-high" with "audio-high" to achieve my desired bitrate ladder. As far as I can tell, this currently isn't possible with the Transcoder API unless you mux the audio and video into combined segment files using the ts container.

The produced manifest might look something like this, but instead it currently comes out looking like this:
The manifest is putting both audio streams into the same group, when instead (if I understand correctly) they should be in separate groups and then each video stream should be assigned the appropriate group.
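To illustrate the grouping described above, here is a sketch of a multivariant playlist with separate audio groups per pairing. The stream names, URIs, bandwidths, and codec strings are hypothetical, not actual Transcoder API output:

```
#EXTM3U
#EXT-X-VERSION:7

# One audio group per rendition, so each variant can pick its pairing
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio-low",NAME="English",DEFAULT=YES,URI="audio-low.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio-high",NAME="English",DEFAULT=YES,URI="audio-high.m3u8"

# Each video variant references the audio group it is paired with
#EXT-X-STREAM-INF:BANDWIDTH=800000,CODECS="avc1.640015,mp4a.40.2",AUDIO="audio-low"
video-low.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2400000,CODECS="avc1.64001f,mp4a.40.2",AUDIO="audio-high"
video-medium.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,CODECS="avc1.640028,mp4a.40.2",AUDIO="audio-high"
video-high.m3u8
```

The current behavior instead places both audio renditions in a single GROUP-ID that every `#EXT-X-STREAM-INF` entry references.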
If applicable, reasons why alternative solutions are not sufficient: Traditionally, you can use the ts container to mux audio and video streams together. Nowadays, HLS can handle separated audio and video streams with the fMP4 container. In fact, Apple recommends this practice (see link below). Separate audio streams yield storage savings (A: you don't need to save the same audio data in every video variant, and B: if you're also targeting DASH, you can use the same m4s files for both instead of duplicating the video data) and improved CDN efficiency. Another reason to use fMP4 is to be able to utilize HEVC. fMP4 does not support multiple elementary streams in one mux stream, so we are required to pair separate streams anyway... although it doesn't seem like there is a way to do that currently. At the moment, the Transcoder API simply selects the first audio stream as the default.

Other information (workarounds you have tried, documentation consulted, etc.): Item 9.5 of Apple's "HTTP Live Streaming (HLS) authoring specification for Apple devices" recommends this practice.
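A sketch of how the requested pairing might be expressed in the job config. The `elementaryStreams`, `muxStreams`, and `manifests` sections follow the existing JobConfig shape, but the `pairedAudioMuxStream` field is purely hypothetical, invented here to show one possible way the API could associate a video mux stream with an audio group:

```json
{
  "elementaryStreams": [
    { "key": "video-low",  "videoStream": { "h264": { "bitrateBps": 800000, "frameRate": 30 } } },
    { "key": "audio-low",  "audioStream": { "codec": "aac", "bitrateBps": 64000 } },
    { "key": "audio-high", "audioStream": { "codec": "aac", "bitrateBps": 160000 } }
  ],
  "muxStreams": [
    { "key": "mux-video-low",  "container": "fmp4", "elementaryStreams": ["video-low"],
      "pairedAudioMuxStream": "mux-audio-low" },
    { "key": "mux-audio-low",  "container": "fmp4", "elementaryStreams": ["audio-low"] },
    { "key": "mux-audio-high", "container": "fmp4", "elementaryStreams": ["audio-high"] }
  ],
  "manifests": [
    { "fileName": "manifest.m3u8", "type": "HLS",
      "muxStreams": ["mux-video-low", "mux-audio-low", "mux-audio-high"] }
  ]
}
```

With something like this, the manifest generator would have enough information to emit one audio GROUP-ID per pairing instead of lumping every audio rendition into one group and defaulting to the first.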