Live Encode from Playlist Source and Live Source
For 24/7 internet radio/TV station operation I want to use a playlist with multiple pre-encoded files to simulate a live event. At predefined times, e.g. at 6am, the playlist should switch to a live encoding source from another encoder for one hour and afterwards continue playing the pre-encoded files in the playlist. This should work similarly to the server-side playlists (.WSX) available in Windows Media Services. The encoder should only pass through the pre-encoded file sources or the live source from another encoder without re-encoding anything. Maybe you could extend the existing Smooth Streaming Simulator to support ASX or SMIL playlist input and a live encoding source.
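To illustrate the idea, an ASX metafile already expresses a sequential mix of file entries and a live stream reference; something like the sketch below (all URLs and file names are placeholders) could describe the overnight-files/live-hour/daytime-files rotation. Note that ASX has no wall-clock scheduling of its own, so hitting 6am exactly would need timing logic on the encoder side (which the .WSX format does offer via its scheduling elements):

```xml
<asx version="3.0">
  <title>24/7 Station Schedule</title>
  <!-- pre-encoded files played back-to-back overnight (passed through, not re-encoded) -->
  <entry><ref href="overnight-block-1.wmv"/></entry>
  <entry><ref href="overnight-block-2.wmv"/></entry>
  <!-- around 6am the playlist reaches the live source from another encoder -->
  <entry><ref href="mms://live-encoder.example.com/morning-show"/></entry>
  <!-- after the live hour, resume the pre-encoded content -->
  <entry><ref href="daytime-block-1.wmv"/></entry>
</asx>
```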
PS: I need this in Expression Encoder 4 SP1 (late summer 2010). Thanks :-)
Doug Drinka commented
I'd like to see this support re-encode as well. This would facilitate a single high-resolution encode streamed live to EC2 GPU instances for re-encode to multiple additional bitrates.
This is crucial for many scenarios.
An EE4 live stream source should feed into an EE4 live encoding session WITHOUT the need for any transcoding or re-encoding if the input and output formats match.
In addition I need to simulate live streams from a file source WITHOUT the need for transcoding or re-encoding if the input and output formats match.
Ideally an EE4 live encoding session could take .asx or .smil playlists as input. The playlist should be able to contain a mix of already-encoded files as well as live sources from other EE4 live encoders, and pass the content through with minimal processing, maybe just adding overlays, closed captioning, script commands, and additional audio streams for multilanguage support, where each language's audio is live-encoded by a different encoder and fed into the same downstream live encoding session.
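To sketch the multilanguage case, a SMIL playlist could group one primary video feed with several live audio feeds, one per translator. The element layout below is only illustrative (the URLs are placeholders, and EE4 would have to define the exact subset of SMIL it accepts); `systemLanguage` is borrowed from SMIL's content-selection attributes to tag each audio stream:

```xml
<smil>
  <body>
    <par>
      <!-- primary live audio/video feed from the source encoder -->
      <video src="http://source-encoder.example.com/live/video"/>
      <!-- translator audio feeds, each live-encoded by a separate EE4 instance -->
      <audio src="http://translator-de.example.com/live/audio" systemLanguage="de"/>
      <audio src="http://translator-fr.example.com/live/audio" systemLanguage="fr"/>
      <audio src="http://translator-ja.example.com/live/audio" systemLanguage="ja"/>
    </par>
  </body>
</smil>
```

The downstream encoder would play the `<par>` children in parallel, buffering each feed as described below so the video and all translation audio tracks line up in the final output stream.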
This would make it possible to have off-site simultaneous translators (located in different parts of the world) live-encoding their own audio streams with EE4. Those live audio streams could then be fed into a downstream EE4 live encoder running somewhere in the cloud, which combines the primary audio/video feed with the multiple simultaneous-translation audio feeds and creates the final live output stream with video and multilanguage audio. The translators around the world would watch the primary audio/video feed from the source encoder and use low-latency audio codecs to encode. It would be OK to buffer all the streams for several seconds on the final downstream encoder that is combining all the sources into a single live stream.
Of course, a Silverlight player template that dynamically populates a dropdown box with the available audio languages, based on the number of languages it finds in the live source playlist file (.asx or .smil), would be the icing on the cake!