Introduction
This document describes live-stream–only workflows where Videolinq produces real-time TTML subtitle feeds and customers push those feeds into downstream platforms that accept TTML (or TTML-derived) sidecar subtitles for multilingual live playback.
The guidance is aligned with Videolinq’s TTML styling and delivery model and focuses on real-time ingestion, not file-based VOD processing.
TTML feed → agent → platform → player
End-to-End Overview
Live Signal Flow
Live video ingest into Videolinq (RTMP or SRT).
Audio extraction + STT with optional real-time translation into multiple languages.
TTML generation per language and publication as secure, continuously updated endpoints.
Customer-side agents (Node.js / Python) fetch the TTML streams in real time.
Agents optionally apply styling and push TTML into supported platforms as sidecar subtitle tracks.
Players render selectable live subtitles per language.
Videolinq generates and publishes TTML. Customers control how that TTML is delivered to their chosen playback or packaging stack.
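The sketch below condenses this flow into a minimal Python polling agent; the feed URLs, token, and downstream ingest endpoint are hypothetical placeholders, and Steps A–D later in this document cover each stage in more detail.

```python
# Minimal agent loop sketch: fetch TTML per language, optionally restyle, forward downstream.
# URLs, token, and endpoint names below are hypothetical placeholders.
import time
import requests

FEEDS = {
    "en": "https://example.videolinq.net/ttml/en",   # hypothetical per-language feed URLs
    "es": "https://example.videolinq.net/ttml/es",
}
INGEST = "https://packager.example.com/live/subtitles/{lang}"  # hypothetical ingest endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}           # placeholder credential

def restyle(ttml_doc: str) -> str:
    # Optional hook: adjust tts:* styling attributes here before forwarding (see Styling).
    return ttml_doc

while True:
    for lang, url in FEEDS.items():
        ttml = requests.get(url, headers=HEADERS, timeout=10).text   # fetch latest TTML
        requests.post(INGEST.format(lang=lang),
                      data=restyle(ttml).encode("utf-8"),
                      headers={"Content-Type": "application/ttml+xml"},
                      timeout=10)
    time.sleep(1)  # polling interval; tune against the latency guidance later in this document
```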
TTML Output Model
One TTML feed per language (for example: /ttml/en, /ttml/es, /ttml/ar).
Continuous XML updates with time-aligned subtitle cues.
Supports rich TTML styling attributes (font size, color, alignment, regions).
Designed for sidecar subtitle delivery rather than in-band SDI captioning.
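For illustration, the snippet below parses a small, made-up TTML fragment with Python's standard library and prints each cue's timing and color, showing the time-aligned cue structure an agent receives; the fragment is not captured from a real Videolinq feed.

```python
# Inspect cues in a TTML document: timing plus tts:* styling attributes.
import xml.etree.ElementTree as ET

TTML_NS = "http://www.w3.org/ns/ttml"
TTS_NS = "http://www.w3.org/ns/ttml#styling"

sample = """<tt xmlns="http://www.w3.org/ns/ttml"
               xmlns:tts="http://www.w3.org/ns/ttml#styling">
  <body><div>
    <p begin="00:00:01.000" end="00:00:03.500" tts:color="white">Hello world</p>
    <p begin="00:00:03.500" end="00:00:06.000" tts:color="yellow">Second cue</p>
  </div></body>
</tt>"""

root = ET.fromstring(sample)
for p in root.iter(f"{{{TTML_NS}}}p"):
    color = p.get(f"{{{TTS_NS}}}color", "default")
    print(p.get("begin"), "->", p.get("end"), f"[{color}]", "".join(p.itertext()))
```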
Styling
Styling is applied by modifying TTML attributes (for example, tts:color, tts:fontSize, tts:textAlign, tts:origin) either:
directly in the agent before forwarding, or
downstream if the target platform supports TTML styling overrides.
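A minimal sketch of the first option, restyling in the agent before forwarding: it overrides a few tts:* attributes on every cue in a TTML document held in memory. The specific values are illustrative branding choices, not Videolinq defaults.

```python
# Apply house-style overrides to every cue before forwarding the TTML.
import xml.etree.ElementTree as ET

TTML_NS = "http://www.w3.org/ns/ttml"
TTS_NS = "http://www.w3.org/ns/ttml#styling"
ET.register_namespace("", TTML_NS)     # keep expected prefixes on re-serialization
ET.register_namespace("tts", TTS_NS)

def apply_house_style(ttml_doc: str) -> str:
    root = ET.fromstring(ttml_doc)
    for p in root.iter(f"{{{TTML_NS}}}p"):
        p.set(f"{{{TTS_NS}}}color", "#FFFFFF")      # illustrative values only
        p.set(f"{{{TTS_NS}}}fontSize", "120%")
        p.set(f"{{{TTS_NS}}}textAlign", "center")
    return ET.tostring(root, encoding="unicode")
```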
Supported Consumer Platforms
The following platforms can be used as real-time TTML consumers when integrated via agents or APIs.
Media Servers & Origins
Wowza Streaming Engine
Supports live subtitle sidecar workflows for HLS/DASH.
Recommended pattern: agent pulls TTML from Videolinq and injects it as a live subtitle track or via a custom module.
Common use cases: private CDN, enterprise streaming, hybrid on-prem/cloud.
Wowza Video
Cloud-managed workflow built on Wowza infrastructure.
TTML can be supplied as external subtitle tracks and exposed to players that support TTML or converted formats.
Supported Players
JW Player
Supports XML-based subtitle tracks for live streams.
Integration pattern: agent republishes TTML to a JW-compatible subtitle endpoint or converts to a JW-supported live text track while preserving timing and language separation.
Video.js
Open-source HTML5 player with plugin ecosystem.
TTML can be consumed directly via plugins or via conversion to a compatible live text track while maintaining TTML timing semantics.
Kaltura Player
Enterprise video platform with live subtitle support.
TTML feeds can be attached as external subtitle tracks to live entries, enabling multilingual selection during playback.
Bitmovin Player
Native support for TTML and EBU-TT-D in live DASH/HLS workflows.
Preferred when maintaining full TTML styling and region layout is required.
OTT Packagers & Pipelines
Unified Streaming (live packaging)
Accepts TTML (IMSC / EBU-TT-D profiles) as live subtitle inputs.
Agent pattern: write or push TTML into the packager’s live subtitle ingestion interface.
Custom HLS / DASH Pipelines
Many live packaging stacks accept TTML as a sidecar input.
Agents act as the bridge between Videolinq’s TTML endpoints and the packager’s live subtitle interface.
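Where a packager or media server consumes sidecar subtitles from a watched directory rather than an HTTP API, a simple bridge is an atomic file drop: write the updated TTML to a temporary name and rename it into place so the consumer never reads a half-written document. The directory and file naming below are assumptions, not the interface of any specific packager.

```python
# Atomically publish the latest TTML document into a watched directory.
import os
import tempfile

def publish_to_watched_dir(ttml_doc: str, lang: str, watch_dir: str = "/var/subtitles"):
    target = os.path.join(watch_dir, f"live_{lang}.ttml")   # hypothetical naming scheme
    fd, tmp_path = tempfile.mkstemp(dir=watch_dir, suffix=".tmp")
    with os.fdopen(fd, "w", encoding="utf-8") as tmp:
        tmp.write(ttml_doc)
    os.replace(tmp_path, target)  # atomic rename: readers see the old or new file, never a partial one
```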
Videolinq Samples
Videolinq provides downloadable Python and Node.js sample agents to demonstrate how to work with live TTML feeds.
Step A — Connect to Videolinq TTML Endpoints
Enable TTML output in the Videolinq UI or API.
Retrieve the secure TTML URLs for each language.
Agents authenticate using your Videolinq credentials or API token.
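A sketch of this step, assuming a hypothetical listing endpoint that returns the per-language TTML URLs and a bearer-token header; the actual endpoint paths and response shape come from the Videolinq UI, API documentation, or the sample agents.

```python
# Step A sketch: authenticate and collect per-language TTML feed URLs.
# The listing endpoint and response shape are assumptions for illustration.
import requests

API_TOKEN = "YOUR_VIDEOLINQ_API_TOKEN"                                     # placeholder credential
LIST_URL = "https://api.example.videolinq.net/v1/streams/STREAM_ID/ttml"   # hypothetical endpoint

resp = requests.get(LIST_URL, headers={"Authorization": f"Bearer {API_TOKEN}"}, timeout=10)
resp.raise_for_status()

# Assume a response such as {"feeds": {"en": "https://.../ttml/en", "es": "https://.../ttml/es"}}
feeds = resp.json()["feeds"]
for lang, url in feeds.items():
    print(lang, "->", url)
```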
Step B — Stream & Buffer TTML
Agents open a persistent HTTPS connection.
Incoming TTML cues are read and buffered in near real time.
Buffer size is configurable to balance latency vs stability.
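A minimal sketch of the streaming-and-buffering loop, assuming the feed can be read line by line over a long-lived HTTPS response and that a bounded in-memory buffer is enough; the sample agents expose the buffer size as configuration.

```python
# Step B sketch: hold a persistent HTTPS connection and buffer incoming cue data.
from collections import deque
import requests

BUFFER_SIZE = 50                      # configurable: smaller = lower latency, larger = more stability
buffer = deque(maxlen=BUFFER_SIZE)    # oldest entries drop off once the buffer is full

def stream_feed(url: str, token: str):
    with requests.get(url, headers={"Authorization": f"Bearer {token}"},
                      stream=True, timeout=(5, None)) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines(decode_unicode=True):
            if line:                  # skip keep-alive blank lines
                buffer.append(line)   # in practice, accumulate lines into complete TTML documents
```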
Step C — Apply Styling (Optional)
Modify TTML styling attributes inline.
Apply branding rules consistently across languages.
Preserve timing and region metadata.
Step D — Push to Target Platform
Typical push mechanisms include:
HTTP POST / PUT to a subtitle ingestion API.
Writing to a watched directory or socket consumed by a media server.
Publishing to a live subtitle endpoint exposed by a player or origin.
Each language feed is handled independently, enabling parallel multilingual delivery.
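A sketch of the first mechanism, an HTTP POST of one TTML document per language to a hypothetical ingestion URL; the URL template, headers, and error handling will differ per platform.

```python
# Step D sketch: push one TTML document per language to a subtitle ingestion API.
import requests

INGEST_TEMPLATE = "https://origin.example.com/live/{lang}/subtitles"  # hypothetical URL template
HEADERS = {"Content-Type": "application/ttml+xml"}

def push_ttml(lang: str, ttml_doc: str):
    resp = requests.post(INGEST_TEMPLATE.format(lang=lang),
                         data=ttml_doc.encode("utf-8"),
                         headers=HEADERS, timeout=10)
    resp.raise_for_status()           # surface ingestion errors to the agent's retry logic
```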
Multilingual Live Streams
Videolinq produces parallel TTML feeds, one per language.
Agents fetch all feeds concurrently.
Downstream platforms expose each feed as a selectable subtitle track.
Players allow viewers to switch languages during the live stream.
This model avoids re-encoding video per language and scales efficiently for global audiences.
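Because the feeds are independent, one worker per language is usually sufficient. The sketch below uses a thread pool with hypothetical feed URLs and reuses the push_ttml sketch from Step D; a real agent would replace the simple polling loop with the Step B streaming logic.

```python
# Run one worker per language so feeds are fetched and forwarded in parallel.
from concurrent.futures import ThreadPoolExecutor
import time
import requests

FEEDS = {                                           # hypothetical per-language endpoints
    "en": "https://example.videolinq.net/ttml/en",
    "es": "https://example.videolinq.net/ttml/es",
    "ar": "https://example.videolinq.net/ttml/ar",
}

def run_language(lang: str, url: str):
    while True:
        ttml = requests.get(url, timeout=10).text   # fetch latest TTML for this language
        push_ttml(lang, ttml)                       # forward downstream (see the Step D sketch)
        time.sleep(1)

with ThreadPoolExecutor(max_workers=len(FEEDS)) as pool:
    for lang, url in FEEDS.items():
        pool.submit(run_language, lang, url)
```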
Latency, Reliability, and Fallbacks
Latency
Caption delay is determined by STT latency, agent buffering, and platform ingestion behavior.
Agents should avoid excessive buffering to preserve real-time experience.
Resilience
Agents should handle reconnects and short network interruptions gracefully.
TTML feeds resume automatically once connectivity is restored.
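A common way to implement this is exponential backoff around the streaming loop, sketched below; stream_feed stands in for the Step B connection logic and is assumed to raise a requests exception (or return) when the connection drops.

```python
# Reconnect with exponential backoff after network interruptions.
import time
import requests

def run_with_reconnect(url: str, token: str, max_backoff: float = 30.0):
    backoff = 1.0
    while True:
        try:
            stream_feed(url, token)        # Step B streaming loop; exits only on disconnect
            backoff = 1.0                  # clean return: reset the retry delay
        except requests.RequestException as exc:
            print(f"feed interrupted ({exc}); retrying in {backoff:.0f}s")
            time.sleep(backoff)
            backoff = min(backoff * 2, max_backoff)
```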
Fallback Formats
If a platform does not accept TTML directly, agents may convert TTML to WebVTT or another supported live subtitle format while maintaining timing integrity.
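A minimal conversion sketch, assuming cue times in TTML clock-time form (HH:MM:SS.mmm), which map directly onto WebVTT timestamps; a production agent would also handle tick- or offset-based times and account for styling that WebVTT cannot express.

```python
# Convert TTML cues to a WebVTT document, preserving begin/end timing.
import xml.etree.ElementTree as ET

TTML_NS = "http://www.w3.org/ns/ttml"

def ttml_to_webvtt(ttml_doc: str) -> str:
    root = ET.fromstring(ttml_doc)
    lines = ["WEBVTT", ""]
    for p in root.iter(f"{{{TTML_NS}}}p"):
        begin, end = p.get("begin"), p.get("end")
        text = "".join(p.itertext()).strip()
        if begin and end and text:
            lines.append(f"{begin} --> {end}")   # assumes HH:MM:SS.mmm clock times
            lines.append(text)
            lines.append("")
    return "\n".join(lines)
```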
Summary
Videolinq delivers real-time TTML subtitle feeds per language for live streams.
Customers use sample agents to fetch, style, and forward TTML as sidecar subtitles.
Platforms such as Wowza, JW Player, Video.js, Kaltura Player, Bitmovin, and OTT packagers can consume these feeds for multilingual live playback.
This architecture supports scalable, low-latency, and standards-based live accessibility without altering the video signal itself.