

Join Videon at DEMUXED 2022

October 12-13, 2022 | San Francisco, CA

If you plan on attending DEMUXED, we'd love to connect in person. Our EVP of Product and Engineering, Lionel Bringuier, will be giving a talk called "Taking the headache out of timed metadata for live video."

Feel free to request a meeting if you live in San Francisco or will be in the area during the conference.

We'd love to connect with you!


Videon Speaker

Lionel Bringuier

Chief Product Officer

Lionel has 25 years of experience in the telecommunications, broadcast, and media industry, managing real-time and mission-critical services in voice and video applications. He has launched several successful products and services for companies including AWS and Anevia.

Lionel's Talk

Taking the headache out of timed metadata for live video

The trend toward merging video and timed metadata is now mainstream. However, inherent challenges remain when it comes to merging multiple live video feeds with multiple sources of timed metadata in the media and entertainment (M&E) space: captioning, digital rights management, and syncing multiple live streams from multiple cameras, to name a few. These challenges create barriers for live betting, sports, and event operators looking to deliver better viewing experiences to their end users.

Why is it a challenge?
Today, many live video operators use HTTP-based OTT workflows to send video feeds from the camera to the Content Delivery Network (CDN). However, these workflows are subject to latency of seven seconds or more. They also prevent operators from processing live streams and leveraging data without encoding and transcoding them, raising the cost and overall complexity of the workflow. In addition, these workflows generally stamp each frame with an SDI VITC timecode rather than UTC, creating synchronization discrepancies across metadata sources and camera feeds in different locations and degrading the overall viewing experience.
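To illustrate the discrepancy, here is a minimal sketch (not Videon's implementation; the function name and the fixed day-start reference are illustrative assumptions) showing why a VITC timecode alone cannot pin a frame to an absolute moment: VITC carries only time-of-day, so resolving it to UTC requires out-of-band knowledge of the date and the house clock's offset.

```python
from datetime import datetime, timezone, timedelta

def vitc_to_utc(vitc: str, fps: float, day_start_utc: datetime) -> datetime:
    """Resolve an SDI VITC timecode (HH:MM:SS:FF) to an absolute UTC time.

    VITC only carries time-of-day, so the absolute reference (which day,
    and what the house clock is set to) must be supplied out of band --
    exactly the ambiguity that per-frame UTC timestamps avoid.
    """
    hh, mm, ss, ff = (int(x) for x in vitc.split(":"))
    return day_start_utc + timedelta(hours=hh, minutes=mm, seconds=ss + ff / fps)

# Illustrative values: a 30 fps feed whose house clock started at UTC midnight.
midnight = datetime(2022, 10, 12, tzinfo=timezone.utc)
print(vitc_to_utc("13:45:10:15", 30.0, midnight))
# Two cameras whose house clocks disagree by even one frame will
# produce different absolute times for the same real-world instant.
```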

How did we solve this?
KLV, a SMPTE data encoding standard also used by the military to embed data in live video feeds, combines metadata with geospatial visualization, offering a new way to enhance the user experience and enabling new use cases such as precise synchronization and timestamping of event highlights across multiple live video streams. As a practical example, a Precision Time Stamped wall clock embedded in live video streams can enable effective sports adjudication, betting, and gamification, among other uses.
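The KLV format itself is simple: each item is a Key-Length-Value triplet with a 16-byte SMPTE universal key, a BER-encoded length, and the payload. Below is a minimal sketch (not Videon's implementation; the example key and the helper names are illustrative assumptions) that packs a UTC wall-clock timestamp, expressed as microseconds since the Unix epoch, into a KLV packet.

```python
import struct
import time

def ber_length(n: int) -> bytes:
    """Encode a length in BER short/long form, as SMPTE 336 KLV uses."""
    if n < 128:
        return bytes([n])  # short form: length fits in one byte
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body  # long form: count + bytes

def klv_packet(key: bytes, value: bytes) -> bytes:
    """Assemble a KLV triplet: 16-byte universal key, BER length, value."""
    assert len(key) == 16, "SMPTE universal keys are 16 bytes"
    return key + ber_length(len(value)) + value

# A SMPTE universal-label-style key, shown for illustration only; real
# deployments use keys registered for their specific metadata set.
EXAMPLE_KEY = bytes.fromhex("060E2B34020B01010E01030101000000")

# UTC wall clock as an 8-byte big-endian count of microseconds since
# the Unix epoch -- one common convention for precision timestamps.
timestamp_us = int(time.time() * 1_000_000)
packet = klv_packet(EXAMPLE_KEY, struct.pack(">Q", timestamp_us))
# packet layout: 16-byte key + 1-byte length + 8-byte value = 25 bytes
```

Because the timestamp travels inside the video stream itself, every downstream consumer sees the same per-frame UTC reference regardless of which camera or location produced the feed.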