
Edit By Reference

In a traditional edit workflow, all the frames of the new edit need to be rendered into a new file which is then written out to storage. For many simple edits, the actual content can be largely unchanged. However, the essence needs to be duplicated in order to have a file which can be played back directly.

The TAMS store holds the media in very small segments, typically a few seconds in duration. This opens up interesting possibilities when editing content where the frames are unchanged, allowing us to reference the existing media instead of duplicating it. This not only means that less storage space is required by de-duplicating the content, but time is saved at the export stage from the edit system, as the time taken to render a new asset is no longer required. It is instead replaced with an API call.

To re-use an existing segment within a new flow, register it as a segment for the new flow, referencing the object ID it was originally created with. The store will recognize the ID and create the link in the background.

When declaring the reused segment, the timerange should be updated to the location where the segment exists on the new flow timeline. If the timing has changed, the new segment will also need the ts_offset field to be set, to ensure that the player understands there is a difference between the timing data in the media segment and the new timeline.
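As a sketch, the registration body for a reused segment might be built as follows. The `object_id`, `timerange`, and `ts_offset` field names follow the TAMS segment model described above, and timestamps use the TAMS `<seconds>:<nanoseconds>` form; the helper function itself and the example IDs are hypothetical:

```python
def make_segment_reference(object_id, new_timerange, original_start, new_start):
    """Build a registration body for reusing an existing media object
    as a segment of a new flow.

    ts_offset records the difference between the timing written inside
    the media object and its position on the new flow timeline.
    """
    offset = new_start - original_start  # whole seconds, for illustration
    return {
        "object_id": object_id,      # same ID the object was created with
        "timerange": new_timerange,  # where it sits on the new timeline
        "ts_offset": f"{offset}:0",  # <seconds>:<nanoseconds>
    }

# A segment originally at 120s on the source flow, placed at 10s on the edit:
body = make_segment_reference(
    object_id="a2f53c57-0bd6-44be-b968-b353ba8a5637",  # hypothetical ID
    new_timerange="[10:0_15:0)",
    original_start=120,
    new_start=10,
)
# body["ts_offset"] is "-110:0"
```

The resulting body would then be POSTed to the new flow's segments endpoint; no media bytes are transferred.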

For edits where some content on the timeline is new or has changed, the content will need to be rendered out and uploaded to the store as new segments. It is possible to mix both existing and new segments on a new flow timeline. This means that a simple edit with a dissolve between two unchanged clips would only need the frames for the dissolve to be rendered and uploaded as new segments. All the other content would be references to existing segments.

App Note 15 in the TAMS API repo has more information about rendering edited content back to a TAMS store. It also covers how to reference content in a TAMS store within the OpenTimelineIO format to describe content edits:
https://github.com/bbc/tams/blob/aaefef3126606a02b73e4c9e24761364a5d5586a/docs/appnotes/0015-using-tams-in-opentimelineio.md#rendering-back-to-tams

Frame Accurate Working

In a typical TAMS workflow, segments are usually between 1 and 10 seconds in duration. This means that editing at segment granularity will not be frame accurate. While this may be acceptable for some use cases, it will not be for most workflows.

There are two options available to carry out frame accurate edits within a TAMS environment:

  1. Within the segment metadata, it is possible to define that only part of the segment should be used within the new flow timeline. This is done using the sample_offset and sample_count fields at the segment level to define which part of the segment is required. This approach requires the reading system to understand all the details of the segment supplied by the API and be able to play just part of a segment.
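The calculation behind the first option can be sketched as follows. The sample_offset and sample_count field names come from the TAMS segment model; the helper and its frame-number arithmetic are illustrative, assuming a segment holds a contiguous run of frames:

```python
def trim_fields(segment_start_frame, segment_frame_count, cut_in_frame, cut_out_frame):
    """Return (sample_offset, sample_count) selecting the part of a segment
    that falls inside a frame-accurate cut [cut_in_frame, cut_out_frame).

    The segment holds frames [segment_start_frame,
    segment_start_frame + segment_frame_count); the cut is assumed to
    overlap the segment.
    """
    first = max(cut_in_frame, segment_start_frame)
    last = min(cut_out_frame, segment_start_frame + segment_frame_count)
    return first - segment_start_frame, last - first

# A 125-frame segment starting at frame 250; the edit begins at frame 300:
offset, count = trim_fields(250, 125, 300, 1000)
# offset == 50 (skip the first 50 frames), count == 75 (play the remainder)
```

A reading system that honours these fields can then start playback mid-segment without any media being re-rendered.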

  2. For workflows which need to work at a segment level (for example converting to an HLS stream), the only option is to render just the required part of each boundary segment. By creating this new, smaller sub-segment and uploading it to the store, the player will receive a continuous stream of whole segments to play back. Only the boundary segments need to be created; all other segments can remain as references.
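The second option amounts to classifying each segment touched by the edit as either reusable whole or needing a re-render. A minimal sketch, assuming fixed-duration segments aligned to the source timeline (real segment boundaries would come from the API):

```python
def plan_segments(edit_start, edit_end, seg_duration):
    """Classify each segment overlapped by the edit range [edit_start,
    edit_end) as 'reference' (fully inside the edit, reuse by reference)
    or 'render' (a boundary segment that must be re-rendered in part).

    Times are in seconds; segments are assumed to be fixed-duration and
    aligned to multiples of seg_duration.
    """
    plan = []
    first_seg = int(edit_start // seg_duration)
    last_seg = int((edit_end - 1e-9) // seg_duration)
    for i in range(first_seg, last_seg + 1):
        seg_start, seg_end = i * seg_duration, (i + 1) * seg_duration
        whole = seg_start >= edit_start and seg_end <= edit_end
        plan.append((i, "reference" if whole else "render"))
    return plan

# An edit from 3.2s to 14.0s over 4-second segments: segments 0 and 3 are
# boundary segments to re-render; segments 1 and 2 are plain references.
```

Only the two boundary sub-segments carry new media; everything between them is an API call.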

Clearly, the first option should be the preferred route, as it is efficient in terms of both storage and time. However, it is recognized that this will not always be possible, hence the second approach.

Working With Multiple Flows

When creating content through an edit by reference workflow, it is important that all the original content used within the edit has matching flows. This is because a source-level edit needs to be applied to each of the flows for a source. If one of the pieces of original content does not have all the flows present, the resulting edit will have holes in its timeline.

The next challenge is matching flows from the different pieces of original content. The current approach would be to do the following:

  1. For the first section of content required, copy the technical details of all the flows from the first source to create a new set of flows and sources to represent the edit.

  2. While creating the new flows, create a hash of the core technical parameters (for example, resolution, bitrate, codec, and frame rate) and store it.

  3. Copy the segments from the flows in the first piece of content into the newly created destination flows.

  4. For the second section of required content, read the available flows and generate the same technical parameter hash as before. Using this hash, match the original flows to the target flows.

  5. Copy the segments from the original flows to the matched target flows.

  6. Repeat steps 4 and 5 for each section of required source content.
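The hashing in steps 2 and 4 above can be sketched as follows. The choice of parameters and of SHA-256 over a canonical JSON encoding is an illustrative design decision, not something mandated by TAMS, and the flow dictionaries are hypothetical:

```python
import hashlib
import json

def flow_parameter_hash(flow):
    """Hash the core technical parameters that must agree for segments to
    be interchangeable between a source flow and a destination flow."""
    core = {k: flow.get(k) for k in ("codec", "frame_width", "frame_height",
                                     "frame_rate", "bit_rate")}
    # sort_keys gives a canonical encoding, so equal parameters hash equally
    return hashlib.sha256(json.dumps(core, sort_keys=True).encode()).hexdigest()

source_flow = {"id": "flow-a", "codec": "video/h264",
               "frame_width": 1920, "frame_height": 1080,
               "frame_rate": "25/1", "bit_rate": 8000000}
target_flow = {"id": "flow-b", "codec": "video/h264",
               "frame_width": 1920, "frame_height": 1080,
               "frame_rate": "25/1", "bit_rate": 8000000}

# Matching hashes indicate segments from source_flow can be copied into
# target_flow; a mismatch on any core parameter produces a different hash.
assert flow_parameter_hash(source_flow) == flow_parameter_hash(target_flow)
```

Hashing only the parameters that affect interchangeability (and not, say, flow IDs or labels) is what allows flows from unrelated source content to be matched to the same destination flow.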

When creating the new segments, ensure that the new timerange is specified for where each segment should exist on the new flow timeline. Additionally, the ts_offset should be declared to indicate the difference between the timing data in the media files and the new flow timeline position.