Transferring digital cinema production pipelines to cloud infrastructures

Jathavan Sriram, Head of R&D Division, NABLAVFX GmbH, Hamburg 

With growing computation and storage demands in digital cinema production, production companies face a variety of problems. These problems begin early in the production pipeline, at the point of image acquisition on set, where high-end, single-sensor CMOS digital cinematography cameras (e.g. Arri Alexa, Sony F65, RED Epic) with Bayer pattern colour filter arrays are used.
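For readers unfamiliar with the step, the sketch below illustrates debayering (demosaicing) a single-sensor mosaic into a full RGB image using OpenCV's built-in bilinear interpolation. Real camera raw formats (ARRIRAW, F65RAW, REDCODE) add proprietary containers, log curves and compression, and the BGGR layout chosen here is an assumption.

```python
import cv2
import numpy as np

# Demosaic a single-sensor Bayer mosaic into a full RGB image.
# The random array stands in for one decoded raw frame.
h, w = 2160, 3840
mosaic = np.random.randint(0, 2**16, (h, w), dtype=np.uint16)

# Assumed BGGR layout; the actual pattern depends on the sensor.
rgb = cv2.cvtColor(mosaic, cv2.COLOR_BayerBG2BGR)
print(rgb.shape)  # (2160, 3840, 3)
```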

After recording, the large undebayered raw files must be stored and backed up for postproduction and, above all, must meet the requirements of digital film and media insurance policies. These insurance contracts often dictate that the entire image data be backed up on digital magnetic tape (e.g. LTO), which raises the financial burden on production companies.

Subsequently, these large amounts of raw data have to be debayered in dailies pipelines for approval and offline editorial workflows. With image resolutions increasing up to 8K and the intention to use high frame rate (HFR) recording, the computational expense of debayering and encoding grows dramatically, outpacing Moore's law.
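A back-of-the-envelope calculation shows the scale involved; the sensor geometry, bit depth and frame rate below are illustrative assumptions, not the specification of any particular camera.

```python
# Rough uncompressed raw data rate for an 8K HFR Bayer sensor.
# All figures are illustrative assumptions.
width, height = 8192, 4320    # photosite grid
bits_per_photosite = 16       # raw sample depth
fps = 60                      # HFR recording

bytes_per_frame = width * height * bits_per_photosite // 8
rate_gb_s = bytes_per_frame * fps / 1e9
print(f"{rate_gb_s:.1f} GB/s")                               # ~4.2 GB/s
print(f"{rate_gb_s * 3600 / 1e3:.1f} TB per recorded hour")  # ~15.3 TB
```

Every one of those terabytes then has to be debayered and encoded, which is what makes the workload a natural candidate for elastic cloud capacity.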

Today, postproduction facilities tackle these challenges by buying ever more hardware, which concurrently increases the energy and maintenance costs for those devices. Scalability therefore becomes another central problem of modern digital workflows.

Our research can be summarized as the goal of improving storage, computation and delivery pipelines. To tackle these issues, in this paper we discuss the research and development process of “PHI”, a cloud-based pipeline tool that successively shifts these manifold challenges into cloud infrastructure.

After identifying the four main challenges of storage, computation, delivery and the scalability of each of these factors, a constructive research approach of building a solution prototype was taken, involving quantitative research and the collection of numerical data specific to storage and computation.

Before building a prototype pipeline, storage and computation requirements were identified from several feature projects the pipeline team was working on. This involved comparing computation durations on local hardware against various cloud architectures (e.g. Amazon EC2 instances). The measured data was used to heuristically develop several algorithms that balance the computation and storage load in the cloud depending on the raw data captured. Additionally, the storage and computation costs of the infrastructure played a central role in modeling and specifying the “PHI” cloud-based pipeline.
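The balancing algorithms themselves are not published here, so the following is only a hypothetical sketch of the general idea: size a pool of debayer instances from the volume of captured raw footage, a measured per-instance throughput and a cost ceiling. All names and constants are illustrative assumptions, not measured PHI values.

```python
import math

# Hypothetical sizing heuristic for a cloud debayer worker pool.
THROUGHPUT_GB_PER_H = 120.0   # raw data one instance debayers per hour (assumed)
DAILIES_DEADLINE_H = 2.0      # time budget until dailies are due (assumed)
MAX_INSTANCES = 64            # cost ceiling agreed with the production (assumed)

def instances_needed(raw_volume_gb: float) -> int:
    """Smallest worker count that clears the footage before the deadline."""
    needed = math.ceil(raw_volume_gb / (THROUGHPUT_GB_PER_H * DAILIES_DEADLINE_H))
    return min(max(needed, 1), MAX_INSTANCES)

print(instances_needed(1800.0))  # 1.8 TB of raw footage -> 8 instances
```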

Since movie production takes place in a fast-paced environment, several field tests were carried out in real on-set production environments on projects of the Filmakademie Baden-Württemberg in Germany to prove that our solution pipeline is feasible. The cloud solution was evaluated using recorded performance data and questionnaires completed by the postproduction department.

The results can be divided into technical and financial ones. It was evident from the outset that a pipeline making heavy use of cloud-based architectures not only relies on but presupposes a stable and fast internet connection.

The massively parallel debayering of raw files in the cloud led to substantial computational speed-ups and corresponding financial benefits.

In particular, a pipeline configuration involving “on the fly” upload (recorded data was uploaded to the infrastructure as soon as it became available to the DIT, without waiting until shooting was completed for the day) reduced the time between image acquisition and dailies creation by 90%. Recorded raw files were immediately processed by newly provisioned cloud nodes, and the resulting files were distributed to the director, editorial and the DOP.
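The upload mechanism is not detailed here; the sketch below shows one plausible shape of such an “on the fly” uploader that polls the DIT's offload folder and pushes finished clips to an S3-compatible object store. The bucket name, paths and file extension are hypothetical.

```python
import time
from pathlib import Path

import boto3  # AWS SDK; any S3-compatible store behaves similarly

BUCKET = "phi-raw-ingest"             # hypothetical ingest bucket
WATCH_DIR = Path("/mnt/dit/offload")  # hypothetical DIT offload folder

s3 = boto3.client("s3")
uploaded = set()

while True:
    for clip in sorted(WATCH_DIR.glob("*.ari")):  # e.g. ARRIRAW frames
        # In production one would verify the offload checksum first;
        # here we simply skip files that have already been pushed.
        if clip in uploaded:
            continue
        s3.upload_file(str(clip), BUCKET, clip.name)
        uploaded.add(clip)
    time.sleep(10)  # simple polling; a filesystem watcher is an alternative
```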

Another technical benefit was the easier and faster distribution of high-quality debayered files to VFX departments and, later, to online editing and colour grading pipelines. This further enables the creation of robust, globally distributed production pipelines.

Storage costs were lowered by 32% in comparison to a classical pipeline storing data on SAN infrastructure and LTO magnetic tapes, while backup security increased at the same time.

The constructive research, backed by the actual creation and use of a cloud-based pipeline, indicates that cloud-based architectures are feasible for feature film productions. As our research shows, a key element of a sophisticated pipeline spanning production to postproduction is rich metadata. For the future inclusion of preproduction, these assets have to be marked with identifying metadata; we propose the well-defined UMID (SMPTE ST 330) for this purpose. Furthermore, the use of such pipelines increases productivity and lowers the overall financial burden postproduction companies have to bear. It even supports the idea of a “greener”, more environmentally friendly movie production through shared computational and storage resources.
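As an illustration of the proposal, the sketch below assembles a basic 32-byte UMID. Note that the universal label constant is a placeholder to be checked against SMPTE ST 330, which also defines several permitted methods for generating the material number; the random method used here is only one illustrative choice.

```python
import os
import struct

# Assemble a basic 32-byte UMID (SMPTE ST 330):
# 12-byte universal label + 1 length byte + 3-byte instance number
# + 16-byte material number.
UMID_UL = bytes.fromhex("060a2b340101010501010d20")  # placeholder; verify against ST 330

def make_basic_umid(instance: int = 0) -> bytes:
    length = b"\x13"                               # 0x13 = 19 bytes follow
    instance_no = struct.pack(">I", instance)[1:]  # 3-byte instance number
    material_no = os.urandom(16)                   # illustrative generation method
    umid = UMID_UL + length + instance_no + material_no
    assert len(umid) == 32
    return umid

print(make_basic_umid().hex())
```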

With future developments such as the growing number of high-resolution secondary displays (e.g. tablet computers), the complexity and effort of encoding also increase drastically, as can be seen with High Efficiency Video Coding (HEVC), currently being standardized by the JCT-VC. Massively parallel, cloud-based finishing pipelines are therefore key to tackling these developments.
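The split-encode-join pattern behind such a finishing pipeline can be sketched locally with ffmpeg and libx265; in the cloud, each chunk would be encoded on its own node rather than in a local process pool. File names and encoder settings are illustrative.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

SRC = "master.mov"  # illustrative mezzanine file

def encode(seg: Path) -> Path:
    """Encode one chunk to HEVC; in the cloud this runs on its own node."""
    out = seg.with_suffix(".mp4")
    subprocess.run(["ffmpeg", "-i", str(seg), "-c:v", "libx265",
                    "-crf", "22", str(out)], check=True)
    return out

if __name__ == "__main__":
    # 1. Split into ~10 s chunks without re-encoding (cuts fall on keyframes).
    subprocess.run(["ffmpeg", "-i", SRC, "-c", "copy", "-f", "segment",
                    "-segment_time", "10", "seg%04d.mov"], check=True)

    # 2. Encode all chunks in parallel; the pool stands in for cloud nodes.
    segments = sorted(Path(".").glob("seg*.mov"))
    with ProcessPoolExecutor() as pool:
        outputs = list(pool.map(encode, segments))

    # 3. Rejoin the encoded chunks with ffmpeg's concat demuxer.
    Path("list.txt").write_text("".join(f"file '{o}'\n" for o in outputs))
    subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0", "-i", "list.txt",
                    "-c", "copy", "hevc_master.mp4"], check=True)
```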

We recommend future research especially in the realm of connecting metadata and digital asset management in cloud infrastructures, to pave the way for pipelines that can encompass the entire movie-making process from preproduction to postproduction.

Link to the talk: https://vimeo.com/56999068