Video Editing Codecs Explained

Reuben Evans • Jun 01, 2021


What is a video codec?

The word “codec” is a mashup of “encode” and “decode.” A video codec is the method used to compress and decompress the video data in your file. Your camera encodes the visual data of a scene into a codec and wraps it in a “container” file (more on that later). Then your computer recognizes that format, decodes it, and turns it back into a moving image on your screen.

Over the years, a ton of video codecs have been released. We’ll highlight the most popular ones for today’s filmmakers who deliver straight to the web.


What are acquisition, mezzanine, and delivery codecs?

Generally speaking, there are three different purposes for codecs: recording in the camera, editing in an app, and exporting to the final destination for viewing. These are known as acquisition, mezzanine, and delivery codecs, respectively.

Typically, the acquisition format in-camera produces large file sizes. Those large files get reformatted into a medium-sized “mezzanine” format your computer can play back more easily. Once you complete your edit, your video is exported to a smaller “delivery” codec, which makes it easier to upload to a streaming site or play back on a variety of devices. A good example of this workflow would be recording ArriRaw or REDCODE RAW in-camera (acquisition), transcoding to ProRes Proxy for editing (mezzanine), and exporting in H.264 (delivery).
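To make those stages concrete, here’s a minimal sketch in Python that shells out to the free FFmpeg tool. It assumes ffmpeg is installed, and since true camera raw like REDCODE usually requires the manufacturer’s tools or your NLE, it also assumes a camera original FFmpeg can decode; all filenames are hypothetical.

```python
import subprocess

# Mezzanine: transcode the camera original to ProRes 422 Proxy so the
# edit plays back easily. In FFmpeg's prores_ks encoder, profile 0 = Proxy.
subprocess.run([
    "ffmpeg", "-i", "camera_original.mov",
    "-c:v", "prores_ks", "-profile:v", "0",
    "-c:a", "pcm_s16le",
    "edit_proxy.mov",
], check=True)

# Delivery: export the finished edit as H.264 for the web.
subprocess.run([
    "ffmpeg", "-i", "final_edit.mov",
    "-c:v", "libx264", "-crf", "18", "-preset", "slow",
    "-c:a", "aac", "-b:a", "192k",
    "delivery.mp4",
], check=True)
```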

In the past few years, these clean-cut distinctions have become a bit blurrier. Editors have been cutting with raw files. Cameras now record with mezzanine or delivery formats. But in general, the idea is to capture as much data as possible in-camera, edit with a computer processor-friendly codec, and deliver with an eye toward balancing video quality with practical file size.


What is a video container file?

Every video file has an extension like .mov or .mxf. These refer to the container, not the codec. A .mov container may hold video encoded with a delivery codec like H.264, or with something else entirely, like ProRes. So don’t get confused and equate a file extension with a codec.
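If you want to see this for yourself, here’s a small Python sketch using ffprobe, the inspection tool that ships with FFmpeg (assumed installed; the filename is hypothetical). It reports which codec actually lives inside a container:

```python
import subprocess

# Ask ffprobe for the codec of the first video stream inside the container.
# Two .mov files can report different codecs, e.g. "h264" vs. "prores".
result = subprocess.run([
    "ffprobe", "-v", "error",
    "-select_streams", "v:0",
    "-show_entries", "stream=codec_name",
    "-of", "default=noprint_wrappers=1:nokey=1",
    "clip.mov",
], capture_output=True, text=True, check=True)

print(f"Container says .mov, but the video codec is: {result.stdout.strip()}")
```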


What is a raw codec?

The highest quality method of acquiring your footage uses a raw codec. A raw codec captures the data that your camera’s image sensor sees without baking it into a format that discards significant data. The post-production team can also adjust settings in post, such as white balance, ISO, highlight rolloff, and contrast.

These raw formats are specific to individual camera manufacturers, and not all camera companies offer them. ArriRaw is an uncompressed codec. REDCODE RAW is compressed in-camera. BRaw from Blackmagic offers smaller camera files but fewer raw settings to adjust in post. ProRes RAW from Apple first appeared on external recorders from Atomos, which allowed cameras to send a raw signal via HDMI, and it has worked its way into some new cameras as well.

The raw advantage

The big advantage to working with raw files is flexibility in post. If you underexpose an image, it is easier to brighten. Video noise reduction typically isn’t applied in-camera, so the images can seem a bit noisier. But when noise reduction is applied in post, the images retain more detail than when it is applied in-camera. Color is much easier to manipulate as well. If the raw file features 16-bit color, you’ll be able to pull green screen keys with ease and apply selective color adjustments with precision.

Raw formats offer the best image quality and flexibility, but the files can be quite large, as is the case with ArriRaw. REDCODE RAW applies sophisticated variable compression in-camera, so you can choose between various levels of quality.

Also, keep in mind that though shooting raw is best for the longevity of your footage, you might start running out of storage very quickly. If that’s the case, you’ll want to look into a Thunderbolt RAID solution for yourself or, if you work on a team and want everyone to access the same data simultaneously, a NAS system like a Jellyfish server.

There’s so much more that could be said about the advantages and challenges of raw capture, but for me, the ability to “save a shot” in post outweighs the drawbacks of larger file sizes.


What is ProRes?

Years ago, Apple introduced the now ubiquitous mezzanine codec Apple ProRes. It featured several quality levels: ProRes 4444, 422 HQ, 422, 422 LT, and 422 Proxy. It was designed as the go-to format to transcode all your footage into upon import into Final Cut Pro.

The idea behind a mezzanine codec like ProRes (or Avid’s DNxHD) is to make editing easier on your computer’s processor. Since then, ProRes has become an acquisition format, too. Companies like Arri, RED, and Blackmagic enabled their cameras to create ProRes files internally, eliminating the need to transcode footage into the mezzanine format on import.

This codec looks great and is widely used. However, you don’t get the flexibility that raw offers. Additionally, typical ProRes files are actually larger than REDCODE RAW or BRaw files.

Since its release, Apple has added ProRes 4444 XQ and ProRes RAW. ProRes RAW allows users of NLEs that support it to adjust the raw settings. So far, ProRes RAW has mainly been implemented in Atomos’ external monitor-recorders: certain cameras can output a raw signal via HDMI, and the Atomos unit records it as ProRes RAW.


What is hardware acceleration?

Some computer graphics cards can process raw video files with greater speed than CPUs can. For instance, ProRes RAW has the advantage of being a modern raw codec, and Apple’s machines are tuned to process it very efficiently. The Mac Pro offers the Afterburner card, which is designed solely to accelerate the processing of ProRes files. The result is truly stunning speeds.

The drawback to ProRes RAW is that it is only 12-bit color vs. 16-bit with REDCODE RAW. NVIDIA, in particular, has provided advanced CUDA processing for RED footage. DaVinci Resolve takes advantage of multiple graphics cards to provide the processing power necessary for complex color grading. AMD worked with Apple to ensure compatibility with Apple’s Metal 2 hardware acceleration of video codecs. However, it still appears that NVIDIA offers the best performance for formats outside of ProRes RAW.
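As a tiny taste of the same idea applied to encoding rather than decoding, here’s a hedged Python sketch that asks FFmpeg to use NVIDIA’s NVENC encoder instead of the CPU. It assumes an NVIDIA GPU and an FFmpeg build with NVENC support; the filenames are hypothetical.

```python
import subprocess

# Hardware-accelerated encode: h264_nvenc runs on the GPU instead of the
# CPU-based libx264. This will fail if no NVENC-capable GPU is present.
subprocess.run([
    "ffmpeg", "-i", "graded_master.mov",
    "-c:v", "h264_nvenc", "-b:v", "12M",
    "-c:a", "aac", "-b:a", "192k",
    "nvenc_out.mp4",
], check=True)
```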

What is transcoding?

Transcoding is the process of reformatting a video into another format or codec. If you shot in a raw format, you would transcode into a mezzanine format for editing. If you have a bunch of files encoded as MPEG-2 and you want them as MPEG-4, you could fire up an app like Apple’s Compressor and transcode them into the new format. When you export a video file and upload it to YouTube, YouTube transcodes it into several formats for different devices to read at different bandwidths. That’s why your video looks a little different when it plays back on YouTube vs. the file you exported from your computer.
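Compressor is a GUI app, but as a rough command-line equivalent, here’s a sketch in Python that batch-transcodes a folder of MPEG-2 files to H.264/MPEG-4 with FFmpeg (assumed installed; the folder and filenames are hypothetical):

```python
import subprocess
from pathlib import Path

# Batch-transcode a folder of MPEG-2 sources (.mpg) to H.264 in an .mp4
# container, the same kind of job you might hand to an app like Compressor.
for src in Path("mpeg2_masters").glob("*.mpg"):
    dst = src.with_suffix(".mp4")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-c:v", "libx264", "-crf", "20", "-preset", "medium",
        "-c:a", "aac", "-b:a", "160k",
        str(dst),
    ], check=True)
```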

For Jellyfish users, you can quickly batch transcode media using Kyno for Jellyfish, or even remotely transcode using the power of your Jellyfish with the Media Engine tool.


What is a proxy file?

Proxies are video files, usually in a mezzanine codec, that have been transcoded to a lower quality to improve playback performance while editing. Apps such as Final Cut Pro offer a built-in feature for automatically creating proxy files in Apple ProRes Proxy. This way, the editor can work on a laptop or a Mac mini for the entire editing process.
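Outside of an NLE, you can roll your own proxies, too. Here’s a sketch in Python that uses FFmpeg (assumed installed) to create half-resolution ProRes Proxy files, similar in spirit to what Final Cut Pro does automatically; the folder names are hypothetical.

```python
import subprocess
from pathlib import Path

# Make half-resolution ProRes Proxy copies of everything in "originals".
Path("proxies").mkdir(exist_ok=True)
for src in Path("originals").glob("*.mov"):
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-vf", "scale=iw/2:ih/2",                # half width, half height
        "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = Proxy
        "-c:a", "pcm_s16le",
        str(Path("proxies") / src.name),
    ], check=True)
```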

Once you are ready to go into your final color grade, you “conform” the project back to the original raw files. This might include sending a project to an application specializing in finishing or color grading, like Blackmagic Design’s DaVinci Resolve. Proxies can be created alongside the raw files in RED cameras or even in the cloud with a SaaS app like Frame.io.

Proxies load quickly, play back smoothly, and make the whole editing process more enjoyable. Personally, I always fire off proxy creation when importing files because I seldom need to do a same-day edit. So why not let the computer chug for a while and make editing smoother?

What is a delivery codec?

Once your edit is complete, you’ll determine where you want your video to be viewed. That destination will accept a certain codec or codecs. For instance, a DVD requires the MPEG-2 codec, while a Blu-ray typically uses H.264 (MPEG-4 AVC). When you upload to YouTube or Facebook, the most common format is H.264 (but you can upload other formats). Many NLEs export H.264 in a .mov container, though you might use an .mp4 container instead.

H.264 combines nice image quality with smaller file sizes. And now we see H.265 more and more; some cameras are even using it as their acquisition format. Unfortunately, H.265 isn’t as flexible as raw, and it is very processor-intensive for computers to play back multiple streams of it in a video editing app. When you export a video for streaming, you might even need to create it in the HLS format, which splits the video into “chunks” so that a streaming server can shift the playback quality depending on the viewer’s available bandwidth.
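Here’s a sketch of that last step in Python with FFmpeg (assumed installed; filenames hypothetical): packaging a finished export as HLS. A real adaptive-bitrate setup would encode several renditions at different bitrates and tie them together with a master playlist; this shows a single rendition.

```python
import subprocess

# Package a finished export as HLS: short .ts chunks plus an .m3u8
# playlist that a streaming server can serve to players.
subprocess.run([
    "ffmpeg", "-i", "delivery.mp4",
    "-c:v", "libx264", "-crf", "21", "-preset", "fast",
    "-c:a", "aac", "-b:a", "128k",
    "-f", "hls",
    "-hls_time", "6",                  # ~6-second chunks
    "-hls_playlist_type", "vod",
    "stream.m3u8",
], check=True)
```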

Generally speaking, delivery codecs are the most efficient but least flexible codecs.


Conclusion

So there you have it: a full rundown of video codecs from the time you turn on the camera, through the work of post, to the point of delivery. This is an ever-changing world, and new formats come to market regularly. But it helps to think in terms of quality, flexibility, and efficiency. Those concepts will help you make good decisions when purchasing equipment, choosing tools, and picking a method of delivery.
