Comparing Canon Log to Pixla HTP



I shot two simple scenes (back and front lit) with my Canon 1Dc in 4k resolution, 8bit, 422, MJPEG at 500Mbps.

First you see the Canon Log clip as it comes out of the camera. Then the same clip graded. Then my Pixla HTP settings at the same exposure value (EV), then the Pixla HTP settings stopped down one stop (which puts it closer to Canon Log). At the end of each scene I repeat the graded Canon Log so that you can do a direct comparison to the Pixla HTP when it's stopped down one stop.

Typically, the base EV was around 12 (f/5.6, ISO 400, 1/50 s, plus a 0.9 ND, i.e. 3 stops).

Note: Pixla HTP is Neutral PP with contrast and sharpness all the way down, and saturation at -1. In addition, Highlight Tone Priority is active. See separate blog post here.

Is the 1Dx mkII MJPEG a bad codec?


Since the Canon 1Dc was launched with its high bitrate (520Mbps) MJPEG 422 8bit codec, self-proclaimed experts have been complaining that it’s old, inefficient technology and that Canon really should have used XAVC.

Is it really as bad as they say, or is there more to the story?

The main complaints are:

  • inefficient compression (large files)
  • poor playback performance (old video compression that isn’t optimized for playback)

The short story

The short story is that it’s a good way to store high quality video files, and it was the best option available to Canon in a DSLR at the time. But yes, a dedicated video encoding chip offering intra frame XAVC would have been even better. Canon’s relatively cheap XC-10 UHD camera, for example, uses the newer XAVC codec (305Mbps); its Digic DV5 chip is what makes that possible.

UPDATE: If the 1Dx mkII had access to a DV5 chip like the XC-10 and the C300 mkII, its bitrate would land somewhere between 325 Mbps (the XC-10 bitrate at 4096x2160) and 410 Mbps (what the C300 mkII uses). These are good examples of modern intra frame compression. The benefit would be better compression and no need to convert files for editing.

The long story

Now, the long story is, as expected, much more nuanced. Let’s look at the two main complaints.

Inefficient compression. When you record 4k files at 500Mbps, the gigabytes of your memory cards get eaten up like you wouldn’t believe. Every 15 seconds you save a GB worth of data. So that’s bad, right?
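Just to put rough numbers on that, here is a quick back-of-the-envelope sketch (the 64 GB card is only an example):

```python
# Rough storage arithmetic for a ~500 Mbps intra frame stream.
BITRATE_MBPS = 500            # megabits per second (a little more at 520)
CARD_GB = 64                  # example card size, decimal gigabytes

mb_per_second = BITRATE_MBPS / 8                  # ~62.5 MB/s
seconds_per_gb = 1000 / mb_per_second             # ~16 s per gigabyte
minutes_per_card = CARD_GB * seconds_per_gb / 60  # ~17 min on a 64 GB card

print(f"{mb_per_second:.1f} MB/s, one GB every {seconds_per_gb:.0f} s, "
      f"roughly {minutes_per_card:.0f} minutes on a {CARD_GB} GB card")
```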

Well… compared to what?

Had you recorded the same video directly to ProRes 422, a codec often hailed as a gold standard and a good compromise between file size, quality and playback performance, your files would have been even bigger. OK, so the compression is better than ProRes then? From a storage point of view, yes. It shouldn’t be that surprising that the compression is good. MJPEG stands for Motion JPEG, and we have all used .jpg to compress our photos and images for many, many years. And of course, JPEG is not only a consumer compression format; it’s used every day in all kinds of professional applications because the quality is great.

I feel that the first mistake most people make is to compare the Canon 1Dx mkII or Canon 1Dc codec, which is an intra frame codec, to modern consumer camera codecs that are inter frame (long GOP).

Intra frame means that each frame of video is stored as a unique image and compressed on its own, just like you would compress a raw photo to a jpeg photo. This is what the Canon 1Dx mkII and Canon 1Dc do: they basically save 25 jpeg photos per second at 4096x2160 px resolution.
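In code terms, the intra frame idea is about as simple as it sounds. Here is a minimal sketch (using Pillow and a synthetic frame source; the small frame size is only so the example runs quickly, and none of this is Canon’s actual encoder):

```python
# A minimal sketch of intra frame ("all-I") storage: every frame is
# compressed on its own as a JPEG, with no reference to its neighbours.
import io
import numpy as np
from PIL import Image

FPS = 25
WIDTH, HEIGHT = 640, 360   # a real 1Dc frame would be 4096x2160

def encode_intra(frames, quality=90):
    """Return a list of independently compressed JPEG byte strings."""
    encoded = []
    for frame in frames:
        buf = io.BytesIO()
        Image.fromarray(frame).save(buf, format="JPEG", quality=quality)
        encoded.append(buf.getvalue())
    return encoded

# One second of synthetic "video": a simple horizontal gradient per frame.
gradient = np.tile(np.linspace(0, 255, WIDTH, dtype=np.uint8), (HEIGHT, 1))
frames = [np.dstack([gradient] * 3) for _ in range(FPS)]
jpegs = encode_intra(frames)
print(f"{len(jpegs)} frames, {sum(map(len, jpegs)) / 1e6:.2f} MB for one second")
```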

Consumer cameras achieve great compression by only saving a small number of actual frames out of the 25 images per second. Out of 25 ‘frames’ you might get 3 actual images. All of the other frames are only partially saved and then calculated (estimated) with advanced mathematics. It certainly saves space, but it also famously introduces various image artifacts. Long GOP codecs typically have problems when the image content changes a lot from frame to frame: flowing water, foliage blowing in the wind, or simply camera movement. Most of the time, a non-critical viewer won’t see these imperfections. And footage from even the highest-end cameras mostly ends up in this format for final delivery (it just isn’t captured that way). But while it’s a great delivery format, I don’t want to capture my images this way.
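For contrast, here is a toy sketch of the long GOP idea: a full keyframe every so often, with only frame-to-frame differences stored in between. Real inter frame codecs use motion estimation and far more sophisticated prediction, so treat this purely as an illustration of why static scenes compress well and busy ones do not:

```python
import numpy as np

GOP_SIZE = 12  # one full keyframe, then differences until the next one

def encode_long_gop(frames):
    """Store a full 'I' frame at each GOP start, deltas ('P') otherwise."""
    packets = []
    for i, frame in enumerate(frames):
        if i % GOP_SIZE == 0:
            packets.append(("I", frame.copy()))                       # full picture
        else:
            packets.append(("P", frame.astype(np.int16) - frames[i - 1]))  # difference only
    return packets

def decode_long_gop(packets):
    """Rebuild every frame by applying each delta to the previous frame."""
    frames, prev = [], None
    for kind, data in packets:
        prev = data if kind == "I" else (prev + data).astype(np.uint8)
        frames.append(prev)
    return frames

# A mostly static clip: the deltas are almost all zero and compress away.
clip = [np.full((90, 160, 3), 128, dtype=np.uint8) for _ in range(25)]
decoded = decode_long_gop(encode_long_gop(clip))
assert all((a == b).all() for a, b in zip(clip, decoded))
```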

Lesson: Don’t compare data rates between intra frame and inter frame codecs. They are not the same thing!!

Poor playback performance. This I have no problem agreeing with, because it’s more objectively true. MJPEG has very low requirements at compression time, which is great because it means cameras can use it even without a super fast CPU. It also requires less power and saves your battery. But it isn’t optimized for playback in editing software, and your playback will likely stutter. Not good.

Well, I (almost) wouldn’t know, because my files get converted to ProRes on import into FCPX. I then replace the original files and keep ProRes 422 masters. They are a little bit larger, but the difference is negligible. Playback performance is a total non-issue for me, and the same should apply to anyone: just convert the files once.
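If your editor doesn’t transcode on import the way FCPX does, a one-time batch conversion gets you to the same place. A sketch, assuming ffmpeg (with its prores_ks encoder) is on your PATH; the folder names are placeholders:

```python
# One-time batch conversion of camera originals to ProRes 422 masters.
import subprocess
from pathlib import Path

SOURCE = Path("camera_originals")   # hypothetical folder of MJPEG .MOV files
DEST = Path("prores_masters")
DEST.mkdir(exist_ok=True)

for clip in SOURCE.glob("*.MOV"):
    subprocess.run([
        "ffmpeg", "-y", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "2",   # profile 2 = ProRes 422
        "-c:a", "copy",                           # keep the original audio
        str(DEST / (clip.stem + ".mov")),
    ], check=True)
```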



Note: The reason the Canon 1Dc and now the 1Dx mkII use the older MJPEG compression is that they are DSLRs built around Digic chips rather than the Digic Video (DV) chips found in Canon’s video cameras. For a Canon DSLR to use a dedicated video codec, Canon would need to integrate a DV chip, or perhaps launch new hybrid Digic chips with these types of codecs built in. Legacy h.264 compression is already integrated in the DSLR Digic chip, which is why all Canon DSLRs use it for their video recording (8bit, 420, h.264 at modest bitrates).

Where 4:2:0 breaks //Re-posted on request


One of the defining differences between the C100 and the C300 is the internal codec each camera uses. While the C300 records to the broadcast-approved Canon XF codec (4:2:2) at 50 Mbit/s, the C100 records to the familiar AVCHD (4:2:0) at 24 Mbit/s.

A common mistake I see people make is to look at the bitrates of the two codecs and assume that AVCHD’s 24 Mbit/s is only half as strong as the XF codec’s 50 Mbit/s. However, since the underlying technology is different, you can’t compare them directly like that.

Instead, the real difference is that the XF codec is a 4:2:2 codec, while AVCHD is 4:2:0. But what does that mean exactly?

Look at the video above and see if there is something that stands out. Then continue reading.

Chroma subsampling
Due to storage and transmission limitations, there has always been a desire to compress image data. Taking advantage of the fact that humans are less sensitive to changes in color than to changes in brightness, methods have been developed that encode images with less chroma (color) precision.

Take your time and study the image below carefully. It illustrates the theory behind reduced color precision, or chroma subsampling. The top row shows the Luma and Chroma channels combined, while the two lower rows break the channels apart so that you can study the precision in each channel.

[Image: chroma subsampling illustration, showing the luma and chroma channels at full and reduced precision]

When you see the amount of data dropped in 4:2:0 encoding, you might be amazed that it actually works as well as it does. While it isn’t the optimum way to treat images from a strict quality standpoint, it is a proven method that has worked well for many years.
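If you want to see the effect on your own images, here is a small sketch (using Pillow; the file names are placeholders) that round-trips a picture through 4:2:0-style subsampling: luma stays at full resolution while both chroma planes are halved in each direction and then stretched back up:

```python
import numpy as np
from PIL import Image

def simulate_420(path_in, path_out):
    """Round-trip an image through 4:2:0-style chroma subsampling."""
    img = Image.open(path_in).convert("YCbCr")
    y, cb, cr = img.split()
    half = (img.width // 2, img.height // 2)
    # Keep luma at full resolution; halve the chroma planes in both
    # directions, then stretch them back up (this is the information loss).
    cb = cb.resize(half, Image.BILINEAR).resize(img.size, Image.NEAREST)
    cr = cr.resize(half, Image.BILINEAR).resize(img.size, Image.NEAREST)
    Image.merge("YCbCr", (y, cb, cr)).convert("RGB").save(path_out)

simulate_420("frame.png", "frame_420.png")  # file names are placeholders
```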

This type of image compression also allows for long recording times to inexpensive media. It’s up to the user to decide what tradeoffs are reasonable in every situation.

4:2:0 limitations and vulnerabilities
I have stated on various forums on numerous occasions that I think that the internal recording capability of the C100 (AVCHD 4:2:0) is fine for almost every situation. I still stand by that. Would I rather have 4:2:2? Of course I would.

But I bought the C100 half expecting that for any serious work I’d have to hook it up to an Atomos Ninja 2 in order to get uncompressed 4:2:2 via HDMI out and record it to ProRes at 220 Mbit/s. After looking at footage shared by others on Vimeo (by downloading the original files), I’m struggling to justify the Ninja 2. The internally captured footage simply looks good enough in almost every situation. There are times, though, when you should be especially on your guard. If you know what to look for, you can check whether you are in trouble and whether you need to take steps to correct an issue that might otherwise show up.

[Frame grab: jagged edge artifact, enlarged to 200%]
Look at this frame grab from the video above. I’ve sized it up to 200% in order to show the problem more clearly. To see it in context, please watch the video above again. You can download the original file in ProRes format.

Since 4:2:0 subsampling only offers half the vertical and half the horizontal chroma resolution, sharp edges defined by the red or the blue channel are prone to look jagged. It almost looks like a ‘field’ from interlaced footage before de-interlacing.
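You can reproduce the look without a camera by feeding a hard red/blue edge through the simulate_420() sketch from the chroma subsampling section above (a synthetic test, not actual C100 footage):

```python
import numpy as np
from PIL import Image

SIZE = 256
test = np.zeros((SIZE, SIZE, 3), dtype=np.uint8)
for row in range(SIZE):
    test[row, :row + 1] = (255, 0, 0)   # red below the diagonal
    test[row, row + 1:] = (0, 0, 255)   # blue above it
Image.fromarray(test).save("edge_full.png")

simulate_420("edge_full.png", "edge_420.png")  # the diagonal comes back stair-stepped
```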

Did you notice it while watching the video before the page break? Was it disturbing? Only you can decide how much of a real problem this is to you.

Not suited for green screen work?
I’ve heard from several users who do frequent green screen work that the internal AVCHD codec actually holds up pretty well. That’s good news! I’m sure the generally clean and sharp image from the C100 helps a lot. Also, since keying from a green screen leans heavily on the green component, which contributes most to the full-resolution luma channel, results can be quite good, depending on what you have in front of the green screen.