The five basic principles of video compression
You can spend a lifetime learning about video compression and using your discoveries to optimize video. But if you don't have a lifetime, and you just want the basics so you can make better choices when you work with video on demand or video streams, this is the list for you. Today we'll quickly cover the key principles of video compression. You'll be able to apply them when deciding which codec to choose, whether something can be compressed further, and what might slow down compression times.
Principle #1: Predictable and redundant data can be compressed more
Guessing how information will be distributed is the backbone of many compression algorithms. If you can predict repeating patterns - for example letters or words if you're compressing text - you can store the data more efficiently. The same is true for redundant details: if you find a block of pixels that are all the same color, you might store just one pixel of that color and indicate the places it appears in the block.
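To make the redundant-block idea concrete, here's a minimal run-length encoding sketch in Python. The helper name and the sample data are invented for illustration; real codecs use far more sophisticated schemes, but the underlying idea is the same:

```python
def rle_encode(pixels):
    """Run-length encode a list of values: store (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([p, 1])  # start a new run
    return runs

# A block of color that is all the same collapses to a single entry:
# 100 values become just two (value, count) pairs.
row = ["blue"] * 98 + ["white"] * 2
print(rle_encode(row))  # [['blue', 98], ['white', 2]]
```

Two small pairs stand in for a hundred pixels, which is exactly the "store one pixel and where it repeats" trick described above.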
It's easier to predict what's in a simple image, just as it's easier to find redundant data in one. So sections of video where less is going on - say, a person sitting motionless in front of a sunset - will be easier to compress than a busy scene with lots of motion and detail. Compression is sometimes referred to as entropy coding, because the less entropy (or randomness) your video has, the more it can be compressed.
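You can see the entropy effect in a short Python sketch. The `shannon_entropy` helper and the sample "images" below are made up for illustration: a flat, predictable frame needs far fewer bits per pixel than a noisy one.

```python
import math
import random

def shannon_entropy(data):
    """Bits per symbol: the theoretical floor for lossless coding."""
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat = bytes([128]) * 10_000  # a motionless, uniform frame
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10_000))  # a busy frame

print(shannon_entropy(flat))   # zero bits per byte: nothing to store
print(shannon_entropy(noisy))  # near 8 bits per byte: barely compressible
```

The flat frame's entropy is zero, so almost all of it can be compressed away; the noisy frame sits near the 8-bit maximum, so there's very little an entropy coder can do.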
Principle #2: Efficiently compressed data won't easily compress further
If you efficiently compress a video, there may be nothing left to compress. Trying to compress the file again can have weird effects. Some formats, like .zip, can detect that no further compression is possible and will store the data as-is. With others, additional compression might run, but produce bigger files or, worse, bigger files with less quality. If you've already compressed with a popular codec, a safe assumption is that you can't compress further.
Principle #3: Specific compression is better than generic compression
It's better to figure out the nature of the video data you're compressing. Compression algorithms do this for you: they're specialized for the type of data they're commonly used with. For example, text files and video files will probably do better with algorithms tailored to the different types of data they contain.
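An illustrative sketch of the idea, using Python's generic `zlib` compressor: if we know the data is a smooth signal (like neighbouring pixels in a gradient), delta-encoding it first - a data-specific step - lets the same generic compressor do much better. The signal and variable names are invented for the example.

```python
import zlib

# A smooth luminance ramp: neighbouring samples differ by at most 1.
signal = bytes(min(255, i // 40) for i in range(10_000))

# Generic approach: hand the raw bytes to a general-purpose compressor.
generic = zlib.compress(signal, 9)

# Specific approach: exploit what we know about the data (neighbours are
# similar) by storing differences first, then compressing those.
deltas = bytes((signal[i] - signal[i - 1]) % 256 for i in range(1, len(signal)))
specific = zlib.compress(bytes([signal[0]]) + deltas, 9)

print(len(generic), len(specific))  # the delta-encoded version is smaller
```

The delta stream is almost all zeros, which the generic compressor handles far better than the raw ramp. Real video codecs take this much further, predicting each frame from its neighbours before entropy coding the residual.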
Principle #4: Efficient compression makes data look extra random
When all the patterns and redundancies in data are stored in more compact ways, the remaining data looks extra random.
Principle #5: A small increase in compression will take a big increase in compression time
The Shannon limit describes how much a file can be compressed: the data's entropy sets a floor on the compressed size. The more random the data, the higher that floor; the more redundant the data, the lower it sits. The closer an algorithm gets to the floor, the longer compression takes, and sometimes you can't get close at all. Frequently you'll hit a scenario where doubling the compression time only makes your file 3-5% smaller! Compression is expensive, but worth it, which is why most compression algorithms balance time to compress against rate of compression.
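You can observe this diminishing-returns tradeoff with Python's `zlib`, which exposes compression levels 1 through 9. The synthetic payload below is invented for the demo, and exact sizes and timings will vary by machine, but the pattern holds: higher levels cost noticeably more time for modest size gains.

```python
import random
import time
import zlib

# Synthetic payload with some structure and some noise, a bit like video frames.
random.seed(42)
payload = b"".join(
    b"frame-header " + bytes(random.randrange(256) for _ in range(64))
    for _ in range(5_000)
)

sizes = {}
for level in (1, 6, 9):
    start = time.perf_counter()
    out = zlib.compress(payload, level)
    sizes[level] = len(out)
    print(f"level {level}: {len(out):,} bytes in {time.perf_counter() - start:.3f} s")
```

Level 9 searches much harder than level 1 and takes correspondingly longer, yet the output is only slightly smaller: the random parts of the payload are already near their entropy floor.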
- Image credit: Photo by Joshua Sortino