Our three-pronged CDN approach to achieve perfect streaming and ultra-low latency

December 10, 2020 - François

As you may know, api.video aims to provide the best user experience possible: this means flawless streaming of your videos on demand, but also ultra-low-latency live streaming. To achieve this, we need to completely rethink the way we design our service, in terms of both software and infrastructure.

Several approaches are possible, but we have chosen to manage everything we can in-house, both to achieve our goal and to be able to support you in the most tailored way possible.

The usual suspect

When you define your service, a question often comes up to separate what you should do yourself from what you should delegate: what is my core business? I tend to believe that this question should always be counterbalanced by another one: what is the value, to me, of doing it or not doing it?

The Usual Suspects - PolyGram Filmed Entertainment / Spelling Entertainment Group / Bad Hat Harry Productions

For an online service, this often concerns issues such as the database, the storage of large files, or the deliverability of the service. At api.video, these subjects are all the more sensitive as they echo our daily work: providing you with a service to manage videos (live or VOD), with every useful feature, in the best possible conditions.

The CDN will cache me

Let's focus on deliverability first. The deliverability of an online service almost inexorably depends on a CDN-type service: an intermediary between your users and yourself, guaranteeing performance (latency and throughput) through shared access and the storage (known as caching) of various objects.

The CDN will provide and focus on the following (a minimal sketch follows the list):

  1. connecting to end users with good network performance
  2. providing the capacity to cache objects (part of the cache hit ratio)
  3. hiding the web server (a.k.a. the Origin) from any outside requests
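To make points 2 and 3 concrete, here is a minimal, purely illustrative sketch of what a single Point of Presence does: serve from its local cache when it can, and otherwise fetch from the Origin, which end users never contact directly. The ORIGIN URL and the in-memory dictionary are stand-ins for illustration, not api.video's actual implementation.

    import urllib.request

    ORIGIN = "https://origin.example.com"   # hypothetical, hidden Origin
    cache: dict[str, bytes] = {}            # stands in for the PoP's cache storage

    def serve(path: str) -> bytes:
        """Serve an object from this PoP, touching the Origin only on a miss."""
        if path in cache:                                     # cache hit: no trip to the Origin
            return cache[path]
        with urllib.request.urlopen(ORIGIN + path) as resp:   # cache miss: ask the Origin
            body = resp.read()
        cache[path] = body                                    # cached for the next visitor of this PoP
        return body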

Even for these three topics, different schools of thought argue on a daily basis. For example, on the network aspect, some large suppliers (Cloudflare, Akamai, ...) put forward the number of points of presence and their locations, constantly outbidding one another, but may accumulate some technical debt over time. Conversely, smaller players (such as Fastly) concentrate on fewer but larger points of presence, focusing on connectivity. Both approaches are valid, and guided by the profitability ratio between the engineering effort and the actual use of the platform, which leads us to the second point.

CDN - How it works

The cache hit ratio is a key element when comparing CDNs. Let's assume your site is fully cacheable. Now assume that your audience is global, spread across very diverse countries, and that it is large. Statistically, you will quickly have hundreds of visitors requesting the same file from a particular Point of Presence. Perfect: the CDN does its job and the content is 100% cached.

Now, let's assume that we are talking about your video, and that it weighs 1 GB. You will therefore occupy 1 GB of cache storage at your CDN on each Point of Presence.

Since you are using a public CDN (like the ones mentioned above), other customers, say 100 of them, have the same usage pattern: a 1 GB video, with hundreds of users connected to each Point of Presence. Together, you end up consuming 100 GB of cache storage on each Point of Presence.

Let's take these assumptions and apply more realistic values: several thousand customers, with audiences ranging from one to tens of thousands of users, for cached objects of equally variable size. Very quickly, the capacity of each Point of Presence reaches its limits.

From there, and to guarantee the service to everyone in a pragmatic way, different rules come into play: the real audience per file, the frequency of requests, the origin settings (or the overrides defined on the CDN), ... so that some objects are really cached and others are not (see the sketch below).
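As a purely illustrative sketch (with made-up numbers and names, not any real CDN's logic), here is how those rules might translate into an eviction decision when a Point of Presence runs out of room: keep the objects that save the most Origin traffic per gigabyte of cache, and drop the rest.

    from dataclasses import dataclass

    @dataclass
    class CachedObject:
        path: str
        size_gb: float
        requests_per_hour: int   # real audience for this file on this PoP
        cacheable: bool          # origin settings / CDN overrides

    POP_CAPACITY_GB = 100.0      # assumed capacity of one Point of Presence

    def objects_to_keep(objects: list[CachedObject]) -> list[CachedObject]:
        candidates = [o for o in objects if o.cacheable]
        # Favour objects that save the most Origin traffic per GB of cache used.
        candidates.sort(key=lambda o: o.requests_per_hour / o.size_gb, reverse=True)
        kept, used = [], 0.0
        for obj in candidates:
            if used + obj.size_gb <= POP_CAPACITY_GB:
                kept.append(obj)
                used += obj.size_gb
        return kept              # everything else falls out of the cache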

So much for the 100% cache hit ratio. The more Points of Presence a CDN operates, the more pronounced this effect becomes.

My data is redundant

Now that we've talked about what is generally done for deliverability, let's talk a little bit about data. Continuing our reasoning, your data lives at your Origin. Beyond local replication within a datacenter, this data must be replicated elsewhere to guarantee very high availability. Another datacenter, in another country. Preferably on another continent. Shit happens.

Look what I have created - Cast Away - 20th Century Fox

Let's say you deploy them in Paris & New York to begin with. The distance between these two locations, even over your own transatlantic fiber, is not negligible (the round trip usually averages around 80 ms). Synchronous replication is therefore ruled out, as every access would be blocked waiting on it. For an example on this subject, see our previous article about geo-replication with GlusterFS, which is what we used to do.
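A quick back-of-the-envelope calculation shows why. Assuming a 1 ms local commit (an arbitrary figure for illustration) and the 80 ms round trip mentioned above, a synchronous write can only complete once the remote site has acknowledged it:

    # Illustration of why synchronous Paris <-> New York replication is ruled out:
    # every write has to wait for the remote acknowledgement.
    RTT_MS = 80.0                      # average round trip quoted above
    local_write_ms = 1.0               # assumed local commit time

    sync_write_ms = local_write_ms + RTT_MS
    print(f"synchronous write latency ~ {sync_write_ms:.0f} ms "
          f"(~{1000 / sync_write_ms:.0f} sequential writes/s per client)")
    # => roughly 81 ms per write, i.e. about 12 sequential writes per second,
    #    versus ~1000/s when replication happens asynchronously.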

Cloud storage is certainly in vogue, but no provider today would dare to sell you geo-replication of your data between regions (from one country to another). Why not? The reason is twofold:

  • Their regions are only rarely interconnected natively: they go over the "internet".
  • Building a geo-replicated service with bi-directional read/write borders on magic because of latency constraints.

To overcome this, your application itself must be able to manage this situation, and must therefore be designed around the service you want to build (a sketch of this idea follows).
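As a rough sketch of what "managing this situation" can mean in practice, assuming an asynchronous model (this is an illustration, not api.video's actual code): writes are committed locally and shipped to the other region in the background, so access stays fast while the remote copy lags by at least the inter-region latency.

    import queue
    import threading

    local_store: dict[str, bytes] = {}
    remote_replication_queue: queue.Queue = queue.Queue()

    def write(key: str, value: bytes) -> None:
        local_store[key] = value                    # fast, local commit
        remote_replication_queue.put((key, value))  # shipped to the other region later

    def replication_worker(send_to_remote) -> None:
        """Drain the queue in the background, e.g. over the inter-PoP link."""
        while True:
            key, value = remote_replication_queue.get()
            send_to_remote(key, value)
            remote_replication_queue.task_done()

    # threading.Thread(target=replication_worker, args=(my_sender,), daemon=True).start()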

Otherwise, all you have to do is have your data in one region (excluding backup, obviously) and to pray: after all, nothing can go wrong.

This reasoning applies to your videos, but it applies just as much to everything that makes the service work: authentication, tokens, ...

My name is Legion

At api.video, as you will have understood, we decided to tackle all these topics from the ground up, and to scale around them, in order to deliver on our initial promise.

In order to do so, we decided to invest in our infrastructure, with the following imperatives:

  • A network as close as possible to the users
  • Agnosticism toward any infrastructure provider, to guarantee our independence and our ability to deploy
  • The ability to scale locally
  • Intelligent routing, for our users, but also internally within our platform

We are numerous

This led us to think in terms of 3 types of Points of Presence, illustrated just after this list:

  • Core PoP: where we host data, and perform the main aspects of the intelligence of the service
  • Edge PoP: close to you, for all your ingestion (live feeds, uploads, ...)
  • CDN PoP: close to you, numerous, to deliver the service
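For illustration only, the three roles could be modelled like this; the names and responsibilities simply mirror the list above (a single PoP may combine several roles), and none of it is api.video's actual code.

    from enum import Enum
    from dataclasses import dataclass

    class PopRole(Enum):
        CORE = "core"   # hosts data and the service's main intelligence
        EDGE = "edge"   # ingestion: live feeds, uploads, ...
        CDN = "cdn"     # delivery, numerous and close to viewers

    @dataclass
    class PointOfPresence:
        name: str
        roles: set[PopRole]          # a single PoP may combine roles

    paris = PointOfPresence("par-1", {PopRole.CORE, PopRole.EDGE})
    newyork = PointOfPresence("nyc-1", {PopRole.CORE, PopRole.CDN})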

A stranger ... from the outside - Toy Story - Pixar / Disney

Each time we select a new location, we have to ask ourselves once again why we should invest in this particular place, and through which provider. The choice must be as pragmatic as possible, based on clearly identified criteria, such as:

  • For a Core PoP, what connectivity to our other PoPs nearby?
  • For an Edge PoP, what connectivity to our users publishing content (from home, from a studio, or from another datacenter)?
  • For a CDN PoP, where will the content be played from?
  • What are our projections in terms of usage growth from this PoP?
  • Is there a local legal imperative (GDPR, CCPA, ICP, ...)?

This list, far from being exhaustive, is in addition to the normal questions about vendor viability, RTO, RPO, FRT, SLA, on-site interventions, ...

The same, but different

After having defined our locations and our associated providers (basically, our data centers and interconnections), we have to "populate" them: i.e. install everything necessary to provide the service. This is where all the work done around the services essential to the proper functioning of your platform, as discussed in our article on platform management, comes in handy.

Since each PoP is different (Core, Edge or CDN - possibly a mix of those) and sits with various providers, automation is key here to deploy our services.

Legion - Marvel Television / Bad Hat Harry Productions / FX Productions

As they get deployed, we know that the PoPs will rely on each other. As we multiply their number and their diversity, we also multiply the reliability of the overall platform, knowing that each element may fail at some point. Aware of this, we define complex templates for our automated configuration to tackle the different outage scenarios, leading to proper retries with appropriate backoff, combined with smart ordering based on distance (sketched below).
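As an illustration of that retry logic (a sketch under assumed names, not our actual templates): try the Core PoPs from nearest to farthest, and retry each one with a delay that grows at every attempt.

    import time

    def call_core(pop: str, request: bytes) -> bytes:
        # Placeholder for the real inter-PoP call; assume it raises ConnectionError on failure.
        raise ConnectionError(pop)

    def send_with_fallback(request: bytes, core_pops_by_distance: list[str],
                           retries: int = 2, base_delay_s: float = 0.2) -> bytes:
        for pop in core_pops_by_distance:                        # nearest Core PoP first
            for attempt in range(retries):
                try:
                    return call_core(pop, request)
                except ConnectionError:
                    time.sleep(base_delay_s * (2 ** attempt))    # exponential backoff
        raise RuntimeError("no Core PoP reachable")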

Whatever happens, we always want you to have the best experience. To do so, the service will always steer you to the PoP that works best for you at any given time. This is not about geographic routing, nor about routing based on the number of intermediaries (hello there, BGP), but about routing based on your user experience combined with our internal monitoring information.
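Conceptually, and with entirely made-up PoP names and numbers, the selection looks like this: among the PoPs our internal monitoring considers healthy, pick the one with the best latency as actually measured from the user's side.

    def best_pop(rum_latency_ms: dict[str, float], healthy: dict[str, bool]) -> str:
        """Pick the healthy PoP with the lowest Real User Monitoring latency."""
        candidates = {pop: lat for pop, lat in rum_latency_ms.items() if healthy.get(pop)}
        return min(candidates, key=candidates.get)

    # Example: the geographically closest PoP is unhealthy, so the user is steered elsewhere.
    print(best_pop({"cdg": 12.0, "lhr": 21.0, "fra": 25.0},
                   {"cdg": False, "lhr": True, "fra": True}))   # -> "lhr"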

As we deploy more PoPs, the existing ones get gracefully reconfigured, according to any dependencies that arise with the new PoPs.

As we scale within a PoP, any dependency that needs updating is gracefully reconfigured.

Shit happens

Now that we have these Points of Presence, geo-replicated data, and delivery at the speed of light, what happens? Well, we fail... but on purpose. Quite simply because we know that each of our building blocks can run into a problem, and that our platform must be resilient to it.

Shit happens - Dr House - Fox

If one of our CDN PoPs (in the case of delivery) or Edge PoPs (in the case of ingest) becomes unavailable as a whole, you are automatically routed to the next best PoP, and so on across all of our PoPs, providing resilience for these services. This is why we complement our user experience metrics (Real User Monitoring) with internal monitoring information from our own infrastructure.

If one of these PoPs can no longer access its "nominal" Core PoP, it will try with the other Core PoPs.

If one of our Core PoPs suffers too much trouble, we mark it automatically & preventively as unavailable to others.

If one of these same Core PoPs is unavailable, thanks to the very principles of geo-replication, it will first catch up before announcing itself as available.
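Put together, and again as a simplified sketch rather than our production code, a Core PoP's availability follows two simple rules: it is demoted automatically when its error rate crosses a threshold, and it only announces itself again once its geo-replication has fully caught up.

    class CorePop:
        def __init__(self, name: str):
            self.name = name
            self.available = True

        def report_health(self, error_rate: float, threshold: float = 0.05) -> None:
            if error_rate > threshold:          # preventive, automatic demotion
                self.available = False

        def recover(self, replication_lag_s: float) -> None:
            if replication_lag_s == 0:          # fully caught up with its peers
                self.available = True           # only now announced to the other PoPs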

This requires adequate monitoring in order to be able to act preventively, or even to be able to make predictions for capacity planning (for equipment requiring lead times) or live scaling, where possible.

Tomorrow never dies

In spite of all these great plans and their implementation, we also plan for the worst of cataclysms: as such, all our services are easily re-deployable and quickly replaceable, but above all, we also have a plan B to fail over the entire service. So the hardest part is to keep up the momentum.

Have we forgotten about a risk? Let's challenge it.

Is the failover too slow to resume service? Let's fine-tune the beast.

And if you want to be part of this adventure and take on a challenge, just join us: we're recruiting.

François

Head of Infrastructure @api.video
