Our One Server One Job Architecture

When we built our cloud platform, we wanted to make it as fast as possible. To do that, we had to break some of the common rules of traditional web hosting architecture.

We didn’t just use the best hardware and software we could find; we also had to use the best possible design. We call it the One Server One Job approach.

 

Traditional Web Hosting Architecture

When web hosting platforms are built, a number of components need to come together to make it all work. Typically you have:

  • A LAMP Stack (Linux, Apache, MySQL and PHP)
  • Caches, such as an opcode cache (OPcache) and Memcached or Redis
  • A Mail Server

There are lots of other things in between, but the above gives a good overview.

Next, there’s the actual server hardware and its operating system:

  • A Linux-based operating system such as CentOS, Ubuntu or Debian
  • Processors (CPUs)
  • Memory (RAM)
  • Disk

In typical web hosting design, all of the above is placed onto one server and the server is replicated across the platform.

This means that all of the server’s processes share the same resources. They all compete for the same CPUs and memory.
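To make that concrete, here’s a minimal sketch of how a site on a traditional single-server setup typically connects to its services. The database name, credentials and the phpredis cache client shown here are illustrative placeholders, not a specific platform’s configuration:

    <?php
    // Traditional setup: every service lives on the same machine,
    // so every connection points at localhost and competes for the
    // same CPU and memory as the web server and PHP.

    // MySQL on the same box (placeholder database name and credentials)
    $db = new PDO('mysql:host=127.0.0.1;dbname=example_site', 'site_user', 'secret');

    // Object cache (Redis, via the phpredis extension) on the same box
    $cache = new Redis();
    $cache->connect('127.0.0.1', 6379);

    // Mail handed to the local mail server, also on the same box
    mail('user@example.com', 'Welcome', 'Thanks for signing up!');

Everything in that sketch is drawing from the same pool of CPU and memory.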

This can lead to poor performance at times, because the CPUs and memory can only handle so much work at once. Processes have to queue up. The wait may be seconds or just milliseconds, but when it comes to loading time, every millisecond counts.

For example, MySQL can be resource heavy, especially on busier websites. It can hog a lot of CPU, meaning there’s not much left for everything else.

Add to this the fact that there could be thousands of other websites on the same server, and performance can deteriorate quite quickly. You’ll see an increase in loading time, sometimes quite noticeable.

 

One Server One Job Architecture

To avoid the problems above, we’ve adopted the One Server One Job approach.

The One Server One Job approach means using one server to house one system. So rather than using one server for everything, the processes are spread across different servers, each with its own CPU and memory.

We call these server groups, and rather than replicating individual servers across the platform, we replicate server groups.

For example, one of our server groups includes:

  • A server for the web server (LiteSpeed), PHP and the normal cache
  • A server for MySQL
  • A server for the object cache (Redis)

This means that each system has its own processing power and isn’t stealing it from another.
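From the application’s point of view, very little changes other than where it connects. Here’s a minimal sketch, assuming hypothetical internal hostnames (db.internal and redis.internal) rather than anything specific to our platform:

    <?php
    // One Server One Job: each service runs on its own server, so the
    // application connects to separate hosts instead of localhost.
    // The hostnames and credentials below are placeholders.

    // MySQL on its own CPU-heavy server
    $db = new PDO('mysql:host=db.internal;dbname=example_site', 'site_user', 'secret');

    // Object cache (Redis) on its own memory-heavy server
    $cache = new Redis();
    $cache->connect('redis.internal', 6379);

    // The web server, PHP and the normal cache remain on the server
    // handling the HTTP request itself.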

This is particularly good for MySQL. As we said earlier, it can be quite resource hungry.

Not just that: each server is configured specifically for its service. Our Web Server + PHP + Normal Cache servers get lots of CPU power and plenty of memory. Redis is memory-hungry, so its server gets lots of RAM. MySQL is CPU-hungry, so its server gets lots of processing power.

 

The Benefits of One Server One Job

Here are the headlines:

  • More processing power and memory for individual processes
  • Each system can run in parallel, meaning faster load times
  • Fewer bottlenecks, meaning everyone sees higher performance
  • Much easier to scale
  • Easier to manage failover if one system suffers an issue
  • Independent backups make system restores easier
  • Much much faster performance

 

What’s Next?

We’re always looking at ways to tweak our platform and deliver additional improvements. Keep an eye on our blog for the next batch of improvements we roll out.