Making the Facebook site fast is one of our company's most important tasks. In 2009, we successfully tripled the speed of the Facebook site, and a few key innovations from our engineering team made that possible. In this article, I'll introduce one of our secret sauces, the fundamental underlying technology we call BigPipe.
BigPipe is a fundamental redesign of the dynamic web serving system. The general idea is to decompose web pages into small chunks called pagelets, and to pipeline their execution through several stages inside web servers and browsers. This is similar to the pipelining performed by most modern microprocessors: multiple instructions pass through different execution units of the processor to achieve optimal performance. Although BigPipe is a redesign of the existing web serving process, it requires no changes to existing web browsers or servers; it is implemented entirely in PHP and JavaScript.
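The pagelet idea can be sketched as follows. This is a minimal illustration with hypothetical names (`renderSkeleton`, `renderPageletPayload`, `onPageletArrive`), not Facebook's actual code: the server first sends a page skeleton containing one empty placeholder per pagelet, then flushes each pagelet as a small self-describing payload that the client inserts into its placeholder.

```javascript
// Server side: the skeleton contains one empty placeholder div per pagelet.
function renderSkeleton(pageletIds) {
  return pageletIds.map((id) => `<div id="${id}"></div>`).join("\n");
}

// Each pagelet is flushed as a small self-describing JSON payload that
// carries its markup plus the CSS/JS resources it needs.
function renderPageletPayload(id, html, cssUrls = [], jsUrls = []) {
  return JSON.stringify({ id, html, css: cssUrls, js: jsUrls });
}

// Client side: insert the arriving pagelet's markup into its placeholder.
// `dom` stands in for the real document (here, a map of id -> innerHTML).
function onPageletArrive(dom, payload) {
  const pagelet = JSON.parse(payload);
  dom[pagelet.id] = pagelet.html; // display this pagelet immediately
  return pagelet; // CSS/JS downloading would be scheduled from here
}
```

Because each payload names its own placeholder, pagelets can arrive and be displayed in any order, which is what lets the browser start rendering before the server has finished the whole page.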
Motivation
To better understand BigPipe, it helps to look at the existing dynamic web serving system. Its history dates back to the early days of the World Wide Web, and it has not changed much since then. Modern websites are far more dynamic and interactive than they were ten years ago, but the traditional web serving system has long been unable to keep up with today's speed requirements. In the traditional model, the life cycle of a user request looks like this:

1. The browser sends an HTTP request to the web server.
2. The web server parses the request, pulls data from the storage tier, and generates an HTML document.
3. The HTML document is sent back to the browser over the Internet in an HTTP response.
4. The browser parses the response, constructs a DOM tree, and downloads the CSS and JavaScript resources the page references.
5. After the CSS arrives, the browser styles and renders the page; after the JavaScript arrives, the browser executes it.
The traditional model is very inefficient for modern websites, because the operations of the different systems in this sequence cannot overlap. Some optimization techniques, such as delayed ("lazy") loading of JavaScript and parallel downloading of resources, have been widely adopted by the web community to overcome some of these limitations. However, these optimizations rarely address the bottleneck caused by the strict execution order of the web server and the browser: while the web server is busy generating a page, the browser sits idle, wasting its cycles doing nothing; when the web server finishes generating the page and sends it to the browser, the browser becomes the performance bottleneck and the web server cannot help. By overlapping the web server's generation time with the browser's rendering time, we can not only reduce the end-to-end latency, but also display the user-visible part of the page earlier, thereby greatly reducing the user's perceived latency.
Overlapping the web server's generation time with the browser's rendering time is especially useful for content-rich sites like Facebook. A typical Facebook page contains data from many different sources: the friend list, friend updates, advertisements, and so on. In the traditional page-rendering model, the user would have to wait until all of these queries return and the final document is generated before anything is sent to their computer, so any single slow query delays the generation of the entire page.
How BigPipe works
To exploit the parallelism between the web server and the browser, BigPipe first breaks each web page into multiple chunks called pagelets. Just as a pipelined microprocessor divides an instruction's life cycle into multiple stages (such as "instruction fetch", "instruction decode", "execute", and "register write-back"), BigPipe divides the page-generation process into the following stages:

1. Request parsing: the web server parses and sanity-checks the HTTP request.
2. Data fetching: the web server fetches data from the storage tier.
3. Markup generation: the web server generates the HTML markup for the response.
4. Network transport: the response is transferred from the web server to the browser.
5. CSS downloading: the browser downloads the CSS required by the page.
6. DOM tree construction and CSS styling: the browser constructs the DOM tree and applies the CSS rules.
7. JavaScript downloading: the browser downloads the JavaScript resources referenced by the page.
8. JavaScript execution: the browser executes the page's JavaScript code.
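The payoff of this staging can be sketched with a small simulation. The names here (`generatePagelet`, `servePage`, `flush`) are hypothetical, not BigPipe's actual API: pagelets are generated concurrently and flushed over a chunked response in whichever order they finish, so a slow data query delays only its own pagelet rather than the whole page.

```javascript
// Simulate generating one pagelet; delayMs stands in for the latency of
// the data query backing that pagelet.
async function generatePagelet(id, delayMs) {
  await new Promise((resolve) => setTimeout(resolve, delayMs));
  return { id, html: `<div>content for ${id}</div>` };
}

// Serve a page: send the skeleton immediately, start all pagelets at once,
// and flush each one the moment it is ready (completion order, not page
// order). `flush` stands in for writing a chunk of the HTTP response.
async function servePage(flush, pagelets) {
  flush("<html skeleton with empty placeholders>");
  await Promise.all(
    pagelets.map(({ id, delayMs }) =>
      generatePagelet(id, delayMs).then((p) => flush(JSON.stringify(p)))
    )
  );
  flush("</html>"); // close the chunked response
}
```

With a slow news-feed query (say 30 ms) and a fast ads query (10 ms), the ads pagelet reaches the browser and is rendered while the news-feed query is still running, instead of both waiting for the slowest source.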
Source: isd