This post originally appeared on loige.co, written by Luciano Mammino. Luciano is a web developer & entrepreneur from Italy.
In this article, Luciano highlights some of the most common principles you should consider when building or testing high-performing web applications, with a focus on the backend.
The concepts discussed here can be applied to any language and framework, although this post covers concrete examples, design patterns, and tools that are mostly used in the PHP ecosystem.
How do you build a web architecture?
Before we get started with the basic rules for building fast web applications, I’d like to recommend our recent blog post on building a reliable cloud-based SaaS architecture.
TL;DR The basic rules for building fast web applications are:
- Rule 1. Avoid premature optimization
- Rule 2. Do the minimum amount of work to solve the problem
- Rule 3. Defer the work you don’t need to do immediately
- Rule 4. Use cache when you can
- Rule 5. Understand and avoid the N+1 query problem with relational databases
- Rule 6. Prepare your app for horizontal scalability when possible
Rule 1: Avoid premature optimization
One of Donald Knuth's most famous quotes says:
“premature optimization is the root of all evil”
Knuth noticed that software developers generally waste a huge amount of time thinking about the performance of non-critical parts of their programs. To avoid falling into the premature optimization trap, you should write the first version of your code without worrying much about performance.
Then you can use a profiler to instrument your code and see where the bottlenecks are. This way you can focus on improving only the parts that really need your attention.
Knuth's quote doesn't mean that you shouldn't care about optimization at all, and it's not an excuse to write shitty code and then abandon it.
It should rather be read as an encouragement to learn how to "optimize smartly", focusing your effort where it actually pays off.
If you are working in PHP land, there are a lot of tools that you can easily adopt to profile your code:
- xdebug: probably the most famous PHP debugger and profiler. It must be installed as a PHP extension and it integrates easily with most IDEs.
- xhprof: a function-level hierarchical profiler for PHP. It comes with a simple HTML based navigational interface and offers some cool diff capabilities to compare the performance of different versions of your code.
- Symfony profiler: the Symfony profiler is one of the best features of the Symfony framework. It allows you to inspect the execution time of every request, showing a nice timeline that makes it easy to understand which parts of your code are the most time-consuming. It is automatically enabled in "development" mode and does not need any PHP extension to be installed.
- The Stopwatch component: the low-level library used by the Symfony profiler to measure the execution time of a piece of PHP code. It can be easily integrated into any PHP project and does not require any extension (a short sketch follows this list).
- Blackfire.io: a profiler optimized for PHP that offers a very nice web interface that allows you to understand visually what your code does and where the CPU spends most of its time.
- Tideways: a promising alternative to Blackfire, offers a lot of graphical tools (timeline, call graphs, etc.) to make it really easy to find bottlenecks. It’s meant to be run continuously (also in production).
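To give you a feel for the simplest of these tools, here is a minimal sketch of using the Stopwatch component on its own. It assumes the symfony/stopwatch package has been installed with Composer; the event name "expensive-task" and the usleep() call are just placeholders for the code you actually want to measure.
<?php
require __DIR__ . '/vendor/autoload.php';

use Symfony\Component\Stopwatch\Stopwatch;

$stopwatch = new Stopwatch();

// Start measuring a named "event" (the name is arbitrary)
$stopwatch->start('expensive-task');

usleep(200000); // placeholder for the code you want to profile

// Stop the event and read the collected metrics
$event = $stopwatch->stop('expensive-task');
echo $event->getDuration() . " ms\n";  // wall-clock time
echo $event->getMemory() . " bytes\n"; // peak memory usage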
Rule 2: Only do what you need to do.
Very often your code does more than it really needs to do.
This is especially true if you are using complex libraries and frameworks in your code.
There are a number of design patterns and techniques that can help you avoid these situations and achieve better performance.
- Autoloading: a PHP feature that loads the file containing the definition of a class only when you are about to use that class (instantiation, static method call, access to a constant, etc.). This way you don't need to worry about which files to include in your script; you just use the classes you need and autoloading does the rest. Configuring autoloading used to be a little complex, especially because every library had its own conventions, but today, thanks to the PSR-0 and PSR-4 standards and tools like Composer, using it is a piece of cake.
- Dependency Injection: a design pattern that is very common in the Java world and that in recent years has gained a lot of traction in the PHP world too, thanks also to frameworks like Symfony, Zend, and Laravel that use and advocate it widely.
- Lazy Loading: another important design pattern, used to defer the initialization of an object until the point at which it is actually needed. It is mostly used with objects that deal with heavy resources like database connections or file-based data sources (a minimal sketch follows this list).
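To make the last point more concrete, here is a minimal, hypothetical sketch of lazy loading: the class name LazyDatabase and the SQLite DSN are illustrative only, not taken from any specific library. The expensive PDO connection is created the first time a query runs, not when the object is constructed.
<?php
class LazyDatabase
{
    private $dsn;
    private $connection = null;

    public function __construct(string $dsn)
    {
        $this->dsn = $dsn;
    }

    public function query(string $sql): array
    {
        // Defer the expensive connection until it is really needed
        if ($this->connection === null) {
            $this->connection = new PDO($this->dsn);
        }
        return $this->connection->query($sql)->fetchAll(PDO::FETCH_ASSOC);
    }
}

$db = new LazyDatabase('sqlite::memory:'); // no connection opened yet
$rows = $db->query('SELECT 1 AS answer');  // connection created here, on first use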
Rule 3: I’ll do it tomorrow!
How many times have you needed to send an email to a user after he/she triggered a specific event in your web app (e.g. password changed or order completed)? How many times have you needed to resize an image after the user uploaded it?
Well, it's quite common to see these "heavy" operations performed before the success message is sent back to the user. To put it another way: our users expect to see some response in their browsers as soon as possible, so any additional task not directly related to creating that response should be deferred.
The most common way to do that is to use job queues: you store the minimum amount of data needed to perform the deferred task into a queue of some kind (e.g. a database or a message broker) and forget about it.
You can then get back immediately to your main task: generating the output for the user!
Some kind of worker will be in place whose goal is to read from the queue periodically and perform the deferred job (e.g. sending the e-mail or generating the image thumbnails).
A simple queue system can easily be built with any kind of data store (Redis or MongoDB are often used) or with a message broker like RabbitMQ or ActiveMQ.
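Here is a minimal, hypothetical sketch of such a queue built on Redis lists, assuming the phpredis extension is installed; the queue name "email_jobs", the host, and the job payload are made up for illustration.
<?php
// Web request side: push the job and return to the user immediately
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$redis->lPush('email_jobs', json_encode([
    'to'       => 'user@example.com',
    'template' => 'password_changed',
]));
// ...now render the success page without waiting for the e-mail to be sent

// Worker side (a separate, long-running PHP process):
// block until a job is available, then process it
while (true) {
    [$queue, $payload] = $redis->brPop(['email_jobs'], 0);
    $job = json_decode($payload, true);
    // send the e-mail here with your mailer of choice
}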
Rule 4: Gotta cache ’em all!
Nowadays web apps are really complex pieces of code. In order to generate a response to every request we generally do a lot of things: connect to one or more databases, call external APIs, read configuration files, compute and aggregate data, serialize the results into some parseable format (XML, JSON, etc.) or render them with a template engine into a wonderful HTML page.
With a naive approach we could do all of that for every single request we get; after all, our servers never get bored of repetitive tasks.
But there's a smarter way to handle repetitive work, one that avoids calculating the same results again and again: caching.
A cache (pronounced "cash") stores recently used information so that it can be quickly accessed at a later time.
Caches are used widely in computer science and you can find them pretty much everywhere. For example, RAM itself can be considered a way to cache the code of running programs, so that the CPU doesn't have to read the (slow) hard disk millions and millions of times.
In general, there are several different levels of cache that we focus on in web development, from byte code cache, to application cache, to proxy cache. Check out this blog post from Luciano to learn more about these caching types.
Once you've got the concept of caching, it is really easy to adopt it. The issues arise when you need to figure out that something has changed and the cached version of your data is not relevant anymore. In such cases you need to delete the cached data to be sure it gets correctly recomputed the next time it's requested. This process is called "cache invalidation" and it generally drives developers insane, to the point that a very famous quote exists:
There are only two hard things in Computer Science: cache invalidation and naming things.
— Phil Karlton
There's no silver bullet to make cache invalidation easy; it really depends on the architecture of your code and the requirements of your application. In general, the fewer caching layers you have the better: always avoid adding unnecessary complexity!
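To show what application-level caching can look like in practice, here is a minimal sketch assuming the APCu extension is enabled; the cache key, the 300-second TTL, and the computeExpensiveReport() helper are all hypothetical.
<?php
function computeExpensiveReport(): array {
    // placeholder for an expensive computation (DB queries, API calls, ...)
    sleep(2);
    return ['generated_at' => time()];
}

function getReport(): array {
    $key = 'report:daily';

    // Return the cached value if it is still fresh
    $cached = apcu_fetch($key, $success);
    if ($success) {
        return $cached;
    }

    // Cache miss: compute the value and store it for 5 minutes
    $report = computeExpensiveReport();
    apcu_store($key, $report, 300);
    return $report;
}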
Rule 5: Avoid the damn N+1 Query Problem
The "N+1 Query Problem" is a very common anti-pattern, often introduced unintentionally when dealing with relational databases. In short, reading N records from the database ends up generating N+1 queries (one to read the N IDs and one for every record). Take a look at the following piece of code for a real case (well… almost real) example:
<?php
// The connection details below are placeholders; the Users and Logins tables match the queries shown later.
$db = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret');

function getUsers(PDO $db) {
    // 1 query to load all the users
    return $db->query('SELECT id FROM Users')->fetchAll(PDO::FETCH_OBJ);
}

function loadLastLoginsForUsers(PDO $db, array $users) {
    foreach ($users as $user) {
        // 1 extra query per user: N more queries in total
        $stmt = $db->prepare('SELECT * FROM Logins WHERE user_id = ?');
        $stmt->execute([$user->id]);
        $user->lastLogins = $stmt->fetchAll(PDO::FETCH_OBJ);
    }
    return $users;
}

$users = getUsers($db);
loadLastLoginsForUsers($db, $users);
The given piece of code loads a list of users first and then, for every user, it loads their last login times from the database. This code produces the following N+1 queries:
SELECT id FROM Users;
SELECT * FROM Logins WHERE user_id = 1;
SELECT * FROM Logins WHERE user_id = 2;
SELECT * FROM Logins WHERE user_id = 3;
SELECT * FROM Logins WHERE user_id = 4;
SELECT * FROM Logins WHERE user_id = 5;
SELECT * FROM Logins WHERE user_id = 6;
That's obviously inefficient, and it happens quite often with "has many" relationships, especially when you are using some kind of magic ORM and you don't know exactly what it is doing under the hood (or you haven't configured it properly).
In general, you can solve this problem by producing a query like the following:
SELECT id FROM Users;
SELECT * FROM Logins WHERE user_id IN (1, 2, 3, 4, 5, 6, ...);
or by using the JOIN syntax where possible.
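For reference, here is a minimal sketch of the "IN clause" fix applied to the earlier example; loadLastLoginsForUsersFixed is a hypothetical helper and it reuses the same $db PDO connection shown above.
<?php
function loadLastLoginsForUsersFixed(PDO $db, array $users) {
    if (empty($users)) {
        return $users;
    }

    // One single query for all the users, instead of one query per user
    $ids = array_map(function ($user) { return $user->id; }, $users);
    $placeholders = implode(',', array_fill(0, count($ids), '?'));
    $stmt = $db->prepare("SELECT * FROM Logins WHERE user_id IN ($placeholders)");
    $stmt->execute($ids);

    // Group the logins by user_id so every user gets its own list
    $loginsByUser = [];
    foreach ($stmt->fetchAll(PDO::FETCH_OBJ) as $login) {
        $loginsByUser[$login->user_id][] = $login;
    }
    foreach ($users as $user) {
        $user->lastLogins = $loginsByUser[$user->id] ?? [];
    }
    return $users;
}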
This problem can only be addressed when you are in control of your SQL queries or when you have a clear understanding of the ORM library you are using (if you are using one).
Anyway: keep it in mind and be sure you don't fall into the N+1 queries trap, especially when you deal with large datasets. Many PHP profilers let you inspect the queries generated for every page request; they can be a very useful companion to check whether you are avoiding the N+1 query problem.
Rule 6: Scale… horizontally!
"Scalability" is not exactly the same thing as "performance", but the two are tightly intertwined.
To give you my personal definition, “scalability” is the ability of a system to adapt and remain functional without noticeable performance issues when the number of users (and requests) grows.
It's a very complex and broad topic and I don't want to get into the details here. But for the sake of performance, it's worth understanding and keeping in mind some simple things that you can do to make sure your app can be easily scaled horizontally.
Horizontal scaling is a particular scaling strategy in which you add more machines to the cluster where your app is deployed. This way the load is split among all the machines and thus the whole system can remain performant even when there are a lot of simultaneous requests.
The two major problems to take into consideration when preparing for horizontal scaling are the consistency of user sessions and of user-uploaded files.
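Sessions, for example, must not live on a single machine's disk, otherwise a request landing on another server of the cluster won't find them. One common option is to keep them in a shared backend such as Redis; the following is a minimal sketch assuming the phpredis extension (which ships with a Redis session handler), with a made-up host address.
<?php
// Store PHP sessions in a shared Redis instance instead of the local filesystem,
// so every machine in the cluster sees the same session data.
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://10.0.0.5:6379');

session_start();
$_SESSION['user_id'] = 42; // available from any server behind the load balancer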
Conclusions
With this post, we wanted to give you an idea of some practical concerns to take into consideration when writing a new app. That said, don’t fall into the trap of premature optimization and just focus on writing the right code for the right job.
About the Author:
Luciano is a Software Engineer born in 1987, the same year Nintendo released "Super Mario Bros" in Europe, which, by chance, is his favorite video game!
He is passionate about code, the web, smart apps and everything creative, like music, art, and design. As a web developer, his experience has been mostly with PHP and Symfony2, even though he recently fell in love with JavaScript, Node.js, and Docker. In his (scarce) free time he writes on his personal blog at loige.co.
This article is brought to you by Usersnap. It’s your central place to organize user feedback and collect bug reports. Report bugs in your browser, and see the bigger picture. Get your 15-day free trial now.
Resolve issues faster with visual bug reporting.
Simplify and reduce issue & bug reporting efforts with screen recordings, screenshots, and annotations.
And if you’re ready to try out a visual bug tracking and feedback solution, Usersnap offers a free trial. Sign up today or book a demo with our feedback specialists.