Understanding how many concurrent requests your web server can handle is crucial as your website begins to receive more visitors and needs to scale.
A slow server is one way to turn visitors away from your website. The probability of a bounce increases by 32% when page load times go from 1 second to 3 seconds. Around 70% of consumers also say a slow site makes them less willing to buy anything. So speed is important, and one of the best ways to serve a website quickly is to have a web server that is up to the job.
With a single CPU core, a web server can handle around 250 concurrent requests, so with 2 CPU cores, your server can handle roughly 500 visitors at the same time. Getting the balance right between performance and cost is crucial as your site grows in popularity. If your server isn’t able to cope, it is going to turn visitors away and degrade the user experience.
Let’s take a closer look at the factors that affect a server’s ability to handle multiple requests at once.
How many concurrent requests can a web server handle?
A web server with one CPU can handle around 250 concurrent requests, but you’ll also need 4GB of RAM if all those requests are trying to access a MySQL database. In truth, several factors determine how your server will cope with an influx of visitors, including capacity, processing power, RAM, and architecture.
How effectively your server handles simultaneous requests also depends on the type of request being made. If each user is requesting something that also requires accessing the database, then things start to slow down. A simple request for a webpage is far quicker than a database query that requires sifting through large amounts of data.
Similarly, requesting unoptimized images slows down the browser and affects the user experience. Let’s explore some key factors that affect how many concurrent requests a web server can handle.
Number of CPU cores

The more CPU cores you have, the more simultaneous requests a website can handle. A single core can handle an average of 220 to 250 connections at one time. As your website grows, you are going to need the ability to scale at will. Purchasing a server with a lot of cores can be costly, so using a cloud-based service means you can add cores as your peak user count increases.
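The arithmetic behind this rule of thumb can be sketched out. This is a rough planning aid, assuming the 250-connections-per-core figure quoted above; real capacity varies with workload.

```python
# Back-of-the-envelope capacity estimate based on the ~220-250
# connections-per-core rule of thumb. These are planning inputs,
# not guarantees -- real capacity depends on the workload.

CONNECTIONS_PER_CORE = 250  # upper end of the range quoted above


def estimated_capacity(cores: int) -> int:
    """Rough ceiling on simultaneous connections for a given core count."""
    return cores * CONNECTIONS_PER_CORE


def cores_needed(peak_concurrent_users: int) -> int:
    """Smallest whole number of cores that covers the expected peak."""
    # Ceiling division: round up so the peak always fits.
    return -(-peak_concurrent_users // CONNECTIONS_PER_CORE)


print(estimated_capacity(2))  # 500
print(cores_needed(450))      # 2
```

With 2 cores you get roughly 500 connections, which matches the figure in the introduction.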
Type of requests
A MySQL database can handle around 75 concurrent connections per gigabyte of usable memory. Most server operating systems take up around 350MB, so whatever is left over in your RAM can be used to serve database requests.
So if your website interacts heavily with a MySQL database, you may have to account for the large amount of RAM you’ll need. Think WordPress websites with a lot of pages and thousands of visitors a day.
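The RAM rule of thumb above can be turned into a quick estimate. This sketch assumes the article’s figures of 75 connections per GB and roughly 350MB of OS overhead:

```python
# Rough estimate of concurrent MySQL connections a server can
# sustain, using the rule of thumb above: ~75 connections per GB
# of usable memory, minus ~350 MB reserved for the OS.

CONNECTIONS_PER_GB = 75
OS_OVERHEAD_MB = 350


def max_db_connections(total_ram_gb: float) -> int:
    """Estimate concurrent DB connections for a given amount of RAM."""
    usable_gb = max(total_ram_gb - OS_OVERHEAD_MB / 1024, 0)
    return int(usable_gb * CONNECTIONS_PER_GB)


print(max_db_connections(4))  # 274
```

A 4GB server works out to roughly 270 concurrent database connections, which is why database-heavy sites need noticeably more RAM than static ones.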
As with CPU cores, servers with a lot of RAM can be expensive. If you need 4,000+ concurrent database requests, purchasing a server could cost upwards of $5,000. A 16-core plan costs more than $250 a month on popular hosting sites like SiteGround.
Web servers come in many shapes and sizes. They range from high-end performance servers to retasked old PCs. Some run on Unix and Linux, others on Windows and macOS.
The operating system and underlying technology on your server are incredibly important. They are a major factor in how many requests a server can handle.
The underlying architecture of your server and the backend of your site have a major impact on how fast your site feels. A poorly designed frontend or backend may make a lot of unnecessary requests to the server, which can hurt performance. It is up to developers to implement a system design that lowers the impact on server resources.
Poor database design can also mean that requests take a long time and consume unnecessary resources on your server.
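One of the most common database design fixes is adding an index to a frequently queried column. The sketch below is illustrative only: it uses Python’s built-in SQLite driver rather than MySQL, but the principle carries over.

```python
# Demonstrates how one indexing decision changes query strategy.
# Uses SQLite (stdlib) for portability; the same principle applies
# to MySQL and other databases.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, slug TEXT, body TEXT)")
cur.executemany(
    "INSERT INTO posts (slug, body) VALUES (?, ?)",
    [(f"post-{i}", "...") for i in range(10_000)],
)

# Without an index, looking up a post by slug scans every row.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM posts WHERE slug = 'post-42'"
).fetchone()
print(plan[3])  # the plan detail reports a full scan of the table

# With an index, the database jumps straight to the matching row.
cur.execute("CREATE INDEX idx_posts_slug ON posts (slug)")
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM posts WHERE slug = 'post-42'"
).fetchone()
print(plan[3])  # the plan detail now reports an index search
conn.close()
```

On a table with thousands of rows, the difference between a full scan and an index lookup is exactly the kind of unnecessary resource usage described above.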
Other considerations for how many simultaneous requests a web server can handle
There are a few other considerations when trying to understand the number of requests your server will be able to handle.
You need to figure out what the service capacity of your website is. If you don’t, you could be losing visitors because your site can’t handle large volumes of traffic. The best way to do this is to understand your peak concurrent users.
You can use Google Analytics to find out how many concurrent users your site has had in the last month. Use this to determine how many users your WordPress site needs to handle; the same approach also works for non-WordPress sites.
If your peak concurrent users number 450, you need a server that can handle at least 500 requests.
How many API requests can a server handle?
With a single core, a server can handle roughly 250 API requests per second, although this can be increased dramatically with more processors and RAM. Each API is also different, and some purposely limit the number of requests a single user can make.
A good engineering team will be able to design a website and app in a way that limits unnecessary calls to an API. Throttling and debouncing requests is a popular technique on the front end to limit the number of server resources being used.
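Throttling is usually done in front-end JavaScript, but the idea translates to any language. Here is a minimal sketch in Python: at most one real call goes through per interval, and calls inside the window reuse the last result.

```python
# Minimal throttle decorator: at most one real call per `interval`
# seconds; calls made inside the window return the cached result
# instead of hitting the server again.
import functools
import time


def throttle(interval: float):
    def decorator(fn):
        last_call = 0.0
        last_result = None

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            nonlocal last_call, last_result
            now = time.monotonic()
            if now - last_call >= interval:
                last_call = now
                last_result = fn(*args, **kwargs)
            return last_result
        return wrapper
    return decorator


calls = []  # records which queries actually reached the "server"


@throttle(0.5)
def search(query):
    calls.append(query)  # imagine this making an API request
    return f"results for {query}"


# Three rapid calls, e.g. a user typing -- only the first fires.
for q in ["a", "ab", "abc"]:
    search(q)

print(calls)  # ['a']
```

Debouncing is the complementary technique: instead of letting the first call through, it waits until the calls stop arriving and then fires once.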
It is also the job of backend engineers to make sure the API’s architecture is designed so that it interacts with the database in a performant manner. As your site or app grows, you can ramp up how many requests it can handle by splashing out on more RAM.
Either way, most modern hosting packages are more than capable of handling everything you need. They can scale up quickly and do the majority of the work for you. The days of having to plan everything yourself are over.
Managed hosting will handle server design and setup for you, so you’ll never really have to worry about the number of requests it can handle.
Nathan Britten, the founder and editor of Developer Pitstop, is a self-taught software engineer with nearly five years of experience in front-end technologies. Nathan created the site to provide simple, straightforward knowledge to those interested in technology, helping them navigate the industry and better understand their day-to-day roles.