Apache 2.4 Faster Than Nginx?

Some reports came out recently, after Apache 2.4 was released, claiming it is “as fast as, and even faster than, Nginx”. To check whether that’s true, I ran a benchmark myself. Here are the results.


It turned out to be a false claim. Apache 2.4 is actually much slower than Nginx!

The benchmark was run on a single Linux box (localhost), to rule out network effects. I used ab (ApacheBench) as the benchmark client. Apache listened on port 80 and Nginx on port 81. Throughout the benchmark, Apache was stressed first and then Nginx, with a 60-second sleep between tests; each concurrency level (from 100 to 1000) was tested five times. I gave up on higher concurrencies because Apache was so unstable with concurrency greater than 1000 that some requests would fail, while Nginx remained stable without problems.
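The procedure above can be sketched as a small shell driver. This is a reconstruction, not the script actually used: the request count (`-n`), the document name, and the `DRY_RUN` switch are my own assumptions.

```shell
#!/bin/sh
# Sketch of the benchmark driver: Apache (:80) first, then Nginx (:81),
# five runs per concurrency level, 60 s sleep between tests.
# Request count and document name are illustrative, not the original values.

# Run a command, or just print it when DRY_RUN=1.
run() { [ "${DRY_RUN:-0}" = 1 ] && { echo "$*"; return; }; "$@"; }

bench() {
    for c in $(seq 100 100 1000); do          # concurrency 100, 200, ..., 1000
        i=1
        while [ "$i" -le 5 ]; do              # five runs per concurrency level
            run ab -n 100000 -c "$c" http://127.0.0.1:80/index.html   # Apache first
            run sleep 60
            run ab -n 100000 -c "$c" http://127.0.0.1:81/index.html   # then Nginx
            run sleep 60
            i=$((i + 1))
        done
    done
}
```

Source the file and call `bench` to run it, or set `DRY_RUN=1` first to print the planned ab invocations without touching the servers.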

I’ve tried my best to fully “unleash the power of Apache”:
1) built with apr-1.4.6 and apr-util-1.4.1, using the fastest atomic API:

$ ./configure --prefix=/home/shudu/apache --with-included-apr \

2) only a minimal set of modules was enabled:

$ apache/bin/httpd -M
Loaded Modules:
 core_module (static)
 so_module (static)
 http_module (static)
 mpm_event_module (static)
 authz_core_module (shared)
 filter_module (shared)
 mime_module (shared)
 unixd_module (shared)

3) MaxRequestWorkers was raised to 800 and ServerLimit to 32.
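For context on that setting: the event MPM documentation gives the ceiling on concurrent connections as (AsyncRequestWorkerFactor + 1) * MaxRequestWorkers. A quick sketch of the arithmetic (assuming the documented default factor of 2 for this build):

```python
# Ceiling on concurrent connections for Apache's event MPM:
# (AsyncRequestWorkerFactor + 1) * MaxRequestWorkers.
def event_mpm_max_connections(max_request_workers: int, async_factor: int = 2) -> int:
    return (async_factor + 1) * max_request_workers

print(event_mpm_max_connections(800))  # 2400 with the settings used here
```

With MaxRequestWorkers 800 that puts the ceiling around 2400 connections, which is why several commenters below argue the cap should have been set higher.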

Nginx was just compiled with its default options:

$ ./configure --prefix=/home/shudu/bench/nginx

Features common to both Apache and Nginx:
1) Sendfile on.
2) KeepAlive off.
3) AccessLog off.

The configuration files of Apache and Nginx are as follows:

# Apache 2.4.1
ServerRoot "/home/shudu/bench/apache"
KeepAlive Off
ServerLimit 32
MaxRequestWorkers 800
Listen 80
ServerName localhost
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule filter_module modules/mod_filter.so
LoadModule mime_module modules/mod_mime.so
LoadModule unixd_module modules/mod_unixd.so
<IfModule unixd_module>
User shudu
Group shudu
</IfModule>
ServerAdmin you@example.com
<Directory />
    AllowOverride none
    Require all denied
</Directory>
DocumentRoot "/home/shudu/bench/apache/htdocs"
<Directory "/home/shudu/bench/apache/htdocs">
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>
ErrorLog "logs/error_log"
LogLevel warn
<IfModule mime_module>
    TypesConfig conf/mime.types
    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz
</IfModule>
EnableSendfile on
# Nginx-1.0.12
user  shudu users;
worker_processes  2;
events {
    worker_connections  10240;
    accept_mutex_delay  100ms;
}
http {
    include             mime.types;
    default_type        application/octet-stream;
    sendfile            on;
    tcp_nopush          on;
    keepalive_timeout   0;
    access_log          off;
    server {
        listen          81;
        server_name     localhost;
        location / {
            root        html;
            index       index.html index.htm;
        }
    }
}

My Ubuntu-10.04 box:

$ uname -a
Linux shudu-desktop 2.6.32-38-generic #83-Ubuntu SMP Wed Jan 4 11:13:04 UTC 2012 i686 GNU/Linux
$ grep "model\ name" /proc/cpuinfo 
model name	: Intel(R) Core(TM)2 Duo CPU     E8400  @ 3.00GHz
model name	: Intel(R) Core(TM)2 Duo CPU     E8400  @ 3.00GHz
$ free -m
             total       used       free     shared    buffers     cached
Mem:          1995       1130        864          0         80        341
-/+ buffers/cache:        708       1286
Swap:         2491          0       2491
$ cat /etc/security/limits.conf
root soft nofile 65535
root hard nofile 65535
shudu soft nofile 65535
shudu hard nofile 65535

Comments are welcome :)


  1. Mike said,

    February 29, 2012 @ 1:02 am

    Numbers from 2.2 would have been interesting as well. Is 2.4 even faster than 2.2?

  2. peng said,

    February 29, 2012 @ 1:29 am

    Only static files?

  3. sdfsdfsf said,

    February 29, 2012 @ 1:36 am

    Wow, it’s amazing you can come to that conclusion without properly setting your apache configs to allow for more connections.

    Which mpm were you using, worker or event?
    From the documentation for event mpm: “the absolute maximum numbers of concurrent connections is: (AsyncRequestWorkerFactor + 1) * MaxRequestWorkers”

    Why did you set the maximum number of connections that will be processed simultaneously on apache to only 800? You capped its limit before it had a chance to perform. This should be something more like 4096.

    You might want to try these values and test again:

    ListenBackLog 5000
    StartServers 40
    ServerLimit 40
    ThreadsPerChild 103
    MinSpareThreads 500
    MaxSpareThreads 1000
    MaxRequestWorkers 4096

  4. Sam said,

    February 29, 2012 @ 1:44 am

    @sdfsdfsf I think it is a valid point that if the defaults don’t work well, then why are they the defaults.

  5. Isaac said,

    February 29, 2012 @ 1:47 am

    I am curious to see results with the new settings… would you mind posting them up if you get a chance?

  6. Guest said,

    February 29, 2012 @ 1:54 am

    @Sam, maybe because they are suited to the average target audience, which doesn’t really need to respond to 15k requests per second, I guess.

  7. sdfsdfsf said,

    February 29, 2012 @ 1:56 am


    The defaults ARE JUST DEFAULTS. You don’t serve 20,000 connections with defaults. EVER.

    Nginx’s default worker_connections is not 10240. So he’s not using the defaults.

    You would have to be a fucking idiot to benchmark a tool using only its defaults. The whole point of benchmarks is to assess their maximum performance, not to assess how terribly they’re configured with their defaults.

  8. Guest said,

    February 29, 2012 @ 1:57 am

    @Sam, and another thing:
    you shouldn’t compare nginx to apache by referring only to static-file performance; it’s wrong.

  9. useWeighttp said,

    February 29, 2012 @ 2:12 am

    You should use weighttp instead of ab, because “tadaa” you have a dual core processor.
    ab is single threaded and doesn’t scale nearly as well as weighttp.

    I get 32k req/sec with nginx on a much slower machine (2×1.3GHz). You have to optimize your TCP/IP stack (sysctl.conf) if you want to run a LOAD test.

    I know apache squeals like a stuck pig when the concurrency goes above 1000, but hey, somaxconn, ulimit and many other things should be tuned before doing such a benchmark. Otherwise it’s just a naive and wrong benchmark.

    Look and learn how it’s done: http://openbenchmarking.org/ and http://www.phoronix-test-suite.com/

  10. useWeighttp said,

    February 29, 2012 @ 2:16 am

    btw. I’m glad you benchmarked the new apache at all; otherwise I wouldn’t know that they had made some progress after a little more than a decade. I don’t think apache will play a role in the future of the web anymore.

  11. Aaron Greenlee said,

    February 29, 2012 @ 2:58 am

    You’ll get some traffic on this post. Please re-post with the recommendations. Good idea.



  12. Jonas B. said,

    February 29, 2012 @ 4:44 am

    You run the client and server on the same box? Your results will have _nothing_ in common with a networked situation. Normally lingering connections and slow clients are what determines the server’s performance in insanely high concurrency situations like this one.

    That you run one benchmark after the other gives the latter server a boost by serving it a warm cache. Not fair. Also, you deliberately give Apache a lower number of connections to work with. What can you possibly learn from this?

    This benchmark is so unfair I almost believe you had to put in some extra work to make it so. But in that case, why not twist the numbers even further?

  13. Anon said,

    February 29, 2012 @ 5:04 am

    Please redo the benchmarks with sdfsdfsf’s advice

  14. DenBrown said,

    February 29, 2012 @ 5:04 am

    @Jonas B.
    The Apache guys said in the press release that 2.4 is as fast as or even faster than nginx WITHOUT any numbers, tests, benchmarks, configs, etc. Just marketing bullshit!

    So, Apache adepts, please shut your dirty mouths.

  15. learnbot said,

    February 29, 2012 @ 5:29 am

    Hi, why don’t you properly configure a test environment? Using localhost is out of the question for any benchmark other than testing the lo driver. Thanks :D

  16. guest said,

    February 29, 2012 @ 9:55 am

    Is the server multi-core? Does that mean in this test nginx was using only one core while apache was using all cores?

  17. Daniel Lyons said,

    February 29, 2012 @ 1:48 pm

    I’m surprised nobody has submitted a link to Zed Shaw explaining how to do statistics yet. It’s worth a read.

  18. Big Admin said,

    February 29, 2012 @ 9:24 pm

    If you don’t want to look like a fool, you should run a proper benchmark with the recommended values, not an unfair and skewed one like this. I actually believe you meant well; you just did a very poor job or are incompetent. Feel free to prove otherwise?

  19. Joshua said,

    February 29, 2012 @ 9:52 pm

    Thanks for your advice. To be clear, Apache 2.4 was using the event MPM.
    The options you provided sound reasonable, except ListenBackLog 5000 (to be fair, Nginx’s listening backlog would have to be tuned as well).

    Thanks. I’ll give weighttp a try.

    @Jonas B.:
    I ran the benchmark on localhost because my computer is in a large LAN, so the network might not be stable. Running Nginx after Apache didn’t give it a warm cache, because the files they served were in different document roots!

    Guys, thanks for your suggestions. But calm down, I’ll redo the benchmark.

  20. Joshua said,

    February 29, 2012 @ 10:09 pm

    Dear Apache expert, the options you suggested look a little bit hilarious.

    $ sudo bin/httpd -t
    AH00507: WARNING: ThreadsPerChild of 103 exceeds ThreadLimit of
    64 threads, decreasing to 64.
    To increase, please see the ThreadLimit directive.
    AH00515: WARNING: MaxRequestWorkers of 4096 would require 64 servers and
    would exceed ServerLimit of 40, decreasing to 2560.
    To increase, please see the ServerLimit directive.
    Syntax OK

  21. Jim said,

    February 29, 2012 @ 11:38 pm

    It’s no surprise that Nginx feels threatened when their only story is “fastest web server”… How is “fastest” measured? And does that even make sense? There are use cases where Nginx is the best choice. There are also use cases where it’s not, and Apache httpd is. All we are saying is that with Apache 2.4, it’s more applicable and usable in some of the old “Nginx-only” use cases. That’s it. And there are benchmarks which show the increased performance, both in concurrency and latency in Apache 2.4 as compared with Nginx. But again, so what? Benchmarks are not real world situations, and anyone foolish enough to base their selection of a web server based *just* on benchmarks are hardly experts or even people I would bother to listen to.

  22. Cliff Wells said,

    March 1, 2012 @ 2:38 am


    Jim, Nginx’ claim-to-fame isn’t only “fastest”. In fact, I don’t think that is really Nginx’s main strength. Rather, Nginx has very stable memory utilization due to its asynchronous architecture. This is where Apache has traditionally had problems. Nginx can be run in a VPS with only 64MB of RAM and no swap and still serve hundreds of concurrent requests. Apache will likely never be able to do this. While such a deployment scenario might not be terribly interesting in itself, it does reveal why Nginx has been much more scalable than Apache. It’s not (only) because of speed, but because a single Nginx server can handle a much higher concurrency than Apache while consuming only small amounts of RAM.
    I’m sure anyone with long Apache experience has seen the “thundering herd” syndrome, where under heavy load, Apache forces the OS into swap, which in turn slows the system, which keeps requests alive for longer, forcing more swap utilization and slowing, until the system either slows to a crawl or crashes. This is the real success story of Nginx, not raw speed.

    I certainly don’t disagree with you that both Apache and Nginx have niches where they excel and that to choose one or the other based on a single criteria such as speed is foolhardy (unless speed happens to be the number one priority).

    In any case, “Nginx feels threatened” is a bit of an overstatement. The Apache folks made several unsubstantiated claims about speed (which tells me that perhaps *they* are the ones who feel threatened on this front) and now people are attempting to validate those claims. I expect we will see much testing on this front over the next few weeks from both camps and eventually some interesting results will emerge (along with some not-so-interesting ones).

  23. Jim said,

    March 1, 2012 @ 4:28 am

    “unsubstantiated “? Hardly. And as far as “claim-to-fame”, most of the discussions on Nginx simply mention and focus on rps only. As mentioned, concurrency is one way to measure “performance”, but it’s not the only way and, for many, many people out there, it isn’t the primary issue in the least. And as far as Apache “forcing” on OS into swap… give me a mismatched config, and I can make *any* daemon “force” the OS into swap.

  24. perusio said,

    March 1, 2012 @ 8:24 am

    This comparison isn’t fair, since last time I checked the Apache stable version is 2.2.22. So the comparison should be against that version, or alternatively against Nginx 1.1.16.

    So much heartburn among the Apache people. They regularly flood the #nginx timeline with “speed” propaganda, but cry fire in a crowded theater whenever someone tries to clarify what’s really behind such claims.

    Blatant ad hominem like some of the Apache “experts” employ above is always the hallmark of a lack of arguments.

    Why don’t you invest energy in stuff like fixing the braindead XML-like Apache httpd config language? That’s a start for some positive action.

  26. halcyonCorsair said,

    March 7, 2012 @ 4:47 pm


    Apache 2.4 IS the current stable version, so his comparison is fair.

  27. angrybee said,

    March 8, 2012 @ 7:44 pm

    Rather than doing your benchmark again, you may want to consider not doing benchmarks at all, because you obviously don’t know that they should be repeatable and comparable. Please don’t waste your time anymore.

  28. Lennie said,

    March 9, 2012 @ 6:23 pm

    Why did you turn off KeepAlive? Pretty much no one does that in real life, so why did you benchmark it that way?

  29. tinyray said,

    March 11, 2012 @ 8:03 am

    For people who have only ever worked with apache, it’s hard to accept what your benchmark has shown.

  30. Pierre said,

    March 14, 2012 @ 4:22 pm

    For a comparative benchmark using weighttp (a faster, more capable, multi-threaded ApacheBench), see:


    Nginx uses more CPU and memory (and is slower) than other Web servers (Lighty, G-Wan).

    For dynamic contents, see: http://gwan.ch/benchmark

    Tengine clearly shows that Nginx’s design is limited, and not only in performance: the lack of clear, simple interfaces reflects a convoluted architecture, which translates into poor scalability and ridiculous effort needed to write modules.

    See how an FLV module can be written in 16 lines of code:


    Apache needs 137 lines of code:


    Lighttpd needs 352 lines of code:


    …and Nginx needs 257 lines of code:


    To shine in today’s environment, Nginx needs a whole rewrite, not mere patches.

  31. dk said,

    March 19, 2012 @ 7:56 pm

    Pierre, I was super impressed with G-WAN’s stats and benchmarks, until I actually ran it on my live server, where nginx kept the CPU down to a minimum and gwan hit 100% during the time I used it.
    Which wasn’t very long, as the site went sloooow when replacing nginx with gwan for my static content. I will post a blog one of these days so you can check it out and see what went wrong :)

    Luckily my config for gwan can’t be wrong, as there is none ;P which would be a good thing, if it were actually as fast as advertised.

  32. wangbin579 said,

    March 20, 2012 @ 6:17 pm

    Maybe tcpcopy is the best tool to compare the performance of apache 2.4 and nginx.


  33. virtualeyes said,

    March 21, 2012 @ 6:25 am

    15K requests per second, hmmm, 99.99% of the sites on the net do nowhere near this kind of traffic.

    What I am more interested in is, how does Apache 2.4 stack up against Nginx under “normal” load scenarios?

    Realistically, both Apache and Nginx perform just fine in the vast majority of use cases, but given that either one generally is more than sufficient, which one is the LEAST latent per request under modest load? In other words, handling a whopping 10 requests per second what is the difference in response times per request in milliseconds?

    Secondly, when using Apache or Nginx as a proxy/load balancer, which one has the best performance (again, least latent) in connecting to the application server (e.g. Jetty or Tomcat)?

    These are the kinds of things I’m interested in as I don’t run Twitter, but am interested in trimming down response times in the client browser server loop.

    Basically, benchmarks mean little, real-world scenarios mean everything.


  34. Ryan said,

    March 22, 2012 @ 10:33 pm


    Apache sucks up memory like a sponge, and it’s still considerably slower and less scalable than Nginx. Not to mention that its configuration language is uglier, its architecture is worse, and it is bloated with junk features like htaccess.

    I put it to you that Apache is obsolete software. There are *no* cases where Apache is a “better fit” any more. Nginx has you beat on all fronts by a very wide margin. You are clearly butthurt about that.

  35. Pierre said,

    April 1, 2012 @ 11:25 pm


    You wrote:

    “Pierre I was super impressed with gwan stats and benchmarks … Will post a blog one of these days so you can check it out and see wth went wrong”

    Ten days later, and you have not bothered to tell the world (or me by email) how Nginx performed so well.

    I am forced to conclude that disclosing your (so far undocumented G-WAN / Nginx) results would undermine their credibility.

    It is not the first time that Web server authors resort to groundless claims when faced with G-WAN’s open-sourced tests.

    If you want to talk technology (rather than FUD) then drop me a line and we will make progress on both sides.

    The rest is crap.

  36. c said,

    April 11, 2012 @ 12:27 am

    The guy that wrote this article is a butt hole.

  37. usr said,

    April 26, 2012 @ 3:19 pm


  38. wangbin579 said,

    April 26, 2012 @ 5:51 pm


  39. Laravel said,

    June 12, 2012 @ 5:59 pm

    Please update your benchmark.

  40. Cliff Wells said,

    June 24, 2012 @ 4:48 am


    It’s probable that someone could misconfigure any server badly enough to send a system into swap, but let me ask you this: what is the proper configuration for Apache that will allow it to serve 200 concurrent requests in a 128MB VPS? I don’t believe there is one, but I’m willing to defer to your expertise. If you need an Nginx config to compare with, I’d recommend the default.

    By “unsubstantiated”, I mean there were only one set of benchmarks from anyone in the ASF (you, I assume, since I recognize your talking-points). Unfortunately these “benchmarks” had the same flaws that are being criticized here. You used the default Nginx config, you didn’t benchmark memory/CPU utilization, etc. So yes, I do mean “unsubstantiated”. I was actually really interested to see how far 2.4 might have advanced Apache, but your benchmarks raised more questions than they answered.

    As far as the “real world” goes, there’s plenty of real-world examples (GIYF) where companies have either put Nginx in front of Apache, or switched to Nginx entirely, and seen significant benefits in higher throughput, lower latency, and reduced resource utilization.

    In any case, your animosity here is badly misplaced. Nginx and Apache happen to work quite well together. Nginx makes Apache better. Both Apache and Nginx are working to improve the state of open source HTTP servers (and the ASF is doing much more). It seems likely that Apache will never catch Nginx in speed and scalability, and it seems unlikely that Nginx will ever have the wealth of features present in Apache. I expect there will be room for both of them, probably often on the same system.

  41. Cliff Wells said,

    June 26, 2012 @ 8:13 am


    In my own testing of Apache 2.4.2, I found that the default settings for event MPM worked the best. Your suggestions (once I’d adjusted them to make them accepted by Apache) actually reduced requests per second and throughput, while somehow increasing system load (with your suggestions, 10K connections caused load of over 150 on an 8 core system, while defaults caused load of “only” 24. For reference, Nginx caused load of 0.4 while serving the file 2x faster).

    FWIW, event MPM seems far superior for serving lots of small requests than worker MPM, but still lags behind Nginx significantly, in req/s, throughput, and resource utilization. It’s clear that 2.4 isn’t changing the landscape nearly as much as Jim has suggested in press releases. Nginx will still be the clear winner as an edge server, and Apache will continue to power the application layer for most.

  42. Cliff Wells said,

    June 26, 2012 @ 8:33 am


    I agree that defaults are rarely optimal for benchmarking. Unfortunately I suspect Jim might take umbrage with being called a “fucking idiot”, since he did the same thing in his benchmarks:


    Ironically, the defaults here are even worse for Nginx, since it uses one worker (and hence a single core) by default. Jim didn’t clarify what hardware he used (a “Xeon”), but I assume it’s multi-core since single-core Xeons haven’t been made in years. That leaves me with the impression that Nginx was using a single core and matched the speed of Apache using multiple cores (which would be at least 4 in any semi-modern Xeon). He also somehow overlooked resource utilization, but that’s a relatively minor nit in comparison.

  45. CSRedRat said,

    July 12, 2012 @ 4:52 am

    Nginx rules!

  46. Peter said,

    July 13, 2012 @ 4:07 am

    Pierre, really, I think your G-WAN would fare poorly at handling dynamic pages. So hello world is what you’re good at?


  47. Ray said,

    July 13, 2012 @ 2:05 pm

    Thanks for comparing both servers. I will run a similar test today and share my results. Btw, you should delete the last few comments; they are spam :)

    Regards ray

  49. greg said,

    August 11, 2012 @ 1:17 am

    Lol, why would anyone say apache is faster than NGINX? What a joke… nginx is done sending the response by the time apache spawns a new dumb child process to handle the request.

  62. matthew said,

    January 8, 2013 @ 2:46 am

    ~7% of Americans (about 10 million people) suffer from red/green colorblindness.
    Just saying… it would be nice to be able to see what the graphs are trying to communicate.

  63. SAM said,

    January 15, 2013 @ 4:57 pm

    Can you please provide a detailed tutorial on how to move from apache to nginx on a server, and how to convert .htaccess rewrite rules from apache to nginx?
    My whole website is based on apache, and I am not able to take advantage of nginx and lighttpd.

  64. Christian said,

    January 22, 2013 @ 4:17 pm

    I am running a couple of websites, and have peak times where one such site serves around 150 dynamic page requests (perl scripts, to be precise).
    Being a web-based game, it has a lot of graphics too. And it is the server responsible for serving the images that I wish to mention, as it is the most relevant one.

    I tried using Apache, configured to serve 1000 requests concurrently, utilizing all 8 cores of the system, and maxing out my 4 GB of RAM. I’ve tried several MPMs, of course. Unfortunately, during my peak hours, apache was not able to keep up at all, and the entire site slowed down and was more or less useless for a lot of users.
    And yes, I’ve been working with many apache-servers since 1999.

    Then I gave it a go with Nginx…..
    1 worker_process, 1024 connections, and hey presto, done deal.
    It did this on 1 core, using a whopping 18 megabytes of RAM, and no more than 1% CPU….

    In real-life scenarios, there is no doubt that nginx is much faster, much more memory efficient and lightweight in all measurable aspects, when serving static content, than apache (any version).

    Practically speaking, I bumped the nginx server up to 4096 connections, still with only 1 worker_process though, and it took over serving all images for all the sites I have, freeing up 2 servers in the process. Furthermore, as the server never goes beyond 3% CPU, and still has nearly all its RAM available, I can keep throwing work at it.

    Truth be told, on my backend server, where the Perl scripts are running, I still use Apache and mod_perl, ’cause that’s what I always have done. Some day I might be brave enough to try Perl on nginx; I just need to find an equivalent to Apache::DBI in Nginx, and I should be good to go.

    - Christian

  65. richard said,

    February 28, 2013 @ 7:02 am

    The author of this article doesn’t even specify which Apache MPM he is using. Not exactly a fair comparison between Nginx and Apache.

    Apache is multi-process by default for a reason. The results here are meaningless.

  66. Craig said,

    March 3, 2013 @ 4:22 am

    “and another thing:
    you shouldn’t compare nginx to apache and refer only to the static files performance, its wrong.”

    No, dickhead, a static file benchmark is a static file benchmark. He didn’t misrepresent it; he just measured a specific thing within a set of constraints. That’s *exactly* what a benchmark is, you knuckle-dragging fucktard.

  67. suuny said,

    March 8, 2013 @ 8:48 pm

    I was thinking that nginx is faster than apache for static files, but that for dynamic files like PHP, Python, etc., apache is faster.
    You should try this with PHP.

  70. marcus said,

    July 3, 2013 @ 12:10 am

    How did you calculate Requests/sec on the Nginx side?

  71. java nginx said,

    July 6, 2013 @ 2:37 pm

    I doubt that Apache 2.4 is really faster than NGINX overall. But I am biased, because I am an NGINX fanatic. However, improved performance in Apache is always a good thing.

  72. seayar said,

    August 29, 2013 @ 5:02 pm

    It would be best to also list resource usage, e.g. CPU, RAM, and I/O.

  73. Carsten Schipke said,

    September 25, 2013 @ 2:47 pm

    - Why are people bashing on about default settings when he didn’t use defaults at all, but simply wrong customizations?

    - The first major issue with this benchmark is introduced by “I’ve tried my best to fully ‘unleash the power of Apache’”:

    How are nginx and apache compiled/installed? Defaults generated by a configure script depend on your system/environment and don’t say anything at all.
    Most people who compile software like this end up with far slower packages than the system’s binary distribution would have provided.

    Especially if we talk about benchmarking static files over HTTP: that’s 95% system calls, composed by the webserver and triggered & executed by the system. What about the available/active polling/select/rtsig mechanisms? Socket limits & timeouts, etc.? Which C/C++ standard libraries are nginx/apache linked against? Which C/C++ versions/standards? Which further pre-compiled/system-side libraries & versions have been used? Which compiler? Which compile-time optimizations? Link-time optimizations? Supported CPU instruction sets, etc.? Which modules are active in either webserver? E.g. there is a static-file cache in nginx, which could also be enabled for apache.

    We could go on with an endless list… the fact is the benchmark is wrong; right are the people who say: use case & knowledge matter.

    In this particular use case (static files over HTTP), it’s no problem to compile an apache that gets the same results as you got for nginx.

    I have had as many setups where nginx was faster as setups where apache was faster. I also do not see an issue with a well-configured apache and 64MB of RAM. Most people use dynamic language extensions like mod_php, which add to it, as well as tons of unnecessary modules that could be disabled; the same goes for nginx.

    Regarding configuration, style, standards and whatnot: keep in mind that apache IS decades old, and there is not much software with a similar lifespan & durability that still matters… I like both products, and there may be use cases where apache’s time has come, but there also might be experts who see quite the opposite given their environments & dependencies & requirements etc.

  75. Para said,

    January 9, 2014 @ 12:41 am

    The best usage of both daemons is to run nginx as a reverse proxy in front of apache. nginx or apache alone is slower than the proxied setup ;)

  78. President Bush comes said,

    February 19, 2014 @ 4:44 pm

    hmm… has anyone actually read the Nginx source code?
