This Ghost blog runs on a tiny Raspberry Pi (Model B) with 512 MB of memory that sits under my desk, connected to a 30 Mbit down / 1 Mbit up broadband cable connection.
Given that background, I (and some people on Reddit) found it impressive that the page load time is relatively fast. I have been measuring the response time of this blog with UptimeRobot, and it consistently stays between 400 and 500 ms.
Now that alone is not too impressive, so I used ab (ApacheBench) to benchmark the page/server. I performed three tests with 1000 requests each, at concurrency levels of 1, 5, and 10.
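If you want to reproduce the tests yourself, the three runs boil down to a small loop. This is just a convenience sketch; the output file names are my own addition and were not part of the original runs:

for c in 1 5 10; do
    # 1000 requests against the front page at the given concurrency level
    ab -n 1000 -c $c devdiary.io/ > "ab-c$c.txt"
done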
Results
Test 1: ab -n 1000 -c 1 devdiary.io/
Server Hostname: devdiary.io
Server Port: 80
Document Path: /
Document Length: 4639 bytes
Concurrency Level: 1
Time taken for tests: 278.838 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 4963000 bytes
HTML transferred: 4639000 bytes
Requests per second: 3.59 [#/sec] (mean)
Time per request: 278.838 [ms] (mean)
Time per request: 278.838 [ms] (mean, across all concurrent requests)
Transfer rate: 17.38 [Kbytes/sec] received
Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:      19    26   3.7     25      79
Processing:  232   252  36.9    238     558
Waiting:     220   239  36.2    225     509
Total:       256   279  37.0    264     582
Percentage of the requests served within a certain time (ms)
50% 264
66% 268
75% 279
80% 298
90% 311
95% 327
98% 387
99% 483
100% 582 (longest request)
Test 2: ab -n 1000 -c 5 devdiary.io/
Concurrency Level: 5
Time taken for tests: 228.952 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 4963000 bytes
HTML transferred: 4639000 bytes
Requests per second: 4.37 [#/sec] (mean)
Time per request: 1144.762 [ms] (mean)
Time per request: 228.952 [ms] (mean, across all concurrent requests)
Transfer rate: 21.17 [Kbytes/sec] received
Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:      22    28  31.8     26    1020
Processing:  666  1115 132.3   1061    1723
Waiting:     653  1101 130.6   1048    1723
Total:       693  1143 135.6   1089    2096
Percentage of the requests served within a certain time (ms)
50% 1089
66% 1115
75% 1142
80% 1202
90% 1307
95% 1440
98% 1604
99% 1629
100% 2096 (longest request)
Test 3: ab -n 1000 -c 10 devdiary.io/
Concurrency Level: 10
Time taken for tests: 229.406 seconds
Complete requests: 1000
Failed requests: 3
(Connect: 0, Receive: 0, Length: 3, Exceptions: 0)
Write errors: 0
Total transferred: 4962814 bytes
HTML transferred: 4638811 bytes
Requests per second: 4.36 [#/sec] (mean)
Time per request: 2294.064 [ms] (mean)
Time per request: 229.406 [ms] (mean, across all concurrent requests)
Transfer rate: 21.13 [Kbytes/sec] received
Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:      21    41 212.2     26    3034
Processing:  781  2247 260.2   2167    2997
Waiting:     768  2234 259.9   2155    2984
Total:       815  2289 323.7   2193    5282
Percentage of the requests served within a certain time (ms)
50% 2193
66% 2207
75% 2334
80% 2460
90% 2664
95% 2793
98% 2883
99% 2996
100% 5282 (longest request)
Summary
With a concurrency level of 1 (no concurrency), the page load consistently stays under 600 ms, which I find pretty impressive. There are plenty of WordPress blogs on high-end servers out there that will not even send you a single byte in that time. The comparison between WordPress and Ghost is of course not fair, since WordPress does a lot more than just deliver a blog these days, but it is still the most widely used blogging platform.
When we go up to a concurrency level of 5, things start to look a bit different. This is not unexpected, and most requests are still served in less than 2 seconds, but as a user you would definitely notice the difference.
At a concurrency level of 10 the page load time is far beyond what is acceptable, but it at least shows that the Raspberry Pi can handle this much load without crashing.
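One way to read these numbers: the requests-per-second figure barely moves between concurrency 5 and 10 (4.37 vs. 4.36), so the Pi is essentially saturated at around 4.4 requests per second, and the mean latency simply grows with the number of concurrent clients. A quick sanity check of that reading (taking ab's mean "Time per request" to be roughly concurrency divided by throughput):

awk 'BEGIN { printf "c=5:  %.0f ms\n",  5 / 4.37 * 1000
             printf "c=10: %.0f ms\n", 10 / 4.36 * 1000 }'

That prints roughly 1144 ms and 2294 ms, which matches the reported "Time per request (mean)" values above almost exactly.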