In most cases web applications are well covered with unit tests, covering functional requirements, and with acceptance tests. Unit tests combined with Continuous Integration can improve the delivery rate of a team or company, and at the same time remove the need for manual testing, which is error prone. CI tools are widely available, either self-hosted, like Jenkins, or cloud based, like CircleCI or Travis, if the project source does not need to be kept as private. In any case, these tools offer a wide range of features, from build and test scheduling to automated deployment and configuration. I think for most software engineers these are familiar concepts and possibly part of the daily work routine.
One important part of most web based applications is not covered by these tools: the benchmarking or load testing of new features and of the application in general. It is obvious that CI tools are not meant for this task, but a good understanding of where the application's bottlenecks and limitations are is crucial. In most cases this is clearly an SRE or DevOps task, but I think it is good for all engineers to understand their changes and the consequences of them (if any). There is also another important factor, which is the price of keeping a system up. Many times I have heard or read that “…the application is not busy, so it does not matter…”. I believe that it does matter, even for apps that are not busy, because the development time/cost is a small percentage of the system's total lifetime cost. There is a really good breakdown of this idea in Site Reliability Engineering – How Google Runs Production Systems.
For a developer there are many tools to benchmark new features in a standalone setup in most modern stacks. The tricky part, I think, is being able to measure and understand the capabilities of the application as a whole. In this post I will stick to the HTTP load testing tools I have found useful and use on a regular basis.
The project and test environment
The project I run the tests on is available on GitHub. The application can be considered a “toy” project I wrote with a friend, Ciaran. It is nothing more than a simple to-do app. The backend (a REST API) is written in Go with MongoDB, and the front-end is AngularJS (1.x). The application is not optimized and does not include any advanced caching mechanism like Varnish for the front-end or Redis for the backend. It is clearly visible that Go's garbage collection, and the language in general, is pretty amazing in terms of CPU and memory usage relative to the load.
I ran all the load tests presented here on a KVM guest with 4x Intel E5-1650 v3 cores, 4 GB of ECC RAM, and a 10 Gbit NIC, running Ubuntu 16.04 server.
Apache Bench – ab
Apache Bench comes bundled with the Apache web server (httpd on CentOS/Red Hat) and is part of apache2-utils on *nix operating systems. This means that for LAMP stack developers, ab should be part of the setup already. Most simple loads can be simulated with ab, as it supports the standard HTTP methods with custom header options and has concurrency parameters. A downside for me is that the tests are defined in terms of 'n' requests rather than time or throughput.
Run ab with 50 concurrent connections and 1000 requests, with an authentication header, on a GET REST endpoint.
ab -c 50 -n 1000 -l -H "Authorization: Bearer c9d1db41a5b5c227c639c2f5f79c69151831537ccee3725a169acedebc94cd7f" http://127.0.0.1:8080/v0.1/user/get/57ee7ff124f7cf6738b221c2

###################################
## The result ##
###################################

Finished 1000 requests

Server Software:        go-todo-api/v0.1
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /v0.1/user/get/57ee7ff124f7cf6738b221c2
Document Length:        Variable

Concurrency Level:      50
Time taken for tests:   0.651 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      441000 bytes
HTML transferred:       233000 bytes
Requests per second:    1537.13 [#/sec] (mean)
Time per request:       32.528 [ms] (mean)
Time per request:       0.651 [ms] (mean, across all concurrent requests)
Transfer rate:          661.98 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   10   5.6      9      31
Processing:     1   22  12.3     19      61
Waiting:        1   17  10.6     14      52
Total:          1   32  11.4     29      70
Sending POST requests with Apache Bench: 50 concurrent connections and 1000 requests, with an authentication header, on a POST REST endpoint. The POST JSON body comes from a file called test_post.json.
ab -c 50 -n 1000 -l -H "Authorization: Bearer c9d1db41a5b5c227c639c2f5f79c69151831537ccee3725a169acedebc94cd7f" -T 'application/json' -p test_post.json http://127.0.0.1:8080/v0.1/task/new

###################################
## The result ##
###################################

Server Software:        go-todo-api/v0.1
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /v0.1/task/new
Document Length:        Variable

Concurrency Level:      50
Time taken for tests:   0.816 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      280000 bytes
Total body sent:        346000
HTML transferred:       67000 bytes
Requests per second:    1226.24 [#/sec] (mean)
Time per request:       40.775 [ms] (mean)
Time per request:       0.816 [ms] (mean, across all concurrent requests)
Transfer rate:          335.30 [Kbytes/sec] received
                        414.33 kb/s sent
                        749.63 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   10   7.1      9      43
Processing:     4   30  13.4     28      75
Waiting:        4   23  11.3     23      68
Total:         10   40  13.4     38     106
Siege
Siege is a bit more focused on load testing and benchmarking, with parameters for time-based load testing as well as a number of requests. Siege has almost all the configuration options of Apache Bench and also supports configuration files. The configuration file can come in handy if automated tests have to be conducted, so not much scripting is necessary to perform multiple tests. The configuration file is called .siegerc and a template can be generated by executing “siege.config”. Another advantage is that it is available through official packages on most *nix systems. Another really nice feature is that it is possible to pass multiple URLs into a single run, where the results are shown as an average.
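As a sketch of the multiple-URLs feature: Siege can read a file of targets, one per line, and report the averaged results. The file name urls.txt below is my own choice, and the siege invocation is guarded so the snippet does not fail on machines where Siege is not installed.

```shell
# Hypothetical urls.txt: one target per line, using the endpoints from this post.
# POST targets use Siege's "URL POST < file" syntax.
cat > urls.txt <<'EOF'
http://127.0.0.1:8080/v0.1/user/get/57ee7ff124f7cf6738b221c2
http://127.0.0.1:8080/v0.1/task/new POST < test_post.json
EOF

# -f reads the target list; run only if siege is actually installed.
command -v siege >/dev/null && siege -c50 -t10s -f urls.txt || true
```

The averaged summary at the end looks the same as for a single-URL run, which makes it easy to diff results between test rounds.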
I found that using it for applications with a UI is really useful because of the details available during execution: one can see the HTTP requests, with status codes, conducted during a test.
Siege is available on GitHub and has easy-to-follow documentation here.
The installation of Siege on Ubuntu 16.04 is really straightforward.
apt install siege -y
Run Siege with 50 concurrent connections for 10 seconds, with an authentication header, on a GET REST endpoint.
siege -c50 -t10s -H "Authorization: Bearer 5345c6cbdee4462a708d51194ff5802d52b3772d28f15bb3215aac76051ec46d" http://127.0.0.1:8080/v0.1/user/get/57ee7ff124f7cf6738b221c2

###################################
## The result ##
###################################

** SIEGE 3.0.8
** Preparing 50 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                    935 hits
Availability:                 100.00 %
Elapsed time:                   9.66 secs
Data transferred:               0.21 MB
Response time:                  0.00 secs
Transaction rate:              96.79 trans/sec
Throughput:                     0.02 MB/sec
Concurrency:                    0.23
Successful transactions:         935
Failed transactions:               0
Longest transaction:            0.03
Shortest transaction:           0.00
Sending POST requests with Siege: 50 concurrent connections for 10 seconds, with an authentication header, on a POST REST endpoint. The POST JSON body comes from a file called test_post.json.
siege -c50 -t10s -H "Authorization: Bearer 5345c6cbdee4462a708d51194ff5802d52b3772d28f15bb3215aac76051ec46d" 'http://127.0.0.1:8080/v0.1/task/new POST < test_post.json'

###################################
## The result ##
###################################

** SIEGE 3.0.8
** Preparing 50 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                    972 hits
Availability:                 100.00 %
Elapsed time:                   9.84 secs
Data transferred:               0.06 MB
Response time:                  0.00 secs
Transaction rate:              98.78 trans/sec
Throughput:                     0.01 MB/sec
Concurrency:                    0.33
Successful transactions:         972
Failed transactions:               0
Longest transaction:            0.02
Shortest transaction:           0.00
WRK
Wrk is a modern tool for HTTP load testing, with a really lightweight client. It supports all the necessary benchmark parameters, like concurrency and keep-alive, and the execution can be bound to threads. This feature is not available in ab or Siege, and even without looking into the implementation one can feel that it might be more suitable for large-scale loads. I also really like that the output is very compact, and it has built-in Lua scripting support.
Documentation on installation is available here and the source is on GitHub.
Run wrk with 5 concurrent connections and 5 threads for 10 seconds, with an authentication header, on a GET REST endpoint.
wrk -t5 -c 5 -d10s -H "Authorization: Bearer 5345c6cbdee4462a708d51194ff5802d52b3772d28f15bb3215aac76051ec46d" "http://127.0.0.1:8080/v0.1/user/get/57ee7ff124f7cf6738b221c2"

###################################
## The result ##
###################################

Running 10s test @ http://127.0.0.1:8080/v0.1/user/get/57ee7ff124f7cf6738b221c2
  5 threads and 5 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.33ms    1.56ms  38.41ms   92.36%
    Req/Sec     0.94k   155.33     1.38k    69.20%
  47057 requests in 10.02s, 19.79MB read
Requests/sec:   4697.10
Transfer/sec:      1.98MB
Sending POST requests with wrk: 5 concurrent connections and 5 threads for 10 seconds, with an authentication header, on a POST REST endpoint. The POST request body is passed with the “-s” flag from test_post.lua.
# Create the test_post.lua
echo -e 'wrk.method = "POST"
wrk.body = "{\"name\":\"test task\",\"content\":\"test task content\",\"status\":0,\"importance\":0,\"until\":\"12/03/2017\",\"ttl\":1200}"
wrk.headers["Content-Type"] = "application/json"
' >> test_post.lua

# Run the test
wrk -t5 -c 5 -d10s -H "Authorization: Bearer 5345c6cbdee4462a708d51194ff5802d52b3772d28f15bb3215aac76051ec46d" -s test_post.lua http://127.0.0.1:8080/v0.1/task/new

###################################
## The result ##
###################################

Running 10s test @ http://127.0.0.1:8080/v0.1/task/new
  5 threads and 5 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.54ms    1.74ms  26.31ms   91.36%
    Req/Sec     823.93   181.74     1.22k    64.20%
  41020 requests in 10.01s, 10.95MB read
Requests/sec:   4098.19
Transfer/sec:      1.09MB
Gobench
Gobench is very similar in execution strategy to wrk. As you might have guessed, it is written in Go, and its goal is to be able to execute tests at a higher concurrency rate. The biggest problem with Siege and Apache Bench is that they consume a lot of resources to conduct the tests, on top of the obvious network load: they map to CPU cores and run the tests on separate ports. These are limited resources, so to generate bigger loads the executor has to scale as well, which is not really an option in some companies “just for testing”. There is a good comparison to Siege in the README of the GitHub repository. That said, Siege offers a different type of solution in my opinion.
The installation process requires Go to be installed, which is not really a downside for me as I write Go anyway. For most *nix OSes there are official repositories available, and the Windows installation is straightforward as well (Go install instructions are here).
Building gobench
GOPATH=/tmp/ go get github.com/valyala/fasthttp
GOPATH=/tmp/ go get github.com/cmpxchg16/gobench

# Move to bin for accessibility
cp /tmp/bin/gobench /usr/local/bin
Run gobench with 50 concurrent connections for 10 seconds, with an authentication header, on a GET REST endpoint.
gobench -k=true -c 50 -t 10 -auth "Bearer 5345c6cbdee4462a708d51194ff5802d52b3772d28f15bb3215aac76051ec46d" -u http://127.0.0.1:8080/v0.1/user/get/57ee7ff124f7cf6738b221c2

###################################
## The result ##
###################################

Dispatching 50 clients
Waiting for results...

Requests:                       36697 hits
Successful requests:            36697 hits
Network failed:                     0 hits
Bad requests failed (!2xx):         0 hits
Successful requests rate:        3669 hits/sec
Read throughput:              1618381 bytes/sec
Write throughput:              779057 bytes/sec
Test time:                         10 sec
Sending POST requests with gobench: 50 concurrent connections for 10 seconds, with an authentication header, on a POST REST endpoint. The POST body is passed with the “-d” flag from test_post.json.
gobench -k=true -c 50 -t 10 -auth "Bearer 5345c6cbdee4462a708d51194ff5802d52b3772d28f15bb3215aac76051ec46d" -u http://127.0.0.1:8080/v0.1/task/new -d test_post.json

###################################
## The result ##
###################################

Dispatching 50 clients
Waiting for results...

Requests:                       35626 hits
Successful requests:            35626 hits
Network failed:                     0 hits
Bad requests failed (!2xx):         0 hits
Successful requests rate:        3562 hits/sec
Read throughput:               997528 bytes/sec
Write throughput:             1309309 bytes/sec
Test time:                         10 sec
Vegeta
Vegeta is a load testing tool written in Go. The application is lightweight and offers plenty of parameters. What I really like about this tool is that the authors thought about real use cases during development, so it supports piping and, generally, the use of other handy *nix utilities. There is a really cool feature for visualizing the results of a test with the “-reporter=plot” flag, which produces HTML output with “-output=results.html”.
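To sketch how this composes: the attack results can be written to a file once and then fed to several reporters without re-running the load. The targets.txt file name is my own choice, the flags match the vegeta version used in this post, and the vegeta calls are guarded so the snippet is harmless where vegeta is not installed.

```shell
# Hypothetical targets.txt: one "METHOD URL" target per line.
cat > targets.txt <<'EOF'
GET http://127.0.0.1:8080/v0.1/user/get/57ee7ff124f7cf6738b221c2
EOF

# Run only if vegeta is installed: keep the raw results so multiple
# reporters can consume the same attack output.
if command -v vegeta >/dev/null; then
  vegeta attack -targets=targets.txt -duration=10s > results.bin
  vegeta report < results.bin                                      # text summary
  vegeta report -reporter=plot -output=results.html < results.bin  # HTML latency plot
fi
```

Keeping results.bin around also makes it easy to compare two test rounds later, which the n-requests tools above cannot do without re-parsing their text output.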
The source is available on GitHub, with compiled binaries for various OSes.
# Download the binary, un-tar and move into /usr/local/bin
cd /tmp && \
wget https://github.com/tsenart/vegeta/releases/download/v6.1.1/vegeta-v6.1.1-linux-amd64.tar.gz && \
tar -xvzf vegeta-v6.1.1-linux-amd64.tar.gz && \
rm -rf vegeta-v6.1.1-linux-amd64.tar.gz && \
mv vegeta /usr/local/bin
Vegeta GET load test for 10 seconds with 2 cores.
echo "GET http://127.0.0.1:8080/v0.1/user/get/57ee7ff124f7cf6738b221c2" | vegeta -cpus 2 attack -duration=10s -header="Authorization: Bearer 5345c6cbdee4462a708d51194ff5802d52b3772d28f15bb3215aac76051ec46d" | vegeta report

###################################
## The result ##
###################################

Requests      [total, rate]             500, 50.10
Duration      [total, attack, wait]     9.981196626s, 9.979999882s, 1.196744ms
Latencies     [mean, 50, 95, 99, max]   1.280979ms, 1.153943ms, 1.96403ms, 3.822774ms, 7.16895ms
Bytes In      [total, mean]             116500, 233.00
Bytes Out     [total, mean]             0, 0.00
Success       [ratio]                   100.00%
Status Codes  [code:count]              200:500
Vegeta POST load test for 10 seconds with 2 cores.
echo "POST http://127.0.0.1:8080/v0.1/task/new" | vegeta -cpus 2 attack -body=test_post.json -duration=10s -header "Authorization: Bearer 5345c6cbdee4462a708d51194ff5802d52b3772d28f15bb3215aac76051ec46d" | vegeta report

###################################
## The result ##
###################################

Requests      [total, rate]             500, 50.10
Duration      [total, attack, wait]     9.981164517s, 9.979999889s, 1.164628ms
Latencies     [mean, 50, 95, 99, max]   1.445512ms, 1.244871ms, 2.577279ms, 4.818824ms, 7.868125ms
Bytes In      [total, mean]             33500, 67.00
Bytes Out     [total, mean]             54500, 109.00
Success       [ratio]                   100.00%
Status Codes  [code:count]              202:500
Error Set:
Conclusion
There are many different tools available for different application types and workloads. Automating HTTP load testing is just a matter of simple scripting with the above tools. For more thorough testing, one could crawl or create a list of test conditions (URL/HTTP method/parameters) and could automate the authentication process with curl and awk in a single-line command.
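As a sketch of that auth automation: the idea is to fetch a token with curl and cut it out of the JSON response with awk, then reuse it in the Authorization header. The login endpoint and response shape below are hypothetical, so the curl call is shown only as a comment and the response is simulated.

```shell
# In practice the response would come from something like (hypothetical endpoint):
#   response=$(curl -s -X POST -d '{"user":"me","pass":"secret"}' http://127.0.0.1:8080/v0.1/auth)
response='{"token":"5345c6cbdee4462a708d51194ff5802d"}'

# Split on double quotes: field 4 is the token value in this JSON shape.
token=$(echo "$response" | awk -F'"' '{print $4}')

# The extracted token can now be dropped into any of the tools above.
echo "Authorization: Bearer $token"
```

With the token in a variable, wrapping any of the ab/siege/wrk/gobench/vegeta commands above in a loop over a URL list becomes a few lines of shell.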
For quick tests I use wrk and gobench because of their simplicity and because they are both really lightweight.