Part of my daily routine is to consume, design, and deliver services built for HTTP clients. This might manifest in something more specialized like WebSocket or Protobuf, but most often it is a plain JSON REST API. In any case, performance is key, and an Undertow REST API with JAX-RS can deliver stable performance under consistent load.
In this article, I will cover the basics of setting up a REST API with Undertow and deploying it in Docker.
Top Sites REST API
I chose to build a wrapper around the public Alexa top 1 million sites list. The list gives an opportunity to consume an external input, in this case the daily updated compressed CSV from S3. The API exposes two simple endpoints, get all domain names and get domains for a TLD, both with pagination, as sketched below.
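To make the endpoint shapes concrete, here is a minimal JAX-RS sketch of the two calls. The class name and the canned response are my own illustration; the real implementation lives in SitesResource.java in the repository.

```java
import java.util.Arrays;
import java.util.List;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

// Illustrative only: the real resource is SitesResource.java in the repository.
@Path("/v0.1/alexa-top")
@Produces(MediaType.APPLICATION_JSON)
public class SitesResourceSketch {

    // GET /v0.1/alexa-top?page=1          -> all domain names, paginated
    // GET /v0.1/alexa-top?page=2&tld=com  -> domains for a TLD, paginated
    @GET
    public List<String> getSites(@QueryParam("page") @DefaultValue("1") int page,
                                 @QueryParam("tld") String tld) {
        // In the real service, 'page' would drive the Redis offset and the
        // result would be wrapped in a paginated response; a canned list
        // keeps this sketch self-contained.
        return tld == null
                ? Arrays.asList("google.com", "youtube.com")
                : Arrays.asList("example." + tld);
    }
}
```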
The project is available, with setup and deployment details at https://github.com/pete314/alexa-top-sites-api
Disclaimer: this project is just a toy to play around with technologies and solutions, and as such it is not meant to run in production.

Alexa Top Sites REST API – response sample
Configuration
I like to design applications with my SRE hat on and work towards a good deployment experience, resiliency, and, above all, performance. The first building block to accomplish these is an easy and always accessible configuration. Environment variables are powerful enough for most applications, and they are supported by most tools involved in the application life cycle, from CI to production deployment, whether directly on the OS or via orchestration tools.
I created a simple wrapper around the core Java “System.getenv()” to extend the default Map functions with parsing to Integer or String with defaults. This enables easy access to configuration; the only downside is that this approach does not support “hot-swapping” of variables. The example variables for the project can be viewed in config/.env.example
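A minimal sketch of such a wrapper could look like the following; the class and method names are my own choice, and the repository's cli/Environment.java may be organized differently.

```java
import java.util.Map;

// Sketch of an environment-variable wrapper with typed getters and defaults.
public final class Environment {

    private static final Map<String, String> ENV = System.getenv();

    private Environment() {}

    public static String getString(String key, String defaultValue) {
        String value = ENV.get(key);
        return value != null ? value : defaultValue;
    }

    public static int getInt(String key, int defaultValue) {
        try {
            return Integer.parseInt(ENV.get(key));
        } catch (NumberFormatException e) {
            // Missing or non-numeric values fall back to the default.
            return defaultValue;
        }
    }
}
```

With this in place, a typed lookup such as Environment.getInt("HTTP_PORT", 8888) (the variable name here is hypothetical) replaces parsing code scattered across the application.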
Application Structure
api/
├── cli
│   ├── Environment.java
│   └── Runner.java
├── common
│   ├── client
│   │   └── JedisFactory.java
│   ├── exception
│   │   └── RestException.java
│   ├── filter
│   │   ├── CorsFilter.java
│   │   └── RestExceptionMapper.java
│   ├── request
│   │   └── PaginatedRequest.java
│   └── response
│       └── PaginatedResponse.java
├── resources
│   └── sites
│       ├── model
│       │   ├── Site.java
│       │   ├── SitesDataMapper.java
│       │   └── SitesRequest.java
│       └── resource
│           ├── SitesController.java
│           └── SitesResource.java
├── server
│   ├── RestApplication.java
│   └── RestServer.java
└── service
    ├── SitePoison.java
    ├── SitesFileExtractor.java
    ├── SitesRedisExtractor.java
    └── TopSitesUpdater.java
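To show how the server/ pieces fit together, here is a sketch of bootstrapping a JAX-RS application on embedded Undertow, reusing the resource sketch from above. It assumes RESTEasy's Undertow integration; the repository's RestServer.java and RestApplication.java may wire things up differently.

```java
import java.util.Collections;
import java.util.Set;
import javax.ws.rs.core.Application;
import io.undertow.Undertow;
import org.jboss.resteasy.plugins.server.undertow.UndertowJaxrsServer;

// Sketch: JAX-RS on embedded Undertow via RESTEasy.
public class RestServerSketch {

    // Registers the resource classes with the JAX-RS runtime.
    public static class RestApplication extends Application {
        @Override
        public Set<Class<?>> getClasses() {
            return Collections.singleton(SitesResourceSketch.class);
        }
    }

    public static void main(String[] args) {
        UndertowJaxrsServer server = new UndertowJaxrsServer();
        // Port 8888 matches the docker run mapping later in this article.
        server.start(Undertow.builder().addHttpListener(8888, "0.0.0.0"));
        server.deploy(RestApplication.class);
    }
}
```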
Running the application
The application is designed to run either directly on a JVM or within a Docker image. The design also enables execution in clusters in both cases; I personally prefer the combination of Docker on Kubernetes, although the performance is slightly lower.
Before you begin, please check the correctness of the system environment variables; examples can be found in config/.env.example
Building the project requires Java JDK 1.8+, Maven, and Redis. These can either exist as regular installations or be available within containers. To improve the build speed, the Alexa Top 1 million sites file can be downloaded once and its path passed via the TOP_SITES_PATH environment variable.
The default deployment contains a self-signed certificate, which can be replaced by specifying the required Java KeyStore path and credential via the environment variables KEY_STORE_PATH and KEY_STORE_PASSWORD.
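As a sketch of how these two variables could be consumed, the following builds an SSLContext from the keystore and attaches an HTTPS listener. The port matches the Docker mapping used below; the exact wiring in the repository may differ.

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import io.undertow.Undertow;

// Sketch: build an SSLContext from KEY_STORE_PATH / KEY_STORE_PASSWORD
// and start an HTTPS listener on 4443 (error handling kept minimal).
public class TlsSetupSketch {

    static SSLContext sslContextFromEnv() throws Exception {
        char[] password = System.getenv("KEY_STORE_PASSWORD").toCharArray();
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (InputStream in = new FileInputStream(System.getenv("KEY_STORE_PATH"))) {
            keyStore.load(in, password);
        }
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), null, null);
        return sslContext;
    }

    public static void main(String[] args) throws Exception {
        Undertow server = Undertow.builder()
                .addHttpsListener(4443, "0.0.0.0", sslContextFromEnv())
                .build();
        server.start();
    }
}
```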
Deploy on JVM directly
- Navigate into the repository root
- Build the project
mvn clean install
- Run the project
java -server -jar target/alexa-top-api-1.0.jar
Deploy in Docker
- Navigate into the repository root
- Build the project
mvn clean install
- Build docker container
docker build --no-cache -t alexa-top-api:latest .
- Run docker container
docker run -it -p 8888:8888 -p 4443:4443 -e REDIS_MASTER_HOST=$DOCKER_REDIS_HOST alexa-top-api:latest
Note that the environment variable “DOCKER_REDIS_HOST” represents a host only, where the Redis service is listening for external calls.
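The application structure above lists a common/client/JedisFactory.java; here is a sketch of how such a factory might consume REDIS_MASTER_HOST. The REDIS_MASTER_PORT variable and the defaults are assumptions, and the repository's implementation may differ.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

// Sketch of a Redis client factory driven by environment variables.
public final class JedisFactorySketch {

    private static final JedisPool POOL = new JedisPool(
            new JedisPoolConfig(),
            System.getenv().getOrDefault("REDIS_MASTER_HOST", "127.0.0.1"),
            Integer.parseInt(System.getenv().getOrDefault("REDIS_MASTER_PORT", "6379")));

    private JedisFactorySketch() {}

    // Callers should close the returned resource to hand it back to the pool.
    public static Jedis getResource() {
        return POOL.getResource();
    }
}
```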
Benchmark
Writing test cases is a must for enterprise-grade applications, but I also like to write performance tests just to see what the limitations of my implementation are. They can also reveal that a small change in the implementation might result in better performance. Of course, this is only helpful if you know what the framework's theoretical limitations are. For an Undertow REST API, the limit for a cloud-deployed plain text response is at 474,884 responses per second, and JSON serialization is at 80,720, based on the techempower.com benchmarks.
For this project I chose wrk as the benchmark tool.
# Run benchmark without parameters
wrk -t8 -c 100 -d5s "http://127.0.0.1:8888/v0.1/alexa-top"

# Run benchmark with parameters
wrk -t8 -c 100 -d5s "http://127.0.0.1:8888/v0.1/alexa-top?page=2&tld=com"
Results:
request | http method | throughput (/sec) | latency (avg) | virtualization
---|---|---|---|---
/v0.1/alexa-top | GET | 16200 | 6.98 ms | – |
/v0.1/alexa-top | GET | 14300 | 8.68 ms | Docker |
/v0.1/alexa-top?page=2&tld=com | GET | 24100 | 4.78 ms | – |
/v0.1/alexa-top?page=2&tld=com | GET | 10700 | 10.37 ms | Docker |
In a real world application, this should also be run over the network, possibly under production-equivalent load.
For more details on HTTP load testing, check out my detailed post: HTTP load testing strategies and tools