System Testing the Docker EE stack is all about large numbers: large numbers of nodes, concurrent users, and deployed applications.
One basic building block for our System Test automation is to have a general purpose application that we can deploy in different ways:
- Via docker-compose
- As a docker stack
- As a Kubernetes deployment
Several excellent demo applications have been created, such as Jérôme Petazzoni’s docker-coins and Evan Hazlett’s docker-demo. From a testing perspective, they are useful for sanity-checking that a deployed application is reachable under various scenarios (e.g., deployed through UCP Layer-7 Routing). However, for our automated testing we had somewhat different requirements:
- A multi-tiered application, where the amount of network traffic among the containers can be adjusted while the containers are running.
- A front end that was geared toward interacting with a REST client instead of a browser, so automation can interact with the deployed application without needing Selenium.
- Support for statistics scraping (packet error count, packet round-trip timing, bytes transmitted, bytes received)
- Docker images for Linux and Windows nodes.
To meet our needs, we created a simple application called reference-app.
The controller and worker are written in Go. At its heart, the worker is a net/http client and the controller is a net/http server. At startup, each worker opens a connection to the controller and begins sending traffic to it over the app-network.
The traffic is a serialized JSON structure; the structure contains header information (a timestamp, packet ID, sender ID, payload size, and checksum) as well as a variable length payload. The payload is packed with random data to achieve the desired overall packet size.
When the controller receives a packet from a worker, it validates the payload, updates its metrics counters, and replies to the worker with a 200 OK whose body is a serialized JSON structure containing the current configuration settings for payload size and transmit frequency.
The worker then computes the round-trip time, updates its metrics counters, and waits for the next send interval before sending another packet.
The controller also listens on port 8080 for traffic coming in from outside (e.g., via curl). This allows us to collect metrics and modify the network traffic configuration.
Interacting With The Front End
The front-end service supports the following REST endpoints:
| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | /config | Modify the network I/O profile: packets per second and bytes per packet. |
| GET | /config | Retrieve the current config settings. |
We can scale up the application in several ways:
- Increase the number of worker containers via docker service scale (or by updating the number of replicas of a Kubernetes deployment)
- Increase the network activity among existing workers by an HTTP POST to the /config endpoint
We can scale the overall load on the cluster by deploying multiple instances of the reference-app. Because the published port (30000 in the above compose file example) must be unique for each instance, we have tooling that machine-generates each compose file (or Kubernetes .yaml file) and deploys the applications. The tooling supports deploying multiple instances of the reference-app with options for:
- Deploying as a docker stack
- With UCP’s Layer-7 routing
- Using an encrypted overlay network
- Setting memory limits when deploying
- Deploying as a Kubernetes deployment
- Deploying on Linux and/or Windows workers
Conclusions and Future Work
The reference-app has proven useful for loading a cluster and verifying the deployed applications are available. Our current work is focused on adding automation around the reference-app:
- A program that creates a cluster on a cloud provider, installs the Docker EE stack, instruments the overall cluster using cAdvisor, node_exporter, and InfluxDB, then deploys the reference-app.
- A program that deploys some number of instances of the reference-app as a given UCP user, then spawns monitoring threads to continuously verify each deployed application is reachable.
- A program that performs repeated administrative tasks to the UCP cluster, such as creating users, adding and removing worker nodes, and upgrading the cluster.