How many Dockerized Java and Node.js applications can run on a host with 4GB of RAM?
As mentioned in my previous article, I have written a couple of hobby and prototype projects over the course of the last 15 years. They are mostly simple web games or things like link and lunch-place management systems, and I want to showcase them as cheaply as possible.
Those 7 games, 3 management systems, and my ‘homepages’ always ran on a single server. For years they were deployed on a 2GB host, running as a single Tomcat, a single MySQL, a single CouchDB, a single Apache, and two Node.js processes.
While this worked fine and was quite stable, the applications were deployed too close to each other: they were too tightly coupled, and there were version dependencies between all of them.
At the dawn of containerization, I asked myself: how much memory does the host need to run all of these applications in Docker containers?
The answer is: just 4GB.

So the host runs 20 Docker containers in total. But to fit so many containers into just 4GB of RAM, every container needs a memory limit.
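Restricting memory is a single docker run flag. A minimal sketch, with a placeholder image name and limit:

# --memory caps the container's RAM; setting --memory-swap to the
# same value additionally prevents the container from using swap
docker run -d --memory=150m --memory-swap=150m --name showcase-app showcase/app:latest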
Before we look into the details, I would like to point out that this is neither a recommendation for a production setup nor a recommendation for how much memory one should assign to Docker containers. It just answers the question, “How many Dockerized Java and Node.js applications can run on a host with 4GB of RAM?” when stability and performance are not a priority. My goal is to run all my showcase applications as cheaply as possible inside Docker. That’s all.
Now let’s look at the memory limits for the different container types.
Tomcats
There are 8 Tomcats running, with memory settings ranging from 70M to 150M. Versions 7 and 9 of Tomcat are in use, and there seem to be no differences between those versions in terms of memory requirements.
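Inside such a small container the JVM heap has to stay below the container limit, otherwise the OOM killer steps in. Roughly like this, with illustrative values (the official Tomcat image honors the CATALINA_OPTS environment variable; I use my own OpenJ9-based images, more on that below, but the mechanics are the same):

# heap capped at 100m, safely below the 150m container limit
docker run -d --memory=150m \
  -e CATALINA_OPTS="-Xms32m -Xmx100m" \
  --name game1 tomcat:9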
CouchDBs
The 4 CouchDBs run version 1.7 and have 200M to 250M of memory assigned. CouchDB recommends far more memory, but this setting works for my showcase. I also noticed that version 2.x needs more memory, so I decided to stay on 1.7.
PouchDB
As one system doesn’t use any fancy CouchDB features, it can also run on PouchDB, with memory limited to 90M.
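For reference, a standalone PouchDB can be exposed over HTTP with the pouchdb-server npm package, which offers a CouchDB-compatible API. A rough sketch; the port and data directory are placeholders:

npm install -g pouchdb-server
# serve the CouchDB-compatible API on port 5984, storing data in /data
pouchdb-server --port 5984 --dir /data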
Node.js
The two Node.js containers are limited to 50M and 200M respectively. Both use version 11 of Node.js. While Citybuilder has just a few dependencies and runs with 50M, Linky has many dependencies plus Babel and Webpack and needs 200M to run.
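As with the JVM, it helps to cap the V8 heap below the container limit. A sketch with illustrative values; the entry point server.js is a placeholder:

# --max-old-space-size caps the V8 old-generation heap (value in MB)
docker run -d --memory=200m -v "$PWD":/app -w /app --name linky \
  node:11 node --max-old-space-size=150 server.js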
MySQLs
Both MySQL containers run version 5, with memory limited to 200M and 250M respectively.
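MySQL’s defaults assume more memory than that, so a few knobs have to come down. The official image passes extra arguments straight to mysqld; the values here are assumptions for illustration, not my exact configuration:

# shrink the InnoDB buffer pool and switch off the performance schema
docker run -d --memory=250m -e MYSQL_ROOT_PASSWORD=secret --name app-db \
  mysql:5.7 --innodb-buffer-pool-size=64M --performance-schema=OFF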
Java processes
Two applications need separate Java backend processes. One is set to 150M, the other to 350M. Of course these settings depend heavily on the nature of the process, so for this article I will just say that Java processes have varying memory needs.
nginx
The nginx container, which includes PHP support, is limited to 30M.
JVM
I always use OpenJ9 instead of Oracle’s HotSpot JVM, as it has a noticeably smaller memory footprint.
I tried running containers with the same memory settings on HotSpot, but they often got terminated by the OOM killer. So I have built my own Tomcat images running on OpenJ9.
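A rough sketch of such an image (not my actual Dockerfile; the base image tag, Tomcat version, and download URL are placeholders):

FROM adoptopenjdk/openjdk8-openj9:alpine-slim
ENV CATALINA_HOME=/usr/local/tomcat
ENV PATH=$PATH:$CATALINA_HOME/bin
# download and unpack Tomcat (version and mirror are placeholders)
RUN wget -qO- https://archive.apache.org/dist/tomcat/tomcat-9/v9.0.30/bin/apache-tomcat-9.0.30.tar.gz \
    | tar xz -C /usr/local \
    && mv /usr/local/apache-tomcat-9.0.30 $CATALINA_HOME
EXPOSE 8080
CMD ["catalina.sh", "run"]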
Java Memory Settings
When using Java 8 or 9, you need to set two JVM parameters to ensure the Java and Docker memory limits are in sync:
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
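In a container, those flags typically go into JAVA_OPTS (or CATALINA_OPTS for Tomcat); the image name below is a placeholder:

docker run -d --memory=150m \
  -e JAVA_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap" \
  --name game2 my-tomcat-openj9:latest

From Java 10 onwards the JVM is container-aware by default (-XX:+UseContainerSupport), so these two flags are no longer needed.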
Continuous Delivery Pipeline
As you might have realized, a primary goal of my setup is to run everything on one host. Therefore my CD pipeline runs on the same host as well.
I will talk about this in a later article, but for now I would like to mention that for every build, Docker containers are started on this host. Those containers don’t have any memory limit set and are (of course) short-lived.
Additional non-Docker host setup
For the sake of completeness: the host itself only runs HAProxy and Postfix. All web servers, databases, and other processes live inside Docker containers.
To keep an eye on the overall memory situation, I check both the host and the individual containers:
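Both views come from standard tools, nothing specific to my setup:

# host-level view of total, used, and free memory
free -m

# live per-container memory usage against the configured limits
docker stats --no-stream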

Closing notes
As I said, this setup is not recommended for a production host where low latency, stability, heavy load, or many concurrent users matter.
But for a showcase environment built around low operational cost, this is great news: it works better than expected.