Develop in Docker: a Node backend and a React front-end talking to each other

Xiaoli Shen
6 min read · May 31, 2018

Recently I’ve started a new project which involves both a Node backend and a React front-end. The Node backend is a simple application server which will later serve the React front-end’s build, as well as proxy all external API services to the React front-end.

On bootstrapping my dev environment I decided to give Docker a try and develop both parts in Docker containers.

Motivation & Goal

First things first, why did I want to do it in Docker, anyway?

The key motivation is to spare fellow developers who might join this project later from having to manage dependencies in their local dev environments. I’ve worked on multiple projects requiring different Node versions, and despite convenient Node version managers like nvm, having to switch versions every now and then was a real pain.

Besides, a dev Dockerfile could also serve as a foundation for later deployment to the staging and production environments.

And what’s more, every time you start working, running the one-liner docker-compose up --build is definitely easier than jumping into different folders and running npm start in each.

So the goal is as follows:

- Run the Node app and the React app each in its own Docker container.

- Communicate between the two apps running in containers.

- Every edit in the local IDE will automatically be reflected in the apps running in containers.

Implementation

To keep things simple, I created a client folder and a server folder in the same repository. Outside of the two folders, there is the docker-compose.yml file to build and spin up the two containers, and a .env file providing default values of environment variables for both apps.

Project folder structure

1. Create a Dockerfile in the client folder and also one in the server folder

These two Dockerfiles are the blueprints that docker-compose will use to build the two containers.

Here is the Dockerfile in the client folder, which contains the output of running an npx create-react-app command. Let’s see what’s happening in this file step by step:

Dockerfile in the client folder
  • FROM node:8.7.0-alpine Tell Docker we want to use Node v8.7.0 installed in an Alpine Linux image. The alpine image is an extremely optimized, minimal Docker image based on Alpine Linux. By itself it is as small as 5 MB; adding Node on top of it brings it to around 50 MB, still very slim (roughly a tenth of the default Node image, which is about 650 MB, according to this post)
  • RUN mkdir -p /app/directory/path Create a directory in the container to hold the app, the -p flag enables us to create the directory recursively without having to go into each level
  • WORKDIR /app/directory/path Go into the app folder by making it the working directory
  • COPY package.json and COPY package-lock.json Copy the local package.json and package-lock.json files into the container to install node modules
  • RUN npm install Install the node modules that the project needs
  • COPY . /app/directory/path Copy local code into the container
  • CMD ["npm", "start"] Now that our app lives in the container, we can run the command npm start . CMD simply joins the strings in the array with spaces and runs the resulting command in the shell inside the container
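Since the original file is shown as a screenshot, here is a sketch of what the client Dockerfile described above could look like. The directory path /usr/src/app is an assumption standing in for the placeholder path in the steps:

```dockerfile
# Minimal Alpine-based Node image
FROM node:8.7.0-alpine

# Create the app directory inside the container (-p creates parents as needed)
RUN mkdir -p /usr/src/app

# Make it the working directory
WORKDIR /usr/src/app

# Copy dependency manifests first so npm install can be cached
COPY package.json /usr/src/app/
COPY package-lock.json /usr/src/app/

# Install dependencies
RUN npm install

# Copy the rest of the source code into the container
COPY . /usr/src/app

# Start the React dev server
CMD ["npm", "start"]
```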

The Dockerfile in the server folder is nearly the same, except that in the last line the command to run is npm run dev , an npm script I defined in the package.json file that starts the Node app using nodemon instead of node, so the app server restarts every time I edit something in the local server source code.

Dockerfile in the server folder
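Again as a sketch under the same assumptions (same placeholder path /usr/src/app), the server’s Dockerfile would be identical except for the final command:

```dockerfile
FROM node:8.7.0-alpine

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

COPY package.json /usr/src/app/
COPY package-lock.json /usr/src/app/
RUN npm install

COPY . /usr/src/app

# "dev" is an npm script that starts the server with nodemon for auto-restart
CMD ["npm", "run", "dev"]
```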

2. Create the docker-compose.yml file which makes the two containers communicative

Now that we have the Dockerfiles for building each container, we can take a look at the docker-compose.yml file and see what happens when we run docker-compose up --build :

The docker-compose.yml file in project root
  • version: '3' In the first line, we state that this is a version 3 docker-compose file. There are different versions of docker-compose files targeting different Docker Engine releases. Version 3 is the most current and recommended version.
  • services: Then comes the section defining each of the services we use to build and spin up our two containers. I named them “server” and “client” accordingly. The service name is the key for accessing each container later if we need to.

What’s happening in each of the service blocks is basically the same:

  • build: path/to/directory Build a Docker image using the Dockerfile found in that directory (the path is relative to the docker-compose.yml file)
  • environment: Here we inject environment variables. In this project, both the client and the server apps need some environment variables to start running, e.g., the port on which the Node Express server should listen is set through the APP_SERVER_PORT env variable, and the port on which the React dev server runs is set through the REACT_APP_PORT env variable.
  • expose: The port the container should expose to other services that are also defined in this docker-compose.yml file. This is vital for enabling the two containers to communicate with each other. Here we use environment variables whose default values are set in a separate .env file, which also sits in the project root.
.env file in project root
  • ports: Then we map a container port to a port on the host machine, i.e. my local computer. This way we can access the running containers from the local environment if we need to. These ports are also given as environment variables.
  • volumes: Mounting volumes maps local source code to the corresponding code in the container, so that every time we edit these files in our local IDE the changes are instantly reflected in the container.
  • command: The command to run after the container is up. What’s specified here will override the CMD part in the Dockerfile.

The client service block has two more lines than the server service block:

  • links: This links a container to other services which are also configured in the same docker-compose.yml file. This is how the client container is aware of and can talk to the server container. Be sure to use the service name here, not the container name or id.
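Putting the pieces above together, the docker-compose.yml could look roughly like this. The folder names ./server and ./client come from the project structure described earlier; the volume paths and the /usr/src/app directory are assumptions:

```yaml
version: '3'
services:
  server:
    build: ./server
    environment:
      - APP_SERVER_PORT=${APP_SERVER_PORT}
    expose:
      - ${APP_SERVER_PORT}
    ports:
      - ${APP_SERVER_PORT}:${APP_SERVER_PORT}
    volumes:
      # map local server source into the container for live reload
      - ./server/src:/usr/src/app/src
    command: npm run dev

  client:
    build: ./client
    environment:
      - REACT_APP_PORT=${REACT_APP_PORT}
    expose:
      - ${REACT_APP_PORT}
    ports:
      - ${REACT_APP_PORT}:${REACT_APP_PORT}
    volumes:
      - ./client/src:/usr/src/app/src
    links:
      # service name, not container name or id
      - server
    command: npm start
```

And the .env file in the project root would hold default values for those variables (the port numbers here are illustrative, not from the original):

```shell
APP_SERVER_PORT=3001
REACT_APP_PORT=3000
```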

3. Set proxy in the React app’s package.json file with the server container’s service name and port

We only need to do this if, in production, the server app will serve the client’s built code. In that setting the front-end calls API endpoints using URLs such as “/api/…” and there are no concerns about CORS.

In the dev environment, where the client is a React app generated with create-react-app and running on a dev server, we emulate the same-origin production scenario by setting a proxy in the client’s package.json file which points to the server’s service name and port:

package.json in client folder
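As a sketch, the relevant part of the client’s package.json might look like this; “server” is the compose service name from above, while the port 3001 is an assumed default:

```json
{
  "name": "client",
  "version": "0.1.0",
  "proxy": "http://server:3001",
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test --env=jsdom"
  }
}
```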

Note that setting a proxy to emulate the same-origin scenario also works without Docker. In that case, instead of the service name “server”, the proxy should point to the URL of the service, i.e.:
“proxy”: “http://{url, e.g. localhost}:{port if needed}”

4. Access the shell in the running container

Sometimes we might want to access the shell to run some commands on an app running in a container. For example, maybe we want to run unit tests in watch mode while developing. To do this with the setup we have, simply open a new tab in the terminal and type the following command:

docker-compose exec {service name} sh

Then you are in that service’s shell with a “#” prompt and here is where you can run npm scripts & co.

To get out of this shell, simply type “exit”.
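For instance, assuming the client service is the one named “client” in docker-compose.yml, a test-watching session could look like this:

```shell
# from the project root, in a new terminal tab
docker-compose exec client sh

# now inside the container, at the "#" prompt:
npm test   # create-react-app runs tests in watch mode by default
exit       # leave the container shell
```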


Xiaoli Shen

Solutions Architect at AWS with focus areas in MLOps and no-code/low-code ML. 10 yrs in tech and counting. Opinions and posts are my own.