Building and deploying a custom image of Reaction with Docker


#1

The Reaction docs recommend using Docker for deployment: Deploying Reaction using Docker

Currently these docs are missing some elaboration on the process of building and deploying a custom image of Reaction. Having worked through a number of the steps with help from @jeremy, I will try to note a few useful things here and raise questions as they come up.

How to build in production mode?
When I run docker build with the Dockerfile in the root directory of Reaction and deploy the image, the JavaScript assets are not concatenated. Is it necessary to specify a production build, and if so, how?
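
For comparison, outside of Docker I would expect a production bundle to come from something like this (the output path here is just an example):

# Produces a minified production bundle in ../output
meteor build --directory ../output --architecture os.linux.x86_64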

That’s it for the moment. Thanks!


#2

Building a custom version of Reaction just takes one command from the root of the project:

docker build -t my-custom-reaction .

Then run it with:

docker run -d -p 80:80 \
  -e MONGO_URL="mongodb://some-url" \
  -e ROOT_URL="http://yoursite.com" \
  my-custom-reaction

The build inside the Docker container should definitely be a production build with all of the scripts concatenated. The app gets built right here, then Meteor gets deleted completely and the Node process is started in the production bundle directory. If you’re seeing an app running in development mode, it definitely isn’t coming from that Docker image.
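
As a rough sketch of what happens inside the image (illustrative paths only, not the exact contents of the Reaction Dockerfile):

# Dockerfile excerpt (sketch); assumes Meteor and the app source are already in the image.
# Build the minified production bundle, install its server deps,
# then throw away the source tree and the Meteor tool entirely.
RUN cd /opt/reaction/src \
 && meteor build --directory /opt/reaction/dist --architecture os.linux.x86_64 \
 && cd /opt/reaction/dist/bundle/programs/server \
 && npm install --production \
 && rm -rf /opt/reaction/src ~/.meteor

WORKDIR /opt/reaction/dist/bundle
CMD ["node", "main.js"]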

Can you give me more details about the issue you’re seeing? What have you tried so far?


#3

Thanks for verifying that, @jeremy; it helped me quickly find the culprit. I am using the fourseven:meteor-scss package, and there was a change with Meteor 1.3 where the standard-minifier package needed to be swapped; as an oversight I also removed the JS minifier package. At the time I didn’t realize that package was responsible for the minification when running meteor bundle. Thanks!
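
For anyone who hits the same thing, the fix in my case was essentially just re-adding the JS minifier (and double-checking what is listed in .meteor/packages):

# standard-minifier-js is what minifies/concatenates the JS when the bundle is built
meteor add standard-minifier-js
grep minifier .meteor/packages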


#4

Next question: how to avoid long build times?

Currently my custom Reaction build takes about 30 minutes when building with a VirtualBox docker-machine.

Watching the clock and the logs, the build seems to spend 20 minutes working through packages with npm dependencies:

reactioncommerce:reaction-logger: updating npm dependencies -- bunyan,
bunyan-format...
reactioncommerce:core: updating npm dependencies -- node-geocoder,
lodash.merge, lodash.uniqwith, jquery.payment, autosize, tether, draggabilly,
imagesloaded...
reactioncommerce:reaction-ui: updating npm dependencies -- classnames,
sortablejs, postcss, postcss-js, autoprefixer, css-annotation, tether-drop,
tether-tooltip, react-textarea-autosize, react-color, autonumeric,
accounting-js, jquery, jquery-ui...

@jeremy you mentioned once that you spin up a high-powered DO or AWS instance when you are running lots of builds. Would you recommend that approach for building Reaction?
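
If so, I’m imagining something along these lines with docker-machine’s DigitalOcean driver (the access token and droplet size are placeholders):

docker-machine create --driver digitalocean \
  --digitalocean-access-token $DO_TOKEN \
  --digitalocean-size 4gb \
  reaction-builder
eval $(docker-machine env reaction-builder)   # point the local docker CLI at the droplet
docker build -t my-custom-reaction .          # the build now runs on the droplet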

I guess that would free up my machine’s memory/CPU and lessen the need to sit by and wait out the build. Thanks in advance!


#5

There’s a lot going on, but it’s not a very CPU-intensive process. I have a 2013 MacBook Pro and a ~150 Mbps download speed, and it usually takes me about 8-10 minutes. 15 minutes wouldn’t be crazy, but 30 definitely sounds pretty extreme. I suspect that’s largely a bandwidth issue. A ton of stuff gets downloaded every time: it’s setting up a completely fresh Debian Linux container with nothing installed, so there are a bunch of steps before the app even gets built.

You could also use the development Dockerfile if you want to save time on test builds. It’s configured to cache more layers of the container, so it’ll only have to build the app each time instead of installing all of the OS dependencies from scratch. You can use the dev Dockerfile like this:

docker build -f docker/reaction.dev.docker -t reaction .

(the -f argument is for specifying a different Dockerfile)

It runs all of the same scripts, but caches the dependencies installation step. Not really great for production because it makes a giant image (usually 3-4x the size), but it’s useful for faster test builds locally. The first build will take the same amount of time, but every one after that on the same machine should be a bit faster.
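
To illustrate the caching idea (this is just the general pattern, not the literal contents of reaction.dev.docker):

FROM debian:jessie
# OS packages and the Meteor install sit in early layers, so Docker can cache them
RUN apt-get update && apt-get install -y curl git build-essential
RUN curl https://install.meteor.com/ | sh
# The app source is copied last; only the layers from here down get rebuilt when code changes
COPY . /opt/reaction/src
RUN cd /opt/reaction/src && meteor build --directory /opt/reaction/dist --architecture os.linux.x86_64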


#6

Thanks for sharing those details! I’ve simply plugged in via an ethernet cable and that has helped speed things up. Very helpful.

Now on to the next… I’ve got an SSL issue with Let’s Encrypt:

docker logs -f lets-encrypt:

CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Is there a warning log entry about unsuccessful self-verification? Are all your domains accessible from the internet? Failing authorizations: https://acme-v01.api.letsencrypt.org/acme/authz/-Oa7Pa5w6mrt1dIM9PV0shrpyNkznu5UC4Zb3VulBjk

It appears that I have the same issue as here, and the OP fixed it with:

My sites were only visible on the SSL port (443) and I’ve never opened port 80 on my router to the container.
After mapping the port in the NAT configuration, the certificate gets created perfectly.

@jeremy Does the above comment seem relevant? Where would these settings live, having followed your gist?
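
My guess is that in this setup it would come down to publishing both ports on the NGINX container, something like the following (the image name is a placeholder):

# Port 80 is needed for the Let's Encrypt HTTP challenge, 443 for the site itself
docker run -d --name nginx -p 80:80 -p 443:443 my-nginx-proxy-image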

The confusing thing is that I had previously had success with these steps, and on duplicating them I have run into this error.

Thanks in advance.


#7

Your NGINX container needs to be reachable from the public internet on port 80 (which it likely isn’t if you’re on your laptop on a private network). The domain name you’re requesting a certificate for needs to resolve to an IP that Let’s Encrypt can reach. Their servers check that you own the domain name by making a request to that IP (which gets answered via the linked NGINX container).

So in short, if you’re running this on your laptop on your home network or don’t have port 80 open, it isn’t going to work because Let’s Encrypt can’t make the handshake it needs.

If you followed my gist and have a domain that resolves to your host on the public internet, it should just work. I’ve been using that exact setup with several apps for months without issue.
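
One quick way to sanity-check reachability from outside your network (with your own domain in place of example.com):

# Any HTTP response (even a 404) means port 80 is reachable;
# a timeout means Let's Encrypt won't be able to reach it either
curl -v http://example.com/.well-known/acme-challenge/test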


#8

Thanks for the response: I am running this on a DigitalOcean instance, and the domain resolves to the host and is accessible on the public internet: http://prize-editions.com

I had followed the same steps from the gist with a subdomain of this domain and it worked without a hitch. Strange… I’ll try to come up with a way to run a sanity-check.

Edit: Just had the opportunity to sanity-check this, and after running through the steps from scratch with a fresh docker-machine it works as described. Something must have slipped up in my previous attempt, though I have no idea what. Thanks!


#9

After issuing the docker build command with the -t flag, when I use the docker images command I do not see an image with the name my-custom-reaction.
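
For reference, this is roughly the sequence I’m running (same image name as suggested above):

docker build -t my-custom-reaction .
docker images | grep my-custom-reaction   # nothing shows up here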


#10

Hi Jeremy, I followed your gist but am not able to deploy my custom Reaction with NGINX and Let’s Encrypt. I’m getting “503 Service Temporarily Unavailable”. However, my NGINX container logs don’t show any errors, and my Let’s Encrypt container logs say there is a failure in certificate authorization.
My domain does resolve when I run only my custom Reaction container, without NGINX and Let’s Encrypt.
I’m building and running the containers on a DigitalOcean droplet, using a VM on my local machine.
Can you please help me figure out how to get this fixed?
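
For reference, this is roughly how I’m starting the app container behind the proxy (assuming the gist’s VIRTUAL_HOST-style setup; names, domain, and email are placeholders):

docker run -d --name reaction-app \
  -e MONGO_URL="mongodb://some-url" \
  -e ROOT_URL="https://example.com" \
  -e VIRTUAL_HOST="example.com" \
  -e LETSENCRYPT_HOST="example.com" \
  -e LETSENCRYPT_EMAIL="admin@example.com" \
  my-custom-reaction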