Reaction Commerce Forums

Kubernetes and Reaction


Does anyone have experience running Reaction Commerce in Kubernetes?

I’m hoping to spark great debates about Mongo replication and crazy shard clusters; about massively parallel communication over both bare network drivers (Calico) and secure overlays (Flannel and many others, including Calico depending on configuration) appropriate for public clouds; and even about storage drivers.


To that end I’ve packaged much of my work into a Helm chart which I have called reactionetes.

Which brings me to my one-liner:

curl -L | bash

This will:

  1. install minikube and kubectl,
  2. start up a cluster,
  3. initialize Helm,
  4. and finally spin up the Reaction cluster

You end up with a local Kubernetes cluster running the fastest Reaction Commerce setup I know of.
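If you’d rather see the steps spelled out, here is a dry-run sketch of the manual equivalent. The chart path at the end is hypothetical, and installing minikube/kubectl themselves is platform-specific so it’s left out:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the manual equivalent of the one-liner.
# DRY_RUN=1 (the default) only prints each step; DRY_RUN=0 actually runs it.
set -euo pipefail
DRY_RUN="${DRY_RUN:-1}"

step() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

step minikube start              # 2. start up a local cluster
step helm init                   # 3. install Tiller into the cluster (Helm v2)
step helm install ./reactionetes # 4. spin up the Reaction chart (path hypothetical)
```

With `DRY_RUN=0` the same script performs the real install, assuming the binaries are already on your PATH.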

Feel free to submit issues, requests, PRs, etc to the github page.

Mongo Replication

As far as MongoDB debates go, does anyone else have a MongoDB cluster running?

If so, what does your mongo url look like?

At the moment I’m using this:

mongodb://{{ .Release.Name }}-mongo:27017/{{ .Release.Name }}?replicaSet={{ .Release.Name }}-rs
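As a sanity check, here is what that template renders to in plain bash, using the solitary-hummingbird release name that shows up in my endpoints output below:

```shell
#!/usr/bin/env bash
# Renders the MONGO_URL the same way the chart template does, for the
# release name from the endpoints output in this post.
set -euo pipefail

release="solitary-hummingbird"
mongo_url="mongodb://${release}-mongo:27017/${release}?replicaSet=${release}-rs"
echo "$mongo_url"
```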

This is what my endpoints look like after settling:

kubectl get ep
NAME                                ENDPOINTS   AGE
kubernetes                                      14m
solitary-hummingbird-mongo          ,,          12m
solitary-hummingbird-reactionetes               12m

Does anyone have any good methods for benchmarking Mongo? I’m building a Docker image to put a MongoDB under load and get some metrics on how performant it is.
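For the curious, the harness boils down to something like this sketch: time N runs of a command and report ops/sec. The mongo-shell invocation (with a hypothetical service name) is commented out since it needs a live cluster; `true` stands in for it here:

```shell
#!/usr/bin/env bash
# Minimal throughput-harness sketch: time N runs of a command, print ops/sec.
set -euo pipefail

bench() {  # usage: bench <iterations> <command...>
  local n=$1; shift
  local start end
  start=$(date +%s%N)
  for ((i = 0; i < n; i++)); do "$@" >/dev/null; done
  end=$(date +%s%N)
  awk -v n="$n" -v ns="$((end - start))" \
    'BEGIN { printf "%.0f ops/sec\n", n / (ns / 1e9) }'
}

# Against a real cluster, something like (service name hypothetical):
# bench 100 mongo my-mongo:27017/bench --quiet --eval 'db.t.insert({x: 1})'
bench 1000 true
```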

Notes on stateful sets in kubernetes with mongoDB

Kubernetes blog post from January 2017

Mongodb blog post

KubernetesUpAndRunning examples repo

Cluster Networking

Kubernetes cluster networking is no simple matter with over 15 networking technologies to choose from.

Calico can achieve amazing performance if you turn off all the security mechanisms, but this is insecure in a public cloud.

I’m of the opinion that you are opening yourself up to having your traffic sniffed, which is inappropriate for an ecommerce site, but I’d like to discuss the implications of this and of the other network drivers if anyone has looked into them.

Storage Drivers etc

hostPath and emptyDir work fine for testing, but production demands something bigger and distributed.

Do not punish yourself with NFS. In my experience you can get it to work, but it will perform poorly and, even worse, corrupt itself at some point.

GlusterFS and CephFS do not seem to perform well for me. Maybe I need to throw more RAM at them or configure them differently, so I am very open to debate on this one.

Flocker was awesome, but that link is down, and I think ClusterHQ is dead.

iSCSI and Fibre Channel do fit the bill when backed by serious resources, which brings me to my favorite option so far:

OpenEBS. It turns out Amazon EBS is essentially a wrapper around iSCSI, so there has been an effort to replicate that functionality on other clouds, which gave us OpenEBS.

To that same tune, there is also Minio, which is an S3-API-compatible object store (i.e. a DIY S3 you can run yourself).



I made a much simpler chart to test with based on the stable/ -

A proper mongodb solution like you demonstrate/discuss would be great. It seems most helm charts are at best test examples and severely lacking in reliable storage.

I wonder if this discussion might be better held in the Helm community, as I too would love to know what a best practice for replication would be and why it isn’t common in charts.


I’m very open to discussion on Helm in general; in fact, I eventually want to contribute this to the incubator. Before I go about doing that, though, I want:

  1. to be able to choose storage drivers using go templating
  2. through that support the major cloud providers as well as in-house baremetal setup through kubespray
  3. test all of that by giving a good metric of how performant the MongoDB is
  4. and run through at least some of the reaction tests in CI to determine that the site is indeed working

Feel free to add requests/issues to the GitHub repo, or check out the waffle:


I used this guide for setting up MongoDB on Google Kubernetes Engine. It solves some of the weaknesses of the sidecar deployment mentioned in the Kubernetes blog post, and it improves performance and security by using XFS, disabling transparent huge pages in the host VM, and enabling authentication by default.


@nick Funny enough, I was just looking at that post and subsequently the site Paul Done set up here:

I will be converting my mongo stuff into something similar to that very soon.


Any reason why you haven’t tried this “official” chart?

I’ve used it on AWS and GCE many times and never had any issues. Persistent data just works on both platforms. And just about any Mongo customization you could need is available in the values.yaml.

Running this with no custom values…

helm install --name my-mongo stable/mongodb-replicaset

sets you up with a 3-member replica set with persistent volumes and this (rather long) MONGO_URL



That official chart is a part of the gymongonasium branch I was bringing in.

But I’ve always had issues with that chart. For example, right now I am hitting this specific issue where no primary is found in the replica set.

I’m going to reexamine things this week and look at both the official chart and the Donester’s method.

@jeremy I would greatly appreciate any advice or suggestions here, as I am kind of stuck and in a re-evaluation mode.

Despite all of the environment variables, the way I fire off the helm command is pretty much the same as the defaults.


@jeremy @cchamilt @nick I just merged a bunch of changes into master, including using the official chart as @jeremy suggested. Also, there are charts in there for the api and gymongonasium which I’m also toying around with.

Let me know if you have issues! Cheers!

btw - as far as the long url goes, I first made a bash script that built the url; not satisfied, I went all the way and made a template in _helper.tpl that builds the url using go templating:

{{- define "mongodb_replicaset_url" -}}
  {{- printf "mongodb://" -}}
  {{- range $mongocount, $e := until (.Values.mongodbReplicaCount|int) -}}
    {{- printf "%s-mongodb-replicaset-%d." $.Values.mongodbReleaseName $mongocount -}}
    {{- printf "%s-mongodb-replicaset:%d" $.Values.mongodbReleaseName ($.Values.mongodbPort|int) -}}
    {{- if lt $mongocount  ( sub ($.Values.mongodbReplicaCount|int) 1 ) -}}
      {{- printf "," -}}
    {{- end -}}
  {{- end -}}
  {{- printf "/%s?replicaSet=%s" $.Values.mongodbName  $.Values.mongodbReplicaSet -}}
{{- end -}}

I’d like some feedback on that one.
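To make the output easier to eyeball, here is the same logic mirrored in bash with hypothetical stand-ins for the values.yaml entries (release my-mongo, 3 replicas, port 27017, database reaction, replica set rs0); this is the URL shape the template should render:

```shell
#!/usr/bin/env bash
# The _helper.tpl logic mirrored in bash. All values here are hypothetical
# stand-ins for the corresponding values.yaml entries.
set -euo pipefail

release="my-mongo"; replicas=3; port=27017
db="reaction"; rs="rs0"

url="mongodb://"
for ((i = 0; i < replicas; i++)); do
  # host pattern of the replica-set stateful set: <pod>.<headless service>:<port>
  url+="${release}-mongodb-replicaset-${i}.${release}-mongodb-replicaset:${port}"
  if (( i < replicas - 1 )); then url+=","; fi
done
url+="/${db}?replicaSet=${rs}"
echo "$url"
```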


@thoth Awesome work! Thank you.

Do you think it’s reasonable to reuse a mongo deployment between services and apps (one of which would be Reaction) in my cluster and just use different db’s?

I would like to (ideally) maintain just 1 (scalable) db server per cluster. So just a single MongoDB, single PostgreSQL, etc. and use different db’s for different apps & services.

If I go this way, I think I should probably have a helm chart that doesn’t spin-up the MongoDB fleet and only deals with Reaction deployment.
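Concretely, what I have in mind is something like this (host, port, and app names all hypothetical): every service points at the same replica-set hosts, and only the database segment of the connection string changes:

```shell
#!/usr/bin/env bash
# One shared replica set, one database per app: only the path segment of
# the connection string changes. Host, port, and app names are hypothetical.
set -euo pipefail

hosts="my-mongo-mongodb-replicaset-0.my-mongo-mongodb-replicaset:27017"
rs="rs0"

url_for() { printf 'mongodb://%s/%s?replicaSet=%s\n' "$hosts" "$1" "$rs"; }

url_for reaction   # Reaction's MONGO_URL
url_for billing    # another service on the same cluster, its own db
```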


  • Also thanks to @cchamilt for the simpler chart. Really useful for someone like me, who built his first docker image less than a week ago.
  • Off-topic: @thoth Have you ever tried to make a helm chart for Discourse?


@tsenkov I think it is very reasonable to reuse mongo in this fashion. That’s the sort of work it was designed for. And it scales very well in such a fashion.

That being said, in kubernetes it is very easy to give every project its own namespace and its own set of dbs. While this is less efficient in terms of memory, it does have the effect of distributing your bottlenecks out to the individual applications; your overall bottleneck then becomes the load balancer. But there is nothing wrong with either scenario.

Of note, reactionetes’ reactioncommerce chart is already separate from the mongodb chart; you feed it the release name of your mongodb deployment as an option in the helm call:

helm install \
  --name my-release-name \
  --set mongodbReleaseName=massive-mongonetes \

PS - I have not done a Discourse chart yet. Though I need a Mattermost one very soon.


Can the template for the mongodb url be used within the values.yaml as well as the configmap.yaml?

Update (latest):
OK, I see what the idea was there. I’ve added the option to supply a custom url for the mongodb, and in the configmap I switch between that and the template function (in case no url is provided). @thoth, do you want me to make a pull request for this?

This (.Values.mongodbUrl) doesn’t seem to be used anywhere, actually… And thinking about this further - maybe it shouldn’t be guessed at all. I think the configmap.yaml should just use {{ .Values.mongodbUrl }} (which by default would be targeting the default install of the stable/mongodb-replicaset chart)?

apiVersion: v1
kind: ConfigMap
# metadata omitted
data:
  reaction.mongo_url: {{ .Values.mongodbUrl }}

Otherwise if I create my own values.yaml file and just want to use the default reactionetes chart, the value I specify (for mongodbUrl) will just be ignored when I do:

$ helm install -f custom-reactionetes-values.yaml local/path/to/reactionetes/


The w8 scripts are such a nice pattern to follow. I haven’t done much dev-ops before, but this seems like an elegant way to get it done. Thanks again for building this, it’s an awesome primer.


Thanks on the w8s; I think the name is kind of kitschy too!

I’d gladly take pull requests btw, and welcome thoughts on future direction etc.


PS - sorry about the delay, I’ll try to be more attentive especially if you submit a PR

