OpenShift in Fedora Infrastructure

I thought I would share today some history, recent happenings, and plans for OpenShift in Fedora Infrastructure. We currently have two OpenShift clusters running in Fedora Infra (not counting two more dedicated to osbs, the container build system). One of these is our staging instance and the other is our production instance.

Astute readers who know OpenShift may ask why we even bother with a staging cluster, since a lot of OpenShift workflows let you test and validate things and deploy them as you like. Well, there are a few reasons we set things up this way. First, we don't have all that many apps moved into OpenShift yet, so they have to interact with all the rest of our staging deployments on VMs. Also, until recently it wasn't easy to separate routes in OpenShift to make sure you were only sending staging messages to the staging network and never mixing in the production network (this has since been vastly improved, but we haven't taken advantage of the new functionality yet). Finally, we wanted to make sure we had a test bed for deploying and upgrading the clusters without having to do all that work in production.

Our production cluster has three apps running in it currently: bodhi's web frontend, waiverdb, and greenwave. Our staging instance has those three, as well as a number of not-quite-completed apps: modernpaste, rats, release-monitoring, librariesio2fedmsg, and transstats.

Last week I went ahead and reinstalled both our staging cluster (on Wednesday) and then our production cluster (on Thursday). I had actually upgraded our staging cluster from 3.7 to 3.9, but I wanted to make sure we could redeploy from the ground up. Production had been on version 3.6 (upgraded from 3.5). The nodes we were using were sized when we installed 3.5, and the requirements changed after that, making them too small. I had re-installed the compute nodes OK, but the masters didn't go as well, so I figured it would be a good time to just reinstall completely with the right sizes and 3.9. Additionally, 3.9 added support for using cri-o instead of docker for containers, and we wanted to take advantage of that, as well as adding persistent storage for our registry so we didn't need to rebuild images all the time. There were various hiccups and issues in our config, but after working through staging we didn't hit too many in production. All our apps are now using cri-o containers!
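For the curious, here is roughly what those two changes look like in an openshift-ansible inventory. This is a minimal sketch, not our actual config; the NFS host, path, and size below are placeholder values:

    [OSEv3:vars]
    # Run pods with cri-o instead of docker (new in 3.9):
    openshift_use_crio=true

    # Persistent storage for the integrated registry, so built images
    # survive redeploys (NFS here is just an example backend; the host,
    # directory, and size are placeholders):
    openshift_hosted_registry_storage_kind=nfs
    openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
    openshift_hosted_registry_storage_host=nfs01.example.com
    openshift_hosted_registry_storage_nfs_directory=/exports
    openshift_hosted_registry_storage_volume_name=registry
    openshift_hosted_registry_storage_volume_size=10Gi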

Our plans moving forward include:

  • Moving more apps into OpenShift. Initial configuration is still not as smooth as we would like, but we hope to improve that, and once an app is set up, it's much nicer for everyone to maintain and deploy from there.
  • We want to improve our ansible setup for deploying / managing apps. Right now our roles and setup are very flexible, but if we change them to be more opinionated we can hopefully make it easier for people to run apps there (there's a rough sketch of that idea after this list).
  • As soon as our new cloud is up and running we plan to deploy an OpenShift cluster there. I have been calling it a 'dev' environment, but I am not sure that's the right name now. It might be better to think of it as 'manage your own'. We hope to give people who need it wide access there. The downside, of course, is that if we do a reinstall, everyone will have to redeploy their apps.
  • Someday we should use our staging cluster just for testing upgrades and re-installs, and have all our apps in one production cluster.
  • Someday it would be great to build and deploy apps straight from a prod or a stg branch in their upstream git repositories (there's a sketch of what that could look like after this list). Most of our apps currently just use RPMs, so it doesn't save as much work on deployments yet.
  • We definitely want to move as many apps over as make sense; it will just take some elbow grease.
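To illustrate the "more opinionated" idea from the second bullet: rather than every app wiring up all of its own objects, a role could take just a handful of values and derive everything else. This is purely hypothetical; the role name, host group, and every variable below are invented for illustration, not our actual roles:

    # Hypothetical opinionated app role; nothing here is real, it
    # just sketches the "few inputs, derive the rest" idea.
    - hosts: os_masters_stg
      roles:
        - role: openshift/app
          app: waiverdb
          # The role would derive the project, imagestream, buildconfig,
          # deploymentconfig, service, and route from these few values:
          memory_limit: 512Mi
          replicas: 2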
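And to make the branch-based build idea concrete, a build set up that way might use an OpenShift BuildConfig along these lines. This is only a sketch; the repository URL, app name, branch, and builder image are placeholders:

    # Sketch of a BuildConfig that builds from an upstream "staging"
    # branch; repo URL, names, and builder image are placeholders.
    apiVersion: v1
    kind: BuildConfig
    metadata:
      name: someapp-stg
    spec:
      source:
        git:
          uri: https://github.com/fedora-infra/someapp.git
          ref: staging
      strategy:
        sourceStrategy:
          from:
            kind: ImageStreamTag
            name: python:3.6
            namespace: openshift
      output:
        to:
          kind: ImageStreamTag
          name: someapp:stg
      triggers:
        - type: ConfigChange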
It's exciting to see all the improvements to OpenShift over the last year. It just gets better and better!