Deploy a .NET Core 2.1 Microservice API in OpenShift
So you want to see a .NET Core web API implementation of a microservice? And you want to see how to automate the deployment in Minishift (or OpenShift, really)? Well, you are in luck: I have one to show! What a coincidence. “I love it when a plan comes together.” I have a GH repo you can use to follow this, or to fork, update, and follow along while customizing it with your own code. When you go to my GH repo it defaults to the develop branch. I am always tweaking this, so hopefully I did not break it!
What you will need for this: OpenShift or Minishift; VS Code or some other IDE; Git; about 30 minutes.
**UPDATED 1/19/19 with ConfigMaps so Jenkins slave agents are automatically set up.**
If you do not have OpenShift or Minishift set up with SonarQube inside, read my other blog post on setting up OpenShift and SonarQube, as you will use these later. Now clone the repo linked above and look at the deployment.yaml file. THAT is what makes the OpenShift project look like the image above. I won’t break down the YAML file in explicit detail; however, I will break down what it creates in OpenShift so you can do that yourself with some foundational knowledge. (Note: anywhere I say OpenShift, you can test locally with Minishift.)
Launch your OpenShift web console (if it is not already running) and log in. You do not need admin/admin and can use the developer login if you wish. Create a new project named “peopleapi” by clicking the Create Project button in the top right, and make the Display Name whatever you wish so you recognize it. Once done, click Create and then click the new name in the project listing to enter the PeopleAPI project screen in the web console. You should now have a blank canvas! Click Import YAML / JSON and copy/paste the contents of the deployment.yaml link above in there. (Or clone the repo and choose the deployment.yaml file from the cloned repo directory.) Click the Create button and then Continue to process (but not save) the template. Then click Close. You just created a project in OpenShift with a web front end, a database backend, and Jenkins to automate the CI/CD process. Congratulations, it was that easy once you template it out!
The template has parameters you can edit, but they are pre-filled so you can just click Create and use my public GH repo. I highly suggest coming back after all this and studying the deployment against the OpenShift project pods, images, and such so you 1) understand it and 2) can repeat it for your own project.
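For orientation, an OpenShift template with parameters is shaped roughly like this. This is a trimmed sketch, not the exact contents of my deployment.yaml; the object, parameter, and repo names here are illustrative:

```yaml
apiVersion: v1
kind: Template
metadata:
  name: peopleapi-template
parameters:
  # Parameters appear as editable fields when you Import YAML / JSON
  - name: GIT_REPO
    description: Git repository holding the source and Dockerfiles
    value: https://github.com/your-user/your-repo.git
objects:
  # Every object listed here (BuildConfig, DeploymentConfig, Service,
  # Route, etc.) gets created in the project when you click Create
  - kind: BuildConfig
    apiVersion: v1
    metadata:
      name: peopleapi
    spec:
      source:
        type: Git
        git:
          uri: ${GIT_REPO}   # parameter substitution happens at Create time
```

That parameter substitution is why the Import screen lets you click Create without typing anything: every field already has a default value.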
You should now see a few things in the Overview screen. The Jenkins setup is running, and the peopleapidb-dc (People API database deployment config) and the peopleapi-dc (People API deployment config) are there, but nothing is running in them yet. Jenkins already has a link (route) to it for later use. Next we need to launch the builds in your OpenShift to fire up the other container images. If you see this above, you are good to go! If not, kill the project and start over.
Building the Images
This project has a pipeline, but I want to show you how to do the steps the pipeline calls first. So click Builds in the far-left menu of the project screen in OpenShift and then click Builds on the popup menu. You should see two builds there with “No Builds” as the last build listed, and on the far right you see the type “Docker” and the link to the GitHub repo. Click on the database configuration (remember which is which?) and then click the Start Build button. Watch it fly and see all it does. You can also click Logs and see the logs just as if you ran a docker build -t xxxxxxx . type of command locally. (If the build fails because of a timeout or a slow OpenShift setup, try again. Pulling the base image takes a while, and I have seen it fail intermittently.)
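Under the hood, a Docker-strategy build like this is defined by a BuildConfig. Here is a hedged sketch of what it might look like; the real one is in deployment.yaml, and the repo URL and Dockerfile path below are assumptions:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: peopleapidb
spec:
  source:
    type: Git
    git:
      uri: https://github.com/your-user/your-repo.git  # the GitHub link shown in the Builds list
  strategy:
    type: Docker                    # same effect as running `docker build` locally
    dockerStrategy:
      dockerfilePath: Dockerfile    # which Dockerfile in the repo to build
  output:
    to:
      kind: ImageStreamTag
      name: peopleapidb:latest      # where the finished image gets pushed
```

The `output.to` tag is the important part for the next section: pushing there is what kicks off a deployment.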
Click back on the Overview menu and you will see Jenkins running and now your peopleapidb running as well (like the image just below): one pod each, with one container in each. Nicely done! Wait, what?!? How did this deploy? Well, we set it up to deploy automatically when there is a new build. If you click back on the peopleapidb-dc link in the middle of the Overview screen and then click the Configuration tab on the next page, you will see a few things. 1 replica = 1 pod running, and you can always raise that amount as the minimum number of pods running. You will also see the image is project-name/peopleapidb because that is what the YAML deployment named it, and because we are deploying the image inside this project it is always prefaced with the project name.
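The auto-deploy behavior comes from an image-change trigger on the deployment config. A trimmed sketch of what that section might look like (container and tag names are illustrative):

```yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: peopleapidb-dc
spec:
  replicas: 1                   # the "1 pod" minimum you see on the Configuration tab
  triggers:
    - type: ImageChange         # a new image on the tag below rolls out a new deployment
      imageChangeParams:
        automatic: true
        containerNames:
          - peopleapidb         # which container in the pod gets the new image
        from:
          kind: ImageStreamTag
          name: peopleapidb:latest
```

So the chain is: build finishes, image lands on peopleapidb:latest, the trigger fires, a new deployment rolls out. No manual step in between.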
We also have a persistent volume claim (it is like saving data on a mounted drive inside OpenShift) to save the data in /var/opt/mssql from the container image when the container is restarted or redeployed. This way we do not lose data and can write to the container without a bunch of permission issues. We also have the Triggers area, and that, my friend, is why a build made a deployment happen! Our build is set up to automatically compile and create/push a Docker image to peopleapi/peopleapidb:latest. Pushing a new image there automagically creates a new deployment the way we set this up!
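The persistent storage piece has two halves: a claim for the storage itself, and a mount of that claim inside the pod. A hedged sketch (claim name and size are illustrative, not from the actual template):

```yaml
# The "mounted drive": a PersistentVolumeClaim that outlives the container
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: peopleapidb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# In the DeploymentConfig's pod template, the claim is mounted at /var/opt/mssql
# so the SQL Server data files survive restarts and redeployments
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: peopleapidb-dc
spec:
  template:
    spec:
      containers:
        - name: peopleapidb
          volumeMounts:
            - name: mssql-data
              mountPath: /var/opt/mssql
      volumes:
        - name: mssql-data
          persistentVolumeClaim:
            claimName: peopleapidb-pvc
```

Because /var/opt/mssql is where SQL Server keeps its databases, everything written there lands on the volume instead of in the throwaway container layer.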
Don’t believe me?! Go into the build configuration for peopleapidb and see for yourself. You can click the Actions menu at the top of the build or deployment screens and get more detailed information, forms to change things, or even edit the YAML directly. Again, feel free to do this. If you mess it up, click the OpenShift logo at the top left to go back to the project listing, find your project, click the 3-dot menu to the far right of it, and choose “Delete Project” so you can start over. Don’t worry! I did that about 6 times before I got this project correct. No lie.
I am not going to explain every single little thing on this page. “Teach a man to fish” is my thing here. I want you to get comfortable and start digging in as you “learn by doing”. Try to update something, break it, do your worst. You can always wipe it out (delete the whole project) and start over with a brand new deployment.yaml copy/paste in a brand new project and get going quickly.
Now do the same thing with the peopleapi build. Go into it, start the build, watch the logs, and then see it automatically deploy to its own pod with one container. And this peopleapi container already has a route, a path to call into it. You can go to http://peopleapi-peopleapi.192.168.99.103.nip.io/swagger/ (or whatever your root route is) and see the Swagger UI from Swashbuckle fire up. So now you have three pods running: Jenkins, PeopleAPI, and PeopleAPI Database. Nicely done! Want a challenge? Delete the project and go back and do it all again from memory using only the deployment YAML file. You will be glad you did!
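That external URL exists because the template includes a Route object for the API. A sketch of what it might contain; the host shown is the example from this post, and your Minishift IP will differ:

```yaml
apiVersion: v1
kind: Route
metadata:
  name: peopleapi
spec:
  # Minishift routes default to <name>-<project>.<minishift-ip>.nip.io
  host: peopleapi-peopleapi.192.168.99.103.nip.io
  to:
    kind: Service
    name: peopleapi     # the Service sitting in front of the API pods
  port:
    targetPort: 8080    # forward external traffic to the container's port
```

A Route is how OpenShift exposes a Service to the outside world; anything without one stays cluster-internal.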
Network connectivity within the project namespace
If you go to the actual deployment of the peopleapidb and click the Configuration tab, one other thing you will notice is the Ports entry, 1433/TCP. That means this container is listening on port 1433, the default MS SQL Server port, as this DB is the Linux variant of SQL Server. You will also see there is NO ROUTE for the database, as we do not want people going straight to the database. It is only reachable from inside the cluster.
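Inside the cluster, the database is reachable through a Service. A hedged sketch of what that object might look like (the selector label is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: peopleapidb     # this name becomes the DNS hostname other pods use
spec:
  selector:
    app: peopleapidb    # matches the labels on the DB pod
  ports:
    - port: 1433        # default MS SQL Server port
      targetPort: 1433
  # No Route points at this Service, so it is only reachable from pods
  # inside the project as "peopleapidb:1433" -- never from outside
```

This Service name is exactly what shows up in the API's connection string in the next section.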
Now go to the peopleapi deployment, click Configuration, and notice a few differences. One is that the port is 8080/TCP, of course, and the image name is peopleapi/peopleapi:latest, which is what the build makes, which in turn triggers the deployment. Click on the Environment tab to see a few more things. A running container has three things: an image, an environment, and a configuration. We have talked about the image and the configuration (i.e. port 8080). The environment in the peopleapi is what makes the magic happen.

ASPNETCORE_ENVIRONMENT and ASPNETCORE_URLS are variables that the .NET Core web API reads to run a specific way. Check the C# code to see how they are used. But peopleapicontext is the database connection string. (Yes, I know: DO NOT store this in an environment variable; use a secret within OpenShift. I am doing this for explanation only. DO NOT run production like this!) You will notice the server=peopleapidb part. Why is that the name of the database “server”? Because that is the name of the container running. Check that in the Configuration tab for the deployment. This is similar to the docker-compose.yml file in the repo you pulled this from, in that you fire up the DB and use that name as the database server in the connection string. The rest is the user/pwd/database I set up in the DB Dockerfile. This is how you can inject environment variables into a running container. If you change an environment variable and click Save, a new deployment happens.
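In the pod template, those environment variables look roughly like this. The values are illustrative, not copied from my template, and (as said above) real credentials belong in a Secret, not plain env vars:

```yaml
# env fragment of the peopleapi container spec
env:
  - name: ASPNETCORE_ENVIRONMENT   # which appsettings profile the app runs under
    value: Development
  - name: ASPNETCORE_URLS          # tells Kestrel which address/port to listen on
    value: http://*:8080
  - name: peopleapicontext         # the connection string the API reads
    # "peopleapidb" resolves to the database Service inside the project;
    # user/password/database are placeholders -- use a Secret in real life
    value: "server=peopleapidb;user id=sa;password=CHANGE_ME;database=people"
```

Edit any of these on the Environment tab and click Save, and the deployment config's trigger rolls out a fresh pod with the new values.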
Advanced: Go into the container through OpenShift
Let’s say you want to go into the container to check whether something is set, look at permissions or files, or just see how it was made. In Docker Community Edition you can do a docker exec -it name-of-container sh to get a command prompt. Well, it is easier in OpenShift. Click the Applications menu and then Pods. You will see two completed pods (the builds you ran) and three running pods.
Click on the peopleapi-dc-xxxxxx one. This screen kind of looks like the regular Deployment Configuration screen except for an extra tab: Terminal! Click that! Mind. Blown. Run commands in here and see what you can do. Again, kill it and then just rebuild/redeploy it. No worries here.
So what did we just do?
We just deployed a microservice API in .NET Core 2.1 web API format inside Minishift with Jenkins. That is what we did! Pretty cool stuff, unless you have been following k8s, OpenShift, and Docker for a while. Even so, it is cool to have it templated and get rolling to test quickly. Now what?
Really the next steps are to link in the Jenkinsfile we have, add plugins to Jenkins for SonarQube and Aqua Microscanner and other items, and then run the pipeline. But that is in my other post. For now click around in the project and learn. And then read the deployment.yaml file again and match the setup to the actual running project in Minishift. That is how I learned this. Enjoy!