
Building a CaaS for the enterprise. Part 2


In our last post we looked at building our CaaS with Docker and Puppet from the point of view of a ‘sysadmin’. Now that we have the platform built, let’s put on our ‘developer’ hat and create a CD pipeline on top of it. The first thing we need to do is bring up our CaaS, which we can do by issuing the command vagrant up as we did in the last post.

In the last post we built three applications for the developer to use: etcd, Interlock and Jenkins. So let’s break it down and look at each application individually, so we can get a better understanding of how to take advantage of them. The first application that we will look at is Interlock.

Interlock is a really awesome project maintained by Evan Hazlett. It is a dynamic, event-driven extension system for Docker Swarm, with extensions including HAProxy and Nginx for dynamic load balancing. As you know from the last post, we implemented Interlock with the Nginx extension, which we defined in interlock.yml.erb. So what are we going to do with Interlock?
We are going to use Interlock to do our HTTP/Layer 7 routing. We need this because our CaaS will have many servers in the cluster, and we don’t want to limit where a container can run. If a node fails, we want our containers to restart on a healthy node, all while keeping connectivity from applications external to our CaaS. This is what Interlock will provide us: we will define a URL for each of our applications and, no matter what node the application runs on, we will be able to route to it.
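Under the hood, Interlock listens to the Docker event stream and, for every container that carries interlock.* labels (we will see these labels in the deploy job later in the post), publishes a route built from those labels. Purely as an illustration, the routed URL is just the two label values joined together:

```shell
# illustration only: Interlock's routed URL is the hostname label + "." + the domain label
interlock_hostname="jenkins"
interlock_domain="ucp-demo.local"
url="http://${interlock_hostname}.${interlock_domain}"
echo "$url"   # http://jenkins.ucp-demo.local
```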

The first piece of configuration we need to do is add a URL to our local machine’s hosts file, to stand in for the DNS server we would have in production. So in your hosts file add the following entry: 172.17.10.103 jenkins.ucp-demo.local. This will point the URL jenkins.ucp-demo.local to our Nginx container. Now, if our CaaS is up and we browse to http://jenkins.ucp-demo.local, we will get the Jenkins login page.
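For reference, the hosts file lives at /etc/hosts on macOS and Linux (C:\Windows\System32\drivers\etc\hosts on Windows), and the relevant fragment ends up looking like this (the webapp entry is the URL we will use for the demo application deployed later in the post):

```
# CaaS demo entries - point the demo URLs at the Nginx/Interlock node
172.17.10.103 jenkins.ucp-demo.local
172.17.10.103 webapp.ucp-demo.local
```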

[Screenshot: the Jenkins login page]
We can log in to Jenkins with the username admin and password admin. Before we go any further, let’s loop back around to Interlock: we can already tell that it is working like clockwork, because whichever node our Jenkins container spawned on, we are able to route to it via http://jenkins.ucp-demo.local. For example, my Jenkins container is running on ucp-02, which has the IP address 172.17.10.102, but the request goes to 172.17.10.103, hits Nginx, and is routed to 172.17.10.102.

Once we have logged in we can see that we have two Jenkins jobs already. The first is the seed job, which just automates the creation of the Deploy container job, so we are not going to spend much time looking at it. If you would like to see the code for the seed job you can find it here.

As a developer I don’t really need to know the low-level details of the CaaS; all I want to know is that it’s healthy and that I can deploy to it. Thanks to the Docker remote API, we have that. Of course we need to think about the security around the API endpoint, as we don’t want just anyone sending requests to it. This is all made really seamless for us thanks to Docker UCP, as it looks after all the TLS. So let’s look at the code in our Deploy container job.

#!/bin/bash

echo "Authenticating with the UCP cluster"

AUTHTOKEN=$(curl -sk -d '{"username":"admin","password":"orca"}' https://172.17.10.101/auth/login | jq -r .auth_token) && \
curl -k -H "Authorization: Bearer $AUTHTOKEN" https://172.17.10.101/api/clientbundle -o bundle.zip && \
unzip bundle.zip && rm -rf bundle.zip

echo "Creating JSON payload"

cat << EOF > ./docker.json
{
    "Hostname": "webapp",
    "Image": "scottyc/webapp:latest",
    "AttachStdin": false,
    "AttachStdout": true,
    "AttachStderr": true,
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Labels": {
               "interlock.hostname": "webapp",
               "interlock.domain": "ucp-demo.local"
    },
    "HostConfig": {
        "PortBindings": {
            "3000/tcp": [{
                "HostPort": ""
            }]

        }
    }
}  
EOF

echo "Creating container"

CONTAINER=$(curl -sk --cacert ca.pem --cert cert.pem --key key.pem -H "Content-Type: application/json" -X POST --data "@docker.json" https://172.17.10.101/v1.22/containers/create?name=webapp | jq -r '.Id')

echo $CONTAINER


echo "Starting container $CONTAINER"
curl -sk --cacert ca.pem --cert cert.pem --key key.pem  -X POST https://172.17.10.101/v1.22/containers/$CONTAINER/start



echo "Cleaning workspace"

rm -rf ./docker.json
rm -rf *.pem
rm -rf *.pub
rm -rf env.sh
rm -rf *.ps1
rm -rf *.cmd 

So let’s break down the code to see what is happening in our deployment. We will break it into three stages: authenticating with the API, creating our payload, and deploying our container.
First, here is the code where we request a cert bundle from our UCP controller and unzip it.

AUTHTOKEN=$(curl -sk -d '{"username":"admin","password":"orca"}' https://172.17.10.101/auth/login | jq -r .auth_token) && \
curl -k -H "Authorization: Bearer $AUTHTOKEN" https://172.17.10.101/api/clientbundle -o bundle.zip && \
unzip bundle.zip && rm -rf bundle.zip
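The jq call is doing the heavy lifting here: it pulls the auth_token field out of the JSON that /auth/login returns. A pure-shell illustration of that extraction, using a canned response of the same shape (the token value is made up):

```shell
# canned login response with the same shape as the UCP /auth/login reply
RESPONSE='{"auth_token":"abc123"}'
# strip everything up to the value, then everything after its closing quote
TOKEN=${RESPONSE#*\"auth_token\":\"}
TOKEN=${TOKEN%%\"*}
echo "$TOKEN"   # abc123
```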

Next we will create the payload that defines the container we are going to deploy. In this example we are going to use the image scottyc/webapp, pass the Interlock URL webapp.ucp-demo.local via Docker labels, and lastly expose TCP port 3000 on a random host port and let Interlock do the rest.

{
    "Hostname": "webapp",
    "Image": "scottyc/webapp:latest",
    "AttachStdin": false,
    "AttachStdout": true,
    "AttachStderr": true,
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Labels": {
               "interlock.hostname": "webapp",
               "interlock.domain": "ucp-demo.local"
    },
    "HostConfig": {
        "PortBindings": {
            "3000/tcp": [{
                "HostPort": ""
            }]

        }
    }
}  
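One thing worth doing before POSTing a hand-written payload like this is a JSON syntax check, since a stray comma will earn you a 400 from the API. A quick sketch (python3 is used here only because it is commonly installed; jq . would work equally well):

```shell
# write a trimmed-down version of the payload and sanity-check its syntax
cat << 'EOF' > /tmp/docker.json
{
    "Image": "scottyc/webapp:latest",
    "Labels": {
        "interlock.hostname": "webapp",
        "interlock.domain": "ucp-demo.local"
    }
}
EOF

if python3 -m json.tool /tmp/docker.json > /dev/null; then
    echo "payload is valid JSON"
else
    echo "payload is NOT valid JSON"
fi
```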

Now that we have our payload, we can deploy it to the Docker remote API. We will do this with two API calls: the first creates the container and the second starts it. Note that we are using the certs from the client bundle to authenticate with the remote API.

CONTAINER=$(curl -sk --cacert ca.pem --cert cert.pem --key key.pem -H "Content-Type: application/json" -X POST --data "@docker.json" https://172.17.10.101/v1.22/containers/create?name=webapp | jq -r '.Id')
curl -sk --cacert ca.pem --cert cert.pem --key key.pem  -X POST https://172.17.10.101/v1.22/containers/$CONTAINER/start
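Note that neither curl call checks the HTTP status, so a failed create or start would still let the Jenkins job report SUCCESS. In a real pipeline you would capture the status with curl -w '%{http_code}' and fail the build on anything outside the 2xx range (the Docker remote API returns 201 Created for create and 204 No Content for start). A minimal sketch of that check (the helper name is ours, not part of the job):

```shell
# hypothetical helper: map the API's HTTP status to success/failure
check_status() {
  case "$1" in
    2??) echo "ok" ;;                       # 201 for create, 204 for start
    *)   echo "failed with HTTP $1" ;;
  esac
}

check_status 201   # ok
check_status 404   # failed with HTTP 404
```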

In the real world this job would be triggered by a git push or a webhook from your Docker Hub/Trusted Registry; as this is a demo, we will just trigger the build manually from the Jenkins web UI.
The logs of a successful job will look like this:

Started by user admin
Building in workspace /home/jenkins/jobs/Deploy container/workspace
[workspace] $ /bin/bash /tmp/hudson1857442634599539315.sh
Authenticating with the UCP cluster
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 13751  100 13751    0     0  40666      0 --:--:-- --:--:-- --:--:-- 40804
Archive:  bundle.zip
 extracting: ca.pem                  
 extracting: cert.pem                
 extracting: key.pem                 
 extracting: cert.pub                
 extracting: env.sh                  
 extracting: env.ps1                 
 extracting: env.cmd                 
Creating JSON payload
Creating container
c647b9492662f432bf093dfbd0f5f007611761ff916c1340415322c7d4573257
Starting container c647b9492662f432bf093dfbd0f5f007611761ff916c1340415322c7d4573257
Cleaning workspace
Finished: SUCCESS

Then we can test our web app by browsing to http://webapp.ucp-demo.local. No matter which node our container is deployed on, Interlock will route us there.

So that wraps up building a CaaS for the enterprise. We covered how to build our CaaS with our ‘sysadmin’ hat on, then built a CD pipeline on top of it as a ‘developer’. I hope this has given you some insight into how to build a CaaS, and some good principles for getting the sysadmin and development teams working together to improve efficiency in your business.

scottyc

Linux geek, Docker Captain and Retro Gamer
