
Building a CaaS for the enterprise. Part 1

There are a lot of great blog posts around at the moment about Docker and its open source offerings (especially from my fellow Docker Captains), so I thought I would write about something different. At DockerCon there was a push to move containers into the enterprise, and from the conversations I had with people there, there is definitely a big interest in this area. So I am going to write a series of blog posts on how to deploy your own CaaS (Container as a Service) with a solution that would be suitable for the enterprise. As many enterprise organisations have teams with quite different responsibilities, we are going to break the series down into two parts. In the first post we will wear our ‘sysadmin’ hat and build the platform that the CaaS will sit on. In the second post we will change hats and become an application developer who wants to consume the CaaS and deploy to it. By breaking the solution down and seeing what each team brings to the table, I hope to show that both teams working together breeds efficiency and offers a better outcome for everyone.

Most enterprise customers moving to containers will most likely already have a configuration management tool, so in this series we will use one to build the CaaS. I am a Puppet man, so we will use Puppet; it could easily be swapped out for the configuration management tool of your choice. As the base application for the CaaS we will use Docker’s UCP (Universal Control Plane, part of Docker Datacenter: https://www.docker.com/products/docker-datacenter), and we will wrap Docker Compose with Puppet to deploy several applications to our CaaS. So let’s get started.

The first thing we need to do is set up our Vagrant repo, as we are going to have three servers in our UCP cluster. Clone https://github.com/scotty-c/vagrant-template.git. In the root of the directory we are going to make changes to three files: Puppetfile, servers.yaml and docker_subscription.lic.
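A minimal sketch of that first step, assuming you already have git and Vagrant installed locally:

# Grab the Vagrant template and move into it
git clone https://github.com/scotty-c/vagrant-template.git
cd vagrant-template

# These are the three files we will touch in the steps below
# (create docker_subscription.lic if the repo does not ship one)
ls Puppetfile servers.yaml docker_subscription.lic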

We are going to add the following code to our Puppetfile

#!/usr/bin/env ruby

require "socket"
$hostname = Socket.gethostname

forge 'http://forge.puppetlabs.com'

mod 'garethr/docker', :git => 'https://github.com/garethr/garethr-docker.git'
mod 'puppetlabs/apt'
mod 'puppetlabs/docker_ucp', :git => 'https://github.com/puppetlabs/puppetlabs-docker_ucp.git'
mod 'puppetlabs/stdlib'
mod 'maestrodev/wget'
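Before booting any VMs, you can verify the Puppetfile resolves on your workstation; a quick sanity check with r10k (assuming you have Ruby available locally):

# Install r10k and validate the Puppetfile syntax
gem install r10k
r10k puppetfile check

# Optionally resolve all the modules into ./modules to confirm the sources work
r10k puppetfile install --verbose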

Next we are going to add the configuration for our three servers to the servers.yaml file

---
-
  box: scottyc/ubuntu-14-04-puppet-kernel-4-2
  cpu: 2
  ip: "172.17.10.101"
  name: ucp-01
  forward_ports:
    - { guest: 443, host: 8443 }
  ram: 4096
  shell_commands:
    - { shell: 'apt-get update -y' }
    - { shell: 'apt-get install -y wget git' }
    - { shell: 'mkdir ~/.docker || true' }
    - { shell: 'mkdir /etc/docker/ || true && cp /vagrant/docker_subscription.lic /etc/docker/subscription.lic' }
    - { shell: 'echo -e "172.17.10.101 ucp-01\n172.17.10.102 ucp-02\n172.17.10.103 ucp-03" > /etc/hosts' }
    - { shell: '/opt/puppet/bin/gem install r10k && ln -s /opt/puppet/bin/r10k /usr/bin/r10k || true' }
    - { shell: 'cp /home/vagrant/ucp-01/Puppetfile /tmp && cd /tmp && r10k puppetfile install --verbose' }
    - { shell: 'cp /home/vagrant/ucp-01/modules/* -R /tmp/modules || true' }

-
  box: scottyc/ubuntu-14-04-puppet-kernel-4-2
  cpu: 2
  ip: "172.17.10.102"
  name: ucp-02
  forward_ports:
    - { guest: 443, host: 9443 }
  ram: 4096
  shell_commands:
    - { shell: 'apt-get update -y' }
    - { shell: 'apt-get install -y wget git' }
    - { shell: 'mkdir ~/.docker || true' }
    - { shell: 'mkdir /etc/docker/ || true && cp /vagrant/docker_subscription.lic /etc/docker/subscription.lic' }
    - { shell: 'echo -e "172.17.10.101 ucp-01\n172.17.10.102 ucp-02\n172.17.10.103 ucp-03" > /etc/hosts' }
    - { shell: '/opt/puppet/bin/gem install r10k && ln -s /opt/puppet/bin/r10k /usr/bin/r10k || true' }
    - { shell: 'cp /home/vagrant/ucp-02/Puppetfile /tmp && cd /tmp && r10k puppetfile install --verbose' }
    - { shell: 'cp /home/vagrant/ucp-02/modules/* -R /tmp/modules || true' }

-
  box: scottyc/ubuntu-14-04-puppet-kernel-4-2
  cpu: 2
  ip: "172.17.10.103"
  name: ucp-03
  forward_ports:
    - { guest: 443, host: 10443 }
  ram: 4096
  shell_commands:
    - { shell: 'apt-get update -y' }
    - { shell: 'apt-get install -y wget git' }
    - { shell: 'mkdir ~/.docker || true' }
    - { shell: 'mkdir /etc/docker/ || true && cp /vagrant/docker_subscription.lic /etc/docker/subscription.lic' }
    - { shell: 'echo -e "172.17.10.101 ucp-01\n172.17.10.102 ucp-02\n172.17.10.103 ucp-03" > /etc/hosts' }
    - { shell: '/opt/puppet/bin/gem install r10k && ln -s /opt/puppet/bin/r10k /usr/bin/r10k || true' }
    - { shell: 'cp /home/vagrant/ucp-03/Puppetfile /tmp && cd /tmp && r10k puppetfile install --verbose' }
    - { shell: 'cp /home/vagrant/ucp-03/modules/* -R /tmp/modules || true' }

The last file that we need to modify is docker_subscription.lic. You will need to sign up for a trial license key for Docker Datacenter and add it to this file. You can do that at the following url https://www.docker.com/products/docker-datacenter.
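The license file is a small JSON document, so a quick sanity check can save a confusing UCP install failure later (a sketch, assuming you have jq installed on your workstation):

# If this prints parsed JSON, the license file is at least well formed
jq . docker_subscription.lic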

Now we have that complete, we will create a module called ucpconfig. You can use puppet module generate, or however you like to build your module skeletons. This will be a wrapper class around the puppetlabs/docker_ucp module, to which we are going to add some extra functionality, especially around security and TLS.
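A sketch of generating the skeleton; puppet module generate expects an author-module name and walks you through an interactive questionnaire ('scottyc' here is just a placeholder for your own Forge name):

puppet module generate scottyc-ucpconfig
cd scottyc-ucpconfig

# Directories we will fill in over the rest of this post
mkdir -p manifests templates lib/facter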

In our init.pp we will add the following code

class ucpconfig (

  $ucp_master                    = $ucpconfig::params::ucp_master,
  $ucp_deploy_node               = $ucpconfig::params::ucp_deploy_node,
  $ucp_url                       = $ucpconfig::params::ucp_url,
  $ucp_username                  = $ucpconfig::params::ucp_username,
  $ucp_password                  = $ucpconfig::params::ucp_password,
  $ucp_fingerprint               = $ucpconfig::params::ucp_fingerprint,
  $ucp_version                   = $ucpconfig::params::ucp_version,
  $ucp_host_address              = $ucpconfig::params::ucp_host_address,
  $ucp_subject_alternative_names = $ucpconfig::params::ucp_subject_alternative_names,
  $ucp_external_ca               = $ucpconfig::params::ucp_external_ca,
  $ucp_swarm_scheduler           = $ucpconfig::params::ucp_swarm_scheduler,
  $ucp_swarm_port                = $ucpconfig::params::ucp_swarm_port,
  $ucp_controller_port           = $ucpconfig::params::ucp_controller_port,
  $ucp_preserve_certs            = $ucpconfig::params::ucp_preserve_certs,
  $ucp_license_file              = $ucpconfig::params::ucp_license_file,
  $consul_master_ip              = $ucpconfig::params::consul_master_ip,
  $consul_advertise              = $ucpconfig::params::consul_advertise,
  $consul_image                  = $ucpconfig::params::consul_image,
  $consul_bootstrap_num          = $ucpconfig::params::consul_bootstrap_num,
  $docker_network                = $ucpconfig::params::docker_network,
  $docker_network_driver         = $ucpconfig::params::docker_network_driver,
  $docker_cert_path              = $ucpconfig::params::docker_cert_path,
  $docker_host                   = $ucpconfig::params::docker_host,
) inherits ucpconfig::params {

  class { 'docker':
    tcp_bind         => 'tcp://127.0.0.1:4243',
    socket_bind      => 'unix:///var/run/docker.sock',
    extra_parameters => "--cluster-store=etcd://${consul_master_ip}:4001 --cluster-advertise=eth1:2376",
  } ->

  class { 'docker::compose':
    version => '1.7.1',
    require => Class['docker'],
  }

  case $::hostname {
    $ucp_master: {
      contain ucpconfig::master
      contain ucpconfig::config

      Class['ucpconfig::master'] -> Class['ucpconfig::config']
    }

    $ucp_deploy_node: {
      include ucpconfig::node
      contain ucpconfig::config
      contain ucpconfig::compose

      Class['ucpconfig::config'] -> Class['ucpconfig::node'] -> Class['ucpconfig::compose']
    }

    default: {
      include ucpconfig::node
      contain ucpconfig::config

      Class['ucpconfig::config'] -> Class['ucpconfig::node']
    }
  }
}

So what are we setting up here? This sets up all the params we will pass to our installation. We install Docker and point the daemon at etcd as its key/value store, which Docker networking needs in engine version 1.11. We install Docker Compose version 1.7.1, then split the logic to classify each machine as a master, a node or a deploy node. The master controls the cluster and owns the TLS CA and the key/value store, a node is just a worker, and the deploy node handles deploying our Docker Compose files for applications.
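Once the module is finished and Puppet has run, you can sanity-check the daemon flags on any of the machines; a hedged example (the exact docker info wording varies between engine versions):

# The daemon should report the etcd cluster store we configured above
docker info | grep -i cluster

# And Compose should report the pinned version
docker-compose --version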

In the master.pp we will add the following code

class ucpconfig::master (

  $ucp_version                   = $ucpconfig::ucp_version,
  $ucp_host_address              = $ucpconfig::ucp_host_address,
  $ucp_subject_alternative_names = $ucpconfig::ucp_subject_alternative_names,
  $ucp_external_ca               = $ucpconfig::ucp_external_ca,
  $ucp_swarm_scheduler           = $ucpconfig::ucp_swarm_scheduler,
  $ucp_swarm_port                = $ucpconfig::ucp_swarm_port,
  $ucp_controller_port           = $ucpconfig::ucp_controller_port,
  $ucp_preserve_certs            = $ucpconfig::ucp_preserve_certs,
  $ucp_license_file              = $ucpconfig::ucp_license_file,

) {

  class { 'docker_ucp':
    controller                => true,
    host_address              => $ucp_host_address,
    version                   => $ucp_version,
    usage                     => false,
    tracking                  => false,
    subject_alternative_names => $ucp_subject_alternative_names,
    external_ca               => $ucp_external_ca,
    swarm_scheduler           => $ucp_swarm_scheduler,
    swarm_port                => $ucp_swarm_port,
    controller_port           => $ucp_controller_port,
    preserve_certs            => $ucp_preserve_certs,
    docker_socket_path        => '/var/run/docker.sock',
    license_file              => $ucp_license_file,
    require                   => Class['docker'],
  } ->

  file { '/etc/etcd':
    ensure => directory,
  } ->

  file { '/etc/etcd/docker-compose.yml':
    ensure  => file,
    content => template('ucpconfig/etcd.yml.erb'),
    require => File['/etc/etcd'],
  } ->

  docker_compose { '/etc/etcd/docker-compose.yml':
    ensure  => present,
    require => File['/etc/etcd/docker-compose.yml'],
  }
}

In this code we are just specifying the state of our master and deploying the key/value store. The next file we will create is node.pp, to define our worker nodes. So in node.pp we will add the following code.

class ucpconfig::node (

  $ucp_url                       = $ucpconfig::ucp_url,
  $ucp_username                  = $ucpconfig::ucp_username,
  $ucp_password                  = $ucpconfig::ucp_password,
  $ucp_fingerprint               = $ucpconfig::ucp_fingerprint,
  $ucp_version                   = $ucpconfig::ucp_version,
  $ucp_host_address              = $ucpconfig::ucp_host_address,
  $ucp_subject_alternative_names = $ucpconfig::ucp_subject_alternative_names,

) {

  class { 'docker_ucp':
    ucp_url                   => $ucp_url,
    fingerprint               => $ucp_fingerprint,
    username                  => $ucp_username,
    password                  => $ucp_password,
    host_address              => $ucp_host_address,
    subject_alternative_names => $ucp_subject_alternative_names,
    replica                   => false,
    version                   => $ucp_version,
    require                   => Class['docker'],
  }
}

This code just defines the machine as a UCP node and points it at the master. Now we have a third type of node, a deployment node. This node has the node class applied to it plus a compose class, so next we will create that class. In compose.pp we will add the following code.

class ucpconfig::compose {

  file { ['/etc/interlock', '/etc/jenkins']:
    ensure => directory,
  }

  file { '/etc/interlock/docker-compose.yml':
    ensure  => file,
    content => template('ucpconfig/interlock.yml.erb'),
    require => File['/etc/interlock', '/etc/jenkins'],
  }

  exec { 'docker-compose-interlock':
    command => 'bash -l -c "docker-compose -f /etc/interlock/docker-compose.yml up -d"',
    path    => '/usr/bin:/usr/sbin:/bin:/usr/local/bin',
    timeout => '1800',
    require => Class['ucpconfig::config'],
  }

  file { '/etc/jenkins/docker-compose.yml':
    ensure  => file,
    content => template('ucpconfig/jenkins.yml.erb'),
    require => File['/etc/interlock', '/etc/jenkins'],
  }

  exec { 'docker-compose-jenkins':
    command => 'bash -l -c "docker-compose -f /etc/jenkins/docker-compose.yml up -d"',
    path    => '/usr/bin:/usr/sbin:/bin:/usr/local/bin',
    timeout => '1800',
    require => [ Class['ucpconfig::config'], Exec['docker-compose-interlock'] ],
  }
}

So in this class we are deploying two applications, Interlock and Jenkins. We will use Interlock with Nginx to do our layer 7 routing, and Jenkins we will use in the next post as part of the development team's workflow. You will notice that we are using an exec type to call Docker Compose. This is because we change the value of DOCKER_HOST (via the /etc/profile.d/docker.sh file deployed below) half way through the catalogue run, so we need to reload the shell environment to pick up the new value, and the exec type with a login shell (bash -l) lets us do that.
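A quick illustration of why the login shell matters; a sketch, with the profile script path matching what config.pp deploys below:

# The current session still holds the old (likely empty) value
echo $DOCKER_HOST

# A login shell re-reads /etc/profile.d/*.sh and sees the new value
bash -l -c 'echo $DOCKER_HOST'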

Now, as this post is looking at what an enterprise would configure, we should have security at the front of mind, so we will create another class called config.pp. As this class contains security information, we deliberately give it a generic name. We will create the file config.pp and add the following code.

class ucpconfig::config (

  $ucp_url               = $ucpconfig::ucp_url,
  $ucp_username          = $ucpconfig::ucp_username,
  $ucp_password          = $ucpconfig::ucp_password,
  $docker_network        = $ucpconfig::docker_network,
  $docker_network_driver = $ucpconfig::docker_network_driver,
  $docker_cert_path      = $ucpconfig::docker_cert_path,
  $docker_host           = $ucpconfig::docker_host,
  $consul_master_ip      = $ucpconfig::consul_master_ip,
) {

  package { ['curl', 'zip', 'jq']:
    ensure => installed,
  } ->

  file { '/etc/docker/get_ca.sh':
    ensure  => file,
    content => template('ucpconfig/get_ca.sh.erb'),
  } ->

  exec { 'ca_bundle':
    command => 'sh get_ca.sh',
    path    => '/usr/bin:/usr/sbin:/bin:/usr/local/bin',
    cwd     => $docker_cert_path,
    creates => "${docker_cert_path}/ca.pem",
    require => File['/etc/docker/get_ca.sh'],
  } ->

  file { '/etc/profile.d/docker.sh':
    ensure  => present,
    content => template('ucpconfig/docker.sh.erb'),
    mode    => '0644',
  } ->

  docker_network { $docker_network:
    ensure  => present,
    driver  => $docker_network_driver,
    require => File['/etc/profile.d/docker.sh'],
  }
}

Before we move on to creating our template files, we will add the last manifest, params.pp. This has sensible defaults; any sensitive data we will move to Hiera. So let's add the following code to the file.

class ucpconfig::params {
  $ucp_master                    = ''
  $ucp_deploy_node               = ''
  $ucp_url                       = ''
  $ucp_username                  = ''
  $ucp_password                  = ''
  $ucp_fingerprint               = $::ucp_fingerprint
  $ucp_version                   = '1.0.0'
  $ucp_host_address              = ''
  $ucp_subject_alternative_names = ''
  $ucp_external_ca               = false
  $ucp_swarm_scheduler           = 'binpack'
  $ucp_swarm_port                = ''
  $ucp_controller_port           = '8443'
  $ucp_preserve_certs            = 'true'
  $ucp_license_file              = ''
  $docker_network                = 'private-net'
  $docker_network_driver         = 'overlay'
  $docker_cert_path              = ''
  $docker_host                   = ''
}

Now, I like to make absolutely everything automated, as it just makes life easier, especially when you are deploying your CaaS at scale. So we will add a quick custom fact to grab the UCP controller's fingerprint. To do this we create a folder called lib in the root of our module, then a folder called facter under the lib directory, and in it a file called ucp_fingerprint.rb. As you can tell from the file extension (.rb), this is a Ruby extension for Puppet. We will add the following code.

Facter.add('ucp_fingerprint') do
  setcode do
    Facter::Core::Execution.exec("echo -n | openssl s_client -connect 172.17.10.101:443 2> /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -noout -fingerprint -sha1 | cut -d= -f2")
  end
end
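To check the fact resolves once the module is in place, you can run it by hand on a node; a sketch (with older Facter the -p flag loads Puppet custom facts):

# Should print the SHA1 fingerprint of the controller's certificate
facter -p ucp_fingerprint

# Or exercise the underlying pipeline directly against the controller
echo -n | openssl s_client -connect 172.17.10.101:443 2> /dev/null \
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' \
  | openssl x509 -noout -fingerprint -sha1 | cut -d= -f2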

So now, whenever our code references $ucp_fingerprint, it will dynamically get the fingerprint from the controller. Now we can move on to our templates. In the root of our module we will create a folder called templates, and in that folder we will put five files, all of which we have already referenced in our manifests. They are docker.sh.erb, etcd.yml.erb, get_ca.sh.erb, interlock.yml.erb and jenkins.yml.erb. All the files with yml in the name are Docker Compose files, so we will look at them last. In the docker.sh.erb file we will add the following code.

#!/bin/bash
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=<%= @docker_cert_path %>
export DOCKER_HOST=<%= @docker_host %>

This is going to set our DOCKER_HOST to point at the controller for all nodes.
The next file we will look at is get_ca.sh.erb. We will add the following code.

#!/bin/bash
AUTHTOKEN=$(curl -sk -d '{"username":"<%= @ucp_username %>","password":"<%= @ucp_password %>"}' <%= @ucp_url %>/auth/login | jq -r .auth_token) && \
curl -k -H "Authorization: Bearer $AUTHTOKEN" <%= @ucp_url %>/api/clientbundle -o bundle.zip && \
unzip bundle.zip && rm -rf bundle.zip
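For reference, the client bundle UCP returns typically unpacks to the TLS material plus a helper script; a quick way to inspect it (run before the script's cleanup step deletes the zip):

# List the bundle contents without extracting
unzip -l bundle.zip
# Typical contents: ca.pem, cert.pem, key.pem and an env.sh helper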

This code requests the CA bundle from the controller; we do this so we have TLS set up from end to end. Now, last but not least, our three Docker Compose files; you can tell what they are going to deploy from the names of the files. The first one is etcd.yml.erb

version: '2'
services:
  etcd:
    image: gcr.io/google_containers/etcd:2.2.1
    container_name: etcd
    network_mode: host
    command: ['/usr/local/bin/etcd', '--listen-client-urls=http://127.0.0.1:4001,http://172.17.10.101:4001', '--advertise-client-urls=http://172.17.10.101:4001', '--listen-peer-urls=http://127.0.0.1:2380', '--data-dir=/var/etcd/data']
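Once this stack is up on the master you can probe etcd directly; a minimal check against the client port (etcd 2.x exposes /health and /version endpoints):

# Should return {"health": "true"} when the key/value store is ready
curl http://172.17.10.101:4001/health

# Confirm the etcd version the engines will talk to
curl http://172.17.10.101:4001/version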

The second is interlock.yml.erb

version: '2'
services:

  interlock:
    image: ehazlett/interlock:master
    container_name: interlock
    command: -D run
    tty: true
    ports:
      - 8080
    environment:
      INTERLOCK_CONFIG: |
        ListenAddr = ":8080"
        DockerURL = "https://172.17.10.101"
        TLSCACert = "/certs/ca.pem"
        TLSCert = "/certs/cert.pem"
        TLSKey = "/certs/key.pem"
        [[Extensions]]
        Name = "nginx"
        ConfigPath = "/etc/nginx/nginx.conf"
        PidPath = "/etc/nginx/nginx.pid"
        MaxConn = 1024
        Port = 80
    volumes:
      - /etc/docker:/certs
    restart: always

  nginx:
    image: nginx:latest
    container_name: nginx
    entrypoint: nginx
    command: -g "daemon off;" -c /etc/nginx/nginx.conf
    ports:
      - 80:80
    environment:
      - "constraint:node==ucp-03"
    labels:
      - "interlock.ext.name=nginx"
    restart: always

The last is jenkins.yml.erb

version: '2'
services:

  jenkins:
    container_name: jenkins
    image: scottyc/jenkins
    restart: always
    privileged: true
    volumes:
      - /opts/jenkins/jobs:/home/jenkins/jobs
    ports:
      - "8080"
    labels:
      - "interlock.hostname=jenkins"
      - "interlock.domain=ucp-demo.local"

OK, so we are nearly finished; we just need to add our Hiera data now. We do this by adding the following code to hieradata/global.yaml

docker::version: 1.10.3-0~trusty
ucpconfig::ucp_master: ucp-01
ucpconfig::ucp_deploy_node: ucp-03
ucpconfig::ucp_url: https://172.17.10.101
ucpconfig::ucp_username: admin
ucpconfig::ucp_password: orca
ucpconfig::ucp_version: 1.1.1
ucpconfig::ucp_host_address: "%{::ipaddress_eth1}"
ucpconfig::ucp_subject_alternative_names: "%{::ipaddress_eth0}"
ucpconfig::ucp_external_ca: false
ucpconfig::ucp_swarm_scheduler: spread
ucpconfig::ucp_swarm_port: 2375
ucpconfig::ucp_controller_port: 443
ucpconfig::ucp_preserve_certs: true
ucpconfig::ucp_license_file: /etc/docker/subscription.lic
ucpconfig::docker_network: swarm-private
ucpconfig::docker_network_driver: overlay
ucpconfig::docker_cert_path: /etc/docker
ucpconfig::docker_host: tcp://172.17.10.101:443
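In a real enterprise deployment you would not leave the admin password sitting in plain text; a hedged sketch of one option, encrypting the value with hiera-eyaml (not wired into this demo):

gem install hiera-eyaml
# Generate the PKCS7 key pair eyaml encrypts against
eyaml createkeys
# Produces an ENC[PKCS7,...] block you can paste into global.yaml
eyaml encrypt -s 'orca'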

As you can see, our Hieradata contains sensitive information such as passwords. The last thing that we need to do is add our three node definitions. So add the following code to manifests/default.pp

node 'ucp-01' {
  include ucpconfig
}

node 'ucp-02' {
  include ucpconfig
}

node 'ucp-03' {
  include ucpconfig
}

Now all our code is written we can build our CaaS, so open a terminal, change directory to the root of the Vagrant repo and issue the command vagrant up. Give it some time to build; once the node ucp-01 is up you can log in to the UCP console via https://172.17.10.101/ with the username admin and the password orca.
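For reference, the build and a couple of useful follow-up commands:

cd vagrant-template
vagrant up

# Check all three machines came up
vagrant status

# Re-run provisioning on a single box if a Puppet run failed
vagrant provision ucp-01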

So this is the end of part one of this blog series; we have built the platform of our CaaS with our 'sysadmin' hat on. In the next blog post we will put our 'dev' hat on and consume the CaaS. Until that post is finished, log in to the CaaS and play around. If you want a full copy of the code from this blog, head to my GitHub page and clone the following repo https://github.com/scotty-c/ucp-dockercon.git

scottyc

Linux geek, Docker Captain and Retro Gamer
