Here is an article we wrote that provides a few helpful tips for Independent Consultants on how to position themselves effectively to land their next gig.
How to Get Out of Your Own Way in 3 Steps
Let’s discuss three key strategies for avoiding founder bias, ensuring business success, and sidestepping common pitfalls.
Upgrading Docker CE to EE for the Impatient | Part II
This is the second article in our series of posts where we demonstrate how you can upgrade your existing Docker CE environment to Docker EE without having to redeploy running services and applications.
Upgrading Docker CE to EE for the Impatient | Part I
In this series of posts, we’ll demonstrate how you can upgrade your existing Docker CE environment to Docker EE without having to redeploy running services and applications. We’ll start with a set of servers running the Docker CE engine in a Docker Swarm, upgrade the engine to Docker EE, then install UCP in Part II and implement DTR in Part III; these are the two major enterprise tools built on top of the Docker EE Engine.
3 Key Professional Relationship Lessons for Independent Consultants
Many consultants who are highly skilled in a specific technical domain can find it challenging to secure meaningful work if they are not also adept in growing their professional network.
3 Problems IT Leaders Solve with Container Technology
Many IT leaders, especially in larger enterprise organizations, are still struggling to define the value and return on investment of container technology.
Stone Door Group® Releases Ansible Migration Accelerator(SM) to Automate and Consolidate IT Infrastructure
New Ansible Accelerator solution transforms silos of automation tools into one powerful interface using one industry-standard language.
3 Reasons to Upgrade from Docker Community Edition to Enterprise Edition
Organizations that are using Docker CE are now trying to figure out how to scale their Docker environments to meet the security and compliance requirements of enterprise production. In this article, we’ll look at three reasons why Docker EE is a natural upgrade path from CE. We have also written a three-part tutorial series for administrators to implement a Docker EE upgrade.
3 Reasons Why Ansible is Replacing Homegrown Automation
Let’s discuss three challenges with taking a homegrown approach to automation, and ways to get you strategically leveraging IT automation to support the needs of the business for years to come.
Top 3 Financial Considerations for Independent Consultants
For many skilled IT professionals who decide to go it alone, the first few years can involve a steep learning curve.
The Top 3 Considerations when becoming an Independent IT Consultant
If you’re considering making the move to become an independent IT consultant, here are some important considerations to keep in mind before you make the switch.
A Practical Blockchain Example for Supply Chain | Part II
We continue with our case study of Main Street Hospital and their implementation of Blockchain to more accurately track drug shipments from manufacturers.
A Practical Blockchain Example for Supply Chain | Part I
In this article, we will introduce our case study of Main Street Hospital and their implementation of Blockchain to more accurately track drug shipments from manufacturers.
CI/CD, Jenkins, Containers, and Microservices | A Hands On Primer | Part III
This is Part 3 of a three-part series on Jenkins, a popular automation tool that can unlock the power of CI/CD and DevOps workflows for the small/medium business, or for large enterprises.
CI/CD, Jenkins, Containers, and Microservices | A Hands On Primer | Part II
This is the second of a three-part series on getting Jenkins installed and contributing to the workflow of your organization. Now we’ll look at the steps required to set up a rudimentary build pipeline, which is a popular use case for Jenkins.
CI/CD, Jenkins, Containers, and Microservices | A Hands On Primer
This post is the first in a three-part series on Continuous Integration and Continuous Deployment with Jenkins, containers, and microservices, covering installation, general configuration that works for most use cases, and finally some advanced techniques that demonstrate some of the possibilities Jenkins provides for an enterprise CI/CD environment.
Google Cloud Architecture for the Impatient | Part III
In the final part of this series, we discuss the fully managed services Google provides, which allow you to get work done without ever being concerned with infrastructure.
Google Cloud Architecture for the Impatient | Part II
This is Part 2 of a three-part primer for IT professionals in a hurry. We will be discussing the minimum products and requirements for architecting on Google Cloud Platform.
In the first part of this series, we discussed a few of the infrastructure components available in GCP. Now, let’s discuss some of the products that augment this infrastructure. These solutions act as force-multipliers, which allow you to get more done with fewer resources and systems administrators.
Scaling and High-Availability
Most enterprise-level applications need multiple copies of a service running concurrently. This may be to handle more workload by simply running more instances of the service (horizontal scaling), as opposed to vertical scaling, which increases the vCPUs or RAM of a single server. Having multiple copies of a service in separate locations is also valuable in case one copy crashes or an accident destroys a physical host. Scaling in a highly available manner, where the end user experiences no interruption if one instance goes down, requires a single endpoint that redirects user traffic to healthy instances. GCP provides tools to automatically scale the number of instances of a service and to direct traffic across them.
Instance Groups
Instance groups allow you to bundle your VMs together for load-balancing and manageability purposes. They can also be configured with an instance template that allows the instance group to automatically create additional instances if demand increases (autoscaling) or an existing VM crashes (autohealing).
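To make this concrete, here is a minimal sketch using the gcloud CLI; the template name, group name, machine type, and zone below are illustrative placeholders, not recommendations:
# Define what each VM in the group should look like
$ gcloud compute instance-templates create web-template \
    --machine-type=n1-standard-1 \
    --image-family=debian-9 --image-project=debian-cloud
# Create a managed instance group of two VMs stamped from that template
$ gcloud compute instance-groups managed create web-group \
    --zone=us-central1-a --template=web-template --size=2
The same group can later be given autoscaling and autohealing policies, as described below.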
Load Balancers
If a natural disaster disrupts a data center or a hardware failure fries a rack of servers, a load-balancer can detect any unhealthy or absent VMs and direct customer traffic to a healthy instance without intervention. Since GCP networks operate at a global scale, this could mean the users that are normally directed to the Sydney data center are temporarily directed to Singapore instead. GCP makes this automatic via anycast IPs that route a user to the nearest VM which can satisfy the request. This means a cloud architect no longer needs to design a routing solution to handle users from different regions. For example, instead of having users in Australia visit www.example.com.au, there can be a single domain such as www.example.com that all users in the world can use.
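As a rough sketch (the resource names are hypothetical, and details such as named ports and firewall rules are omitted), a global HTTP load balancer in front of the instance group created above is assembled from a chain of resources:
# Health check used to decide which instances receive traffic
$ gcloud compute health-checks create http web-check --port=80
# Backend service that watches the health check and holds the instance group
$ gcloud compute backend-services create web-backend \
    --protocol=HTTP --health-checks=web-check --global
$ gcloud compute backend-services add-backend web-backend \
    --instance-group=web-group --instance-group-zone=us-central1-a --global
# URL map, proxy, and forwarding rule expose a single global anycast IP
$ gcloud compute url-maps create web-map --default-service=web-backend
$ gcloud compute target-http-proxies create web-proxy --url-map=web-map
$ gcloud compute forwarding-rules create web-rule \
    --global --target-http-proxy=web-proxy --ports=80
The forwarding rule’s anycast IP is the single endpoint all users hit, wherever they are in the world.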
Auto-Scaling
An instance group can be set to automatically scale based on metrics such as CPU usage or the number of connections. This is enabled simply by checking a box to turn on auto-scaling and providing target values for the chosen metrics. This alleviates the need for a systems administrator to wake up to a late-night page in order to provision another machine to handle an unexpected increase in workload.
The autoscaler will also delete VMs when workload decreases. This can yield significant savings for applications with long lulls in workload, as you will not pay for VMs to sit idle. A common case is a website targeted at a particular region, say a state government website, where most traffic occurs during waking hours in that region.
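Continuing the hypothetical example above, attaching a CPU-based autoscaling policy to the instance group is a single command (the 60% target and replica bounds are arbitrary illustrations):
$ gcloud compute instance-groups managed set-autoscaling web-group \
    --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=10 \
    --target-cpu-utilization=0.60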
Automated Infrastructure
Every GCP product exposes a REST API, allowing provisioning, maintenance, and monitoring tasks to be automated. These APIs can be explored via the APIs Explorer.
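For example, listing the Compute Engine instances in a project is a single authenticated HTTP call (PROJECT_ID and the zone are placeholders; gcloud is used here only to mint an access token):
$ curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-central1-a/instances"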
Deployment Manager
Deployment Manager is a hosted tool that allows you to define the entire infrastructure needed by your application in template files. This allows you to version control your infrastructure definition. More importantly, it enables exact clones of your infrastructure to be deployed multiple times. There are numerous other benefits to defining your Infrastructure-as-Code.
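As a minimal sketch, a Deployment Manager configuration is a YAML file describing resources; the names, zone, and image below are illustrative placeholders:
# config.yaml
resources:
- name: web-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/f1-micro
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: global/networks/default
Deploying it (and deploying exact clones again later) is one command:
$ gcloud deployment-manager deployments create my-deployment --config config.yaml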
Perhaps you would like multiple environments, such as Development, Staging, and Production, for your application in order to promote the latest versions of your application code. Deployment Manager allows you to define the infrastructure once but deploy to each of these environments. Your workflow could be to promote an application version from one environment to the next.
First, deploy both the infrastructure and application to Development whenever there is a new version. Then, when that version has been tested and blessed, deploy the same version of the infrastructure and application to Staging. Finally, deploy the version to Production. It is likely that any infrastructure misconfigurations will be identified in the earlier environments as each environment will be deploying near identical infrastructure.
The practice of defining your infrastructure in this way has many other benefits as well. Instead of filing complicated change management requests when a new piece of infrastructure is required or a configuration needs to change, developers, administrators, or operators can make the change themselves in the code repository that defines the infrastructure. These changes can then be peer-reviewed and merged, and the infrastructure updated automatically. This also promotes coordination between development and operations (DevOps) by removing barriers between the teams and their processes.
Cloud Launcher
Cloud Launcher builds on Deployment Manager by allowing various third parties to upload their infrastructure definitions to a kind of marketplace. For example, you can spin up an entire WordPress site by clicking a button in Cloud Launcher. This will provision the required VMs and storage and then configure the software, all of which is defined in a Deployment Manager template for you.
While deploying to the cloud removes the burden of managing hardware, these automation tools further simplify the initial and ongoing management of infrastructure. Google takes this yet another step forward by providing a number of Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) solutions that can ease the responsibility of the systems administrator and give more power to the developer.
In the final part of this series, we will discuss the fully managed services Google provides, which allow you to get work done without ever being concerned with infrastructure.
About the Author
John Libertine is a Google Certified Architect and VMware Certified Professional who specializes in hybrid cloud infrastructure consulting and training for Stone Door Group, a DevOps solutions integrator that helps companies execute on their digital transformation initiatives. To learn more, drop us a line at letsdothis@stonedoorgroup.com.
Google Cloud Architecture for the Impatient | Part I
This is Part 1 of a three-part primer for IT professionals in a hurry. We will be discussing the minimum products and requirements for architecting on Google Cloud Platform.
Build a Web Server Cluster using Docker, Linux and Windows in 1 Hour
The Docker platform has moved forward fast since Docker and Microsoft announced late last year that it would run on the Windows platform, particularly Windows Server 2016. According to a recent InfoWorld article, Docker now runs Linux and Windows in the same cluster. Additionally, the release of the 17.06 Docker Engine has moved us closer to the multi-platform Docker swarm reality than ever before.
So how does this work in practice? This article walks you through the process of building a Docker Swarm based web service that contains both Linux and Windows server containers. This can be done in about an hour.
The Basics
When containers stormed onto the scene a few years ago, to be honest, I was not impressed. I recall thinking: “We’ve had Linux namespaces, network isolation, file system isolation, and the like for years. Big deal!” However, the genius of the container world, with Docker top of mind in this space, is that a nice wrapper placed around these technologies makes them accessible as a group, programmable, and scriptable. Now, creating isolated processes is a simple call to an API; no master Linux/UNIX skills needed. Fast forward five years and look what Docker and containers have done to the industry. Wow!
Containers at a rudimentary level can be thought of as “encapsulated processes or applications.” Everything needed for an application or service to run is packaged into one running process on a server. Deploying a container is literally as fast as starting a new process (if the image used to launch the container is already cached), and you can achieve remarkable container density compared to their older cousin, the virtual machine. For more on what containers are and the images used to build them, see https://www.docker.com/what-container.
With roots in the Linux world, Linux containers have dominated the scene, and still do today. However, Microsoft is now playing along. It is clear the hatchet has been buried because you can now run the same application on both platforms at the same time in a Docker Swarm! For those of us who have been around the block, we never thought this day would come.
Curious? Here is a quick tutorial on how to run an Nginx and IIS cluster in a Docker Swarm using both Windows- and Linux-based containers.
Prerequisites
For this walk-through, you’ll need access to at least one Linux server or VM that supports Docker (I used Ubuntu 16.04) and one Windows 2016 server or VM running on the same network. I used Google Cloud Platform to host mine, but you can choose whatever you like, even VMs on your local system.
My original, never-used-before hostnames for this walkthrough were windows-1 and ubuntu-1.
In addition, you must use the EE version of the Docker engine on Windows, so I opted to use it on both platforms to keep things as closely aligned as possible. This means you’ll need a license, which you can get for free (for a 30-day trial) at https://store.docker.com/editions/enterprise/docker-ee-trial.
After you have signed up, there is a link for setup instructions; when you click on it, you will find a URL that is unique to your trial. Capture it, as you’ll need it later.
Install Docker
To start, install the Docker engine on each system to be used. Since I am not a fan of opening multiple browser tabs to get a task done, I’ll summarize the steps here, but you can refer to this link for Ubuntu Linux (other derivatives are in the navigation menu on the left) and this one for Windows for more information.
Ubuntu 16.04
1. Remove older versions of Docker, if necessary:
$ sudo apt-get remove docker docker-engine docker-ce docker.io
2. Prepare the prerequisite packages and grab the Docker GPG key:
$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
$ curl -fsSL <DOCKER-EE-URL>/ubuntu/gpg | sudo apt-key add -
The key should be DD91 1E99 5A64 A202 E859 07D6 BC14 F10B 6D08 5F96, confirmed by running:
$ apt-key fingerprint 6D085F96
3. Add the repository, replacing <DOCKER-EE-URL> with the URL you grabbed from your Docker trial above:
$ sudo add-apt-repository \
   "deb [arch=amd64] <DOCKER-EE-URL>/ubuntu \
   $(lsb_release -cs) \
   stable-17.06"
4. Finally, install Docker:
$ sudo apt-get update
$ sudo apt-get install docker-ee
5. Test Docker. Right now, Docker is only available to the root user; you can add your ID to the docker group to gain access (see the note after this step’s output). For this guide, we’ll simply run as root:
$ sudo su -
# docker container run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b04784fba78d: Pull complete
Digest: sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d7…
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
<snip>
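As promised above, if you would rather not work as root, the standard approach is to add your user to the docker group:
$ sudo usermod -aG docker $USER
Log out and back in for the group membership to take effect; docker commands will then work without sudo.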
Windows Server 2016
1. Open an elevated (Administrator) PowerShell command prompt, and run the commands:
PS> Install-Module -Name DockerMsftProvider -Force
PS> Unregister-PackageSource -ProviderName DockerMsftProvider -Name DockerDefault -ErrorAction Ignore
PS> Register-PackageSource -ProviderName DockerMsftProvider -Name Docker -Location https://download.docker.com/components/engine/windows-server/index.json
PS> Install-Package -Name docker -ProviderName DockerMsftProvider -Source Docker -Force
PS> Restart-Computer -Force
2. After the reboot, confirm the Docker service is running; if not, start it manually.
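One quick way to do both, using the standard PowerShell service cmdlets:
PS> Get-Service docker
PS> Start-Service docker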
3. Open an elevated (Administrator) PowerShell command prompt again and test your installation:
PS> docker container run hello-world:nanoserver
Unable to find image 'hello-world:nanoserver' locally
nanoserver: Pulling from library/hello-world
bce2fbc256ea: Pull complete
3ac17e2e6106: Pull complete
8cac44e17f16: Pull complete
5e160e4d8db3: Pull complete
Digest: sha256:25eac12ba40f7591969085ab3fb9772e8a4307553c14ea72d0e…
Status: Downloaded newer image for hello-world:nanoserver
Hello from Docker!
This message shows that your installation appears to be working correctly.
<snip>
4. To save time later, go ahead and download (pull) the image we will use onto the Windows server. The Microsoft images are much larger than most Linux images, so this pull can take a while:
PS> docker pull microsoft/iis
Create the Swarm
You can create a swarm from either platform, provided you correctly copy the join command presented on one platform to the other, as shown below. In my testing, I discovered that if I created the swarm on a Linux server, the Windows Docker engine could not join successfully: the line wrap in the Linux terminal carried over as a carriage return when the command was pasted into the PowerShell prompt. If I removed the line break, it worked. Be sure you get the full command on a single line, or you will be misled as I was in my testing.
Windows Server 2016
Open another elevated prompt as before (in all probability, your first one is still pulling that image), and issue the command to create the swarm. You’ll receive output containing the command for other nodes to use to join; copy it for use on the Linux server. The --advertise-addr flag is needed if your system is multi-homed; it tells the Docker swarm which IP to listen on. Since I was on Google Cloud, I used the internal IP interface to keep things simple:
PS> docker swarm init --advertise-addr 10.128.0.2
Swarm initialized: current node (xyz) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4xdjyknpemyepex3pydc5cduoxnlpy5jdgz0hy1ovr6dtnmf2u-37x1pwimpo079uixsdtpn2o1d 10.128.0.2:2377
Ubuntu 16.04
Join the swarm by running the docker swarm join command that was presented above on the Linux server. If you lost that output, you can run docker swarm join-token worker on the Windows node to see the command again.
Windows Server 2016
One last time you will return to the Windows server. Since I am more comfortable in the Linux world, and there are tools there that are not available by default on Windows (grep, for example), I promoted my Linux node to a manager. A manager node can run all swarm-related commands; worker nodes cannot.
1. List the nodes (again, note the original hostnames of my VMs):
PS> docker node ls
ID      HOSTNAME   STATUS  AVAILABILITY  MANAGER STATUS
ajat…   windows-1  Ready   Active        Leader
qrcp…   ubuntu-1   Ready   Active        Reachable
2. Promote the Linux node to a manager:
PS> docker node promote ubuntu-1
Now, we can do all we need to do from our Linux server, which will be the assumption moving forward.
Label the Nodes
Labeling nodes in a swarm is a key feature in the Docker engine. These labels allow you to designate affinity when deploying a service or container, putting the containers for a given service on a particular subset of nodes in the swarm. For example, you could have nodes with faster storage attached; labeling the nodes with an indicator of this enables you to direct I/O-intensive containers to those nodes if capacity exists there.
For our purposes, we need to direct Windows or nano-based containers to the Windows node and Linux containers to the Linux node:
# docker node update --label-add windows windows-1
# docker node update --label-add linux ubuntu-1
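If you want to verify a label took effect, docker node inspect with a Go-template format string is one way to pull it out of the node’s spec:
# docker node inspect --format '{{ .Spec.Labels }}' ubuntu-1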
Deploy Two Services into the Swarm
Hopefully, during all this extra work, the docker pull command has been merrily cranking along, grabbing the IIS image for our use. The second service will not deploy until that image is available, so be patient and let it finish.
1. Launch a Linux-based service, mapping port 8000 to 80, and affiliating the service to run on the Linux node:
# docker service create --name nginx-linux --replicas=2 \
  --publish 8000:80 --placement-pref spread=node.labels.linux nginx
2. Examine the service and note that the 2 containers are on the same node (which is not the default behavior) because of our label usage:
# docker service ls
ID        NAME         MODE        REPLICAS  IMAGE         PORTS
ullugr7…  nginx-linux  replicated  2/2       nginx:latest  *:8000->80/tcp
# docker service ps nginx-linux
ID      NAME           IMAGE         NODE      DESIRED STATE  CURRENT STATE …
sheq4…  nginx-linux.1  nginx:latest  ubuntu-1  Running        Running 2 minu…
s19ym…  nginx-linux.2  nginx:latest  ubuntu-1  Running        Running 2 minu…
3. Confirm the operation — you should get back the default NGINX landing page:
# curl localhost:8000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<snip>
The swarm routing mesh took care of bringing back the response from either of the 2 containers.
4. Now we repeat the process, but this time launching a Windows-based service:
# docker service create --name iis-windows --replicas=2 \
  --publish 80:80 --placement-pref spread=node.labels.windows microsoft/iis
5. Examine the service and note that the 2 Windows containers are on the Windows node:
# docker service ls
ID        NAME         MODE        REPLICAS  IMAGE         PORTS
ullugr7…  nginx-linux  replicated  2/2       nginx:latest  *:8000->80/tcp
5tc80pl…  iis-windows  replicated  2/2       microsoft/i…  *:80->80/tcp
# docker service ps iis-windows
ID      NAME           IMAGE        NODE       DESIRED STATE  CURRENT STATE …
ikhas…  iis-windows.1  microsoft/…  windows-1  Running        Running 2 minu…
xact2…  iis-windows.2  microsoft/…  windows-1  Running        Running 2 minu…
6. Confirm the operation — you should get back the default IIS landing page:
# curl localhost:80
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" …
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8…
<title>IIS Windows Server</title>
<snip>
There are still a few gaps in the Docker engine on Windows Server 2016. The biggest issues seem to revolve around networking to the containers. You probably noticed that I used a non-standard port (8000) for the Linux HTTP service mapping and the standard port (80) for the Windows service.
If you research this issue, you will see many comments and issues going back to 2016, and it seems they have not all been ironed out yet. When I tried to use a different port in this example, it simply did not work: the service was created and the containers were there, and if I dug up the internal IP assigned to the container on the Windows server, I could see the IIS landing page by running curl <internal IP> from the PowerShell prompt. However, the port was not mapped through the swarm unless the same port was used on both ends. A couple of links that discuss the issues are below.
Despite these shortcomings, this example still demonstrates how Windows and Linux containers can co-exist and provide a common service like a web server. Hopefully the networking issues will be addressed soon, enabling full support for such activities. In the meantime, go forth and containerize!