<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>docker | FLRNKS</title><link>https://flrnks.netlify.app/tag/docker/</link><atom:link href="https://flrnks.netlify.app/tag/docker/index.xml" rel="self" type="application/rss+xml"/><description>docker</description><generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language><copyright>© 2024</copyright><lastBuildDate>Fri, 13 Dec 2019 11:11:00 +0000</lastBuildDate><image><url>https://flrnks.netlify.app/images/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_2.png</url><title>docker</title><link>https://flrnks.netlify.app/tag/docker/</link></image><item><title>Docker with Ansible</title><link>https://flrnks.netlify.app/post/ansible-docker/</link><pubDate>Fri, 13 Dec 2019 11:11:00 +0000</pubDate><guid>https://flrnks.netlify.app/post/ansible-docker/</guid><description>&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>This post was written as a kind of learning diary for my most recent venture into the world of automation through &lt;code>Ansible&lt;/code>. The project I implemented uses Docker to package two services into a microservice architecture and Ansible to build and deploy those services on remote hosts (with the help of Docker Compose).&lt;/p>
&lt;h3 id="the-idea">The Idea&lt;/h3>
&lt;p>The service implements a file-processing utility that monitors the file-system (a particular folder), grabs any newly created files, compresses them and stores them in another folder. Interaction with the service happens through a web interface, which offers file uploads, simple statistics and the possibility to request email summaries.&lt;/p>
&lt;h3 id="the-approach">The Approach&lt;/h3>
&lt;p>The first idea was to write it all in Go, because I am quite comfortable with the language. However, after a few searches on the interweb, I discovered that a handy UNIX facility already exists for my exact use-case: &lt;code>inotify&lt;/code>. While Go has some packages that offer wrappers around it, I eventually decided to just write a bash script around the &lt;code>inotify&lt;/code> tooling, instead of relying on Go for all parts of the service. This also gave me a convenient excuse to split the service into a two-piece set, both parts of which can be deployed and scaled independently, in the spirit of microservice architecture. Next, I set out to learn enough Ansible to deploy the service packaged in Docker containers.&lt;/p>
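&lt;p>The heart of the monitoring script is simple: react to each new file, compress it into another folder, and remove the original. Below is a rough sketch of that processing step; folder and file names are illustrative, and the actual &lt;code>inotifywait&lt;/code> event loop is shown only as a comment so the snippet runs to completion on its own:&lt;/p>

```shell
# Sketch of the monitor's processing step (names are illustrative).
# In the real service this loop is driven by inotify events, roughly:
#   inotifywait -m -e close_write --format '%w%f' incoming | while read -r f; do ... done
mkdir -p incoming archive
echo "sample content" > incoming/sample.txt

for f in incoming/*; do
  # Compress into the archive folder, then remove the original file.
  gzip -c "$f" > "archive/$(basename "$f").gz"
  rm "$f"
done

ls archive
```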
&lt;h2 id="ansible-101">ANSIBLE 101&lt;/h2>
&lt;p>Before this project, I never had the chance to use Ansible, but I had wanted to learn about it for quite a while, so here I will describe it briefly for those who are also at the start of their journey with Ansible.&lt;/p>
&lt;p>At the basic level, it is a tool for provisioning and configuring applications on remote systems in an automated fashion. To achieve this automation it uses so-called &lt;code>playbooks&lt;/code>, which define the steps necessary to reach a desired state on remote systems. It runs mainly on UNIX systems, but is able to provision and configure both UNIX and Windows based systems.&lt;/p>
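&lt;p>To give a feel for what a playbook looks like, here is a minimal, hypothetical example (host group, package and module choice are all illustrative) that brings a group of hosts to a state where a package is installed:&lt;/p>

```yml
- name: Ensure nginx is present on all web hosts
  hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
```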
&lt;p>It is an &lt;code>agentless&lt;/code> tool, which means it does not require any special software to be installed on the remote hosts. Instead it relies on SSH (or, for Windows hosts, WinRM) connections, through which bash or PowerShell utilities carry out the necessary steps.&lt;/p>
&lt;p>Ansible uses an &lt;code>inventory&lt;/code> that describes the remote systems that can be provisioned through the playbooks. Inventories can be defined statically in the local filesystem of the Ansible master node, or pulled dynamically from remote systems as well.&lt;/p>
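&lt;p>A static inventory can be as simple as an INI-style file that lists hosts by group; the host names and addresses below are made up for illustration:&lt;/p>

```ini
[webservers]
slave1 ansible_host=192.168.56.11
slave2 ansible_host=192.168.56.12

[webservers:vars]
ansible_user=ubuntu
```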
&lt;h2 id="ansible-meets-docker">ANSIBLE MEETS DOCKER&lt;/h2>
&lt;p>For the purposes of this project, the main use of Ansible lies in its ability to build and run Docker containers. While Docker is not strictly needed to deploy this service on multiple remote hosts, deployment becomes much easier when all the necessary dependencies and the source code are packaged neatly in a container that can be easily shipped. Within the Docker container, all dependencies are set up and the service is configured in a reliable and consistent manner, while Ansible takes care of deploying and running the service.&lt;/p>
&lt;p>It is worth mentioning that other tools exist, such as Kubernetes, Docker Swarm and others, which focus more on shipping containerised applications. This blog post, however, will not deal with those, but will focus entirely on Ansible and Docker instead. Future posts may discuss those alternatives in more detail.&lt;/p>
&lt;p>Below is a brief summary of the proposed architecture that depicts how Ansible and Docker are used together to achieve the desired state of deploying the containerised service on each Ansible host.&lt;/p>
&lt;p>&lt;img src="ansib-meets-dock.png" alt="Ansible meets Docker">&lt;/p>
&lt;p>Detailed instructions are out of scope for this post as well, but briefly: the above shows a snapshot of my local environment using virtual machines in VirtualBox. First, I created a master VM with Ubuntu Desktop and then two slave VMs with Ubuntu Server (no GUI necessary). Ansible was installed on the master node and proper SSH access to both slave VMs was configured from the master VM. In the Ansible playbook used to deploy the service on the remote systems, the first few tasks install the necessary dependencies and set up a local Docker environment, which can later build and run containerised applications.&lt;/p>
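&lt;p>As a rough sketch, those first tasks looked something along these lines; the exact package names are an assumption based on Ubuntu, and installing the Python Docker bindings via &lt;code>pip&lt;/code> is one of several ways to satisfy the requirements of Ansible&amp;rsquo;s Docker modules:&lt;/p>

```yml
- name: Install Docker and Docker Compose packages
  apt:
    name:
      - docker.io
      - docker-compose
    state: present
    update_cache: yes

- name: Install the Python Docker SDK used by the docker_compose module
  pip:
    name:
      - docker
      - docker-compose
```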
&lt;h2 id="monolithic-vs-microservice">MONOLITHIC VS MICROSERVICE&lt;/h2>
&lt;p>Before discussing how Ansible was used to deploy the service on remote machines using Docker, it is worth going through the building blocks of the service itself. The service needs the following features:&lt;/p>
&lt;ul>
&lt;li>file monitoring service that grabs and compresses files&lt;/li>
&lt;li>web interface for file uploads, email sending and service stats&lt;/li>
&lt;/ul>
&lt;p>These features could be implemented in one application that runs all the necessary functions in parallel. In fact, in my first iteration, I opted to solve it this way, packaging all features into a single container. The figure below shows how it worked.&lt;/p>
&lt;p>&lt;img src="monolithic.png" alt="Monolithic Docker">&lt;/p>
&lt;p>However, for the sake of learning, it is worth considering a &lt;code>microservice&lt;/code> approach. This essentially means breaking up big &lt;code>monolithic&lt;/code> applications into smaller sub-components, and Docker is a perfect tool for it. For our purposes, such an architecture means deploying two separate containers: one for the web UI backend (uploads, statistics and email) and another implementing the monitoring and compression service. Below is an updated figure showing the breakup of the previously monolithic approach.&lt;/p>
&lt;p>&lt;img src="microservice.png" alt="Microservice Docker">&lt;/p>
&lt;p>Breaking up the single container from the first iteration into two separate containers lets us reap some benefits of microservice architecture. The application components can fail independently: for example, a bug in the email sending code will not bring down the monitoring service. Such an architecture also means we can scale better with demand in the future; if there were a huge surge in requests to the web frontend, we could simply deploy more instances of that container and use a load balancer to distribute requests among them.&lt;/p>
&lt;h2 id="implementation">IMPLEMENTATION&lt;/h2>
&lt;p>To implement the web component, I used simple static HTML served from a &lt;code>Go&lt;/code> backend, which also handled file uploads, sent email notifications and extracted statistical data from a shared SQLite3 database. To implement the file monitoring service, I used the &lt;code>inotify-tools&lt;/code> available on UNIX systems, wrapped in a bash script that took care of the zipping and of generating logs and statistics in the SQLite3 database.&lt;/p>
&lt;h3 id="docker-compose">Docker-Compose&lt;/h3>
&lt;p>Docker Compose was used to enable easier testing and deployment. The definitions in the &lt;code>docker-compose.yml&lt;/code> describe which Docker containers should be started and with what parameters. The two services defined there correspond to the two containers described above in the microservice architecture.&lt;/p>
&lt;p>The &lt;code>webserver&lt;/code> running the Go backend uses a few mounted folders plus an exposed port to let inbound communication reach the server. The &lt;code>monitor&lt;/code> uses four folders mounted from the host FS, which enable its core functionality (listening for files and zipping them into a different folder).&lt;/p>
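&lt;p>A trimmed-down version of such a &lt;code>docker-compose.yml&lt;/code> could look like the sketch below; the service names match the description above, but the port number, build contexts and mount paths are illustrative assumptions:&lt;/p>

```yml
version: "3"
services:
  webserver:
    build: ./webserver
    ports:
      - "8080:8080"        # inbound HTTP to the Go backend
    volumes:
      - ./data/incoming:/data/incoming
      - ./data/db:/data/db
  monitor:
    build: ./monitor
    volumes:
      - ./data/incoming:/data/incoming
      - ./data/archive:/data/archive
      - ./data/logs:/data/logs
      - ./data/db:/data/db
```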
&lt;h3 id="ansible">Ansible&lt;/h3>
&lt;p>Thanks to Docker Compose, it was relatively simple to deploy and run the service with Ansible once the necessary packages and dependencies were installed on the Ansible hosts. All it took was a simple Ansible task using the &lt;code>docker_compose&lt;/code> module:&lt;/p>
&lt;div class="highlight">&lt;pre class="chroma">&lt;code class="language-yml" data-lang="yml">- &lt;span class="k">name&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="w"> &lt;/span>Docker-Compose&lt;span class="w"> &lt;/span>UP&lt;span class="w">
&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="k">docker_compose&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="w">
&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="k">project_src&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="w"> &lt;/span>path_to_docker_compose_yml&lt;span class="w">
&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="k">build&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="w"> &lt;/span>yes&lt;span class="w">
&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>While testing the service, a few issues were discovered that could be considered bugs, but instead let&amp;rsquo;s call them features!&lt;/p>
&lt;h3 id="feature-1">Feature #1&lt;/h3>
&lt;p>Since the service lets users upload files, if a file is large enough, the processing may sometimes kick in before the upload completes. In this case, the file may be corrupted and impossible to recover after unzipping. To mitigate this to a certain extent, a 5 second processing delay has been added to the &lt;code>monitor_service.sh&lt;/code> script, in the hope that during those 5 seconds the upload finishes.&lt;/p>
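&lt;p>In the script, the mitigation amounts to a single &lt;code>sleep&lt;/code> before the compression step, roughly as sketched below (function and folder names are illustrative, not the actual contents of &lt;code>monitor_service.sh&lt;/code>):&lt;/p>

```shell
# Illustrative sketch of the delayed processing step.
process_file() {
  sleep 5   # crude mitigation: give an in-flight upload time to finish
  gzip -c "$1" > "archive/$(basename "$1").gz"
  rm "$1"
}

mkdir -p incoming archive
echo "payload" > incoming/upload.bin
process_file incoming/upload.bin
```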
&lt;h3 id="feature-2">Feature #2&lt;/h3>
&lt;p>While creating the two Dockerfiles describing each component of the service, I wanted to take an extra step and create a non-root user, so that the main process of each service runs as a user without full root access. This worked well while developing and testing on a local system via manual &lt;code>docker-compose up/down&lt;/code> commands. However, once Ansible was updated to run Docker Compose via the &lt;code>docker_compose&lt;/code> module, certain functionality broke due to file and folder permission issues. Essentially, the mounted folders belonged to root while the running process was non-root, so it could not, for example, save uploaded files. Further investigation is needed to solve this; until then, the Dockerfiles have been reverted to start the main processes as root.&lt;/p>
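&lt;p>For reference, the non-root setup in a Dockerfile boils down to a few lines like the hypothetical sketch below (base image, user name and paths are illustrative); the catch described above is that the volumes mounted at run time remain owned by root on the host, which the unprivileged user cannot write to:&lt;/p>

```dockerfile
FROM golang:1.13
RUN useradd --create-home appuser
WORKDIR /home/appuser/app
COPY . .
RUN go build -o server .
# Switch away from root before starting the main process.
USER appuser
CMD ["./server"]
```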
&lt;h2 id="conclusion">CONCLUSION&lt;/h2>
&lt;p>All in all, working on this project has been a great opportunity to practice tools such as Docker, Docker Compose and Ansible. While I had used Docker briefly before, I had never used Ansible, and I learnt a great deal about it during this project. I can definitely see how it enables large organisations to streamline their processes when it comes to deploying and configuring various systems and services in their infrastructure. While this project is rather rudimentary, it gave me a good entry point into this realm of IT.&lt;/p></description></item></channel></rss>