Friday, April 29, 2016

Learning "DOCKER"

1) Docker is an open source project that automates the deployment of applications inside Linux
    containers, and provides the capability to package an application with its runtime dependencies
    into a container.
2) It provides a Docker CLI command line tool for the life cycle management of
     image-based containers.
3) Containers are disposable, fast, ephemeral, and immutable.
4) Compared to VMs, Docker containers start in seconds.
5) Docker containers are portable across machines. We just need the Docker Engine on a machine to
    run any number of Docker containers from images.
6) Docker containers include only the minimal runtime requirements of the application, reducing
    their size and allowing them to be deployed quickly.
7) Containers can be shared with others by keeping their images in remote repositories.
8) Lightweight footprint and minimal overhead - images are typically very small, which facilitates
    rapid delivery and reduces the time to deploy new application containers.
9) Containers isolate apps from each other and from the underlying infrastructure while providing
    an added layer of protection.
10) Because containers share the host OS kernel, they start almost immediately compared to VMs;
      this is why Docker is often described as a lightweight VM.
11) Docker containers run on all major Linux distributions and on Microsoft Windows, with support
      across major infrastructure providers.
12) Containers eliminate environment inconsistencies. By packaging the application together with
      its configs and dependencies and shipping it as a container, the application will always work
      as designed - locally, on another machine, in test, or in production. No more worrying about
      having to install the same configs into a different environment.

Components in "DOCKER"
Docker works with the following fundamental components:

Container
Each container is based on an image that holds the necessary configuration data. When you launch a container from an image, a writable layer is added on top of that image. Every time you commit a container (using the docker commit command), a new image layer is added to store your changes.
Docker allows you to package an application with all of its dependencies into a standardized unit (a container) for software development.

Image
A Docker image is made up of filesystems layered over each other.
At the base is the boot filesystem (bootfs), which resembles a Linux/Unix boot filesystem. A Docker user will never interact with bootfs directly: when a container is booted, it is moved to memory, and the bootfs is unmounted to free up the RAM used by the initrd disk image.
An image is a static snapshot of a container's configuration. It is a read-only layer that is never modified; all changes are made in the top-most writable layer and can be saved only by creating a new image. Each image depends on one or more parent images.

Platform Image
An image that has no parent. Platform images define the runtime environment, packages, and utilities necessary for a containerized application to run. The platform image is read-only, so any changes are reflected in the copied images stacked on top of it. See an example of such stacking in the Figure below.

Registry
A repository of images. Registries are public or private repositories that contain images available for download. Some registries allow users to upload images to make them available to others.

Dockerfile
A configuration file with build instructions for Docker images. Dockerfiles provide a way to automate, reuse, and share build procedures.
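The build instructions above can be sketched with a minimal, hypothetical Dockerfile written out from the shell; the base image (ubuntu:16.04), the app.sh script, and the myapp:1.0 tag are assumptions for illustration:

```shell
# Create a small build context with a hypothetical Dockerfile in it.
mkdir -p /tmp/docker-demo && cd /tmp/docker-demo
cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y curl
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]
EOF
# Build an image from it (requires a running Docker daemon):
# docker build -t myapp:1.0 /tmp/docker-demo
```

Running docker build against this directory would produce a new image with one layer per instruction, stacked on top of the ubuntu platform image.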

More Information about Containers:
Linux Containers are built using core kernel technologies such as:
- Control Groups (cgroups) for resource management
- Namespaces for process isolation

Several components are needed for Linux Containers to function correctly, and
most of them are provided by the Linux kernel. Kernel namespaces ensure process isolation,
and cgroups are employed to control system resources. SELinux is used to assure separation
between the host and the container, and also between individual containers. A management
interface forms a higher layer that interacts with the aforementioned kernel components and
provides tools for the construction and management of containers.
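These kernel building blocks can be inspected directly on any Linux host: every process already runs inside a set of namespaces and cgroups, and a container is essentially a process placed in its own set.

```shell
# List the namespace handles of the current shell (pid, net, mnt, uts, ipc, ...)
ls -l /proc/self/ns
# Show which cgroups the current shell belongs to
cat /proc/self/cgroup
```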

DOCKER Commands:
docker info                      = Displays system-wide Docker information
docker -v                        = Displays the installed Docker version
docker images                    = Lists local images
docker images --no-trunc         = Lists images without truncating the image IDs
docker ps                        = Lists only running containers
docker ps -a                     = Lists all containers (including stopped containers)
docker pull imagename            = Pulls a Docker image from the central Docker Hub registry
docker run imagename             = Spins up (starts) a container
docker run -i -t ubuntu bash     = Spins up an ubuntu container and executes bash - if you
                                   run this command "n" times, "n" containers will spin up
docker kill containername | ID   = Forcefully kills a running container
docker stop containername | ID   = Gracefully stops a running container
docker rm containername | ID     = Deletes/removes a (stopped) container
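Tying the commands together, a typical container lifecycle might look like the sketch below. It assumes a running Docker daemon; the container name "demo" is an arbitrary choice.

```shell
IMAGE="ubuntu"   # the official Ubuntu image on Docker Hub
if command -v docker >/dev/null 2>&1; then
    docker pull "$IMAGE"                          # fetch the image from the registry
    docker run -d --name demo "$IMAGE" sleep 60   # start a container in the background
    docker ps                                     # "demo" shows as running
    docker stop demo                              # graceful stop (SIGTERM, then SIGKILL)
    docker ps -a                                  # "demo" now shows as Exited
    docker rm demo                                # remove the stopped container
else
    echo "docker is not installed on this host"
fi
```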

Virtual Machine Vs DOCKER

Docker Architecture

Docker Interactions

Advantages of "DOCKER"
More Info to be Added Soon

Tuesday, April 12, 2016

RunDeck Introduction / Jenkins & RunDeck Integration

Introduction about RunDeck:
- RunDeck can be used to automate routine operational procedures and to automate tasks on multiple nodes.
- Scheduling jobs, and giving people access to run specific jobs without giving them access to the servers, etc.
- Rundeck's website says "Turn your operations procedures into self-service jobs.
                Safely give others the control and visibility they need."
- GUI & CLI combination; able to run without client agents; remote execution over SSH; flat-file config structure.
- Runs jobs and tasks remotely, either ad hoc or at a specific time, and captures the results.
- For data center automation; allows jobs to be triggered by the scheduler or on demand using the web interface or API.
- Custom workflows, end-to-end orchestration across local or remote servers, cross-platform, etc.

Jenkins & Run Deck Integration

1) In Jenkins,
    install the required "RunDeck" plugin:
Click "Manage Jenkins" => "Manage Plugins" => click the "Available" tab.

2) Click "Manage Jenkins" => "Configure System" => search for "RunDeck" on the Jenkins page.
Once the plugin is installed, the RunDeck options appear on this page.
Add the "RunDeck" options as per the screenshot below.

3) RunDeck Options
URL :
       Provide the URL of the RunDeck server, based on where it is installed. The default port for RunDeck is 4440.
   Login : admin (default value)
   Password : admin (default value)
   Test Connection : Click this once you have provided the above details, to check the connection to the RunDeck server.

4) Configure Jenkins Job for RunDeck Job Triggering:

5) In Configure Jenkins Job:

Post Build Step: Select "RunDeck" from dropdown.

6) From RunDeck: Copy the job's "UUID", as shown below, to use it in Jenkins.

7) Provide the options as below: the UUID of the job to trigger, etc.
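Under the hood, the Jenkins plugin triggers the job through RunDeck's HTTP API. A sketch of that call is below; the URL, token, and UUID are placeholders you must replace with your own values, and API version 14 is an assumption:

```shell
RUNDECK_URL="http://localhost:4440"     # default RunDeck port
JOB_UUID="REPLACE-WITH-JOB-UUID"        # the UUID copied from the RunDeck job page
API_TOKEN="REPLACE-WITH-API-TOKEN"      # generated under your RunDeck user profile
RUN_ENDPOINT="${RUNDECK_URL}/api/14/job/${JOB_UUID}/run"
echo "${RUN_ENDPOINT}"
# Trigger the job (requires a reachable RunDeck server and a valid token):
# curl -s -X POST -H "X-Rundeck-Auth-Token: ${API_TOKEN}" "${RUN_ENDPOINT}"
```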

Saturday, April 2, 2016


Continuous Integration:(CI)
CI is a development practice that requires developers to integrate code into a shared repository (SVN/ClearCase/Git) several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

Continuous Delivery:
Continuous Delivery is a software development discipline where you build software in such a way that the software can be released to production at any time. You're doing continuous delivery when: Your software is deployable throughout its life cycle.

Continuous Deployment:
Continuous deployment is the next step past continuous delivery, where you are not just steadily creating a deployable package, but are actually deploying it steadily.

Continuous Deployment is also consistently deploying code to production as features are completed, and as soon as you have met the release criteria for those features. That release criteria depends on your situation, and may be running some automated tests, code reviews, load tests, manual verification by a QA person or business stakeholder, or just having another pair of eyes look at your feature and make sure it doesn't explode. Again, the specific criteria can vary, but the key idea is to have a steadily flowing pipeline pushing changes to production, always moving the code forward, and keeping the pipeline as short as realistically possible.

Friday, April 1, 2016


Learning About PUPPET:

What is Puppet ?
Puppet is an open source configuration management tool. It is written in Ruby.
It runs on many Unix-like systems as well as on Microsoft Windows, and includes its own declarative language to describe system configuration. Puppet is produced by Puppet Labs and was created by Luke Kanies in 2005.

What Can Puppet be used for?
- Puppet can be used for datacenter automation and server management.
- Puppet is a tool designed to manage the configuration of Unix-like and Microsoft
  Windows systems declaratively.
- The user describes system resources and their state,
  either using Puppet's declarative language or a Ruby DSL (domain-specific language).
- Puppet can manage anywhere from 2 servers up to 50,000 servers.
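To make the declarative style concrete, here is a tiny, hypothetical manifest written out from the shell; the ntp package/service is just an example resource, not something from this post:

```shell
cat > /tmp/site.pp <<'EOF'
# Desired end state: the ntp package installed and its service running.
package { 'ntp':
  ensure => installed,
}
service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],   # install the package before managing the service
}
EOF
# Apply it in serverless (masterless) mode, if Puppet is installed:
# puppet apply /tmp/site.pp
```

You describe the "what" (the desired state); Puppet works out the "how" on each platform.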

Puppet Architecture
Puppet has a Puppet Master, Puppet Nodes, Puppet Agents, etc.
Puppet has been built with 2 modes:
  1) Client-Server Mode : A central server, with agents running on separate nodes.
  2) Serverless Mode    : A single process does all the work.

Puppet Use Cases:

More Info Yet to be Added Soon