Sunday, December 11, 2016

AWS Interview Questions

Found this link; it covers AWS interview questions.

Tuesday, November 22, 2016

DevOps Interview Questions

Found this link; it covers a lot of interview questions on many topics related to DevOps.

GitLab and Jenkins Integration using SSH Keys

Setting up SSH Keys to connect GitLab and Jenkins

Follow the steps below:
1) Generate the user's SSH key pair (public and private keys).
       Command : ssh-keygen -t rsa (optionally add -C "comment" to label the key)
       Press <Enter> to accept the default location and file name.
       Enter, and re-enter, a passphrase when prompted, or just press <Enter> to leave
       it blank.

      There are other ways to generate SSH keys as well.

2) GitLab : (GitLab is installed on an on-premise server in my case)
      Log in to the GitLab server with your user ID.
      Click "Profile Settings" at the bottom left, then "SSH Keys" at the top right corner.

Copy the user's public key (id_rsa.pub) and add it to GitLab : SSH Keys of the
     specific user or service account ID.

3) Jenkins:
    Copy the user's private key (id_rsa) and add it to Jenkins :
     Credentials -> System -> Global Credentials -> Add Credentials.
     Select Kind : "SSH Username with private key" and provide the username.
     In the "Enter directly" section, copy and paste the private key (or use the other options).
     Leave the passphrase and ID blank.
     Provide a description such as "UserName_PrivateKey",
     so you can select it from the drop-down during job configuration.

4) In the Jenkins job configuration, under the Git section,
provide the GitLab repo URL (the SSH option URL)
           Ex : git@server:locationoftheRepo.git
and select "UserName_PrivateKey" from the drop-down.

This authenticates the user with GitLab using SSH keys, and
Jenkins will be able to connect to GitLab successfully.
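The key-generation step above can be sketched as follows. The file name jenkins_gitlab_key and the comment are made-up examples for illustration; by default ssh-keygen writes to ~/.ssh/id_rsa.

```shell
# Generate a key pair non-interactively (empty passphrase, hypothetical file name)
ssh-keygen -q -t rsa -b 4096 -N "" -C "jenkins@gitlab" -f ./jenkins_gitlab_key

# Public key: paste this into GitLab -> Profile Settings -> SSH Keys
cat ./jenkins_gitlab_key.pub

# Private key: paste this into Jenkins -> Credentials (SSH Username with private key)
cat ./jenkins_gitlab_key
```

In a real setup you would keep the private key out of source control and paste it only into the Jenkins credentials store.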

Monday, October 10, 2016

DOCKER => Simple Hands-On

:tada: :birthday: :tada: Docker Birthday #3 :tada: :birthday: :tada:

Recently went to a meetup hosted by Docker for "Docker Birthday #3".

      Install Docker on Windows or Linux servers.

Installing Docker on Ubuntu :
     1) Log in as root
     2) apt-get update
     3) apt-get install -y docker.io   (the package is named docker.io in the Ubuntu archives)
Installing Docker on CentOS :
     1) Log in as root
     2) yum update
     3) yum install -y docker
To check whether Docker has been installed:
    Type "docker version", "docker --version", or "docker info"
To run Docker as a non-root user:
      sudo groupadd docker
      sudo usermod -aG docker youruserid

For beginners learning Docker, you can start at the link below @ GitHub :

"Docker for beginners" : 

This tutorial consists of the sections below:
1.0 Running your first container
2.0 Webapps with Docker
3.0 Run a multi-container app with Docker Compose

You can try building a simple flask app.
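As a starting point, a minimal Dockerfile for such a Flask app might look like this. The file names (app.py, requirements.txt) and the base image are assumptions for illustration, not from the tutorial:

```dockerfile
# Hypothetical Dockerfile for a simple Flask app
FROM python:2.7-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]
```

You would then build and run it with something like `docker build -t myflask .` followed by `docker run -p 5000:5000 myflask`.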

Docker Birthday 3 : App Challenge

Sunday, June 5, 2016

Learning "KUBERNETES"

Kubernetes derives from "κυβερνήτης" (kubernetes), the Greek word for "pilot" or "helmsman of a ship".
Think of Kubernetes as the pilot, and the (Docker) containers as the engines it manages.

Kubernetes is a powerful system, developed by Google, for managing containerized applications in a clustered environment. It provides better ways of managing related, distributed components across varied infrastructure.

Understanding Terminology:
Master:
   - The master is the central control point that provides a unified view of the cluster. There is a single
      master node that controls multiple minions or nodes.
   - The master server serves as the main management contact point for administrators, and it provides
      many cluster-wide systems for the relatively dumb worker nodes.
   - The master server runs a number of unique services that are used to manage the cluster's workload
     and direct communications across the nodes.
Minions or Nodes:
   - A minion is a worker node that runs tasks as delegated by the master. Minions can run one or more
      pods, and provide an application-specific "virtual host" in a containerized environment.
Pods:
   - Pods are the smallest deployable units that can be created, scheduled, and managed. A pod is a logical
     collection of containers that belong to an application.
   - A group of closely related containers on the same host.
   - Kubernetes pods come and go: if one of them shuts down or crashes, a new pod will be started.
      When scaling up or down, or when doing rolling updates, pods are created or destroyed.
Kubelet:
   - The kubelet is the primary "node agent" that runs on each node.
   - The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object that
      describes a pod.
   - The kubelet takes a set of PodSpecs and ensures that the described containers are
      running and healthy.
Replication Controller:
   - Creates and destroys pods dynamically.
   - Defines pods to be horizontally scaled.
   - Uses a label query for identifying which containers to run.
   - Maintains a specific number of replicas of a particular thing to run.
Etcd:
   - Etcd is a component that Kubernetes needs to function as a globally available configuration store.
   - Etcd is a lightweight, distributed key-value store that can be distributed across multiple nodes.
   - It stores data that can be used by each of the nodes in the cluster.
Kubectl:
   - Command-line utility.
   - To manage clusters, kubectl authenticates to the REST API.
   - Kubectl controls the Kubernetes cluster manager.
Service:
    - Virtual abstraction.
    - Basic load balancer.
    - Single consistent access point to the pods.
Labels:
    - Key-value tags to mark units of work as part of a group.
    - Used for management and action targeting.

Definition File:
    - A YAML/JSON file describing a pod, service, or replication controller.
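For illustration, a minimal pod definition file might look like the sketch below. The names are made up, and the nginx image is just an example:

```yaml
# pod.yaml - a minimal (hypothetical) pod definition
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```

You would create it with `kubectl create -f pod.yaml` and check it with `kubectl get pods`.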

kubectl annotate       - Update the annotations on a resource.
kubectl api-versions   - Print the supported API versions on the server, in the form "group/version".
kubectl apply          - Apply a configuration to a resource by filename or stdin.
kubectl attach         - Attach to a running container.
kubectl autoscale      - Auto-scale a deployment or replication controller.
kubectl cluster-info   - Display cluster info.
kubectl config         - Modify kubeconfig files.
kubectl convert        - Convert config files between different API versions.
kubectl cordon         - Mark a node as unschedulable.
kubectl create         - Create a resource by filename or stdin.
kubectl delete         - Delete resources by filenames, stdin, resources and names, or by resources
                         and label selector.
kubectl describe       - Show details of a specific resource or group of resources.
kubectl drain          - Drain a node in preparation for maintenance.
kubectl edit           - Edit a resource on the server.
kubectl exec           - Execute a command in a container.
kubectl explain        - Documentation of resources.
kubectl expose         - Take a replication controller, service, or pod and expose it as a new
                         Kubernetes Service.
kubectl get            - Display one or many resources.
kubectl label          - Update the labels on a resource.
kubectl logs           - Print the logs for a container in a pod.
kubectl patch          - Update field(s) of a resource using a strategic merge patch.
kubectl proxy          - Run a proxy to the Kubernetes API server.
kubectl replace        - Replace a resource by filename or stdin.
kubectl rollout        - Manage a deployment rollout.
kubectl run            - Run a particular image on the cluster.
kubectl scale          - Set a new size for a Replication Controller, Job, or Deployment.
kubectl uncordon       - Mark a node as schedulable.
kubectl version        - Print the client and server version information.
kubectl namespace      - SUPERSEDED: Set and view the current Kubernetes namespace.
kubectl port-forward   - Forward one or more local ports to a pod.
kubectl rolling-update - Perform a rolling update of the given ReplicationController.


Friday, April 29, 2016

Learning "DOCKER"

Learning "DOCKER"

1) Docker is an open source project that automates the deployment of applications inside Linux
    containers, and provides the capability to package an application with its runtime dependencies
    into a container.
2) It provides the Docker CLI command-line tool for the lifecycle management of
     image-based containers.
3) Containers are disposable / fast / ephemeral / immutable.
4) Compared to a VM, a Docker container starts in seconds.
5) Docker containers are portable across machines. We just need the Docker Engine on a machine to
    run any number of Docker containers from images.
6) Docker containers include the minimal runtime requirements of the application, reducing their size
    and allowing them to be deployed quickly.
7) Containers can be shared with others by keeping them in remote repositories.
8) Lightweight footprint and minimal overhead - images are typically very small, which facilitates rapid
    delivery and reduces the time to deploy new application containers.
9) Containers isolate apps from each other and from the underlying infrastructure, while providing
     an added layer of protection.
10) Because containers share the same OS kernel, they start immediately compared to VMs;
this is why Docker is sometimes called a lightweight VM.
11) Docker containers run on all major Linux distributions and Microsoft OSes, with support for
      every infrastructure.
12) Containers eliminate environment inconsistencies. By packaging the application with its configs
      and dependencies together and shipping it as a container, the application will always work as
      designed - locally, on another machine, in test, or in production. No more worries about having to
      install the same configs into a different environment.

Components in "DOCKER"
Docker works with the following fundamental components:

Container
Each container is based on an image that holds the necessary configuration data. When you launch a container from an image, a writable layer is added on top of this image. Every time you commit a container (using the docker commit command), a new image layer is added to store your changes.
Docker allows you to package an application with all of its dependencies into a standardized unit (the container) for software development.

Image
A Docker image is made up of filesystems layered over each other.
At the base is the boot filesystem (bootfs), which resembles the Linux/Unix boot filesystem.
A Docker user will never interact with the bootfs.
When a container is booted, it is moved to memory, and the bootfs is unmounted to free up the RAM used by the initrd disk image.
An image is a static snapshot of a container's configuration. It is a read-only layer that is never modified; all changes are made in the top-most writable layer, and can be saved only by creating a new image. Each image depends on one or more parent images.

Platform Image
An image that has no parent. Platform images define the runtime environment, packages, and utilities necessary for a containerized application to run. The platform image is read-only, so any changes are reflected in the copied images stacked on top of it. See an example of such stacking in the below Figure.

Registry
A repository of images. Registries are public or private repositories that contain images available for download. Some registries allow users to upload images to make them available to others.

Dockerfile
A configuration file with build instructions for Docker images. Dockerfiles provide a way to automate, reuse, and share build procedures.

More Information about Containers:
       Linux containers use core technologies such as:
Control Groups (cgroups) for resource management
Namespaces for process isolation

Several components are needed for Linux containers to function correctly;
most of them are provided by the Linux kernel. Kernel namespaces ensure process isolation,
and cgroups are employed to control system resources. SELinux is used to assure separation
between the host and the container, and also between the individual containers. A management
interface forms a higher layer that interacts with the aforementioned kernel components and
provides tools for the construction and management of containers.

DOCKER Commands:
docker info                   = Displays system-wide information
docker -v                     = Provides the Docker version
docker images                 = Lists local images
docker images --no-trunc      = Lists images without truncating the output
docker ps                     = Lists only running containers
docker ps -a                  = Lists all containers (including stopped containers)
docker pull imagename         = Pulls a Docker image from the central Docker Hub registry
docker run imagename          = Spins up (starts) a container
docker run -i -t ubuntu bash  = Spins up an Ubuntu container and executes the bash command - if you
                                run this command "n" times, "n" containers will spin up
docker kill containername|ID  = Kills a running container
docker stop containername|ID  = Stops a running container
docker rm containername|ID    = Deletes/removes a container

Virtual Machine Vs DOCKER

Docker Architecture

Docker Interactions

Advantages of "DOCKER"
More Info to be Added Soon

Tuesday, April 12, 2016

RunDeck Introduction / Jenkins & RunDeck Integration

Introduction about RunDeck:
-   - RunDeck can be used to automate routine operational procedures, automating tasks on multiple nodes
- - Scheduling Jobs, to give access to run specific jobs without giving the access to Servers etc.,
- - Rundeck's website says "Turn your operations procedures into self-service jobs.
                Safely give others the control and visibility they need."
- - GUI & CLI Combination- able to run it without clients agents, remote execution with SSH, Flat File Config structure.
- - To run jobs and tasks remotely, either adhoc or at specific time and capture results.
- - For Data Center Automation, Allows to trigger jobs by the Scheduler or on demand using the web interface or API.

- - Custom workflows, end to end orchestration across local or remote servers, cross-platform etc.,

Jenkins & Run Deck Integration

1) In Jenkins,
    install the required plugin for RunDeck:
Click "Manage Jenkins" => "Manage Plugins" => click the "Available" tab.

2) Click "Manage Jenkins" => "Configure System" => search for RunDeck on the Jenkins page.
Once the plugin is installed, the RunDeck options appear on this page.
Add the RunDeck options as per the below screenshot.

3) RunDeck Options
    URL :
       Provide the URL of the RunDeck server; the default port for RunDeck is 4440.
   Login : admin (default value)
   Password : admin (default value)
   Test Connection : Click this once you have provided the above details, to check the connection to the RunDeck server.

4) Configure the Jenkins job for RunDeck job triggering:

5) In the Jenkins job configuration:

Post-build step: Select "RunDeck" from the dropdown.

6) From RunDeck: Copy the job's "UUID", as shown below, to use it in Jenkins.

7) Provide the options as below: the UUID to trigger, etc.

Saturday, April 2, 2016


Continuous Integration (CI):
CI is a development practice that requires developers to integrate code into a shared repository (SVN/ClearCase/Git) several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

Continuous Delivery:
Continuous delivery is a software development discipline where you build software in such a way that it can be released to production at any time. You're doing continuous delivery when your software is deployable throughout its life cycle.

Continuous Deployment:
Continuous deployment is the next step past continuous delivery, where you are not just steadily creating a deployable package, but actually deploying it steadily.

Continuous deployment also means consistently deploying code to production as features are completed, as soon as you have met the release criteria for those features. Those criteria depend on your situation, and may include running automated tests, code reviews, load tests, manual verification by a QA person or business stakeholder, or just having another pair of eyes look at your feature and make sure it doesn't explode. Again, the specific criteria can vary, but the key idea is to have a steadily flowing pipeline pushing changes to production, always moving the code forward, and keeping the pipeline as short as realistically possible.

Friday, April 1, 2016


Learning About PUPPET:

What is Puppet ?
Puppet is an open source configuration management tool written in Ruby.
It runs on many Unix-like systems as well as on Microsoft Windows, and includes its own declarative language to describe system configuration. Puppet is produced by Puppet Labs, which was founded by Luke Kanies in 2005.

What can Puppet be used for?
- Puppet can be used for data center automation and server management.
- Puppet is a tool designed to manage the configuration of Unix-like and Microsoft
  Windows systems declaratively.
- The user describes system resources and their state,
  either using Puppet's declarative language or a Ruby DSL (domain-specific language).
- Puppet can manage anywhere from a couple of servers up to 50,000 servers.

Puppet Architecture ?
Puppet has a Puppet master, Puppet nodes, Puppet agents, etc.
Puppet has been built with 2 modes:
  1) Client-Server Mode : A central server, with agents running on separate nodes.
  2) Serverless Mode    : A single process does all the work.
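As a taste of the declarative language, a minimal manifest might look like the sketch below. The ntp package, config file path, and service name are just examples, not tied to any particular setup:

```puppet
# Hypothetical manifest: ensure ntp is installed, configured, and running
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  owner   => 'root',
  mode    => '0644',
  require => Package['ntp'],
}

service { 'ntpd':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ntp.conf'],
}
```

The point of the DSL is that you describe the desired state (installed, running) rather than the commands to get there; Puppet figures out what to change on each run.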

Puppet Use Cases:

More Info Yet to be Added Soon

Wednesday, March 30, 2016

What is DevOps ?


Different Definitions of DevOps:
- DevOps is a movement of people who think it's time for a change in the IT industry - time to stop
  wasting money and start delivering great software and building systems that scale and last.
- DevOps is a cultural shift, movement, or practice that emphasizes the collaboration and
  communication of both software developers and operations teams, while automating the process of software delivery and infrastructure changes.
- DevOps is the blending of tasks performed by a company's application development and systems
  operations teams.
- DevOps is all about trying to avoid that epic failure and working smarter and more efficiently at the same time. It is a framework of ideas and principles designed to foster cooperation, learning, and
  coordination between development and operations groups.

What does the development team want? What does the operations team want?
The development team wants changes to be pushed to higher environments.
The operations team wants stability of the environments.
For the deployment of any software, we need the Dev & Ops teams to work together.

DevOps Life Cycle?
DevOps starts with requirements from Customers regarding New Functionality, any change or upgrade needed.
Business Owners: 
Business Owners wants to Provide the Developers with the Changes or Upgrades the customers wants or that helps their business to develop by new features or functionalities to the softwares or websites.
Development teams works on writing new code/ new changes/ changes to existing code and pushing the changes to SCM & Later Build and Deploy to target higher Environments.
Testing teams works on finding out bugs, errors associated with the Deployed Code to the higher environments like Integration, UAT, Perf etc., - All these tests are automated by using different automation softwares such as JUnits for Unit Testing, Web browser automation testing using Selenium etc., Stress Testing using HP Load Runner, Gatling etc.,

Some Video Links about DevOps:

Tuesday, March 29, 2016

SonarQube for Android App & Jenkins

SonarQube Code Quality Analysis for Android App & Integration with Jenkins

1) Install the Android Plugin; the Java plugin, which is also needed, is installed by default.

To install the plugin, you need administration rights in SonarQube.
Follow the steps below for installation.
Click : Administration -> System -> Update Center
2) Click the "Available" option, search for the "Android" plugin, and install it by clicking Install.
Once the plugin is installed, restart SonarQube.
Note : The Android plugin is also called the "Android Lint" plugin these days.
3) Click on Quality Profiles after the Android plugin installation, and you can see the below info.
4) The Android plugin is a free version.
5) In Jenkins, create a job.
7) In the Jenkins job : Pull the code from SCM, whether it's from GitHub, GitLab, a Git server, SVN, etc.
8) Build step : Click Invoke "Standalone SonarQube Analysis" and provide the details as below.
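For reference, the analysis properties entered in that build step typically look something like the sketch below. The project key, name, and paths are made-up examples; adjust them to your project layout:

```properties
# Hypothetical settings for the "Standalone SonarQube Analysis" step
sonar.projectKey=com.example:myandroidapp
sonar.projectName=MyAndroidApp
sonar.projectVersion=1.0
sonar.sources=src
sonar.java.binaries=build/classes
```

The same key/value pairs can also live in a sonar-project.properties file at the repo root instead of the Jenkins job configuration.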

SonarQube for iOS App (Swift Language) & Jenkins

SonarQube Code Quality Analysis for iOS App (Swift Language) & Integration with Jenkins

1) Install the Swift language plugin in SonarQube if your iOS application is written in Swift.

To install the plugin, you need administration rights in SonarQube.
Follow the steps below for installation.
Click : Administration -> System -> Update Center

2) Click the "Available" option, search for the "Swift" plugin, and install it by clicking Install.
Once the plugin is installed, restart SonarQube.
3) Click on Quality Profiles after the Swift installation, and you can see the below info.

4) The Swift plugin is not a free version, so I went to the SonarSource website and requested
 a trial license key for the Swift plugin by providing my company details and my details.
Within 2-3 days I got a response from the SonarSource team with a trial license key valid for 14 days.
5) I provided that trial license key in SonarQube at the below location.
Click Administration -> Licenses -> Swift, provide the key there, and click Save License Settings.
8) Build step : Click Invoke "Standalone SonarQube Analysis" and provide the details as below.