Amazon Machine Learning for Analysing Big Data

As big data analytics gains popularity, Amazon Web Services (AWS) has launched a new service called ‘Amazon Machine Learning’ to analyse large amounts of information.

Amazon Machine Learning is a cloud-based service designed to extract relevant information from massive data repositories. It can help companies analyse large amounts of data and apply the resulting insights to improve their operational processes and service offerings. Its uses range from detecting fraudulent transactions to enhancing customer support, and it can support the innumerable business predictions companies make by analysing their data.


Speaking about AWS’s latest innovation, Jeff Bilger, a senior manager at Amazon Machine Learning, said, “Amazon has a long legacy in machine learning. It powers the product recommendations customers receive on Amazon.com, it is what makes Amazon Echo able to respond to your voice, and it is what allows us to unload an entire truck full of products and make them available for purchase in as little as 30 minutes.”

Amazon Machine Learning harnesses the power of fast cloud computing to eliminate much of the work of sifting through information and painstakingly developing accurate algorithms. Users can use Amazon’s servers to extract relevant data and generate predictions in a fraction of the time it would otherwise take.

You can read more about ‘Amazon Machine Learning’ on


Lessons from using Ansible exclusively for 2 years

Today we’re glad to share a post written by Corban Raun about Ansible playbooks. Corban is a Linux systems administrator who has been working with Ansible for two years. He reached a point in his career where he desperately needed a configuration management tool, and started looking at products like Puppet, Chef and SaltStack, but felt overwhelmed by the choice and wasn’t sure which tool to pick. During this time a friend and mentor suggested he look into a product named Ansible.

You can find him on Twitter (@corbanraun) and ask him any questions about Ansible. Here is his great article about Ansible:


As a Linux Systems Administrator, I came to a point in my career where I desperately needed a configuration management tool. I started looking at products like Puppet, Chef and SaltStack but I felt overwhelmed by the choice and wasn’t sure which tool to choose.

I needed to find something that worked, worked well, and didn’t take a lot of time to learn. All of the existing tools seemed to have their own unique way of handling configuration management, with many varying pros and cons. During this time a friend and mentor suggested I look into a lesser known product called Ansible.

No looking back

I have now been using Ansible exclusively for ~2 years on a wide range of projects, platforms and application stacks including Rails, Django, and Meteor web applications; MongoDB clustering; user management; CloudStack setup; and monitoring.

I also use Ansible to provision cloud providers like Amazon, Google, and DigitalOcean; and for any task or project that requires repeatable processes and a consistent environment (which is pretty much everything).

Ansible vs Puppet, Chef and Saltstack

One reason I chose Ansible was its ability to maintain a fully immutable server architecture and design. We will get to exactly what I mean later, but it’s important to note – my goal in writing this post is not to compare or contrast Ansible with other products. There are many articles available online regarding that. In fact, some of the things I love about Ansible are available in other configuration management tools.

My hope with this article is actually to be able to give you some Ansible use cases, practical applications, and best practices; with the ulterior motive of persuading you that Ansible is a product worth looking into. That way you may come to your own conclusions about whether or not Ansible is the right tool for your environment.

Immutable Server Architecture

When starting a new project with Ansible, one of the first things to think about is whether or not you want your architecture to support Immutable servers. For the purposes of this article, having an Immutable server architecture means that we have the ability to create, destroy, and replace servers at any time without causing service disruptions.

As an example, let’s say that part of your server maintenance window includes updating and patching servers. Instead of updating a currently running server, we should be able to spin up an exact replica that contains the upgrades and security patches we want to apply. We can then replace and destroy the currently running server. Why and how is this beneficial?

By creating a new server that is exactly the same as our current environment including the new upgrades, we can then proceed with confidence that the updated packages will not break or cause service disruption. If we have all of our server configuration in Ansible using proper source control, we can maintain this idea of Immutable architectures. By doing so we can keep our servers pure and unadulterated by those who might otherwise make undocumented modifications.

Ansible allows us to keep all of our changes centralized. One often unrealized benefit of this is that our Ansible configuration can be looked at as a type of documentation and disaster recovery solution. A great example of this can be found in the Server Density blog post on Puppet.

This idea of immutable architecture also helps us to become vendor-agnostic, meaning we can write or easily modify an Ansible playbook which can be used across different providers. This includes custom datacenter layouts as well as cloud platforms such as Amazon EC2, Google Cloud Compute, and Rackspace. A really good example of a multi-vendor Ansible playbook can be seen in the Streisand project.

Use Cases

Use Case #1: Security Patching

Ansible is an incredibly powerful and robust configuration management system. My favorite feature? Its simplicity. This can be seen by how easy it is to patch vulnerable servers.

Example #1: Shellshock

The following playbook was run against 100+ servers and patched the bash vulnerability in less than 10 minutes. The below example updates both Debian and Red Hat Linux variants. It will first run on half of all the hosts that are defined in an inventory file.

- hosts: all
  gather_facts: yes
  remote_user: craun
  serial: "50%"
  sudo: yes
  tasks:
    - name: Update Shellshock (Debian)
      apt: name=bash state=latest update_cache=yes
      when: ansible_os_family == "Debian"

    - name: Update Shellshock (RedHat)
      yum: name=bash state=latest
      when: ansible_os_family == "RedHat"

Example #2: Heartbleed and SSH

The following playbook was run against 100+ servers patching the HeartBleed vulnerability. At the time, I also noticed that the servers needed an updated version of OpenSSH. The below example updates both Debian and RedHat linux variants. It will patch and reboot 25% of the servers at a time until all of the hosts defined in the inventory file are updated.

- hosts: all
  gather_facts: yes
  remote_user: craun
  serial: "25%"
  sudo: yes
  tasks:
    - name: Update OpenSSL and OpenSSH (Debian)
      apt: name={{ item }} state=latest update_cache=yes
      with_items:
        - openssl
        - openssh-client
        - openssh-server
      when: ansible_os_family == "Debian"

    - name: Update OpenSSL and OpenSSH (RedHat)
      yum: name={{ item }} state=latest
      with_items:
        - openssl
        - openssh-client
        - openssh-server
      when: ansible_os_family == "RedHat"

    - name: Reboot servers
      command: reboot

Use Case #2: Monitoring

One of the first projects I used Ansible for was to simultaneously deploy and remove a monitoring solution. The project was simple: remove Zabbix and replace it with Server Density. This was incredibly easy with the help of Ansible. I ended up enjoying the project so much, I open sourced it.

One of the things I love about Ansible is how easy it is to write playbooks, and yet always have room to improve upon them. The Server Density Ansible playbook is the result of many revisions to my original code, which I started a little over a year ago. I continually revisit and update it using newfound knowledge and additional features released in the latest versions of Ansible.

Everything Else

Ansible has many more use cases than I have mentioned in this article so far, like provisioning cloud infrastructure, deploying application code, managing SSH keys, configuring databases, and setting up web servers. One of my favorite open source projects that uses Ansible is called Streisand. The Streisand project is a great example of how Ansible can be used with multiple cloud platforms and data center infrastructures. It shows how easy it is to take something difficult, like setting up VPN services, and turn it into a painless and repeatable process.

Already using a product like Puppet or SaltStack? You can still find benefits to using Ansible alongside other configuration management tools. Have an agent that needs to be restarted? Great! Ansible is agentless, so you could run something like:

ansible -i inventories/servers all -m service -a "name=salt-minion state=restarted" -u craun -K --sudo

from the command line to restart your agents. You can even use Ansible to install the agents required by other configuration management tools.
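As a sketch of that last point (the package name and task layout here are illustrative, not from the original post), a small playbook in the same style as the earlier examples could install and start the Salt agent:

```yaml
# Sketch: install and enable the Salt minion on Debian-family hosts.
- hosts: all
  remote_user: craun
  sudo: yes
  tasks:
    - name: Install the Salt minion (Debian)
      apt: name=salt-minion state=present update_cache=yes
      when: ansible_os_family == "Debian"

    - name: Ensure the Salt minion is running and enabled
      service: name=salt-minion state=started enabled=yes
```

Because Ansible only needs SSH, this works even before any other agent is in place.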

Best practices

In the last few years using Ansible I have learned a few things that may be useful should you choose to give it a try.

Use Ansible Modules where you can

When I first started using Ansible, I used the command and shell modules fairly regularly. I was so used to automating things with Bash that it was easy to fall into old habits. Ansible has many extremely useful modules, and if you find yourself using the `command` and `shell` modules often in a playbook, there is probably a better way to do it. Start off by getting familiar with the modules Ansible has to offer.
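As a quick illustration (the service name here is just an example), a task written with the shell module usually has a direct module equivalent:

```yaml
# Works, but is not idempotent and reports "changed" on every run:
- name: Restart nginx (shell)
  shell: service nginx restart

# Better: the service module is idempotent and works across init systems:
- name: Restart nginx (module)
  service: name=nginx state=restarted
```

The module version also lets Ansible report accurate change status, which matters once you rely on playbook output.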

Make your roles modular (i.e. reusable)

I used to maintain a separate Ansible project folder for every new application stack or project. I found myself copying the exact same roles from one project to another and making minor changes to them (such as Nginx configuration or vhost files). I found this inefficient and annoying, as I was essentially repeating steps. It wasn’t until I changed employers that I learned from my teammates that there is a much better way to set up projects. As an example, one thing Ansible lets you do is create templates using Jinja2. Let’s say we have an Nginx role with the following nginx vhost template:

server {
    listen 80;

    location / {
        return 302 https://$host$request_uri;
    }
}

server {
    listen 443 ssl spdy;
    ssl_certificate     /etc/ssl/certs/mysite.crt;
    ssl_certificate_key /etc/ssl/private/mysite.key;

    location / {
        root   /var/www/public;
        index  index.html index.htm;
    }
}
While the above example is more than valid, we can make it modular by adding some variables:

server {
    listen 80;

    location / {
        return 302 https://$host$request_uri;
    }
}

server {
    listen 443 ssl spdy;
    ssl_certificate     {{ ssl_certificate_path }};
    ssl_certificate_key {{ ssl_key_path }};
    server_name {{ server_name }} {{ ansible_eth0.ipv4.address }};

    location / {
        root   {{ web_root }};
        index  index.html index.htm;
    }
}

We can then alter these variables within many different playbooks while reusing the same Nginx role:

- hosts: website
  gather_facts: yes
  remote_user: craun
  sudo: yes
  vars:
    ssl_certificate_path: "/etc/ssl/certs/mysite.crt"
    ssl_key_path: "/etc/ssl/private/mysite.key"
    server_name: ""
    web_root: "/var/www/public"
  roles:
    - nginx
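A reusable role like this typically follows Ansible's standard role directory layout (a sketch; the specific file names are illustrative):

```
roles/
  nginx/
    tasks/main.yml      # install Nginx and deploy the vhost template
    handlers/main.yml   # e.g. restart Nginx when the config changes
    templates/vhost.j2  # the Jinja2 vhost template
    defaults/main.yml   # default values for web_root, server_name, etc.
```

Variables set in a playbook override the role's defaults, which is what makes the same role reusable across projects.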

Test, Rinse, Repeat

Test your changes, and test them often. The practice of testing changes is not a new idea. It can, however, become difficult to test modifications when both sysadmins and developers are making changes to different parts of the same architecture. One of the reasons I chose Ansible is its ability to be used and understood by both traditional systems administrators and developers. It is a true development operations tool.

For example, it’s incredibly simple to integrate Ansible with tools like HashiCorp’s Vagrant. By combining the tools, you and your developers can be more confident that what is in production can be repeated and tested in a local environment. This is crucial when troubleshooting configuration and application changes. Once you have verified and tested your changes with these tools, you can be reasonably confident that they will not break anything (remember what immutable means?).
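As a sketch of that integration (the box name and playbook path are illustrative), Vagrant's Ansible provisioner needs only a few lines in a Vagrantfile:

```ruby
# Vagrantfile (sketch): provision a local VM with the same playbook
# you run against production servers. Box name and playbook path are
# placeholder values.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "site.yml"
  end
end
```

A `vagrant up` then builds and provisions the VM, so playbook changes can be tested locally before they ever touch a real server.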

What now?

As mentioned previously, my goal was not to compare Ansible to other products; after all, you can find uses for it in environments where you already have other configuration management tools in place, and some of the features I have talked about are available in other products as well.

Hopefully this article gave you an idea as to why Ansible may be useful in your server architecture. If you only take one thing from this article, let it be this: Ansible can help you maintain and manage any server architecture you can imagine, and it’s a great place to get started in the world of automation.


Reference :

Ansible Bootstrapping

In this video, Creston Jamison from ‘Ruby Tree Software’ explains how to write a playbook with Ansible to bootstrap your server for future Ansible runs, as well as adding some security.

[youtube width="550" height="340"]k7pw7z00CeU[/youtube]

For more details, visit

vim cheat sheet

The vim editor has two modes of operation:

– command mode

– insert mode


Here are some essential commands for working in vim’s command mode.


Cursor Movement



h or left arrow  Move cursor left
j or down arrow  Move cursor down
k or up arrow  Move cursor up
l or right arrow  Move cursor right
w  Jump forwards to the start of a word
W  Jump forwards to the start of a word (words can contain punctuation)
e  Jump forwards to the end of a word
E  Jump forwards to the end of a word (words can contain punctuation)
b  Jump backwards to the start of a word
B  Jump backwards to the start of a word (words can contain punctuation)
0 (zero)  Jump to the start of the line
^  Jump to the first non-blank character of the line
G  Move to the end of the file
$  Move to the end of the current line
Ctrl-B  Move up (back) one screen
Ctrl-F  Move down (forward) one screen


Editing

i  Insert at the current cursor position
I  Insert at the beginning of the line
r  Replace a single character
J  Join the line below to the current one
cc  Change (replace) the entire line
cw  Change to the end of the word
c$  Change to the end of the line
s  Delete character and substitute text
S  Delete line and substitute text (same as cc)
x  Delete character at the current cursor position
X  Delete character immediately before the cursor position
xp  Transpose two letters (cut and paste)
u  Undo
Ctrl-R  Redo
.  Repeat last command

Cut and paste

yy  Copy an entire line
2yy  Copy 2 lines
y$  Copy to the end of the line
p  Paste after the current cursor position
P  Paste before the current cursor position
dd  Cut a line
2dd  Cut 2 lines
dw  Cut a word
D  Cut to the end of the line
x  Cut a character
v  Start visual mode, mark lines, then use a command

Search

/regex  Search forward for regex
?regex  Search backward for regex
n  Find the next match
N  Find the previous match

Exiting and files

:q  Quit without saving changes (force with :q!)
:wq  Save the changes and quit (force with :wq!)
:x  Write the file contents and quit
:e file  Load file in place of the current file
:r file  Insert the contents of file after the current cursor position


Reference :


Puppet Enterprise 3.8 – Easiest way to Automate Provisioning and Manage Infrastructure as Code

Provisioning new machines into an environment can be a common source of IT bottlenecks and downtime. Manually racking and stacking servers or deploying cloned images can delay the availability of new systems by days or weeks. Manual processes can introduce human error and magnify those problems when replicated within cloned images. To address these problems, Puppet Labs recently announced ‘Puppet Enterprise 3.8’.


The newly announced Puppet Enterprise 3.8 includes:

  • New provisioning capabilities for Docker containers, AWS infrastructure and bare metal servers.
  • A ‘Puppet Code Manager’ app, based on r10k technology, which accelerates the deployment of infrastructure changes and improves reliability by giving you a consistent and automated way to change, review, test and promote the Puppet code you use to define your infrastructure.
  • Automated provisioning with the next-generation Puppet Node Manager, covering:
  1. Containers (a new Puppet-supported module for Docker).
  2. Cloud environments (an AWS module which allows you to provision, configure and manage AWS resources in a consistent and repeatable way).
  3. Bare metal (Razor, Puppet Labs’ bare metal provisioning capability, moves from tech preview to a fully supported solution for provisioning bare metal servers).
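As an illustrative sketch of the AWS module (the resource attributes and values below are assumptions; check the module’s documentation on the Puppet Forge), declaring an EC2 instance as code looks roughly like this:

```puppet
# Sketch only: declares an EC2 instance via the puppetlabs-aws module.
# The name, region, AMI and instance type are placeholder values.
ec2_instance { 'demo-server':
  ensure        => present,
  region        => 'us-west-2',
  image_id      => 'ami-123456',  # placeholder AMI
  instance_type => 't2.micro',
}
```

Because the instance is described declaratively, applying the same manifest repeatedly yields the same result, which is what makes the provisioning repeatable.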

When Can I Get It?

Both Puppet Enterprise 3.8 (including the next-generation Node Manager, Razor and Puppet Code Manager) and the Puppet Supported module for Docker will be generally available in late April.

The AWS module is now available on the Puppet Forge.

For more details, visit


Why is DevOps growing?

Hey geeks! Still confused about what DevOps is, what its benefits are, or how it is different?


In this presentation, ReleaseLogics explains why DevOps is growing.

The presentation gives a simple explanation of:

1. What DevOps is
2. Its benefits
3. The forces behind the growth of DevOps
4. How it differs from the traditional way of releasing software

It will help you understand the importance of using the DevOps methodology.

Google Cloud Platform Gets Integrated Log Management

Google has added a service that makes it easy to ingest, view, search and analyze logs generated by Compute Engine and App Engine.

Google Cloud Logging is available in beta to help you manage all of your Google Compute Engine and Google App Engine logs in one place, and collect, view, analyze and export them. By combining Google Cloud Monitoring with Cloud Logging, you gain a powerful set of tools for managing operations and increasing business insights.

The Cloud Logging service allows you to:

1. Ingest
2. Search
3. Analyze
4. Archive

See more details at,

“Click To Deploy” Puppet — Really?

Google has introduced “Click to deploy” option using which you can deploy Open Source Puppet on Google Compute Engine.

The feature is designed to make it simpler for systems administrators to quickly set up the tool on Google’s Compute Engine platform and use it to automate tasks like installing, configuring and upgrading software on virtual machines.

This marks the first time that an IT automation tool has been made available as a click-to-deploy option in the cloud, said Nigel Kersten, CIO of Puppet Labs, in comments on the Google Cloud Platform Blog.

The version of Puppet that Google has enabled with a one-click deploy option is open source and available for free. In addition, Puppet Labs also sells a commercial version of the software for businesses that want additional support with the product. The tool runs on all major Unix and Linux versions and also on Mac OS X and Windows.

See more details at,

The “Cold” Cloud Wars: Amazon vs. Google

Google vs. Amazon: Who Will Win The “Cold” Cloud Wars?
This is an interesting article about the competition between Google and Amazon.

Google recently introduced “Nearline”, which can store older data in “cold storage”. Amazon targeted the same market back in 2012 by introducing “Glacier”, a similar service.

The article covers:

  • How both Google and Amazon are trying to expand their ecosystems.
  • How much revenue comes from the cloud business.
  • Why everyone is eyeing the cloud.


For more details visit,

How DevOps can redefine your IT strategy

Rich Hein says, “People today expect their software to work wherever they are, whether they are using a mobile device or a desktop PC. As a result, IT must respond to these demands quickly. DevOps aims to do just that by allowing organizations to produce and release more high-quality code better and faster.”

The article gives you an overview of:

1. What is DevOps?

2. DevOps value.

3. Finding or developing DevOps talent.

4. DevOps departments.

5. Making a career in DevOps.

6. DevOps certifications.


He concluded the article by saying “DevOps isn’t something you can just decide to do. Much like big data, it requires a culture change and a breaking down of the functioning silos within the IT organization. It needs to start at the top. The end-game is to have your development and operations team working in a collaborative fashion toward the collective goal of continuous delivery of better software.”

For more details visit,

Kitematic – The easiest way to use Docker on Mac

Kitematic completely automates the Docker installation and setup process and provides an intuitive graphical user interface (GUI) for running Docker containers on the Mac. Kitematic integrates with Docker Machine to provision a VM and install Docker Engine locally on your Mac.

Kitematic is:

1. Fast and easy to set up.
2. Integrated with Docker Hub.
3. A seamless experience between CLI and GUI.
4. Packed with more advanced features.

To download Kitematic and to get more details visit,

Continuous Delivery with Puppet Enterprise and CloudBees Jenkins Enterprise Webinar

Hey DevOps geeks, check out the upcoming webinar on “Continuous Delivery with Puppet Enterprise and CloudBees Jenkins Enterprise”.

It’s a 60-minute webinar in which you’ll learn:

1. How to use Puppet Enterprise together with CloudBees Jenkins Enterprise.

2. How it will help you achieve continuous delivery of infrastructure services.

The webinar is on Tuesday, March 17th, at 2pm ET.
For more details and to register, visit