Friday, 31 March 2017

Ubuntu 17.04 (Zesty Zapus) Beta 2 Installation on VMware Workstation

Harry
Ubuntu 17.04 (Zesty Zapus) Beta 2 installation
Ubuntu 17.04 (Zesty Zapus) Installation on VMware Workstation

This video tutorial shows the Ubuntu 17.04 (Zesty Zapus) Beta 2 installation on VMware Workstation/Player step by step. The same steps also apply if you want to install Ubuntu 17.04 on a physical computer or laptop. We also install VMware Tools (Open VM Tools) on Ubuntu 17.04 Beta 2 for better performance and usability features such as Fit Guest Now, drag-and-drop file sharing and clipboard sharing.

Ubuntu 17.04 Beta 2 Installation Steps:

  1. Download Ubuntu 17.04 Beta 2 ISO
  2. Create Virtual Machine on VMware Workstation/Player
  3. Start Ubuntu 17.04 Zesty Zapus Installation
  4. Install VMware Tools (Open VM Tools)
  5. Test VMware Tools Features: Fit Guest Now, Drag-Drop File and Clipboard Sharing
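If you prefer to do step 4 from the command line inside the installed guest, Open VM Tools can be installed from the standard Ubuntu repositories. A minimal sketch (the open-vm-tools-desktop package provides the desktop integration features such as Fit Guest Now and clipboard sharing):

$ sudo apt-get update

$ sudo apt-get install open-vm-tools open-vm-tools-desktop

$ sudo reboot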

Installing Ubuntu 17.04 (Zesty Zapus) Beta 2 on VMware Workstation


Ubuntu 17.04 Zesty Zapus New Features and Improvements

  1. The default DNS resolver is now systemd-resolved.
  2. For new installs, a swap file will be used instead of a swap partition.
  3. Ubuntu 17.04 is based on the Linux Kernel 4.10.
  4. It supports printers which allow printing without printer-specific drivers.
  5. LibreOffice has been updated to 5.3.
  6. The Calendar app now has a Week view.
  7. gconf is no longer installed by default since it has long been superseded by gsettings.
  8. Apps provided by GNOME have been updated to 3.24. Exceptions are the Nautilus file manager (3.20), Terminal (3.20), Evolution (3.22), and Software (3.22).

Ubuntu 17.04 Desktop Minimum System Requirements

  1. 700 MHz processor (about Intel Celeron or better)
  2. 512 MB RAM (system memory)
  3. 5 GB of hard-drive space (or USB stick, memory card or external drive but see LiveCD for an alternative approach)
  4. VGA capable of 1024x768 screen resolution
  5. Either a CD/DVD drive or a USB port for the installer media
  6. Internet access is helpful

Hope you found this Ubuntu 17.04 (Zesty Zapus) Beta 2 installation tutorial helpful and informative. Please consider sharing it. Your feedback and questions are welcome!

Thursday, 30 March 2017

How to Install MKVToolNix 10.0.0 on Ubuntu 16.04, 16.10

Harry
   MKVToolNix is a set of tools to create, alter, split, join and inspect Matroska (MKV) files. With these tools one can get information about Matroska files (mkvinfo), extract tracks/data from Matroska files (mkvextract) and create Matroska files from other media files (mkvmerge). Matroska is a multimedia file format aiming to become THE new container format for the future. Use MKVCleaver or gMKVExtractGUI to extract/demultiplex MKV video and audio files.


MKVToolNix 10.0.0 changelog:
  •     mkvmerge: AVC/h.264 parser: mkvmerge will now drop all frames before the first key frame as they cannot be decoded properly anyway.
  •     mkvmerge: HEVC/h.265 parser: mkvmerge will now drop all frames before the first key frame as they cannot be decoded properly anyway.
  •     mkvmerge: HEVC/h.265 parser: added a workaround for invalid values for the “default display window” in the VUI parameters of sequence parameter sets.
  •     mkvmerge: MP4 reader: fixed track offsets being wrong in certain situations regarding the presence or absence of edit lists (‘elst’ atoms) & composition timestamps (‘ctts’ atoms).
  •     mkvmerge: MP4 reader: offsets in “ctts” are now always treated as signed integers, even with version 0 atoms.
  •     mkvinfo: the timestamps of SimpleBlocks with negative timestamps are now shown correctly.
  •     mkvmerge: Matroska reader: fixed handling BlockGroups and SimpleBlocks with negative timestamps.
  •     mkvmerge: MP3 packetizer: the MP3 packetizer will no longer drop timestamps from source containers if they go backwards. This keeps A/V in sync for files where the source was in sync even though their timestamps aren’t monotonic increasing.
  •     mkvmerge: AVC/h.264 parser: mkvmerge will now drop timestamps from the source container if no frame is emitted for that timestamp.
  •     mkvmerge: HEVC/h.265 parser: mkvmerge will now drop timestamps from the source container if no frame is emitted for that timestamp. Fixes the HEVC equivalent of the problem with AVC described in #1908.
  •     mkvextract: SSA/ASS: fixed extraction when the “Format” line in the “[Events]” section contains less fields than the default for SSA/ASS would indicate.

Installation instructions:

 Open a terminal (Ctrl+Alt+T) and run the following commands:

Ubuntu 16.04 xenial


$ wget -q -O - https://mkvtoolnix.download/gpg-pub-moritzbunkus.txt | sudo apt-key add -

$ echo "deb http://mkvtoolnix.download/ubuntu/xenial/ ./" | sudo tee -a /etc/apt/sources.list

$ sudo apt-get update

$ sudo apt-get install mkvtoolnix mkvtoolnix-gui

Ubuntu 16.10 yakkety

$ wget -q -O - https://mkvtoolnix.download/gpg-pub-moritzbunkus.txt | sudo apt-key add -

$ echo "deb http://mkvtoolnix.download/ubuntu/yakkety/ ./" | sudo tee -a /etc/apt/sources.list

$ sudo apt-get update

$ sudo apt-get install mkvtoolnix mkvtoolnix-gui
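Once the packages are installed, you can try the tools from the same terminal. A quick sketch (the file names and track ID below are placeholders, not files from this tutorial):

$ mkvmerge --version

$ mkvmerge -o output.mkv input.mp4

$ mkvinfo output.mkv

$ mkvextract tracks output.mkv 1:audio.ac3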
 

How to Install GScan2PDF 1.7.3 on Ubuntu 16.04, 16.10, 17.04

Harry
   gscan2pdf is a GUI to ease the process of producing PDFs or DjVus from scanned documents. You scan one or several pages in with File/Scan, and create a PDF of selected pages with File/Save PDF. At maturity, the GUI will have similar features to that of the Windows Imaging program, but with the express objective of writing a PDF, including metadata. Scanning is handled with SANE via scanimage. PDF conversion is done by libtiff. Perl is used for portability and ease of programming, with gtk2-perl for the GUI. This should therefore work more or less out of the box on any system with gtk2-perl, scanimage, and libtiff.
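If you're curious what gscan2pdf is doing behind the scenes, you can drive the same underlying tools by hand. A rough sketch, assuming a SANE-detected scanner and the libtiff-tools package (device, resolution and file names are placeholders):

$ scanimage -L

$ scanimage --resolution 300 --format=tiff > page1.tiff

$ tiff2pdf -o page1.pdf page1.tiff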


Features
  •     Compatible with any SANE-capable scanner
  •     Crop, threshold & clean up scan
  •     Reorder pages via DND
  •     Write multi-page scan to PDF, DjVu or TIFF
  •     Write single scans to any format supported by ImageMagick
  •     Ocropus & tesseract support
  •     Place OCR output at boundary boxes supplied by Ocropus
  •     Incorporate PDF metadata in filename

Installation instructions:

   Open a terminal (Ctrl+Alt+T) and run the following commands:

$ sudo add-apt-repository ppa:jeffreyratcliffe/ppa

$ sudo apt-get update

$ sudo apt-get install gscan2pdf


Sunday, 26 March 2017

Antergos 17.3 Installation on VMware Workstation

Harry
Antergos 17.3 Installation
Antergos 17.3 Installation on VMware Workstation

This video tutorial shows the Antergos 17.3 installation on VMware Workstation/Player step by step. The same steps also apply if you want to install Antergos Linux on a physical computer or laptop. We also install VMware Tools (Open VM Tools) on Antergos for better performance and usability features such as Fit Guest Now, drag-and-drop file sharing and clipboard sharing.

Antergos 17.3 Linux Installation Steps:

  1. Download Antergos 17.3 ISO
  2. Create Virtual Machine on VMware Workstation/Player
  3. Start Antergos Installation
  4. Install VMware Tools (Open VM Tools)
  5. Test VMware Tools Features: Fit Guest Now, Drag-Drop File and Clipboard Sharing

Installing Antergos 17.3 on VMware Workstation



What is Antergos Linux?

Antergos is a modern, elegant and powerful operating system based on Arch Linux. It started life under the name of Cinnarch, combining the Cinnamon desktop with the Arch Linux distribution, but the project has moved on from its original goals and now offers a choice of several desktops, including GNOME 3 (default), Cinnamon, Razor-qt and Xfce. Antergos also provides its own graphical installation program. Antergos is a rolling release distribution. Your entire system, from the base OS components to the applications that you install, will receive updates as they are released upstream. Antergos is available in many languages including Spanish, Galician, Catalan, English, German, and more.
Antergos Website: https://antergos.com/

Hope you found this Antergos 17.3 installation tutorial helpful and informative. Please consider sharing it. Your feedback and questions are welcome!

Saturday, 25 March 2017

Arch Linux 2017 Installation with Budgie Desktop and Apps on VMware Workstation

Harry
Arch Linux 2017 Installation with Budgie Desktop
Arch Linux 2017 Installation with Budgie Desktop on VMware Workstation

This video tutorial shows the Arch Linux 2017 installation with the Budgie Desktop on VMware Workstation/Player step by step. We'll also install applications such as Firefox, VLC, GIMP, FileZilla, Gedit and LibreOffice on Arch Linux 2017.03. The same steps also apply if you want to install Arch Linux 2017 on physical computer or server hardware. We'll also install and test VMware Tools (Open VM Tools) on Arch Linux for better performance and usability features such as Fit Guest Now, drag-and-drop file sharing, clipboard sharing and mouse integration.

Arch Linux 2017 with Budgie Installation Steps:

  1. Download Arch Linux 2017.03 ISO
  2. Create Virtual Machine on VMware Workstation/Player
  3. Start Arch Linux Base Installation
  4. Install and Configure Xorg and Budgie Desktop
  5. Installing Firefox, VLC, GIMP, FileZilla, Gedit and LibreOffice Applications on Arch Linux
  6. Installing and Configuring VMware Tools (Open VM Tools)
  7. Arch Linux 2017 Budgie Desktop Review

Installing Arch Linux 2017 Budgie Desktop on VMware Workstation



Arch Linux 2017.03 New Features and Improvements

Due to the decreasing popularity of i686 among the developers and the community, they have decided to phase out support for the 32-bit architecture. The Arch Linux 2017.02 ISO was the last one that allowed installing 32-bit Arch Linux. The following nine months are a deprecation period, during which i686 will still receive upgraded packages. Starting from November 2017, packaging and repository tools will no longer require i686 support from maintainers, effectively making i686 unsupported. That leaves only one option for installing Arch Linux on new PCs: the 64-bit (x86_64) platform.
Arch Linux Website: https://www.archlinux.org/

Arch Linux Minimum System Requirements

Arch Linux should run on any x86_64-compatible machine with a minimum of 512 MB RAM. A basic installation with all packages from the base group should take less than 800 MB of disk space. As the installation process needs to retrieve packages from a remote repository, a working internet connection is required.


What is Budgie Desktop?

Budgie is a distro-agnostic desktop environment, leveraging GNOME technologies such as GTK+, and is developed by Solus project as well as contributors from numerous communities such as Arch Linux and Ubuntu Budgie. Budgie is the default desktop of Solus Operating System, written from scratch. Besides a more modern design, Budgie can emulate the look and feel of the GNOME 2 desktop.
Budgie Desktop Website: https://budgie-desktop.org/

Hope you found this Arch Linux 2017 installation with Budgie Desktop and Apps tutorial helpful and informative. Please consider sharing it. Your feedback and questions are welcome!

Friday, 24 March 2017

Part 5: Ansible Galaxy

Anonymous
It's been a while since I wrote Parts 1, 2, 3 and 4 of my Ansible Tutorial series, but I've recently changed my approach somewhat when using Ansible, and certainly when I build on Parallax.
I've started using more and more from Ansible Galaxy.  For those of you who don't know, Galaxy is a community "app store"-like thing for sharing reusable Ansible Roles.
https://galaxy.ansible.com/
Let's pretend we want to deploy a staging server for a Python/Django application, using Postgres as the backend database all on a single server running Ubuntu 14.04 Trusty.
I've recently done something similar, so I know roughly what roles I need to include.  YMMV.
Starting with the basic stuff.  Let's find a role to install/configure Postgres.
https://galaxy.ansible.com/explore#/
Click the "database" category.
I tend to like to sort by Average Score, in Reverse order, so you get the highly rated ones at the top.
The top-rated Postgres role is from ANXS https://galaxy.ansible.com/list#/roles/512
There's a bunch of useful links on that page, one to the role's github source, and the issue tracker.
Below, there's a list of the supported platforms (taken from the role's metadata yml file).
Just check that your target OS is listed there, and everything will probably work fine.
It's also worth checking that your installed Ansible version is at least as new as the role's Minimum Ansible Version.
Starting with a base-point of Parallax (because it still has some really useful bits and bobs in it - like 'common')..
cd ./playbooks/part5_galaxy (or whatever you've called your playbook set).
If you want to directly install the role into the roles/ directory, you'll need to append the -p flag, and the path (relative or absolute) to your project's roles directory.  Otherwise they tend to get installed in a global location (which is a bit of a pain if you're not root).
So when you run:
ansible-galaxy install -p roles ANXS.postgresql

 downloading role 'postgresql', owned by ANXS
 no version specified, installing v1.0.3
 - downloading role from https://github.com/ANXS/postgresql/archive/v1.0.3.tar.gz
 - extracting ANXS.postgresql to roles/ANXS.postgresql
ANXS.postgresql was installed successfully
You should have output that resembles that, or something vaguely similar.
The next thing to do, is to integrate that role into our playbook.
In tutorial.yml, you can see that there's a vars: section in the play definition, as well as some variables included when a role is listed.
This also introduces a slightly different way of specifying a role within the playbook, where you can follow each role up with the required options.
There's an option within ANXS.postgresql to use monit to ensure that postgresql server is always running.  If you want to enable this, you will also need to install the ANXS.monit role.
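For illustration, here's roughly what that looks like in the play definition. This is a sketch only: the host group and the variable passed to the role are invented for the example, and the real option names come from each role's own defaults and documentation, not from anything shown here.

- name: Deploy the staging server
  hosts: staging
  sudo: yes
  vars:
    app_name: thingy
  roles:
    - common
    - { role: ANXS.monit }
    - { role: ANXS.postgresql, postgresql_version: 9.3 }   # variable name is illustrative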
In a way not entirely different from pip freeze and a requirements file, you can run
ansible-galaxy list -p roles/ >> galaxy-roles.txt 
and then be able to reimport the whole bunch of useful roles with a single command:
ansible-galaxy install -r galaxy-roles.txt -p roles
I've determined from past experience that the following Galaxy roles tend to play nicely together, and will proceed to install them in the tutorial playbook so you get some idea of how a full deployment workflow might look for a simple application.
These are the roles I've used..
 ANXS.apt, v1.0.2
 ANXS.build-essential, v1.0.1
 ANXS.fail2ban, v1.0.1
 ANXS.hostname, v1.0.4
 ANXS.monit, v1.0.1
 ANXS.nginx, v1.0.2
 ANXS.perl, v1.0.2
 ANXS.postgresql, v1.0.3
 ANXS.python, v1.0.1
 brisho.sSMTP, master
 EDITD.supervisor_task, v0.8
 EDITD.virtualenv, v0.0.2
 f500.project_deploy, v1.0.0
 joshualund.ufw, master
ANXS provide a great many roles which all play nicely.  Some of those are included as they are dependencies of other roles.  I tend to use sSMTP to forward local mail to Sendgrid, because I hate running email servers.
f500.project_deploy is a capistrano-like deployment role for Ansible which supports the creation of symlinks to the current deployed version (which subsequently allows rollbacks).
I don't want to go into the process of explaining how to modify this to deploy a Django application, I'm going to assume you've got enough information to figure that out for yourself.
I've also added the ufw role, which configures Ubuntu's ufw package, a neat interface to IPTables.
Basically, it should be quite easy to see how it is possible to build a playbook without having to write quite so much in the way of new ansible tasks/modules.

Other Useful Commands:

ansible-galaxy init 
This will create a role in a format ready for submission to the Galaxy community.
ansible-galaxy list
Show currently installed roles.
ansible-galaxy remove [role name] 
Removes a currently installed role.

Endnote

When you look at the list of available roles, it's quite staggering what you could possibly integrate, without having to do too much coding yourself.
It's fantastic.  At the time I wrote this article, there were 7880 users, and 1392 roles in total.  It's also growing rapidly day on day.
There's plenty more information on the Galaxy intro page, which covers how to share your own roles.

Part 4: Ansible Tower

Anonymous
You may remember that in January, I wrote a trilogy of blogposts surrounding the use of Ansible, as a handy guide to help y’all get started.  I’ve decided to revisit this now, and write another part, about Ansible Tower.
In the 6-odd months since I wrote Parts 1, 2 and 3 of my Getting Started with Ansible guide, it’s had over 10,000 unique visitors.  I’m quite impressed with that alone.  I’ve built the ansible-based provisioning and deployment pipelines for two more companies, both based off my Parallax starting point I’ve been working on since January.  That alone has been gathering Stars and Forks on Github.
And so, to part four: Ansible Tower.
Ansible Tower is the Web-based User Interface for Ansible, developed by the company behind the Ansible project.
It provides an easy-to-use dashboard, and role-based access control, so that it’s easier to allow individual teams access to use Ansible for their deployments, without having to rely on dedicated build engineers / DevOps teams to do it for them.
There’s also a REST API built into Tower, which aids automation tasks (we’ll come back to this in Part 5).
In this tutorial, I’m going to configure a server running Ansible Tower, and connect it to an Active Directory system.  You can use any LDAP directory, but Active Directory is probably the most  commonly found in Enterprise deployments.

Prerequisites:

Ansible Tower server (I’m using a VMware environment, so both my servers are VMs)
1 Core, 1GB RAM Ubuntu 12.04 LTS Server, 64-bit
Active Directory Server (I’m using Windows Server 2012 R2)
2 Cores, 4GB RAM
Officially, Tower supports CentOS 6, RedHat Enterprise Linux 6, Ubuntu Server 12.04 LTS, and Ubuntu Server 14.04 LTS.
Installing Tower requires Internet connectivity, because it downloads from their repo servers.
I have managed to perform an offline installation, but you have to set up some kind of system to mirror their repositories, and change some settings in the Ansible Installer file.
I *highly* recommend you dedicate a server (VM or otherwise) to Ansible Tower, because the installer will rewrite pg_hba.conf and supervisord.conf to suit its needs.  Everything is easier if you give it its own environment to run in.
You *might* be able to do it in Docker, although I haven’t tried, and I’m willing to bet you’re asking for trouble.
I’m going to assume you already know about installing Windows Server 2012 and building a domain controller. (If there's significant call for it, I might write a separate blog post about this...)

Installation Steps:

 SSH into the Tower Server, and upload the ansible-tower-setup-latest.gz file to your home directory (~).
Extract it
Download and open http://releases.ansible.com/ansible-tower/docs/tower_user_guide-latest.pdf in a browser tab for perusal and reference.
Install dependencies:
sudo apt-get install python-dev python-yaml python-paramiko python-jinja2 python-pip sshpass
sudo pip install ansible
cd ansible-tower-setup-$VERSION 
(where $VERSION is the version of Ansible Tower it untarred.  Mine's 1.4.11.)
It should come as no surprise that the Ansible Tower installer is actually an Ansible Playbook (hosts includes 127.0.0.1, and it’s all defined in group_vars/all and site.yml) - Neat, huh?
Edit group_vars/all to set some sane defaults - basically changing passwords away from what they ship with.
pg_password: AWsecret
admin_password: password
rabbitmq_password: "AWXbunnies"
**Important** - You really need to change these default values, otherwise it’d be theoretically possible that you could expose your secrets to the world!
The documentation says if you're going to do LDAP integration, you should configure that now.
I'm actually going to do LDAP integration at a later stage.
 sudo ./setup.sh
With any luck, you should get the following message.
The setup process completed successfully.

With Ansible Tower now installed, you can open a web browser, and go to http://
You’ll probably get presented with an unsigned certificate error, but we can change that later.

Sidenote on SSL.  

It’s all done via Apache2, so the file you’d want to edit is:
/etc/apache2/conf.d/awx-httpd-443.conf

and edit:
  SSLCertificateFile /etc/awx/awx.cert
  SSLCertificateKeyFile /etc/awx/awx.key

You can now log into Tower, with the username: admin, and whatever password you specified in group_vars/all at setup time.
In terms of actually getting started with Ansible Tower, I highly recommend you work your way through the PDF User guide I linked earlier on.  There’s a good example of a quickstart, and it’s really easy to import your standalone playbooks.
When you import a playbook, either manually or via some kind of source control mechanism, it's important to set hosts: all in the playbook YAML file, because the host definitions are now controlled by Tower.  If you forget to do that, you'll probably find nothing happens when you run a job.
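In other words, the play header of a playbook you import into Tower should look something like this (a minimal sketch):

---
- name: Deploy the application
  hosts: all   # Tower supplies the host list from its Inventory, so don't hard-code a group here
  sudo: yes
  roles:
    - common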
Now for the interesting part…(and let’s face it, it’s the bit you’ve all been waiting for)

Integrating Ansible Tower with LDAP / Active Directory

Firstly, make sure that you can a) ping the AD server and b) make an LDAP connection to it.
ping is easy: just ping it by hostname (if you've set up DNS or a hosts file).
LDAP is pretty straightforward too: just telnet to it on port 389.  If you get Connection Refused, you'll want to check the Windows Firewall settings.
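For example (using the AD hostname configured later in this post):

ping ad0.wibblesplat.com
telnet ad0.wibblesplat.com 389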
On the Tower server, open up:
 /etc/awx/settings.py
After line 80 (or thereabouts) there’s a section on LDAP settings.
Settings you’ll want to change (and some sane examples):
AUTH_LDAP_SERVER_URI = ''
set this to the ldap connection string for your server:
AUTH_LDAP_SERVER_URI = 'ldap://ad0.wibblesplat.com:389'
On the AD Server, open Users and Computers, and create a user in Managed Service Accounts called something like “Ansible Tower” and assign it a suitably obscure password.  Mark it as “Password never expires”.
We’ll use this user to form the Bind DN for LDAP authentication.
I’ve also created another account in AD->Users, as “Bobby Tables” - with the sAMAccountName of bobby.tables, and a simple password.  We’ll use this to test that the integration is working later on.
We’ll need the full DN path for the config file, so open Powershell, and run
`dsquery user`
In the list that's returned, look for the LDAP DN of your newly created user:
 “CN=Ansible Tower,CN=Managed Service Accounts,DC=wibblesplat,DC=com”
Back in /etc/awx/settings.py, set:
AUTH_LDAP_BIND_DN = 'CN=Ansible Tower,CN=Managed Service Accounts,DC=wibblesplat,DC=com'
# Password used to bind with the above user account.
AUTH_LDAP_BIND_PASSWORD = 'P4ssW0Rd%!'
AUTH_LDAP_USER_SEARCH = LDAPSearch(
    'CN=Users,DC=wibblesplat,DC=com',   # Base DN
    ldap.SCOPE_SUBTREE,             # SCOPE_BASE, SCOPE_ONELEVEL, SCOPE_SUBTREE
    '(sAMAccountName=%(user)s)',    # Query
)
You’ll want to edit the AUTH_LDAP_USER_SEARCH  attribute to set your site’s Base DN correctly.  If you store your Users in an OU, you can specify that here.
AUTH_LDAP_GROUP_SEARCH = LDAPSearch(
    'CN=Users,DC=wibblesplat,DC=com',    # Base DN
    ldap.SCOPE_SUBTREE,     # SCOPE_BASE, SCOPE_ONELEVEL, SCOPE_SUBTREE
    '(objectClass=group)',  # Query
)
Again, you’ll want to specify your site’s Base DN for Groups here, and again, if you store your groups in an OU, you can specify that.
This is an interesting setting:
# Group DN required to login. If specified, user must be a member of this
# group to login via LDAP.  If not set, everyone in LDAP that matches the
# user search defined above will be able to login via AWX.  Only one
# require group is supported.
#AUTH_LDAP_REQUIRE_GROUP = 'CN=ansible-tower-users,CN=Users,DC=wibblesplat,DC=com'
# Group DN denied from login. If specified, user will not be allowed to login
# if a member of this group.  Only one deny group is supported.
#AUTH_LDAP_DENY_GROUP = 'CN=ansible-tower-denied,CN=Users,DC=wibblesplat,DC=com'
Basically, you can choose a group, and if the user’s not in that group, they ain’t getting in.
Both of these are specified as Group DNs:
It’s easy to discover Group DNs with
dsquery group
from Powershell on your AD server.
Another clever setting.  It’s possible to give users the Tower “is_superuser” flag, based on AD/LDAP group membership:
AUTH_LDAP_USER_FLAGS_BY_GROUP = {
    'is_superuser': 'CN=Domain Admins,CN=Users,DC=wibblesplat,DC=com',
}
Finally, the last setting allows you to map Tower Organisations (Organizations) to AD/LDAP groups:
AUTH_LDAP_ORGANIZATION_MAP = {
    'Test Org': {
        'admins': 'CN=Domain Admins,CN=Users,DC=wibblesplat,DC=com',
        'users': ['CN=ansible-tower-users,CN=Users,DC=wibblesplat,DC=com'],
        'remove_users' : False,
        'remove_admins' : False,
    },
    #'Test Org 2': {
    #    'admins': ['CN=Administrators,CN=Builtin,DC=example,DC=com'],
    #    'users': True,
    #    'remove_users' : False,
    #    'remove_admins' : False,
    #},
}
Committing the changes is as easy as restarting Apache, and the AWX Services.
Restart the AWX Services first, with
supervisorctl restart all
Now restart Apache, with:
service apache2 restart
I created two groups in
CN=Users,DC=wibblesplat,DC=com
Called “ansible-tower-denied” and “ansible-tower-users”.
I created two users, “Bobby Tables (bobby.tables)” - in ansible-tower-users, and “Evil Emily (evil.emily)” - in ansible-tower-denied.
When I restarted Ansible's services and tried to log in as bobby.tables, I got in.

When I view Organizations, I can see Test Org (according to the mapping), and Bobby Tables in that organisation.

When I try to log in as evil.emily, I get “Unable to login with provided credentials.” - Which is what we expect, as this user is in the deny access group.


Using Ansible Tower

As far as how to use Tower is concerned, I don't really want to re-hash what Ansible have already said in their User Manual PDF. 
I will, however, walk through the steps to get Parallax imported, and deployed on a test server.
For this purpose, I've built a Test VM in my development environment, running Ubuntu 14.04.  I'm going to configure Tower to manage this VM, download Parallax playbooks from Github, and create a job template to run them against the test server.
In this example, I'm logged in as the 'admin' superuser account, although with the correct permissions configured within Tower, using Active Directory, or manual permission assignment, it's possible to do this on an individual, and a team level.

A few quick definitions: 

Organizations : - This is the top-level unit of hierarchical organisation in Tower.  An Organization contains Users, Teams, Projects and Inventories.  Multiple Organizations can be used to create multi-tenancy on a Tower server.
Users : - These are the logins to Tower.  They're either manually created, or mapped in from LDAP.  Users have Properties (name, email, username, etc..), Credentials (used to connect to services and servers), Permissions (to give them Role-based access control to Inventories and Deployments), Organizations (organizations they're members of), and Teams (useful for subdividing Organizations into groups of users, projects, credentials and permissions).

Teams : - A team is a sub-division of an organisation.  Imagine you have a Networks team, who have their own servers.  You might also have a Development team, who need their development environment.  Creating Teams means that Networks manage theirs, and Development manage their own, without knowledge of each others' configurations.

Permissions : - These tie users and teams to inventories and jobs.  You have Inventory permissions, and Deployment permissions.
Inventory permissions give users and teams the ability to modify inventories, groups and hosts.
Deployment permissions give users and teams the ability to launch jobs that make changes ("Run Jobs"), or launch jobs that check state ("Check Jobs").

Credentials : - These are the passwords and access keys that Tower needs to be able to ssh (or use other protocols) to connect to the nodes it's managing.

There are a few types of Credentials that Tower can manage and utilise:
SSH Password - plain old password-based SSH login.
SSH Private Key - used for key-based SSH Authentication.
SSH Private Key w/ Passphrase - Used to protect the private key with a passphrase.  The passphrase may be optionally stored in the database.  If it's not, Tower will ask you for the passphrase when it needs to use the Credential.
Sudo Password - Required if sudo has to run, and needs a password to auth.
AWS Credentials - Stores AWS Access Key and Secret Key securely in the Tower Database.
Rackspace credentials - Stores Rackspace username and Secret Key.
SCM Credentials - Stores credentials used for accessing source code repositories for the storage and retrieval of Projects.
Projects : - These are where your playbooks live.  You can either add them manually, by cloning into
/var/lib/awx/projects
or by using Git, SVN, or Mercurial and having Tower do the clone automatically before each job run (or on a schedule).
Inventories : - These effectively replace the grouping within the Playbook directory hosts file.  You can define groups of hosts, and then configure individual hosts within these groups.  It's possible to assign host-specific variables, or Inventory-specific variables from this.

Groups : - These live in Inventories, and allow you to collect groups of similar hosts, to which you can apply a playbook.

Hosts : - These live in Groups, and define the IP address / Hostname of the node, plus some host variables.

Job Templates : - This is basically a definition of an Ansible job, that ties together the Inventory (and its hosts/groups), a Project (and its Playbooks), Credentials, and some extra variables.  You can also specify tags here (like --tags on the ansible-playbook command line).
Job Templates can also accept HTTP Callbacks, which is a way that a newly provisioned host can contact the Tower server, and ask to be provisioned.  We'll come back to this concept in Part 5.
Jobs : - These are what happens when a Job Template gets instantiated, and runs a playbook against a set of hosts from the relevant Inventory.
Running Parallax with Tower
The first thing we need to do (unless you've already done this, or had one automatically created by LDAP mapping), is to create an Organization. - Again, it's best to refer to the extant Ansible Tower documentation linked above for the best way to do this.
I've actually mapped my Test Org in via the LDAP interface, so the next step is to create a Team.
I've called my Team "DevOps"
I'm going to assign them a Credential now.
Navigate to Teams / DevOps


Under "Credentials", click the [+]
Select type "Machine"
 - On a server somewhere, run ssh-keygen, and generate an RSA key.  Copy the private key to the clipboard, and paste it into the SSH Private Key box.

 Scroll down, and click Save.
From the tabbed menu at the top, click Projects and then click the [+]
Give the Project a meaningful name and description.  Enter the SCM Type as Git
Under SCM URL, give the public Github address for Parallax, and under SCM Branch set "tower"
Set SCM Update Options to "Update on Launch" - this will do a git update before running a job, so you'll always get the latest version from Git.


Click Save.

This will trigger a job, which will clone the latest version from Git, and save it into the Projects directory.  If this fails, you might need to run:
chown -R awx /var/lib/awx/projects

Next, create an Inventory.
Pretty straightforward - name, description, organisation.

Select that Inventory, and create a Group - It's possible to import Groups from EC2, by selecting the Source tab when you create a new group.
Select that group you just created, and create a host under it, with the IP Address / hostname of your test server.

At this point, you can assign per-host variables.
Nearly there!
Click "Job Templates", and create a new job template.  As I said before, these really tie it all together.
Give it a name, then select your Inventory, Project, Playbook and Credential.

Click Save.

To launch it, click the Rocketship from the Job Templates Listing.


You'll get redirected to the Jobs page, showing your latest job in Queued.

Unless you have a very busy Tower server, it won't stay Queued for long.  Click the refresh button on the Queued section to reload, and you should see it's moved to Active.

You can click on the job for an update on its status, or just patiently wait for it to complete.
When the job's done, you'll either have a red dot, or a green dot indicating the status of the job.


That's it.  You've installed Ansible Tower, integrated it with Active Directory, and created your first deployment job of Parallax with Tower.

Other Resources: 

Ansible Tower Demo video (12 minutes long)
Other videos from Ansible on Youtube
Coming Soon: Part 5. Automation with Ansible.

Part 3: Ansible and Amazon Web Services

Anonymous
You should by now have worked your way through Parts 1 and 2 of this series.  If you haven't, go and do that now.
You should also be familiar with some of the basic concepts surrounding AWS deployment, how AWS works, and so on.
So, you'll have some idea how Ansible uses playbooks to control deployment to target hosts, and some idea of the capability for deploying code from version control systems (in Part 2, we used the Ansible git: module).
In this section, we'll be looking at how you can put it all together, using Ansible to provision an EC2 instance, do the basic OS config, and deploy an application.
In previous parts, we've only needed to have the Ansible python module installed on the execution host (y'know, the one on which you run ansible-playbook and so on).  When we're dealing with Amazon Web Services (or Eucalyptus), we need to install one extra module, called 'boto', which is the AWS library for python.
You can do this either from native OS packages, with
sudo apt-get install python-boto
(on Ubuntu)

sudo yum install python-boto 
(on RHEL, Centos, Fedora et al.)
or from pip:
pip install boto
Interesting side note: I had to do this globally, as even inside a virtualenv, ansible-playbook reported the following error:
  failed: [localhost] => {"failed": true}
  msg: boto required for this module
  FATAL: all hosts have already failed -- aborting

I think we'll create a separate playbook for this, for reasons which should become apparent as we progress.
From your parallax directory, create a branch, and a new subdirectory under playbooks/
I'm calling it part3_ec2, but you'll probably want to give your playbook and branch a more logical name.
I'm going to go ahead and create a totally different hosts inventory file, this time only including four lines:
[local]
localhost
[launched]

The reason for this is that a lot of the configuration and provisioning of EC2 hosts actually happens from the local machine that you're running ansible-playbook on.
The site.yml in this playbook will have a different format.  For this first attempt, I'm not sure if I can see any real value in breaking the provisioning up into separate roles.  I might change that in future, if we decide to configure Elastic LoadBalancers and so on.

AWS and IAM
---------------
Amazon Web Services now provide a federated account management system called IAM (Identity and Access Management). Traditionally, with AWS, you could only create two pairs of Access/Secret keys.
With IAM, you can create groups of users, with role-based access control capabilities, which give you far more granular control over what happens with new access/secret key pairs.
In this article, I'm going to create an IAM group, called "deployment"
From your AWS console, visit the IAM page:
https://console.aws.amazon.com/iam/home#home
Click the "Create a new group of users" button as shown: http://cl.ly/image/1p3Y1s3q1h1b
We'll call it "Deployment" http://cl.ly/image/1v1G160o330s
We need to assign roles to the IAM group.  Power User Access seems reasonable for this: it provides full access to AWS services and resources, but does not allow user group modifications.
http://cl.ly/image/260J1O314045

This is a JSON representation of the permissions configuration:
http://cl.ly/image/0m0Y0O2A0y2z
We'll create some users to add to this Deployment group:
Let's call one "ansible".
http://cl.ly/image/0C410p1n0N3p

We should get an option now to download the user credentials for the user we just created.
 ansible
Access Key ID:
AKHAHAHAFATCHANCELOLLHA
Secret Access Key:
rmmDoYouReallyThingImGoingTo+5ShareThatzW
If you click the "Download Credentials" button, it'll save a CSV file containing the Username, and the Access/Secret Key.
--
Back to the main theme of this evening's symposium:
To avoid storing the AWS access and secret keys in the playbook, it's recommended that they be set as Environment Variables, namely:
AWS_ACCESS_KEY
and
AWS_SECRET_KEY
Second to that, we'll need a keypair name for the new instance(s) we're creating.  I assume you're already familiar with the process of creating SSH keypairs on EC2.
I'm calling my keypair "ansible_ec2".  Seems logical enough.
I've moved this new keypair, "ansible_ec2.pem" into ~/.ssh/ and set its permissions to 600 (otherwise ssh throws a wobbly.)
We'll also need to pre-create a security group for these servers to sit in.  As you'll see in my site.yml, I've called this "sg_thingy".  I'm going to create this as a security group, allowing TCP ports 22, 80 and 443, and all ICMP traffic through the firewall.
If you haven't specified an existing keypair, or existing security group, ansible will fail and return an error.
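If you'd rather have Ansible create that security group too, the ec2_group module can do it from the same provisioning play. This is just a sketch along the lines described above (group name, ports and region as used in this post); it isn't part of the Parallax repo:

  - name: Ensure the sg_thingy security group exists
    local_action:
      module: ec2_group
      name: sg_thingy
      description: SSH, HTTP, HTTPS and ICMP for Ansible-provisioned nodes
      region: us-east-1
      rules:
        - proto: tcp
          from_port: 22
          to_port: 22
          cidr_ip: 0.0.0.0/0
        - proto: tcp
          from_port: 80
          to_port: 80
          cidr_ip: 0.0.0.0/0
        - proto: tcp
          from_port: 443
          to_port: 443
          cidr_ip: 0.0.0.0/0
        - proto: icmp
          from_port: -1
          to_port: -1
          cidr_ip: 0.0.0.0/0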
I'm going to create a new site.yml file too, containing the following:
---
# Based heavily on the Ansible documentation on EC2:
# http://docs.ansible.com/ec2_module.html
  - name: Provision an EC2 node
    hosts: local
    connection: local
    gather_facts: False
    tags: provisioning
    vars:
      instance_type: t1.micro
      security_group: sg_thingy
      image: ami-a73264ce
      region: us-east-1
      keypair: ansible_ec2
    tasks:
      - name: Launch new Instance
        local_action: ec2 instance_tags="Name=AnsibleTest" group={{ security_group }} instance_type={{ instance_type}} image={{ image }} wait=true region={{ region }} keypair={{ keypair }}
        register: ec2
      - name: Add instance to local host group
        local_action: lineinfile dest=hosts regexp="{{ item.public_dns_name }}" insertafter="[launched]" line="{{ item.public_dns_name }} ansible_ssh_private_key_file=~/.ssh/{{ keypair }}.pem"
        with_items: ec2.instances
        #"
      - name: Wait for SSH to come up
        local_action: wait_for host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
        with_items: ec2.instances
  - name: With the newly provisioned EC2 node configure that thing
    hosts: launched # This uses the hosts we appended to the [launched] group in the inventory file above.
    sudo: yes # On EC2 nodes, this is automatically passwordless. 
    remote_user: ubuntu # This is the username for all ubuntu images, rather than root, or something weird.
    gather_facts: True  #We need to re-enable this, as we turned it off earlier.
    roles:
      - common
      - redis
      - nginx
      - zeromq
      - deploy_thingy
      # These are the same roles as we configured in the 'Parallax/example' playbook, except they've been linked into this one.

I've gone ahead and predefined a hostgroup in our hosts inventory file called '[launched]', because I'm going to insert the details of the launched instances into that with a local_action.
If it works, you should get something like this appearing in the hosts file after it's launched the instance:
[launched]
ec2-50-19-163-42.compute-1.amazonaws.com ansible_ssh_private_key_file=ansible_ec2.pem
I've added a tag to the play that builds an EC2 instance, so that you can run ansible-playbook a further time with the command-line argument --skip-tags provisioning, and do the post-provisioning config steps without having to rebuild the whole VM from the ground up.
I've added some stuff to the common role, too, to allow us to detect (and skip bits) when it's running on an EC2 host.
  - name: Gather EC2 Facts
    action: ec2_facts
    ignore_errors: True
And a little further on, we use this when: selector to disable some functionality that isn't relevant on EC2 hosts.
    when: ansible_ec2_profile != "default-paravirtual"

Running Ansible to Provision
============================

I'm running ansible-playbook as follows:
AWS_ACCESS_KEY=AKHAHAHAFATCHANCELOLLHA AWS_SECRET_KEY="rmmDoYouReallyThingImGoingTo+5ShareThatzW" ansible-playbook -i hosts site.yml
Because I've pre-configured the important information in site.yml, Ansible can now go off, using the EC2 API and create us a new EC2 virtual machine.
PLAY [Provision an EC2 node] **************************************************
TASK: [Launch new Instance] ***************************************************
changed: [localhost]
TASK: [Add instance to local host group] **************************************
ok: [localhost] => (item={u'ramdisk': None, u'kernel': u'aki-88aa75e1', u'root_device_name': u'/dev/sda1', u'placement': u'us-east-1a', u'private_dns_name': u'ip-10-73-193-26.ec2.internal', u'ami_launch_index': u'0', u'image_id': u'ami-a73264ce', u'dns_name': u'ec2-54-205-128-232.compute-1.amazonaws.com', u'launch_time': u'2014-01-28T22:33:50.000Z', u'id': u'i-414ec06f', u'public_ip': u'54.205.128.232', u'instance_type': u't1.micro', u'state': u'running', u'private_ip': u'10.73.193.26', u'key_name': u'ansible_ec2', u'public_dns_name': u'ec2-54-205-128-232.compute-1.amazonaws.com', u'root_device_type': u'ebs', u'state_code': 16, u'hypervisor': u'xen', u'virtualization_type': u'paravirtual', u'architecture': u'x86_64'})
TASK: [Wait for SSH to come up] ***********************************************
ok: [localhost] => (item={u'ramdisk': None, u'kernel': u'aki-88aa75e1', u'root_device_name': u'/dev/sda1', u'placement': u'us-east-1a', u'private_dns_name': u'ip-10-73-193-26.ec2.internal', u'ami_launch_index': u'0', u'image_id': u'ami-a73264ce', u'dns_name': u'ec2-54-205-128-232.compute-1.amazonaws.com', u'launch_time': u'2014-01-28T22:33:50.000Z', u'id': u'i-414ec06f', u'public_ip': u'54.205.128.232', u'instance_type': u't1.micro', u'state': u'running', u'private_ip': u'10.73.193.26', u'key_name': u'ansible_ec2', u'public_dns_name': u'ec2-54-205-128-232.compute-1.amazonaws.com', u'root_device_type': u'ebs', u'state_code': 16, u'hypervisor': u'xen', u'virtualization_type': u'paravirtual', u'architecture': u'x86_64'})

Cool.
Now what? 
Well, we'll want to configure this new instance *somehow*.  As we're already using Ansible, that seems like a pretty good way to do it.
To avoid duplicating code, I've symlinked the roles from the example playbook into the part3 playbook, so that I should theoretically be able to include them from here.
Come to think of it, you should be able to merge the branches (you'll probably have to do this semi-manually), because it should be possible to have the two different play types coexisting, due to the idempotent nature of Ansible.
I've decided not to merge my playbooks into one directory, because for the time being, i want to keep site.yml separate between the EC2 side and the non-EC2 side.
As I mentioned earlier, I added a tag to the instance provisioning play in the site.yml file for this playbook.  This means that now I've built an instance (and it's been added to the hosts inventory (go check!)), I can run the configuration plays, and skip the provisioning stuff, as follows:
ansible-playbook -i hosts --skip-tags provisioning  site.yml
This will now go off and do stuff.  I had to go through and add some conditionals to tell some tasks not to run on EC2-provisioned nodes, and some other stuff to prevent it looking for packages that are only available in Ubuntu saucy.
I'm not going to paste the full output, because we should now be fairly familiar with the whole ansible deployment/configuration thing.
I will however, show you this:
PLAY RECAP ********************************************************************
ec2-50-19-163-42.compute-1.amazonaws.com : ok=30   changed=11   unreachable=0    failed=0
It's probably worth noting that because I chose to append the newly added host to the on-disk inventory file, subsequent plays in the same run won't see it, so it's best to start a fresh ansible-playbook run, this time skipping the provisioning tag.
Proof it works:

For what it's worth, I'm going to destroy that instance in a moment, so you'll have to do it yourself. Bwahahaha.
My EC2 deployment playbook / branch etc can be found here: https://github.com/tomoconnor/parallax/tree/master/playbooks/part3_ec2
Part 4, now available: Ansible with Ansible Tower

Part 2: Deploying Applications with Ansible

Anonymous
You should by now have worked your way through Part 1: Getting Started with Ansible.  If you haven't, go and do that now.
In this article, I'll be demonstrating a very simple application deployment workflow, deploying an insanely simple node.js application from a github repository, and configuring it to start with supervisord, and be reverse-proxied with Nginx.
As with last time, we'll be using Parallax as the starting point for this.  I've actually gone through and put the config in there already (if you don't feel like doing it yourself ;)

- name: Install all the packages and stuff required for a demobox
  hosts: demoboxes
  user: user
  sudo: yes
  roles:
    - redis
    - nginx
    - nodejs
    - zeromq
#    - deploy_thingy

In the 9c818d0b8f version, you'll be able to see that I've created a new role, inventively called "deploy_thingy".

**Updated**
It was recommended to me that my __template__ role be based on the output of
ansible-galaxy init $rolename
So I've recreated the __template__ role to be based on an ansible-galaxy role template.
There aren't that many changes, but it does include a new 'defaults/' directory, plus the metadata required if you wish to push the role back to the public Galaxy role index.

In an attempt to make creating new roles easier, I put a __template__ role into the file tree when I first created Parallax, so that all you do to create a new role is execute:
cp -R __template__ new_role_name
in the roles/ directory.
.
├── files
│   ├── .empty
│   ├── thingy.nginx.conf
│   └── thingy.super.conf
├── handlers
│   ├── .empty
│   └── main.yml
├── meta
│   ├── .empty
│   └── main.yml
├── tasks
│   └── main.yml
└── templates
    └── .empty

In this role, we define some dependencies in meta/main.yml, there's two files in the files/ directory, and there's a set of tasks defined in tasks/main.yml.  There's also some handlers defined in handlers/main.yml.

Let's have a quick glance at the meta/main.yml file.
---
dependencies:
  - { role: nodejs }
  - { role: nginx }

This basically sets the requirement that this role, deploy_thingy, depends on services installed by the roles: nginx and nodejs.
Although these roles are explicitly stated to be installed in site.yml, this gives us a level of belt-and-braces configuration, in case the deploy_thingy role were ever included without the other two roles being explicitly stated, or if it were configured to run before its dependencies had explicitly been set to run.
tasks/main.yml is simple.
---
 - name: Create directory under /srv for thingy
   file: path=/srv/thingy state=directory mode=755
 - name: Git checkout from github
   git: repo=https://github.com/tomoconnor/shiny-octo-computing-machine.git
        dest=/srv/thingy
 - name: Drop Config for supervisord into the conf.d directory
   copy: src=thingy.super.conf dest=/etc/supervisor/conf.d/thingy.conf
   notify: reread supervisord
 - name: Drop Reverse Proxy Config for Nginx
   copy: src=thingy.nginx.conf dest=/etc/nginx/sites-enabled/thingy.conf
   notify: restart nginx

We'll create somewhere for it to live, check the code out of my git repository [1], then drop two config files in place: one to configure supervisor(d), and one to configure Nginx.
Because the tasks that configure supervisor(d) and nginx change the configuration of those services, there are notify: handlers to reload the configuration, or restart the service.
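For reference, a supervisord program definition like the one dropped in as thingy.super.conf generally looks something along these lines. This is a generic sketch, not a copy of the actual file in the Parallax repo; the command and paths are illustrative:

[program:thingy]
command=/usr/bin/node /srv/thingy/app.js
directory=/srv/thingy
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/thingy.log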

Let's have a quick peek at those handlers now:
---
  - name: reread supervisord
    shell: /usr/bin/supervisorctl reread && /usr/bin/supervisorctl update
  - name: restart nginx
    service: name=nginx state=restarted

When the supervisor config changes (and we add something to /etc/supervisor/conf.d), we need to tell supervisord to re-read its configuration files, at which point it will see the new services, and then run supervisorctl update, which will set the state of the newly added items from 'available' to 'started'.
When we change the nginx configuration, we'll hit nginx with a restart.  It's possible to do softer actions, like reload here, but I've chosen service restart for simplicity.

I've also changed the basic Ansible config, and configuration of roles/common/files/insecure_sudoers so that it will still ask you for a sudo password in light of some minor criticism.
I've found that if you're developing Ansible playbooks on an isolated system, there's no great harm in disabling SSH Host Key Checking (in ansible.cfg); similarly, there's no great problem in disabling sudo authentication, so it's effectively like NOPASSWD use.
However, Micheil made a very good point that in live environments it's a bit dodgy to say the least.  So I've commented those lines out of the playbook in Parallax, so that it should give users a reasonable level of basic security.  At the end of the day, it's up to you how you use Parallax, and if you find that disabling security works for you, then fine.  It's not like you haven't been warned.
But I digress.
The next thing to do is to edit site.yml, and ensure that the new role we've created gets mapped to a hostgroup in the play configuration.
In the latest version of Parallax this is already done for you, but as long as the role name in the list matches the directory in roles/, it should be ready to go.
Now if we run:
ansible-playbook -k -K -i playbooks/example/hosts playbooks/example/site.yml

It should go through the playbook, installing stuff, then finally do the git clone from github, deploy the configuration files, and trigger a reread of supervisord, and a restart of nginx.
If I now test that it's working, with:
curl -i http://192.168.20.151/
HTTP/1.1 200 OK
Server: nginx/1.4.1 (Ubuntu)
Date: Mon, 27 Jan 2014 14:51:29 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 170
Connection: keep-alive
X-Powered-By: Express
ETag: "1827834703"

That X-Powered-By: Express line shows that Nginx is indeed working, and that the node.js application is running too.
You can get more information about stuff that supervisord is controlling by running:
sudo supervisorctl status
on the target host.
$ sudo supervisorctl status
thingy                           RUNNING    pid 19756, uptime 0:00:06
If the Nginx side is configured, but the node.js application isn't running, you'd get an HTTP 502 error, as follows:
curl -i http://192.168.20.151/
HTTP/1.1 502 Bad Gateway
Server: nginx/1.4.1 (Ubuntu)
Date: Mon, 27 Jan 2014 14:59:34 GMT
Content-Type: text/html
Content-Length: 181
Connection: keep-alive
So, that's it.
A very simple guide to deploying a very simple application with Ansible.  Of course, it should be obvious that you can deploy *anything* from a git repository, it really boils down to the configuration of supervisord.  For that matter, it doesn't have to be supervisord.
I consider configuring supervisord for process controlling to be outside of the scope of this article, but I might touch on it in future in more detail.
Next up, Part 3: Ansible and Amazon Web Services.
1: It's really simple, and I'm not very node-savvy, so I'm sorry if it sucks.

Part 1: Getting Started with Ansible

Anonymous
An introduction to Ansible Configuration Management

A brief history of Configuration Management
===========================================

* CFEngine - Released 1993. Written in C
* Puppet - Released 2005 - Written in Ruby. Domain Specific Language (DSL). SSL nightmare.
* Chef - Released 2009 - Written in Ruby, also a DSL, more like pure Ruby
* Juju - Released 2010, Python, Very Ubuntu.
* Salt - Released 2011, Python, Never got it working right
* Ansible - Released 2012, Python.  Awesome.

Why Ansible?
============
It’s agentless.  Unlike Puppet, Chef, Salt, etc.. Ansible operates only over SSH (or optionally ZeroMQ), so there’s none of that crap PKI that you have to deal with using Puppet.
It’s Python. I like Python.  I’ve been using it far longer than any other language.
It’s self-documenting,  Simple YAML files describing the playbooks and roles.
It’s feature-rich.  Some call this batteries included, but there’s over 150 modules provided out of the box, and new ones are pretty easy to write.

Installing Ansible
==================

You can get it from the Python Package Index (PyPI):
pip install ansible

You can get it from your OS package index
sudo apt-get install ansible

You can download the source from Github and run setup.py yourself.
git clone https://github.com/ansible/ansible.git

My preferred way of installing it is inside a virtualenv, then using pip to install it.
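Roughly, that looks like this (the virtualenv path is just an example):

virtualenv ~/venvs/ansible
source ~/venvs/ansible/bin/activate
pip install ansible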

Ansible Modes
=============
Playbook Mode
 - This executes a series of commands in order, according to a playbook.

Non-playbook mode
 - This executes an ansible module command on a target host.
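As a quick taste of non-playbook mode, you can run a single module against a host group straight from the command line, for example:

ansible all -i hosts -m ping
ansible webservers -i hosts -m command -a "uptime"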

I'll primarily be focussing on Playbook Mode, and hopefully giving an insight on what a playbook consists of, and how to use Ansible to deploy an application.
Parallax
========
I've put together a collection of Ansible bits I've used in the past to give a quick-start of what a Playbook might look like for an example service.
I'll be referring back to this in the rest of this article, so you'll probably want to grab a copy from Github to play with:
git clone https://github.com/tomoconnor/parallax.git

First Steps
===========

1. Install Ansible (see above)
2. Clone Parallax

From a first look at the source tree of Parallax, you should see a config file, and a directory called "playbooks".
The config file (ansible.cfg) contains the ansible global configuration.  Lots more information about it, and its directives can be found here:
http://docs.ansible.com/intro_configuration.html#the-ansible-configuration-file

Playbooks
---------
Playbooks are the bread and butter of Ansible.  They represent collections of 'plays', configuration policies which get applied to defined groups of hosts.
In Parallax, there's a "playbooks" directory, containing an example playbook to give you an idea of what an Ansible Playbook looks like.

Anatomy of a Playbook
=====================
If you take a look inside the Parallax example playbook, you'll notice there's the following file structure:
.
├── example_servers.yml
├── group_vars
│   ├── all
│   └── example_servers
├── host_vars
│   └── example-repository
├── hosts
├── repository_server.yml
├── roles
│   ├── __template__
│   ├── common
│   ├── gridfs
│   ├── memcached
│   ├── mongodb
│   ├── nginx
│   ├── nodejs
│   ├── redis
│   ├── repository
│   ├── service_example
│   └── zeromq
└── site.yml

Looking at that tree, there's some YAML files, and some directories.
There's also a file called "hosts".  This is the Ansible Inventory file, and it stores the hosts, and their mappings to the host groups.
The hosts file looks like this:
[example_servers]
192.168.100.1 set_hostname=vm-ex01
# example of setting a host inventory by IP address.
# also demonstrates how to set per-host variables.

[repository_servers]
example-repository
#example of setting a host by hostname.  Requires local lookup in /etc/hosts
# or DNS.
[webservers]
web01
[dbservers]
db01

It's a standard INI-like file format: hostgroups are defined in [square brackets], one host per line.  Per-host variables can follow the hostname or IP address.  If you declare a host in the inventory by hostname, it must be resolvable either in your /etc/hosts file, or by a DNS lookup.
The playbook definitions are in the .yml files.  There are three in the Parallax example: two separate YAML files, and one that's a kind of catch-all in 'site.yml'.
site.yml is the default name for a playbook, and you'll likely see it crop up when you look at other ansible examples (https://github.com/ansible/ansible-examples/).
You'll also see lots of files called 'main.yml'.  This is the default filename for a file containing Ansible Tasks, or Handlers.  More on that later.
So site.yml consists of three named blocks.  If you look closely, you'll see that each block has a name, a hosts: line, and a list of roles.
The hosts: line sets which host group (from the Inventory file 'hosts') to apply the following roles to.
The roles: line and its subsequent role entries define the roles to apply to that hostgroup.  The roles currently defined in Parallax can be seen in the tree structure above.
You can either put multiple named blocks in one site.yml file, or split them up in the manner of 'example_servers.yml' and 'repository_server.yml'.
Other stuff in 'site.yml':
'user:' - This sets the name of the user to connect to the target as.  In newer Ansible versions this is usually written as remote_user.
'sudo:' - This tells Ansible whether it should run sudo on the target when it connects.  You'll probably want to set this as "sudo: yes" most often, unless you plan to connect as root.  In which case, this (ಠ.ಠ) is for you.
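Putting that together, a single named block in site.yml has roughly this shape (a sketch rather than a verbatim copy from Parallax; the user name is made up):
---
- name: Apply base roles to the example servers
  hosts: example_servers
  user: deploy
  sudo: yes
  roles:
    - common
    - nginx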


Roles
=====
A role should encapsulate all the things that have to happen to make a thing work.  If that sounds vague, it's because it is.
The parallax example has a role called common, which installs and configures the things that I've found are useful as prerequisites for other things.  You should go through and decide which bits you want to put into your 'common' role, if you decide to have one.
Roles can have dependencies, which will require that another role be applied first.  This is good for things like handling the dependencies before you deploy code.
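Dependencies are declared in a role's meta/main.yml.  A minimal sketch for a role that needs 'common' applied first might look like this:
---
dependencies:
  - { role: common }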

Inside A Role
-------------
Let's take a look at one of the pre-defined roles in Parallax:
├── redis
│   ├── files
│   ├── handlers
│   ├── meta
│   ├── tasks
│   └── templates

This, unsurprisingly, is a quick role I threw together that installs Redis from an Ubuntu PPA and starts the service.
In general, a role consists of the following subdirectories, "files", "handlers", "meta", "tasks" and "templates".
files/ contains files that will be copied to the target with the copy: module.
handlers/ contains YAML files defining 'handlers': small bits of configuration that can be triggered with the notify: action inside a task.  Usually it's just handlers/main.yml.  See http://docs.ansible.com/playbooks_intro.html#handlers-running-operations-on-change for more information on what handlers are for; there's also a short sketch after this list.
meta/ contains YAML files containing role dependencies.  Usually just meta/main.yml.
tasks/ contains YAML files containing a list of named steps which Ansible will execute in order on a target.  Usually just tasks/main.yml.
templates/ contains Jinja2 template files, which can be used in a task with the template: module to interpolate variables in the template, then copy the template to a location on the target.  Files in this directory often end .j2 by convention.
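To give a rough idea of how handlers, tasks and templates fit together (the nginx file names below are made up for illustration, not taken from Parallax), a handler in handlers/main.yml might look like:
---
- name: restart nginx
  service: name=nginx state=restarted

and a task that deploys a template and notifies it:
- name: Deploy nginx configuration
  template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
  notify: restart nginx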

Example Role: Redis
-------------------

Path: parallax/playbooks/example/roles/redis
Structure: 
.
├── files
├── handlers
├── meta
├── tasks
│   └── main.yml
└── templates

All there is in this one, is a task file, unsurprisingly called 'main.yml' - Told you that name would crop up again.
- Actually, there's a .empty file under files, handlers, meta, and templates.  This is just so that if you commit it to git, the empty directories won't vanish.

Let's have a look at the redis role's tasks:
$ cat tasks/main.yml
---
 - name: Add the Redis PPA
   apt_repository: repo='ppa:rwky/redis' update_cache=yes
 - name: Install Redis from PPA
   apt: pkg=redis-server state=installed
 - name: Start Redis
   service: name=redis state=started

Each named block has an action below it.  Each action refers to an Ansible Module. There's an index of all available modules and their documentation here: http://docs.ansible.com/list_of_all_modules.html

Basically explained:
apt_repository: configures a new apt repository on the system.  It can take a PPA name, or a URL for a repository.  update_cache tells Ansible to run apt-get update after it has added the new repository.
apt: tells Ansible to run apt-get install $pkg, using whatever value has been defined for pkg.
service: tells Ansible to run "sudo service $name start" (or the equivalent) on the target.

I recommend you have a trawl through the roles as configured in Parallax, and see if you can make sense of how they work.  If you open the Ansible Module Index, you'll be able to use that as a quick reference guide for the modules in the roles.

One of the most useful features of Ansible, in my opinion, is the "with_items:" action that some modules support.  If you want to install multiple packages with apt at the same time, the easiest way to do it is like this:
(example from roles/common/tasks/main.yml)

 - name: install default packages
   apt: pkg={{ item }} state=installed
   with_items:
     - aptitude
     - vim
     - supervisor
     - python-dev
     - htop
     - screen

Running Ansible
===============

Once you've got your Host Inventory defined, and at least one play for Ansible to execute, it'll be able to do stuff for you.

I've just spun up a new Ubuntu 13.10 virtual machine.  It has the IP address 192.168.1.96.

I'm going to create a new hostgroup called [demoboxes] and put that in:
[demoboxes]
192.168.1.96 access_user=user

The access_user variable is required by the common role; it's used to set up the SSH authorised keys under that user's home directory.


and in site.yml:
- name: Install all the packages and stuff required for a demobox
  hosts: demoboxes
  user: user
  sudo: yes
  roles:
    - redis
    - nginx
    - nodejs
    - zeromq
I've included a few other roles from Parallax for the halibut.
I'm going to run ansible-playbook -i hosts site.yml and see what happens.
For the first run, we'll need to tell Ansible the SSH and sudo passwords, because one of the things the common role does is configure passwordless sudo and deploy an SSH key.
In order to use Ansible with SSH passwords (pretty much required for the first run against normal machines, unless you deploy keys with something far lower level, like kickstart), you'll need the sshpass program.
On Ubuntu, you can install it as follows:
sudo apt-get install sshpass
When you use Parallax as a starting point, one thing you'll want to do is edit
 roles/common/files/authorized_keys
and put your keys in it.

So, for a first run, it's:
 ansible-playbook -i hosts -k -K site.yml

You'll get the following prompts for the ssh password, and the sudo password:
SSH password:
sudo password [defaults to SSH password]:

Enter whatever password you gave Ubuntu at install time.

Once the following tasks have completed, you can remove -k -K from the ansible-playbook command line:
TASK: [common | deploy access ssh-key to user's authorized keys file] *********
changed: [192.168.1.96]
TASK: [common | Deploy Sudoers file] ******************************************
changed: [192.168.1.96]

Because at that point, you'll be able to use your SSH key and passwordless sudo.

At the end of the run, you'll get a Play Recap, as follows:
PLAY RECAP ********************************************************************
192.168.1.96               : ok=19   changed=8    unreachable=0    failed=0
You should now be able to open http://192.168.1.96/ (or whatever your server's IP address is) in a browser.

Toms-iMac-2:example tomoconnor$ curl -i http://192.168.1.96/
HTTP/1.1 200 OK
Server: nginx/1.4.1 (Ubuntu)
Date: Sun, 26 Jan 2014 17:48:47 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 06 May 2013 10:26:49 GMT
Connection: keep-alive
ETag: "51878569-264"
Accept-Ranges: bytes

Hurrah.

Thursday, 23 March 2017

Ansible Quick Start - A Brief Introduction

Anonymous
Recently, I have been working with Ansible, an IT automation, configuration management and provisioning tool along the same lines as Chef and Puppet. If you are responsible for managing servers - even if it is just one - you would be well served to learn one of these tools.

Why Ansible?

And, more generally, why use a configuration management tool at all? Anyone with an operations or development background has surely had to log into a server to change a configuration option, install a package, restart a service, or something else. It is easy enough to log in via SSH, make a quick change to get your application working, and then log out again. I know that I have done this hundreds (maybe thousands?) of times over my career. Sometimes, I would be diligent and document that change. More often, I would not. Then, weeks or months later, I would run into the same problem and have to rack my brain to remember how I fixed it. After resorting to scouring Google for answers, I'd find the solution, slap my forehead, and then proceed to make the exact same change all over again. This process may get you by for a time but there is definitely a better way. Especially in this day and age with the proliferation of cloud computing and cheap, disposable virtual machines, the ability to manage servers in a fast, repeatable and consistent manner is of paramount importance.
As mentioned above, there are a variety of tools that can help. But, there is definitely a barrier to entry, especially if you are just managing a handful of servers and don't have the resources to spend a lot of time learning new tools. Chef and Puppet are fantastic and can be used to manage extremely large infrastructures but there is no denying that they have a steep learning curve and can be difficult to set up and configure (at least in my experience). Ansible aims to be simpler and easier to understand while still maintaining the efficiency and power of other tools. It uses an agentless architecture so you don't have to bootstrap your machines with a client application. And, it uses a simple configuration file format that is easy to understand and read for sysadmins and developers alike. Finally, Ansible unifies remote execution and configuration management - some other solutions require separate tools for these tasks. So, let's take a look.
In order to follow along, you will need at least one server you can play around with. If you don't have one, you can use Vagrant to spin up a virtual machine or two to work with. Another option I like to use is DigitalOcean - it is an easy, low-cost way to work with virtual machines in the cloud. You will also need a machine to run Ansible on. If you are running Linux or OSX, you should be good to go. As far as I know, Ansible will not run (easily) on Windows.

Installation

If you are on OSX, the easiest way to get Ansible installed is to use Homebrew.
$ brew update
$ brew install ansible
On Ubuntu 14.04 (Trusty Tahr), you can run the following commands to get a recent version of Ansible:
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible
For other options and more details on how to install Ansible on other releases and platforms, you should consult the Ansible Installation documentation.

Inventory File

Ansible uses an inventory file to determine what hosts to work against. In its simplest form, an inventory file is just a text file containing a list of host names or IP addresses - one on each line. For example:
192.168.0.20
192.168.0.21
192.168.0.22
The inventory file actually uses the INI format and has a lot more capabilities than just a flat list of hosts. It supports things like specifying aliases, SSH ports, name patterns, groups, and variables. For more details, check out the inventory docs. For our purposes, we just need a simple list of hosts.
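For illustration (the host names are entirely made up), a slightly richer inventory using groups, per-host variables and name patterns might look like:
[webservers]
web1.example.com
web2.example.com ansible_ssh_port=2222

[dbservers]
db[01:03].example.com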
By default, Ansible looks for an inventory file at /etc/ansible/hosts. I like to be more explicit about this, especially when experimenting, and specify the path to an inventory file that I am working with. Most Ansible commands support passing in an option like --inventory=/path/to/inventory/file. We will see more of this later. For now, create a text file called inventory.ini wherever you like and add the host name or IP address of the server or servers you want to manage with Ansible.

Testing Connectivity

As mentioned above, Ansible depends on SSH access to the servers you are managing. If you are able to access your servers via SSH, then you should be able to manage them with Ansible. Ansible works best when you have SSH public key authentication configured so that you don't have to use passwords to access your hosts. For the rest of this post, I am going to assume that this is the case but Ansible does have options for specifying passwords in its commands (run man ansible for details). Ansible also assumes that you are going to authenticate as the user name you are currently running commands as. If this is not the case, you can pass --user=username or -u username to tell it to use a specific user. In these examples, I am working on newly provisioned DigitalOcean servers and need to authenticate as the root user.
Let’s verify we have everything setup correctly and we can connect to our host(s).
$ ansible all --inventory-file=inventory.ini --module-name ping -u root
Note: If you are using a Vagrant virtual machine, you are likely going to have to modify the command above. If you are using a typical Vagrant base box, you will likely want to authenticate with a user named vagrant and a different private key. For example, on my Vagrant virtual machine (using base box “ubuntu/trusty64”), the command I use is:
$ ansible all --inventory-file=inventory.ini --module-name ping -u vagrant --private-key=~/.vagrant.d/insecure_private_key
You can run vagrant ssh-config to get more details about the options needed to successfully SSH into your Vagrant virtual machine. There are ways to configure the inventory file so that you don't have to use such an unwieldy command line; I may cover those in a future post.
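For the curious, the short version is that these connection details can live as per-host variables in the inventory itself (the IP and key path here are just examples):
192.168.33.10 ansible_ssh_user=vagrant ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key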
Also, note that I am running Ansible in the same directory as my inventory file (inventory.ini). If you aren’t, or if you named your inventory file something different, just adjust the inventory file path in the command.
You may get prompted to accept the host key first if you haven't connected to these servers over SSH before.
The authenticity of host '104.129.22.241 (104.129.22.241)' can't be established.
RSA key fingerprint is 0c:71:ca:a5:e9:f2:4d:60:9d:2e:01:c3:b8:09:75:50.
Are you sure you want to continue connecting (yes/no)? yes
If everything works, you should see some output like the following:
104.129.3.148 | success >> {
    "changed": false,
    "ping": "pong"
}

104.129.22.241 | success >> {
    "changed": false,
    "ping": "pong"
}
If something went wrong, you may see something like:
104.129.3.148 | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
This means Ansible was unable to connect to your host(s) for some reason. As mentioned in the output, adding -vvvv will usually point you in the right direction.
So, let’s dissect that command a bit. The first argument, all simply tells Ansible to run against all hosts defined in the inventory file. You can use this first argument to target a specific host, group, wildcard pattern or a combination of all of those things. For our purposes, we will just be using all going forward. We mentioned the –inventory option earlier - it just lets you specify a path to the inventory file. If you don’t include this, Ansible will look for an inventory file at /etc/ansible/hosts. There is a shorter version of this option: -i inventory.ini which we will use from now on. Next, is the module name: –module-name ping. We’ll talk about Ansible modules below but just know that, in this example, we are calling the ping module which simply returns “pong” if successful. This is a useful, side-effect free way of checking that we can connect and manage our hosts with Ansible.
You can shorten the –module-name argument to just -m. For example:
$ ansible all -i inventory.ini -m ping -u root
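If you do want to be more targeted than all, the same command works with groups and patterns; purely as a sketch against a hypothetical inventory:
$ ansible webservers -i inventory.ini -m ping -u root
$ ansible 'webservers:dbservers' -i inventory.ini -m ping -u root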

Ansible Modules

Modules are Ansible’s way of abstracting certain system management or configuration tasks. In many ways, this is where the real power in Ansible lies. By abstracting commands and state into modules, Ansible is able to make system management idempotent. This is an important concept that makes configuration management tools like Ansible much more powerful and safe than something like a typical shell script. It is challenging enough to write a shell script that can configure a system (or lots of systems) to a specific state. It is extremely challenging to write one that can be run repeatedly against the same systems and not break things or have unintended side effects. When using idempotent modules, Ansible can safely be run against the same systems again and again without failing or making any changes that it does not need to make.
There is a large catalog of modules available for Ansible out of the box. Here is just a small sample of the things that can be managed with Ansible modules:
  • users
  • groups
  • packages
  • ACLs
  • files
  • apache modules
  • firewall rules
  • ruby gems
  • git repositories
  • mysql and postgresql databases
  • docker images
  • AWS / Rackspace / Digital Ocean instances
  • Campfire or Slack notifications
  • and a whole lot more.
If there is not a specific module available to accomplish a certain task, you can also just run arbitrary commands with Ansible or you can create your own custom module.

Remotely Executing AdHoc Commands

Ansible allows you to remotely execute commands against your managed hosts. This is a powerful capability, so cue the "With great power comes great responsibility" quote. For the most part, you are going to want to package your system management tasks into Playbooks (see below). But, if you do need to run an arbitrary command against your hosts, Ansible has your back. Let's take a quick look at the uptime on all of our hosts:
$ ansible all -i inventory.ini -m command -u root --args "uptime"
104.131.20.249 | success | rc=0 >>
 17:51:27 up 1 day, 10:26,  1 user,  load average: 0.00, 0.01, 0.05

104.131.3.142 | success | rc=0 >>
 17:51:27 up 1 day, 10:26,  1 user,  load average: 0.00, 0.01, 0.05
Cool. In this example, we are using the command module to run an arbitrary command against the host. We use --args to pass the command line we want to execute. As usual, this command can be shortened a bit:
$ ansible all -i inventory.ini -u root -a "uptime"
It turns out that command is the default module that Ansible will use when you run it. And -a is a shorter alias for --args.
How about another example?
$ ansible all -i inventory.ini -m apt -u root -a "name=zsh state=installed"
104.131.3.142 | success >> {
    "changed": true,
    "stderr": "update-alternatives: warning: skip creation of /usr/share/man/man1/rzsh.1.gz because associated file /usr/share/man/man1/zsh.1.gz (of link group rzsh) doesn't exist\n",
    "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following extra packages will be installed:\n  zsh-common\nSuggested packages:\n  zsh-doc\nThe following NEW packages will be installed:\n  zsh zsh-common\n0 upgraded, 2 newly installed, 0 to remove and 50 not upgraded.\nNeed to get 2,726 kB of archives.\nAfter this operation, 11.4 MB of additional disk space will be used.\nGet:1 http://mirrors.digitalocean.com/ubuntu/ trusty/main zsh-common all 5.0.2-3ubuntu6 [2,119 kB]\nGet:2 http://mirrors.digitalocean.com/ubuntu/ trusty/main zsh amd64 5.0.2-3ubuntu6 [607 kB]\nFetched 2,726 kB in 0s (7,801 kB/s)\nSelecting previously unselected package zsh-common.\n(Reading database ... 90913 files and directories currently installed.)\nPreparing to unpack .../zsh-common_5.0.2-3ubuntu6_all.deb ...\nUnpacking zsh-common (5.0.2-3ubuntu6) ...\nSelecting previously unselected package zsh.\nPreparing to unpack .../zsh_5.0.2-3ubuntu6_amd64.deb ...\nUnpacking zsh (5.0.2-3ubuntu6) ...\nProcessing triggers for man-db (2.6.7.1-1) ...\nSetting up zsh-common (5.0.2-3ubuntu6) ...\nSetting up zsh (5.0.2-3ubuntu6) ...\nupdate-alternatives: using /bin/zsh5 to provide /bin/zsh (zsh) in auto mode\nupdate-alternatives: using /bin/zsh5 to provide /bin/rzsh (rzsh) in auto mode\n"
}

104.131.20.249 | success >> {
    "changed": true,
    "stderr": "",
    "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSuggested packages:\n  zsh-doc\nThe following NEW packages will be installed:\n  zsh\n0 upgraded, 1 newly installed, 0 to remove and 12 not upgraded.\nNeed to get 4,716 kB of archives.\nAfter this operation, 11.7 MB of additional disk space will be used.\nGet:1 http://mirrors.digitalocean.com/ubuntu/ precise/main zsh amd64 4.3.17-1ubuntu1 [4,716 kB]\nFetched 4,716 kB in 0s (12.3 MB/s)\nSelecting previously unselected package zsh.\r\n(Reading database ... \r(Reading database ... 5%\r(Reading database ... 10%\r(Reading database ... 15%\r(Reading database ... 20%\r(Reading database ... 25%\r(Reading database ... 30%\r(Reading database ... 35%\r(Reading database ... 40%\r(Reading database ... 45%\r(Reading database ... 50%\r(Reading database ... 55%\r(Reading database ... 60%\r(Reading database ... 65%\r(Reading database ... 70%\r(Reading database ... 75%\r(Reading database ... 80%\r(Reading database ... 85%\r(Reading database ... 90%\r(Reading database ... 95%\r(Reading database ... 100%\r(Reading database ... 113275 files and directories currently installed.)\r\nUnpacking zsh (from .../zsh_4.3.17-1ubuntu1_amd64.deb) ...\r\nProcessing triggers for man-db ...\r\nSetting up zsh (4.3.17-1ubuntu1) ...\r\nupdate-alternatives: using /bin/zsh4 to provide /bin/zsh (zsh) in auto mode.\r\nupdate-alternatives: using /bin/zsh4 to provide /bin/rzsh (rzsh) in auto mode.\r\nupdate-alternatives: using /bin/zsh4 to provide /bin/ksh (ksh) in auto mode.\r\n"
}
In this example, I use the apt module to ensure that Zsh is installed.
Note: In the examples in this post, I am using the root account, which has all of the necessary privileges to run this and the following examples. This is not necessarily a best practice (it is common to block the root user from logging in via SSH). If you are authenticating with a user that does not have root privileges but does have sudo access, you should append --sudo or -s to the command line (as well as changing -u to specify the correct user name). Here is what the command looks like when running against a Vagrant virtual machine:
$ ansible all -i inventory.ini -m apt -u vagrant -a "name=zsh state=installed" -s
And, if you need to specify a sudo password, you can use the --ask-sudo-pass or -K option.
One final example:
$ ansible all -i inventory.ini -u root -m user -a "name=arch comment='Arch Stanton' shell=/usr/bin/zsh generate_ssh_key=yes ssh_key_bits=2048"

104.131.3.142 | success >> {
    "changed": true,
    "comment": "Arch Stanton",
    "createhome": true,
    "group": 1001,
    "home": "/home/arch",
    "name": "arch",
    "shell": "/usr/bin/zsh",
    "ssh_fingerprint": "2048 e6:52:dc:c3:c6:ec:98:dd:01:1a:54:0d:d5:b5:94:f7  ansible-generated (RSA)",
    "ssh_key_file": "/home/arch/.ssh/id_rsa",
    "ssh_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDYNQi/NeehCgS1Apv+Oha+No1FEzGqDVF1PIAAz+lfy1egxs/MaJRfkx2cLiht3riJwGER/CEFGehzB6f7cSbNx7oyK5Sj/aPUEJhiHwIi7Ev28LcteAB4JqMmCO08zgUZd6oJ57stKBVb7esCSLvwQvuFaxtBhYxyIGBov2KMSRDy9KwNXUaLed7qWV7auPWn5lq98APOJ/cjNNLHpYTR/N3iJH1VwmSb2XxrfCFrEx/bpcfKPr97SKpufH6cYuuD/zaXNd43M4QYO6rPY/idWBW8f06rbYFBdrXaLt6C/OIbbv5GWf/ZJ4g0nSo5dzp9knv9EymZ8s2U1e3v0ic1 ansible-generated",
    "state": "present",
    "system": false,
    "uid": 1001
}

104.131.20.249 | success >> {
    "changed": true,
    "comment": "Arch Stanton",
    "createhome": true,
    "group": 1002,
    "home": "/home/arch",
    "name": "arch",
    "shell": "/usr/bin/zsh",
    "ssh_fingerprint": "2048 0b:1d:6a:9a:7a:1d:56:c3:26:d6:2a:90:1c:2d:15:18  ansible-generated (RSA)",
    "ssh_key_file": "/home/arch/.ssh/id_rsa",
    "ssh_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDr6FCafN7b7QbB3f8itzN7fDcpU5OAnyvpc0HICfP/vxv9Cxr3EHIQCOLXFeXjtUBSQ6iyR17ceVe4n6xyiqrLJqjdsoDZFgwF5fZjTXFUY0/00srq7Bd0Ihm+AyHTYfXzM2dfVLy/l5/NQ4vwsez8FTh23Ef5FungY68dMs1VjsYnbu3ddg3IUEH4CADveIVhvcx9EQ/EBJvKsBIUjoDxPfC8uBNt8kx9h3TQvmIx8+Ydrn5lFEpyHWZGtlIoduWdHlH4cfN0NQaFhzJnalgeiem76C78pZ/YJ2wkNNXoFMveTNAu873a9kepSlHtRSZ1ND1c/xWV0KJX3DsQ7QTt ansible-generated",
    "state": "present",
    "system": false,
    "uid": 1002
}
Here I created a new user, generated an SSH key for that user, and set their shell to Zsh. As you can see, you can use Ansible to perform pretty sophisticated operations across multiple hosts really rapidly.

Playbooks

Playbooks allow you to organize your configuration and management tasks in simple, human-readable files. Each playbook is defined in a YAML file and contains one or more 'plays', which map a group of hosts to a list of tasks. Playbooks can be combined with other playbooks and organized into Roles, which allow you to define sophisticated infrastructures and then easily provision and manage them. Playbooks and roles are large topics so I encourage you to read the docs. But, let's look at a quick example playbook. I want to create myself a user account on all of my servers. Furthermore, I want to be able to authenticate using my personal SSH key and I want to use Zsh as my shell. For my Zsh config, I am going to use the great oh-my-zsh framework.
---
- hosts: all
  tasks:
    - name: Ensure Zsh is installed
      apt: name=zsh state=installed

    - name: Ensure git is installed
      apt: name=git state=installed

    - name: Create my user account
      user: name=ryan shell=/usr/bin/zsh

    - name: Add my public key to the server
      authorized_key: user=ryan
                      key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

    - name: Install oh-my-zsh
      git: repo=https://github.com/robbyrussell/oh-my-zsh.git
           dest=~/.oh-my-zsh
      remote_user: ryan
      sudo: false

    - name: Copy .zshrc template
      command: cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc
      remote_user: ryan
      sudo: false
*Update Nov 22, 2014*: see my post about updating the Copy .zshrc template task to be idempotent and safely repeatable.
Hopefully, you should be able to understand exactly what is going to happen just by scanning the file. If not, this is what we are going to accomplish with this playbook.
  1. We install the Zsh package
  2. We install git which we will need to clone the oh-my-zsh repository.
  3. We create my user account and we set my shell to Zsh
  4. We use the authorized_key module and a file lookup to copy my public key to the servers.
  5. We use the git module to clone the oh-my-zsh repository.
  6. We use the command module to copy the example zsh config to my user’s ~/.zshrc
The last two tasks are interesting. Note that we use the remote_user option to specify that we want to run them as the new ryan user. We also override any sudo option passed in from the ansible-playbook command. This means I don't have to worry about adding extra tasks to fix file permissions and ownership, which I probably would have to do if I ran those tasks as root. This does depend on the ryan account being able to log in via SSH (which we configured in step 4).
Ok, cool, now let’s try it out. The command to run playbooks is ansible-playbook. It shares a lot of options with the ansible command so most of this should look familiar:
$ ansible-playbook myuser.yml -i inventory.ini -u root

PLAY [all] ********************************************************************

GATHERING FACTS ***************************************************************
ok: [104.131.3.142]
ok: [104.131.20.249]

TASK: [Ensure Zsh is installed] ***********************************************
changed: [104.131.3.142]
changed: [104.131.20.249]

TASK: [Ensure git is installed] ***********************************************
changed: [104.131.3.142]
changed: [104.131.20.249]

TASK: [Create my user account] ************************************************
changed: [104.131.20.249]
changed: [104.131.3.142]

TASK: [Add my public key to the server] ***************************************
changed: [104.131.20.249]
changed: [104.131.3.142]

TASK: [Install oh-my-zsh] *****************************************************
changed: [104.131.3.142]
changed: [104.131.20.249]

TASK: [Copy .zshrc template] **************************************************
changed: [104.131.3.142]
changed: [104.131.20.249]

PLAY RECAP ********************************************************************
104.131.20.249             : ok=7    changed=6    unreachable=0    failed=0
104.131.3.142              : ok=7    changed=6    unreachable=0    failed=0
Sweet! I can now SSH into my hosts with my ryan account, using public key authentication, and I have an awesome shell environment already configured. The command we used should look familiar. The first argument is the file path to the playbook we are running. In this case, myuser.yml. The -i and -u options are the same as we have seen before with the ansible command. And, feel free to run the playbook again (and again). You won't hurt anything (unless you make a change to the ~/.zshrc file in between runs - this part could be improved but I'll leave that as an exercise to the reader).
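If you want a head start on that exercise, one common approach (a sketch, not the exact fix from the follow-up post mentioned above) is to add the command module's creates= argument so the copy is skipped once ~/.zshrc already exists:
    - name: Copy .zshrc template
      command: cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc creates=~/.zshrc
      remote_user: ryan
      sudo: false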

Facts, Variables, Roles, Vault, etc.

There is a lot more to Ansible than I can cover in this introduction. We really just scratched the surface. If you are interested, you should definitely check out some of the resources I listed below. And, if there is something you would like me to cover on this blog, please let me know!

Resources