Tuesday, 31 October 2017

How to Install GScan2PDF 1.8.8 release on Ubuntu 17.04, 17.10

Harry
     gscan2pdf is a GUI to ease the process of producing PDFs or DjVus from scanned documents. You scan one or several pages in with File/Scan, and create a PDF of selected pages with File/Save PDF. At maturity, the GUI will have similar features to that of the Windows Imaging program, but with the express objective of writing a PDF, including metadata. Scanning is handled with SANE via scanimage. PDF conversion is done by libtiff. Perl is used for portability and ease of programming, with gtk2-perl for the GUI. This should therefore work more or less out of the box on any system with gtk2-perl, scanimage, and libtiff.
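The scanimage-plus-libtiff pipeline described above can be reproduced by hand. Below is a minimal dry-run sketch that only builds and prints the two commands (since it assumes an attached SANE scanner); the resolution and filenames are placeholder choices, not anything gscan2pdf mandates.

```shell
#!/bin/sh
# Dry-run sketch of the scan-to-PDF pipeline that gscan2pdf automates.
# Step 1: SANE's scanimage writes one page as TIFF to stdout.
# Step 2: libtiff's tiff2pdf wraps the TIFF in a PDF.
# The commands are printed rather than executed, since a real scanner
# is assumed; resolution and filenames are placeholders.
SCAN_CMD='scanimage --format=tiff --resolution 300 > page1.tiff'
PDF_CMD='tiff2pdf -o scan.pdf page1.tiff'

echo "$SCAN_CMD"
echo "$PDF_CMD"
```

With a scanner attached you would run the printed commands directly; gscan2pdf adds page management, cleanup, and metadata on top of this same flow.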


GScan2PDF 1.8.8 Changelog: 
  • Filter out 1 and 2 digit integers from tool warnings. Show original message, not filtered message.
  • Add option to profile only after successfully applying it
  • Fix default value for unpaper script direction.
  • Fix race condition updating widgets before they can be created after cycling device handle.
  • Fix 16-bit PNM parsing.
  • Fix Perl warning about redundant argument in sprintf.
  • Update to Hungarian translation. 

Installation instructions:


   The developer’s PPA offers the latest packages for Ubuntu 14.04, Ubuntu 16.04, Ubuntu 17.04, Ubuntu 17.10, Ubuntu 18.04, and derivatives.

   Open a terminal (Ctrl+Alt+T) and run the following commands:

$ sudo add-apt-repository ppa:jeffreyratcliffe/ppa

$ sudo apt-get update

$ sudo apt-get install gscan2pdf

   Optionally, to remove GScan2PDF 1.8.8:

$ sudo apt-get remove --autoremove gscan2pdf

Monday, 30 October 2017

Setting up Multi-Master replication of FreeIPA Directory servers

Harry

Today, let's take things one step further by adding redundancy into the equation. If you are familiar with Microsoft Active Directory and how Windows Domain Controllers replicate between each other, this article will show you how to set up FreeIPA to achieve the same goal.

For this article, you will obviously need an existing FreeIPA server that is up and running, as well as a new system that you wish to turn into a second master.
In this article, I will be using the details below.
Existing FreeIPA server: ds01.example.com (10.0.1.11)
New FreeIPA server: ds02.example.com (10.0.1.12)
FreeIPA Admin user: admin
FreeIPA Admin password: redhat123
FreeIPA Directory Manager user: admin
FreeIPA Directory Manager password: redhat123
DNS Forwarder: 10.0.0.254 (Same as forwarder configured on ds01.example.com)
Operating System of both hosts: Red Hat Enterprise Linux 6.3 x86_64


Step 1. Install FreeIPA packages on new system
Although you *could* set up replication between different versions of FreeIPA, I highly recommend sticking with the same version as your existing host.
Install the same packages as you did on your first host. Note: if you are using external DNS, you do not need to install the bind packages.
[root@ds02 ~]# yum install -y ipa-server bind bind-utils bind-dyndb-ldap

Step 2. Add new host to DNS
We need to set up IPA so that it knows to allow replication with the new host when we install it. This has a prerequisite on DNS however, so we will need to add the DNS entries for our new server before we can prepare IPA.
Note: if you are using external DNS, this does not apply to you.
To add your new host to DNS, run the following commands on your existing FreeIPA server.
[root@ds01 ~]# kinit admin
Password for admin@EXAMPLE.COM:
[root@ds01 ~]# ipa dnsrecord-add example.com ds02 --a-rec 10.0.1.12
Record name: ds02
A record: 10.0.1.12
[root@ds01 ~]# ipa dnsrecord-add 1.0.10.in-addr.arpa. 12 --ptr-rec ds02.example.com.
Record name: 12
PTR record: ds02.example.com.
[root@ds01 ~]#
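The reverse zone name passed to the second command is derived from the host's IP address: for 10.0.1.12 in a /24 network, the first three octets are reversed to give 1.0.10.in-addr.arpa., and the final octet, 12, becomes the record name. A minimal shell sketch of that derivation:

```shell
#!/bin/sh
# Derive the in-addr.arpa reverse zone and PTR record name for a host
# in a /24 network. The example IP matches the article: 10.0.1.12.
IP=10.0.1.12

# Split the dotted quad into its four octets using parameter expansion.
o1=${IP%%.*}; rest=${IP#*.}
o2=${rest%%.*}; rest=${rest#*.}
o3=${rest%%.*}
o4=${rest#*.}

# Reverse the three network octets to form the zone name;
# the host octet becomes the record name.
REV_ZONE="$o3.$o2.$o1.in-addr.arpa."
PTR_NAME="$o4"

echo "zone: $REV_ZONE  record: $PTR_NAME"
```

This is why the article runs `ipa dnsrecord-add 1.0.10.in-addr.arpa. 12 --ptr-rec ds02.example.com.` for the new server.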

Step 3. Prepare your current FreeIPA server for a replication agreement
Once you have your host stored in DNS, you are now ready to create a GPG-encrypted replica information file for your new server to use to commence the replica install.
To prepare FreeIPA for replication, run the following command.
[root@ds01 ~]# ipa-replica-prepare ds02.example.com
Directory Manager (existing master) password:

Preparing replica for ds02.example.com from ds01.example.com
Creating SSL certificate for the Directory Server
Creating SSL certificate for the dogtag Directory Server
Creating SSL certificate for the Web Server
Exporting RA certificate
Copying additional files
Finalizing configuration
Packaging replica information into /var/lib/ipa/replica-info-ds02.example.com.gpg
[root@ds01 ~]#
Now we need to copy this GPG file to our new replica-to-be.
[root@ds01 ~]# scp /var/lib/ipa/replica-info-ds02.example.com.gpg root@ds02.example.com:/var/lib/ipa/
The authenticity of host 'ds02.example.com (10.0.1.12)' can't be established.
RSA key fingerprint is 36:b0:7e:de:29:7f:96:1a:f8:43:00:9a:22:24:75:15.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ds02.example.com,10.0.1.12' (RSA) to the list of known hosts.
root@ds02.example.com's password:
replica-info-ds02.example.com.gpg                                                                                                                                                100%   28KB  28.4KB/s   00:00
[root@ds01 ~]#

Step 4. Open firewall port on both hosts to allow replication
Your existing FreeIPA server will already have several ports that are open.
Note: If you don’t use your local firewall in your environment (which I highly recommend against), you can skip ahead to step 5 if you wish.
Just as a reminder, they are listed below.
TCP: 80, 443, 389, 636, 88, 464, 53
UDP: 88, 464, 53, 123
We need to open one more port on both hosts, as this port will be needed to allow the communication of replication data.
TCP: 7389
To open this port, you can run the following
[root@ds01 ~]# iptables -I INPUT -p tcp --dport 7389 -j ACCEPT
[root@ds01 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@ds01 ~]#
You will need to open all of the above ports on your new system in order to proceed. As my IPA servers only ever exist for the purpose of IPA, I cheat here and copy the /etc/sysconfig/iptables file to my replicas.
From your new system, copy the existing iptables config file and restart the iptables service
[root@ds02 ~]# scp root@ds01.example.com:/etc/sysconfig/iptables /etc/sysconfig/
The authenticity of host 'ds01.example.com (10.0.1.11)' can't be established.
RSA key fingerprint is b2:ea:40:2c:1d:55:50:b6:c6:df:d8:19:09:4d:2a:6a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ds01.example.com,10.0.1.11' (RSA) to the list of known hosts.
root@ds01.example.com's password:
iptables                                                                                                                                                                       100% 1023     1.0KB/s   00:00
[root@ds02 ~]# service iptables restart
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
iptables: Applying firewall rules:                         [  OK  ]
[root@ds02 ~]#
Or alternatively you can run the following if you want to do it the manual way.
[root@ds02 ~]# for x in 80 443 389 636 88 464 53 7389; do iptables -I INPUT -p tcp --dport $x -j ACCEPT ; done
[root@ds02 ~]# for x in 88 464 53 123 ; do iptables -I INPUT -p udp --dport $x -j ACCEPT ; done
[root@ds02 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@ds02 ~]#

Step 5. Setting up replication (This is where the magic happens)
As I am setting FreeIPA up in a way similar to how Microsoft Active Directory is generally used, I will be replicating DNS as well.
Note: again, if you are using external DNS, you should omit the "--setup-dns" option in the command below.
To start the replication setup, run the following. You will be prompted for your Directory Manager and admin passwords.
Note: Brace yourself, there is a fair bit of output here again.
[root@ds02 ~]# ipa-replica-install --setup-dns --setup-ca --forwarder=10.0.0.254 /var/lib/ipa/replica-info-ds02.example.com.gpg 
Directory Manager (existing master) password: 

Run connection check to master
Check connection from replica to remote master 'ds01.example.com':
   Directory Service: Unsecure port (389): OK
   Directory Service: Secure port (636): OK
   Kerberos KDC: TCP (88): OK
   Kerberos Kpasswd: TCP (464): OK
   HTTP Server: Unsecure port (80): OK
   HTTP Server: Secure port (443): OK
   PKI-CA: Directory Service port (7389): OK

The following list of ports use UDP protocol and would need to be
checked manually:
   Kerberos KDC: UDP (88): SKIPPED
   Kerberos Kpasswd: UDP (464): SKIPPED

Connection from replica to master is OK.
Start listening on required ports for remote master check
Get credentials to log in to remote master
admin@EXAMPLE.COM password: 

Execute check on remote master
Check connection from master to remote replica 'ds02.example.com':
   Directory Service: Unsecure port (389): OK
   Directory Service: Secure port (636): OK
   Kerberos KDC: TCP (88): OK
   Kerberos KDC: UDP (88): OK
   Kerberos Kpasswd: TCP (464): OK
   Kerberos Kpasswd: UDP (464): OK
   HTTP Server: Unsecure port (80): OK
   HTTP Server: Secure port (443): OK
   PKI-CA: Directory Service port (7389): OK

Connection from master to replica is OK.

Connection check OK
Configuring ntpd
  [1/4]: stopping ntpd
  [2/4]: writing configuration
  [3/4]: configuring ntpd to start on boot
  [4/4]: starting ntpd
done configuring ntpd.
Configuring directory server for the CA: Estimated time 30 seconds
  [1/3]: creating directory server user
  [2/3]: creating directory server instance
  [3/3]: restarting directory server
done configuring pkids.
Configuring certificate server: Estimated time 3 minutes 30 seconds
  [1/13]: creating certificate server user
  [2/13]: creating pki-ca instance
  [3/13]: configuring certificate server instance
  [4/13]: disabling nonces
  [5/13]: creating RA agent certificate database
  [6/13]: importing CA chain to RA certificate database
  [7/13]: fixing RA database permissions
  [8/13]: setting up signing cert profile
  [9/13]: set up CRL publishing
  [10/13]: set certificate subject base
  [11/13]: enabling Subject Key Identifier
  [12/13]: configuring certificate server to start on boot
  [13/13]: Configure HTTP to proxy connections
done configuring pki-cad.
Restarting the directory and certificate servers
Configuring directory server: Estimated time 1 minute
  [1/30]: creating directory server user
  [2/30]: creating directory server instance
  [3/30]: adding default schema
  [4/30]: enabling memberof plugin
  [5/30]: enabling referential integrity plugin
  [6/30]: enabling winsync plugin
  [7/30]: configuring replication version plugin
  [8/30]: enabling IPA enrollment plugin
  [9/30]: enabling ldapi
  [10/30]: configuring uniqueness plugin
  [11/30]: configuring uuid plugin
  [12/30]: configuring modrdn plugin
  [13/30]: enabling entryUSN plugin
  [14/30]: configuring lockout plugin
  [15/30]: creating indices
  [16/30]: configuring ssl for ds instance
  [17/30]: configuring certmap.conf
  [18/30]: configure autobind for root
  [19/30]: configure new location for managed entries
  [20/30]: restarting directory server
  [21/30]: setting up initial replication
Starting replication, please wait until this has completed.
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update succeeded
  [22/30]: adding replication acis
  [23/30]: setting Auto Member configuration
  [24/30]: enabling S4U2Proxy delegation
  [25/30]: initializing group membership
  [26/30]: adding master entry
  [27/30]: configuring Posix uid/gid generation
  [28/30]: enabling compatibility plugin
  [29/30]: tuning directory server
  [30/30]: configuring directory to start on boot
done configuring dirsrv.
Configuring Kerberos KDC: Estimated time 30 seconds
  [1/9]: adding sasl mappings to the directory
  [2/9]: writing stash file from DS
  [3/9]: configuring KDC
  [4/9]: creating a keytab for the directory
  [5/9]: creating a keytab for the machine
  [6/9]: adding the password extension to the directory
  [7/9]: enable GSSAPI for replication
  [8/9]: starting the KDC
  [9/9]: configuring KDC to start on boot
done configuring krb5kdc.
Configuring kadmin
  [1/2]: starting kadmin 
  [2/2]: configuring kadmin to start on boot
done configuring kadmin.
Configuring ipa_memcached
  [1/2]: starting ipa_memcached 
  [2/2]: configuring ipa_memcached to start on boot
done configuring ipa_memcached.
Configuring the web interface: Estimated time 1 minute
  [1/13]: disabling mod_ssl in httpd
  [2/13]: setting mod_nss port to 443
  [3/13]: setting mod_nss password file
  [4/13]: enabling mod_nss renegotiate
  [5/13]: adding URL rewriting rules
  [6/13]: configuring httpd
  [7/13]: setting up ssl
  [8/13]: publish CA cert
  [9/13]: creating a keytab for httpd
  [10/13]: clean up any existing httpd ccache
  [11/13]: configuring SELinux for httpd
  [12/13]: restarting httpd
  [13/13]: configuring httpd to start on boot
done configuring httpd.
Applying LDAP updates
Restarting the directory server
Restarting the KDC
Using reverse zone 1.0.10.in-addr.arpa.
Configuring named:
  [1/8]: adding NS record to the zone
  [2/8]: setting up reverse zone
  [3/8]: setting up our own record
  [4/8]: setting up kerberos principal
  [5/8]: setting up named.conf
  [6/8]: restarting named
  [7/8]: configuring named to start on boot
  [8/8]: changing resolv.conf to point to ourselves
done configuring named.

Global DNS configuration in LDAP server is empty
You can use 'dnsconfig-mod' command to set global DNS options that
would override settings in local named.conf files

Restarting the web server
[root@ds02 ~]#

Step 6. Verify that replication is responding correctly
As with all things that involve setting up technology, you should always verify your work. I never thought I’d repeat this saying as much as I do. My high school maths teacher would be very proud.
One of the first things I do post-setup is verify that I have two directory server instances running. You will see your DOMAIN instance, and if you set up CA replication, you will also see PKI-IPA.
To check, run the following
[root@ds02 ~]# service dirsrv status
dirsrv EXAMPLE-COM (pid 5115) is running...
dirsrv PKI-IPA (pid 5185) is running...
[root@ds02 ~]#
Also, make sure you can authenticate. That’s pretty important!
[root@ds02 ~]# kinit admin
Password for admin@EXAMPLE.COM: 
[root@ds02 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin@EXAMPLE.COM

Valid starting     Expires            Service principal
08/29/12 22:53:02  08/30/12 22:53:00  krbtgt/EXAMPLE.COM@EXAMPLE.COM
[root@ds02 ~]#
All looking good so far. Lastly, let’s just make sure that our servers are in fact replicating.
Check to see all IPA replicas in the domain:
[root@ds02 ~]# ipa-replica-manage list
ds01.example.com: master
ds02.example.com: master
[root@ds02 ~]#
Great! Now let’s make sure that ds01.example.com is replicating to ds02.example.com:
[root@ds02 ~]# ipa-replica-manage list ds01.example.com
ds02.example.com: replica
[root@ds02 ~]#
Also great.
Last one: let’s check that ds02.example.com can replicate back to ds01.example.com:
[root@ds02 ~]# ipa-replica-manage list ds02.example.com
ds01.example.com: replica
[root@ds02 ~]#
Happy days! We now have one peachy replicated IPA environment.
Stay tuned, as I’ll be covering managing more than two replicas in more detail in an upcoming article. This will be useful for those of you who might be looking to deploy IPA into a multi-site environment.

Sunday, 29 October 2017

How to Install Arduino IDE 1.8.5 on Ubuntu 16.04 & Higher

Harry
   Arduino is an open-source prototyping platform based on easy-to-use hardware and software. Arduino boards are able to read inputs - light on a sensor, a finger on a button, or a Twitter message - and turn it into an output - activating a motor, turning on an LED, publishing something online. You can tell your board what to do by sending a set of instructions to the microcontroller on the board.


Arduino IDE 1.8.5 Changelog
[ide]
  •  Added workaround for menu visibility bug in MacOSX 10.13 beta. Thanks @puybaret
  •  Fixed bug for negative-font-size.
  •  New/Rename tabs now allows names starting with a number. 

Installation instructions:

1. Download the latest packages, Linux 32-bit or Linux 64-bit, from the official link below: 


2. Open a terminal from the Unity Dash or app launcher, or via the Ctrl+Alt+T keys. When it opens, run the commands below one by one:


$ cd ~/Downloads

$ tar -xvf arduino-1.8.5-*.tar.xz

$ sudo mv arduino-1.8.5 /opt

$ cd /opt/arduino-1.8.5/

$ chmod +x install.sh

$ ./install.sh

Finally, launch Arduino IDE from Unity Dash, Application Launcher, or via Desktop shortcut.




Saturday, 28 October 2017

How to Install Android Studio 3.0 on Ubuntu

Harry
  Android Studio is the official Integrated Development Environment (IDE) for Android app development, based on IntelliJ IDEA. On top of IntelliJ's powerful code editor and developer tools, Android Studio offers even more features that enhance your productivity when building Android apps.


Android Studio 3.0 changelog:
  • Support for Android 8.0.
  • Support for building separate APKs based on language resources.
  • Support for Java 8 libraries and Java 8 language features (without the Jack compiler).
  • Support for Android Test Support Library 1.0 (Android Test Utility and Android Test Orchestrator).
  • Improved ndk-build and cmake build speeds.
  • Improved Gradle sync speed.
  • AAPT2 is now enabled by default.
  • Using ndkCompile is now more restricted. You should instead migrate to using either CMake or ndk-build to compile native code that you want to package into your APK. To learn more, read Migrate from ndkcompile.
  • See more....

Installation instructions:

    You can easily install it either via Maarten Fonville’s PPA or by using Ubuntu Make in Ubuntu 14.04, Ubuntu 16.04, Ubuntu 17.04, and Ubuntu 17.10.

    Open a terminal (Ctrl+Alt+T) and run the following commands:

Install Java 8 on Ubuntu

      First, we need to install Java. It’s recommended to install Oracle Java, because it has a performance edge over OpenJDK. Run the following commands in a terminal to install it from a PPA and set Oracle Java 8 as the default:

$ sudo add-apt-repository ppa:webupd8team/java

$ sudo apt-get update

$ sudo apt-get install java-common oracle-java8-installer

$ sudo apt-get install oracle-java8-set-default

Install Android Studio on Ubuntu 

     Run the following commands to add the Android Studio PPA and install it.

$ sudo add-apt-repository ppa:maarten-fonville/android-studio

$ sudo apt-get update

$ sudo apt-get install android-studio

Alternatively, to install Android Studio 3.0 via Ubuntu Make:

$ sudo apt install ubuntu-make

$ umake android



Refer to the original: https://howto-ubuntunew.blogspot.com/








Tuesday, 24 October 2017

Youtube-DL 2017.10.20, a Youtube Video Downloader released

Harry
youtube-dl is a command-line program to download videos from YouTube.com. It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work in your Unix box, in Windows or in Mac OS X. It is released to the public domain, which means you can modify it, redistribute it or use it however you like.

Here is the list of all the supported sites, ordered alphabetically:
  • 1tv: Первый канал
  • 1up.com
  • 20min
  • 220.ro
  • 22tracks:genre
  • 22tracks:track
  • 24video
  • 3qsdn: 3Q SDN
  • 3sat
  • 4tube
  • 56.com
  • 5min
  • 8tracks
  • 91porn
  • 9gag
  • 9now.com.au
  • abc.net.au
  • Abc7News
  • abcnews
  • abcnews:video
  • AcademicEarth:Course
  • acast
  • acast:channel
  • AddAnime
  • AdobeTV
  • AdobeTVChannel
  • more sites.
youtube-dl 2017.10.20 Changelog:
Core
  • [downloader/fragment] Report warning instead of error on inconsistent download state
  • [downloader/hls] Fix total fragments count when ad fragments exist
Extractors
  • [parliamentliveuk] Fix extraction
  • [soundcloud] Update client id
  • [servus] Add support for servus.com
  • [unity] Add support for unity3d.com
  • [youtube] Replace youtube redirect URLs in description
  • [pbs] Restrict direct video URL regular expression
  • [drtv] Respect preference for direct HTTP formats
  • [eporner] Add support for embed URLs
  • [arte] Capture and output error message
  • [niconico] Improve uploader metadata extraction robustness

How to install youtube-dl 2017.10.20 on Ubuntu 17.10, Ubuntu 17.04, Ubuntu 16.04, Ubuntu 15.10 Wily Werewolf, Ubuntu 15.04 Vivid Vervet, Ubuntu 14.04 Trusty Tahr and derivative systems like Linux Mint 17.2 Rafaela, Linux Mint 17.1 Rebecca, Linux Mint 17 Qiana, Pinguy OS 14.04, Elementary OS 0.3 Freya, Deepin 2014, Peppermint 6, Peppermint 5, LXLE 14.04 and Linux Lite 2

 Open a terminal and run the following commands:

$ sudo curl -L https://yt-dl.org/downloads/2017.10.20/youtube-dl -o /usr/local/bin/youtube-dl

$ sudo chmod a+rx /usr/local/bin/youtube-dl

 If you do not have curl, you can alternatively use wget:

$ sudo wget https://yt-dl.org/downloads/2017.10.20/youtube-dl -O /usr/local/bin/youtube-dl

$ sudo chmod a+rx /usr/local/bin/youtube-dl

An example command showing how to download a YouTube video:

$ youtube-dl 'the url address of video to download'
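Beyond a single URL, youtube-dl's standard `-o` (output template), `-x`, and `--audio-format` options cover the most common cases. Below is a dry-run sketch that only builds and prints the invocations, since actually downloading needs network access; the URL is a placeholder:

```shell
#!/bin/sh
# Dry-run sketch of two common youtube-dl invocations. The commands are
# printed, not executed; the URL is a placeholder.
URL='https://www.youtube.com/watch?v=PLACEHOLDER'

# Download the video, naming the file after its title.
VIDEO_CMD="youtube-dl -o '%(title)s.%(ext)s' $URL"
# Extract the audio track and convert it to MP3
# (this path requires ffmpeg or avconv to be installed).
AUDIO_CMD="youtube-dl -x --audio-format mp3 $URL"

echo "$VIDEO_CMD"
echo "$AUDIO_CMD"
```

Run `youtube-dl --help` for the full option reference.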

How to Install Wine 2.0.3 stable on Ubuntu 16.04, 17.04

Harry
   Wine (originally an acronym for "Wine Is Not an Emulator") is a compatibility layer capable of running Windows applications on several POSIX-compliant operating systems, such as Linux, macOS, and BSD. Instead of simulating internal Windows logic like a virtual machine or emulator, Wine translates Windows API calls into POSIX calls on the fly, eliminating the performance and memory penalties of other methods and allowing you to cleanly integrate Windows applications into your desktop.


Wine 2.0.3 stable changelog:
  • unimplemented function ole32.dll.OleGetIconOfFile
  • tmpfile() fails when run from Unix path
  • Lost Horizon crash/page fault during Chapter 2
  • Sound Recorder crashes on encoding PCM Sample
  • Purebasic does not display icons in toolbar which is drawn distorted.
  • Sound Recorder displays error when seeking MP3 stream to the end
  • Uninstaller: application list doesn't fit
  • WPS 2013 (Kingsoft) crash at install
  • World of Warships/Planes/Tanks client in torrent download mode crashes spuriously on high bandwidth load (i/o completion ports)
  • Worms Armageddon Gameplay only shows top-left corner of screen
  • notepad++ escape key
  • Listview does not draw correctly in some conditions.
  • "Unrecognized stencil op 0" messages flooding system log in Söldner Secret Wars
  • ACDSee Pro 10 needs msvcp140.dll.?_Schedule_chore@details@Concurrency@@YAHPEAU_Threadpool_chore@12@@Z
  • Soul Reaver GOG Cinematics stopped working
  • Seed of Andromeda Pre-Alpha 0.2 crashes
  • Scrabble (Infogrames) multiplayer requires IDirectPlay4::EnumConnections
  • WAtomic: White labels that show name of elements hidden by GL components
  • secur32/tests/ntlm.ok crashes in DeleteSecurityContext
  • Guitar Pro 7 needs msvcp140.dll._To_wide
  • SP+ maker won't run.
  • Rise of the Tomb Raider needs unimplemented function USER32.dll.PhysicalToLogicalPoint
  • WarBR: game (WarS v5.5 p4) crashes on start, needs WMP IOleObject::GetExtent method implementation
  • Adobe Premiere needs ntoskrnl.exe.KeAcquireSpinLockRaiseToDpc
  • winhttp fails to redirect from http to https on 301 error.
  • Adobe Premiere needs ntoskrnl.exe.KeReleaseSpinLock
  • Wargaming.net Game Center needs msvcp140.dll._To_byte
  • Crazyracing KartRider: Crashes on startup on unimplemented function ntoskrnl.exe.IoCreateNotificationEvent
  • BitLord crashes on unimplemented function IPHLPAPI.DLL.if_nametoindex
  • PHP crashes on unimplemented function api-ms-win-crt-math-l1-1-0.dll.acosh
  • PHP crashes on unimplemented function api-ms-win-crt-math-l1-1-0.dll.atanh
  • numpy crashes on unimplemented function api-ms-win-crt-math-l1-1-0.dll.log1p
  • winealsa.drv: Warning while building (GCC 7.1.1)
  • valgrind shows a couple invalid reads in programs/regedit/tests/regedit.c
  • make error on Debian 4.9.30-2kali1 (2017-06-22) x86_64 GNU/Linux
  • Many applications (winecfg, ...) crash on startup with freetype 2.8.1
  • freetype 2.8.1 breaks Wine build during font conversion with sfnt2fon 

Installation instructions:
 
Open a terminal and run the following commands:

$ wget -nc https://dl.winehq.org/wine-builds/Release.key

$ sudo apt-key add Release.key

$ sudo apt-add-repository https://dl.winehq.org/wine-builds/ubuntu/

$ sudo apt-get update

$ sudo apt-get install --install-recommends winehq-stable

Check the installed Wine version:

$ wine --version
 

Monday, 23 October 2017

How to Install LibreOffice 5.4.2.2 on Ubuntu 16.04 & Higher

Harry
   LibreOffice is a powerful office suite – its clean interface and feature-rich tools help you unleash your creativity and enhance your productivity. LibreOffice includes several applications that make it the most powerful Free and Open Source office suite on the market: Writer (word processing), Calc (spreadsheets), Impress (presentations), Draw (vector graphics and flowcharts), Base (databases), and Math (formula editing).


LibreOffice 5.4.2.2 Changelog :
  • unknown Read [Caolán McNamara]
  • sw: DeleteAndJoin found yet another way to delete new redline [Michael Stahl]
  • SYLK import: check ;X;Y;C;R col/row validity early [Caolán McNamara]
  • unknown Read [Caolán McNamara]
  • PaletteManager::LoadPalettes() leaks memory [Julien Nabet]
  • ODF: wrong place for draw:notify-on-update-of-ranges, is in loext:p, should be in draw:object
  • See announcement for full details  

Installation instructions:

    Open a terminal and run the following commands:

$ sudo apt-get remove --purge libreoffice*

$ sudo add-apt-repository ppa:libreoffice/libreoffice-5-4

$ sudo apt-get update

$ sudo apt-get install libreoffice

Install via packages if the PPA is not up to date:

32-bit OS

$ wget http://download.documentfoundation.org/libreoffice/stable/5.4.2/deb/x86/LibreOffice_5.4.2_Linux_x86_deb.tar.gz

$ tar -xvf LibreOffice_5.4.2_Linux_x86_deb.tar.gz

$ cd LibreOffice_5.4.2.2_Linux_x86_deb/DEBS/

$ sudo dpkg -i *.deb

64-bit OS

$ wget http://download.documentfoundation.org/libreoffice/stable/5.4.2/deb/x86_64/LibreOffice_5.4.2_Linux_x86-64_deb.tar.gz

$ tar -xvf LibreOffice_5.4.2_Linux_x86-64_deb.tar.gz

$ cd LibreOffice_5.4.2.2_Linux_x86-64_deb/DEBS/

$ sudo dpkg -i *.deb
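The two download paths above differ only in architecture. The sketch below picks the matching tarball URL; the ARCH default is hardcoded here for illustration, but on a real system you would set it from `uname -m`:

```shell
#!/bin/sh
# Sketch: select the LibreOffice .deb tarball matching the machine's
# architecture. ARCH defaults to x86_64 for illustration; in practice,
# use ARCH=$(uname -m). URLs are the ones used in the article.
ARCH=${ARCH:-x86_64}
BASE=http://download.documentfoundation.org/libreoffice/stable/5.4.2/deb

case "$ARCH" in
    x86_64) TARBALL="$BASE/x86_64/LibreOffice_5.4.2_Linux_x86-64_deb.tar.gz" ;;
    i*86)   TARBALL="$BASE/x86/LibreOffice_5.4.2_Linux_x86_deb.tar.gz" ;;
    *)      echo "unsupported architecture: $ARCH" >&2; exit 1 ;;
esac

echo "would download: $TARBALL"
```

After downloading, the tar/cd/dpkg steps shown above are identical for both architectures.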


Sunday, 22 October 2017

Virtualbox 5.2.0 Released, Install on Ubuntu 16.04, 17.04, 17.10

Harry
  VirtualBox is a general-purpose full virtualizer for x86 hardware, targeted at server, desktop, and embedded use. Not only is VirtualBox an extremely feature-rich, high-performance product for enterprise customers, it is also the only professional solution that is freely available as Open Source Software under the terms of the GNU General Public License (GPL) version 2.


Virtualbox 5.2.0 Changelog:
This is a major update. The following major new features were added:
  • VM export to Oracle Cloud (OPC)
  • Unattended guest installation
  • Overhauled VM selector GUI (improved tools VM / global tools handling, new icons)
  • Added experimental audio support for video recording
In addition, the following items were fixed and/or added:
  • VMM: fixed reason for recent Linux kernels on also recent CPU models warning about "XSAVE consistency problem"
  • GUI: Virtual Media Manager rework allowing to manage media attributes, like size, location, type and description
  • GUI: Host-only Network Manager implemented to simplify managing corresponding networks and their attributes
  • GUI: Snapshot Pane rework allowing to manage snapshot attributes, like name and description; reworked snapshot details which looks more clear, corresponds to VM Details pane and reflects current VM state difference according to last snapshot taken
  • GUI: Audio settings extended with possibility to enable/disable audio input/output; corresponding changes were made to the Audio and Video Capture settings pages; VM Devices menu and status bar extended with corresponding actions and indicator as well
  • GUI: improvements with accessibility support
  • GUI: Fixed double mouse cursor when using mouse integration without Guest Additions, actually a Qt 5.6 bug fixed with QT 5.6.3
  • Audio: implemented (optional) device enumeration support for audio backends
  • Audio: implemented support for host device callbacks (e.g. when adding or removing an audio device)
  • Audio: HDA emulation now uses asynchronous data processing in separate threads
  • Audio: implemented ability to enable or disable audio input / output on-the-fly
  • Storage: implemented support for CUE/BIN images as CD/DVD media including multiple tracks
  • Storage: implemented support for the controller memory buffer feature for NVMe
  • Storage: first milestone of the I/O stack redesign landed
  • E1000: Fix for Windows XP freeze when booting with unplugged cable
  • NAT network: do not skip some port forwarding setup when multiple VMs are active
  • Serial: fixed extremely rare misbehavior on VM poweroff
  • EFI: better video mode handling, supporting custom video modes and easier configuration
  • BIOS: properly report floppy logical sectors per track for unusual formats
  • BIOS: update ATA disk parameter table vectors only if there is actually a corresponding ATA disk attached
  • PXE: speed up booting by better handling pending packets when the link is not up yet
  • VBoxManage: handle CPUID sub-leaf overrides better
  • Windows Additions: fix several 3D related crashes
  • Solaris hosts: allow increasing MTU size for host-only adapter to 9706 bytes to support jumbo frames
  • Linux Additions: on systems using systemd, make sure that only the Guest Additions timesync service is active
  • many unlisted fixes and improvements  

Install VirtualBox 5.2.0 on Ubuntu


Open a terminal (Ctrl+Alt+T) and run the command to add the repository. Note that the line below hardcodes the xenial (16.04) repository; on a different Ubuntu release, replace xenial with your release codename:

$ sudo sh -c 'echo "deb http://download.virtualbox.org/virtualbox/debian xenial contrib" >> /etc/apt/sources.list.d/virtualbox.list'

$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -

$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -

$ sudo apt-get update

$ sudo apt-get install virtualbox-5.2
 



How to install Fotoxx 17.08.3 Photo Editor on Ubuntu 16.04, 17.04, 17.10

Harry
   Fotoxx is a free Linux program for editing photos or other images and managing a large collection. Image directories (folders) can be viewed as a scrolling gallery of thumbnail images. Navigating directories and subdirectories is simple and fast. Click on a thumbnail for a full window view of the image. The image can be zoomed, panned and scrolled using the mouse. Gallery thumbnails can vary from small to huge. Popup windows can be used to view multiple images at any scale. Galleries are also used to display image search results and albums.


Fotoxx 17.08.3 Changelog: 
  • Bugfix: user settings for video file types were not being saved. 
  • Bugfix: add missing popup diagnostic for libraw (RAW file) errors. Bugfix: check for hugin, not hugin-executor (recent change? distro diversity?).

Installation instructions:

   The GetDeb repository contains the latest packages of Fotoxx, available for Ubuntu 16.04, Ubuntu 17.04, and derivatives.

1. Add the GetDeb repository via command:

$ sudo sh -c 'echo "deb http://archive.getdeb.net/ubuntu $(lsb_release -sc)-getdeb apps" >> /etc/apt/sources.list.d/getdeb.list'

For Linux Mint 18.x, replace $(lsb_release -sc) in the command directly with xenial. Type in your password when prompted and hit Enter.
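The Mint substitution above can be made explicit with a small codename-mapping helper. This is an illustrative sketch, not part of the article's instructions; the Mint 18.x codenames listed are an assumption in the comments:

```shell
#!/bin/sh
# Sketch: compose the GetDeb apt line, mapping Linux Mint 18.x codenames
# (assumed here to be sarah/serena/sonya/sylvia, all based on 16.04) to
# the Ubuntu codename "xenial" that GetDeb expects.
codename_for() {
    case "$1" in
        sarah|serena|sonya|sylvia) echo xenial ;;  # Mint 18.x -> 16.04 base
        *) echo "$1" ;;                            # Ubuntu: use as-is
    esac
}

# Example: a Mint 18.2 (sonya) system. On a real machine you would pass
# "$(lsb_release -sc)" instead of a literal codename.
CODENAME=$(codename_for sonya)
APT_LINE="deb http://archive.getdeb.net/ubuntu ${CODENAME}-getdeb apps"
echo "$APT_LINE"
```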

2. Download and install the repository key via command:

$ wget -q -O- http://archive.getdeb.net/getdeb-archive.key | sudo apt-key add -

3. Run the commands to update package lists and install Fotoxx:

$ sudo apt-get update

$ sudo apt-get install fotoxx

(Optional) To remove Fotoxx, use command:

$ sudo apt-get remove fotoxx

$ sudo apt-get autoremove


Saturday, 21 October 2017

How To Download Videos From YouTube on Linux Using the Command Line [Easy Method]

Harry

 

 

 How To Install & Configure

Copy the command below, open your terminal (Ctrl+Alt+T), paste it, and press Enter.

To install and configure the YouTube downloader:
# git clone https://bitbucket.org/DevOps-Expert/youtube-downloader.git; cd youtube-downloader; ./ytd --configure; source ~/.bashrc


Enter your user password when asked; it is required to download the binary file for the YTD downloader.
Once the downloader is configured, you will see confirmation output in the terminal.

If everything is configured properly, you can use the commands below to download any video, MP3, or playlist easily.

How to Download YouTube Videos?


    It is a simple tool that allows you to download YouTube videos and playlists easily. It can also download videos in bulk from a file containing a list of YouTube URLs.

To get syntax help, run:

# ytd -h  



To download HD videos (1280x720), run:

# ytd -u https://www.youtube.com/watch?v=s7viEqT02Yc --hd 

To download an MP3 from YouTube, run:

# ytd -u https://www.youtube.com/watch?v=s7viEqT02Yc --mp3

To download a standard-quality video from YouTube:

# ytd -u https://www.youtube.com/watch?v=s7viEqT02Yc --video

To download HD videos, a playlist, or bulk videos using ./tmp/video_list.txt:

# ytd -o HD_Videos --bulk

# ytd -o HD_Videos --playlist

# ytd -o HD_Videos --batch

Note: -o creates a folder named HD_Videos where all videos will be downloaded. For bulk mode, you must add all the URLs you want to download to ./tmp/video_list.txt.
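If you prefer a plain shell loop over the built-in bulk mode, a minimal sketch reusing the ytd flags shown above (download_list and urls.txt are hypothetical names; assumes ytd is on your PATH):

```shell
# Hypothetical helper: feed each non-empty URL from a list file to ytd,
# one at a time, using the --video flag documented above.
download_list() {
  while IFS= read -r url; do
    [ -n "$url" ] && ytd -u "$url" --video
  done < "$1"
}
```

In use: `download_list urls.txt`, where urls.txt holds one YouTube URL per line.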


Hey friends,

If you find any bug, please share it with me so that I can fix it in a future version.

Email me at HarryTheITexpert@gmail.com



Source Code : https://bitbucket.org/DevOps-Expert/youtube-downloader


Wednesday, 18 October 2017

Install HP Print Drivers HPLIP 3.17.10 Adds New Printers on Ubuntu

Harry
    HPLIP is a free, open-source HP-developed solution for printing, scanning, and faxing with HP inkjet and laser based printers in Linux.


Drivers HPLIP 3.17.10 Changelog: 
Added Support for the Following New Scanners:
  • HP Scanjet Enterprise Flow N9120 fn2 Document Scanner
  • HP Digital Sender Flow 8500 fn2 Document Capture Workstation
Added support for the following new distros:
  • Debian 9.1

Installation instructions:
 
1. Download the installer (hplip-3.17.10.run) from the link below:


https://sourceforge.net/projects/hplip/files/hplip/3.17.10/

2. Open terminal (Ctrl+Alt+T) and run:



$ cd ~/Downloads/

$ chmod +x hplip-3.17.10.run

$ ./hplip-3.17.10.run

3. Restart your computer or re-plug your printer



Helm issues

Harry

  1. Helm install pod in pending state:
    When you execute kubectl get events you will see the following error:
    no persistent volumes available for this claim and no storage class is set or
    PersistentVolumeClaim is not bound
    This error usually occurs in Kubernetes clusters set up with kubeadm.
    You will need to create a PersistentVolume with the following YAML file:
    [code]
    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: redis-data
      labels:
        type: local
    spec:
      storageClassName: generic
      capacity:
        storage: 8Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/bitnami/redis"
    [/code]
    Create the PV with kubectl create -f pv-create.yml. Then you will need to create a PVC with the following YAML:

    [code]
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: redis-data
    spec:
      storageClassName: generic
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 8Gi
    [/code]
    Create the PVC with kubectl create -f pv-claim.yml. Check the PVC status with kubectl get pvc; the status should be Bound.

Installing Kubernetes 1.8.1 on centos 7 with flannel

Harry
Prerequisites:-

You should have at least two VMs (1 master and 1 slave) before creating the cluster, in order to test the full functionality of k8s.

1] Master :-

Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD (suggested)

2] Slave :-

Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD (suggested)

3] Also, make sure of following things.

  • Network interconnectivity between VMs.

  • hostnames

  • Prefer to give Static IP.

  • DNS entries

  • Disable SELinux


$ vi /etc/selinux/config

  • Disable and stop the firewall (if you are not familiar with firewall configuration):


$ systemctl stop firewalld

$ systemctl disable firewalld

The following steps create a k8s cluster on the above VMs using kubeadm on CentOS 7.

Step 1] Installing kubelet and kubeadm on all your hosts

$ ARCH=x86_64

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-${ARCH}

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg

       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

EOF

$ setenforce 0

$ yum install -y docker kubelet kubeadm kubectl kubernetes-cni

$ systemctl enable docker && systemctl start docker

$ systemctl enable kubelet && systemctl start kubelet

You might have an issue where the kubelet service does not start. You can see the error in /var/log/messages. If you see an error as follows:
Oct 16 09:55:33 k8s-master kubelet: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Oct 16 09:55:33 k8s-master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE

then you will have to initialize kubeadm first, as in the next step, and then start the kubelet service.
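A quick way to check for this condition before blaming kubelet (a sketch; the path is the kubeadm default):

```shell
# kubeadm init creates /etc/kubernetes/pki/ca.crt; until it exists,
# kubelet will keep failing with the error shown above.
if [ -f /etc/kubernetes/pki/ca.crt ]; then
  echo "CA present - kubelet can start"
else
  echo "run kubeadm init first"
fi
```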

Step 2.1] Initializing your master

$ kubeadm init

Note:-

  1. Execute the above command on the master node. It will select one of the interfaces to be used for the API server. If you want to use another interface, provide “--apiserver-advertise-address=<ip-address>” as an argument, so the whole command will be:

$ kubeadm init --apiserver-advertise-address=<ip-address>

 

  2. K8s provides the flexibility to use a network of your choice, like flannel, calico, etc. I am using the flannel network. For the flannel network we need to pass the network CIDR explicitly, so now the whole command will be:

$ kubeadm init --apiserver-advertise-address=<ip-address> --pod-network-cidr=10.244.0.0/16

Example: $ kubeadm init --apiserver-advertise-address=172.31.14.55 --pod-network-cidr=10.244.0.0/16

Step 2.2] Start using cluster

$ sudo cp /etc/kubernetes/admin.conf $HOME/
$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
$ export KUBECONFIG=$HOME/admin.conf
-> Use the same network CIDR as configured in the flannel YAML file that we are going to apply in step 3.

-> At the end you will get a token along with a join command; make a note of it, as it will be used to join the slaves.

 

Step 3] Installing a pod network

k8s supports different pod networks, and the choice is up to the user. For this demo I am using the flannel network. As of k8s 1.6, the cluster is more secure by default: it uses RBAC (Role-Based Access Control), so make sure that the network you are going to use supports RBAC and k8s 1.6.

  1. Create the RBAC resources:

$ kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Check whether the pods are being created:

$ kubectl get pods --all-namespaces

  2. Create the flannel pods:

$ kubectl apply -f   https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check whether the pods are being created:

$ kubectl get pods --all-namespaces -o wide

-> At this stage all your pods should be in the Running state.

-> The “-o wide” option gives more details, like the pod IP and the slave where it is deployed.

 

Step 4] Joining your nodes

 

SSH to the slave and execute the following command to join the existing cluster.

$ kubeadm join --token <token> <master-ip>:<master-port>

The join command may also include a --discovery-token-ca-cert-hash; make sure you copy the entire join command from the kubeadm init output to join the nodes.

Go to the master node and check whether the new slave has joined:

$ kubectl get nodes

-> If the slave is not ready, wait a few seconds; it will join soon.

 

Step 5]  Verify your cluster by running sample nginx application

$ vi  sample_nginx.yaml

---------------------------------------------

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

------------------------------------------------------

$ kubectl create -f sample_nginx.yaml

 

Verify pods are getting created or not.

$ kubectl get pods

$ kubectl get deployments

 

Now, let's expose the deployment so that the service is accessible to other pods in the cluster.

$ kubectl expose deployment nginx-deployment --name=nginx-service --port=80 --target-port=80 --type=NodePort

 

The above command will create a service named “nginx-service”. The service will be accessible on the port given by the “--port” option, which forwards to the pod's “--target-port”. A plain service is accessible within the cluster only; the “NodePort” type makes it reachable via your host IP.

 

--type=NodePort: when this option is given, k8s tries to find a free port in the range 30000-32767 on all the VMs of the cluster and binds the underlying service to it. If no such port is found, it returns an error.
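That port-range rule can be sketched as a quick check (a standalone sketch, not part of k8s itself):

```shell
# Validate that a port falls in the default allocatable NodePort range.
is_nodeport() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}
is_nodeport 31234 && echo "31234 is a valid NodePort"
is_nodeport 8080  || echo "8080 is outside the NodePort range"
```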

 

Check that the service has been created:

$ kubectl get svc

 

Try to curl from all the VMs, including the master; the nginx welcome page should be accessible:

$ curl <service-IP>:80

$ curl <master-ip>:<nodePort>

$ curl <slave-IP>:<nodePort>

Execute this from all the VMs. Nginx welcome page should be accessible.

Also, access the nginx home page using a browser.

Helm: Installation and Configuration

Harry

PREREQUISITES



  • You must have Kubernetes installed. We recommend version 1.4.1 or later.

  • You should also have a local configured copy of kubectl.


Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.

To find out which cluster Tiller would install to, you can run kubectl config current-context or kubectl cluster-info.
$ kubectl config current-context
my-cluster

INSTALL HELM


Download a binary release of the Helm client. You can use tools like homebrew, or look at the official releases page.

For more details, or for other options, see the installation guide.

INITIALIZE HELM AND INSTALL TILLER


Once you have Helm ready, you can initialize the local CLI and also install Tiller into your Kubernetes cluster in one step:
$ helm init

This will install Tiller into the Kubernetes cluster you saw with kubectl config current-context.

TIP: Want to install into a different cluster? Use the --kube-context flag.

TIP: When you want to upgrade Tiller, just run helm init --upgrade.

INSTALL AN EXAMPLE CHART


To install a chart, you can run the helm install command. Helm has several ways to find and install a chart, but the easiest is to use one of the official stable charts.
$ helm repo update              # Make sure we get the latest list of charts
$ helm install stable/mysql
Released smiling-penguin

In the example above, the stable/mysql chart was released, and the name of our new release is smiling-penguin. You can get a simple idea of the features of this MySQL chart by running helm inspect stable/mysql.

Whenever you install a chart, a new release is created. So one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.

The helm install command is a very powerful command with many capabilities. To learn more about it, check out the Using Helm Guide

LEARN ABOUT RELEASES


It’s easy to see what has been released using Helm:
$ helm ls
NAME VERSION UPDATED STATUS CHART
smiling-penguin 1 Wed Sep 28 12:59:46 2016 DEPLOYED mysql-0.1.0

The helm list function will show you a list of all deployed releases.

UNINSTALL A RELEASE


To uninstall a release, use the helm delete command:
$ helm delete smiling-penguin
Removed smiling-penguin

This will uninstall smiling-penguin from Kubernetes, but you will still be able to request information about that release:
$ helm status smiling-penguin
Status: DELETED
...

Because Helm tracks your releases even after you’ve deleted them, you can audit a cluster’s history, and even undelete a release (with helm rollback).

READING THE HELP TEXT


To learn more about the available Helm commands, use helm help or type a command followed by the -h flag:
$ helm get -h


Installing Helm


There are two parts to Helm: The Helm client (helm) and the Helm server (Tiller). This guide shows how to install the client, and then proceeds to show two ways to install the server.

INSTALLING THE HELM CLIENT


The Helm client can be installed either from source, or from pre-built binary releases.

From the Binary Releases


Every release of Helm provides binary releases for a variety of OSes. These binary versions can be manually downloaded and installed.

  1. Download your desired version
  2. Unpack it (tar -zxvf helm-v2.0.0-linux-amd64.tgz)
  3. Find the helm binary in the unpacked directory, and move it to its desired destination (mv linux-amd64/helm /usr/local/bin/helm)

From there, you should be able to run the client: helm help.

From Homebrew (macOS)


Members of the Kubernetes community have contributed a Helm formula build to Homebrew. This formula is generally up to date.
brew install kubernetes-helm

(Note: There is also a formula for emacs-helm, which is a different project.)

FROM SCRIPT


Helm now has an installer script that will automatically grab the latest version of the Helm client and install it locally.

You can fetch that script, and then execute it locally. It’s well documented so that you can read through it and understand what it is doing before you run it.
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh

Yes, you can curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash that if you want to live on the edge.

From Canary Builds


“Canary” builds are versions of the Helm software that are built from the latest master branch. They are not official releases, and may not be stable. However, they offer the opportunity to test the cutting edge features.

Canary Helm binaries are stored in the Kubernetes Helm GCS bucket.

From Source (Linux, macOS)


Building Helm from source is slightly more work, but is the best way to go if you want to test the latest (pre-release) Helm version.

You must have a working Go environment with glide and Mercurial installed.
$ cd $GOPATH
$ mkdir -p src/k8s.io
$ cd src/k8s.io
$ git clone https://github.com/kubernetes/helm.git
$ cd helm
$ make bootstrap build

The bootstrap target will attempt to install dependencies, rebuild the vendor/ tree, and validate configuration.

The build target will compile helm and place it in bin/helm. Tiller is also compiled, and is placed in bin/tiller.

INSTALLING TILLER


Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster. But for development, it can also be run locally, and configured to talk to a remote Kubernetes cluster.

Easy In-Cluster Installation


The easiest way to install tiller into the cluster is simply to run helm init. This will validate that helm’s local environment is set up correctly (and set it up if necessary). Then it will connect to whatever cluster kubectl connects to by default (kubectl config view). Once it connects, it will install tiller into the kube-system namespace.

After helm init, you should be able to run kubectl get pods --namespace kube-system and see Tiller running.

You can explicitly tell helm init to…

  • Install the canary build with the --canary-image flag

  • Install a particular image (version) with --tiller-image

  • Install to a particular cluster with --kube-context

  • Install into a particular namespace with --tiller-namespace


Once Tiller is installed, running helm version should show you both the client and server version. (If it shows only the client version, helm cannot yet connect to the server. Use kubectl to see if any tiller pods are running.)

Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.

Installing Tiller Canary Builds


Canary images are built from the master branch. They may not be stable, but they offer you the chance to test out the latest features.

The easiest way to install a canary image is to use helm init with the --canary-image flag:
$ helm init --canary-image

This will use the most recently built container image. You can always uninstall Tiller by deleting the Tiller deployment from the kube-system namespace using kubectl.

Running Tiller Locally


For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.

The process of building Tiller is explained above.

Once tiller has been built, simply start it:
$ bin/tiller
Tiller running on :44134

When Tiller is running locally, it will attempt to connect to the Kubernetes cluster that is configured by kubectl. (Run kubectl config view to see which cluster that is.)

You must tell helm to connect to this new local Tiller host instead of connecting to the one in-cluster. There are two ways to do this. The first is to specify the --host option on the command line. The second is to set the $HELM_HOST environment variable.
$ export HELM_HOST=localhost:44134
$ helm version # Should connect to localhost.
Client: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"db...", GitTreeState:"dirty"}
Server: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"a5...", GitTreeState:"dirty"}

Importantly, even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.

UPGRADING TILLER


As of Helm 2.2.0, Tiller can be upgraded using helm init --upgrade.

For older versions of Helm, or for manual upgrades, you can use kubectl to modify the Tiller image:
$ export TILLER_TAG=v2.0.0-beta.1        # Or whatever version you want
$ kubectl --namespace=kube-system set image deployments/tiller-deploy tiller=gcr.io/kubernetes-helm/tiller:$TILLER_TAG
deployment "tiller-deploy" image updated

Setting TILLER_TAG=canary will get the latest snapshot of master.

DELETING OR REINSTALLING TILLER


Because Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data. The recommended way of deleting Tiller is with kubectl delete deployment tiller-deploy --namespace kube-system, or more concisely helm reset.

Tiller can then be re-installed from the client with:
$ helm init

CONCLUSION


In most cases, installation is as simple as getting a pre-built helm binary and running helm init. This document covers additional cases for those who want to do more sophisticated things with Helm.

Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.


Kubernetes Distribution Guide


This document captures information about using Helm in specific Kubernetes environments.

We are trying to add more details to this document. Please contribute via Pull Requests if you can.

MINIKUBE


Helm is tested and known to work with minikube. It requires no additional configuration.

SCRIPTS/LOCAL-CLUSTER AND HYPERKUBE


Hyperkube configured via scripts/local-cluster.sh is known to work. For raw Hyperkube you may need to do some manual configuration.

GKE


Google’s GKE hosted Kubernetes platform is known to work with Helm, and requires no additional configuration.

UBUNTU WITH ‘KUBEADM’


Kubernetes bootstrapped with kubeadm is known to work on the following Linux distributions:

  • Ubuntu 16.04

  • CAN SOMEONE CONFIRM ON FEDORA?


Some versions of Helm (v2.0.0-beta2) require you to export KUBECONFIG=/etc/kubernetes/admin.conf or create a ~/.kube/config.
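A minimal sketch of that workaround (the path is kubeadm's default admin config):

```shell
# Point KUBECONFIG at kubeadm's admin config so kubectl and helm can
# find the cluster configuration.
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "using kubeconfig: $KUBECONFIG"
```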

CONTAINER LINUX BY COREOS


Helm requires that the kubelet have access to a copy of the socat program to proxy connections to the Tiller API. On Container Linux the kubelet runs inside of a hyperkube container image that has socat. So, even though Container Linux doesn’t ship socat, the container filesystem running the kubelet does have it. To learn more, read the Kubelet Wrapper docs.


Installation: Frequently Asked Questions


This section tracks some of the more frequently encountered issues with installing or getting started with Helm.

We’d love your help making this document better. To add, correct, or remove information, file an issue or send us a pull request.

DOWNLOADING


I want to know more about my downloading options.

Q: I can’t get to GitHub releases of the newest Helm. Where are they?

A: We no longer use GitHub releases. Binaries are now stored in a GCS public bucket.

Q: Why aren’t there Debian/Fedora/… native packages of Helm?

A: We’d love to provide these or point you toward a trusted provider. If you’re interested in helping, we’d love it. This is how the Homebrew formula was started.

Q: Why do you provide a curl ...|bash script?

A: There is a script in our repository (scripts/get) that can be executed as a curl ..|bash script. The transfers are all protected by HTTPS, and the script does some auditing of the packages it fetches. However, the script has all the usual dangers of any shell script.

We provide it because it is useful, but we suggest that users carefully read the script first. What we’d really like, though, are better packaged releases of Helm.

INSTALLING


I’m trying to install Helm/Tiller, but something is not right.

Q: How do I put the Helm client files somewhere other than ~/.helm?

A: Set the $HELM_HOME environment variable, and then run helm init:
export HELM_HOME=/some/path
helm init --client-only

Note that if you have existing repositories, you will need to re-add them with helm repo add....

Q: How do I configure Helm, but not install Tiller?

A: By default, helm init will ensure that the local $HELM_HOME is configured, and then install Tiller on your cluster. To locally configure, but not install Tiller, use helm init --client-only.

Q: How do I manually install Tiller on the cluster?

A: Tiller is installed as a Kubernetes deployment. You can get the manifest by running helm init --dry-run --debug, and then manually install it with kubectl. It is suggested that you do not remove or change the labels on that deployment, as they are sometimes used by supporting scripts and tools.

Q: Why do I get Error response from daemon: target is unknown during Tiller install?

A: Users have reported being unable to install Tiller on Kubernetes instances that are using Docker 1.13.0. The root cause of this was a bug in Docker that made that one version incompatible with images pushed to the Docker registry by earlier versions of Docker.

This issue was fixed shortly after the release, and is available in Docker 1.13.1-RC1 and later.

GETTING STARTED


I successfully installed Helm/Tiller but I can’t use it.

Q: Trying to use Helm, I get the error “client transport was broken”
E1014 02:26:32.885226   16143 portforward.go:329] an error occurred forwarding 37008 -> 44134: error forwarding port 44134 to pod tiller-deploy-2117266891-e4lev_kube-system, uid : unable to do port forwarding: socat not found.
2016/10/14 02:26:32 transport: http2Client.notifyError got notified that the client transport was broken EOF.
Error: transport is closing

A: This is usually a good indication that Kubernetes is not set up to allow port forwarding.

Typically, the missing piece is socat. If you are running CoreOS, we have been told that it may have been misconfigured on installation. The CoreOS team recommends reading their Kubelet Wrapper docs.


Q: Trying to use Helm, I get the error “lookup XXXXX on 8.8.8.8:53: no such host”
Error: Error forwarding ports: error upgrading connection: dial tcp: lookup kube-4gb-lon1-02 on 8.8.8.8:53: no such host

A: We have seen this issue with Ubuntu and Kubeadm in multi-node clusters. The issue is that the nodes expect certain DNS records to be obtainable via global DNS. Until this is resolved upstream, you can work around the issue as follows:

1) Add entries to /etc/hosts on the master mapping your hostnames to their public IPs
2) Install dnsmasq on the master (e.g. apt install -y dnsmasq)
3) Kill the k8s api server container on the master (kubelet will recreate it)
4) Then systemctl restart docker (or reboot the master) for it to pick up the /etc/resolv.conf changes
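A hypothetical sketch of step 1, showing the /etc/hosts line format (the IP is illustrative; the hostname is taken from the error message above):

```shell
# Compose the hosts entry mapping a node hostname to its public IP;
# append the printed line to /etc/hosts on the master.
node_ip="203.0.113.10"            # illustrative public IP
node_host="kube-4gb-lon1-02"      # hostname from the DNS lookup error
printf '%s %s\n' "$node_ip" "$node_host"
```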

See this issue for more information: https://github.com/kubernetes/helm/issues/1455

Q: On GKE (Google Container Engine) I get “No SSH tunnels currently open”
Error: Error forwarding ports: error upgrading connection: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-[redacted]"?

Another variation of the error message is:
Unable to connect to the server: x509: certificate signed by unknown authority


A: The issue is that your local Kubernetes config file must have the correct credentials.

When you create a cluster on GKE, it will give you credentials, including SSL certificates and certificate authorities. These need to be stored in a Kubernetes config file (default: ~/.kube/config) so that kubectl and helm can access them.

Q: When I run a Helm command, I get an error about the tunnel or proxy

A: Helm uses the Kubernetes proxy service to connect to the Tiller server. If the command kubectl proxy does not work for you, neither will Helm. Typically, the error is related to a missing socat service.

Q: Tiller crashes with a panic

When I run a command on Helm, Tiller crashes with an error like this:
Tiller is listening on :44134
Probes server is listening on :44135
Storage driver is ConfigMap
Cannot initialize Kubernetes connection: the server has asked for the client to provide credentials
2016-12-20 15:18:40.545739 I | storage.go:37: Getting release "bailing-chinchilla" (v1) from storage
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x8053d5]

goroutine 77 [running]:
panic(0x1abbfc0, 0xc42000a040)
/usr/local/go/src/runtime/panic.go:500 +0x1a1
k8s.io/helm/vendor/k8s.io/kubernetes/pkg/client/unversioned.(*ConfigMaps).Get(0xc4200c6200, 0xc420536100, 0x15, 0x1ca7431, 0x6, 0xc42016b6a0)
/home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/k8s.io/kubernetes/pkg/client/unversioned/configmap.go:58 +0x75
k8s.io/helm/pkg/storage/driver.(*ConfigMaps).Get(0xc4201d6190, 0xc420536100, 0x15, 0xc420536100, 0x15, 0xc4205360c0)
/home/ubuntu/.go_workspace/src/k8s.io/helm/pkg/storage/driver/cfgmaps.go:69 +0x62
k8s.io/helm/pkg/storage.(*Storage).Get(0xc4201d61a0, 0xc4205360c0, 0x12, 0xc400000001, 0x12, 0x0, 0xc420200070)
/home/ubuntu/.go_workspace/src/k8s.io/helm/pkg/storage/storage.go:38 +0x160
k8s.io/helm/pkg/tiller.(*ReleaseServer).uniqName(0xc42002a000, 0x0, 0x0, 0xc42016b800, 0xd66a13, 0xc42055a040, 0xc420558050, 0xc420122001)
/home/ubuntu/.go_workspace/src/k8s.io/helm/pkg/tiller/release_server.go:577 +0xd7
k8s.io/helm/pkg/tiller.(*ReleaseServer).prepareRelease(0xc42002a000, 0xc42027c1e0, 0xc42002a001, 0xc42016bad0, 0xc42016ba08)
/home/ubuntu/.go_workspace/src/k8s.io/helm/pkg/tiller/release_server.go:630 +0x71
k8s.io/helm/pkg/tiller.(*ReleaseServer).InstallRelease(0xc42002a000, 0x7f284c434068, 0xc420250c00, 0xc42027c1e0, 0x0, 0x31a9, 0x31a9)
/home/ubuntu/.go_workspace/src/k8s.io/helm/pkg/tiller/release_server.go:604 +0x78
k8s.io/helm/pkg/proto/hapi/services._ReleaseService_InstallRelease_Handler(0x1c51f80, 0xc42002a000, 0x7f284c434068, 0xc420250c00, 0xc42027c190, 0x0, 0x0, 0x0, 0x0, 0x0)
/home/ubuntu/.go_workspace/src/k8s.io/helm/pkg/proto/hapi/services/tiller.pb.go:747 +0x27d
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc4202f3ea0, 0x28610a0, 0xc420078000, 0xc420264690, 0xc420166150, 0x288cbe8, 0xc420250bd0, 0x0, 0x0)
/home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:608 +0xc50
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).handleStream(0xc4202f3ea0, 0x28610a0, 0xc420078000, 0xc420264690, 0xc420250bd0)
/home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:766 +0x6b0
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc420124710, 0xc4202f3ea0, 0x28610a0, 0xc420078000, 0xc420264690)
/home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:419 +0xab
created by k8s.io/helm/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
/home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:420 +0xa3

A: Check your security settings for Kubernetes.

A panic in Tiller is almost always the result of a failure to negotiate with the Kubernetes API server (at which point Tiller can no longer do anything useful, so it panics and exits).

Often, this is a result of authentication failing because the Pod in which Tiller is running does not have the right token.

To fix this, you will need to change your Kubernetes configuration. Make sure that --service-account-private-key-file from controller-manager and --service-account-key-file from apiserver point to the same x509 RSA key.

UPGRADING


My Helm used to work, then I upgraded. Now it is broken.

Q: After upgrade, I get the error “Client version is incompatible”. What’s wrong?

Tiller and Helm have to negotiate a common version to make sure that they can safely communicate without breaking API assumptions. That error means that the version difference is too great to safely continue. Typically, you need to upgrade Tiller manually for this.

The Installation Guide has definitive information about safely upgrading Helm and Tiller.

The rules for version numbers are as follows:

  • Pre-release versions are incompatible with everything else. Alpha.1 is incompatible with Alpha.2.

  • Patch revisions are compatible: 1.2.3 is compatible with 1.2.4.

  • Minor revisions are not compatible: 1.2.0 is not compatible with 1.3.0, though we may relax this constraint in the future.

  • Major revisions are not compatible: 1.0.0 is not compatible with 2.0.0.
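As a rough illustration, the rules above can be expressed as a small predicate. This is a sketch of the stated policy, not Helm's actual negotiation code, and the version-parsing helper is hypothetical:

```python
def compatible(v1: str, v2: str) -> bool:
    """Illustrative check of the compatibility rules above.

    Versions are 'MAJOR.MINOR.PATCH' with an optional '-PRERELEASE'
    suffix, e.g. '2.1.3' or '2.1.0-alpha.1'.
    """
    def parse(v):
        core, _, pre = v.partition("-")
        major, minor, patch = (int(x) for x in core.split("."))
        return major, minor, patch, pre

    maj1, min1, _, pre1 = parse(v1)
    maj2, min2, _, pre2 = parse(v2)

    # Pre-release versions are incompatible with everything but themselves.
    if pre1 or pre2:
        return v1 == v2
    # Major and minor must match; patch differences are fine.
    return (maj1, min1) == (maj2, min2)

print(compatible("1.2.3", "1.2.4"))  # patch difference: True
print(compatible("1.2.0", "1.3.0"))  # minor difference: False
```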


UNINSTALLING


I am trying to remove stuff.

Q: When I delete the Tiller deployment, how come all the releases are still there?

Releases are stored in ConfigMaps inside of the kube-system namespace. You will have to manually delete them to get rid of the record.

Q: I want to delete my local Helm. Where are all its files?

Along with the helm binary, Helm stores some files in $HELM_HOME, which is located by default in ~/.helm.

+++ aliases = [ "using_helm.md", "docs/using_helm.md", "using_helm/using_helm.md", "developing_charts/using_helm.md" ] +++

Using Helm


This guide explains the basics of using Helm (and Tiller) to manage packages on your Kubernetes cluster. It assumes that you have already installed the Helm client and the Tiller server (typically by helm init).

If you are simply interested in running a few quick commands, you may wish to begin with the Quickstart Guide. This chapter covers the particulars of Helm commands, and explains how to use Helm.

THREE BIG CONCEPTS


A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster. Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.

A Repository is the place where charts can be collected and shared. It’s like Perl’s CPAN archive or the Fedora Package Database, but for Kubernetes packages.

A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster. And each time it is installed, a new release is created. Consider a MySQL chart. If you want two databases running in your cluster, you can install that chart twice. Each one will have its own release, which will in turn have its own release name.

With these concepts in mind, we can now explain Helm like this:

Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.

‘HELM SEARCH’: FINDING CHARTS


When you first install Helm, it is preconfigured to talk to the official Kubernetes charts repository. This repository contains a number of carefully curated and maintained charts. This chart repository is named stable by default.

You can see which charts are available by running helm search:
$ helm search
NAME VERSION DESCRIPTION
stable/drupal 0.3.2 One of the most versatile open source content m...
stable/jenkins 0.1.0 A Jenkins Helm chart for Kubernetes.
stable/mariadb 0.5.1 Chart for MariaDB
stable/mysql 0.1.0 Chart for MySQL
...

With no filter, helm search shows you all of the available charts. You can narrow down your results by searching with a filter:
$ helm search mysql
NAME VERSION DESCRIPTION
stable/mysql 0.1.0 Chart for MySQL
stable/mariadb 0.5.1 Chart for MariaDB

Now you will only see the results that match your filter.

Why is mariadb in the list? Because its package description relates it to MySQL. We can use helm inspect chart to see this:
$ helm inspect stable/mariadb
Fetched stable/mariadb to mariadb-0.5.1.tgz
description: Chart for MariaDB
engine: gotpl
home: https://mariadb.org
keywords:
- mariadb
- mysql
- database
- sql
...

Search is a good way to find available packages. Once you have found a package you want to install, you can use helm install to install it.

‘HELM INSTALL’: INSTALLING A PACKAGE


To install a new package, use the helm install command. At its simplest, it takes only one argument: The name of the chart.
$ helm install stable/mariadb
Fetched stable/mariadb-0.3.0 to /Users/mattbutcher/Code/Go/src/k8s.io/helm/mariadb-0.3.0.tgz
happy-panda
Last Deployed: Wed Sep 28 12:32:28 2016
Namespace: default
Status: DEPLOYED

Resources:
==> extensions/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
happy-panda-mariadb 1 0 0 0 1s

==> v1/Secret
NAME TYPE DATA AGE
happy-panda-mariadb Opaque 2 1s

==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
happy-panda-mariadb 10.0.0.70 <none> 3306/TCP 1s


Notes:
MariaDB can be accessed via port 3306 on the following DNS name from within your cluster:
happy-panda-mariadb.default.svc.cluster.local

To connect to your database run the following command:

kubectl run happy-panda-mariadb-client --rm --tty -i --image bitnami/mariadb --command -- mysql -h happy-panda-mariadb

Now the mariadb chart is installed. Note that installing a chart creates a new release object. The release above is named happy-panda. (If you want to use your own release name, simply use the --name flag on helm install.)

During installation, the helm client will print useful information about which resources were created, what the state of the release is, and also whether there are additional configuration steps you can or should take.

Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600M in size, and may take a long time to install into the cluster.

To keep track of a release’s state, or to re-read configuration information, you can use helm status:
$ helm status happy-panda
Last Deployed: Wed Sep 28 12:32:28 2016
Namespace: default
Status: DEPLOYED

Resources:
==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
happy-panda-mariadb 10.0.0.70 <none> 3306/TCP 4m

==> extensions/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
happy-panda-mariadb 1 1 1 1 4m

==> v1/Secret
NAME TYPE DATA AGE
happy-panda-mariadb Opaque 2 4m


Notes:
MariaDB can be accessed via port 3306 on the following DNS name from within your cluster:
happy-panda-mariadb.default.svc.cluster.local

To connect to your database run the following command:

kubectl run happy-panda-mariadb-client --rm --tty -i --image bitnami/mariadb --command -- mysql -h happy-panda-mariadb

The above shows the current state of your release.

Customizing the Chart Before Installing


Installing the way we have here will only use the default configuration options for this chart. Many times, you will want to customize the chart to use your preferred configuration.

To see what options are configurable on a chart, use helm inspect values:
$ helm inspect values stable/mariadb
Fetched stable/mariadb-0.3.0.tgz to /Users/mattbutcher/Code/Go/src/k8s.io/helm/mariadb-0.3.0.tgz
## Bitnami MariaDB image version
## ref: https://hub.docker.com/r/bitnami/mariadb/tags/
##
## Default: none
imageTag: 10.1.14-r3

## Specify a imagePullPolicy
## Default to 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
# imagePullPolicy:

## Specify password for root user
## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#setting-the-root-password-on-first-run
##
# mariadbRootPassword:

## Create a database user
## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#creating-a-database-user-on-first-run
##
# mariadbUser:
# mariadbPassword:

## Create a database
## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#creating-a-database-on-first-run
##
# mariadbDatabase:

You can then override any of these settings in a YAML formatted file, and then pass that file during installation.
$ echo '{mariadbUser: user0, mariadbDatabase: user0db}' > config.yaml
$ helm install -f config.yaml stable/mariadb

The above will create a default MariaDB user with the name user0, and grant this user access to a newly created user0db database, but will accept all the rest of the defaults for that chart.

There are two ways to pass configuration data during install:

  • --values (or -f): Specify a YAML file with overrides. This can be specified multiple times and the rightmost file will take precedence

  • --set: Specify overrides on the command line.


If both are used, --set values are merged into --values with higher precedence.

The Format and Limitations of --set


The --set option takes zero or more name/value pairs. At its simplest, it is used like this: --set name=value. The YAML equivalent of that is:
name: value

Multiple values are separated by , characters. So --set a=b,c=d becomes:
a: b
c: d

More complex expressions are supported. For example, --set outer.inner=value is translated into this:
outer:
  inner: value

Lists can be expressed by enclosing values in { and }. For example, --set name={a, b, c} translates to:
name:
- a
- b
- c

As of Helm 2.5.0, it is possible to access list items using an array index syntax. For example, --set servers[0].port=80 becomes:
servers:
- port: 80

Multiple values can be set this way. The line --set servers[0].port=80,servers[0].host=example becomes:
servers:
- port: 80
  host: example

Sometimes you need to use special characters in your --set lines. You can use a backslash to escape the characters; --set name=value1\,value2 will become:
name: "value1,value2"

Similarly, you can escape dot sequences as well, which may come in handy when charts use the toYaml function to parse annotations, labels and node selectors. The syntax for --set nodeSelector."kubernetes\.io/role"=master becomes:
nodeSelector:
  kubernetes.io/role: master

Deeply nested data structures can be difficult to express using --set. Chart designers are encouraged to consider the --set usage when designing the format of a values.yaml file.
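To make the format above concrete, here is a small parser for a simplified subset of the --set syntax. Lists ({a,b,c}) and array indexes (servers[0].port) are omitted for brevity; this is an illustration of the format described above, not Helm's actual parser:

```python
import re

def parse_set(expr: str) -> dict:
    """Parse a simplified subset of Helm's --set syntax into a nested dict.

    Supports name=value pairs separated by unescaped commas, dotted keys
    (with '\\.' escaping a literal dot), and '\\,' escaping a literal comma.
    """
    result = {}
    # Split on commas that are not preceded by a backslash.
    for pair in re.split(r"(?<!\\),", expr):
        key, _, value = pair.partition("=")
        value = value.replace("\\,", ",")
        # Split the key on unescaped dots, then unescape the pieces.
        parts = [p.replace("\\.", ".") for p in re.split(r"(?<!\\)\.", key)]
        node = result
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return result

print(parse_set("a=b,c=d"))
# {'a': 'b', 'c': 'd'}
print(parse_set("outer.inner=value"))
# {'outer': {'inner': 'value'}}
print(parse_set(r"name=value1\,value2"))
# {'name': 'value1,value2'}
print(parse_set(r"nodeSelector.kubernetes\.io/role=master"))
# {'nodeSelector': {'kubernetes.io/role': 'master'}}
```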

More Installation Methods


The helm install command can install from several sources:

  • A chart repository (as we’ve seen above)

  • A local chart archive (helm install foo-0.1.1.tgz)

  • An unpacked chart directory (helm install path/to/foo)

  • A full URL (helm install https://example.com/charts/foo-1.2.3.tgz)


‘HELM UPGRADE’ AND ‘HELM ROLLBACK’: UPGRADING A RELEASE, AND RECOVERING ON FAILURE


When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command.

An upgrade takes an existing release and upgrades it according to the information you provide. Because Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade. It will only update things that have changed since the last release.
$ helm upgrade -f panda.yaml happy-panda stable/mariadb
Fetched stable/mariadb-0.3.0.tgz to /Users/mattbutcher/Code/Go/src/k8s.io/helm/mariadb-0.3.0.tgz
happy-panda has been upgraded. Happy Helming!
Last Deployed: Wed Sep 28 12:47:54 2016
Namespace: default
Status: DEPLOYED
...

In the above case, the happy-panda release is upgraded with the same chart, but with a new YAML file:
mariadbUser: user1

We can use helm get values to see whether that new setting took effect.
$ helm get values happy-panda
mariadbUser: user1

The helm get command is a useful tool for looking at a release in the cluster. And as we can see above, it shows that our new values from panda.yaml were deployed to the cluster.

Now, if something does not go as planned during a release, it is easy to roll back to a previous release using helm rollback [RELEASE] [REVISION].
$ helm rollback happy-panda 1

The above rolls back our happy-panda to its very first release version. A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1. The first revision number is always 1. And we can use helm history [RELEASE] to see revision numbers for a certain release.
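The revision bookkeeping described above can be modeled in a few lines. This is a toy model for illustration only, not Helm's storage code:

```python
class ReleaseHistory:
    """Toy model of release revisions: every install, upgrade, or
    rollback appends a new revision, numbered from 1."""

    def __init__(self, name, values):
        self.name = name
        self.revisions = [values]  # revision 1 = the initial install

    @property
    def revision(self):
        return len(self.revisions)

    def upgrade(self, values):
        self.revisions.append(values)

    def rollback(self, revision):
        # Rolling back does not rewind history; it adds a new revision
        # whose contents are copied from the target revision.
        self.revisions.append(self.revisions[revision - 1])

release = ReleaseHistory("happy-panda", {"mariadbUser": "user0"})
release.upgrade({"mariadbUser": "user1"})  # revision 2
release.rollback(1)                        # revision 3, values of revision 1
print(release.revision)                    # 3
print(release.revisions[-1])               # {'mariadbUser': 'user0'}
```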

HELPFUL OPTIONS FOR INSTALL/UPGRADE/ROLLBACK


There are several other helpful options you can specify for customizing the behavior of Helm during an install/upgrade/rollback. Please note that this is not a full list of cli flags. To see a description of all flags, just run helm <command> --help.

  • --timeout: A value in seconds to wait for Kubernetes commands to complete. This defaults to 300 (5 minutes)

  • --wait: Waits until all Pods are in a ready state, PVCs are bound, Deployments have minimum (Desired minus maxUnavailable) Pods in ready state and Services have an IP address (and Ingress if a LoadBalancer) before marking the release as successful. It will wait for as long as the --timeout value. If timeout is reached, the release will be marked as FAILED.


Note: In scenarios where a Deployment has replicas set to 1 and maxUnavailable is not set to 0 as part of the rolling update strategy, --wait will return as ready, since it has satisfied the minimum number of Pods in the ready condition.

  • --no-hooks: This skips running hooks for the command

  • --recreate-pods (only available for upgrade and rollback): This flag will cause all pods to be recreated (with the exception of pods belonging to deployments)

‘HELM DELETE’: DELETING A RELEASE


When it is time to uninstall or delete a release from the cluster, use the helm delete command:
$ helm delete happy-panda

This will remove the release from the cluster. You can see all of your currently deployed releases with the helm list command:
$ helm list
NAME VERSION UPDATED STATUS CHART
inky-cat 1 Wed Sep 28 12:59:46 2016 DEPLOYED alpine-0.1.0

From the output above, we can see that the happy-panda release was deleted.

However, Helm always keeps records of what releases happened. Need to see the deleted releases? helm list --deleted shows those, and helm list --all shows all of the releases (deleted and currently deployed, as well as releases that failed):
$ helm list --all
NAME VERSION UPDATED STATUS CHART
happy-panda 2 Wed Sep 28 12:47:54 2016 DELETED mariadb-0.3.0
inky-cat 1 Wed Sep 28 12:59:46 2016 DEPLOYED alpine-0.1.0
kindred-angelf 2 Tue Sep 27 16:16:10 2016 DELETED alpine-0.1.0

Because Helm keeps records of deleted releases, a release name cannot be re-used. (If you really need to re-use a release name, you can use the --replace flag, but it will simply re-use the existing release and replace its resources.)

Note that because releases are preserved in this way, you can roll back a deleted release and have it re-activate.

‘HELM REPO’: WORKING WITH REPOSITORIES


So far, we’ve been installing charts only from the stable repository. But you can configure helm to use other repositories. Helm provides several repository tools under the helm repo command.

You can see which repositories are configured using helm repo list:
$ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://localhost:8879/charts
mumoshu https://mumoshu.github.io/charts

And new repositories can be added with helm repo add:
$ helm repo add dev https://example.com/dev-charts

Because chart repositories change frequently, at any point you can make sure your Helm client is up to date by running helm repo update.

CREATING YOUR OWN CHARTS


The Chart Development Guide explains how to develop your own charts. But you can get started quickly by using the helm create command:
$ helm create deis-workflow
Creating deis-workflow

Now there is a chart in ./deis-workflow. You can edit it and create your own templates.

As you edit your chart, you can validate that it is well-formatted by running helm lint.

When it’s time to package the chart up for distribution, you can run the helm package command:
$ helm package deis-workflow
deis-workflow-0.1.0.tgz

And that chart can now easily be installed by helm install:
$ helm install ./deis-workflow-0.1.0.tgz
...

Charts that are archived can be loaded into chart repositories. See the documentation for your chart repository server to learn how to upload.

Note: The stable repository is managed on the Kubernetes Charts GitHub repository. That project accepts chart source code, and (after audit) packages those for you.

TILLER, NAMESPACES AND RBAC


In some cases you may wish to scope Tiller or deploy multiple Tillers to a single cluster. Here are some best practices when operating in those circumstances.

  1. Tiller can be installed into any namespace. By default, it is installed into kube-system. You can run multiple Tillers provided they each run in their own namespace.
  2. Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings. You can add a service account to Tiller when configuring Helm via helm init --service-account <NAME>. You can find more information about that here.
  3. Release names are unique PER TILLER INSTANCE.
  4. Charts should only contain resources that exist in a single namespace.
  5. It is not recommended to have multiple Tillers configured to manage resources in the same namespace.

CONCLUSION


This chapter has covered the basic usage patterns of the helm client, including searching, installation, upgrading, and deleting. It has also covered useful utility commands like helm status, helm get, and helm repo.

For more information on these commands, take a look at Helm’s built-in help: helm help.

In the next chapter, we look at the process of developing charts.

+++ aliases = [ "plugins.md", "docs/plugins.md", "using_helm/plugins.md", "developing_charts/plugins.md" ] +++

The Helm Plugins Guide


Helm 2.1.0 introduced the concept of a client-side Helm plugin. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.

Existing plugins can be found in the related section or by searching GitHub.

This guide explains how to use and create plugins.

AN OVERVIEW


Helm plugins are add-on tools that integrate seamlessly with Helm. They provide a way to extend the core feature set of Helm, but without requiring every new feature to be written in Go and added to the core tool.

Helm plugins have the following features:

  • They can be added and removed from a Helm installation without impacting the core Helm tool.

  • They can be written in any programming language.

  • They integrate with Helm, and will show up in helm help and other places.


Helm plugins live in $(helm home)/plugins.

The Helm plugin model is partially modeled on Git’s plugin model. To that end, you may sometimes hear helm referred to as the porcelain layer, with plugins being the plumbing. This is a shorthand way of suggesting that Helm provides the user experience and top level processing logic, while the plugins do the “detail work” of performing a desired action.

INSTALLING A PLUGIN


A Helm plugin management system is in the works. But in the short term, plugins are installed by copying the plugin directory into $(helm home)/plugins.
$ cp -a myplugin/ $(helm home)/plugins/

If you have a plugin tar distribution, simply untar the plugin into the $(helm home)/plugins directory.

BUILDING PLUGINS


In many ways, a plugin is similar to a chart. Each plugin has a top-level directory, and then a plugin.yaml file.
$(helm home)/plugins/
  |- keybase/
      |
      |- plugin.yaml
      |- keybase.sh


In the example above, the keybase plugin is contained inside of a directory named keybase. It has two files: plugin.yaml (required) and an executable script, keybase.sh (optional).

The core of a plugin is a simple YAML file named plugin.yaml. Here is a plugin YAML for a plugin that adds support for Keybase operations:
name: "keybase"
version: "0.1.0"
usage: "Integrate Keybase.io tools with Helm"
description: |-
  This plugin provides Keybase services to Helm.
ignoreFlags: false
useTunnel: false
command: "$HELM_PLUGIN_DIR/keybase.sh"

The name is the name of the plugin. When Helm executes this plugin, this is the name it will use (e.g. helm NAME will invoke this plugin).

name should match the directory name. In our example above, that means the plugin with name: keybase should be contained in a directory named keybase.

Restrictions on name:

  • name cannot duplicate one of the existing helm top-level commands.

  • name must be restricted to the characters ASCII a-z, A-Z, 0-9, _ and -.
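These restrictions can be checked with a simple pattern. This is an illustrative check, not part of Helm, and the set of reserved top-level commands shown is deliberately partial:

```python
import re

# Partial, illustrative list of built-in helm commands a plugin
# must not shadow.
RESERVED = {"install", "delete", "upgrade", "rollback", "list", "repo",
            "search", "status", "init", "get", "home", "help", "version"}

def valid_plugin_name(name: str) -> bool:
    """True if name uses only ASCII letters, digits, '_' and '-',
    and does not duplicate a built-in top-level command."""
    return bool(re.fullmatch(r"[A-Za-z0-9_-]+", name)) and name not in RESERVED

print(valid_plugin_name("keybase"))    # True
print(valid_plugin_name("install"))    # False: shadows a built-in command
print(valid_plugin_name("my plugin"))  # False: space is not allowed
```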


version is the SemVer 2 version of the plugin. usage and description are both used to generate the help text of a command.

The ignoreFlags switch tells Helm to not pass flags to the plugin. So if a plugin is called with helm myplugin --foo and ignoreFlags: true, then --foo is silently discarded.

The useTunnel switch indicates that the plugin needs a tunnel to Tiller. This should be set to true anytime a plugin talks to Tiller. It will cause Helm to open a tunnel, and then set $TILLER_HOST to the right local address for that tunnel. But don’t worry: if Helm detects that a tunnel is not necessary because Tiller is running locally, it will not create the tunnel.

Finally, and most importantly, command is the command that this plugin will execute when it is called. Environment variables are interpolated before the plugin is executed. The pattern above illustrates the preferred way to indicate where the plugin program lives.

There are some strategies for working with plugin commands:

  • If a plugin includes an executable, the executable for a command: should be packaged in the plugin directory.

  • The command: line will have any environment variables expanded before execution. $HELM_PLUGIN_DIR will point to the plugin directory.

  • The command itself is not executed in a shell. So you can’t oneline a shell script.

  • Helm injects lots of configuration into environment variables. Take a look at the environment to see what information is available.

  • Helm makes no assumptions about the language of the plugin. You can write it in whatever you prefer.

  • Commands are responsible for implementing specific help text for -h and --help. Helm will use usage and description for helm help and helm help myplugin, but will not handle helm myplugin --help.


ENVIRONMENT VARIABLES


When Helm executes a plugin, it passes the outer environment to the plugin, and also injects some additional environment variables.

Variables like KUBECONFIG are set for the plugin if they are set in the outer environment.

The following variables are guaranteed to be set:

  • HELM_PLUGIN: The path to the plugins directory

  • HELM_PLUGIN_NAME: The name of the plugin, as invoked by helm. So helm myplug will have the short name myplug.

  • HELM_PLUGIN_DIR: The directory that contains the plugin.

  • HELM_BIN: The path to the helm command (as executed by the user).

  • HELM_HOME: The path to the Helm home.

  • HELM_PATH_*: Paths to important Helm files and directories are stored in environment variables prefixed by HELM_PATH.

  • TILLER_HOST: The domain:port to Tiller. If a tunnel is created, this will point to the local endpoint for the tunnel. Otherwise, it will point to $HELM_HOST, --host, or the default host (according to Helm’s rules of precedence).


While HELM_HOST may be set, there is no guarantee that it will point to the correct Tiller instance. This is done to allow a plugin developer to access HELM_HOST in its raw state when the plugin itself needs to manually configure a connection.
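A plugin written in Python, for instance, would simply read these variables from its environment. The sketch below is illustrative only; it collects a few of the variables listed above and simulates the values Helm would inject:

```python
import os

def plugin_environment():
    """Collect the Helm-provided variables this plugin cares about,
    falling back to None when a variable is not set."""
    keys = ("HELM_PLUGIN_NAME", "HELM_PLUGIN_DIR", "HELM_BIN",
            "HELM_HOME", "TILLER_HOST")
    return {key: os.environ.get(key) for key in keys}

# Simulate the environment Helm would inject before executing the plugin.
os.environ["HELM_PLUGIN_NAME"] = "myplug"
os.environ["TILLER_HOST"] = "localhost:44134"

env = plugin_environment()
print(env["HELM_PLUGIN_NAME"])  # myplug
print(env["TILLER_HOST"])       # localhost:44134
```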

A NOTE ON USETUNNEL


If a plugin specifies useTunnel: true, Helm will do the following (in order):

  1. Parse global flags and the environment
  2. Create the tunnel
  3. Set TILLER_HOST
  4. Execute the plugin
  5. Close the tunnel

The tunnel is removed as soon as the command returns. So, for example, a command cannot background a process and assume that that process will be able to use the tunnel.

A NOTE ON FLAG PARSING


When executing a plugin, Helm will parse global flags for its own use. Some of these flags are not passed on to the plugin.

  • --debug: If this is specified, $HELM_DEBUG is set to 1

  • --home: This is converted to $HELM_HOME

  • --host: This is converted to $HELM_HOST

  • --kube-context: This is simply dropped. If your plugin uses useTunnel, this is used to set up the tunnel for you.


Plugins should display help text and then exit for -h and --help. In all other cases, plugins may use flags as appropriate.