Deploy IBM Cloud Private
In my current job at IBM we talk a lot about “IBM Cloud Private”, which lets you build your own cloud using, at its core, Kubernetes, Docker and Helm. I decided to look into this by deploying it on my local VMware system on two Ubuntu VMs and playing around with it.
In the end it should connect my Domoticz server with the “Weather Company” to make sure my sunscreens go down and up according to the weather API. It should also incorporate Watson with its speech recognition so I can talk to my Domoticz server and create a true “smart home”, using Node-RED to connect all the dots. Although the installation is fully documented, as you can expect from IBM, I made a specific guide for Ubuntu and discovered some side notes that were not covered in depth by the IBM guide, which you can find here.
[Requirements]
At the time of this writing version 2.1.0.2 was the latest and greatest, and the requirements are tough for this version compared to 1.x. Even if you disable the monitoring services, you will still need at least two nodes with 8GB of RAM and 4 or more cores each.
I installed a clean Ubuntu (16.04.4) and did the usual apt-get update and apt-get upgrade. One node is called “icp-master” and the second “icp-worker1”, with IPs 10.0.0.5 and 10.0.0.6. The full architecture is explained in more detail here, though this should give you a good enough picture for the installation.
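For reference, the base preparation on each node looked roughly like this (a minimal sketch; the hostnamectl step is how I would set the node names, so adjust to your own environment):

sudo apt-get update
sudo apt-get upgrade -y
# Set the hostname on each node ("icp-master" on one, "icp-worker1" on the other)
sudo hostnamectl set-hostname icp-master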
[System preparations]
As there were quite a few steps needed to modify the basic installation, I split them into a few sections. For the average Unix specialist this is no problem, though for me it was some trial and error.
1. Edit the Hosts file
You will need to edit your hosts file to hard-link the host names to the IP addresses. I discovered it also worked with DHCP, though to guard against a “lease expiry” I edited the file located at /etc/hosts to match the following output.
albert@icp-master:~$ cat /etc/hosts
127.0.0.1       localhost
# 127.0.1.1     icp-master

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

10.0.0.5        icp-master
10.0.0.6        icp-worker1
albert@icp-master:~$
2. Enable root access on SSH
I discovered the hard way that the installation uses “root” to connect to the other (worker) nodes in the cluster over SSH. By default Ubuntu does not allow the user root to use SSH, so you will have to edit the file /etc/ssh/sshd_config using sudo vi and set the PermitRootLogin rule to “yes”. Afterwards you will have to restart the SSH daemon using sudo service sshd restart. You will also need to set a password using the command passwd right after you become root using sudo su -. Do note: you will need to do this on both the master node and the worker node!
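Put together, the whole step looks something like this (a sketch of what I did; the sed one-liner is just a non-interactive alternative to editing the file with vi):

sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo service sshd restart
sudo su -
passwd    # set the root password you will use during the install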
3. Share the SSH Key for a trust relationship
The master node communicates over SSH, though the nodes need to trust each other for it to work. So we need to create an SSH key and share it between the nodes. We start with generating the key itself using the following command (do note that I stay switched to root from this step forward!). Below is my console output from the following steps; I have used [enter] to separate all the steps:
- Create the SSH key on the master node
- Create a directory on the worker node (note that I am asked for a password when connecting)
- Copy the key from the master to the worker
- Also copy the key to my “authorized” list.
- I did a test at the end, and if you don’t need to provide a password, you are set! (Note that the prompt changes without asking for a password.)
root@icp-master:~# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:oUne34zEoFwZ1yoylo6oK4eMvmwiTDBp2CpgaTzRZsQ root@Grafana
The key's randomart image is:
+---[RSA 2048]----+
| +. . ..         |
| . E + .         |
|oo= ..= .        |
|*B. +=*.+.       |
|*o.. +*oS.o      |
|o.. . . o +      |
|*o o o           |
|O+.              |
|B*.              |
+----[SHA256]-----+
root@icp-master:~#
root@icp-master:~# ssh root@icp-worker1 sudo mkdir -p /root/.ssh
root@icp-worker1's password:
root@icp-master:~#
root@icp-master:~# cd ~/.ssh
root@icp-master:~/.ssh#
root@icp-master:~/.ssh# cat id_rsa.pub | ssh root@icp-worker1 'cat >> .ssh/authorized_keys'
root@icp-worker1's password:
root@icp-master:~/.ssh#
root@icp-master:~/.ssh# cp id_rsa.pub authorized_keys
root@icp-master:~/.ssh#
root@icp-master:~/.ssh# ssh root@icp-worker1
Welcome to Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-87-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

9 packages can be updated.
7 updates are security updates.

Last login: Sun Apr 15 23:36:46 2018 from 10.0.0.5
root@icp-worker1:~#
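As a side note: on most systems ssh-copy-id can do the copy-and-append in one go. This is just an alternative to the manual cat-over-ssh above, assuming the tool is installed:

root@icp-master:~# ssh-copy-id root@icp-worker1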
4. Install the required software
The last step is to install the required dependencies using the following command: apt install docker.io python
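A quick sanity check that both dependencies landed (the version numbers below are illustrative and will differ on your system):

root@icp-master:~# docker --version
Docker version 17.03.2-ce, build ...
root@icp-master:~# python --version
Python 2.7.12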
[Installing]
1. Get the image using the command docker pull ibmcom/icp-inception:2.1.0.2
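You can verify the pull succeeded by listing the local images (output abbreviated and illustrative):

root@icp-master:~# docker images ibmcom/icp-inception
REPOSITORY             TAG       IMAGE ID   CREATED   SIZE
ibmcom/icp-inception   2.1.0.2   ...        ...       ...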
2. Create an installation directory and extract the config files (console output)
root@icp-master:~# mkdir /opt/icp
root@icp-master:~# cd /opt/icp
root@icp-master:/opt/icp# docker run -e LICENSE=accept -v "$(pwd)":/data ibmcom/icp-inception:2.1.0.2 cp -r cluster /data
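After this you should have a cluster directory with the configuration files in it; on my version it contained, among others, config.yaml, hosts and the ssh_key placeholder (listing is illustrative):

root@icp-master:/opt/icp# ls cluster
config.yaml  hosts  misc  ssh_key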
3. Copy the SSH key created in step 3 of the preparation section to the ssh_key file in the configuration directory
root@icp-master:~# cp /root/.ssh/id_rsa /opt/icp/cluster/ssh_key
4. Set the IPs of both the master and worker nodes in the /opt/icp/cluster/hosts file. My file looks like this:
[master]
10.0.0.5

[worker]
10.0.0.6

[proxy]
10.0.0.5

[management]

[va]
5. And finally, start the installation (run from /opt/icp/cluster, so that "$(pwd)" mounts the cluster directory):
root@icp-master:/opt/icp/cluster# sudo docker run -e LICENSE=accept --net=host -t -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.2 install
If it fails for some reason, use the same syntax but add -vvv behind it to get more verbose logging. The error should be self-explanatory, though if you need help, let me know!
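In other words, the same command as above with the verbose flag appended:

root@icp-master:/opt/icp/cluster# sudo docker run -e LICENSE=accept --net=host -t -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.2 install -vvv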
It took a while, though eventually the following was displayed: Playbook run took 0 days, 0 hours, 40 minutes, 32 seconds
. Then I started a browser, pointed it to https://10.0.0.5:8443 and used admin:admin to log in!
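If you want to check that the console is up before firing up a browser, a quick curl from the master node works too (the 302 redirect to the login page is what I would expect; treat the exact response as illustrative):

root@icp-master:~# curl -k -s -o /dev/null -w "%{http_code}\n" https://10.0.0.5:8443
302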
In the next post I will add extra Helm repositories and configure Node-RED to talk to my Domoticz, along with some other cewl stuff!

Hi Atsiekratsie,
I ran docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.2-ee install -vvv on a single-node system in a virtual server,
but I received this error:
TASK [Checking if setting password or not] *************************************
task path: /installer/playbook/plays/check-password.yaml:14
fatal: [localhost]: FAILED! => changed=false
msg: 'The password is not set. You must specify a password that meets the following criteria: "^([a-zA-Z0-9\-]{32,})$"'
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
Can you help me?
Thank you
Hey Mario, it has been a while 🙂 I am going to do a fresh install as the version on this blog is quite old, TBH. If I read the FAILED message, it looks like it’s expecting a password that has to meet certain criteria. Though my guess is that this is too easy and you already looked into that? I have to redo the setup, but my guess would be to check the “config.yaml”. More on that (new) file can be found here: https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/installing/password_auth.html
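If the config.yaml route is the right one, here is a sketch of what I would try (parameter names are taken from the linked Knowledge Center page; please verify them against your 3.1.2 install before relying on this):

# /opt/icp/cluster/config.yaml
# The password must match ^([a-zA-Z0-9\-]{32,})$ by default,
# i.e. at least 32 characters from a-z, A-Z, 0-9 and '-':
default_admin_password: MySuperSecretAdminPassw0rd-32chars
# Alternatively the rule itself can be relaxed via password_rules,
# per the same Knowledge Center page.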
Hope it helps! I will reinstall soon and post my changes in this blog.