Category Archives: Linux Planet

Linux Planet 2016-04-12 01:50:51

I am trying to mount a CIFS share (aka a Samba/SMB/Windows share) from a Debian server so I can access log files when needed. To do this I created two mount points: one that is read-only and mounted automatically, and another that is read/write and not mounted by default. The /etc/fstab file looks a bit like this:

// /mnt/server-d cifs auto,ro,credentials=/root/.ssh/server.credentials,domain= 0 0
// /mnt/server-d-rw cifs noauto,rw,credentials=/root/.ssh/server.credentials,domain= 0 0
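The credentials= file referenced in these entries keeps the username and password out of the world-readable /etc/fstab. Its format is a plain key/value list; the values below are placeholders, not real ones:

```
username=loguser
password=changeme
domain=EXAMPLE
```

Since it holds a plaintext password, the file should be readable by root only (chmod 600).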

To mount all the drives marked “auto” in the /etc/fstab file you can use the “-a, --all” option. From the man page: “Mount all filesystems (of the given types) mentioned in fstab (except for those whose line contains the noauto keyword). The filesystems are mounted following their order in fstab.”
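To see which fstab entries `mount -a` will skip, you can filter the options field (the fourth column) for the noauto keyword. A quick sketch using sample fstab-style input; the server and share names here are hypothetical:

```shell
# Print the mount points whose options contain "noauto";
# these are the entries that `mount -a` leaves untouched.
printf '%s\n' \
  '//fileserver/logs /mnt/server-d cifs auto,ro,credentials=/root/.ssh/server.credentials 0 0' \
  '//fileserver/logs /mnt/server-d-rw cifs noauto,rw,credentials=/root/.ssh/server.credentials 0 0' \
| awk '$4 ~ /(^|,)noauto(,|$)/ {print $2}'
# -> /mnt/server-d-rw
```

Entries listed by this filter can still be mounted on demand with, e.g., `mount /mnt/server-d-rw`.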

However, when I ran the command I got:

root@server:~# mount -a
mount: wrong fs type, bad option, bad superblock on //,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.&lt;type&gt; helper program)

In some cases useful info is found in syslog - try
dmesg | tail or so.

Well, it turns out that Debian no longer ships CIFS mount support as a default option. It can be added easily enough using the command:

root@server:~# aptitude install cifs-utils

Now mount -a works fine:

root@server:~# mount -a


Ambient Weather WS-1001-Wifi Observer Review

In the most recent episode of Bad Voltage, I reviewed the Ambient Weather WS-1001-Wifi Observer Personal Weather Station. Tune in to listen to the ensuing discussion and the rest of the show.

Regular listeners will know I’m an avid runner and sports fan. Add in the fact that I live in a city where weather can change in an instant, and a personal weather station was irresistible to the tech and data enthusiast inside me. After doing a bit of research, I decided on the Ambient Weather WS-1001-Wifi Observer. While it only needs to be performed once, I should note that setup is fairly involved. The product comes with three components: an outdoor sensor array which should be mounted on a pole, chimney or other suitable area, a small indoor sensor, and an LCD control panel/display console. The first step is to mount the all-in-one outdoor sensor, which is powered by a solar panel and rechargeable batteries. It measures and transmits outdoor temperature, humidity, wind speed, wind direction, rainfall, and both UV and solar radiation. Next, mount the indoor sensor, which measures and transmits indoor temperature, humidity and barometric pressure. Finally, plug in the control panel and complete the setup procedure, which will walk you through configuring your wifi network, setting up NTP, syncing the two sensors and picking your units of measurement. Note that all three devices must be within 100-330 feet of each other, depending on layout and what materials are between them.

With everything set up, data will start collecting on your display console, updated every 14 seconds. In addition to all the data previously mentioned, you will also see wind gusts, wind chill, sunrise, sunset, phases of the moon, dew point, rainfall rate and some historical graphs. There is a ton of data presented, and while the dense layout works for me, it has been described as unintuitive and overwhelming by some.

While seeing the data in real-time is interesting, you’ll likely also want to see long term trends and historical data. While the device can export all data to an SD card in CSV format, it becomes much more compelling when you connect it with the Weather Underground personal weather station network. Once connected, the unit becomes a public weather station that also feeds data to the Wunderground prediction model. That means you’ll be helping everyone get more accurate data for your specific area and better forecasts for your general area. You can even see how many people are using your PWS to get their weather report. There’s also a very slick Wunderstation app that is a great replacement for the somewhat antiquated display console, although unfortunately it’s currently only available for the iPad.

So, what’s the Bad Voltage verdict? At $289 the Ambient Weather WS-1001-WIFI OBSERVER isn’t cheap. In an era of touchscreens and sleek design, it’s definitely not going to win any design awards. That said, it’s a durable well built device that transmits and displays a huge amount of data. The Wunderground integration is seamless and knowing that you’re improving the predictive model for your neighborhood is surprisingly satisfying. If you’re a weather data junkie, this is a great device for you.




PGDay Asia and FOSS Asia – 2016

Jumping Bean attended PGDay Asia (17th March 2016) and FOSS Asia (18th-20th March 2016) and delivered a talk at each event. At PGDay Asia we spoke on using Postgres as a NoSQL document store, and at FOSS Asia we gave an introduction to React. It was a great event, and nothing beats interacting with the developers of Postgres and the open source community.

Our slides for “There is JavaScript in my SQL” presentation at PGDay Asia and “An Introduction to React” from FoSS Asia can be found on our Slide Share account.



Create self-managing servers with Masterless Saltstack Minions

Over the past two articles I’ve described building a Continuous Delivery pipeline for my blog (the one you are currently reading). The first article covered packaging the blog into a Docker container and the second covered using Travis CI to build the Docker image and perform automated testing against it.

While the first two articles covered quite a bit of the CD pipeline there is one piece missing; automating deployment. While there are many infrastructure and application tools for automated deployments I’ve chosen to use Saltstack. I’ve chosen Saltstack for many reasons but the main reason is that it can be used to manage both my host system’s configuration and the Docker container for my blog application. Before I can start using Saltstack however, I first need to set it up.

I’ve covered setting up Saltstack before, but for this article I am planning on setting up Saltstack in a Masterless architecture, a setup that is quite different from the traditional Saltstack configuration.

Masterless Saltstack

A traditional Saltstack architecture is based on a Master and Minion design. With this architecture the Salt Master will push desired states to the Salt Minion. This means that in order for a Salt Minion to apply the desired states it needs to be able to connect to the master, download the desired states and then apply them.

A masterless configuration on the other hand involves only the Salt Minion. With a masterless architecture the Salt state files are stored locally on the Minion, bypassing the need to connect to and download states from a Master. This architecture provides a few benefits over the traditional Master/Minion architecture. The first is removing the need for a Salt Master server, which helps reduce infrastructure costs; an important item, as the environment in question is dedicated to hosting a simple personal blog.

The second benefit is that in a masterless configuration each Salt Minion is independent which makes it very easy to provision new Minions and scale out. The ability to scale out is useful for a blog, as there are times when an article is reposted and traffic suddenly increases. By making my servers self-managing I am able to meet that demand very quickly.

A third benefit is that Masterless Minions have no reliance on a Master server. In a traditional architecture if the Master server is down for any reason the Minions are unable to fetch and apply the Salt states. With a Masterless architecture, the availability of a Master server is not even a question.

Setting up a Masterless Minion

In this article I will walk through how to install and configure a Salt Minion in a masterless configuration.

Installing salt-minion

The first step to creating a Masterless Minion is to install the salt-minion package. To do this we will follow the official Saltstack steps for Ubuntu systems, which primarily use the Apt package manager to perform the installation.

Importing Saltstack’s GPG Key

Before installing the salt-minion package we will first need to import Saltstack’s Apt repository key. We can do this with a simple bash one-liner.

# wget -O - | sudo apt-key add -

This GPG key will allow Apt to validate packages downloaded from Saltstack’s Apt repository.

Adding Saltstack’s Apt Repository

With the key imported we can now add Saltstack’s Apt repository to our /etc/apt/sources.list file. This file is used by Apt to determine which repositories to check for available packages.

# vi /etc/apt/sources.list

Once the file is open, simply append the following line to the bottom.

deb trusty main

With the repository defined we can now update Apt’s repository inventory, a step that is required before we can start installing packages from the new repository.

Updating Apt’s cache

To update Apt’s repository inventory, we will execute the command apt-get update.

# apt-get update
Ign trusty InRelease                                 
Get:1 trusty-security InRelease [65.9 kB]           
Get:2 trusty-updates InRelease [65.9 kB]             
Get:3 trusty InRelease [2,813 B]                     
Get:4 trusty/main amd64 Packages [8,046 B]           
Get:5 trusty-security/main Sources [105 kB]         
Hit trusty Release.gpg                              
Ign trusty/main Translation-en_US                    
Ign trusty/main Translation-en                       
Hit trusty Release                                   
Hit trusty/main Sources                              
Hit trusty/universe Sources                          
Hit trusty/main amd64 Packages                       
Hit trusty/universe amd64 Packages                   
Hit trusty/main Translation-en                       
Hit trusty/universe Translation-en                   
Ign trusty/main Translation-en_US                    
Ign trusty/universe Translation-en_US                
Fetched 3,136 kB in 8s (358 kB/s)                                              
Reading package lists... Done

With the above complete we can now access the packages available within Saltstack’s repository.

Installing with apt-get

Specifically, we can now install the salt-minion package. To do this we will execute the command apt-get install salt-minion.

# apt-get install salt-minion
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  dctrl-tools libmysqlclient18 libpgm-5.1-0 libzmq3 mysql-common
  python-dateutil python-jinja2 python-mako python-markupsafe python-msgpack
  python-mysqldb python-tornado python-zmq salt-common
Suggested packages:
  debtags python-jinja2-doc python-beaker python-mako-doc
  python-egenix-mxdatetime mysql-server-5.1 mysql-server python-mysqldb-dbg
The following NEW packages will be installed:
  dctrl-tools libmysqlclient18 libpgm-5.1-0 libzmq3 mysql-common
  python-dateutil python-jinja2 python-mako python-markupsafe python-msgpack
  python-mysqldb python-tornado python-zmq salt-common salt-minion
0 upgraded, 15 newly installed, 0 to remove and 155 not upgraded.
Need to get 4,959 kB of archives.
After this operation, 24.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 trusty-updates/main mysql-common all 5.5.47-0ubuntu0.14.04.1 [13.5 kB]
Get:2 trusty/main python-tornado amd64 4.2.1-1 [274 kB]
Get:3 trusty-updates/main libmysqlclient18 amd64 5.5.47-0ubuntu0.14.04.1 [597 kB]
Get:4 trusty/main salt-common all 2015.8.7+ds-1 [3,108 kB]
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
Processing triggers for ureadahead (0.100.0-16) ...

After a successful installation of the salt-minion package we now have a salt-minion instance running with the default configuration.

Configuring the Minion

With a traditional Master/Minion setup, this is the point where we would configure the Minion to connect to the Master server and restart the running service.

For this setup however, we will be skipping the Master server definition. Instead we need to tell the salt-minion service to look for Salt state files locally. To alter the salt-minion‘s configuration we can either edit /etc/salt/minion, the default configuration file, or add a new file into /etc/salt/minion.d/; this .d directory is used to override defaults defined in /etc/salt/minion.

My personal preference is to create a new file within the minion.d/ directory, as this keeps the configuration easy to manage. However, there is no right or wrong method, as this is a personal and environmental preference.

For this article we will go ahead and create the following file /etc/salt/minion.d/masterless.conf.

# vi /etc/salt/minion.d/masterless.conf

Within this file we will add two configurations.

file_client: local
file_roots:
  base:
    - /srv/salt/base
    - /srv/salt/bencane

The first configuration item above is file_client. By setting this configuration to local we are telling the salt-minion service to search locally for desired state configurations rather than connecting to a Master.

The second configuration is the file_roots dictionary. This defines the location of Salt state files. In the above example we are defining both /srv/salt/base and /srv/salt/bencane. These two directories will be where we store our Salt state files for this Minion to apply.

Stopping the salt-minion service

While in most cases we would need to restart the salt-minion service to apply the configuration changes, in this case, we actually need to do the opposite; we need to stop the salt-minion service.

# service salt-minion stop
salt-minion stop/waiting

The salt-minion service does not need to be running when set up as a Masterless Minion. This is because the salt-minion service only runs to listen for events from the Master. Since we have no Master there is no reason to keep this service running. If left running, the salt-minion service will repeatedly try to connect to the defined Master server, which by default is a host that resolves to the name salt. To remove unnecessary overhead it is best to simply stop this service in a Masterless Minion configuration.

Populating the desired states

At this point we have a Salt Minion that has been configured to run masterless. However, this Masterless Minion has no Salt states to apply yet. In this section we will provide the salt-minion agent with two sets of Salt states. The first will be placed into the /srv/salt/base directory. This file_roots directory will contain a base set of Salt states that I have created to manage a basic Docker host.

Deploying the base Salt states

The states in question are available via a public GitHub repository. To deploy these Salt states we can simply clone the repository into the /srv/salt/base directory. Before doing so however, we will need to first create the /srv/salt directory.

# mkdir -p /srv/salt

The /srv/salt directory is Salt’s default state directory; it is also the parent directory for both the base and bencane directories we defined within the file_roots configuration. Now that the parent directory exists, we will clone the base repository into it using git.

# cd /srv/salt/
# git clone base
Cloning into 'base'...
remote: Counting objects: 50, done.
remote: Total 50 (delta 0), reused 0 (delta 0), pack-reused 50
Unpacking objects: 100% (50/50), done.
Checking connectivity... done.

With the salt-base repository cloned into the base directory, the Salt states within it are now available to the salt-minion agent.

# ls -la /srv/salt/base/
total 84
drwxr-xr-x 18 root root 4096 Feb 28 21:00 .
drwxr-xr-x  3 root root 4096 Feb 28 21:00 ..
drwxr-xr-x  2 root root 4096 Feb 28 21:00 dockerio
drwxr-xr-x  2 root root 4096 Feb 28 21:00 fail2ban
drwxr-xr-x  2 root root 4096 Feb 28 21:00 git
drwxr-xr-x  8 root root 4096 Feb 28 21:00 .git
drwxr-xr-x  3 root root 4096 Feb 28 21:00 groups
drwxr-xr-x  2 root root 4096 Feb 28 21:00 iotop
drwxr-xr-x  2 root root 4096 Feb 28 21:00 iptables
-rw-r--r--  1 root root 1081 Feb 28 21:00 LICENSE
drwxr-xr-x  2 root root 4096 Feb 28 21:00 ntpd
drwxr-xr-x  2 root root 4096 Feb 28 21:00 python-pip
-rw-r--r--  1 root root  106 Feb 28 21:00
drwxr-xr-x  2 root root 4096 Feb 28 21:00 screen
drwxr-xr-x  2 root root 4096 Feb 28 21:00 ssh
drwxr-xr-x  2 root root 4096 Feb 28 21:00 swap
drwxr-xr-x  2 root root 4096 Feb 28 21:00 sysdig
drwxr-xr-x  3 root root 4096 Feb 28 21:00 sysstat
drwxr-xr-x  2 root root 4096 Feb 28 21:00 timezone
-rw-r--r--  1 root root  208 Feb 28 21:00 top.sls
drwxr-xr-x  2 root root 4096 Feb 28 21:00 wget

From the above directory listing we can see that the base directory contains quite a few Salt states. These states are very useful for managing a basic Ubuntu system, performing steps ranging from installing Docker (dockerio) to setting the system timezone (timezone). Everything needed to run a basic Docker host is defined within these base states.
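The top.sls file shown in the listing is what ties these state directories to minions. A minimal sketch of what such a file could look like; the actual contents of the repository's top.sls may differ, but the state names are taken from the listing above:

```
base:
  '*':
    - ntpd
    - timezone
    - dockerio
```

The `'*'` target matches every minion, which is appropriate here since each Masterless Minion only ever applies its own local states.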

Applying the base Salt states

Even though the salt-minion agent can now use these Salt states, there is nothing running to tell the salt-minion agent it should do so. Therefore the desired states are not being applied.

To apply our new base states we can use the salt-call command to tell the salt-minion agent to read the Salt states and apply the desired states within them.

# salt-call --local state.highstate

The salt-call command is used to interact with the salt-minion agent from the command line. Above, salt-call was executed with the state.highstate option.

This tells the agent to look for all defined states and apply them. The salt-call command also included the --local option, which is specifically used when running a Masterless Minion. This flag tells the salt-minion agent to look through its local state files rather than attempting to pull from a Salt Master.

The below shows the results of the execution above, within this output we can see the various states being applied successfully.

          ID: GMT
    Function: timezone.system
      Result: True
     Comment: Set timezone GMT
     Started: 21:09:31.515117
    Duration: 126.465 ms
          ID: wget
    Function: pkg.latest
      Result: True
     Comment: Package wget is already up-to-date
     Started: 21:09:31.657403
    Duration: 29.133 ms

Summary for local
Succeeded: 26 (changed=17)
Failed:     0
Total states run:     26

In the above output we can see that all of the defined states were executed successfully. We can validate this further by checking the status of the docker service, which, as we can see below, is now running; before executing salt-call, Docker was not even installed on this system.

# service docker status
docker start/running, process 11994

With a successful salt-call execution our Salt Minion is now officially a Masterless Minion. However, even though our server has Salt installed, and is configured as a Masterless Minion, there are still a few steps we need to take to make this Minion “Self Managing”.

Self-Managing Minions

In order for our Minion to be Self-Managed, the Minion server should not only apply the base states above, it should also keep the salt-minion service and configuration up to date as well. To do this, we will be cloning yet another git repository.

Deploying the blog specific Salt states

This repository, however, contains specific Salt states used to manage the salt-minion agent, not only for this Minion but for any other Masterless Minion used to host this blog.

# cd /srv/salt
# git clone bencane
Cloning into 'bencane'...
remote: Counting objects: 25, done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 25 (delta 4), reused 20 (delta 2), pack-reused 0
Unpacking objects: 100% (25/25), done.
Checking connectivity... done.

In the above command we cloned the blog-salt repository into the /srv/salt/bencane directory. Like the /srv/salt/base directory, the /srv/salt/bencane directory is also defined within the file_roots configuration that we set up earlier.

Applying the blog specific Salt states

With these new states copied to the /srv/salt/bencane directory, we can once again run the salt-call command to trigger the salt-minion agent to apply these states.

# salt-call --local state.highstate
[INFO    ] Loading fresh modules for state activity
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache u'salt://top.sls'
[INFO    ] Fetching file from saltenv 'bencane', ** skipped ** latest already in cache u'salt://top.sls'
          ID: /etc/salt/minion.d/masterless.conf
    Function: file.managed
      Result: True
     Comment: File /etc/salt/minion.d/masterless.conf is in the correct state
     Started: 21:39:00.800568
    Duration: 4.814 ms
          ID: /etc/cron.d/salt-standalone
    Function: file.managed
      Result: True
     Comment: File /etc/cron.d/salt-standalone updated
     Started: 21:39:00.806065
    Duration: 7.584 ms
                  New file

Summary for local
Succeeded: 37 (changed=7)
Failed:     0
Total states run:     37

Based on the output of the salt-call execution we can see that 37 Salt states were run, 7 of which made changes. This means that the new Salt states within the bencane directory were applied. But what exactly did these states do?

Understanding the “Self-Managing” Salt states

This second repository has a handful of states that perform various tasks specific to this environment. The “Self-Managing” states are all located within the /srv/salt/bencane/salt directory.

$ ls -la /srv/salt/bencane/salt/
total 20
drwxr-xr-x 5 root root 4096 Mar 20 05:28 .
drwxr-xr-x 5 root root 4096 Mar 20 05:28 ..
drwxr-xr-x 3 root root 4096 Mar 20 05:28 config
drwxr-xr-x 2 root root 4096 Mar 20 05:28 minion
drwxr-xr-x 2 root root 4096 Mar 20 05:28 states

Within the salt directory there are several more directories that have defined Salt states. To get started let’s look at the minion directory. Specifically, let’s take a look at the salt/minion/init.sls file.

# cat salt/minion/init.sls
saltstack-repo:
  pkgrepo:
    - managed
    - humanname: SaltStack Repo
    - name: deb {{ grains['lsb_distrib_codename'] }} main
    - dist: {{ grains['lsb_distrib_codename'] }}
    - key_url:

salt-minion:
  pkg:
    - latest
  service:
    - dead
    - enable: False

/etc/salt/minion.d/masterless.conf:
  file:
    - managed
    - source: salt://salt/config/etc/salt/minion.d/masterless.conf

/etc/cron.d/salt-standalone:
  file:
    - managed
    - source: salt://salt/config/etc/cron.d/salt-standalone
Within the minion/init.sls file there are 5 Salt states defined.

Breaking down the minion/init.sls states

Let’s break down some of these states to better understand what actions they are performing.

saltstack-repo:
  pkgrepo:
    - managed
    - humanname: SaltStack Repo
    - name: deb {{ grains['lsb_distrib_codename'] }} main
    - dist: {{ grains['lsb_distrib_codename'] }}
    - key_url:

The first state defined is a pkgrepo state. We can see based on the options that this state is used to manage the Apt repository that we defined earlier. We can also see from the key_url option that even the GPG key we imported earlier is managed by this state.

salt-minion:
  pkg:
    - latest

The second state defined is a pkg state. This is used to manage a specific package, specifically in this case the salt-minion package. Since the latest option is present the salt-minion agent will not only install the latest salt-minion package but also keep it up to date with the latest version if it is already installed.

salt-minion:
  service:
    - dead
    - enable: False

The third state is a service state. This state is used to manage the salt-minion service. With the dead and enable: False settings specified the salt-minion agent will stop and disable the salt-minion service.

So far these states are performing the same steps we performed manually above. Let’s keep breaking down the minion/init.sls file to understand what other steps we have told Salt to perform.

/etc/salt/minion.d/masterless.conf:
  file:
    - managed
    - source: salt://salt/config/etc/salt/minion.d/masterless.conf

The fourth state is a file state that deploys the /etc/salt/minion.d/masterless.conf file, which just happens to be the same file we created earlier. Let’s take a quick look at the file being deployed to understand what Salt is doing.

$ cat salt/config/etc/salt/minion.d/masterless.conf
file_client: local
file_roots:
  base:
    - /srv/salt/base
    - /srv/salt/bencane

The contents of this file are exactly the same as the masterless.conf file we created in the earlier steps. Right now the file being deployed is identical to what is already in place, but if any changes are made to masterless.conf within this git repository, those changes will be deployed on the next state.highstate execution.

/etc/cron.d/salt-standalone:
  file:
    - managed
    - source: salt://salt/config/etc/cron.d/salt-standalone

The fifth state is also a file state; while it too deploys a file, the file in question is very different. Let’s take a look at it to understand what it is used for.

$ cat salt/config/etc/cron.d/salt-standalone
*/2 * * * * root su -c "/usr/bin/salt-call state.highstate --local 2>&1 > /dev/null"

The salt-standalone file is an /etc/cron.d-based cron job that runs the same salt-call command we executed earlier to apply the local Salt states. In a masterless configuration there is no scheduled task telling the salt-minion agent to apply all of the Salt states; the above cron job takes care of this by simply executing a local state.highstate run every 2 minutes.
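The `*/2` in the minute field means "every minute evenly divisible by 2". Expanding the first ten minutes of an hour illustrates the cadence:

```shell
# Minutes 0-9 that a "*/2" cron minute field matches.
seq 0 9 | awk '$1 % 2 == 0 {print $1}'
# -> 0 2 4 6 8 (one per line)
```

With the other four fields set to `*`, the job therefore fires 30 times per hour, around the clock.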

Summary of minion/init.sls

Based on the contents of the minion/init.sls we can see how this salt-minion agent is configured to be “Self-Managing”. From the above we were able to see that the salt-minion agent is configured to perform the following steps.

  1. Configure the Saltstack Apt repository and GPG keys
  2. Install the salt-minion package or update to the newest version if already installed
  3. Deploy the masterless.conf configuration file into /etc/salt/minion.d/
  4. Deploy the /etc/cron.d/salt-standalone file which deploys a cron job to initiate state.highstate executions

These steps ensure that the salt-minion agent is both configured correctly and applying desired states every 2 minutes.

While the above steps are useful for applying the current states, the whole point of continuous delivery is to deploy changes quickly. To do this we need to also keep the Salt states up-to-date.

Keeping Salt states up-to-date with Salt

One way to keep our Salt states up to date is to tell the salt-minion agent to update them for us.

Within the /srv/salt/bencane/salt directory exists a states directory that contains two files base.sls and bencane.sls. These two files both contain similar Salt states. Let’s break down the contents of the base.sls file to understand what actions it’s telling the salt-minion agent to perform.

$ cat salt/states/base.sls
/srv/salt/base:
  file:
    - directory
    - user: root
    - group: root
    - mode: 700
    - makedirs: True
  git:
    - latest
    - name:
    - target: /srv/salt/base
    - force: True

In the above we can see that the base.sls file contains two Salt states. The first is a file state that is set to ensure the /srv/salt/base directory exists with the defined permissions.

The second state is a bit more interesting as it is a git state which is set to pull the latest copy of the salt-base repository and clone it into /srv/salt/base.

With this state defined, every time the salt-minion agent runs (every 2 minutes via the cron.d job) it will check the repository for new updates and deploy them to /srv/salt/base.

The bencane.sls file contains similar states, with the difference being the repository cloned and the location to deploy the state files to.

$ cat salt/states/bencane.sls
/srv/salt/bencane:
  file:
    - directory
    - user: root
    - group: root
    - mode: 700
    - makedirs: True
  git:
    - latest
    - name:
    - target: /srv/salt/bencane
    - force: True

At this point, we have a Masterless Salt Minion that is not only configured to “self-manage” its own packages, but also the Salt state files that drive it.

As the state files within the git repositories are updated, those updates are pulled by each Minion every 2 minutes. Whether that change is adding the screen package or deploying a new Docker container, it is deployed across many Masterless Minions at once.

What’s next

With the above steps complete, we now have a method for taking a new server and turning it into a Self-Managed Masterless Minion. What we didn’t cover however, is how to automate the initial installation and configuration.

In next month’s article, we will talk about using salt-ssh to automate the first-time installation and configuration of the salt-minion agent using the same Salt states we used today.

Posted by Benjamin Cane


Linux and POWER8 microprocessors


With the enormous amount of data being generated every day, POWER8 was designed specifically to keep up with today’s data processing requirements on high end servers.

POWER8 is a symmetric multiprocessor based on the Power architecture by IBM. It is designed specifically for server environments, with fast execution times and a focus on performing well under heavy server workloads. POWER8 is a very scalable architecture, scaling from 1 to 100+ CPU cores per server. Google was involved when POWER8 was designed, and they currently use dual-socket POWER8 system boards internally.

Systems with POWER8 CPUs started shipping in late 2014. CPU clocks range from 2.5 GHz all the way up to 5.0 GHz, and there is support for both DDR3 and DDR4 memory controllers. Memory support is designed to be future-proof by being as generic as possible.

Open architecture

The design is available for licensing via the OpenPOWER Foundation, mainly to support custom-made processors for use in cloud computing and in applications that need to process large amounts of scientific data. POWER8 processor specifications and firmware are available under liberal licensing, and a collaborative development model is encouraged and already happening.

Linux has full support of POWER8

IBM began submitting code patches to the Linux kernel in 2012 to support POWER8 features, and Linux has had full POWER8 support since kernel version 3.8.

Many big Linux distributions, including Debian, Fedora and openSUSE, have installable ISO images available for Power hardware. When it comes to applications, almost all software available for traditional CPU architectures is also available for POWER8. Packages built for it usually carry the suffix ppc64el/ppc64le, or ppc64 when built for big-endian mode. There is plenty of prebuilt software available for Linux distributions; for example, thousands of Debian packages are available. Remember to limit search results to ppc64el packages to get a better picture of what’s available.

While Power hardware is transitioning from big endian to little endian, POWER8 is actually a bi-endian architecture capable of accessing data in both modes. However, most Linux distributions concentrate on little-endian mode, as it has a much wider application ecosystem.

Future of POWER8

Some years ago it seemed that ARM servers were going to be really popular, but as of today POWER8 looks like the only viable alternative to the Intel Xeon architecture.


The VMware Hearing and the Long Road Ahead

[ This blog was crossposted on Software Freedom Conservancy’s website. ]

Last Thursday, Christoph Hellwig and his legal counsel attended a hearing in Hellwig’s VMware case, which Conservancy currently funds. Harald Welte, world famous for his GPL enforcement work in the early 2000s, also attended as an observer and wrote an excellent summary. I’d like to highlight a few parts of his summary, in the context of Conservancy’s past litigation experience regarding the GPL.

First of all, in great contrast to the cases here in the USA, the Court
acknowledged fully the level of public interest and importance of the case.
Judges who have presided over Conservancy’s GPL enforcement cases in USA
federal court take all matters before them quite seriously. However, in
our hearings, the federal judges preferred to ignore entirely the public
policy implications regarding copyleft; they focused only on the copyright
infringement and claims related to it. Usually, appeals courts in the USA
are the first to broadly consider larger policy questions. There are
definitely some advantages to the first Court showing interest in the
public policy concerns.

However, beyond this initial point, I was struck that Harald’s summary
sounded so much like the many hearings I attended in the late 2000s and
early 2010s regarding Conservancy’s BusyBox cases. From his description,
it sounds to me like judges around the world aren’t all that different:
they like to ask leading questions and speculate from the bench. It’s
their job to dig deep into an issue, separate away irrelevancies, and
assure that the stark truth of the matter presents itself before the Court
for consideration. In an adversarial process like this one, that means
impartially asking both sides plenty of tough questions.

That process can be a rollercoaster for anyone who feels, as we do, that
the Court will rule on the specific legal issues around which we have built
our community. We should of course not fear the hard questions of judges;
it’s their job to ask us the hard questions, and it’s our job to answer
them as best we can. So often, here in the USA, we’ve listened to Supreme
Court arguments (for which the audio is released publicly), and every
pundit has speculated incorrectly about how the justices would rule based
on their questions. Sometimes, a judge asks a clarification question
regarding a matter they already understand to support a specific opinion
and help their colleagues on the bench see the same issue. Other times,
judges ask questions for the usual reasons: because the judges
themselves are truly confused and unsure. Sometimes, particularly in our
past BusyBox cases, I’ve seen the judge ask the opposing counsel a question
to expose some bit of bluster that counsel sought to pass off as settled
law. You never know really why a judge asked a specific question until you
see the ruling. At this point in the VMware case, nothing has been
decided; this is just the next step forward in a long process. We
enforced the GPL here in the USA for almost five years; we’ve been in
litigation in Germany for about one year, and the earliest the German
case can possibly resolve is this May.

Kierkegaard wrote that “it is perfectly true, as the philosophers say,
that life must be understood backwards. But they forget the other
proposition, that it must be lived forwards.”
Court cases are a prime
example of this phenomenon. We know it is gut-wrenching for our
Supporters to watch every twist and turn in the case. It has taken so
long for us to reach the point where the question of a combined work of
software under the GPL is before a Court; now that it is we all want this
part to finish quickly. We remain very grateful to all our Supporters
who stick with us, and the new ones who will join today to help us make
our funding match on its last day. That
funding makes it possible for Conservancy to pursue this and other
matters to ensure strong copyleft for our future, and handle every other
detail that our member projects need. The one certainty is that our best
chance of success is working hard for plenty of hours, and we appreciate
that all of you continue to donate so that the hard work can continue.
We also thank the Linux developers in Germany, like Harald, who are
supporting us locally and able to attend in person and report back.

Read More

Give My Regards To Ward 10

Hello everyone, I’m back!!! Well, partially back I suppose. I just wanted to write a quick update to let you all know that I’m at home now recovering from my operation and all went as well as could be expected. At this early stage all indications are that I could be healthier than I’ve been in over a decade, but it’s going to take a long time to recover. There were a few unexpected events during the operation which left me with a drain in my chest for a few days; I wasn’t expecting that, but I don’t have the energy to explain it all now. Maybe I will in future. The important thing to remember is that it went really well; the surgeon seems really happy and he was practically dancing when I saw him the day after the operation.

A photo of a grizzly bear

Don’t Mess With Me

Right now I am recuperating at home and just about able to shuffle around the house. I’m doing OK though, not in much pain, just overwhelmingly tired all the time and very tender. I have a rather fetching scar which stretches from deep in my groin up to my chest and must be about 15 inches long. I’m just bragging now though hehehe :) I will certainly look like a bad ass judging from my scars. I just need a suitable story to go with them. Have you seen The Revenant? I might go with something like that. I strangled a bear with my bare hands. No pun intended.

I was treated impeccably by the wonderful staff at The Christie and I really can’t praise them highly enough. My fortnight on Ward 10 was made bearable by their humour and good grace. I couldn’t have asked for more.

I will obviously be out of action for many weeks but rest assured I am fine and I’ll see you all again soon.

Take it easy,


Read More

Big Panda’s community panel on Cloud monitoring

On Wednesday, February 10th, I participated in an online panel on the subject of Cloud Monitoring, as part of MonitoringScape Live (#MonitoringScape), a series of community panels about everything that matters to DevOps, ITOps, and the modern NOC.

Watch a recording of the panel:

Points to note from the session above:

  • What is cloud?
    Most of the panelists agreed that the cloud is a way to get resources on demand. I personally think that a scalable and practically infinite pool of resources with high availability can be termed cloud.
  • How have cloud-based architectures impacted user experience?
    There are mixed feelings about this. While a lot of clutter and noise is generated because getting resources to build and host applications has become easier by virtue of the cloud, I don’t think that is a bad thing. The cloud has reduced the barrier to entry for a lot of application developers. It helps shield users from bad experiences during high volumes of requests or processing. In a way, the cloud has helped to serve users more consistently.
  • What is the business case for moving to cloud?
    It is easy to scale, not only out and up but also down and in. Scaling out and up helps maintain a consistent user experience and ensures that the app does not die under high load; scaling down and in reduces the expense of underutilized resources lying around.
  • What is different about monitoring cloud application?
    Cloud is dynamic. So, in my opinion, monitoring hosts is less important than monitoring services. One should focus on figuring out the health of the service rather than the health of individual machines. Alerting was a pain point that every panelist called out; I think we need to change the way we alert for cloud systems. We need to measure parameters like the response time of the application rather than CPU cycles on an individual machine.
  • What technology will impact cloud computing the most in next 5 years?
    This is a tricky question. While I would bet that containers are going to change the way we deploy and run our applications, it was pointed out, and I accept, that predicting technology is hard. So we just need to wait, watch, and be prepared to adapt and evolve to whatever comes.
  • Will we ever automate people out of datacenters?
    I think we are almost there. As I see it, there are only two manual tasks left to get a server online: connecting it to the network and powering it on. From there, thanks to network boot and technologies like Kickstart, taking things forward is not too difficult and does not need a human inside the datacenter.
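
The service-level alerting idea from the monitoring point above can be sketched in a few lines of Python. Everything here is a made-up placeholder rather than a real monitoring API: the 500 ms threshold is arbitrary, and `probe` stands in for whatever request your real check performs.

```python
import time

# Hypothetical threshold: alert when a response takes longer than 500 ms.
RESPONSE_TIME_THRESHOLD = 0.5  # seconds

def check_service(probe):
    """Time a single probe of the service and decide whether to alert.

    `probe` is any callable that performs one request against the service
    (an HTTP GET, a health-check RPC, ...) -- a stand-in for a real check.
    """
    start = time.monotonic()
    probe()
    elapsed = time.monotonic() - start
    return {"response_time": elapsed,
            "alert": elapsed > RESPONSE_TIME_THRESHOLD}

# Example: an instant fake probe never trips the alert.
result = check_service(lambda: None)
print(result["alert"])  # False for an instant probe
```

The point of the design is that the check says nothing about which host answered; it measures what the user actually experiences, which is exactly the shift from host-centric to service-centric monitoring discussed in the panel.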

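The "network boot plus Kickstart" flow from the last point can be illustrated with a minimal Kickstart file. Every value here (the mirror URL, partitioning, package set) is a hypothetical placeholder, not a recommended configuration:

```
# Hypothetical minimal Kickstart file, fetched by the installer at boot
install
url --url=http://mirror.example.com/centos/7/os/x86_64
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp --activate
rootpw changeme
timezone UTC
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end
```

A PXE-booted machine fetches a file like this over HTTP and installs itself end to end, which is the "no human inside the datacenter" step described above.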
This was a summary of the panel discussion. I would recommend that everyone watch the video and listen to what the different panelists had to say about cloud monitoring.
I would like to thank Big Panda for organizing this. More community panels with different panelists are on the way; do check them out.

Read More