Ubuntu on IBM Bladecenter E

Inserting the Blade

We’ll cut to the chase with this one, and just list the steps needed to get our Bladecenter E and its HS22 blades integrated into our Ubuntu network.

With second-hand enterprise-grade IT equipment you get huge amounts of computing power but little or no support. No manuals, no software installed, and only a small, scattered open-source community. We will not list here all the things we did wrong, as it would take too long. Instead, here is a brief description of what was needed.


The chassis has two management blades, with one RJ-45 each, and two Cisco CIGESM switch blades with four data ports each. Four 2KW power supplies and two blowers were also fitted. At the front end we have a number of blades, each with two embedded gigabit ports on the back-plane. The intent was to place the entire blade server on its own built-in subnet, connected to our internal network through the plug-in Cisco switch. Intended use is as a “self-contained” development and testing server.

Network requirements
We require two subnets for the chassis: one data network, and one ILO management network.

Chassis Preparation

Hardware redundancy during the installation, when working without manuals or customer support, can complicate affairs. So the first step was to remove the redundant management and switch blades, leaving one of each installed.

We also do not need all four power supplies to power just three blades, so two of these can be removed, at least until we’ve run some separate AC circuits to them. Both blowers have to remain inserted, because an error condition will be invoked if only one is present (and that one will then run at full speed as a result).

Connecting a KVM directly to the management port, we can bring up the chassis. An online search on part numbers showed us how to reset everything to factory defaults. As the server had been in use prior to our acquisition, we decided to delay updating firmware unless we hit problems.

Step 1 – Change the management IP address to put it onto our own network. This is done directly on the Bladecenter. By happenstance, the default IP for eth0 already fell within our configured ILO subnet, so it did not need to change.

Once we can ping the box from a PC on our management subnet we can cease working on the KVM and use a browser from a more comfortable location.

Step 2 – Configure the switch. The switch ships with a factory-default IP address. On the Bladecenter we need to change three things: enable the external ports on the switch, enable switch management over all ports, and configure the switch itself.

The first two tasks are done on the chassis management screens.


[Image: IBM Bladecenter E external ports enabled]

And here:

[Image: CIGESM switch management enabled]

We now have to set the IP for at least one of the external ports on the switch to the Bladecenter’s internal subnet. Plugging the top Ethernet port into an external network will automatically place the switch’s internal port Gi0/17 onto the default VLAN 2. We now need to bring up that VLAN on the switch.

The switch IP can be set from the Bladecenter as well, but one can also telnet in on the switch’s default IP and configure it directly with Cisco IOS commands.

conf t
int vlan 2
ip address <switch-ip> <netmask>
no shut
exit
ip default-gateway <gateway-ip>
end
wr mem

Here the VLAN IP address becomes the IP for the switch itself, and the gateway already exists on our own internal router (in this case a Linux PC that acts as a gateway/firewall) attached to the Bladecenter switch port. These commands move the switch off its default subnet and onto the new one, so it will be necessary to establish a new connection during the process.

At the end of the process the command sh running-config should include this output:

[Image: IBM Bladecenter E VLAN config]
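For reference, the relevant section of the running config should look roughly like the fragment below. The exact addresses depend on your network, so they are shown as placeholders here:

```
interface Vlan2
 ip address <switch-ip> <netmask>
 no shutdown
!
ip default-gateway <gateway-ip>
```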

The chassis management screen should also show the same configuration.

[Image: IBM Bladecenter switch config]

Here’s the big at-a-glance subnetting picture.

[Image: IBM Bladecenter E subnet diagram]

The ILO subnet attached to the blades has been chosen to match the Bladecenter factory defaults. This can be changed to suit local requirements, but requires some extra configuration. Here we are only interested in getting the blades up and running to test them with the minimum of fuss.

Once the switch is reachable from our own LAN the blades can be inserted and configured.

Step 3 – Inserting the blades

This needed to be done with the KVM and a directly attached monitor. The management web interface requires an outdated and insecure JVM which we are not willing to downgrade to, and the only functionality we’d need from it is access to each blade’s POST and setup screens. Since we need to be at the iron to insert the blades anyway, we may as well configure them and install the OS at the same time.

As each blade is inserted and given KVM control, we go into the POST setup. With the CPUs and RAM recognised and showing no errors, there are two things to do: configure the two SAS drives on each blade as a mirror in the hardware RAID, and reset the network interfaces to their defaults (the 10.10.10.x/24 range, where x is the blade’s slot position).
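The factory-default addressing scheme described above can be sketched as a small helper. The function name and the 14-bay limit (a Bladecenter E chassis has 14 blade bays) are our own illustration, not anything shipped with the hardware:

```python
def default_blade_ip(slot: int) -> str:
    """Return the factory-default data-port IP for a blade in the given slot.

    Each blade's first interface defaults to 10.10.10.x/24, where x is the
    blade's slot position in the chassis (bays 1-14 on a Bladecenter E).
    """
    if not 1 <= slot <= 14:
        raise ValueError("Bladecenter E slots are numbered 1-14")
    return f"10.10.10.{slot}"

print(default_blade_ip(1))   # blade in bay 1 -> 10.10.10.1
print(default_blade_ip(3))   # blade in bay 3 -> 10.10.10.3
```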

Step 4 – Install OS

Installing Ubuntu 14.04 Server on the HS22 blades requires nothing special; it was the same process as installing on a PC. The only extra step was manually setting an IP address to match the default in the BIOS (we will not be using DHCP for the Bladecenter data ports).
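On Ubuntu 14.04 the static address is set through ifupdown in /etc/network/interfaces. A minimal sketch for a blade in bay 1 follows; the gateway and resolver values are placeholders for your own network, not values from our setup:

```
# /etc/network/interfaces (Ubuntu 14.04 uses ifupdown)
auto eth0
iface eth0 inet static
    address 10.10.10.1        # matches the blade's slot position (bay 1)
    netmask 255.255.255.0
    gateway <gateway-ip>      # your router on the blade subnet
    dns-nameservers <dns-ip>  # your resolver
```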

Once installed, we first check network connectivity by pinging an external IP, then check DNS. This is only to ensure that our blades are being correctly routed to the outside world. Once confirmed, the next step is to update and upgrade the OS installation with the latest patches.

Step 5 – Performance test

We used our existing real-time log analyzer to run the tests, with around 5,000,000 rows of historical data. Testing was conducted on a basic two-blade setup.

On blade #1 we installed Eclipse with the Subclipse and SVN plugins so that we could get the code from our repository and compile locally. By using the Eclipse GUI to do this we also ensure our X Windows connectivity and JVMs are all correctly configured.

On blade #2 we installed MySQL and modified the configuration to suit our performance requirements. With 8GB RAM available, we gave the InnoDB engine 6GB.
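The buffer-pool change amounts to a one-line edit in the MySQL configuration. A sketch, assuming the stock my.cnf layout that Ubuntu 14.04 ships with:

```
# /etc/mysql/my.cnf -- give InnoDB 6GB of the blade's 8GB RAM
[mysqld]
innodb_buffer_pool_size = 6G
```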

The results of our performance test were impressive. Although these blades have a lower CPU clock speed than our existing PC-based development hardware, we saw a 100% speed-up. We also monitored the blades and saw the code was using only about half of the available resources.

This speed increase is due to the better throughput of server hardware. It translates to around 350 million rows of log data parsed, normalised, and stored per day from a single blade, with plenty of spare capacity left on the DB blade we configured for testing.
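To put those daily figures in per-second terms, a quick sanity check on the arithmetic:

```python
# Convert the quoted daily throughput figures into sustained rows/sec.
seconds_per_day = 24 * 60 * 60  # 86,400

rows_per_day = 350_000_000      # observed from a single blade
rows_per_second = rows_per_day / seconds_per_day
print(f"{rows_per_second:,.0f} rows/sec")    # ~4,051 rows/sec sustained

target_per_second = 1_000_000_000 / seconds_per_day  # the 1 billion/day target
print(f"{target_per_second:,.0f} rows/sec")  # ~11,574 rows/sec
```

So the 1-billion-rows-per-day target means sustaining roughly three times the per-second rate we measured.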

These tests have shown us areas in which we can improve our own code to fully utilise the blades’ capacity, before we move on to adding extra blades for distributed processing. That we’ll leave for our next coding update. Our target is to process, normalise, and store 1 billion Apache logfile entries per day into a single instance of MySQL.