In the cloud

IBM Cloud Computing (Photo credit: IvanWalsh.com)

One of the more interesting areas I have been lucky enough to work with is the cloud. We run some banks live in the public cloud, and the scale of the offerings is simply staggering. There are three big players: Microsoft, Google and Amazon. Between them, they bought 31% of the world’s CPUs in the last two years, and Microsoft alone is pumping a billion dollars a year into its cloud initiative.

I was surprised to learn that the datacentres that make up the cloud are not stocked with the latest generation of machines; slightly older stock makes much more sense on a dollar-per-CPU-cycle basis. The economics are also surprising: CPU power accounts for less than 1% of the overall cost of the datacentre, between 50% and 90% of the power feeding in is eaten up by lighting, cooling, transformation and the like, and a sizeable proportion of the rest is taken up by the storage arrays.
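To make those proportions concrete, here’s a back-of-the-envelope sketch (the 10 MW feed is a made-up figure, purely for illustration):

```python
# Illustrative only: if 50-90% of the incoming power is lost to cooling,
# lighting and transformation, only a fraction ever reaches the servers.
# The 10 MW feed is a hypothetical figure, not one from the post.

def useful_power(feed_kw: float, overhead_fraction: float) -> float:
    """Power left for the servers after facility overhead."""
    return feed_kw * (1.0 - overhead_fraction)

for overhead in (0.5, 0.7, 0.9):
    kw = useful_power(10_000, overhead)
    print(f"{overhead:.0%} overhead -> {kw:,.0f} kW left for the servers")
```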

Siting the datacentres is extremely important. The first consideration is power: you need a lot of it, and it needs to be reliable and cheap. You also need good internet links; there’s no point siting your fledgling datacentre next to a nice shiny power station if people can’t connect to it. Climate is another consideration: the hotter the country, the higher your cooling costs will be. The last consideration is security. The more the customer base builds up, the more attractive a cloud datacentre becomes as a terrorist target. The ideal site is Iceland: the climate is cool, there is plenty of cheap geothermal energy, and it sits close to the transatlantic internet routes. The UK, by contrast, is a very expensive place for a datacentre due to high land and labour costs.

The datacentres are populated using prefabricated containers, each holding around 5,000 CPUs in ready-made racks preconfigured with storage. When a container is delivered, power, network and water (for cooling) are plugged in and it starts initialising. Not all the servers in a container can start up at the same time – the thing would melt – so they start up in waves spaced evenly around the container. A typical datacentre will have many of these containers, with two or three more added every week.
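I don’t know exactly how the waves are scheduled, but the idea is simple enough to sketch (the wave count and round-robin scheme below are my guesses):

```python
# A guess at the staggered start-up: assign each server to one of N
# waves by position, so each wave draws machines spaced evenly around
# the container rather than powering up one hot corner at a time.

NUM_WAVES = 8  # hypothetical; the real figure isn't given

def wave_for(server_index: int) -> int:
    """Wave in which a given server powers on."""
    return server_index % NUM_WAVES

# The first ten of 5,000 servers land in waves 0-7, then wrap around.
print([wave_for(s) for s in range(10)])  # -> [0, 1, 2, 3, 4, 5, 6, 7, 0, 1]
```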

With this many machines, hardware failure is a fact of life. Anything up to a 3% failure rate is business as usual. If the failure rate hits 10%, the machine vendor dials in and performs remote diagnostics on the container. If the failure rate hits 30%, the machine vendor sends out an engineer to inspect the container and perform any remedial action necessary. If the failure rate hits 50%, the container is powered down and replaced.
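The escalation ladder is simple enough to write down. Here’s a minimal sketch using the thresholds above (what happens between 3% and 10% isn’t spelled out, so the “monitor” case is my assumption):

```python
# The container escalation ladder, using the thresholds from the post.

def escalation_action(failure_rate: float) -> str:
    """Map a container's hardware failure rate to the vendor's response."""
    if failure_rate >= 0.50:
        return "power down and replace the container"
    if failure_rate >= 0.30:
        return "send an engineer to inspect and remediate"
    if failure_rate >= 0.10:
        return "vendor dials in for remote diagnostics"
    if failure_rate <= 0.03:
        return "business as usual"
    return "monitor"  # 3-10% isn't specified in the post; assumed watch-and-wait

print(escalation_action(0.12))  # -> vendor dials in for remote diagnostics
```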

I was surprised by how flexible the Azure cloud is. There are three ways you can use it. The first is what they call the “web role”, which allows any kind of web-based activity (web pages, active server pages, web services and so on). The second is the “worker role”, which allows you to do pretty much anything, provided the software you want to run does not require installation (i.e. it can simply be copied across) and does not need to access the registry. The third is the “VM role”, where you supply a whole virtual machine image for Azure to host.
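A worker role boils down to a long-running loop that picks up work and processes it. Real Azure worker roles of that era were .NET classes, but the shape is easy to sketch (the queue and jobs below are invented for illustration; this is not the Azure API):

```python
import queue
import time

# The general shape of a worker role: no installer, no registry, just a
# process that loops, pulling work items and processing them.

work = queue.Queue()
for i in range(3):
    work.put(f"job-{i}")  # hypothetical work items

def run() -> None:
    """The worker's main loop."""
    while True:
        try:
            item = work.get(timeout=1)
        except queue.Empty:
            break  # a real role would sleep and poll again rather than exit
        print(f"processing {item}")
        time.sleep(0.1)  # simulate doing the work

run()
```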

There were big challenges to overcome. Because you never really know which machine your application is running on or where your data is, it gives regulators a heart attack. There is also the fact that the database is what they call “eventually consistent”. Because your data is spread around and copied onto different machines, there is no real backup as we would normally think of it. You can back your data up locally, but if your database is big, you may want to take advantage of the service whereby they post you a tape once a day. There is no such thing as a printer connected to the cloud, so if you want a hard copy of anything, you have to get creative.
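“Eventually consistent” is easier to see than to describe. Here’s a toy illustration (nothing to do with Azure’s actual replication machinery): a write lands on one replica first, so a read can briefly see stale data until a background sync catches up.

```python
import random

# Toy model of eventual consistency: three replicas of the same record.
replicas = [{"balance": 100}, {"balance": 100}, {"balance": 100}]

def write(key: str, value: int) -> None:
    """Writes land on one replica; the others catch up 'eventually'."""
    replicas[0][key] = value

def propagate() -> None:
    """The background sync that eventually makes the replicas agree."""
    for r in replicas[1:]:
        r.update(replicas[0])

def read(key: str) -> int:
    """Read from a random replica, as a load balancer might route you."""
    return random.choice(replicas)[key]

write("balance", 42)
print([read("balance") for _ in range(5)])  # may mix stale 100s with 42s
propagate()
print([read("balance") for _ in range(5)])  # now consistently 42
```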

The costs of a cloud-based solution are *very* compelling, especially when you consider that you are effectively getting a fully managed, fault-tolerant datacentre. There are a number of pricing options depending on how much you want to commit, from pay-as-you-go right through to volume licensing. If I had my way, I’d put every server in the cloud.

 

Tap, tap, tap. It looks like you’re writing a blog post…

Clippit asking if the user needs help (Photo credit: Wikipedia)

I can’t stand technology that thinks it’s smarter than me, especially when some bright spark decides it will make my life easier. There can’t be many fans of IVR systems, where you phone up and get told to press 1 for this and 2 for that, but I prefer them to the automated systems that say something like “Tell me what you want to do today” and wait for a response. I usually reply, “To speak to a human being”.

There is a lot of complexity in the modern world and it’s very difficult to cater for every kind of user. People don’t read manuals any more; in fact, many products ship with little or no documentation whatsoever. How, then, do you help people climb the learning curve? The best way is to make things intuitive in the first place. In software, consistency with convention goes a long way. If that fails, there’s always the online help.

I don’t know about you, but for me the help is the last resort. Unless it’s really well written, the chances are that after bumbling around for a while, the answer to whatever question you have will prove elusive. The basic design of help systems hasn’t fundamentally changed over time, but one day, a company called Microsoft dared to innovate.

I was at a conference. There was a buzz in the air; everyone could sense that some big announcement was on the way. As the speaker took the stand, a hush descended over the crowd. Without saying a word, he fired up his machine, launched a program and started typing. An animated paper clip in the corner of the screen bounced around, its eyes following the cursor. After a moment or so, the paper clip tapped on the screen and put up a speech bubble: “It looks like you’re typing a letter. Would you like some help?”

There was a nervous ripple of applause. The speaker announced that the paper clip’s name was Clippy, the Office Assistant, and that it represented a revolution in online help. He went on to show us the different faces Clippy could take. Apparently, we could add Clippy to our own applications, as there was a rich API; we could even create our own avatars. The man predicted that one day, all software would have a Clippy, proactively educating users in how to use the program.

You could have knocked me over with a feather. I had to check the date to see if it was April the 1st, but it obviously wasn’t a joke. You could see that they had invested a significant number of man-years: the avatars were nicely drawn and animated, and proactivity takes some engineering. Someone, somewhere really believed that this was the future.

I couldn’t see it. It was too intrusive, too twee, far too annoying. It seems the general public agreed: after universal derision, the Office Assistant was quietly dropped. It’s a shame, because there was the germ of a good idea in there somewhere.

 

Window replacement

Microsoft Windows 95 operating system cover shot (Photo credit: Wikipedia)

Microsoft has always been an adaptable beast, constantly reinventing itself to suit whatever technology landscape is the order of the day. Sometimes they are slow to adapt, as when Bill Gates initially dismissed the internet, but they are quick to catch up.

This week, 17 years after the fanfare of Windows 95, comes the launch of Windows 8. Back in 1995, Take That and Blur were fighting for the number 1 spot in the charts, Sweden, Austria and Finland had just joined the European Union, and Netscape had just gone public.

The computing landscape was very different back then. Pretty much every desktop in the world ran Windows, so Microsoft had a ready supply of customers eager to upgrade from the limitations of Windows 3.x to the ultra-modern Windows 95, with its plug and play, 32-bit support and long filenames. Even so, Windows 95 didn’t come with a web browser; you had to get the Plus! pack for the fledgling Internet Explorer. Thus began the browser wars that led to the downfall of Netscape.

The mood of the Windows 95 launch was very different. Microsoft was very much a company in the ascendancy; they dominated the desktop with Windows and Office, and there was absolutely no doubt that the new version of Windows would be a success. They chose the Rolling Stones’ “Start Me Up” as the theme tune for the launch campaign, a reference to the brand new Start button that nestled in the bottom left of the screen. Wisely, the version they used omitted the words “you make a grown man cry”.

Windows 95 was a runaway success, selling 1 million copies in the first 4 days and 40 million in the first 12 months. Microsoft will be hoping for similar commercial success with the new version of Windows, but the competitive landscape is very different. Windows 8 is not just a desktop operating system; it is also aimed at the very crowded tablet market, quite a battlefield, with Android and iOS holding the high ground. And whereas Windows 95 was a big step forward from Windows 3.x, Windows 8 comes after a very capable Windows 7, which had little to fault.

Windows 8 has been publicly denounced by Tim Cook, the Apple CEO, as an unholy union not unlike a toaster combined with a fridge. Apple have approached the market with separate operating systems for tablet and desktop, and see any operating system that tries to cater for both platforms as a compromise too far.

With the cash cows of Windows and Office looking decidedly venerable, Microsoft need Windows 8 to be successful, and the move to a completely new paradigm is brave (even though the old look and feel is still there if you need it). I think they deserve plaudits for that bravery, and there is a good chance that, just like the ribbon toolbar that came with Office 2007, people will get used to it and come to love it.

Either way – Windows 8 is a landmark event in computing history.

Beam me up, Scotty!

 

Abstract icon of Enterprise NX-01 of Star Trek Enterprise (Photo credit: Wikipedia)

After the big comfy armchair that was BP, my second employer, Pentyre, was more like a roller coaster. I was brought down to earth with a bump when I was summoned to an all-staff meeting the week after I joined. The entire company easily fitted into a small room; I was one of seven software developers. The MD explained that a customer had reneged on a bill and the company was in real trouble. Some of the directors were going to forego salary for a couple of months and, hopefully, everything was going to be OK. Had I made the mistake of my life?

I very nearly missed out on working at Pentyre altogether. I had already accepted a job with another company, but there was a recruitment consultant who just wouldn’t take no for an answer and insisted I visit Pentyre to see what they had to offer. So after work one evening, I turned up outside their offices for an interview. First appearances weren’t too promising – a converted factory with a tatty sign outside. I was shown up the stairs into a demonstration suite, with various devices around the place: pagers, phones, cameras, alarms, flashing lights.

I was interviewed by the MD himself: an unassuming, impeccably dressed, middle-aged man called Malcolm, short, with wispy white hair. He gave me the potted history of the company, which didn’t take long. After a long career at IBM, he had taken his terms and used the severance money to start the company. He then took me through a demonstration of what their software could do. It was a dazzling display as he made the various devices do their stuff with just a click of his mouse. The inner geek in me was hooked.

He then took me on a tour of the building. At the back was an area that looked a bit like Q’s workshop from the Bond movies. There were machines everywhere in various states of assembly. On the benches were oscilloscopes and multimeters, and propped up against one wall was something called a protocol analyser – the last time I had heard of anything like it was during an episode of Star Trek. Dominating the middle of the room was a train set. I looked at Malcolm quizzically, and he nonchalantly explained that it was there to test the train radio system they were developing for the London Underground.

I was mesmerised. I simply had to work for this company. As we went back up the stairs to talk turkey, it was obvious I was hooked. Malcolm asked what it would take for me to go and work there. A brief exchange later, the deal was done. I started work there a few days later.

It was an amazing place to work. The nice thing about working in a small company is that every single person really makes a difference. Every few months there was a new assignment, usually involving some brand new technology. My first job was all about pagers: back then, if you had a large site, the only way to keep in touch with your workforce was to give them one. I also developed a building management system, some modelling software for a steelworks and CCTV systems for prisons and nuclear reactors.

My time at Pentyre taught me the power of small teams working together without constraints. The productivity was amazing, and there was no demarcation: you sold, designed, built, tested, installed and supported every bit of software that went out the door. Not all the technology lived up to its billing, though. We worked on a project to detect faces in crowds, and I couldn’t get the face recognition software to recognise me standing perfectly still less than two paces away. Voice recognition was similarly inaccurate, being unable to tell the difference between “Chicken Tikka Masala” and “Land Rover”, even when spoken slowly and deliberately.

The technology failure that has tickled me to this day was DCOM. Launched with some fanfare, DCOM (Distributed Component Object Model) was Microsoft’s attempt at a model for distributed computing. In order to test it, we set up two machines. On the first, we coded a simple button labelled “Sausage”, which would send a message to the second machine, which would respond with the on-screen message “sizzle”. It was the simplest test imaginable, and we were used to getting such things working.
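For contrast, here’s roughly what that round trip looks like with Python’s standard-library XML-RPC, a much later and far simpler stand-in for what DCOM was attempting (both “machines” run in one process here for brevity; this is not how we wired it up at the time):

```python
from xmlrpc.server import SimpleXMLRPCServer
import threading
import xmlrpc.client

def sausage() -> str:
    """The remote method: the second machine's response."""
    return "sizzle"

# The "second machine": expose the method over RPC. A daemon thread
# lets a single script demonstrate both ends of the conversation.
server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(sausage)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "first machine": pressing the Sausage button is just a remote call.
proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
print(proxy.sausage())  # -> sizzle
```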

Even so, after a whole morning of messing about with various settings, we simply couldn’t get it to work. Frustrated, we went off to lunch. When we returned, there was the “sizzle” message on the second machine – we obviously hadn’t left it long enough!