Surprisingly, IBM had scant experience building web-scale applications. Its model was to provide turnkey hardware, software and services solutions to large clients with particular employee-to-business and business-to-business needs. The nascent e-business era of customer-to-business scenarios was typically served by custom one-off, siloed projects of varying success. The company never translated its expensive and hard-fought experience hosting Wimbledon or the Olympics into practical knowledge for defining a public cloud. For emerging web-born businesses such as Google and Amazon, global-scale customer-to-business interaction was fundamental to their survival. These companies and those like them relied on in-house engineering expertise to build what would become their public clouds, not as a primary goal, but as a byproduct of sustaining their own businesses. This DIY engineering culture, plus a healthy dose of "succeed or die" performance pressure, produced the #1 and #3 public cloud providers. The trend also robbed IBM of a whole generation of new customers: new companies that increasingly sought to lower their capital outlay by utilizing the cloud environments of these two Internet behemoths, secure in the knowledge that Google and Amazon relied on the same environments to sustain their own businesses. While an engineering culture flourished at Amazon and Google, IBM still very much embraced the sales culture. It was a place where everyone was supposed to sell, and salespeople steered the ship, in stark contrast to its new rivals in the cloud arena.
The clouds built by Amazon and Google were part of their corporate DNA right from the beginning. What seems more impressive, however, is the progress archenemy Microsoft has made. Even Microsoft, the great Internet denier of the '90s, has built a better cloud than IBM; it currently stands at a distant #2, all without the need for serious acquisitions to make it happen.
IBM’s public cloud – SmartCloud Enterprise – trailed Amazon, Google and even Microsoft by years, being announced in 2011 (a full five years behind AWS). Besides being late to market, it has also been a marketing disaster, with the typical IBM-induced confusion of tiered and overlapping offerings (Foundation, Entry, Enterprise+, you get my point), coupled with a complete retooling after the acquisition of SoftLayer (2013), a pivot towards OpenStack (2013) and the adoption of Cloud Foundry (2014). It also casts a very wide customer net, supplying every need: IaaS via SoftLayer and OpenStack (including bare-metal provisioning), CaaS (Container as a Service) via the IBM Containers bolt-on to Bluemix, PaaS via Bluemix, and various SaaS offerings such as IBM Connections for collaboration, Verse for email and Blueworks Live for business process management. Finally, as a differentiator, IBM is pursuing the hybrid cloud with Blue Box Dedicated and Local and Bluemix Dedicated and Local, which collectively cover IaaS and PaaS in dedicated off-premises and on-premises offerings. The hybrid products and services cater to IBM’s existing clientele, whereas public Bluemix hopes to capture developer mindshare and emerging startups.
I’m not sure if such wide coverage is a masterful strategic interweaving of multiple products to maximize relevance, or an act of desperation stemming from a lack of confidence in reading the tea leaves of the cloud ecosystem. What is certain is that IBM’s cloud platform consists of a combination of recent acquisitions and open source software, primarily because the company was so desperately behind the marketplace: a late start compounded by the commercial failures of SmartCloud and PureSystems. The company’s cloud story had serious catching up to do.
IBM’s platform-as-a-service environment Bluemix has been available for just over two years now, and offers a constantly evolving and impressive array of services and runtimes. The company has leveraged many of its middleware products, enabling them as Cloud Foundry services alongside common open source equivalents. Coupled with the new Watson services, this makes for a compelling platform. Yet unfortunately, due to its late market entry and the early fumble with the SmartCloud line, the IBM cloud has already lost the battle for developer mindshare. When the industry talks about cloud computing, comparisons between AWS, Azure and GCE are typically mentioned first, followed by players such as Rackspace and DigitalOcean. Seldom is SoftLayer mentioned, and the only references to Bluemix come from IBM itself. This obscurity is further compounded by a lack of open source adoption. As distributed computing, microservices and containerization have been focal topics within the industry over the past year, it has become evident that open source projects at the forefront of this movement, such as Docker Swarm, Kubernetes and Apache Mesos, provide adopter on-ramps only to the most popular cloud environments. There is no easy way to provision one of these frameworks on SoftLayer to experiment with and “kick the tires”. The reason a new player such as DigitalOcean is supported before IBM’s offerings is its intense simplicity: what it lacked in sophistication, it reaped in developer mindshare, and as a result its growth curve has been phenomenal. The end result is that unless you are a first-tier cloud provider, support for and references to your cloud offering are greatly diminished. Without an exceptional defining attribute or significant marketshare, your platform is condemned to the long tail of obscurity. Despite the wide net cast, this is where the IBM Cloud finds itself now.
Generally speaking, the new utility model for computing is going to take a toll on the profit margins IBM enjoyed in the past. No longer will clients need to add a safety margin when estimating their hardware and software requirements for the years ahead. Furthermore, shelfware – unused or under-utilized software licenses – will be a footnote of the past. Under the new rules of utility computing, clients will only pay for what they use, when they use it.
IBM’s hybrid cloud play targets a big client niche that takes the public-only clouds such as GCE and AWS out of play, leaving only Azure (and of course a long tail of smaller competitors). Strategically a good play, but I wonder about the longer-term viability of the entire hybrid cloud market. A hybrid cloud is intended to be a safe and isolated mini-cloud residing within the corporate firewall. Its self-service features are meant to support devops and accelerate internal IT lifecycles. Additionally, it has the potential to federate with its bigger brother in the public sphere for any low-risk workloads. These are all desirable features, but considering where most of these large clients stand today, it is evolutionary rather than revolutionary. These large customers have spent the past decade virtualizing their internal hosting, have achieved a measure of self-service (or at least accelerated turnaround) and have been engaged in automation with a myriad of tools like Chef, Puppet and the like. However, the most obvious deficit of on-prem clouds is that they lack the killer feature of treating computing like a utility. This is the endgame, and it is only perceived risk that is keeping the established customers on the sidelines. I think eventually the tides will turn, and the prevalent attitude will consider it reckless and irresponsible to host your own data rather than entrust it to a dedicated cloud provider. Data protection regulations will continue to get more sophisticated and onerous, and economies of scale will drive more clients to public or dedicated cloud adoption (in a sense, this is Regulatory Compliance as a Service – RCaaS).
Containerization, and Docker specifically, crashed onto the scene over the last two years and has disrupted everything. Devops was made more practical overnight. Distributed systems like Cloud Foundry and Apache Mesos have been retrofitted to accommodate Docker. New systems such as Kubernetes and Docker Swarm now exist. Other PaaS offerings such as OpenShift have been rewritten to use Kubernetes. OpenStack now manages containers via Magnum. Container-specific OSes like Photon and CoreOS now exist. In short, the dust is still settling.
What does this mean for IBM and its cloud strategy?
For one, I think it’s the deathblow to AIX, which has been on life support ever since IBM committed to Linux. Docker and its kin are all Linux-centric, which means AIX users are left out of this devops renaissance. Even Microsoft has added Docker support with Windows Server 2016 containers (enhanced via Hyper-V isolation). True, the images themselves are OS-specific, but now Windows users can participate in the Docker ecosystem.
Also, containerization has provided a “plugin” mechanism for cloud services, which in a sense diminishes the value of the service catalogs provided by the cloud vendors. As long as the environment can host containers, one can simply rifle through the registry for the image that suits one's needs and spin it up. Of course, there is additional work required to keep that image up to date, ensure it contains no security vulnerabilities and configure it to scale – all of which a managed service would provide transparently. However, features like IBM’s Vulnerability Advisor and several other similar products provide automated image scanning that makes this job much easier.
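To sketch that “plugin” idea, here is a minimal Docker Compose fragment: the only thing the cloud host must provide is a container runtime, and the “service” itself is pulled straight from the public registry. The `redis` image and the `cache` service name are illustrative stand-ins for any registry image a team might adopt.

```yaml
# docker-compose.yml -- a sketch, not a vendor-specific recipe.
# Works on any container-capable host (SoftLayer, AWS, a laptop),
# with no dependence on the provider's service catalog.
version: "2"
services:
  cache:
    image: redis:3          # any registry image plugs in the same way
    ports:
      - "6379:6379"
    restart: always         # keeping the image patched remains the adopter's job
```

The trade-off described above is visible here: the image tag pins a version, so tracking updates and scanning for vulnerabilities falls to the adopter rather than being handled transparently by a managed service.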
As the service runtimes become portable and less of a differentiator between cloud vendors, the platform stickiness shifts to data and services that rely on large data sets such as machine learning algorithms.
AWS is the 800-pound gorilla of the public cloud, and shows no signs of slowing down. Meanwhile, Google’s cloud is a threat for developer mindshare, but not a serious enterprise contender. IBM’s chief competitor for the hybrid enterprise cloud is Microsoft Azure. Potential new competitors like Cisco and Intel are becoming more heavily involved in the datacenter virtualization ecosystem. And new disruptors could turn everything upside down, as a small company named dotCloud did with Docker.