In the beginning there was the Colossus. Or maybe it was Babbage’s Analytical Engine… whatever. It was made… it was good. But there could be no multitasking… there could be only one user at a time. So… maybe it wasn’t so good. But hey… nobody gets it right the first time!
Fast forward to the first mainframes with interactive terminals: say, the IBM 3270 connecting to the old IBM mainframes. We’re looking at the first cloud. So why did this model make so much sense back then? Why did we move away from it? And why are we now back to it? Well… here’s my theory:
Why do we choose centralized computing?
Well, we choose it because it makes the most sense, of course! The moment computing went from top-secret government research to the greater business community marks the time IT became “alive!” And if you’ve ever watched a movie where someone wakes up after just being given life… they breathe in… and hard!
The simple fact is that back in the days of the 3270, it was cheaper to build a central powerhouse of data processing and let the terminals on the other end, built with much cheaper parts, just serve as a window into the bigger beast. What we needed to do back then (input data, organize data, retrieve data) was better served with as little processing done on the client side as possible. The reason? Client-side processing was just too expensive. The networks were too slow to move much data back and forth, so programmers of the day needed to be clever about how they used the small amount of resources they had.
And then computers got cheaper and faster… WAY faster…
As the price of computers went way, WAY down, the mainframe pricing model just didn’t keep up. And as networks got faster and faster, programmers weren’t as constrained by technical limitations as they once were. It became more economical to just use distributed systems as servers… and it made more sense to push a large chunk of the processing down to the client. When that happened, the personal computer became a massive contender in the IT scene for decades. Cheap memory, processing power, and storage opened up a whole new world that the centralized model just couldn’t keep up with. The bottlenecks during this period (1990 to 2009) were:
- Business model of the mainframe providers
- Network speed <– This is a BIG one!
Graphics became commonplace. And people love pretty pictures! Once people were shown that they could have a computer that was pleasurable to look at, a simple green screen could never suffice again. Well… pictures take up network resources (bandwidth)… and the 1990s weren’t exactly known for fast internet speeds. Just think about that collection of AOL disks that offered a blazing-fast 56 kilobits per second… assuming you had the cleanest phone line on the planet. Remember house phones?? Yeah… I barely do myself.
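To put a rough number on that bandwidth squeeze, here’s a back-of-the-envelope sketch. The 100 KB image size and the best-case 56 kbps modem rate are my illustrative assumptions, not figures from anywhere in particular:

```python
# How long does one modest web image take over best-case dial-up?
modem_bps = 56_000            # 56 kbps dial-up, perfect phone line
image_bytes = 100 * 1024      # a single 100 KB image
seconds = image_bytes * 8 / modem_bps
print(f"{seconds:.1f} seconds to load one image")  # about 15 seconds
```

Fifteen-ish seconds per picture, per user… multiply that across a page of graphics and you can see why a centralized model serving rich screens over 1990s phone lines was a non-starter.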
In summary, the way people needed to use computers changed… and it caused the world of IT to breathe out. But, while we weren’t looking… the bottlenecks changed again.
And now we have SaaS
The bottlenecks today aren’t so much in hardware for most applications. The bottleneck is that we haven’t yet caught up to using all the power we have at our disposal. And since we’re not really using it… it has suddenly become cheaper to move processing away from the distributed clients… and back into a central location. The cloud. SaaS. Whatever you want to call it these days. At its core it is the same thing… IT breathing in.
So… IT is inhaling. It is just beginning its second breath. And on this schedule… it should breathe back out again around the year 2025. Why will it breathe out? The same reason it always breathes… The way we use computers will change in such a way as to make it a necessity to adapt again. And… we will.
But for now… SaaS is here to stay. It is going to be here long enough to build a business model around it and if you’ve not done so already… you’re very far behind.
Then again… you could just be waiting for 2025… in which case… I’ll see you then.
ABOUT THE AUTHOR
Matthew Bradford has been in the I.T. Performance Business for 15 years and has been critical to the success of many Fortune 500 Performance Management groups. He is currently the CTO of InsightETE, an I.T. Performance Management company specializing in passive monitoring and big data analytics with a focus on real business metrics.