If Datability is all about extracting value from those stores of data which seem to accumulate so rapidly and so easily, then it should be pretty clear that everything associated with data gathering and use should grow along with the data itself. Scalability matters – but not quite as you might understand it. I’m going to explain why.
First, let’s take a step back and break out some numbers associated with data expansion (not that you really need them – your own data stores are likely to be growing at an astounding, if not alarming, pace).
Just in terms of sheer capacity, market watcher IDC recently put out a paper, Data Age 2025, which estimates that the ‘global datasphere’ will reach a total of 163 zettabytes (a zettabyte is one trillion gigabytes) by the year 2025. The amount of data being created is getting so out of hand that we seem to need new adjectives to describe it, as the old ones don’t quite convey the quantities.
But here’s the thing. Typically, when it comes to data, we tend to think of ‘scalability’ in terms of the hardware and platforms – even cloud platforms – that must host the stuff. A zettabyte is obviously a pretty serious quantity, so it requires pretty serious scale if it is to reside anywhere, right? Well, yes.
And no. Because the real challenge with Scalability isn’t where you put the stuff. Instead, it is about how you use the stuff.
Indeed, in an earlier blog introducing Datability, I pointed out that merely storing data isn’t exactly useful. Quite the opposite; it can be a liability for various reasons, not the least of which is that it consumes capacity, electricity and good old-fashioned dollars.
It’s also worth noting (as you will know very well) that the cost and capacity of storage devices and services have, to date, proven quite capable of scaling to meet burgeoning data storage needs, independently of the quest to find those new words for how big these stores are getting. Azure, for example, can do it without breaking a sweat.
Instead, the Scalability challenge is concerned with how readily you can go about extracting value from all that data. The question isn’t ‘how do I store this stuff?’ After all, in a cloud world, something as commoditised as storage shouldn’t be troubling a CIO, much less line-of-business people. Utilities are necessary, but they don’t add much value.
With Datability, the question is ‘How do I turn this stuff into actionable information?’ With massive scale, the only possible answer is to supercharge the dev team. And that means automation.
WhereScape RED steps up here by delivering all the capabilities necessary to harness big data – as big as you like, although we’ll concede we haven’t yet encountered many zettabyte-scale stores, simply because they are so staggeringly big that no single organisation has yet needed that much – and then create the structures in the cloud which make it ready for analysis.
That’s the creation of data marts, data warehouses and data lakes. With RED, your people are equipped to scale and deal with the data deluge. Datability, of course, means they aren’t just dealing with it, but releasing data value faster and with ease. That’s because RED automates the development, deployment and operation of data infrastructure and big data integration.
Equipped to scale means the ability to rapidly respond to business demands, no matter how big the underlying data sets are. It means the ability to take full advantage of the Azure cloud, providing your people with the tools they need to innovate with data, experiment and explore. With Scalability, Datability is one step closer to reality.