From the Gartner IT Infrastructure, Operations & Management Summit: a panel discussion on cloud computing – Cloud Computing: Will it Impact Infrastructure & Operations?
On the panel are:
Tom: [High Level Perspective] There is perhaps more hype around cloud (that it will solve all our problems) than I’ve ever seen in IT. The reality is that a lot of good things are (and will be) coming out of the cloud but we need to start talking about it more directly to take full advantage of the trend.
Cloud computing does not mean a massive-scale operation, like Google, or even that it’s someone else’s stuff. It means that we have users and providers and between them we have a brick wall. Users ask for what they want (behavior) but abstraction layers and providers need to change, in terms of economies of scale, elasticity, automation, and so on, in order to provide services efficiently and on-demand. Users don’t define what happens behind the brick wall – behind the wall can be an external provider or internal IT (public or private cloud).
Virtualization is leading us to cloud computing, but cloud computing is an evolutionary change, gradually expanding in size and scope for years leading up to where we are now. In 5 to 10 years, the cloud market will look very different than it does now – there won’t be just a few mega-providers but thousands, with cloud supply chains and clouds built on clouds.
The cloud isn’t just about computing alternatives, but rather brand new ways of thinking. For example, the data growth issue – should you “prune” data? This question becomes moot with the cloud – why delete, when you can just put it out on the cloud with efficient access and storage.
The private cloud is all about ROI and will be a bigger phenomenon than the public version for several years. Services destined for the cloud have a different evolutionary path than those that are not, making “cloudsourcing” a new concern in IT, one that requires focus and new skills.
Mark: [Network Perspective] Yes, cloud computing is about the service (and not the underlying infrastructure), but there is a fair amount of complexity – the cloud is not one thing, but several. Take network latency: you can buy bandwidth, but you can’t bribe Mother Nature. Looking at the end-to-end delivery mechanism for cloud, remote applications require more than just bandwidth. As bandwidth increases, latency becomes the dominant limit on throughput, so large-application delivery across long, fat pipes can slow to a crawl on high-latency links.
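Mark’s point about latency capping throughput can be made concrete with the standard single-stream TCP ceiling of window size divided by round-trip time. The sketch below is illustrative only – the window size and RTT figures are assumptions, not numbers from the panel.

```python
# Sketch: single-stream TCP throughput is capped at window_size / RTT,
# no matter how much raw bandwidth the pipe has.
# The 64 KB window and 80 ms RTT below are illustrative assumptions.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Maximum single-stream TCP throughput for a given window size and RTT."""
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1_000_000  # bits per RTT -> Mbps

# A classic 64 KB window over a cross-country 80 ms link:
ceiling = max_tcp_throughput_mbps(64 * 1024, 80)
# Roughly 6.5 Mbps -- on a 1 Gbps "fat pipe," the link sits mostly idle,
# which is why adding bandwidth alone doesn't speed up remote applications.
```

This is why buying bandwidth doesn’t “bribe Mother Nature”: shrinking the RTT (or using WAN optimization and larger windows) matters more than widening the pipe once latency dominates.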
Cameron: [Operations Perspective] Cloud computing does not mean your management burden becomes lighter. Look at all the outages that have made front-page news, even just over the last year or so. IT managers sometimes think that cloud computing is going to take away their jobs – in fact, clouds have to be managed, potentially making their jobs more important than ever. It’s up to IT to decide whether (and what type of) cloud computing should be the service delivery mechanism. IT has to choose the right service provider, manage the transition of services to the cloud, and, going forward, monitor and validate expected service levels.
Stan: [Storage Perspective] The easier to use you want to make something, the more difficult it is to implement. This also applies to cloud computing. By using a cloud storage provider, you’ve just abstracted the customer’s application server from storage as well, making it more complex. There are now bandwidth charges for accessing your data; they might be low, but they are variable.
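Stan’s “low but variable” point is easy to see in a toy bill calculation. The per-GB rates below are hypothetical placeholders, not any real provider’s pricing – the point is only that the at-rest portion is predictable while the access (egress) portion moves with usage.

```python
# Sketch of variable cloud-storage access charges.
# Both rates are hypothetical placeholders, not real provider pricing.

STORAGE_RATE_PER_GB = 0.02  # hypothetical $/GB-month for data at rest
EGRESS_RATE_PER_GB = 0.10   # hypothetical $/GB for data retrieved

def monthly_storage_bill(stored_gb: float, retrieved_gb: float) -> float:
    """At-rest cost is predictable; the retrieval portion varies month to month."""
    return stored_gb * STORAGE_RATE_PER_GB + retrieved_gb * EGRESS_RATE_PER_GB

# Same 1 TB data set, two months with different access patterns:
quiet_month = monthly_storage_bill(1000, 50)   # $20 at rest + $5 egress
busy_month = monthly_storage_bill(1000, 800)   # $20 at rest + $80 egress
```

The storage line item barely moves, but a busy month can multiply the total bill – which is exactly the budgeting complexity the abstraction introduces.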
Example of cloud computing:
Midsized business – 25% of IT delivered internally, 75% by 8 different cloud service providers. The only reason the wheels turn is that the services flow through IT – contracts, service levels, help desk calls, etc.
Questions from the audience:
What should CIOs and IT organizations be doing now to get ready for utilizing cloud computing?
Define service levels with your customers [note: A show of hands showed virtually no one doing this], understand and define how to measure resource utilization and figure out how you’re going to do the accounting/chargeback. The current standard in IT is to over-provision; this is “safe,” as it’s difficult to truly define the service parameters and how they might change going forward, so we tend to build for “just in case.” But the cloud is different – it needs to be automated, not interactive but on-demand as things change; so don’t over-provision from the beginning. Instead, scale up and down as necessary.
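The “don’t over-provision, scale up and down as necessary” advice can be sketched as a simple automated capacity policy. The thresholds and node granularity here are illustrative assumptions, not anything the panel specified.

```python
# Sketch: instead of building for "just in case," an automated, on-demand
# policy adjusts capacity as utilization changes.
# The 75%/30% thresholds are illustrative assumptions.

def desired_capacity(current_nodes: int, utilization: float,
                     scale_up_at: float = 0.75,
                     scale_down_at: float = 0.30) -> int:
    """Return the node count an on-demand scaling policy would target."""
    if utilization > scale_up_at:
        return current_nodes + 1              # add capacity under load
    if utilization < scale_down_at and current_nodes > 1:
        return current_nodes - 1              # release capacity when idle
    return current_nodes                      # steady state: no change

# Busy period grows the pool; quiet period shrinks it back down:
# desired_capacity(4, 0.90) -> 5
# desired_capacity(4, 0.20) -> 3
```

Note the policy needs exactly the inputs the panel says IT rarely has today: a measured utilization figure and defined service parameters to set the thresholds against.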
Developing a new network for Marine Corps – but can’t find standards/guidelines around what needs to be done to support virtualization and cloud computing. Can you give guidance?
[note: Wow. Possibly summing up the state of all this in one very high-level, quite frustrated question that’s asking for specifics based on reality. Sadly, this is an area that is still very much about speculation and trial and error.] One analyst says service levels, but another says that IT doesn’t define service levels; that’s the business/application owner’s job. Mark Fabbi brought up the session he’s doing tomorrow, “How to Save $120 Billion in Networking & Communications”; the whole point is that there’s waste that occurs around this traditional over-provisioning mindset. In fact, taking virtualization and cloud into account is a great opportunity to cut this waste.
What about security issues with Cloud Computing?
Always starts with policies. You still need your ILM defined, in the cloud or not. Encryption, malware prevention, access control – all of these things still need to be defined by any organization. There’s no one-size-fits-all.