Data portability & cloud computing – your issues discussed

February 15, 2010

As promised a while back, I wanted to discuss some of the points made by Baggy on one of my recent posts (see more detail here).

“I was very interested by the first few paragraphs regarding portability. I have come across a large multinational who have been severely restricted by their current hosting and managed service provider to allow their business continuity company to port/replicate and even vault their data off their production site. The reason given by their hosting provider for the restriction …?…”you are utilising a shared disk model and we cannot RISK the chance your third party may interfere with other clients using the same platform”. Sounds unbelievable I know but absolutely true!” Baggy

I have to say I am slightly concerned about a number of points here. To start with, if the company has signed a contract that prevents it from signing an alternative contract to port/replicate/backup its production data for business continuity, surely that's anti-competitive!

Secondly, it’s the customer’s data and they have the right to mitigate risk across two providers. The reason given to the customer by their hosting provider around “utilising shared disk” and the possibility of “third party interference with other clients”, suggests to me that the overall security of the platform is definitely questionable.

There is a possibility that the hosting provider's technology may not allow data portability at a hardware level, but at a software level it should definitely be possible. Of course, it would depend upon the amount of data the customer wants to port/replicate/backup, as the network could restrict the desired RPO (Recovery Point Objective) and RTO (Recovery Time Objective).
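To put that network constraint in context, here is a rough back-of-the-envelope sizing sketch. The figures and the efficiency factor are my own illustrative assumptions, not numbers from any provider:

```python
# Rough sizing sketch (hypothetical figures): how long does it take to
# replicate a given daily change rate over a given network link, and is
# the desired RPO therefore achievable at all?

def replication_lag_hours(daily_change_gb: float, link_mbps: float,
                          efficiency: float = 0.7) -> float:
    """Hours needed to ship one day's changed data over the link.

    `efficiency` is an assumed factor for protocol overhead and
    contention on a shared line.
    """
    change_bits = daily_change_gb * 8 * 1000**3       # GB -> bits (decimal units)
    usable_bps = link_mbps * 1000**2 * efficiency     # Mbps -> usable bits/sec
    return change_bits / usable_bps / 3600

# Example: 500 GB of daily change over a 100 Mbps link
lag = replication_lag_hours(500, 100)
print(f"{lag:.1f} hours to replicate one day's changes")  # roughly 16 hours
```

If the lag ever exceeds 24 hours the link can never keep up with the change rate, and even below that the best achievable RPO is roughly the lag itself, which is why the network can quietly dictate both RPO and RTO.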

If I were the customer I would challenge the supplier further – after all, who owns the data? The provider may own the infrastructure but should support the customer, especially when the customer is simply looking to increase their overall resilience.

If I were looking to outsource any of my compute and data needs, I would always start by asking what the supplier's approach to data portability is.

Thanks again for your comments Baggy!


US Navy tests Cloud Computing

February 2, 2010

I recently read some interesting articles on the use of cloud computing by US government agencies, specifically the US Navy.

Since October 2008 the US government – namely DISA (the Defense Information Systems Agency) – has used IaaS (Infrastructure as a Service) to deploy RACE (Rapid Access Computing Environment): http://www.disa.mil/race/

RACE provides 24-hour computing resources within a secure private cloud environment, as and when required, to anyone with a US government credit card or a completed MIPR (Military Interdepartmental Purchase Request).

OK, so you may not think this in itself is a big deal. But the article goes on to suggest that the US Navy (through the Naval Network Warfare Command) is starting to look outside of its secure, confined, controlled infrastructure walls and potentially run certain computing requirements in both private and public clouds provided by third parties!

Annual tests called 'Trident Warrior' exercises are conducted on various Navy IT projects. For example, after the devastation caused by Hurricane Katrina, Navy personnel participated in a Trident Warrior exercise to test new web-based communications technologies, assessing their usability and value in a real-world environment. For further information on such tests, see http://www.navy.mil/search/display.asp?story_id=24281

Trident Warrior exercises include stringent technological testing to ensure that the US Navy knows exactly what works and what doesn't.

Recently the United States Department of Defense (DoD) conducted Trident Warrior tests on third-party cloud computing provision supplied to the US Navy through Amazon EC2 (Elastic Compute Cloud) and S3 (Simple Storage Service). The Navy used the cloud to run several applications and tested 'data-in-motion' security. These first tests concluded that third-party, public and private cloud computing is viable for global connectivity, server failover and application access for some applications.

I'll be keeping a keen eye on the results of the second round of tests – due to be conducted in Trident Warrior '10 and released this spring. It will be interesting to see whether the adoption of third-party cloud provision is accelerated by an endorsement from what must be one of the most mission-critical, security-conscious users of computing in the world!


2009: The Year of the “Cloud”

December 15, 2009

As we roll towards the Christmas break it’s that time of year when everyone starts to predict the big tech trends for 2010: what’s going to be the ‘next big thing’ that’s going to change our lives? (So the IT press will have us believe anyway). Before we move into 2010, let’s review 2009!

One of this year’s tech trends has to be ‘cloud computing’. The cloud managed to hit the headlines of the IT press in a big way!

But is Cloud really new? For me the term is new but the science behind it isn't! We've been working on this sort of thing for a long, long time. The difference is that this long-established technological theory is now delivered on today's 'platform of choice', the x86 generation. The theory really isn't anything different from what we've all been doing with a mainframe for the last 30 years! Let me explain… think of a mainframe as IaaS (Infrastructure as a Service). IaaS in its basic form is the utilisation of virtualisation within the data centre, and mainframes have been virtualised for years! OK, so the delivery was through a 'dumb' terminal – the green screen – BUT the software itself could potentially be delivered as a 'bureau' service: SaaS (Software as a Service). This could all be wrapped up as a 'pay-as-you-grow' service; you just switch on more MIPS (millions of instructions per second). So really the cloud just covers old ground but with a fresh outlook that maximises returns. You can't deny that the fundamentals are at least very similar.

Another feature of the cloud is the 'network' capability, which allows access from anywhere, at any time. This instantly takes me back to Sun Microsystems' marketing message from the '90s: 'the network is the computer'. The difference with cloud computing is the orchestration of today's computing from the virtualised x86 engine room through the shop window (Platform as a Service) that is 'web services via the browser'. The cloud provides the capability to connect quickly and reliably at work, at home or on the move, through, for example, next-generation mobile technology.

So let's look ahead to 2010. I definitely believe that Cloud hype will continue but will start to mature. People will finally start to leverage its potential business value and overcome the concerns that are holding them back, such as security and compliance. There will still be people rebadging partially virtualised IT estates as "The Cloud", but increased cloud adoption means money will be put into developing offerings rather than just running solutions – thus the theory will become a reality… Much more on 2010 next time… 🙂


Cloud Computing & Data Portability: Best practice & service providers? (Part Two)

December 10, 2009

I believe a future opportunity will arise for cloud computing service providers to partner, as long as they use similar technology. This could create a multi-vendor, cross-vendor cloud platform. So, whereas today you have many vendors working independently, providers could begin to work in parallel, integrating data to create a portability process that enables true data portability whilst minimising risk.

Just a thought… 🙂


Cloud Computing & Data Portability: Technology (Part One)

December 4, 2009

“With hundreds of terabytes in the cloud — you are no longer portable and you’re not going to be portable, so get over it,”*

This thought grabbed my attention… the idea that when you store a lot of data (I mean terabytes) within a 'cloud', you're then stuck in that cloud and cannot migrate, port or replicate to another cloud provider (regardless of whether they provide private or public clouds); you can't even revert to your own private cloud! In some ways I concur with this – if you're running within a cloud you're just buying virtual machines, and the storage presentation to the virtual machine is so intertwined that it's hard to unravel…

So this is where I think a private cloud offering gives you the assurance that you're still able to migrate, port or replicate from your own data centre into the cloud, and vice versa back out of the cloud. The technology itself is very important: your provider needs to pick the right mix of technology so that you always have the option to move away from that specific provider if you decide to take your data elsewhere…

Anyway, back to the data… let's take an example of how you can migrate in and migrate out using migration technology through replication. If the 'store' service in a private cloud platform utilises multi-vendor virtualisation technology, it can support a wider range of vendors' arrays. This means multiple arrays from multiple vendors can be supported through a multi-protocol fabric.
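As a thought experiment, the 'single view across vendors' idea might be sketched like this. The class and vendor names are purely illustrative, not any real product's API:

```python
# Illustrative-only sketch of a virtualisation layer that presents arrays
# from several vendors as one logical pool, so that data placement and
# replication can be managed regardless of the underlying box.

class Array:
    """A physical storage array from some vendor (names are hypothetical)."""
    def __init__(self, vendor: str, capacity_tb: float):
        self.vendor = vendor
        self.capacity_tb = capacity_tb

class VirtualPool:
    """Aggregates heterogeneous arrays behind a single 'view'."""
    def __init__(self):
        self.arrays = []

    def add(self, array: Array) -> None:
        self.arrays.append(array)

    @property
    def total_capacity_tb(self) -> float:
        # The customer sees one capacity figure, not per-vendor silos.
        return sum(a.capacity_tb for a in self.arrays)

    def vendors(self) -> list:
        return sorted({a.vendor for a in self.arrays})

pool = VirtualPool()
pool.add(Array("VendorA", 50))
pool.add(Array("VendorB", 120))
print(pool.total_capacity_tb, pool.vendors())  # 170.0 ['VendorA', 'VendorB']
```

The point of the abstraction is that a replication job targets the pool, not a specific vendor's array, which is what makes migration in (and back out) practical.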

An example of how this could work for a customer migrating to the cloud would be the following… by the way, this is on the roadmap for 2010 (a POC has already been completed, with some great results!!)

It's possible to deploy 'virtualisation appliances' into a customer's data centre, enabling the customer to have a virtual storage platform and giving them a 'single view' of their storage. The customer will have to accept some changes but will not have to make their current IT investment redundant, thus prolonging the life of their current storage assets – providing ROI and TCO benefits in a single proposition for their management! Once virtualised, it's possible to replicate the data into a storage cloud. (This is already up and running in our cloud today!)

This strategy provides a stepped approach to migrating to cloud services. The first step is 'cloud recovery'; then you begin to migrate production services as and when the business has seen the benefits and trusts the 'cloud' to run its mission-critical applications!
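The stepped approach might be summarised as follows (the step names are my own paraphrase of the process described above, not the provider's terminology):

```python
# A sketch of the stepped migration: virtualise on site, replicate for
# recovery, then cut production over once the business trusts the cloud copy.

MIGRATION_STEPS = [
    "deploy virtualisation appliance in the customer data centre",
    "present existing arrays as a single virtual storage platform",
    "replicate virtualised volumes into the storage cloud",
    "run 'cloud recovery' (DR) from the replicated copy",
    "migrate production services once the business is confident",
]

def next_step(completed: int) -> str:
    """Return the next step given how many are complete; run the list in
    reverse to describe moving back out of the cloud."""
    if completed >= len(MIGRATION_STEPS):
        return "migration complete - reverse the list to exit the cloud"
    return MIGRATION_STEPS[completed]

print(next_step(3))  # prints the 'cloud recovery' step
```

Because each step is reversible, the same sequence run backwards is the exit route, which is the portability argument in a nutshell.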

So already you can see some benefits – and when you want to move out of the cloud you simply reverse the process… see, the cloud can be portable!

* See the full article here