Open source lessons could help the move to the cloud

April 21, 2010

We continue our conversation today with Gunnar Hellekson, chief technology strategist for Red Hat’s U.S. Public Sector Group.

When it comes to security, he explains why the old adage “the more things change, the more they stay the same” often applies.

Security Concerns

“Many people still have concerns about the security of open source software, and I don’t think it’s specific to government. Look at the kinds of customers Red Hat has: 50 percent of the equity trades in the world are executed on Red Hat Enterprise Linux, [and it is] in every tactical vehicle in Iraq and Afghanistan. We have a broad set of customers, and many of them are running extremely mission-critical workloads. Those people are obviously very comfortable with the security of open source software. Other folks are not as comfortable with it, and I think they just need more exposure to open source and what it can provide.

If you look at the track record of open source software, it has a really remarkable security record in terms of the number of defects per million lines of code. One study of the Linux operating system has actually shown that, even as the code size increases, the number of defects per million lines of code has gone down. So, I think you just have to spend a little bit of time looking at the data, and, also, it’s a little bit of common sense. If you have more eyes on a particular piece of code, you’re more likely to find vulnerabilities. If you are using software in which the only people who can look at the code are working for the company that sold you that software, it very much limits the amount of auditing and scrutiny that code is going to receive.

[We talked earlier] about what a CIO should be looking at when they go to a cloud environment, and security is definitely one of those concerns. First, you want to be able to trust the platform that you’re running on. Second, since you have many machines and many workloads cohabitating on the same piece of hardware, you want to ensure that one guest can’t attack another guest. You want to make sure that one guest can’t break out and start attacking your hypervisor. Even beyond that, you have procedural and policy questions — if I have the ability to move my workload from one provider to another, or if I have the ability to quickly broker out my workloads so that I can say, ‘I have a workload. It is at a particular security clearance level or has a particular set of security requirements. Go find me a cloud provider that can satisfy those needs.’
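
To make that brokering idea concrete, here is an illustrative Python sketch. Everything in it is hypothetical: the provider names, clearance levels, and fields are invented for illustration, and it only shows the shape of a “find me a provider that satisfies these requirements” request, not any real brokering API.

```python
# Illustrative only: a toy broker that matches a workload's security
# requirements to a provider. All names, levels, and fields are
# hypothetical; this shows the shape of the request, not a real API.
from dataclasses import dataclass

LEVELS = ["public", "sensitive", "secret", "top-secret"]

@dataclass
class Provider:
    name: str
    clearance: str  # highest level the provider is accredited for
    regions: set

def can_host(p: Provider, required_level: str, region: str) -> bool:
    """True if the provider meets or exceeds the workload's requirements."""
    return (LEVELS.index(p.clearance) >= LEVELS.index(required_level)
            and region in p.regions)

providers = [
    Provider("cloud-a", "sensitive", {"us-east"}),
    Provider("cloud-b", "secret", {"us-east", "us-west"}),
]

# "I have a workload at a particular clearance level; go find me a
# cloud provider that can satisfy those needs."
match = next((p for p in providers if can_host(p, "secret", "us-east")), None)
print(match.name if match else "no provider satisfies those requirements")
```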

We need standards and we need interoperability to ensure that you can safely and efficiently make those kinds of requests and have them fulfilled in a trusted way. I think what’s really interesting is how influential the open source community has been in these kinds of conversations. Once we added virtualization technology to Red Hat Enterprise Linux, we found — nearly by accident — that a lot of these security questions and concerns had already come up back in our operating system days. Once we added the hypervisor, we found that we could actually use technology that has been around for five, 10, 20 years to secure systems in this new, virtualized environment.

It’s the same tools, it’s the same technology, and it’s all been very well vetted. SELinux is probably the best example. This was a project that we had with the National Security Agency to provide a set of very strict mandatory access controls — a system designed to keep top secret information away from secret information. We’ve actually been able to use that technology to separate guests from each other, so that no matter how poorly behaved a particular workload is, it can’t attack its neighboring guests or even the hypervisor that’s hosting it. This is something that would be a really extraordinary effort if we were writing a hypervisor from scratch, but, because the open source community is very, very good at reusing code that it has developed, and because of the modular architecture of the Linux operating system, we’ve actually been able to take advantage of . . . the fact that we’ve already solved a number of these problems.
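
As a rough illustration of how that separation shows up in practice, the sketch below prints the per-guest sVirt (SELinux) labels on a KVM host. It assumes the libvirt Python bindings and a local qemu:///system hypervisor; exact method names and label formats may vary by libvirt version.

```python
# A minimal sketch, assuming the libvirt Python bindings and a local
# qemu:///system KVM host: print the sVirt (SELinux) label of each
# guest. Each guest gets a distinct MCS category pair (the ":c392,c662"
# part of the label), which is what stops one guest's process from
# touching another guest's disk images or memory.
import libvirt

conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    # securityLabel() wraps virDomainGetSecurityLabel and returns a
    # [label, enforcing] pair, e.g.
    # ["system_u:system_r:svirt_t:s0:c392,c662", 1]
    label, enforcing = dom.securityLabel()
    print(f"{dom.name():<20} {label}  enforcing={bool(enforcing)}")
conn.close()
```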

One Standard for All?

What I do know is, if you have a room full of people developing standards, those people are going to be incredibly smart and they are going to come up with what could very well be an effective standard. Coming from the open source community, our interest is in standards that are workable, standards that are practical and, frankly, standards that have working implementations.

There’s an old saw about the IETF, which sets the standards the Internet runs on, that it develops standards by ‘rough consensus and running code’ — which is precisely how the open source community embraces standards. Standards are often de facto standards, just by virtue of the fact that — ‘Well, we solved this problem once and we solved it in this particular way, so, from now on, we will continue solving it this way.’ We’ll go back and, in retrospect, turn that into a standard.

But this idea of standards that emerge from actual, functional software is, I think, very, very important. I agree that it would be immensely useful to have, say, a global standard for cloud computing: interoperability between cloud providers, and easy migration of data from one provider to another. I think there’s certainly a need for that. The more these standards spread, the broader the market will be for cloud computing. A more competitive market means cheaper products for the folks consuming those services.

So, a global standard certainly does make a lot of sense. I would be wary of a global standard, though, that was developed in a closed process. I think these standards need to be open. I think they need to have broad participation and, most importantly, I think each of these standards needs to have an open source implementation, if for no other reason than to prove that these standards actually work.


Always have an exit strategy when looking at cloud

April 19, 2010

Last week, Fed Cloud Blog promised to bring you more of our chat with Gunnar Hellekson, chief technology strategist for Red Hat’s U.S. Public Sector Group.

Today he starts out by explaining how Red Hat has been supporting the federal government, and offers some tips on what agency CIOs should consider when looking at the cloud.

GH: Red Hat has been supporting the federal government in cloud computing in a number of ways.

First, on a basic technology level, much of the innovation that’s going on in cloud computing and virtualization has been happening in the open source world, specifically, in the open source Linux project.

Red Hat is best known as a vendor of Linux services and support, and our engineers have been working for many years on virtualization technology, doing what we’ve always done with the open source community: creating an enabling layer that sits between your hardware and your applications and actually gives your applications access to some of the really interesting innovations that are going on down in the hardware. So, in the role of a hypervisor — the software that hosts virtual guests in a cloud computing environment — we’ve been working in that space for some time.

Second, I think it’s really interesting that if you look at some of the clouds that are being stood up now, like the RACE project over at DISA and elsewhere, you’ll see that these cloud computing environments are really offering two platforms to their end users. One is Windows and one is Linux, most often specifically Red Hat Enterprise Linux.

So that’s kind of heartening to see: in conjunction with this move toward cloud computing, we’re seeing a consolidation onto those two operating system platforms. That really, though, is just about technology and delivering low-cost, high-quality, very secure operating systems, which is something we’ve been doing for the federal government for 10 years now.

I think what’s more interesting is the way in which we’re able to provide some guidance and some best practices to federal agencies, specifically through our cloud provider certification program. This program gives providers a set of best practices so that they know they will be protected in what is a very quickly moving market. Red Hat is providing them a template for success.

FCB: Is there something specific, in terms of that template, of what a CIO at agency ‘X’ should be thinking about if he or she is looking at cloud computing?

GH: There are a number of concerns, obviously.

Cloud computing is extremely disruptive. So, the CIO has a whole lot to think about.

In most cases, you won’t be providing your own cloud services. In most cases, a CIO will be a consumer of cloud services. So, as a consumer, you’re interested in ensuring that you have a supportable, standard build of your operating system, so that you have a stable and predictable platform on which you can put your applications. You also want the ability to port those virtual guests — you want those to be portable.

Cloud computing isn’t just about providing cheap computing cycles; it’s also about the ability to compete the hosting of your applications with much less friction than you have today. Today, if you outsource your data center or hire another organization to host your computing workload, you have to worry about taking a backup and then restoring it to a new provider, and it’s an extremely costly process.

The premise of cloud computing is to provide enough interoperability and have enough standards so that you should be able to easily move your workload from one provider to another, which creates a . . . much more competitive market than you would have before.

As you’re evaluating cloud providers, you want to be thinking about — what is their interoperability? How well would they work with another cloud provider? How easily can I move my workload from one provider to another?

For the last 10 years, Red Hat has really made its name taking folks from proprietary operating systems that were often tied to hardware . . . [and] getting them off of these proprietary hardware systems and proprietary operating systems and moving them onto commodity hardware. . . . One of the big reasons why people wanted to make that move is so that they could compete their hardware. If you had IBM, you wanted to be able to collect bids from Dell and HP, as well. That competitive market drives down the cost of your hardware.

In cloud computing now, we see all of that progress of the last 10 years starting to get undone as people move onto these cloud environments. There’s a danger that you’re going to get locked into a particular hosting provider or a particular virtualization technology. So, as you’re evaluating these hosting providers, you want to be paying a lot of attention to interoperability. You want to make sure that there’s a safe exit strategy.


Red Hat works on Deltacloud for improved solutions

April 15, 2010

A lot of CIOs are looking at cloud computing right now, but what’s best for them?

Red Hat is one company working to answer this question. They’re currently working on an open source cloud project called Deltacloud.

Gunnar Hellekson, chief technology strategist for Red Hat’s U.S. Public Sector Group, explains why they’re doing what they’re doing.

“In talking about standards that emerge from working code, one of the projects that Red Hat is working on, which I think is really interesting and will probably be of great interest to government customers, is Deltacloud.

“This is a means of providing a single, common interface to multiple cloud providers. You can have a single pane of glass that will be able to work with VMware cloud products, that will work with Red Hat cloud products, that will work with Amazon, and so on. This . . . is kind of a first step towards providing the kind of interoperability that customers are asking for — and also that these standards bodies are seeking.
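
To give a feel for that single pane of glass, here is a hedged Python sketch against a Deltacloud-style REST endpoint. The server URL, port, credentials, and response fields are placeholders based on how Deltacloud commonly ran; the point is that the same call works regardless of which provider the Deltacloud server is fronting.

```python
# Sketch of the "single pane of glass": the same REST call lists
# instances whether the Deltacloud server is fronting Amazon, VMware,
# or a Red Hat cloud. URL, port, and credentials are placeholders.
import requests

DELTACLOUD = "http://localhost:3001/api"           # a local deltacloudd
AUTH = ("provider-username", "provider-password")  # back-end credentials

resp = requests.get(f"{DELTACLOUD}/instances",
                    auth=AUTH, headers={"Accept": "application/json"})
resp.raise_for_status()

# Field names follow common Deltacloud responses but may differ by version.
for inst in resp.json().get("instances", []):
    print(inst.get("id"), inst.get("state"))
```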

“I think it’s significant that the Deltacloud project, which is already very, very successful, is actually run as an open source project. So, rather than having a single company try to guess at or work with a bunch of these cloud providers, as new cloud providers come online they can simply plug their own code into this open source project and have it automatically work with everybody else.

“So, Deltacloud is kind of an exciting catalyst for the cooperation that we’re going to need in order to make cloud computing work.”

Hear more from Hellekson next week — and learn about how you can use open source for your cloud project.