We continue our conversation today with Gunnar Hellekson, chief technology strategist for Red Hat’s U.S. Public Sector Group.
When it comes to security, he explains why the old adage, “the more things change, the more they stay the same” often applies.
Security Concerns
“Many people still have concerns about the security of open source software, and I don’t think it’s specific to government. I think that if you look at the kind of customers that Red Hat has, for instance, 50 percent of the equity trades in the world are executed on Red Hat Enterprise Linux. [It is] in every tactical vehicle in Iraq and Afghanistan. We have a broad set of customers and many of them are running extremely mission critical workloads. Those people are obviously very comfortable with the security of open source software. Other folks are not as comfortable with it, and I think that reflects a need for more exposure to open source and what it can provide.
If you look at the track record of open source software, it has a really remarkable record in terms of security, in terms of the number of defects per million lines of code. One study of the Linux operating system has shown that, even as the code base grows, the number of defects per million lines of code has actually gone down. So, I think if you spend a little bit of time looking at the data — and, also, it’s a little bit of common sense. If you have more eyes on a particular piece of code, you’re more likely to find vulnerabilities. If you are using software in which the only people who can look at the code are working for the company that sold you that software, it very much limits the amount of auditing and scrutiny that code is going to receive.
[We talked earlier] about what a CIO should be looking at when they go to a cloud environment, and security is definitely one of those concerns. First, you want to be able to trust the platform that you’re running on. Second, since you have many machines and many workloads cohabitating on the same piece of hardware, you want to be able to ensure that one guest can’t attack another guest, and that one guest can’t break out and start attacking your hypervisor. Even beyond that, you have procedural and policy questions: do I have the ability to move my workload from one provider to another, or to quickly broker out my workloads so that I can say, ‘I have a workload. It is at a particular security clearance level or has a particular set of security requirements; go find me a cloud provider that can satisfy those needs’?
We need standards and we need interoperability to ensure that you can safely and efficiently make those kinds of requests and have them fulfilled in a trusted way. I think what’s really interesting is how influential the open source community has been in these kinds of conversations. Once we added virtualization technology to Red Hat Enterprise Linux, we found, almost by accident, that we had already faced many of these security questions and concerns back in our operating system days. Once we added the hypervisor, we found that we could actually use technology which has been around for five, 10, 20 years to secure systems in this new, virtualized environment.
It’s the same tools, it’s the same technology. It’s all been very well vetted. SELinux is probably the best example. This was a project that we had with the National Security Agency, to provide a set of very strict mandatory access controls. This is a system designed to keep top secret information away from secret information. We’ve actually been able to use that technology to separate guests from each other, so that no matter how poorly behaved a particular workload is, it can’t attack its neighboring guests or even attack the hypervisor that’s hosting it. This is something that would require an extraordinary effort if we were writing a hypervisor from scratch, but, because the open source community is very, very good at reusing code that it has developed, and because of the modular architecture of the Linux operating system, we’ve actually been able to take advantage of . . . the fact that we’ve already solved a number of these problems.
One Standard for All?
What I do know is, if you have a room full of people developing standards, those people are going to be incredibly smart and they are going to come up with what could very well be an effective standard. Coming from the open source community, our interest is in standards that are workable, standards that are practical and, frankly, standards that have working implementations.
The IETF, whose standards the Internet runs on — there’s an old saw about that organization that it develops standards by ‘rough consensus and running code,’ which is precisely how the open source community embraces standards. Standards are often de facto standards, just by virtue of the fact that — ‘Well, we solved this problem once and we solved it in this particular way, so, from now on, we will continue solving it this way.’ We’ll go back and, in retrospect, turn that into a standard.
But, this idea of standards that emerge from actual, functional software I think is very, very important. I agree that it would be immensely useful to have, say, a global standard for cloud computing, interoperability between cloud providers, easy migration of data from one provider to another. I think there’s certainly a need for that. The more these standards proliferate, the broader the market will be for cloud computing. A more competitive market means cheaper products for folks that are consuming those services.
So, a global standard certainly does make a lot of sense. I would be wary of a global standard, though, that was developed in a closed process. I think these standards need to be open. I think they need to have broad participation and, most importantly, I think each of these standards needs to have an open source implementation, if for no other reason than to prove that these standards actually work.