What it takes to build a data center

June 22, 2010

Ever wonder what it takes to build one of those massive data centers you hear so much about?

Well, we can’t tell you everything, but today Fed Cloud Blog gives you a bit of an inside look.

Bruce Hart is chief operating officer of Terremark Federal.

The company recently opened its third 50,000-square-foot data center — the Network Access Point (NAP) of the Capital Region in Culpeper, Va.

The facility is still in the process of being built, and he gives us an idea of just how massive it’s going to be when completed.

BH: We have a flagship product that we call the Network Access Point of the Americas — or NAP — that was built in Miami several years ago. We are in the process of building, on a lateral scale, a similar facility in Culpeper, Virginia . . . and we’re putting it up one pod at a time. A pod represents a raised floor data center . . . between 50,000 and 60,000 square feet.

Pod A was built in 10 months, filled in 6 months, and is now pretty much done. There’s not much room for more in there. Pod B opened for storage a few weeks ago, but is already sold out. . . . Pod C — we have broken ground and are putting up the foundations. . . . Pods D and E remain to be built. That will pretty much fill up the current campus, along with a 72,000-square-foot office building and campus headquarters.

FCB: How does this fit in with Terremark’s overall plan to help the federal government when it comes to cloud computing, data centers, and privacy and security?

BH: One of the reasons that our CEO first decided to build this facility in Culpeper . . . was to make it federal-facing — to give it enough proximity to federal decision makers so they could get there if they needed to, [but] be [far enough away] that it is outside of the blast zone.

It is very, very secure. It’s a level 3-plus facility from a physical, multi-layer security perspective. . . . It’s a place where you’d have to score a direct hit with a really large bomb. We have been able to fill it up not only with federal customers that require that kind of physical space for their missions, which are absolutely critical — we began with DoD and Intelligence Community kinds of missions — but now we are also moving into the civil sector of government.

We also have a number of Fortune 1000 companies that have availed themselves of that space for the same reasons that their federal brethren do. We don’t just sell co-location there. It’s also the foundation of a whole host of services, which we integrate as needed by federal customers to provide what amounts to one-stop shopping solutions for [IT] problems.

FCB: We don’t know if you can give us absolute numbers, but have you found as you’re doing this that the federal government is ahead of the curve, or are they lagging behind the private sector?

BH: I guess it would depend upon the particular product line. I think the government is moving to external and privately managed co-location, privately owned data centers like ours, because they can’t really find ways within the government’s funding process to recreate these on their own.

It’s very difficult to interest Congressional decision makers and Congressional committees in the basic problems of infrastructure, and one of those pods costs $60 million or more. It’s a capital investment that can cause a gulp if you’re not in that line of work. So, for them to be able to avail themselves of it on a monthly, recurring-rate basis — or [look at it] as an operational expenditure rather than a capital expenditure — sometimes makes their lives a little easier. They get to share the costs with other federal organizations . . . and that’s a business model that makes sense both commercially and in the federal space.
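To make the capital-versus-operational distinction concrete, here is a back-of-the-envelope sketch in Python. Only the $60 million pod figure comes from the interview; the amortization period, tenant count, and overhead multiplier are purely illustrative assumptions.

```python
# Illustrative sketch only: comparing the capital cost of one pod (the $60M figure
# cited in the interview) with a shared, monthly recurring model. Every other
# number here is an assumption chosen just to show the shape of the arithmetic.

POD_CAPEX = 60_000_000        # capital cost of one pod, from the interview
AMORTIZATION_YEARS = 10       # assumed useful life of the build-out
TENANTS = 20                  # assumed number of agencies/companies sharing the pod
OVERHEAD_MULTIPLIER = 1.5     # assumed operating overhead (power, staff, security)

monthly_total = POD_CAPEX * OVERHEAD_MULTIPLIER / (AMORTIZATION_YEARS * 12)
monthly_per_tenant = monthly_total / TENANTS

print(f"Shared monthly cost per tenant: ${monthly_per_tenant:,.0f}")
# Each tenant sees a recurring operational expense instead of a one-time,
# eight-figure capital request, which is the point Hart makes above.
```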

In the area of managed services, and, specifically, the area of enterprise cloud services, the federal government is definitely in the lead. From the time Vivek Kundra and President Obama took office, the federal government has created a position of leadership and vision. I’m very proud of them — they don’t often do this in the sphere of information technology, and it’s not just about things [like] economy. They also, at the very highest levels, understand the power of the cloud and its differentiators in a variety of ways.

So, yeah, I’d say they are in a leadership position at present.

Also, today on Federal News Radio, The Federal Drive gets details about a new survey regarding data center consolidation.


Former CIA IT guru: Everyone starting to learn about cloud

December 9, 2009

Listen to more of our chat with Bruce Hart.


Today we continue our conversation with Bruce Hart, COO of Terremark’s Government Group.

We left off discussing security, so we’ll start back up there:

On security and 100% protection

I don’t think anybody would be foolhardy enough to offer a 100% service level agreement (SLA) on a virtual machine sold as a service that is absolutely foolproof. What you do is basically multi-level security.

One of the things we do with Terremark’s enterprise cloud offering in the federal space, is we actually host it in a data center whose security level is equal to that at Langley, Virginia, or any good military base. We have armed guards, we have 200-foot setbacks, we have fences and all of the same features that any federal institution would require. In fact, we host classified organizations inside our data center.

Then, inside that, you have logical security. You run physical data centers that are essentially lights out — there’s no human access to the actual hardware. All of that’s highly controlled.

Beyond that, you do all of the things that are software-based, or otherwise hardware-based, that are about information security. You do malware analysis, digital forensics, vulnerability assessment, penetration testing, managed firewalls . . . the list goes on and on.

Public cloud v. Private cloud

There is such a thing as a public cloud.

[Terremark] has an offering called Virtual Cloud Express — or vCloud Express. It’s essentially a commodity cloud, much like Google’s or Amazon’s. You pay as you go, you draw on shared resources, you don’t have much knowledge or concern about where those resources reside physically. The utility platform is enterprise-class. You sign up with a credit card, there’s no minimum, there’s no contract — you buy it by the minute or by the hour and use it as you will.
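As a rough illustration of that pay-as-you-go model, here is a minimal Python sketch of per-minute metering. The VirtualMachine class and the rates are hypothetical stand-ins, not Terremark’s actual vCloud Express API or price list.

```python
from dataclasses import dataclass

# Hypothetical pay-as-you-go metering: bill only for the minutes a virtual
# machine actually runs, with no minimum and no contract.

@dataclass
class VirtualMachine:
    vcpus: int
    ram_gb: int
    minutes_used: int

RATE_PER_VCPU_MINUTE = 0.0005    # assumed rates, for illustration only
RATE_PER_GB_RAM_MINUTE = 0.0002

def metered_charge(vm: VirtualMachine) -> float:
    """Compute the charge for the minutes the VM ran, by resource size."""
    per_minute = vm.vcpus * RATE_PER_VCPU_MINUTE + vm.ram_gb * RATE_PER_GB_RAM_MINUTE
    return vm.minutes_used * per_minute

# A VM run for three hours and then deleted incurs only three hours of charges.
print(f"${metered_charge(VirtualMachine(vcpus=2, ram_gb=4, minutes_used=180)):.2f}")
```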

In my opinion, most federal agencies are not going to find a lot of utility in that kind of cloud computing. It’s just too risk-laden. It’s too amorphous. They’re not going to put their core missions on that kind of a platform — but there’s a different kind of cloud.

In Terremark’s case, it’s called the enterprise cloud, which is essentially a virtual, private cloud with a dedicated resource pool and the ability to burst above the amount of resource that you have bought. It offers physical device and private network integration. It supports multiple operating systems and I can take you into our data center and point to where it actually lives. So, we serve this up out of a physical fortress.
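A small Python sketch of the “dedicated pool plus burst” idea follows. The class, the gigahertz accounting, and the numbers are illustrative assumptions, not Terremark’s enterprise cloud implementation.

```python
# Hypothetical resource pool: requests are served from capacity the customer has
# bought outright, and can "burst" into extra headroom when demand spikes.

class ResourcePool:
    def __init__(self, reserved_ghz: float, burst_limit_ghz: float):
        self.reserved_ghz = reserved_ghz          # capacity bought outright
        self.burst_limit_ghz = burst_limit_ghz    # extra headroom available on demand
        self.in_use_ghz = 0.0

    def allocate(self, demand_ghz: float) -> str:
        total = self.in_use_ghz + demand_ghz
        if total <= self.reserved_ghz:
            self.in_use_ghz = total
            return "served from the dedicated pool"
        if total <= self.reserved_ghz + self.burst_limit_ghz:
            self.in_use_ghz = total
            return "served by bursting above the reserved capacity"
        return "rejected: exceeds reserved capacity plus burst headroom"

pool = ResourcePool(reserved_ghz=10.0, burst_limit_ghz=5.0)
print(pool.allocate(8.0))   # fits within the dedicated pool
print(pool.allocate(4.0))   # pushes usage into burst territory
```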

In fact, we have now moved beyond the dot gov phenomenon . . . at Terremark Federal Group and have recently been selling cloud as a foundation for actual production services inside large federal agencies. So, it’s beginning to happen.

Why everyone — not just IT managers — should learn about the cloud

I think mission managers and executive level decision makers all over the federal government are learning about it as we speak.

I’ve never seen so many symposiums dedicated to a single topic.

Vivek Kundra, the federal CIO, has been a change agent and an advocate for cloud computing from the federal perspective, and is acting in very — I think — effective ways to begin to push the message into federal decision making.

Again, it has a long way to go, but there are many opportunities for federal decision makers to learn about the cloud, to assess and weigh the risks versus the benefits, and — at the end of the day — they can come to a company like ours and just get it for free for 90 days, load it with whatever application they want and kick the tires.

There’s a lot of learning to be done, but it’s well underway.


Former CIA IT guru discusses the federal cloud

December 7, 2009

Listen to the first part of our chat with Bruce Hart.


FCB this week talks with Bruce Hart, COO of Terremark’s Government Group.

Hart is no stranger to the federal government, either. He served as Deputy CIO and then Deputy Director for Science and Technology at the CIA before moving into the private sector.

Today we learn a bit about what he and his company are doing in the cloud for the government, as well as what your IT manager should know before making the move.

Helping the federal government
For GSA, we are hosting, on our enterprise cloud out of our network access point (NAP) . . . USA.gov and also Data.gov. Both are citizen-facing, federal public Web sites that provide citizens all over the country with more efficient access to federal information.

For the Library of Congress, what we do is a little different. We host what’s called myLOC.gov. It’s also a public-facing Web site, but we provide a higher level of services in a more traditional hosting sense. We provide for them something we call High V managed hosting. We do virtually everything end-to-end in a dedicated environment just for them.

Defining the cloud
One of the things about cloud is that its ubiquity has created a circumstance where not everybody knows exactly what’s being talked about.

In Terremark’s terms, what we mean when we say cloud — our enterprise cloud is a service offering that basically is about compute power. It’s called infrastructure-as-a-service in federal terms.

There are also cloud definitions higher in the stack — platform-as-a-service, software-as-a-service. What we sell is computing power where a client buys the resource, rather than a server — a physical box — from us using virtualization technology across some transport layer — some kind of a network, often the Web.

They buy just the amount they need . . . And they can configure it within a matter of three to five minutes, create a virtual machine — a server that acts like a physical server; load their operating system on that; load their applications on the operating system; then they’re up and running.
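As a rough sketch of that self-service workflow, the Python below walks through the same steps: size the resources, create a virtual machine, load an operating system, and deploy an application. The IaaSClient class and its methods are hypothetical, not any real provider’s SDK.

```python
# Hypothetical infrastructure-as-a-service workflow: size it, create the VM,
# load the OS, load the application, and you're up and running.

class IaaSClient:
    def create_vm(self, vcpus: int, ram_gb: int, disk_gb: int) -> str:
        """Carve a virtual machine out of the shared resource pool."""
        vm_id = f"vm-{vcpus}x{ram_gb}gb"
        print(f"Created {vm_id} ({vcpus} vCPU, {ram_gb} GB RAM, {disk_gb} GB disk)")
        return vm_id

    def install_os(self, vm_id: str, image: str) -> None:
        print(f"Installed {image} on {vm_id}")

    def deploy_app(self, vm_id: str, package: str) -> None:
        print(f"Deployed {package} to {vm_id}")

client = IaaSClient()
vm = client.create_vm(vcpus=2, ram_gb=8, disk_gb=100)   # buy only what is needed
client.install_os(vm, image="linux-server")             # load the operating system
client.deploy_app(vm, package="agency-web-portal")      # load the application
```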

What you should know and do before making the move
Federal IT decision makers tend to be — and I don’t mean for it to sound pejorative — server huggers. (I used to be one myself.) They like to have direct access to the hardware and the software upon which their mission is conveyed. It’s very important to them.

So, the first thing you have to do is give up that sense of immediate physical control and literally take advantage of the aspects of the cloud that are so powerful.

You also have to recognize that you’re working on virtual machines. While they actually feel like physical machines, they have their own inherent weaknesses which have to do with the fact that there are multiple [virtual systems] residing on a single physical box somewhere outside of your immediate control.

The controller for those virtual machines is essentially a piece of software in its own right — the hypervisor.

So, from a purely security point of view, if an attacker can get access to that physical machine and can control the hypervisor, there’s some prospect that he can control or work across all the virtual machines that exist on that physical box.

On Wednesday, we’ll learn more about how 21st century security could help mitigate such an attack and we’ll get perspective on why everyone in your office should learn about the cloud.