Building the clouds of the future

January 29, 2015 OpenSystems Media

The cloud has been a great addition to computing – bringing many benefits with its added computing power – but it still has a lot of room for improvement. Making discoveries about the cloud, and advancing it, is difficult for typical cloud users, especially when system details such as network topologies and storage system design are intentionally hidden from them. The National Science Foundation (NSF)-funded CloudLab aims to let researchers build their own clouds to make discoveries about cloud architecture and potential new applications.

“The goal of building the CloudLab infrastructure is to enable researchers to do transformative science on the architecture and applications of cloud computing – to look at the clouds that we have now, and to think about how we can change them at fundamental levels,” says Robert Ricci, Research Assistant Professor, School of Computing at the University of Utah, which is leading the CloudLab project. “To do that work, you need to not just work within the cloud, you need to be able to control and instrument it at very low levels, and you need to be able to do that at a reasonably large scale.”

Ricci cites a few current challenges among many that hold the cloud back: security and privacy, predictability and real-time performance, and power efficiency.

Cloud providers' security may already be better than what the average user could implement alone, but many people remain skittish about cloud privacy and security.

“If we can design cloud systems that build privacy in from the ground up, then there are a lot more classes of applications that might be suitable for moving to the cloud,” Ricci says. “If not even the cloud provider themselves has access to the data I’ve stored there, there’s nothing for digital thieves to steal. Encrypting data is a start, but we can probably go far beyond that, to the point where the cloud provider may not even be able to tell who is storing data or where they are using it from, yet we can still keep those mundane but important practices, like paying for service, in place.”

The ability to isolate cloud tenants' performance isn't perfect yet either, and the isolation and virtualization layers typically used add overhead and introduce hard-to-predict performance variability.

“The cloud has a less than stellar reputation for applications that require real-time performance guarantees – think things like cyber-physical systems, smart grids, and telephony – and for applications that require distributed, low-latency, tightly coupled computation – like many kinds of high-performance computing workloads,” Ricci says. “If we can get those latencies down, reduce scheduling and network noise in the system, and provide stronger performance isolation guarantees, that also brings a lot of new potential applications to the cloud.”

And as with all computing areas, power efficiency and resulting cooling issues present many challenges.

“There are plenty of power-saving features built into modern computing hardware, but the tradeoffs are not at all simple, and using them effectively at large scale across an entire data center is a real challenge,” Ricci says. “This is especially true in an environment where the operator of the facility (the cloud) doesn’t have direct visibility or control over the workloads of its tenants. Then, too, you have new power sources whose availability and true cost differs a lot throughout the day (like solar), according to the weather (like wind), or due to climate (like hydroelectric). Building a cloud that works within those constraints and performs global power optimization given a variety of different workloads is a very large challenge.”

These challenges are part of what CloudLab is set up to help researchers address by providing a tool to build clouds with maximum flexibility. Researchers have access to CloudLab's hardware and software stack configuration components to get their custom clouds up and running in about 10 minutes. Hardware includes typical x86-based servers alongside hardware that may shape future cloud development, such as ARM-based servers and OpenFlow switches. A fully programmable Layer 2 (L2) network between data centers is provided through the national research and education network Internet2. Popular software stack profiles are available, such as pre-built installations of OpenStack and Hadoop; researchers can use these pre-built stacks, build their own, or work on bare metal. Users have full control and visibility and don't have to share resources with other users.
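In practice, experiments on CloudLab and related GENI-derived testbeds are described by profile scripts. The sketch below is illustrative only – it assumes the geni-lib Python library used by such testbeds, and the node name is a made-up example, not something described in the article – but it conveys the flavor of requesting a bare-metal machine rather than a shared virtual one:

```python
# Illustrative profile sketch (assumption: geni-lib, the Python library
# used by GENI-derived testbeds such as CloudLab, is available).
import geni.portal as portal

# A portal context gathers parameters and emits the resource request.
pc = portal.Context()
request = pc.makeRequestRSpec()

# Request one bare-metal machine ("node1" is a hypothetical name) --
# the experimenter gets the whole box, with full control and visibility.
node = request.RawPC("node1")

# Emit the request RSpec for the portal to instantiate on real hardware.
pc.printRequestRSpec(request)
```

Pre-built profiles like the OpenStack and Hadoop installations mentioned above bundle this kind of resource request with installation scripts, which is how a custom cloud can come up in minutes rather than weeks.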

Three universities are hosting server clusters designed to handle different challenges of cloud computing. The University of Utah's cluster of server-class 64-bit ARM cores, built in partnership with HP on its Moonshot platform, emphasizes power-efficient computing (Figure 1). Next to be built are the University of Wisconsin-Madison and Clemson University clusters, emphasizing high bisection bandwidth/storage and high memory, respectively. Current plans call for the system to grow to around 15,000 cores. Additions will include rolling out bare metal access to network resources and providing specialized hardware such as FPGAs and specialized switching equipment.


Figure 1: The University of Utah’s Downtown Data Center. Photo by Chris Coleman, School of Computing, University of Utah.

The University of Utah has been building infrastructure for research and education for 15 years, Ricci says. It built Emulab, a resource for constructing custom networks in much the way CloudLab constructs custom clouds, and PhantomNet, an end-to-end programmable mobile network testbed using LTE and EPC technologies. The university is also a member of the NSF's GENI project, which built a nationwide research network. This experience positioned it to lead the CloudLab project.

“We got into the business of building research infrastructure because we do research in these areas ourselves,” Ricci says. “We started out by building what we needed to get our own work done, and we’re representative enough of the research community as a whole that, by running these facilities as open to the world, we’re able to help a lot of other folks get their work done too.”

The project is poised to give current and future engineers a head start in revolutionizing cloud technology, as it is free for use by research and education communities. Ricci says it'll help level the playing field across institutions of all types and sizes. The tool will be available for teaching classes, and in that role will give students a level of hands-on experience that's hard to come by.

Ricci hopes CloudLab can be as transformative as the cloud itself.

“The cloud has been transformative because it has taken infrastructure that used to be time consuming and expensive to produce and install, and made it easy for anyone to get it with almost zero effort and time,” Ricci says. “CloudLab aims to do the same for the cloud itself; that is, in CloudLab it’s as easy to build your own cloud as it is to set up a virtual machine in a traditional cloud. We hope that will similarly inspire people to come up with bold ideas about what the future of the cloud itself can be.”

For more information on CloudLab, visit the project's website.

Monique DeVoe, Managing Editor