Internet of Things networks: Q&A with Douglas Gourlay, Vice President, Systems Engineering, Arista Networks

March 01, 2014

Douglas Gourlay of Arista Networks dissects the challenges legacy infrastructure poses for next-generation networks, and sheds light on how programmable technologies are reshaping the data center.

Q: How does network infrastructure factor into the Internet of Things (IoT)?

We’re seeing a massive explosion of wearable tech and interactive monitoring devices – I think my car has six or seven IP addresses and probably 50 computers in the thing. What’s valuable to the consumer isn’t that they have a heart monitor or a pedometer built into their wristband; what’s valuable is that the data goes somewhere, gets stored, gets processed, gets analyzed, and the results of that analysis get displayed to the consumer in a meaningful manner. What’s happening is that the IoT is creating massive device proliferation, a significant number of necessary feedback loops, the storage of a tremendous amount of data from an unprecedented number of input sources, and then the requirement to analyze that data and present it in a fast, near-real-time, and meaningful manner for the consumer. That creates a tremendous dependency on connectivity, on wireless and wired networks, and on data centers. Generally, most of the companies doing anything around wearable tech, driverless cars, or even compute-intensive vehicles are relying on cloud computing infrastructure for that back-end analytics.
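
As a deliberately simplified illustration of that collect-store-analyze-present loop, the sketch below stands in for the cloud back end with in-memory Python structures; the device names and metrics are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# In-memory stand-ins for the store/process/present loop; a real
# deployment would use a message queue, a time-series store, and an
# analytics tier running in the cloud.
readings = defaultdict(list)

def ingest(device_id, metric, value):
    """Store one reading from one of many small, chatty devices."""
    readings[(device_id, metric)].append(value)

def summarize(device_id, metric):
    """Analyze stored readings into something meaningful for the consumer."""
    values = readings[(device_id, metric)]
    return {"count": len(values), "avg": mean(values), "latest": values[-1]}

# A hypothetical wrist monitor uploading a handful of samples:
for bpm in (62, 64, 71, 68):
    ingest("wrist-0001", "heart_rate_bpm", bpm)
print(summarize("wrist-0001", "heart_rate_bpm"))
```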

Q: In terms of provisioning and reliability, what are the biggest challenges the IoT creates for a company like Arista?

The first one is simply the number of devices, the number of addresses, the number of traffic flows that have to be handled. It’s significant. It’s not that any one of these devices produces a tremendous amount of data. Some have the potential to – there’s no doubt that onboard analytics in a vehicle could produce a tremendous amount of data, depending on how much is captured and what the update interval is. But a pedometer, or the Fitbit Aria scale I have at home? On a good day that thing does two uploads. It’s not a massive amount of data; it’s a massive number of devices and a massive number of addresses. We believe we’re likely to see a move toward more and more IPv6 addressing, which may cause some challenges for legacy equipment and may drive part of a refresh cycle. In the data center itself, what we’re seeing mainly is a significant expansion in the number of compute nodes and the speeds at which they’re interconnecting.
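
The scale of that addressing shift is easy to quantify. The snippet below, using Python’s standard ipaddress module, compares the entire IPv4 address space with a single, routinely assigned IPv6 /64 subnet (the 2001:db8::/64 documentation prefix stands in for a real allocation).

```python
import ipaddress

# The point is address count, not bandwidth: compare the entire IPv4
# address space with one routinely assigned IPv6 /64 subnet.
ipv4_total = 2 ** 32
one_v6_subnet = ipaddress.ip_network("2001:db8::/64")

print(f"All of IPv4:  {ipv4_total:,} addresses")
print(f"One IPv6 /64: {one_v6_subnet.num_addresses:,} addresses")
print(f"Ratio:        {one_v6_subnet.num_addresses // ipv4_total:,}x")
```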

We’re in production in 8 of the 10 largest cloud providers and 8 of the 10 largest financial services companies, and from that we see a tremendous amount of transactions – whether they’re credit card processing or equity trading on the finance side, or mobile devices updating centralized resource pools of servers and storage with their latest statistics. The servers are getting much faster, and as they get faster they have a greater ability to consume data, process data, and deliver it back. One of the key inhibitors of that can be older networks, to be really blunt about it. Gigabit Ethernet (GbE) was invented and standardized in 1996-1997, so we’re talking about a 17- to 18-year-old technology, which is still the predominant interconnect technology in most legacy data centers. Now these top 10 cloud providers we’re working with are all rapidly moving to 10 GbE, and in some cases 40 GbE, to balance the transaction processing speed of the server with the transaction delivery speed of the network. The inhibitor they all found was that networks were designed around an almost archaic Command-Line Interface (CLI) that is reminiscent of a 1960s or 1970s green screen. That literal 80-character by 24-character command line is still used by the majority of network engineers today. So there’s been a shift to programmatic APIs. We like to enable our customers to program the network, not CLI-configure a network.
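
Arista’s eAPI is one example of such a programmatic interface: the same commands a CLI would accept are carried as JSON-RPC over HTTPS and return structured data. Below is a minimal sketch using only the Python standard library; the hostname is a placeholder, eAPI must be enabled on the switch, and authentication is omitted for brevity.

```python
import json
import urllib.request

# One JSON-RPC call replaces a screen-scraped CLI session. The request
# shape follows Arista's eAPI ("runCmds"); switch.example.com is a
# placeholder, and authentication is omitted for brevity.
body = {
    "jsonrpc": "2.0",
    "method": "runCmds",
    "params": {"version": 1, "cmds": ["show version"], "format": "json"},
    "id": "1",
}

req = urllib.request.Request(
    "https://switch.example.com/command-api",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as response:
    result = json.load(response)

# Structured data comes back, not an 80x24 screen of text.
print(result["result"][0]["version"])
```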

How do I create a set of interfaces, a set of programmatic abstractions, so you can tell the network what to do when things are being brought online, when things are being taken offline, when workloads move? How do we create this ability for our customers to program a network? That fundamental shift in the way people design and architect networks – and in their expectations of them – is what we’ve been spending a lot of our time and investment on. It’s one of those things that’s under the covers, and it’s like changing the frame of a vehicle while the car is in motion. It’s really challenging because we can’t cause downtime. We can’t cause outages. But what customers are doing in many cases is drastically rethinking the role of the network, their expectations of a network device, and how that network is going to interact with and be part of IT service delivery.
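
A sketch of what “program the network, not CLI configure a network” can look like in practice: an orchestration event, such as a workload moving, drives edge configuration automatically. The event handler and the apply_config helper below are hypothetical illustrations, not a specific Arista API.

```python
# Hypothetical event handler: an orchestrator emits "workload moved"
# and the network edge is re-provisioned automatically.

def apply_config(switch, commands):
    """Placeholder for pushing commands through a programmatic API."""
    print(f"{switch} <- {commands}")

def on_workload_moved(vm_name, vlan, new_switch, new_port):
    """React to an orchestration event instead of waiting for a human."""
    apply_config(new_switch, [
        f"interface {new_port}",
        f"switchport access vlan {vlan}",
        "no shutdown",
    ])

# A cloud management platform would emit this when a VM moves:
on_workload_moved("web-42", vlan=120, new_switch="leaf-2a", new_port="Ethernet17")
```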

Q: How is Arista working to manage the transition from legacy networks?

The first thing to overcome was the knowledge gap and learning curve. So we made our devices look and feel very similar to the CLI-bound devices customers are most familiar with. Most errors are caused by humans, so if we can eliminate a learning curve we reduce complexity for the people operating the network and get them on board. Then we started creating programmatic abstractions – APIs – on top of a base Linux system that is completely unmodified; it’s what a Linux professional would expect to interact with, which also means that any server-side application they would want to run on the network – for instance an automation or provisioning system – can run there. So now the network starts fitting into a framework they’re comfortable with. And the server teams were further ahead of the network teams on automation in almost every instance because of the number of devices: for every 40 servers there’s one network switch, so even as the network and the data center expanded, a network team could sort of CLI power through it and work a few extra hours, whereas if you’re deploying 40,000 servers, you don’t have a choice – you must automate. So that helped.
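
Because the underlying Linux is unmodified, ordinary server-side tooling can run on the switch itself. The short script below reads standard Linux interface counters from /proc/net/dev exactly as it would on a server; whether front-panel ports appear there depends on the platform, so treat it as a sketch of “it’s just Linux” rather than a supported monitoring interface.

```python
# Read standard Linux interface counters from /proc/net/dev, exactly as
# a server administrator's tooling would; this illustrates "it's just
# Linux," not a supported monitoring interface.

def interface_rx_bytes():
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:        # skip the two header lines
            name, stats = line.split(":", 1)
            counters[name.strip()] = int(stats.split()[0])  # received bytes
    return counters

for ifname, rx in sorted(interface_rx_bytes().items()):
    print(f"{ifname:12} {rx:,} bytes received")
```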

Then we made the decision not to build our own silicon, and instead to leverage the capabilities of the merchant silicon market. That reduced power and increased port density – which reduced the number of devices, further reducing the power draw – and what this really means to a large cloud provider is that they can get more servers and more storage into that data center. They can operate more profitably. They can offer new services. So we coupled that hardware capability – lower power and higher density – with an Operating System (OS) that, first off, worked the way the network customer expected it to, but also enabled the server team and the network team to leverage those more advanced Linux capabilities to drive automation, and to decrease provisioning time by applying automation frameworks to the network.
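
A back-of-the-envelope sketch of the density argument, with deliberately hypothetical port counts and wattages rather than actual product specifications: fewer, denser switches draw less power, leaving more of the facility’s budget for servers and storage.

```python
# Hypothetical figures, not product specifications: connect 1,920 servers
# with lower-density versus higher-density switches and compare power.
servers = 1920
legacy = {"ports": 48, "watts": 300}     # assumed low-density platform
merchant = {"ports": 64, "watts": 220}   # assumed dense merchant-silicon box

for name, sw in (("legacy", legacy), ("merchant", merchant)):
    devices = -(-servers // sw["ports"])  # ceiling division
    print(f"{name:8} {devices:3} switches, {devices * sw['watts']:,} W total")
```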

Q: How does Software-Defined Networking (SDN) fit into these new networks?

To us, SDN is really a meta-movement more than just a feature or technology. When you take away the “is it OpenFlow” or “is it OpenStack” or “is it programming a network,” what SDN actually says is a call for help. SDN is our customers telling us, the vendor community, “Please help us, we want to do this differently. We need more control in the network, we need the network to be programmable, we need the network to not cost $5,000 a port.” It is a meta-movement of customers wanting change in what they rightly perceive as having been a very stagnant industry – an industry that was resting on its laurels from the heyday of Internet growth in the 1990s and, frankly, judging by the stock of our nearest two or three competitors, hasn’t done a whole lot since then. The stock prices have been relatively flat, and the thought leadership relatively vapid. So when we look at this, our customers are clamoring for us all to do something different. Whether you call it SDN or some other term, it’s “make the network part of the IT ecosystem, play well with others, be open, don’t build proprietary infrastructure.”

Q: So what do you see the makeup of a network being 5-10 years out?

Ten years ago, the data center network was built from large chassis with 384 GbE ports that would sit at the end of a row and connect 10-20 cabinets of servers, and they would uplink to another collection of the same chassis. That was the architecture a decade ago. Today the architecture is a switch – sometimes two – one rack unit tall, that sits at the top of a cabinet and connects those servers together. It’s actually part of the deployment of that cabinet. So people are looking at the network, and sometimes the storage and the servers, as integrated compute units that go into these cabinets and get deployed – a top-of-rack model. A two-tiered network rather than the three or four tiers we used to have. So we see the network tiers collapsing, and we see the network getting closer to the compute.
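
The arithmetic behind the top-of-rack, two-tier model is straightforward. The sketch below uses port counts that are hypothetical but typical of the era – 48 x 10 GbE down to servers and 4 x 40 GbE up to the spine – to compute the resulting oversubscription ratio.

```python
# Hypothetical but era-typical leaf switch: 48 x 10 GbE server-facing
# ports, 4 x 40 GbE uplinks to the spine.
downlink_gbps = 48 * 10   # 480 Gb/s toward the servers
uplink_gbps = 4 * 40      # 160 Gb/s toward the spine

print(f"Oversubscription: {downlink_gbps / uplink_gbps:.0f}:1")
print("Tiers: server -> leaf -> spine (two network tiers, not three or four)")
```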

A decade from now, you’ll see networks where the CLI is used only for troubleshooting; where networks get automatically provisioned by a cloud management platform or another orchestration or automation framework; where 99 percent of configuration and change is driven through centralized, programmatic interfaces by third-party applications that run the entire infrastructure – not just the network, or just the servers, or just the Virtual Machines (VMs).
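
A minimal sketch of that end state: a centralized system holds the desired state and renders per-switch configuration, so nobody types it at a CLI. The intent data model below is invented purely for illustration.

```python
# An invented intent model: the orchestration layer owns the desired
# state and renders per-switch configuration from it.
intent = {
    "leaf-1a": {
        "Ethernet1": {"vlan": 120, "descr": "web-tier"},
        "Ethernet2": {"vlan": 130, "descr": "db-tier"},
    },
}

def render(switch):
    """Render a configuration fragment for one switch from the intent."""
    lines = []
    for port, cfg in sorted(intent[switch].items()):
        lines += [f"interface {port}",
                  f"   description {cfg['descr']}",
                  f"   switchport access vlan {cfg['vlan']}"]
    return "\n".join(lines)

print(render("leaf-1a"))
```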

Arista Networks

www.aristanetworks.com
@AristaNetworks
www.linkedin.com/company/arista-networks-inc
www.youtube.com/user/AristaNetworks
www.aristanetworks.com/en/blogs

Brandon Lewis (Assistant Managing Editor)