Xilinx's Liam Madden on Next-Generation Wireless Infrastructures

December 05, 2019

Our conversation ranged from the balance between the Cloud and Hardware to next-generation embedded design


Among the many aspects of the wireless infrastructure we are creating to support the Cloud-based Internet of Things (IoT) is the balance that must be struck between centralized and decentralized computing. With Edge Computing becoming more and more popular, how much power should we put at the Edge? How much can we rely on advanced wireless technologies like 5G to reduce the hardware demand?

We were able to catch up with Liam Madden, Xilinx Executive Vice President and head of the company’s Wired and Wireless Group at the recent Xilinx Developer Forum in The Hague, Netherlands. Our conversation ranged from the balance between the Cloud and Hardware to next-generation embedded design.

ECD: What’s your take on the balancing act between the wireless infrastructure and putting more computing power at the Edge?
Madden: When you think about the overlaps between wireless infrastructure and wired, and where they meet, I think there are a couple of things. One is, as I say, most current wireless deployments tend to be proprietary in terms of the standards until you get into the actual network itself. O-RAN is now standardizing those interfaces, so that there is potential for interoperability between different equipment. That obviously allows things to be disaggregated. It also allows you to solve problems somewhat differently. Right?

I've talked a lot about what I call the telco data center, which is the concept of using commodity servers, essentially, to do a lot of the compute. But you still have to do the bit-banging and the like; something has to do that, and we feel our FPGAs are ideal for it. The connection from, for example, the antennas back to what's called the distributed unit is sometimes called fronthaul. That, again, is primarily a wired technology now.

So, as a result of that, within a single PCIe card, we could have that fronthaul piece plus another FPGA that does a lot of the L1 processing. So again, what we see is that there's quite a lot of overlap between wired and wireless technology. One of the things that's happening in the industry is that, now that 5G is promising to offer significantly more bandwidth, it's putting more pressure on cable companies to also offer more bandwidth, right?

So, everyone wants to get your attention, deliver content to you, and extract dollars from you. Now the cable companies are upgrading their equipment. So, rather than having a central unit that broadcasts out through the cable, they're using what are called Remote PHYs. They're sending essentially packetized data out to the Remote PHY, and then, locally, they will have something that broadcasts across the remainder of the cable connection. That technology is actually very similar to wireless technology. It operates between one and two gigahertz. It uses digital pre-distortion, just like we were talking about for the 5G network.
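To make the digital pre-distortion reference concrete, here is a minimal sketch of a memoryless polynomial pre-distorter of the kind used to linearize power amplifiers in both 5G radios and Remote PHY nodes. The amplifier model and coefficients are illustrative assumptions for this sketch, not a Xilinx implementation.

```python
# A minimal sketch of memoryless polynomial digital pre-distortion (DPD), the
# technique mentioned above for both 5G radios and Remote PHY cable nodes.
# The amplifier model and coefficients below are illustrative assumptions,
# not a production design.
import numpy as np

def pa_model(x):
    """Toy power-amplifier nonlinearity: mild gain compression on the envelope."""
    return x * (1.0 - 0.2 * np.abs(x) ** 2)

def predistort(x, coeffs=(1.0, 0.2, 0.08)):
    """Odd-order polynomial pre-distorter: y = sum_k c_k * x * |x|**(2k)."""
    return sum(c * x * np.abs(x) ** (2 * k) for k, c in enumerate(coeffs))

# Complex baseband test signal (random, QAM-like amplitude).
rng = np.random.default_rng(0)
x = 0.2 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))

raw = pa_model(x)                      # amplifier alone: compressed output
linearized = pa_model(predistort(x))   # pre-distorted drive: closer to linear

print("RMS error without DPD:", np.sqrt(np.mean(np.abs(raw - x) ** 2)))
print("RMS error with DPD:   ", np.sqrt(np.mean(np.abs(linearized - x) ** 2)))
```

In a real system the pre-distorter coefficients would be fit adaptively from feedback samples of the amplifier output; the point here is only the structure of the correction, which is the same whether the signal ends up in free air or in a coax cable.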

So, really, there's just a lot of overlap. The only difference is that we're sending the signal over a cable rather than into free air; other than that, most of the technology is the same, so RFSoC, for example, can be used in that application as well, where rather than driving a power amplifier into an antenna, we're basically driving a piece of cable. So, what I find is that while we sometimes talk about our wired group and our wireless group, it turns out there's a lot of overlap between them.

I would also say it's an artificial distinction, I think, between what we normally call wireless and what we call wired. There's a lot of overlap, a lot of common issues that have to be solved, but historically they've been kept apart. Right? As we see virtualization happening, I think it'll become less obvious what the differences are.

ECD: So that's an interesting balance, and you're on both sides of that.
Madden: Correct. We are on both sides of that, and it is really fascinating. The thing you have to think about is that for things that consume massive amounts of bandwidth but hold relatively small amounts of information, the Edge is the right way to do it. For example, if all you're trying to do is recognize a face, do you really need to send massive amounts of streaming data just to find the one face? Or do you do it locally, and just send the “I found the face”?

I think the really interesting thing is that when we do have tens of billions of IoT devices, if all they're doing is measuring temperature and sending it back, I don't think there's much of an argument; you don't have to do a lot of local processing for that. But if a device is doing a lot more, working on something that inherently has a lot of extraneous, useless data, shipping it all back to the data center doesn't seem like a great plan, from my perspective anyway.
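As a rough illustration of that bandwidth argument, here is a back-of-envelope comparison between streaming a camera feed to the data center and sending only locally extracted detection events. All of the rates are assumptions chosen for the sketch, not measured figures.

```python
# Back-of-envelope: ship raw sensor data to the cloud vs. send only the result
# of local (edge) processing. All rates below are illustrative assumptions.
SECONDS_PER_DAY = 24 * 3600
BYTES_PER_GB = 1e9

# Option 1: stream compressed 1080p video to the data center (~5 Mbit/s assumed).
video_bits_per_s = 5e6
video_gb_per_day = video_bits_per_s / 8 * SECONDS_PER_DAY / BYTES_PER_GB

# Option 2: detect faces at the edge and send one small event per sighting
# (~200 bytes per event, 1,000 sightings per day assumed).
event_bytes, events_per_day = 200, 1_000
events_gb_per_day = event_bytes * events_per_day / BYTES_PER_GB

print(f"Streaming video : {video_gb_per_day:10.1f} GB/day per camera")
print(f"Edge detections : {events_gb_per_day:10.6f} GB/day per camera")
print(f"Reduction       : ~{video_gb_per_day / events_gb_per_day:,.0f}x")
```

Under these assumptions the edge approach moves several orders of magnitude less data per camera, which is the trade Madden is describing; the temperature-sensor case sits at the other extreme, where the payload is already tiny and local processing buys little.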

I think it'll be a mix. I don't think there's one solution that fits all. The other thing that you tend to see is that it goes back and forth, right? If you look at the whole server/client thing, it's gone in circles a few times already, right? Where you now say, "Okay, I access all of my tools on the cloud. So, why is it taking me so long for this presentation to come up?" That's one of the costs of not cramming up your laptop with software tools: yes, that's good, but on the other hand you have to pay a penalty, a latency penalty. That's the other piece about edge versus core processing: for things that inherently require low latency, I think a lot of the processing will still have to be done at the edge. I don't see how you can manage it any other way.

ECD: On a different tack, there's a convergence that's been going on for quite a while. Some of it's been physical convergence, some of it's been functionality convergence in software. Where do you see the next big consolidation of functionality at the device level?
Madden: A guy called Fred Weber, who used to be the CTO of AMD, said many years ago, "Integrate or be integrated." So, my belief system is that if it can be integrated in silicon, it will be integrated in silicon, because in the end, from a power and overall cost perspective, it is the lowest-power solution and it can be the lowest cost, though there are big debates about that.

Personally, my view is, as you know, we do multi-chip devices, but we do it for a very specific reason, which is to build large devices and large capability without building reticle-busting die. Right? But I've long been a believer in integration; one of the reasons we integrated RF is that the idea of taking data, serializing it, sending it across an interface, and de-serializing it just doesn't make a whole lot of sense.

When you think about the types of functionality that get integrated and what direction that will take, philosophically I align with the integrator's view. I think that from a cost and power point of view, that is the most efficient way of doing things.

Now there are a couple of exceptions, and the first one is high-density memory, DRAM. We've already seen HBM, and you're probably familiar with HBM, where, in order to improve the bandwidth and reduce the power for high-density memory, they're putting the memory in the same package on an interposer.

That was essentially the technology that we used a number of years beforehand to integrate our extra-large devices: the same interposer technology that we worked with TSMC to get enabled, which was sort of fun. And that, I think, is about as far as that will go. Although it's possible to have DRAM on exactly the same piece of silicon, it's just not cost effective. The reason is that the overhead of the masks that are used for the DRAM gets charged to all the other transistors as well, and that's not a good economic story. There's a long history of people trying to do embedded DRAM and saying it would be cost effective, but it's never really worked. The other one that won't be integrated is photonics. So, one of the issues that we have-

ECD: Not even in apps like isolator devices for power situations?
Madden: Well, so if we look at it today, we're showing our 112 gig electrical SerDes. Although you can build something that's twice the rate of that, the power efficiency just isn't there. You would be better off using two 112 gig than one 220 or whatever the magic number is.

What that really means is that if you want to go beyond a few millimeters of reach, you're going to have to use an optical solution. Some people talk about having the optical solution in the same package. I'm not sure I'm convinced that you need to put it in the same package. But there are a couple of things about optics. One is, lasers are very unhappy when they're around things that are hot. They have to be temperature controlled.

My feeling, in the short term, is that while the packaging technology we use for chips can be improved, the amount that's spent on R&D in that packaging environment is relatively limited. And if you try to do fiber attach and those types of things, I don't think the economics work. So, I think they will be separate packages, or very close together, but that's probably as far as it will go. There's a possibility that some high-density switches may end up having photonics in the same package.

ECD: What about stacked packages? Everyone's talking about 3D packaging. So, you could literally put a separate functionality chip on a layer, for example.
Madden: You could, and that's eminently doable. The question is, what functions require that amount of bandwidth? From a power point of view, if you stack things on top of each other, the power just adds, and so does the heat flux.

ECD:  And the waste heat adds.
Madden: Exactly. So the one on the bottom is happily heating the one on the top. I think it's eminently doable. If you want an example today, look at what they're doing with image sensors, where they've done 3D stacks at this point: I think they have an image sensor, a DRAM memory layer, and then a logic layer. And I think Sony did that years ago. For that application, form factor was a huge consideration; they wanted to get the largest image sensor possible. They also have a lot of parallelism, like they want a lot of data going into the logic region. So again, that's a good example.

I do think that technology will be applied. Again, you have to look at... It's the same thing as when a lot of people come to us and say, "Why don't we do it in package?" And I usually have the other question: "Why do you want to do it in package? What is the problem you're trying to solve?" I mean, if you have a bandwidth and power issue and that's your fundamental issue, okay. If it's a cost issue and you think it's going to be lower cost, I don't buy it. I don't think it's going to be lower cost; it's going to be more complex.

You're going to have issues with known good die; someone's going to have to test, you know, the combined thing. And if it's your own stuff, that's fine. But if it's coming from random people, then those are first-order problems. In the end, they're solvable, but there's a cost associated with it, and you have to look very closely at that equation. And you may say, "Okay, well, it saves me money in development, right? Because now I can mix and match pieces."

But in the end the customer is more interested in what it costs to actually manufacture it, and that's a hard economic equation to balance. So, it's a big debate in the industry right now, and I have my opinions on it. If you look at what we've done historically, a lot of people said that the way we were building our die wouldn't work, and all the rest of it. And it's worked fine for the last nine years or so. We've been doing it, and I think we will continue to do that.

It's nice to see other companies like AMD sort of pick up on the same idea. I call them slices rather than chiplets, right? Because they're self-sufficient in and of themselves, but then you just stack them up when you want more capability. I think it's a very powerful concept, and you definitely win the fight against the reticle.

ECD: Now, when we think of the whole migration forward and application spaces, what do you think is going to be the next big thing?
Madden: Oh, that's a great question. There are a few default answers. Everyone's talking about quantum computing as, you know, something that's going to revolutionize things. I think there is a class of problems that quantum computing can probably address; I don't think it's the broadest class. And in general, we haven't left a lot of technology behind. If you notice, most of it is additive these days. Back in the day we started with CPUs, then slowly added GPUs, and then we're adding video processing. But we rarely leave one behind.

My feeling is that it's very unlikely there'll be one technology that rules them all. Going forward, I think it will end up being just additive. I think the biggest issue I see with quantum computing is the fact that the class of problems it seems to work best on has a small number of inputs and a small number of outputs.

So, doing encryption or whatever, you put in a limited number of bits, the thing works away, and out the other end comes a small result. That class of problem, we don't have too many of those in the world. So, that's one thing that I think a lot of people talk about. And I think it's a fascinating area.

And I always say that you move to a new technology when the one that you're working with becomes so difficult that anything else looks like a good option. Right? I mean, that's usually when you move. Beyond that, I do think we're just at the start of the AI and machine learning journey. I don't think it's over by any means. I do think the form that it takes today was predicated on what it was first developed on, which is GPUs, in 2012.

So as a result, I think the way people think about it is probably still shaped by that. I don't know. Victor Peng (Xilinx CEO) talked about neuromorphic stuff. I think that's fascinating. I'm still amazed that our brain is a mere 20 to 25 watts and does what it does. And if you want to know the first-order problem that we're all dealing with, it's power.

When I talk to customers now, strangely enough, even the customers that you used to predict would come in and say "cost, cost, cost" now say, "Okay, if you can't reach this power level, it doesn't matter." So, that is the ongoing issue. I do think architectures will focus on where the power is. And the power, by and large, is in moving data around: if you look at the ratio of the power to do an add versus what it takes to fetch an instruction, get the data, move it in, and do the add, the add itself is probably about 10% of the power used. Maybe even less than that.

Mark Horowitz did a nice study on that, if you want to go and look it up. He breaks down all of the elements, and the most expensive is going to memory. The most, most expensive is going off chip and going to memory; it's orders of magnitude more energy than what happens on the chip. So, I think that points to domain-specific architectures, which is really what we're doing with Versal.
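For context, these are the per-operation energy figures commonly cited from Horowitz's ISSCC 2014 "Computing's Energy Problem" talk (45 nm process). Treat them as order-of-magnitude estimates that shift with process node; the small script below just prints the ratios Madden is alluding to.

```python
# Approximate per-operation energies (45 nm) as commonly cited from Mark
# Horowitz's ISSCC 2014 "Computing's Energy Problem" keynote. The values are
# order-of-magnitude estimates and vary with process node and implementation.
ENERGY_PJ = {
    "32-bit integer add":          0.1,
    "32-bit float add":            0.9,
    "32-bit read, 8 KB SRAM":      5.0,
    "32-bit read, off-chip DRAM":  640.0,
}

add_pj = ENERGY_PJ["32-bit integer add"]
for op, pj in ENERGY_PJ.items():
    print(f"{op:<28} {pj:7.1f} pJ  (~{pj / add_pj:,.0f}x an integer add)")
```

The off-chip DRAM access dwarfs everything else, which is the "orders of magnitude" gap Madden describes.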

One of the reasons that we are significantly lower power than a lot of other solutions is that we have distributed memories. And distributed memories work very well for things like machine learning and the like, because, again, if you have a bunch of weights, you're going to have them locally. You don't want to be swapping them in and swapping them out every time. So, those types of things I think will evolve over time.
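As a sketch of why that weight locality matters, the snippet below compares, under the same rough per-access energies as above, re-fetching every weight from DRAM for each input against loading weights once into on-chip memory and reusing them across a batch. The layer size, batch size, and energy figures are assumptions for illustration, not Versal data.

```python
# Why keeping weights in distributed on-chip memory pays off: energy to apply
# one layer's weights across a batch of inputs, re-fetched from DRAM every
# time vs. loaded once and reused from on-chip SRAM. Figures are assumptions.
DRAM_PJ, SRAM_PJ, MAC_PJ = 640.0, 5.0, 0.2   # per 32-bit access / multiply-add

weights = 1_000_000      # weights in one layer (assumed)
reuses = 128             # inputs in a batch that reuse each weight (assumed)

refetch_uj = weights * reuses * (DRAM_PJ + MAC_PJ) / 1e6
stationary_uj = weights * (DRAM_PJ + reuses * (SRAM_PJ + MAC_PJ)) / 1e6

print(f"Re-fetch weights from DRAM each time: {refetch_uj:10.0f} uJ")
print(f"Load once, reuse from on-chip SRAM:   {stationary_uj:10.0f} uJ")
print(f"Ratio: ~{refetch_uj / stationary_uj:.0f}x")
```

With these assumed numbers the weight-stationary approach comes out tens of times lower in energy, which is the intuition behind keeping weights in local, distributed memories rather than swapping them in and out.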

The other thing, I think, is the demise of Moore's Law, and I'm happy to say it's pretty much on its last legs at the moment. New algorithms and architectures are some of the things we still have.

ECD: What about materials? Advanced semiconductors like gallium nitride have been revolutionizing the power space.
Madden: It has. My background is that I did my master's at Cornell, and we were saying back then that gallium arsenide was going to replace silicon. It hasn't done it so far, so I'm not a great believer. The rule is that usually there's about a 10- to 15-year lag between the discovery of the latest gee-whiz device and when it actually goes into production. If you look at FinFETs, I think it was the mid-nineties when they were originally invented, at Berkeley, and it took until 2010. So, I'm not seeing a lot of great stuff out there that has that amount of promise.

I was lucky; I lived in the golden years of Moore's Law. That's why Versal is a new architecture: it was built with the understanding that just scaling the old stuff doesn't work very well anymore. You have to rethink how you use functional blocks, how you make them available to other engines, those types of things. You have to think about it that way. That's why I said architecture is probably where we will see it; many now see this as a revival of computer architecture.

I think it's actually a very interesting time in the industry. But I think people have to work a lot harder than we've had to over the last 20 years or so.

ECD:  I can't think of a better way to end this.
Madden: There you go.

 
