Packet Pushers Hosts Cignal AI

Andrew Schmitt was invited by Packet Pushers and Scott Wilkinson of ECI to participate in one of its weekly podcasts. While the podcast is not well known in the service provider world, it is popular among network engineers at small to large enterprises, averaging about 10,000 downloads per episode.

This episode covers topics of interest to enterprise networking engineers: silicon photonics, data center interconnect (DCI) and enterprises using dark fiber to directly connect sites. It also examines the open optical movement driven by the Facebook-backed Telecom Infra Project, as well as the practice of buying and deploying unbranded optics for use in OEM equipment.

You can listen to the podcast embedded below or at the Packet Pushers site where there are also some show notes.

A full transcript of the show is also available for clients.


Greg Ferro: Welcome to Packet Pushers. Now, I consider optical networking to be a specialized area of networking, and in particular data networking, but for many people this has really low visibility, even to otherwise experienced network engineers. I’ve talked to plenty of people with 20, 25, 30 years’ experience and they wouldn’t even know what optical networking is if it came out of the bottom of the boat and started biting off their legs. Really, the optical network is much less about packets and much more about the physical properties of fiber optic cables, signal propagation and remote operations. It’s about transponders and wavelength switching. Things that most network engineers just don’t get as part of their standard training.

Now, in recent times, optical companies have been moving into data center interconnect and selling products directly to enterprises, where you bang what fundamentally looks like a Layer 1 switch on the back of a dark fiber offering from a provider and you can now start to provide DCI services. Or maybe your least worst telecom supplier, which is probably the only one you’ve got, is often using this exact same technology to provide Layer 2 transparent bandwidth between data centers. Now, sure, L2 transparency is risky but it remains the only way for enterprise applications to actually work today. The reality bites pretty hard here. We all know L2 is a disaster but it’s still a working solution.

So, what I’ve done is I’ve reached out to a couple of willing victims from the internet. Joining us is Scott Wilkinson from ECI Telecom. He’s a senior director in portfolio marketing there, and Andrew Schmitt, who is an analyst at Cignal AI, and they’re going to be here to talk a little bit about optical networking. You can find out more about them in the show notes on the Packet Pushers website. So, let’s get right into it.

I’ve seen a lot of talk recently about something called silicon photonics and optical components. Now, optical networks tend to be made up of these interfaces and the ROADMs that tend to go into the edge. Andrew, is that market really booming? Because my understanding is that’s really the key to optical networking, those transponder modules?

Andrew Schmitt: The transponder module is where the magic happens. That’s where you take the electrons and convert them into photons, and that’s how you drive it through the fiber. Now, silicon photonics is another way to build the same transponder. If you look at the way things have been done in the past, you used technologies like Indium Phosphide and Lithium Niobate to turn the electrons into photons, and specifically you need a material that can create photons, and silicon can’t do this. Silicon photonics takes a material that creates photons, that can lase, and puts that on top of a silicon substrate. The holy grail for silicon photonics is to take optics and embed it right into your chip. So, imagine a Broadcom Ethernet switch, but instead of that being an electrical IO, it would have optical IO. That’s the long term promise of silicon photonics, but we’re a long way from that.

Greg Ferro: So, silicon photonics would be useful then in getting from the switch ASIC. So, if I’ve got 100 gig interface on it, today I have a 50 gig electrical interface running PAM-4. Probably, instead of having a 50 gig electronic interface, I might start going to a 200 gig optical which then goes off to the physical interface.

Andrew Schmitt: Well, optical would be the physical interface. It’d just go right off the chip, right out the front panel and off you go.

Greg Ferro: But it’d still have to be amplified. They wouldn’t be able to lase at the power level that you need to transmit 100 gig, 200 gig signals directly off the chip, would they?

Andrew Schmitt: Well, it depends a lot on how far you want to go, how strong your source laser is and how much it’s been split. In general, one of the downsides of silicon photonics in longer reach applications is that it’s a more lossy medium. So, when you look at people doing a thousand kilometers, things like that, there are some companies that do silicon photonics, but a lot of the more traditional technologies are still quite competitive. Now, going back to the Broadcom chip example: if you actually look at one of those chips today, at least a third, maybe as much as a half, of the power consumption in that chip is coming from the IO. It’s not even in the processing. It’s just getting the signals on and off the chip, doing the PAM-4 DSP. So, if you could find a way to do this optically with lower power, it would be a major benefit to improving the chip itself.

Greg Ferro: So, the real drive behind silicon photonics is lower power consumption on the ASIC, and that ultimately leads to a higher density, higher throughput ASIC, because now there’s more space left in the silicon, in the chip, to allocate to switching bits.

Scott Wilkinson: So, the way we’ve done optics forever, if you go all the way back through history, the way we’ve sent data down fiber is we turned a laser on and off really fast, and that worked for a while until we got to about 10 gigabits per second and we thought that was the fastest we’d ever have to go, that we’d never have to worry about it again. The problem was we couldn’t figure out a really good way to go beyond that, and you’ve mentioned numbers here today like 50 gig and 100 gig and 200 gig and 600 gig. Those were not possible the way we used to do things. So, we had to come up with a new way of doing it, and what people did is they looked at what was happening with satellites and said they’ve done some really cool stuff there, where they use different levels and phase modulation and all this really cool stuff to go over what’s essentially a lossy medium. We didn’t have a really good way to do that with optics, and a couple of people came up with ways to do it with Indium Phosphide, but it was really expensive and really hard to do.

What silicon photonics did is it took all of that complexity and said, let’s move it into silicon, which we already knew how to do a whole lot of stuff on really cheaply, and it has allowed us to do those things. So, yes, silicon photonics is going to be great for putting stuff onto the chip, but that’s way in the future. It’s not even clear that’s ever really going to make sense in most cases. Largely what it has done is allow us to create these transponders that have interfaces that can go above 100 gig and can do it relatively inexpensively. That’s where it’s really coming to maturity and will change the industry.

Greg Ferro: So, I see silicon photonics talked about a lot in terms of 25 and 50 gig and 100 gig ethernet interfaces for LANs and they’re using silicon photonics in those to replace what we used to use as lasers. Is that correct?

Scott Wilkinson: It’s still using a laser. It’s just the way they’re modulating it. It used to be, as Andrew mentioned, Lithium Niobate. That’s a device that can turn a laser on and off really fast. It’s a modulator. So, silicon photonics doesn’t just turn it on and off, it puts something in front of it that moves it in multiple levels and multiple phases and allows us to do things like PAM-4.

Greg Ferro: So, now you’re starting to transmit multiple symbols on a single frequency?

Scott Wilkinson: Well, you’re starting to transmit symbols rather than bits. You’re transmitting a symbol, and that symbol means one of, say, 16 points or one of 64 points. So, you’re getting a lot more information out of it without having to turn it on and off quite as fast.
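To put rough numbers on Scott’s point (my own back-of-the-envelope sketch, not something from the show): an N-point constellation carries log2(N) bits in each symbol, which is why 16QAM and 64QAM move more data without turning the laser on and off faster.

```python
import math

def bits_per_symbol(constellation_points: int) -> int:
    # An N-point constellation encodes log2(N) bits in each transmitted symbol.
    return int(math.log2(constellation_points))

# Simple on/off keying is effectively a 2-point constellation: 1 bit per symbol.
# 16QAM: one of 16 points -> 4 bits. 64QAM: one of 64 points -> 6 bits.
for points in (2, 16, 64):
    print(f"{points}-point constellation: {bits_per_symbol(points)} bits/symbol")
```

So at the same symbol (baud) rate, 64QAM carries six times the bits of simple on/off keying, which is exactly the trade being described.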

Greg Ferro: Okay. I’ll put a link in the show notes so you’ll get some sense of what that is, and also quadrature amplitude modulation. I think if you’ve been listening along to the podcast, you might remember we’ve talked about those in terms of WiFi, which is where they’re used mostly. It’s the same principle though, isn’t it?

Scott Wilkinson: It’s exactly the same.

Andrew Schmitt: Yeah, the one thing I’d want to point out, though, is you’ve got to look at some of the transponders used in the data center today that are built out of silicon photonics. You can buy something that’s functionally equivalent that has no silicon photonics in it at all, and there’s not a really big difference in terms of cost or power between the two solutions. The big company that’s pushing silicon photonics is Intel, and you can go look at their LR4 or their CWDM solution for 100G. The benefits versus the traditional way of doing things are not really that clear yet.

So, I think the one thing I’d want to point out is while there are people that are doing silicon photonics, it hasn’t become a no brainer, like “oh my gosh, why would we ever do it any other way?” We haven’t reached that stage yet.

Scott Wilkinson: For short reach. For longer reach it’s a different story, right?

Greg Ferro: So, for short reach, cost is a driving factor, because if you’re going to drive 50 meters or 100 meters of Ethernet on a fiber optic cable, you’re not willing to pay 50,000 dollars per interface.

Andrew Schmitt: Yeah, but you can buy something from Finisar or you can buy something from Intel and they’re functionally equivalent. They can even interoperate with each other, but there’s not a major difference in terms of cost or power between the solutions, and that’s why there’s still a tremendous amount of old school stuff being sold today.

Greg Ferro: So, just to wrap up, I’ve heard a lot about silicon photonics. What you’re saying is, for silicon photonics, we’re probably going to use it on chip, that we’re actually going to use optical interconnects inside the chip, and it’s less likely to be outside the ASIC in the future?

Scott Wilkinson: I wouldn’t agree with that. I think there are certainly people out there who believe that we’re going to be able to put it onto the chip. I worked with a company out of South Africa for a while that was trying to do this, trying to actually get light out of the silicon and do some interesting things. But there are reasons why not. Think about a Broadcom chip right now and how the interfaces would look on it. Can you imagine attaching lasers to it for all of the interfaces? That might work in some cases, and I know there are people out there pushing that, but there are going to be a lot of cases where you’re going to want the Broadcoms of the world to focus on what they do best, and then let somebody else worry about the innovations in optical, and still have that serial interface, or whatever that interface happens to be, to some sort of optical external to the device to allow you to go the distance.

Greg Ferro: So, there’s some hope that this will give us something over time, but there’s obviously some sort of innovation or some sort of step change needed on the material side before that works, because putting a laser onto a silicon substrate in a 10 nanometer process isn’t going to be easy-

Andrew Schmitt: There’s still a lot of work to be done on the manufacturing side because, as I said, there are people building the same transponder with two different technologies, one using silicon photonics, one that isn’t, and they’re both competitive in the marketplace. So, silicon photonics hasn’t proven itself as the no-brainer approach to doing these things as of yet.

Greg Ferro: Okay. So, something to watch but not something that people in the field would necessarily need to worry about because it’s something that happens inside the box or inside the transponder?

Andrew Schmitt: That’s correct.

Scott Wilkinson: You talked a little bit about data center interconnect and the idea of going between data centers, when you’re talking about longer distances. Not inside, but outside, and in that area, I think silicon photonics is generally considered an enabling technology. It’s what allows us to now have 600 gigabits per second per wavelength, and then 88 wavelengths. So, multiple terabits per second on a fiber. That’s really the enabling technology. There are some competing ones, but not many.
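The arithmetic behind “multiple terabits per second on a fiber” is just channel count times per-channel rate. A quick sketch (my illustration, using the 88 wavelengths at 600G mentioned above):

```python
def aggregate_capacity_tbps(wavelengths: int, gbps_per_wavelength: int) -> float:
    # Total fiber capacity is simply the DWDM channel count
    # multiplied by the per-channel line rate.
    return wavelengths * gbps_per_wavelength / 1000.0

# 88 DWDM wavelengths at 600 Gb/s each:
print(aggregate_capacity_tbps(88, 600), "Tb/s")  # 52.8 Tb/s
```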

Greg Ferro: Right. So, this means in data center networking, everything’s modularized, everything’s componentized, everything’s interoperable to a lesser or greater extent. Some of it’s imaginary from the vendors and some of it’s real. There’s a wide range of differences between the transponders and the approaches that vendors take. So, you wouldn’t necessarily have different vendors’ equipment talking to each other. You wouldn’t just be able to plug in two vendors’ boxes and expect them to work like we do with Ethernet.

Scott Wilkinson: That’s an interesting question that we can spend a lot of time on. There are groups within the industry that are trying to make it work and generally, what happens, is you can make them work as long as you make them work at the lowest common denominator. Over the shortest possible distance with the fewest bells and whistles. Those will generally work with each other but if you want to start going over longer distances, vendors have different special sauce that they throw in there that make them work. If you want to have programmability, you want to have them be able to do certain things with switching and auto-balancing, then you start not having compatibility across vendors and there are groups like Open ROADM and some other groups within the industry that are trying to standardize that so that you can say it will work. Over short distance, usually not a problem. Over long distance, you can’t guarantee it all the time.

Greg Ferro: So, where you were talking about 600G, I’m trying to remember the numbers off the top of my head, that would probably be proprietary? You’re right at the leading edge there, but you could probably have an interoperable standard if you clocked it back to something like 16QAM on a single carrier.

Andrew Schmitt: We might not clock the speed back, but what you do is clock back the reach performance, and that’s the thing with standards. Once you start requiring interoperability between two vendors, as Scott said, it’s the lowest common denominator, and in the past there’s been such a huge focus on performance that people have been willing to forgo standards to get the most performance out of the optics.

Greg Ferro: I see what you’re saying. So much of it is about whether I can drive the signal at this level on this fiber, because a big part of optical is actually the fiber optic itself and its ability to propagate a signal at a given power level.

Scott Wilkinson: Yeah. It’s not so much the power level. Generally it’s something like forward error correction, and that may be something you’re not familiar with, but it’s something that’s been done in radio and other signaling for ages. Essentially, you look at what came in and you’re able to figure out if there were errors and correct them. As for the ways that forward error correction is done, there are standard ways which are pretty good and there are proprietary ways which are really good. If you use a proprietary forward error correction, you may get an extra 100 kilometers or so out of your distance. So, the power levels don’t really change, but the coding changes.
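For readers new to forward error correction, here is a toy illustration of the idea Scott describes: a three-fold repetition code. This is far simpler and weaker than the Reed-Solomon or LDPC codes real transponders use, but it shows how a receiver can correct errors without any retransmission.

```python
def fec_encode(bits):
    # Toy repetition code: transmit each bit three times.
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(received):
    # Majority vote over each group of three received bits
    # corrects any single bit error within that group.
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
coded = fec_encode(message)
coded[4] = 1 - coded[4]              # simulate one bit flipped by channel noise
assert fec_decode(coded) == message  # the error is corrected at the receiver
```

Real optical FEC trades a few percent of overhead bits for several dB of effective margin, which is where the extra reach Scott mentions comes from.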

Greg Ferro: Right and for a given fiber optic situation. So, if you’re buying dark fiber from a given provider, one path might run 150 kilometers but your second fiber optic path might run 300 kilometers because you’re on a redundant path.

Scott Wilkinson: Exactly right, or it may be a worse fiber. It may have more connectors in the way or it may be old fiber. It’s not necessarily distance, it’s how good the fiber is, but that’s exactly right, yeah. Some of them will work better than others.

Greg Ferro: I always forget about that. When you’re talking about fiber, it’s not actually how long the fiber is, it’s basically how much power gets taken out of it, and that power sum is really the key, including how many connectors there are, because every time you cross a patch lead, you lose X amount of power. Different types of fiber optic cable attenuate the signal to a lesser or greater degree.
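That “power sum” can be sketched as a simple link loss budget. The default values below are typical ballpark figures (roughly 0.25 dB/km for single-mode fiber, around 0.5 dB per connector and 0.1 dB per splice), my assumptions rather than numbers from the show:

```python
def link_loss_db(length_km: float,
                 connectors: int = 4,
                 splices: int = 10,
                 fiber_loss_db_per_km: float = 0.25,
                 connector_loss_db: float = 0.5,
                 splice_loss_db: float = 0.1) -> float:
    # Total link loss = fiber attenuation + connector loss + splice loss.
    return (length_km * fiber_loss_db_per_km
            + connectors * connector_loss_db
            + splices * splice_loss_db)

# A 100 km span with 4 connectors and 10 splices, using the defaults above:
print(f"{link_loss_db(100):.1f} dB")  # 28.0 dB
```

Whether the link works is then a question of whether transmit power minus this loss still clears the receiver sensitivity, which is why a short run full of patch panels can be worse than a longer, cleaner one.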

Scott Wilkinson: Right and if it’s aerial and it gets caught in the wind, it might lose something. If it’s underground and goes around a corner, it might lose something. Fiber optics is really interesting.

Greg Ferro: Yes. Well, all right, I think we could actually do a complete show on this and I’m fascinated by it. Just to wrap this up, if I was a network engineer and I wanted to understand what it is about transmitting fiber optic signals, give me the key factors. Now, let’s go with one from each of you. What are the key things that you think somebody should be looking at if they’re looking at propagating a DWDM signal in terms of optics? Andrew?

Andrew Schmitt: I would say if you’re not going to be using an optical solution from a vendor and you’re going to try and do it on your own… I don’t know. It’s complicated. There’s not one thing.

Greg Ferro: I guess I just want to look at the ROADM and the transponder and ask: should you go with one vendor’s set of transponders or one brand of equipment, because that’s really the way these things kind of work today?

Andrew Schmitt: Unless you want to spend a lot of time understanding how all this stuff works, absolutely.

Greg Ferro: All right, Scott, anything to add?

Scott Wilkinson: Absolutely and it’s not just the hardware. It’s not just the optics. It’s the software that controls it. If you’re buying from a lot of different people, you’ve got to have lots of software controlling all the different parts and sometimes those pieces won’t work together. So, it’ll be fine when you put it in but when something breaks, it’ll be impossible to figure out what happened.

Greg Ferro: All right. So, since you started talking about software and the control plane and the devices, let’s move the conversation along to what I broadly talked about: optical devices. Now, in the preparation discussion we talked about how there are actually two types of optical devices. There are optical switches, which are the ones that actually just switch the optical signal, or the optical wave, as it comes in and then propagate it again, and then there are optical edge devices, which do Ethernet to optical, or physical to optical. How big is that market in enterprise networking? If you’re a network engineer working in enterprise IT, and I think this question’s for Andrew, how big is the optical networking market in enterprise networking today?

Andrew Schmitt: Well, there’s 14 billion dollars worth of optical hardware sold every year, and about 10 percent of that is sold directly to medium-sized and large enterprises. Most of the equipment gets sold to the telcos, everyone from Verizon, Level 3 and Colt to China Mobile and China Telecom. So, enterprises are a small portion of the purchases that are out there, and there are a few vendors that tend to cater more to those types of clients.

Greg Ferro: Is enterprise networking more profitable than telcos?

Andrew Schmitt: I would say it is more profitable, particularly on the services side. There are a lot of system integrators that will work with, let’s say for instance, Ciena or Cisco or ECI. There will be a system integrator that will take that equipment, then work with a medium-sized business like a hospital or university to actually install it and turn it up.

Greg Ferro: So, Scott, you’re doing this with enterprises today?

Scott Wilkinson: Yes we are.

Greg Ferro: What are the challenges that enterprises are facing when you turn up on their door step and start saying you should do optical here?

Scott Wilkinson: So, I like to tell this story. We went into a large financial institution one time that had been paying a lot of money to their, what did you call it? The least horrible-

Greg Ferro: Yeah, the least worst. I would never call a telco the best.

Scott Wilkinson: Yeah. So, they were paying a lot of money to the least worst to connect data centers together. They were building a new data center. So, they brought us in through a partner, like Andrew said, a lot of stuff is done through partners, and said, “Tell us what it would be like to build our own optical infrastructure.” And we came in as the optical experts and started talking about wavelengths and dB and splicing and polarization mode dispersion and how easy this was, and we terrified the hell out of these guys. They had no idea what we were talking about.

Greg Ferro: You’re talking in a completely different language, I imagine.

Scott Wilkinson: It was, and not only did we lose that sale, they almost went to the telco and just gave them their data center business, because they didn’t want to deal with that. So, we learned a great lesson there: the biggest challenge with these guys is talking in a language they understand and convincing them that this really isn’t that hard. Everything we’ve been talking about, things like silicon photonics and chips and putting things on chips and off chips and building transponders and ROADMs, they don’t want to hear that. They want to hear that this is a way to connect things together very easily. So, the next time we came in, we said, “Look, this stuff is just virtual fiber. Lease fiber, and we’ll give you virtual fiber on top of it. You go from having one connection to having 80 connections. It’s easy.” That is honestly the biggest challenge in optical networking that we’ve had with enterprises.

Greg Ferro: Yeah. So, this is something that I’ve often talked to people about: just go and get some DWDM gear and plug it in. You’ll get two dark fiber paths, redundant paths that you’re building, plug them into two different boxes, and every time you plug the boxes together, you’ll get 24, 48, 80 channels. Out of that one fiber optic run, you can have 24 to 96 10 gig, 25 gig, 40 gig, 100 gig lines. It’s magical. It’s like this infinite bandwidth loop.

Scott Wilkinson: If only it was infinite. Yeah. Unfortunately, there is a limit but yeah, it’s amazing and since it was introduced back in the ’90s, it has absolutely revolutionized the way optical networking works.

Greg Ferro: So, literally, it’s a case of put a box in and connect it to the dark fiber. There’s some monitoring that has to happen, there’s some setup that has to happen, but at that point, you’ve really got a box at the edge which has X number of Ethernet ports, usually 24 10 gigs or 24 50 gigs, I think, is fairly common these days, and then the other side is this optical thing, and then you can just mux all of those interfaces over that one fiber pair.

Scott Wilkinson: Right. There are lots of different ways of doing it. People have put it together in different form factors. Some of them you put everything in on day one, some of them you build over time, some of them are running at 100 gig, we have 200 gig and 400 gig, and by the end of the year we’ll have 600 gig. Some people will only want to run it at 10 gig. So, there are lots of different ways to put it together, but essentially, that’s exactly right. You plug it in, it hits these optical things, it goes over the fiber and it’s very simple.

Greg Ferro: How many of the customers that you’ve worked with actually know much about optical? Can you just buy this stuff, plug it in, and use it, or do you need magic robes and wands to make it work?

Scott Wilkinson: What we tell people is you don’t have to have any optical PhDs on staff. We have those on our staff and we’ll take care of that for you. So, we had a customer recently, actually a utility, that was trying to build their own network, because they had this issue where, when their communications network went down during a power outage, they called up the local provider and said, “You’ve got to get the communications back up so we can turn the power back on.” They said, “That’s great, but as soon as our power gets back on, we’ll do that for you.” And so, they decided to build their own, but the problem is they knew nothing about optical. So, they were actually providing the enterprise connectivity for a bunch of power generation and distribution companies, and they were terrified about this. We came in and showed them that really, once you’ve got the connectivity and you’ve got the software on top, the software has gotten very intuitive these days. The software on top will tell you things like: this laser is about to die, go buy a new one and put it in there before it breaks, or-

Greg Ferro: Sorry, I’m just wondering. Do lasers die?

Scott Wilkinson: Yes they do, sure.

Greg Ferro: Do they just wear out?

Andrew Schmitt: Everything breaks.

Greg Ferro: But predictably so? It’s not just it was working, now it’s not? You can actually predict it. Is the laser physically an organic style thing where it degrades over time and needs to be replaced, or-

Scott Wilkinson: It does. So, I actually did my graduate research on that. So, depending on how deep you want to go into it, essentially what happens is lasers these days are built out of layers of semiconductors with electrons flowing through them, and over time those electrons start burning little holes. As you get more and more of those holes, it stops working as well and it takes more current to drive it. So, what you can do is watch the current on the laser, and as that current starts to go up or down, it tells you that the laser is getting bad. So, before it even breaks, you can know that the laser is starting to go bad and you can go out and fix it.
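The monitoring Scott describes can be sketched as a simple drift check on the laser bias current. The function and the 20 percent threshold below are hypothetical, purely to illustrate the idea:

```python
def laser_degrading(bias_samples_ma: list,
                    baseline_ma: float,
                    drift_threshold: float = 0.20) -> bool:
    # Flag the laser when its latest bias current reading has drifted
    # more than drift_threshold (as a fraction) from the healthy baseline.
    latest = bias_samples_ma[-1]
    return abs(latest - baseline_ma) / baseline_ma > drift_threshold

healthy = [50.0, 50.4, 50.9]   # milliamps, stable near the 50 mA baseline
aging = [50.0, 55.0, 62.0]     # current creeping up as defects accumulate
print(laser_degrading(healthy, 50.0))  # False
print(laser_degrading(aging, 50.0))    # True
```

A real management system would trend this over months and raise the alarm well before the laser actually fails, which is the point Scott is making.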

Greg Ferro: Andrew?

Andrew Schmitt: Yeah, and on the receive side, the software can detect the amount of light that’s being received, and over time, as that starts to degrade, you can say, well, we’re getting close to end of life, there’s a problem. Or sometimes it’s not even the laser, it can be the fiber. Over time, the fiber will degrade, and you can detect that just by monitoring the link, and that’s where the software that Scott’s talking about is so important.

Greg Ferro: Now, it strikes me that this would happen to Ethernet fiber optic things as well, but the power difference there and the timing differences, because they’re running relatively slowly compared to these DWDM systems, you’re talking 600 gig and yet we’re running them at 25 gig. Even though there’s some variation and degradation in enterprise, it strikes me that the degradation would have much less impact than it would on an optical rig. Is that a good statement?

Scott Wilkinson: Unfortunately, no. It really doesn’t matter all that much, because the lasers really aren’t running at all that different a power. In general, they have an amplifier outside that’s providing the extra power most of the time, and the slower stuff, generally, will last a little bit longer, but you’re still going to have the same issues. Usually, the issue with a switch is that when it starts to die, you’ve got a redundant piece that’s there and you can swap them out relatively quickly. When you’re talking about a link that’s going thousands of kilometers, or even hundreds of kilometers, you want to know about it long before it happens, because that impacts a lot more things.

Greg Ferro: It tends to be a critical thing that usually has dozens of services layered over the top instead of just one server.

Scott Wilkinson: Right. Now, we also have software built in, and this is one that’s kind of cool. The biggest problem with a fiber optic link is not usually the laser dying, it’s usually somebody with a backhoe going through it. So, we actually have software as well for these guys who really don’t know anything about optics, remember that, so they can tell exactly what location that fiber break hit and send a crew out specifically to that location to do the repairs. So, they don’t have to know a lot about optics and figure out where everything is. All that software on top is really what makes this almost a no-brainer.

Greg Ferro: So, you send a signal down, you get an echo back, and you can say, “Oh, I’ve got a break 151.2 kilometers down the wire.” And then they can look at their maps and work out where it is.

Andrew Schmitt: Yeah. In fact, you can use those tools to even… a lot of the equipment has sophisticated features like that built in, where you hook it up to a fiber and you can actually find the poorly performing splices and things like that. So, if someone tries to sell you some fiber, you can say, “Well, that’s great, but before I actually buy this thing, you need to fix this splice at 120 kilometers, because I’m getting a lot of reflections off that.”
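The break-location trick Greg and Andrew describe is essentially optical time-domain reflectometry: send a pulse, time the reflection, and convert the round trip into distance using the speed of light in glass. A minimal sketch (the 1.468 group index is a typical value for standard single-mode fiber, my assumption rather than a figure from the show):

```python
C_VACUUM_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
GROUP_INDEX = 1.468               # typical group index of single-mode fiber

def distance_to_event_km(round_trip_seconds: float) -> float:
    # Light travels at c/n inside the fiber, and the pulse goes out and back,
    # so the one-way distance to the reflection is half the round-trip path.
    speed_in_fiber = C_VACUUM_KM_PER_S / GROUP_INDEX
    return speed_in_fiber * round_trip_seconds / 2

# A reflection arriving roughly 1.48 ms after the pulse puts the event
# about 151 km down the fiber, on the scale of Greg's example.
print(f"{distance_to_event_km(1.48e-3):.1f} km")
```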

Greg Ferro: That’s beginning to sound like magic.

Scott Wilkinson: It’s really cool stuff.

Greg Ferro: That really is. Now, using these devices used to be really arcane. I did some work on this stuff 10, 15 years ago, and painful does not begin to describe how these devices worked. It was like the devil was going through my head and poking me with his trident every single time I wanted to do something. It was not intuitive. It was not obvious. Everything was arcane and stupidly put together, because vendors, obviously, hated their customers. Have things changed?

Scott Wilkinson: Well, yeah. You don’t have to wear the robes anymore.

Andrew Schmitt: But you have to respect the physical layer. That’s the thing. It seems really simple from the outside, but when you get into the details, there are a lot of things that can go wrong.

Scott Wilkinson: But for example, to make it simple, with our software package you don’t ever actually have to go to the device. If you know that something is broken, we have a point and click where you click on “I know it’s broken, tell me where it’s broken,” and it comes up and shows you a map of your fiber layout with a little red dot telling you where it’s broken. So, if you want to put the robes on and know all the stuff underneath it, we’ll certainly give that to you, but the software has now abstracted all of that away.

Greg Ferro: So, what we’re seeing too in the optical market is, in the old days it was all optical Ethernet, but increasingly we’re starting to see optical devices which are IP MPLS optical. So, recently, for example, Juniper bought BTI. They’ve now got this optical capability so that they can do IP and MPLS and optical in a single box. Is that something the optical vendors are getting into, or is that something the networking vendors are getting into, or is it a free-for-all and everybody’s converging on that IP MPLS edge?

Andrew Schmitt: This has been going on for over a decade, really almost two decades. This is nothing new. Cisco was really the one that first started pushing this 15 years ago. They called it IP over DWDM, and what they would do is take high performance optics and strap them right on the front of the routers. That’s been a useful tool for collapsing the transport functions and the layer two and three functions into a single system, but when you do that, there are some downsides. Typically, the biggest one is density. If you look at a 100 gig coherent interface on a Juniper system, you’re going to get half the density of what you would get if you just had the shorter reach client optics, but you end up paying the same amount for that slot. So, that’s one of the big challenges.

Scott Wilkinson: Yeah, we get asked about this a lot, because there are people out there pushing this idea that the optics need to integrate directly, but even when Juniper bought BTI, they kept those as separate devices, and they’re still talking about the Juniper box being a separate device. Andrew hit on one of the big reasons, which is density. There are several other ones, but for example, there was a talk at OFC, which is a big optical conference, a couple of years ago, and we had one vendor out there talking very much about their idea of collapsing it together, and then you had somebody from Google saying they were doing exactly the opposite: that they thought their optical expertise, the optical capabilities, needed to stay at the optical layer, and that allowed them to focus on the switching and routing as a separate entity without having to burden it with the optical capabilities. So, think about the things we were just talking about: being able to figure out where a break is in the fiber, being able to determine whether lasers are failing, all those things. Most of that will never be integrated into a switch or a router, because the real estate is way too expensive.

Greg Ferro: I’m going to make a note of that and call you up when it never happens.

Andrew Schmitt: The future is here, it’s just not evenly distributed. People are using that, but in most cases it’s not going to work, because you’re not going to get the tools that you need to debug and manage a network. But if you look at someone like Microsoft, and the way they’re hooking up their data centers in all of the big cities in the United States: if it’s 40 to 50 kilometers, they’ve completely eliminated the need for external optical transport equipment and they’re using pluggable optics, but once they need to start going longer distances, they move to the more traditional model. So, there are cases where it does make sense to take this approach, but these are people who have essentially infinite layer two network management experience, yet they’re still externalizing and pushing the optical transport gear into a discrete function.

Greg Ferro: Okay. So, what you’re saying is that for today, it makes sense to have optical devices with optical software, optical control planes that are optimized for that purpose, and while we might drop an optical interface into a router to simplify short haul, for long haul it still makes sense to have them separated?

Scott Wilkinson: In a lot of cases, yeah. I ended up writing a white paper on this, because I got asked the question so many times, and the conclusion was that there are cases, like the Microsoft one Andrew mentioned, where it will make sense, but if you’re talking about doing a data center connect with multiple WDM connections, a lot of times you’re going to want to keep those separate. In fact, most times, I would say you want to keep those separate.

Greg Ferro: Well, partly that’s a failure issue and it’s also an operational issue. To my mind, there’s sufficient divergence in optical operations from IP operations that mixing those skill sets together has no coherence. It doesn’t make sense.

Scott Wilkinson: That’s absolutely true.

Greg Ferro: But things do change. What we’ve also seen is that 10 or 15 years ago, when MPLS hit the market, people who did MPLS didn’t know how to spell IP. They only did MPLS, and yet over the last 15 years we’ve seen that market converge to where MPLS is now a commodity skill. When you go and do your basic training, you start learning MPLS from day one. It’s not a mystic incantation that you learn as an advanced magician.

Scott Wilkinson: Right, and this is, I would say, a mistake that a lot of people make. I know we’re going to talk at some point about things like white boxes. When you look at ethernet versus IP MPLS versus white box switching and routing, everybody says, “Well, look how all of these things have gone from being separate things to being together.” And I would agree with that, because they’re all packet based devices that can be done on a chip. They all have a similar underpinning, and as Andrew mentioned earlier, when you start talking about optical, you’ve got to respect the physical layer, which those guys don’t have to worry about, so you’re talking about a completely different type of technology. It’s like saying that you’re going to integrate the radio frequency transmitters on microwave towers into the devices that switch cellphone calls. They’re just such completely different technologies. I agree they’re always next to each other, but you’re never going to have somebody who understands both of those at that level.

Andrew Schmitt: And the cadence of these technologies, they’re not necessarily developed in sync. Broadcom is releasing new ethernet switches, but optical technology could be on a completely different time cycle. So, you don’t necessarily want to couple these two things together, because once they’re coupled, you’re going to lose the flexibility to pick and choose the technology as it comes to market.

Greg Ferro: Okay. So, if the market were to slow down and converge in real life, if there weren’t the enormous growth in bandwidth that we see now, and therefore the financial rewards for the companies that make these optical interfaces, these high speed transponders, we could see a convergence, I guess. But the chances of bandwidth demand shrinking are pretty small, to be fair.

Andrew Schmitt: Well, on a side note, I’ll just point out that for most of these people making optical transponders, this isn’t known to be a very profitable or highly lucrative business. It’s pretty brutal. So, if we got to a situation where they were making tons of money, I think they’d be happy to deal with whatever problems come up.

Greg Ferro: I want to jump back to one topic. One of the things that we missed when we were talking about optical components was the point you made about lasers wearing out, because over time the electrons that stimulate the laser start to drill holes. Does that mean that there is a quantitative difference in the quality of those optical modules? One of the things we see is that vendors like to charge anywhere from five to 50 times what it costs to buy a standard SFP module. In the ethernet space that I know well, you can go to an online store and pick up a module for 50 bucks, but a vendor wants to charge you 1500 bucks for the same module. Is there a qualitative difference there? Is the laser better made, or has it got some sort of special magic inside?

Andrew Schmitt: Yes. I have a funny story. There was a very large OEM, let’s say, that makes optical modules, and they received a shipment back from an equipment manufacturer that said, “We’re having massive failures. All these modules that you guys have sent us are pure garbage. Take a look at these. Do an analysis, figure out what went wrong.” And so this optical component vendor dutifully took all these components from the big customer, took them apart, and figured out that they were all counterfeit. Clones, copies; they didn’t even make them. So, absolutely, there is a huge difference in quality among all of these component vendors, because so much of the cost and so much of the process isn’t just getting the laser chip.

If you ever look at a picture of optical manufacturing, you’ll see huge amounts of manual touch. There’s a huge amount of manual contact to integrate all of these things together in the little box. It’s not like making semiconductors. There’s a lot of hands-on involvement and checking alignment, and that’s not necessarily a good thing, but it’s the only way we can do it today, and if you don’t do that with the highest degree of quality and standards, you’re going to get failures.

Greg Ferro: Okay. So, does that matter less at the enterprise level? I’ve seen plenty of people buying the cheapest possible optics and saying they’re having perfectly acceptable results, they don’t get many failures, and so much money’s being saved that it doesn’t make sense to buy the more expensive ones. Does that make sense for an enterprise, or is it different? We talked earlier about how, when you’re doing optical networking, you’ve got hundreds of services running over a single fiber, so it makes sense to stay with the high quality stuff. Are those fair statements?

Andrew Schmitt: I think it depends on the customer and their tolerance for problems. If you don’t want to have any problems, then you’re going to go buy the good stuff. If you’re willing to have a failure rate, you can take your chances. There’s another thing going on here, though, that has nothing to do with failure or manufacturing quality, and that is that large companies like Cisco and Juniper will, in essence, require that you buy modules from them. They will say, “Look, if you don’t buy modules from us, then we’re not going to warranty the product.” And they will then sell you the blades at cost or below cost, and you have to buy the modules from them, and in a way they’re providing a service where you can pay as you grow. So, if you only need one or two ports on day zero, you don’t have to buy all 24. You can buy them one by one and plug them in. But what the large cloud guys have figured out is that this model doesn’t work for them at all, because right at the beginning they need to buy everything. So, this variable cost model adds no value for them.

Greg Ferro: I call that the sucker sale. There’s effectively a licensing fee when you have SFP based switches. Every time you buy another SFP module, you’re paying a license fee to use that port.

Andrew Schmitt: That’s right, but if you’re only buying one port, it might actually work out to your benefit. If you’re buying a lot of ports, then you’re right. You’re a sucker.

Greg Ferro: Yeah.

Scott Wilkinson: I’m just glad that Andrew answered that question instead of me, because we do exactly that. We do license per port, and it is true. I’ve actually had these discussions before with customers who are intelligent and sophisticated enough to understand this. They understand that they’re getting a deal on the hardware. They’re buying the razors really cheap, with the stipulation that they’re going to buy the razor blades from us going forward, because if we had to make money on every card that we send out there while you buy the optics from somebody else, the cost of that equipment would have to go up.

Greg Ferro: Right. So, to summarize, there is a difference between the modules, and you need to have some tolerance. You might get a batch of faulty ones, you might get a batch of good ones. It can be hard to tell.

Andrew Schmitt: Yeah, and there’s also an in-between. There are companies out there now who will purchase gray market modules, test them, make sure the quality is good, and then allow you to use them in your Cisco or Juniper or Ciena equipment. So, there are these mid-market guys, but if you go onto eBay and buy something, you have no idea what you’re going to get.

Greg Ferro: Now, there’s also software in those modules. Are there features in vendor approved modules that don’t exist in the lower cost options, in the generics?

Andrew Schmitt: Sometimes, yes.

Scott Wilkinson: It’s not always the software; it’s sometimes what’s built into the inventory part of the module. So, for example, we can go in and tell who manufactured it, where it came from, what its lifetime is expected to be, and sometimes we have access to certain counters in there, bit error rates and those kinds of things, that you might not get out of an aftermarket module.
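The inventory data Scott describes lives in the module’s EEPROM; for SFPs the layout is standardized in SFF-8472 (vendor name, part number, serial number, plus optional digital diagnostics). As a hedged illustration of what “what’s built into the inventory part of the module” looks like in practice, here is a minimal sketch that decodes a few identity fields from a raw EEPROM dump; the offsets follow SFF-8472, but the function name and the sample bytes are made up for this example:

```python
# Hypothetical sketch: decode a few identity fields from an SFP's A0h
# EEPROM page. Offsets per SFF-8472: vendor name at bytes 20-35,
# part number at 40-55, serial number at 68-83 (fixed-width ASCII,
# padded with spaces).

def decode_sfp_identity(eeprom: bytes) -> dict:
    def field(start: int, length: int) -> str:
        # Fields are fixed-width ASCII padded with trailing spaces.
        return eeprom[start:start + length].decode("ascii", "replace").strip()
    return {
        "vendor": field(20, 16),
        "part_number": field(40, 16),
        "serial": field(68, 16),
    }

# Fabricated 96-byte dump purely for illustration; a real dump would come
# from the module itself (e.g. via `ethtool -m <iface>` on Linux).
sample = bytearray(96)
sample[20:36] = b"ACME OPTICS".ljust(16)
sample[40:56] = b"SFP-10G-LR".ljust(16)
sample[68:84] = b"SN12345678".ljust(16)
print(decode_sfp_identity(bytes(sample)))
```

Vendor software can cross-check these fields against its own records, which is how a platform can tell a first-party module from a clone.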

Greg Ferro: Yeah. So, the sort of features we were talking about earlier, like finding where a break is in the fiber by doing reflectometry, calculating the reflections coming back from the break. Would you be able to do that with a generic, or would you need a vendor approved optic to do that?

Scott Wilkinson: Yeah, that actually switches over to a different optic, because it runs at a different wavelength, so it doesn’t interfere with traffic. So, that one you would get. But the abilities Andrew mentioned earlier, for example being able to look at the laser’s current changing over time, or being able to see what the receive power is and whether or not it’s going up or down, you may not get those capabilities.
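The break-location trick discussed here (OTDR-style reflectometry) comes down to simple arithmetic: measure the round-trip time of the reflected pulse and convert it to distance using the fiber’s group index. A back-of-envelope sketch; the 1.468 group index is a typical figure for standard single-mode fiber, not a value from the show:

```python
# Convert an OTDR round-trip reflection time to a distance-to-break
# estimate. Assumes a typical group index for standard single-mode
# fiber (~1.468); real OTDRs let you set this per fiber type.

def distance_to_break_km(round_trip_us: float, group_index: float = 1.468) -> float:
    c_km_per_us = 0.299792458          # speed of light in vacuum, km per microsecond
    v = c_km_per_us / group_index      # group velocity inside the fiber
    return v * round_trip_us / 2.0     # halve it: the pulse travels out and back

# A reflection arriving ~98 microseconds after the launch pulse puts the
# break roughly 10 km down the fiber.
print(round(distance_to_break_km(98.0), 2))
```

The software layer then maps that distance onto the known fiber route, which is how you get the “little red dot” on a map.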

Greg Ferro: Yep, they wouldn’t be in the software. That’s something the vendor might develop separately from the module manufacturer.

Scott Wilkinson: It could be, yeah. And briefly, Andrew mentioned the idea about warranty. Now, we do occasionally have experience with some of these third party devices going into our equipment and generating a lot more heat than they’re supposed to, and it does damage the equipment, which is why warranties are not valid when you use third party optics.

Greg Ferro: That just sounds like self-serving rubbish.

Scott Wilkinson: It does, doesn’t it?

Greg Ferro: It does.

Scott Wilkinson: Because some of the stuff that comes out of these black market channels, like Andrew was talking about, doesn’t really work, and we’ve had some of them put in that just get hot, incredibly hot. The heat handling on these devices is crucial; it’s very, very hard to get all the heat off of them. Data centers are designed specifically to have air flow from a hot aisle to a cold aisle, because cooling these things is almost impossible. And the optics that you plug into a router, or plug into one of our devices, the cards that hold them have huge heat dissipation capabilities built into them, and if you put something in there that generates a lot more heat than it’s supposed to, it’s going to shut down.

Greg Ferro: Or worse, it could actually create enough heat to actually thermally damage the board that it’s on or the connector.

Scott Wilkinson: It could, yeah, now, if you built it right, it’ll shut down first, but yes, it could damage the equipment.

Andrew Schmitt: I think the rule of thumb is if you’re going to buy something illegal, make sure you have a good dealer.

Greg Ferro: I just want to pry into this a little bit more, because this is a real pain point for a lot of people. Is this something that happens a lot, or is this just something that’s one of those apocryphal stories of, you know, I put my switch in the rack, I plug it in and it blew up and then the rack caught on fire. Well, that was a great day!

Andrew Schmitt: I think the dirty little secret is that the people who buy a lot of this stuff have negotiated the price of these things down, alright? This is a negotiating point. So when you get out to the edge, you know, the 5% of all volume, where people are trying to save some money because they’re running medium size businesses, then it comes up from time to time. But like I said, if you go to eBay, you have no idea what you’re going to get, but if you spend a little bit more with a reputable dealer, somebody who …

Greg Ferro: Apparently there are companies online who have been doing this for ten years, who-

Andrew Schmitt: … is going to put their reputation on it, then in that case you’re probably going to be all right, particularly if you’re buying shorter reach optics that are highly standardized, where there’s really not any way to differentiate. They’re commodities, just like DRAM.

Greg Ferro: Yeah! Which is a great segue into talking about commoditization. Over the last ten years, I’ve seen a number of efforts around open optical. This is sort of like an attempt to do what white box ethernet did: the idea that there are standardized components that could be put together in similar ways, and that you could buy an operating system that goes on top of it, so it’ll look and work just like an x86 running Linux. Why do we have to have these proprietary boxes? I guess I’ll start with you, Scott. Do you think open optical is going to be a market transition? Are we actually going to see more open optical enter the market, say, soon, or over time?

Scott Wilkinson: Yeah, this is something I spend a lot of time on, educating internally and externally. Facebook in particular, when they came out with their Voyager project, pushed this idea that you would have a completely open optical system, with transponders built into it from a specific vendor and a Broadcom chip built into it, and that therefore everything would be much less expensive. It really hasn’t taken off.
There are a lot of reasons for that. One is the stuff we talked about earlier; there are a lot of capabilities built in, like being able to find a break, and all those kinds of things that vendors have done. The other reason is that it’s not clear that there are any real cost savings, because we’re essentially building the hardware almost the same way. We’re coming up with different ways to put it together, but there’s not a huge difference. We’re all buying chipsets to do optical, we’re all buying chipsets to do switching. The reason that ours works better is the software we put around it, the expertise we put around it, building it into form factors that make sense for different locations, and the fact that we can innovate a lot faster. If a new chipset comes out that’s faster, for example, the Voyager project has not taken that on board yet, but we’re able to immediately integrate it and come up with something new.

So it was an interesting idea. I think I brought this up earlier: people look at white boxes in switching and say, well, that has been such a great success, the Facebook white box switch has done amazing things for Facebook, why can’t we do that with optical? And as Andrew said earlier, you’ve gotta respect the physical layer. It’s not like you can virtualize a laser. We’ve still got to make the laser work, we’ve still got to get it to go across a specific distance at a specific rate. It’s not like you can take an x86 and make it act like a laser.

So I don’t know that this is going to be a huge industry changer. We hear about it a lot from universities in particular; they all like the idea, they think it’s going to be really cool, but we haven’t seen anybody adopt it on a widespread basis. We haven’t seen it take off very well. We still think that the individual vendors have ways of proving that what they’re doing has value.

Greg Ferro: I’m going to be a little cynical there, and say that people said the same thing about white box ethernet five years ago, and look at what we’ve got today, where fundamentally that model is slowly but steadily becoming the standardized model. Cisco’s disaggregating its software from the hardware so that it’ll run on white box. A lot of people are starting to run white box ethernet as their primary; they still run the proprietary stuff in certain places, but a lot of it is white box. In the most extreme case, of course, Amazon is now moving to all white box, all self-developed white box.

Andrew Schmitt: Yes, but what was the reason why? What was the engineering problem being solved? I think that’s the thing you have to ask with optics. With white box in the switching area, one of the big things that was attacked was the cost.

Greg Ferro: Actually, the cost was important. The capital cost was important, but the thing that really bothered the bigger companies like Amazon, Google and Facebook, and I’ve spoken to people in those organizations, was that vendors can’t develop software at a level of quality that is acceptable to them. And when bugs are found, as they always are, vendors are not able to fix them in a timely fashion. Vendors want to take six months to two years to patch a bug, and these companies say, I need it by tomorrow, and the vendors are just not able to do that. That was the real money that they wanted to save.

Andrew Schmitt: One of the things that we’ve seen in the optical area is that the market has responded and built these new, I call them compact modular systems, and they look like white boxes, except you get the benefits of the equipment vendors’ supply chain and sourcing and all these things, but from a software perspective, they’re very open. I mean, you could go to a company like Cisco, buy one of these systems, and they’ll ship it to you as bare metal.

And they’re not the only ones doing this. Ciena is doing this. I imagine ECI’s doing this. They’re basically saying, look, if you don’t want our software, that’s fine. If you have the expertise to design this and make it all work, go right ahead. But for most companies, including a lot of large legacy telcos, that’s just not the case. So you can buy these systems essentially lobotomized, and put them in place if that’s what you want.

Greg Ferro: And if you’ve got that sort of skill. But again, it comes back to … I can see the other side of this equation. If you’re running a data center interconnect, and you’re betting your entire business on there being 150 gigabits of bandwidth between those two sites, you can’t afford for that to fail. If you don’t have the in-house expertise, or if you’ve got hundreds of services running over the top of that, you need that to be absolutely the best it can be, and you can afford to blow vast sums of money buying overpriced, vendor branded equipment and not go down the white box path.

Andrew Schmitt: Yeah, and look at the magnitude of the problem you’re talking about. Sure, Amazon is moving to all white box switches, but what’s the magnitude of what they spend on those versus what they spend on the optical layer? It’s a much smaller problem.

Greg Ferro: Amazon is now spending more money on data centers than Cisco makes in a year. They’re spending 20 billion in a year. Cisco will take about 12 billion in revenue this year.

Scott Wilkinson: Right. One of the differences is that Amazon sees those optical interconnects as enabling them to make money, whereas the telcos see those optical interconnects as the way they make money. It’s a very different way of looking at it. Here’s an interesting story: we talked to a financial institution one time, to your point about whether or not they’re willing to spend more money, and we were talking about saving some money on an optical interconnect, and the guy stopped us and said, I want you to understand, we don’t save money with optical interconnects, we make money with optical interconnects. We’ll pay whatever it costs to make sure it’s up and running. Not to say that these are more expensive. I think one of Andrew’s points was that going out to some open white box project, as opposed to buying it from one of us, doesn’t necessarily save you money. Where you get value is in the stuff that we put on top of it.

Greg Ferro: Yeah. Value adding, though. I’m always a little cynical when people say, we’ve got a value add. Over the years that’s turned into something a little bit distasteful, and this is why white box is working for the networking people.

Scott Wilkinson: And that’s true. I was with a large university in North America recently, and I gave the same argument, and I got the exact same argument back: well, yeah, but we could figure out how to make this stuff work. And to Andrew’s point, if that’s true, then that’s fine, we’ll sell you just the equipment with no software. But buying it from somebody who’s putting it together based on a blueprint that was put together by Facebook, as opposed to buying it from us, doesn’t necessarily have any value.

Andrew Schmitt: The Facebook Voyager project was interesting, because I talked to people who evaluated that system, including some carriers, and they’ve got it in their labs. I talked with them afterwards, and they weren’t planning on using it, but they said that it taught them something very valuable. It taught them how dependent they were on their vendors to close the loop in their network and make things work. I’m not even talking about small banks or things like that; I’m talking about big carriers that still relied very heavily on the engineering expertise of their supplier to make this stuff work.

Greg Ferro: I guess I’m still … what we did see arrive is companies like Cumulus Networks, who are producing a Linux operating system with networking apps that runs on white box hardware and does all the things that you’re talking about, provided the hardware’s available.

Andrew Schmitt: Cumulus has a software stack for the Voyager, and I spoke with them recently, and they’re trying to figure out who’s going to buy it.

Greg Ferro: Yeah. That’s okay. I remember talking to Cumulus, when they were trying to work out who’s going to buy it for ethernet.

Scott Wilkinson: But it goes back to what we were talking about earlier: yeah, you could probably do that at the lowest common denominator. If you’re going 80 kilometers, with a certain number of wavelengths at a certain speed, where things are plug and play, then it might work, and that’s what the Amazons and the Facebooks are doing. They don’t care about getting the best performance. They want to be able to plug it in, and when it dies, pull it out, throw it away and put another one in. Most people are just not in that situation. And the other problem here, of course, is that dark fiber’s expensive. It’s not like you can just run an extra fiber, like you can in the data center. In the data center, you can just put redundant switches in place, and extra capacity, and if a handful of those switches flame out, it’s no big deal; it doesn’t necessarily have a deep impact. But buying a thousand kilometers of fiber optic cable is a pretty expensive game.

Greg Ferro: So maybe there is an argument to be made, for the time being, that open optical will be a much harder, much longer burn, and maybe it will never come around.

Scott Wilkinson: Yeah, it’s inspired a lot of interesting things in the optical industry. Things like disaggregation, open optical, a lot of other special little terms that are coming out and becoming interesting. The idea that maybe you want to buy a box that just has transponders in it, as opposed to buying one that can do transponders and amplifiers and everything else. So we’re really trying to figure that out right now. The industry is fracturing somewhat, between what the data center guys want, and what the telcos want, and it makes it really exciting for us to have to figure out what to build next year.

Greg Ferro: It strikes me that there’s a whole bunch of standards that optical seems to have missed, in a way, and there have been good reasons for it. Like you said, if you can develop a technology which can get more carriers onto a given optical fiber, then more power to you, because, as I said, a thousand kilometers of dark fiber is a very expensive business!

Andrew Schmitt: You just rewind the clock twenty years; there was no need for standards, right? You just went with AT&T. That was the standard.

Greg Ferro: Well, in America, that was true. That was different in Australia.

Andrew Schmitt: Okay, Germany … and Germany was Deutsche Telekom, right? These companies, in essence, were so big and so powerful that they were the standard, and the whole landscape has changed since then. Now what you’re starting to see is that that’s trickling into the component and equipment supply chain, and what’s really catalyzing it is the big cloud and colo guys, because they’re now building their own networks. They have the wands. They have the robes. They can do these things themselves, and that’s really starting to fracture the way the equipment is being built. So you have big vendors like Ciena and Cisco providing the real high touch, high complexity devices to people like Verizon, and then also stripping out and lobotomizing the equipment to give it to someone like Amazon.

Greg Ferro: Yeah, I do think the cloud … Google’s now saying that it has more bandwidth internally than the internet does, I think, did I read that somewhere? Their own optical backbone has more bandwidth than the internet outside.

Scott Wilkinson: I know Facebook, at one point, showed a graphic that said, I think, 90% of their traffic never goes outside of a Facebook data center.

Greg Ferro: Which is probably a testament to how badly the Facebook app is put together, that’s just an awful lot of traffic not doing much, perhaps.

Scott Wilkinson: Well, think about it: when you upload a picture of cats that you want to share with everybody, it’s got to be downloaded to all of those data centers around the world, even if nobody ever actually looks at it. So all of that stuff is going on constantly, not to mention the mining that they’re doing to sell. There’s just a lot of internal stuff going on, and …

Greg Ferro: I know, I’m just being …

Andrew Schmitt: Oh, no, but you hit on a very important thing. What companies like Microsoft are doing is using optical connectivity between their data centers to, in essence, simplify their computing architecture.

Greg Ferro: Quite often, they’re just leaning heavily on their infrastructure to solve problems, to cover up for bad programming or unoptimized software in their cloud platforms. Like, if you’re running Microsoft SQL Server, we’ve been relying on big CPUs and lots of memory to make it faster. One of the things that cloud companies have been doing is just throwing more and more hardware at the problem, because it’s cheaper than optimizing the software. And increasingly, companies like Facebook, and Amazon, and Google spend increasing amounts of money to optimize this software, to make it run faster, or leaner, or more efficiently, so that they don’t have to lean on the infrastructure so hard. If they could develop new protocols like gRPC, or QUIC, all those protocols that use less bandwidth and are more efficient, they’d use fewer requests and burn …

Andrew Schmitt: But sometimes it’s just cheaper to throw bandwidth at the problem.

Greg Ferro: Always better to throw bandwidth at any networking problem, I’m convinced of that.

Scott Wilkinson: And we absolutely agree with that, because, we’ll sell you the bandwidth.

Greg Ferro: Well, I think we’ve probably taken the discussion just about as far as we can go for today; we’re reaching the maximum time that we’ve got available. Thanks very much to Scott and Andrew for joining us today.
