Transcript from TELoIP presentation at the Networking Field Day 15 event in Silicon Valley on April 6, 2017.
Pat: Okay, we want to deliver on the promise of the Internet. You’ve got to have multiple ISP connections. How do I do that? I don’t want to load balance, because if I go to speedtest.net, I want to get red plus green. So we came up with a method to do that on a per packet basis. We operate per packet, not flows, not load balancing. It’s per packet. As a single user behind any of these, you go to speedtest.net, you’re going to get red plus green plus blue plus whatever is there. In or out. And the per packet did it. That patent showed up in 2008, in our hands.
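Pat’s per-packet distribution (as opposed to per-flow load balancing) can be sketched roughly as a capacity-weighted scheduler in which every packet independently picks a link. This is a hedged illustration only, not the patented algorithm; the link names and capacities are invented:

```python
from itertools import cycle

def build_schedule(links):
    """Expand {link_name: capacity_units} into a weighted round-robin schedule."""
    schedule = []
    for name, weight in sorted(links.items()):
        schedule.extend([name] * weight)
    return schedule

def distribute(packets, links):
    """Per-packet distribution: every packet picks a link on its own,
    so one flow can fill the combined capacity of all connections."""
    sched = cycle(build_schedule(links))
    return [(pkt, next(sched)) for pkt in packets]

# Illustrative links: a 2-unit DSL connection plus a 4-unit cable connection.
assignments = distribute(range(6), {"dsl": 2, "cable": 4})
counts = {}
for _, link in assignments:
    counts[link] = counts.get(link, 0) + 1   # cable carries 4, dsl carries 2
```

A speed test run through such a scheduler sees roughly the sum of the links, the “red plus green plus blue” effect, because no single flow is pinned to one connection.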
Okay, so that gave us speed, right? This is a story of speed, quality, reliability, and multi-tenancy, scalability. Next issue. I got speed. That’s good. That only solves part of the problem. I just put all of my communications eggs in an Internet basket. What happens if my Internet goes down? It was a common question in the early days, because the Internet went down a lot more than it does now. All right, we needed to failover. We needed to failover in a way that was hitless. And to do that, we had to be preemptive. And because this is a bidirectional system, it operates on a per packet basis, we invented, perfected, and deployed multi-directional pathway selection. It’s a preemptive fast failover system. It is possible to sense an outage from the controller’s perspective before the branch actually sees what’s going on. That happens often with these asymmetrical broadband type of connections.
And when you do that, use all of the remaining connections with our messaging system to let the other side know and failover. There’s a patent on that. And that’s what gives us the no dropped VoIP feature. Okay, so I’ve got speed, I’ve got reliability, none of which mean anything if I’m on video, a video conference or a voice call, and my IT department’s downloading or uploading a 10 gigabyte file at that time. Quality is a must, because I’m going to be measured at the end of the day on quality. Customers don’t care that I built this on the Internet. They care that you told me it’s going to be better than my MPLS. So how am I going to do that?
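The controller-side preemptive detection Pat describes can be sketched as follows, assuming (purely for illustration) a per-link last-seen timestamp and a fixed silence threshold; when one path goes quiet, an “avoid” message goes out over every remaining link:

```python
TIMEOUT_S = 0.3  # assumed silence threshold before a path is declared suspect

def detect_and_signal(last_seen, now):
    """Return (dead_links, messages): the controller flags quiet links and
    notifies the branch over each still-healthy link, so the branch can
    stop using a path before it notices the failure itself."""
    dead = [link for link, t in last_seen.items() if now - t > TIMEOUT_S]
    healthy = [link for link in last_seen if link not in dead]
    messages = [(via, ("AVOID", d)) for d in dead for via in healthy]
    return dead, messages

# Illustrative state: DSL went silent; cable and LTE are still passing traffic.
last_seen = {"dsl": 10.00, "cable": 10.28, "lte": 10.29}
dead, msgs = detect_and_signal(last_seen, now=10.35)
```

Because the notification rides every surviving link, the failover message gets through even when the branch has not yet seen the outage on its side of the asymmetric connection.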
Intelligent Packet Distribution Engine. Again, packet. This is an advanced decision tree. It does have some preemptive characteristics in there. But it gave us bidirectional IPQoS inbound, outbound. What do I do with this packet at this… Do I send this packet, at this time, on this link, in that direction, or not? Now you can configure that. You set that policy. You can avoid certain things. I’m going to dig into that.
John: So are you configuring that by traffic type, or ports, something completely different?
Pat: You can go by 5-tuple today, and all the layer 7 stuff is on the way. Multi-PoP, that was next. I talked about that a little bit, you know, that whole at a distance. So we got that. And then the Control Plane, point-to-multi-point over Unicast. Full mesh, that’s point-to-multi-point. I don’t want to build for a thousand site customer, a thousand tunnels out of my controller. I want one interface. We get one interface here, one VxLAN interface, and it has paths to all the other controllers that also house a branch CPE device out of the PoP. And, of course, this all has to be fully managed through the portal.
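The 5-tuple policy matching mentioned above can be sketched like this; the field names and the sample rules are assumptions for illustration, not TELoIP’s configuration schema:

```python
def five_tuple(pkt):
    """Extract the classic 5-tuple from a (hypothetical) packet dict."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
            pkt["src_port"], pkt["dst_port"])

def classify(pkt, policies):
    """Return the action of the first rule whose wildcarded tuple matches."""
    ft = five_tuple(pkt)
    for rule, action in policies:
        if all(r is None or r == f for r, f in zip(rule, ft)):
            return action
    return "default"

policies = [
    # (src_ip, dst_ip, proto, src_port, dst_port) with None as wildcard
    ((None, None, "udp", None, 5060), "voice"),
    ((None, None, "tcp", None, 443), "web"),
]
pkt = {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.9",
       "proto": "udp", "src_port": 40000, "dst_port": 5060}
```

Layer 7 awareness, per the talk, would layer deeper inspection on top of this same first-match lookup.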
Our portal is more than just orchestration. In fact, we separated the management plane. Some people call a controller their orchestrator. We get that a lot. Ours is not. Our controller is an actual device that sits on the network, on the perimeter of the core, and it talks to a CPE device at the Edge, at the customer branch. But we orchestrate it through the portal. It can be orchestrated.
The other thing is QoE. You’ll hear me talk about QoE. So this is a layer 3 world and, you know, hosted voice over IP. How do I know that the quality’s good? Well, we took that MOS score strategy and we said, “You know, let’s go with that, because people understand that. It’s familiar. It deals with latency, jitter and loss.” And let’s take it to another level, for our purposes. Since I’ve got per packet transmissions here, I can measure each packet in and out on every connection. Maybe I can get a QoE score live in real time on each connection. So we had to tune that to the network side of it. And so you get that. On those connections, it’ll tell you what its QoE score is, live, all the time.
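A per-connection QoE score of the kind Pat describes is commonly derived from latency, jitter, and loss via a simplified E-model: compute an R-factor, then map it onto the familiar 1-to-5 MOS scale. This is a standard textbook approximation, not TELoIP’s actual scoring formula:

```python
def r_factor(latency_ms, jitter_ms, loss_pct):
    """Simplified E-model R-factor from measured link conditions."""
    # Effective latency: jitter is commonly weighted double, plus overhead.
    eff = latency_ms + 2.0 * jitter_ms + 10.0
    r = 93.2 - (eff / 40.0 if eff < 160 else (eff - 120.0) / 10.0)
    return r - 2.5 * loss_pct

def mos(latency_ms, jitter_ms, loss_pct):
    """Map the R-factor onto the 1..5 MOS scale."""
    r = max(0.0, min(100.0, r_factor(latency_ms, jitter_ms, loss_pct)))
    return 1.0 + 0.035 * r + 7.0e-6 * r * (r - 60.0) * (100.0 - r)

good = mos(20, 5, 0.0)    # healthy broadband link: roughly 4.4
bad = mos(400, 20, 5.0)   # saturated or bloated link: well under 3
```

Fed with per-packet measurements in each direction, a score like this can be recomputed continuously per link, which is the “live, all the time” behavior described.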
Christopher: So your “five nines,” as we’ve seen on other slides. But are you all fives from an MOS perspective or…?
Pat: Nobody can be five. Maybe 4.11, yes. Yes, we can be. Or an average of being in the green within that SLA, right? I’ll bring up a chart. Yeah, that could be part. Well, we give that data, our partners have that data to build into their SLA, if they want, versus just connectivity.
Pat: Not only that, but application paths. Those are my underlays, right? I want to… Well, what’s my ERP server looking like right now? And what’s head office looking like from the perspective of the branch? So we do application path checking right out of the box. I don’t know. IP SLA kind of does something similar. But this gives you a QoE score. And, all those paths I talked about for point-to-multi-point over Unicast, we check those too. Full QoE scores for the whole control plane. Next slide.
Okay. Any questions before I move on? Now I’m going to take each one of those individually and talk about them.
Justin: No, let’s break it down.
Pat: Let’s do it.
Justin: Break it down.
Pat: Packets. Start with the packet.
Justin: Well, this is the hard part, right? This is your patented stuff that no one else does. So this is the stuff we all need to understand.
Phil: I have one quick question not technically related, if you’re comfortable answering it. Who would you consider your competitors, considering that many SD-WAN vendors are not doing this with the backhaul that you have and all that?
Pat: I’m pausing for a reason, and I’ll tell you why.
Pat: If you took product… Just like I said, what is the… You know, what’s at the end…
Phil: You’re going to say we have no competitors.
Pat: No, we have competitors.
Pat: It’s a matter of perspective. What’s an SDN? Everybody’s going to have a different answer.
Pat: Same with what’s a competitor. Here’s why. If you pulled up their product sheet and you pulled up our product sheet, “Oh, I’ve got high availability and I have diverse carrier.” Look the same, right? Every one of us looks the same. In fact, we all say the same thing.
Justin: Okay, so let’s look at a differentiator. What’s your core customer base? And don’t say provider. I mean like who makes up the…
Pat: No, I’m going to get into that. I have retail customers. I’ve got manufacturing customers. I’ve got financial customers. It ranges, this technology scales. And here’s the thing. Although we all say the same thing, we mean what we say, but we mean this. We’re different. We’re different. Our technology does separate us. And we spend a lot more time with our partner in the sales process socializing that aspect of TELoIP because we have to… Listen guys, we’re not raising 30 million a quarter, okay? We’re a hard core technology company that’s been organically growing for 13 years. So we spend our time on the tech, and this is the tech.
So it’s a per packet aggregation technology. It’s called the Autonomous Network Aggregation system. You can read the patent; it’s on our website. It’s like bonding. I hate calling it that, because you can’t compare the two. This is an advanced form. It’s as if you took load balancing, bonding, and whatever other failover mechanism, put them all together, and did it on a per packet basis. That’s what this is. We had another challenge in the early days because… I don’t know, I’m not even going to go to net neutrality yet. But look, if you wanted it to ride on someone else’s network and you wanted to do it without their permission, it had to be transparent. You had to just look like another packet.
Justin: Perhaps you were going to hit this later, but specific to what you’re doing, the per packet aggregation, how do you deal with things like jitter that occur with voice over IP when you’re using diverse, different connections and trying to bond them in some way? How do you deal with that?
Pat: By being able to measure it at the packet level, for every packet, in every direction. And yes, I’m going to get to it.
Nicolas: But what he’s trying to say is, if you have some voice over IP call, right, are you still doing that or are you in a…
Phil: Pinning the call to one link or something.
Phil: Are you saying this call needs to go down this link and stay on this link? Or is it just…
Nicolas: Rather that bonding…
Pat: What if I told you that that is a great strategy and we do support that? In our IPD, the next slide, we have something called application protocol distribution, where we can pin to a particular link because of latency. Not only that, I can say don’t transmit over any link that isn’t this good. Right?
Justin: So you could tell one particular application that the requirements for that application are this level of performance, and it’ll pick the link to use?
Pat: To a certain extent, yes, you can.
Pat: And in this example here for voice, we could say, “You know that T1 I got? Way better, latency wise and all of that, than this other DSL service,” send all voice packets here, nothing else. That was actually a great strategy in 2006. It’s not a great strategy now, for us anyway, because that will go down and can go down and can get saturated because it is Internet, and all Internet is the same. We find a better strategy is being able to deal with all of the links but understanding the conditions that affect your application and paving the path for them bidirectionally, on any of the links at anytime.
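The “deal with all of the links, but understand the conditions” strategy can be sketched as a per-application quality floor: any link currently meeting the floor is eligible, so traffic is never statically married to one circuit. The thresholds and link names here are invented:

```python
# Assumed quality floor for voice: illustrative numbers, not product defaults.
VOICE_FLOOR = {"max_latency_ms": 150, "max_loss_pct": 1.0}

def eligible_links(stats, floor):
    """Return every link whose live measurements meet the application floor."""
    return sorted(link for link, s in stats.items()
                  if s["latency_ms"] <= floor["max_latency_ms"]
                  and s["loss_pct"] <= floor["max_loss_pct"])

# Hypothetical live per-link measurements.
stats = {
    "t1":  {"latency_ms": 20,  "loss_pct": 0.0},
    "dsl": {"latency_ms": 60,  "loss_pct": 0.5},
    "lte": {"latency_ms": 180, "loss_pct": 2.0},   # currently too poor for voice
}
eligible = eligible_links(stats, VOICE_FLOOR)
```

Unlike a static pin to the T1, this survives any one link degrading: the moment a circuit falls below the floor, it simply drops out of the eligible set.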
Justin: I’m sorry, can you explain what you meant by “all Internet is the same”? In what respect?
Pat: Okay. Regardless of local loop, in the end of the day, it’s pretty much the same product.
Justin: Yeah, but a lot of the variability exists in that local loop, whether it’s DSL or cable or wireless or whatever it is, photons. That’s where the local loop variation is. So I mean, that…you guys manage that within your system?
Pat: I agree, and it’s because of that that, you know, if your strategy is only one of pinning – this is just my opinion, right – to that particular link, it can later come back to bite you when that link does experience problems, as it probably will being…
Justin: Well, I’m assuming you would failover to another link in that scenario but…
Keith: So I guess that’s the question. Is pinning, the act of pinning, is that a static one-time thing saying this application always uses the T1? Or is that a thing that can be dynamically configured either at that MSP level or at your level?
Pat: At the user level even, if it had to be. And when you do pin it to a particular link, as, again, some people do run this way, you don’t have to unpin it in order for that call to stay live across the other connections. All right? We have many customers with a broadband connection and a 4G LTE. As great as 4G LTE sounds, I wouldn’t run it as a mainline today for this type of data or transactions, because one minute it’s great, the next minute it’s not. And there’s also cost and so on, depending on who you’re with. So you only want to transmit on it in a worst case scenario.
However, if I do move to it and I’m going from a broadband circuit to the 4G LTE, that call better not drop. I better not lose the word, my SQL service sessions better not drop, all my QoS better be morphed over to it, and that’s what we do. And in that case, we’re pinning everything on the broadband connection, if it’s just the broadband, whatever.
Drew: Can I ask you a question? I’m looking at your map right now online. I’ll tweet out the link to it.
Drew: But I’m looking at the map, right? So I live down in South Texas, and you guys have a PoP in Dallas and a PoP in El Paso. All of the rest of the state feeds up through Dallas, right? In my location, where I’m at, we have two carriers that come in. And whether it’s AT&T or Time Warner or T-Mobile or whoever it is, they all only go out one of two directions. There’s a lot of times when something will happen on that fiber and our entire area is wiped out.
Drew: And I understand that that’s a little bit of a unique situation. I’m just playing devil’s advocate. I’m just trying to find the flaws, right? I’m looking at this, and if all of those connections are going back through your specific…through your PoPs that are there, this is kind of a… What’s the word I’m looking for? Not disjointed, but distributed model that you guys have. Is there a plan that you guys have not just to open up more PoPs but to maybe use different customer sites or allow us to setup virtualized environments or virtual hosts within our local PoPs so that other people could connect to it so it’s more distributed? Because as great as this sounds, I don’t think it would work for someone like me because we’re fed by so few. We’ve got two fiber providers that come in, or two fiber routes that come into town. If both of those are down, we’re down. I mean, there’s no…that’s…
Pat: Yes, there is a plan. We have a partner peering program where we allow the partners to peer with us and, we throw a wire over the fence between us.
Pat: And you can still use our infrastructure. That means I’m saving you from purchasing the bare metal, I’m saving you from all of that. You’re just riding on the system, but you…we addressed it with that cross connect.
Drew: Okay. So if I was in Austin, I could…I would have to get…
Pat: Well, Austin, we’re good. There’s over 835 different cities in operation today in the U.S.
Pat: And El Paso is one of them. We’ve got customers in that city.
Drew: El Paso is 14 hours away from me. It’s like hundreds of miles. But down where we are, if we’re in our area, it’s not… Just so I’m clear. It’s not about just whatever connection I’m getting to my customer. Let’s say I get them a local run through Spectrum and then I get them a backup with Verizon. That’s not enough to satisfy my customers, and I would still be responsible for getting from wherever their data center is back to you guys in case it went down. Is that right? Or I mean, do you guys…
Pat: Yeah, you’ve got to get to our point of entry, right? However you get to the point of entry, that’s the least common denominator in this solution. That CPE, at the branch, over whatever it is, has to get to the point of entry. And if there’s contention in a particular region and whatnot, we’re open to building out more PoPs or working with partners to get them plugged into our infrastructure, to serve that region better than it would normally be.
Man: What type of compression algorithms do you use for accelerating compressible traffic?
Pat: I knew I was going to get asked that question. LZF.
Man: There you go.
Pat: Best I know of, screaming fast, on the fly, okay.
Man: Just because I knew it would come up at some point.
Pat: It did a better job in machine language than anything else, and without interrupting a network session.
Pat: Like, for example, we’ve seen customers with three…let’s say three broadband connections. Not high, high speed, but let’s say a total of 60 megabits per second. When you go to speed test on it and you’re pulling the 60, the red and the green and the blue will combine; you turn on the compression and it’s, you know, 210.
If it’s compressible, we’ll accelerate it. At the moment, we believe 30% of the Internet is still compressible. But on the WAN, that number goes up. And it’s not just the speed that one gets, it’s also the savings that I got. For example, if I can compress 30% of my traffic 10 times, I have a lot more room for other latency sensitive applications that this is no longer affecting. So it’s an overall strategy.
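The savings arithmetic is worth making explicit. Under the talk’s example figures (30% of traffic compressible at 10x), a sketch of the effective speedup and the headroom freed for latency-sensitive traffic:

```python
def effective_speedup(compressible_frac, ratio):
    """Link-level speedup if `compressible_frac` of bytes shrink by `ratio`."""
    wire_bytes = (1.0 - compressible_frac) + compressible_frac / ratio
    return 1.0 / wire_bytes

def freed_headroom_mbps(link_mbps, compressible_frac, ratio):
    """Capacity freed on the wire for other traffic, in Mbps."""
    return link_mbps * (compressible_frac - compressible_frac / ratio)

speedup = effective_speedup(0.3, 10)        # about 1.37x on the mixed traffic
freed = freed_headroom_mbps(60, 0.3, 10)    # 16.2 Mbps freed on a 60 Mbps link
```

Higher multiples, like the 60-to-210 speed-test result quoted above, imply a traffic mix far more compressible than 30%; the point stands either way that the freed headroom is what protects latency-sensitive applications.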
Nicolas: Some of the manufacturers on the market are duplicating the traffic across links, what is your position for that?
Pat: We don’t do packet duplication. We do packet distribution.
Pat: Intelligent packet distribution. My take on that is real simple.
Pat: I see that act as a form of negative aggregation.
Nicolas: Yeah, but I’m not saying it’s a good or bad idea just…
Pat: No, that’s how I feel.
Nicolas: Yeah, yeah, yeah.
Christopher: The first bullet on your next slide though.
Pat: Yeah. You’re already on my next slide?
Christopher: I’m always on the next slide.
Drew: All right, we’re moving ahead.
Pat: Okay, let’s go to the next…are we good to go to the next slide? Okay, let’s do that. Time check, Kevin, how are we doing?
Christopher: So, control signals, you technically duplicate control signals to maintain the control plane model of it and you can do data efficiency at that point.
Pat: Bingo, on every packet.
Pat: It’s bidirectional. I know, you lined me up for that. So, yes. But here’s the beauty of it. It’s preemptive, right? So those other two that are working, they’re going to get our message packets.
Christopher: I love it. I’m a huge fan of that.
Pat: And say “Hey, switch this guy over. I know you think it’s good, but let’s go over… don’t use this guy right now.” You beat me to it. So there’s the real time messaging. So we have three phases – it’s actually four because the last one has two – link down, brown out, restore. You really don’t want to chase your tail when you’re doing this kind of activity on the fly on the per packet basis, so you’ve got to have flap-check. You’ve got to make sure that for how long before I deem it’s operational again, because it can go down, it can go up, you know… it’s flapping. And every single packet has a QoE score as well. Next slide.
Kevin: Is your flap-check, just like a normal SLA kind of thing where you’re waiting for over a duration of how many seconds, x amount of packets are successful…
Pat: Yeah, it’s user configurable too.
Pat: You can say time or packets or…
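The flap-check Pat and Kevin discuss amounts to a hold-down timer: after a down link recovers, it isn’t trusted until it has stayed clean for a configurable window, so the per-packet engine doesn’t chase a flapping circuit. A hedged sketch with invented state names and defaults:

```python
class FlapCheck:
    """DOWN -> RESTORING -> UP, with any relapse resetting the clock."""

    def __init__(self, hold_down_s=5.0):
        self.hold_down_s = hold_down_s    # user-configurable, per the Q&A
        self.state = "UP"
        self.restore_started = None

    def report(self, link_ok, now):
        """Feed one health observation; return the link's current state."""
        if not link_ok:
            self.state = "DOWN"
            self.restore_started = None
        elif self.state == "DOWN":
            self.state = "RESTORING"
            self.restore_started = now
        elif (self.state == "RESTORING"
              and now - self.restore_started >= self.hold_down_s):
            self.state = "UP"             # survived the hold-down window
        return self.state
```

A packet-count variant would tick the same state machine on successful packets instead of seconds, matching the “time or packets” answer above.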
Pat: So, let’s just say everybody else, or others, they could go in here and they can start class-based queuing, priority queuing, whatever, on the overlay. But that overlay is made up of a big connection, a midsize connection, and a tiny connection.
Perhaps that strategy would work fine if the tiny connection went down. You can still survive. You’re within range. What happens when the biggest connection of the bunch goes down? You’ve got a problem. I got a problem in two ways. First of all, that was 10 times the speed this way as it was that way. That’s one. Bidirectional. And it’s half of my traffic. My understanding of conventional QoS mechanisms means you have to create an artificial bottleneck in order to start making certain things better. Because if there’s no bottleneck, there’s no need in…
So this is sitting there thinking I’m 50% bigger than I am, from a traffic perspective. That’s okay, you can go ahead. Well, the priority traffic, it’s not paying attention to it right now. You multiply that across multiple links, you add bidirectional to it, it’s a big problem. So, in my opinion, that’s why conventional quality of service mechanisms don’t work.
So here’s what we did. We do this on every underlay. The intelligent packet distribution engine. First, we had to get out of the way, avoiding what we call the poor quality bandwidth region. That’s bufferbloat, right? It’s a bad thing. It still is, right? Because I can get a 10 meg service, I can get a 40 meg service. It’s the same modem, set up with the same buffer size. That’s a problem. You take that and multiply it across the entire network path, it’s rampant throughout the Internet, and they call it bufferbloat. And we can sit here and build algorithms end to end, or we can sense it and avoid it. This patent senses it and avoids it, bidirectionally, completely.
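“Sense it and avoid it” can be sketched as a rate controller that treats any rise of RTT above the link’s baseline as queueing delay and backs off before the oversized modem buffer fills. The constants are illustrative, not the patented mechanism:

```python
class BloatAvoider:
    """Keep a link's send rate below the point where its buffer inflates."""

    def __init__(self, rate_mbps, bloat_threshold_ms=25.0):
        self.rate = rate_mbps
        self.base_rtt = None              # lowest RTT seen = propagation delay
        self.threshold = bloat_threshold_ms

    def on_rtt_sample(self, rtt_ms):
        """Update the rate from one RTT measurement; return the new rate."""
        if self.base_rtt is None or rtt_ms < self.base_rtt:
            self.base_rtt = rtt_ms
        queueing = rtt_ms - self.base_rtt  # delay attributable to buffering
        if queueing > self.threshold:
            self.rate *= 0.8               # multiplicative back-off out of the bloat
        else:
            self.rate += 0.5               # gentle additive probe upward
        return self.rate

ctl = BloatAvoider(rate_mbps=40.0)
r1 = ctl.on_rtt_sample(20)    # baseline established, no bloat: probe up
r2 = ctl.on_rtt_sample(22)    # 2 ms of queueing, still fine
r3 = ctl.on_rtt_sample(120)   # 100 ms of queueing delay: back off hard
```

Run per link and per direction, a controller like this is what lets the engine avoid the “poor quality bandwidth region” instead of filling it.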
Why would I even attempt to send traffic at 500 milliseconds when I’ve got a customer who’s mixing voice and data? Let me just bring it down a little bit and avoid it. But what if I see a voice packet? It takes just one packet to recognize them, and then we carve out bandwidth for him, priority bandwidth. In the early days, it was static. So if we had a voice and video customer, you can imagine, right? And now it’s just automatic. It’s like the flap-check. You sense, and then how long do I leave it there in case it was just a call or something.
Now, you apply this method of avoiding poor quality bandwidth regions, or bufferbloat scenarios, or saturation, prioritizing critical traffic. But that voice packet, here’s how it time travels, right? Because those are running at 15 milliseconds. The other guys are running at 30 milliseconds, and 500 milliseconds up at the top. We learned our lesson from bufferbloat. We avoid our own buffer for that traffic. You have to. Otherwise, you’re just proliferating the problem. You do that on every link, bidirectionally, and you might have an intelligent packet distribution engine. That’s how our QoS works at the Edge. And that could be Internet inbound, it could be SD-WAN inbound, outbound, whatever, right?
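The latency ladder described here, voice transmitted ahead of everything and never parked in the device’s own deep buffer, behaves like strict-priority dequeueing. A minimal sketch with two invented traffic classes:

```python
from collections import deque

class EdgeScheduler:
    """Strict priority: voice always transmits before any bulk packet,
    so voice never waits behind the device's own queue of bulk traffic."""

    def __init__(self):
        self.voice = deque()
        self.bulk = deque()

    def enqueue(self, pkt):
        (self.voice if pkt["class"] == "voice" else self.bulk).append(pkt)

    def dequeue(self):
        if self.voice:
            return self.voice.popleft()
        if self.bulk:
            return self.bulk.popleft()
        return None
```

Applied on every link in both directions, and combined with the bloat avoidance above, this is the shape of QoS at the Edge that the talk describes.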
Christopher: Now, does the solution natively have a list of, “These are traditionally prioritized traffic” so we don’t have to go and supply that? Maybe tunable but to say, “I don’t have to go and say voice is important,” it says, “Voice is important. We’re going to prioritize that by default.”
Pat: We do it by partner because each one has their own priorities.
Pat: We build them a template anywhere they go.
Phil: But that’s kind of a set it…I say, “set it and forget it.” But voice, you can put that out for default implementation. But then after the fact, you can go onto your portal or contact your MSP and make those adjustments because you’re running a new application.
Pat: That is correct.
Pat: In the core, in the Control Plane – I’m gonna get to that in the next slide – it’s a little bit less complicated. We launched priority queuing right out of the gate. Now you’re in the business of tagging and classifying packets. But that’s core; that’s better in the core. This stuff on the Edge, this was the big battleground for us. This is where all the problems begin and end. And, we used to say, “Give us your worst. We’ll make it better than your best,” and we ended up in, like, three sites out of a hundred. And they were great and then you just forget about them. With SD-WAN, we’re at 100% of the sites now, with our partners. So, it’s a good evolution for us. Next slide, please.
Okay, that was the Edge, and really, to me, that’s where it all happens. However, with SD-WAN, you do need your control plane. You need to go have a core strategy. Well, I talked about the VxLAN innovation already a little bit. We made sure that… So, with VxLAN, you have VTEP gateways, right? So we made sure that our controllers can connect to a Cisco, a Juniper, a whatever VTEP gateway, VMware, and then just use us as a part of the strategy, part of the SD-WAN. It’s interoperable.
Phil: Are your partners typically selling to very large enterprise or to like… When I say SMB, I know everybody defines it differently, but what would you say then? The reason I ask is because a managed WAN solution in that context is like, “No, I don’t want to connect. Just take everything, and you’re my gateway.” You know what I mean?
Pat: If I was to pick a sweet spot, I’d say, on the SMB side, around the 100-site mark… Like, 5 sites, 100, 1,000, is the same, but those guys seem to be turning over a lot quicker. I mean, Kevin earlier, he mentioned it’s not about displacing MPLS. But for a hundred sites or under, it’s all about displacing MPLS.
Phil: Yeah. But from a service delivery standpoint, it is different. Because I know that, the engineering staff with a 400-person law firm with seven locations, or…you know what I mean?
Phil: That’s going to be different because they had like three on staff people that are running around like crazy, and “No, I just want a…you’re my gateway and you handle it from there.” Embedded firewall…whatever. And that kind of thing. Do you see…
Pat: Yeah. We give them the visualizations. You’re going to see a portal demo, if I haven’t taken up all of the time yet. We’ll show you how they can visualize their network and address that 90% of the time spent finding the problem issue. So yes, they get that.
Phil: So I can just have my core switches. And, again, in an SMB, maybe it’s just a stack of 3850s, or some Dell…whatever Dell makes. And then just plug them in, you’re my gateway.
Pat: Yeah, that’s transparent. We can 802.1Q trunk out of there to your switches if you want and…
Phil: Because I don’t… There are a lot of customers where I work, where they don’t want to advertise any networks to you. You know, you’re our default. But then on the other hand, you’re saying that we can…I can peer with you, whether it’s an IGP like OSPF, and advertise what networks, and then you’ll take it from there.
Phil: That’s the other…okay.
Pat: In your customer-protected route domain, for sure.
Phil: Yeah, exactly. Okay.
Kevin: How does that… Is there a difference in product between SMB and more of an enterprise level? Or is it the same across the board for any users?
Pat: It’s the same. In fact, the code that runs on the CPEs, same code. It’s universal. We’ve made one version for the…
Kevin: There’s no like tiering of it type thing?
Pat: There’s tiering of the monthly recurring license fees.
Pat: You get it like retail. You get in at that 10 meg level. We made it so efficient for them, retail loves us. There’s a lot of large retailers running this through our partners.
Pat: So they come in at this level, they get the QoS, QoE, they get no dropped calls, all of that. And then you have the next level up, license fee goes up. Next level up, license fee goes up. So it’s a scaling model.
Justin: Would you say most of your partners have their own network today and they’re using your product to enhance their offering, or…?
Pat: No, I wouldn’t.
Justin: So they’re firing…
Pat: The guys with their own networks, we are talking to, and we’re going to continue to talk to them, right? Because we have two value propositions for them. One, they can dip their toe in the water with this right away. No CAPEX, immediately in business. And they can take these sites and bridge them in to this other MPLS infrastructure that they’ve got, still got 18 months left on that. They can, over time, start to integrate our technologies into their network.
Justin: Literally re-sellers who are reselling somebody else’s network are going to…
Pat: This is great for resellers… some of these partners are a billion and a half in sales a year… [Crosstalk]
Drew: Do you guys have your own hardware?
Pat: We’re OEM, you know, first, but we don’t do anything to the hardware. One piece of our hardware that we’ve been using for many, many, many years, and you might see some of our competitors, it looks like the same thing but it’s got a different label on it.
Keith: There’s a theme for this Tech Field Day, where we say, “Raspberry Pi.”
Keith: No, but the…
Drew: Will it work on a Raspberry Pi? [Crosstalk]
Pat: Oh, yeah.
Drew: Well, because I’m a Verizon partner. I’m a VPP. So, I could sell Verizon anywhere in the States, and I use anyone from Zero Wireless, Cradlepoint, and all those other guys. And so, what I’m thinking about is, what do you guys have that is a mobile friendly piece of hardware? Is there something except that…
Pat: Ethernet hand-offs, okay, that’s…
Drew: So I would still have to have that hardware.
Pat: With the Cradlepoint, we see Cradlepoint in front of us quite a bit.
Pat: And all we’re really using them for is Layer 3 transport. We don’t care if they’re up, down, whatever. We manage all of that.
Pat: And so…
Drew: But you guys don’t have a hardware that has a 4G chip set or a…?
Pat: It doesn’t mean we can’t. Again, source it and port over to… If the demand is big enough, we’ll look at it from a business opportunity perspective. But right now, we haven’t…
Drew: Hand you guys Ethernet and call it a day?
Pat: So we don’t require a multicast core. That was the big step here. I talked about that. It’s for NBMA, right? And unlike DMVPN, we don’t need NHRP or any of that stuff running. When a CPE connects, it’s fully meshed.
Nicolas: It is fully meshed.
Pat: As soon as the CPE connects.
Nicolas: Okay. What if I want to build my own system topology, like regional hubs? Can I do that?
Pat: You can now start policy-based routing, you know, managing your own customer…your own IGP within the core, to do that, if that would help you.
Nicolas: I can manage my own adjacencies. Is that what you’re saying?
Pat: Get me to the network diagram.
Kevin: Oh, back to the…
Pat: That one. Okay, let me give you an example. So, you don’t want to go to our Dallas PoP from the proximal Dallas site. You want to go to where?
Nicolas: I want Dallas to be my hub, for example.
Nicolas: From Los Angeles to New York.
Pat: So you…
Man: You want from Los Angeles to Dallas to get to New York.
Christopher: Instead of Los Angeles and New York. You want to control your own paths?
Nicolas: Yeah, yeah. I want to control my own path.
Nicolas: So I want Dallas to be my hub, and if…
Pat: Oh, I see. So Dallas is up there the HQ, like that?
Nicolas: Yeah, yeah, sort of, yeah.
Pat: We would…so, with policy-based routing, you can trombone all the internet traffic there, whatever traffic you want, or application. Or anything you can catch with a 5-tuple, you can make it go there and make it be your hub, no question.
Pat: And your centralized unified threat management system can now intercept all of that.
Nicolas: Can I do that per application? Is that what you’re saying?
Pat: Five-tuple today.
Pat: So, yeah, you can catch an app through there. You know your network, right? You know where your servers are. You know that that’s voice. You know the other one’s ERP.
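The hub tromboning discussed above can be sketched as policy-based routing on the 5-tuple: the first matching rule overrides the full-mesh next hop and steers the flow through a chosen PoP. The rule contents and names are invented for illustration:

```python
def next_hop(flow, pbr_rules, mesh_default):
    """First matching PBR rule wins; otherwise use the full-mesh path."""
    for match, hub in pbr_rules:
        if all(v is None or flow.get(k) == v for k, v in match.items()):
            return hub
    return mesh_default

rules = [
    # Trombone all web traffic through the Dallas hub so a centralized
    # unified threat management system there can intercept it.
    ({"proto": "tcp", "dst_port": 80}, "dallas-pop"),
    ({"proto": "tcp", "dst_port": 443}, "dallas-pop"),
]
flow = {"src_ip": "10.1.0.9", "dst_ip": "198.51.100.7",
        "proto": "tcp", "src_port": 51000, "dst_port": 443}
```

Anything not caught by a rule, voice in this sketch, keeps the direct full-mesh path, which is the “you know your network” division of labor Pat describes.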
Phil: Before we move on to the next thing, do you have your own OS that you’ve built, or do you build some…
Phil: Okay. So it’s… [Crosstalk]
Pat: That’s where we started life.
Pat: Yup, that’s where we started. It was called the Convergence Gateway Operating System.
Phil: Oh, you said that before, right. Okay.
Pat: So, everything we code is in kernel. It’s not user-land development.
Justin: So you wrote the kernel yourself?
Pat: No, we didn’t write the kernel ourselves. We’re not Linux. I’ll give you that much. That’s it.
Justin: So you’re not like ZebOS based or something like that?
Pat: No, no, no. Where they all came from, we came from. And so, we…
Tony: They’re Linux based.
Pat: Okay, we didn’t come from…well, yeah. Let me say yeah to that.
Tony: Okay. Oh, yeah, you…
Pat: Depending on who you’re talking to, right?
Pat: So we have our base OS, and that’s like our toolset – it always has been – and we just build on that in-kernel. That’s how we get our power, our performance, our flexibility, and we launch these new services over the years. Okay?
Okay. So, let’s put it all together now, end to end. We’ve got…
Nicolas: Just one quick question. You said you do not rely on NHRP.
Nicolas: And we know that NHRP is the foundation for DMVPN, which is a very scalable protocol. How big can you scale, and what is your biggest customer…
Pat: So this thing can scale up to thousands of sites, and what we did is we scale each individual customer control plane in such… Because once you go point-to-multi-point over Unicast, it’s no longer multicast.
Pat: There’s some duplication that might have to take place, right? So we made sure in the architecture and design of that new code, that it can handle that.
Christopher: And if you arbitrarily require multicast?
Pat: Well, that’s still the argument, right? I know where you’re getting at. That is a layer 2 core. That’s layer 2 going through that core. It’s just not layer 2 at the branch yet.
Christopher: Not me who wants you to require multicast, it’s…you know?
Christopher: Arbitrary customer application requirement.
Pat: Today, we would transport that through the core, no problem, in their control plane. But we’d have to, unicast it out to their branches from there. There are ways to do that.
Justin: How about IPv6?
Christopher: Yeah, good question.
Pat: The Edge has IPv6 capabilities, just not exposed to the world yet.
Justin: So you don’t do IPv6 today?
Pat: Not yet. We got our blocks and all of that many years ago, and I just haven’t found the need yet.
Justin: Yeah. Most of my clients tend to wait right up until the last minute for IPv6, when they’re rushing to get it done, so.
Pat: Yeah. We’re doing the same.
Tony: We’re out of IPs.
Justin: Yeah, keeping in mind that there’s three times the adoption of IPv6 in the United States as there is in Canada, right? Most people in Canada don’t even know what IPv6 is. [Crosstalk]
Pat: Right, but I don’t want you to look at it as just Canada.
Justin: Of course.
Pat: Those nine PoPs I told you about, only two were in Canada. It’s all U.S.
Brandon: Because we have to convert V6 to metric.
Phil: Is there…do you have any kind of presence outside the U.S., like Canada?
Pat: No, we’re…I think 2017 to 2018 is our year of expansion.
Phil: But you have a roadmap where you’re going to be global, right? Okay. Because even SMBs with 12 sites, London, New York, Singapore, it might not be a huge company, 2,000 people, but that’s still SMB enough where you don’t have the staff to set up a… dynamic, expanded DMVPN network.
Pat: I agree. Like I said, organically grown, right?
Phil: Right, right.
Pat: But we’re going there now. The demand is there.
Pat: We’ve just been waiting for that to show right up at our front doorstep.
Brandon: So of the customers that you have right now, none of them have any presence outside the U.S. or…
Pat: No, they deal with…they’ll have a hybrid network to deal with that.
Brandon: That’s what my next question was. If they are, how do they deal with it?
Pat: This thing interoperates with existing networks, right? So let’s say I took a bunch of MPLS sites, for whatever reason, cost, speed, performance, whatever, and I wanted to…I have this existing network. It could be global, it could be regional, whatever it is. There are ways to bridge the two together. That’s just network engineering, and we made it real easy. Okay, next.
I’m going to move along here. So this puts the whole thing together: priority queuing in the core, in every direction, out of each controller. So you’re in the business of tagging and classifying traffic at this point in time. You know that exists. By the way, each of our devices can tag, rewrite, classify, you name it, each of the packets. It’s all built into the firewall that’s on prem at the branch. And you practice bidirectional QoE on the Edge. And, this was a vision statement many, many years ago, and, you know, it’s a reality for us today. If you QoS the Edge, you can effectively QoS the entire network.