Episode 2: SD-WAN Quality of Service – February 28, 2017
Kevin: Hello everyone, this is Kevin Suitor, Chief Marketing Officer of TELoIP. I would like to thank you for joining us for today’s Chat with Pat SD-WAN session. Today’s topic is Delivering Quality of Service and User Quality of Experience. We’ve scheduled 45 minutes and will address several questions on this topic, including some that were submitted in advance. We also encourage you to use the chat box to ask additional questions as they come to mind, and I will try to weave them in during today’s session. So I’m here with TELoIP Founder and CTO, Pat Saavedra, and I’ve got a diagram in the screen share that shows the VINO SD-WAN architecture.
(note: As you may be aware, AWS had some challenges on Tuesday, Feb. 28. The recording of our webinar was caught up in the havoc. Fortunately, I had pushed record on my iPhone about 3 minutes into the session. What I am providing is a mash-up of the iPhone audio and the join.me video, which has resulted in a less than ideal audio track, and for that, I apologize. – Kevin Suitor)
Fig. 1 – VINO SD-WAN Architecture
Pat, can you get us started by providing a quick overview of our VINO SD-WAN Architecture?
Pat: Thank you, Kevin, and welcome everyone. We can start off by describing the architecture Kevin mentioned.
So within our VINO architecture, we reference the edge, which is between the branch and the nearest point of entry, or PoP, as the data plane. Once connected and authenticated to a controller, this data plane is joined to our control plane, which is a point-to-multipoint interconnected overlay between the PoPs and other controllers. All the data plane and control plane underlay traffic is encrypted with IPsec AES 256-bit ciphers, and the customer-protected route domain is formed and can be managed using OSPF or BGP per the customer’s liking, all of which is fully managed through a single-pane-of-glass portal implementation. Over to you, Kevin.
Kevin: Thanks, Pat. So the topic today is quality of service and quality of experience. What do we really mean when we talk about QoS?
Pat: Well, allow me to get into that. QoS is one of my favorite topics, you know, especially when trying to provide quality of service and improve customers’ quality of experience over typically unmanaged or non-QoS circuits. So QoS is the management and guarantee of traffic delivery. To implement it, we need to measure, classify, and discipline our packet delivery across the network. QoS should tell me how my network is behaving and help me adjust my users’ quality of experience.
Kevin: That sounds great, Pat. So does TELoIP have a recommended configuration for ensuring quality of service within our deployments?
Pat: We sure do, and we have a philosophy. You know, we believe and practice the art of distributed quality of service. If you QoS the edge, you can effectively QoS the entire network. Allow me a few moments to elaborate. I’m just going to pull up a draft white paper we’re composing, and I’ll be reading from it a little bit.
Fig. 2 – “If you QoS the edge, you can effectively QoS the entire network” – Pat Saavedra, 2005
For SD-WAN and Internet, the QoS battleground is the edge. In other words, the connections at the branch to the nearest point of entry. This is where quality is most affected. Therefore, a distributed quality of service model is ideal for delivering quality over the top of these existing network infrastructures that have limited or no IP QoS control mechanisms, such as the Internet. The goal of this model is to empower the customer by addressing your application quality needs while abstracting the needs of the carrier, whose objective is to maximize ROI for its own existing network infrastructure, not yours.
On the other hand, underlay carriers are expected to provide bandwidth services, respecting their service agreements and with no discrimination against customer traffic and applications, something our system handles. So I’m going to look at the core between PoPs. Here, our technology manages multiple diverse carrier upstream connections, dealing with aggregate flows and encapsulating between our points of entry and on the edge to provide quality for all the overlays. As a result, this implementation delivers end-to-end, bidirectional IP QoS for both software-defined Internet and software-defined WAN overlay solutions.
Let me summarize. The distributed software-defined overlay QoS model allows full control of all the traffic within the entire available bandwidth spectrum of all the connections, in every direction, including management of both inbound and outbound traffic on the edge, even where asymmetrical bandwidth and connections are present. Over to you, Kevin.
Kevin: Thanks, Pat. I’m not sure that you can talk about quality of service without talking about the metric that you use to measure it. And within the organization, I know we talk a lot about quality of experience. What did you mean when you created “QoE” as a metric?
Pat: I’m going to describe QoE. It’s our own patented real-time metric developed to accurately predict the performance of underlays, overlays, control planes, data planes, and application packets. Our QoE is based on latency, jitter, and loss, much like MOS, but tuned for the network. QoE is measured for every packet on the underlays and the applications from the CPE devices themselves. Portal visualizations help show QoE scores at a glance. And as you can see on this screen here, the objective is to reduce troubleshooting and guesswork. As we know, 90% of the time is spent finding the problem. And here, you can see on the left-hand side, our QoE check to the Home PoP, which is a combination of underlay QoE checks and the Home PoP check; everything is good there. I don’t have to spend any time troubleshooting these disparate connections.
Fig. 3 – QoE is based on Latency, Jitter & Loss (Like MOS, but tuned for the network)
However, the customer called in with this problem. I can see that there was a dip to the session border controller. I can now focus my efforts there. So within an instant, I’m pointed in the direction that I need in order to start troubleshooting the problem. Thank you.
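To give a concrete feel for how a MOS-style score can be derived from latency, jitter, and loss, here is a minimal sketch. TELoIP’s actual QoE formula is patented and not published; the constants below come from a common simplified E-model/MOS approximation, so treat this purely as an illustration of the idea, not TELoIP’s metric.

```python
def mos_estimate(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    """Approximate a MOS-like score (1.0-5.0) from path metrics.

    Simplified E-model style calculation; illustrative only.
    """
    # Jitter is commonly weighted more heavily than raw latency.
    effective_latency = latency_ms + 2 * jitter_ms + 10.0
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40.0
    else:
        r = 93.2 - (effective_latency - 120.0) / 10.0
    r -= 2.5 * loss_pct                     # each percent of loss hurts the score
    r = max(0.0, min(100.0, r))
    # Map the R-factor onto a 1-5 MOS scale.
    return 1.0 + 0.035 * r + 7.0e-6 * r * (r - 60.0) * (100.0 - r)

print(mos_estimate(20, 2, 0.0))    # clean path: roughly 4.4
print(mos_estimate(300, 30, 5.0))  # congested path: well below 3
```

The same three inputs scored per packet, per direction, per underlay is what lets a portal paint each connection green or red at a glance.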
Kevin: Thanks, Pat. So, you talked a little bit in the set-up about bidirectional quality of service. And I also heard mention of IPDE, and I think I heard about RLA. What are all these terms? And, you know, can you sort of elaborate on them a little bit?
Fig. 4 – Rate Limit Avoidance – We perform quality checks and make decisions on a per packet basis, in every direction, for every connection
Pat: Yeah. There are so many acronyms in this industry. Let me describe it at a high level. IPDE stands for our patented Intelligent Packet Distribution Engine. In a nutshell, we perform quality checks and make decisions on a per-packet basis. This is unique to TELoIP. We do this in every direction and for every connection. And one key element is the practice of rate limit avoidance within that Intelligent Packet Distribution Engine, using our advanced decision tree. We can rate-limit underlays bidirectionally and can avoid poor-quality bandwidth regions. This is common. Take a DSL connection that can saturate its upload: although you might have 800 Kbps, at about 67% utilization you start to see a spike. Well, that’s unusable bandwidth. That’s a poor-quality bandwidth region. And that can happen at any time. Well, it’s this technology that we have under the hood of all our overlays.
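To sketch the idea in code (the thresholds, field names, and scores below are hypothetical; the real IPDE decision tree is patented and far richer): a per-packet sender can skip any underlay whose current utilization has entered its poor-quality region, then pick the best-scoring link that remains.

```python
from dataclasses import dataclass

@dataclass
class Underlay:
    name: str
    qoe_score: float    # higher is better (e.g. a MOS-style score)
    utilization: float  # fraction of rated upload currently in use
    poor_region: float  # fraction where quality collapses (0.67 in the DSL example)

def pick_link(links: list[Underlay]) -> Underlay:
    """Per-packet choice: avoid links inside their poor-quality bandwidth region."""
    usable = [l for l in links if l.utilization < l.poor_region]
    candidates = usable or links  # if everything is saturated, degrade gracefully
    return max(candidates, key=lambda l: l.qoe_score)

links = [
    Underlay("dsl",   qoe_score=4.4, utilization=0.70, poor_region=0.67),  # saturated upload
    Underlay("cable", qoe_score=4.1, utilization=0.30, poor_region=0.85),
]
print(pick_link(links).name)  # cable: the DSL link is avoided despite its better score
```

The point of the sketch is the decision granularity: the choice is re-evaluated for every packet, in each direction, so the moment the DSL upload drains, traffic can shift back.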
Kevin: That’s great. So what I’d like to do is maybe change gears a little bit and look at some other more business-oriented type of questions around quality of service. I’d like to remind everybody who’s listening, please feel free to ask questions in the chat box. I’m more than happy to weave them in here. What are the business implications of having advanced quality of experience metrics available to you?
Pat: It can impact you on many levels. I think the most obvious is the ability to lower your costs for the connections you use and the time you spend troubleshooting. So lower cost would be one of them. Quicker troubleshooting goes hand in hand with that. We saw an example a little bit earlier on where, between two graphs, we could immediately pinpoint one area of trouble and start focusing on it. You know, this all leads to fewer truck rolls, and in the end, higher productivity. But the fact that I can use Internet-class connections and get my organization on an SD-WAN functioning at a broadband per-megabit price point economy versus an MPLS per-megabit price point economy, it begs for quality of experience within a solution. Thank you.
Kevin: Great point. So how can a service provider, whether that’s an internal IT team or a managed service provider or a communication service provider, build trust quickly with the end user community by using quality of experience?
Pat: The number one way is by delivering a great product, you know, and VINO SD-WAN helps you do that. Within that, we can do things like providing reporting and analytics consistently. Even if you’ve brought your own broadband to the equation and you’re not integrated with the underlay carrier’s metrics for that connection, we do that for you within our underlay encapsulation and our overlays. We help you quickly identify network challenges and give you a view of your network. And this happens across networks and applications and underlays, etc. As a result, you have a predictive quality of experience system where VINO autonomously optimizes your quality of experience with all of our patented mechanisms. At the end of the day, service providers and partners can use QoE alerts and tools to be proactive on the management side of things, speed repairs, and improve service quality.
Kevin: That’s great, Pat. So looking at questions I’ve heard many times from both the end user community and from partners: an IT team may come to us, or to one of our partners, saying their CEO has embraced video and they’ve started these video town hall meetings. How can they ensure that these town hall meetings aren’t impacted by other traffic? Can TELoIP help?
Pat: Yes, we can. You know, in the typical environment, it used to be that you just had to let the IT department know that this is happening: there’s an event at this time, so please stop downloading 10-gigabyte files and so on and so forth. That’s not the case anymore, not with this technology. We have the ability to prioritize these town hall meetings over all other traffic. We use our Intelligent Packet Distribution Engine. We use other mechanisms as well. And something that you don’t typically see in your conventional QoS models is the fact that quality also means uptime, because if I’m not up, I don’t have any quality whatsoever.
So the ability to pre-empt a condition, to decide that I’m going to fail over to the other connection, or that at this moment in time, for this particular packet, in this specific direction, I’m not going to transmit that packet, I’m going to use a different link instead. I mean, it’s a must. It’s an essential part of our solution, and it can only be done with a per-packet delivery system.
So being able to discriminate on applications, on flows, breaking even those flows down into packets, and doing it in a bidirectional manner on the edge…it’s key. I talked a little bit earlier about that battleground of quality being on the edge, and it’s so true. I mean, no matter how great my backbone is, if I’m in a particular location and I do not have the budget to run high-end private circuits to my branch, and I want to get to a broadband price point economy so that I can have 10 times the speed at half the cost without sacrificing quality, I need this type of system. I gotta have bidirectional control on the edge. And, you know, over the years, Kevin, we’ve had many debates. And again, this is one of my favorite topics, for the simple reason that I can’t tell you how many times people have told me, “Well, you can’t QoS the Internet. What are you talking about?”
Well, we can. We can when you consider the fact that we are in a position to manage all of the traffic in all of the directions and all of the branches all of the time. So if I’m downloading a gigabyte file, a 10-gig file out of where my IT department is, and while I’m on a VoIP-type conference, how do I guarantee that that traffic doesn’t get mixed on its way in and from the Internet? Well, you do it with this system because we have both SD-Internet and SD-WAN combined in our VINO system solutions. So I classify and prioritize your traffic to you as a branch, and we can also take into account my private traffic between my branches. This is how we do it. Thank you.
Kevin: Great answer. So we have all this great tech under the covers, creating an environment to enable quality of service and thereby deliver a great quality of experience. How do you let IT teams and service providers know that something has fallen outside of the pre-established limits or thresholds?
Pat: Well, I think the active measuring is key…acting on those measurements is the solution. I mean, we have many ways. Partners come in various shapes and sizes. They have different systems that we need to work with. But there are some that stand out at the very top, which are common to everybody. Things like, can I open up an SNMP community on that device at the branch for my partner’s existing SNMP collection mechanism? The answer is, “Yes, we can.” So we can do that. We have our own NMS. It comes with the service. It’s not just an orchestration of devices. There’s order entry, there are install and config modules. It’s a whole turnkey system. But if you choose to use your own NMS, we can support that.
There’s more to the story. Another common one is NetFlow data. If you can capture that type of information, you’ll have more intelligence on the issue at hand to help you troubleshoot. I may be using this for capacity planning, which is part of quality, long-term. I may be using this to identify a problem where a particular application just started using 10 times the amount of bandwidth that it used to. That’s an anomaly. Well, NetFlow analytics help you determine those conditions and help you act on them. And you need that intelligence. We provide alerts, we provide visualizations, partners have some of that as well, and we provide the interconnectivity between these systems for them to use. Thank you.
Kevin: Great. I know that this one could get to be quite lengthy. You talked a little bit about the factors that you look at around quality of experience. But, maybe, can you go into a little bit more detail about what factors TELoIP is really considering when we talk about an SD-WAN quality of experience, whether that’s class of service, ToS, things of that nature?
Pat: Sure thing. In fact, this is a great opportunity for me to talk about QoS and CoS as a whole. Even for MPLS infrastructures, these rules apply. The only difference is we’ve taken what was inspired by MPLS and its fantastic means of providing class of service and quality of service within that hybrid infrastructure, and we’ve brought that over to today’s SD-WAN world. And in order to do that, you’ve got to deal with class of service. So CoS, we’ll talk about that here. You have to be able to classify and identify your traffic: read the ToS byte data, take layer 4 information, IP protocol, ports, source/destination, and classify it. Deal with flows, source/destination, and classify them. So being able to read it and classify it, that would be one part.
Another is, what do I do with that information? Once I have this intel, I need to do something with that. Either I’m going to use it for blocking traffic, I’m going to use it for prioritizing traffic, whatever the means, I may need to mark that traffic. So now, I’ve got to take that ToS byte data and do some admission control and marking policy within that, where I take the ToS byte data, and I take all of that layer 4 information, or flow information, and I start to mark the packets in that particular stream or session. So now, I’ve taken my intelligence, I’ve used it to mark traffic.
For example, HTTPS: if I open up my Outlook email and I’ve subscribed to Office 365, this is something that can really hurt a typical WAN or MPLS network, especially if I’m tromboning this particular Internet traffic. And I’m going to open it up, and how do you know who or what I am? Because I just look like HTTPS traffic, TCP traffic on port 443. If I have the ability to detect you, and I have the ability to say, for outlook.com as an example, I can now start to discriminate other traffic against yours and mark your packets specifically with DSCP 26, or whatever it is, as an example. I’m in control of those markings. So I can mark it. So marking is definitely the next step. And then whatever system I’m running, or whatever network I’m running on, can now participate in the prioritization of this traffic.
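A minimal sketch of that classify-and-mark step (the rule fields and matching logic are hypothetical, not TELoIP’s engine; the outlook.com match and DSCP 26 come from the example above). DSCP occupies the top six bits of the ToS byte, so DSCP 26 becomes ToS 0x68:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MarkRule:
    proto: str           # "tcp" / "udp"
    dst_port: int
    sni: Optional[str]   # server name, if TLS inspection identified one; None = wildcard
    dscp: int            # value to mark matching packets with

RULES = [
    # Office 365 mail over HTTPS: looks like plain TCP/443 until the server name identifies it.
    MarkRule(proto="tcp", dst_port=443, sni="outlook.com", dscp=26),
]

def classify_and_mark(proto: str, dst_port: int, sni: Optional[str]) -> int:
    """Return the ToS byte to write on the packet (DSCP in the top 6 bits)."""
    for rule in RULES:
        if (rule.proto == proto and rule.dst_port == dst_port
                and (rule.sni is None or rule.sni == sni)):
            return rule.dscp << 2
    return 0  # unclassified traffic stays best effort

print(hex(classify_and_mark("tcp", 443, "outlook.com")))  # 0x68 (DSCP 26)
print(hex(classify_and_mark("tcp", 443, "example.com")))  # 0x0  (just another HTTPS flow)
```

Once marked this way, every downstream element that reads the ToS byte can participate in prioritizing the flow, which is the point Pat makes above.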
The next thing I’m going to talk about is our parts. We added IPDE, this Intelligent Packet Distribution Engine I’ve talked about, to our system. And we talked about the ability to dynamically practice rate limit avoidance for different conditions within an advanced decision tree. Well, data plane traffic engineering on the edge with this IPDE and RLA system must also be done in a dynamic manner. So if I sense your traffic because I’ve read your markings, I can now start to carve out space dynamically for you.
Why is dynamic so important? Well, take the video conference example we used earlier, that VIP application: for quality video, it may consume almost three quarters of my outbound bandwidth if I’m a small branch. I don’t want to hard-code that carve-out all the time. I only want to do it when that particular session is in play. And so the ability to dynamically carve out bandwidth and then release that bandwidth, that is also important. But to do it on a per-packet basis and to discriminate with the QoS markings is a must.
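The carve-and-release behaviour can be pictured as a tiny reservation ledger (the class name and numbers below are illustrative, not TELoIP’s implementation): bandwidth is reserved only while the session is in play, then returned to the general pool.

```python
class BandwidthCarver:
    """Reserve outbound bandwidth per session; release it when the session ends."""

    def __init__(self, link_kbps: int):
        self.link_kbps = link_kbps
        self.reservations: dict[str, int] = {}

    def available(self) -> int:
        return self.link_kbps - sum(self.reservations.values())

    def carve(self, session: str, kbps: int) -> bool:
        if kbps > self.available():
            return False  # not enough headroom left on this link
        self.reservations[session] = kbps
        return True

    def release(self, session: str) -> None:
        self.reservations.pop(session, None)

# A small branch with 2 Mbps up; a quality video call wants ~1.5 Mbps (about 3/4 of the link).
carver = BandwidthCarver(link_kbps=2000)
carver.carve("town-hall-video", 1500)
print(carver.available())   # 500: everything else shares what's left, for now
carver.release("town-hall-video")
print(carver.available())   # 2000: bandwidth returned the moment the session ends
```

In the real system this decision is also made per packet and steered by the QoS markings, as Pat notes; the sketch only shows the reserve/release lifecycle.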
That leads me to QoS policies, where we talk about QoS administration from our portal. I can take the class of service policy that a customer wants and mark packets with our marking engine, which lives on the CPE device, on the controllers, and throughout the entire system, because we’ve got to do it bidirectionally. And I can apply this Intelligent Packet Distribution Engine and QoS policy, in the end giving me a full-blown quality of experience environment where I’m now dealing with the customer’s application quality of experience, or the perceived quality of experience. I’m more interested in their predictive quality of experience under these conditions and in this environment, particular branches, and so on and so forth. The fact that I’m dealing with latency, jitter, and loss, and my QoS markings, and my class of service puts me in front of the pack, puts us in front of the pack.
And so we have QoE checks where we do CPE to the world, we have controller-to-controller checks between points of presence, and we have points of presence to the world as well. And also, on the edge, in that battleground space, real-time underlay QoE scores on a per-packet basis let you know what carrier one and carrier two are experiencing at this moment in time, which you can visualize in a single frame at once. Thank you.
Kevin: That’s a lot.
Pat: That’s why we’re doing this session.
Kevin: So if I’m a customer, and I don’t have a lot of SaaS apps or I’ve got minimal real-time apps like voice or video, how do I get benefit out of all this QoS and quality of experience magic?
Fig. 5 – VINO SD-WAN delivers end-to-end encryption with bidirectional IP QoS
Pat: Well, I’ll tell you how our customers are benefitting today. I mean, the killer app for SD-WAN is voice over IP. The thing is, though, we need to look at hosted voice over IP and we need to look at third-party voice over IP. So my SD-WAN solution must not only help me with QoS and QoE between my branches, it needs to help me to and from the outside world. I need the flexibility to be able to subscribe to any hosted voice provider tomorrow if I didn’t like yesterday’s experience, and I need to do that without worry of tromboning it through my WAN or SD-WAN. And our technology as a whole, with everything that we’ve talked about above, allows us to do that for the simple fact that we have the ability to do this for both the Internet, which we call SD-Internet, and the WAN side, the SD-WAN side of the equation.
Kevin: That’s great. So I had a meeting earlier this week with a potential partner, and they were focused on a customer who’s rolling out Skype for Business. How could TELoIP ensure that they got a high adoption rate of Skype for Business?
Pat: One of the ways is to be able to provide metrics on that application: for you to be able to go into the portal and see what my QoE scores are between the locations in that particular session, or all the locations at a glance, and to know that this particular Skype traffic, video traffic, just got bandwidth carved out at the locations where it needed it to function. But, at the end of the day, it’s just, “What did you see? Can you see me now? Never mind can you hear me now, can you see me now? If one of those connections went down, are you still seeing? How much of a glitch did I experience?”
By the way, business is still being conducted. There are other forms of data being mixed with this traffic as well, so I need to be able to control all of this on a per-packet basis. Well, you’ve got to do it, and the only way we found it can be done is by managing and controlling the traffic on the edge and the data plane, conditioning this traffic, measuring the underlays that we build our overlays on top of, and measuring and conditioning the overlay itself. But not only on the edge; you also have to do it in the control plane, between points of entry. It’s a must. So end-to-end is the answer.
Kevin: Okay. Over the past 35 years, I’ve been involved with a lot of different systems that dealt with quality of service. They’ve all used different queuing algorithms. I think you’ve talked about this, but maybe we can surface this in a little more detail: how does TELoIP approach delivering QoS through queuing algorithms, etc.?
Pat: Sure. And again, they differ from the edge and in the core, but there’s a lot happening on the edge. I mean, there’s more edges than there are core points of entry. And so that was the first place we perfected the technology in this bidirectional IP QoS system. And then once you get into the core, you still have to deal with that especially if you’re a multi-tenant system like us, between different customers, different partners, different applications, between each of those. So it’s a multi-tiered system.
At the end of the day, bidirectional is a must, and end-to-end goes hand in hand with that, just like the last statement I made. But we’re talking about queuing algorithms. Those aren’t the important ones. I could tell you that in the core, we practice priority queuing. We could just as easily have practiced class-based queuing or other forms. We can have different traffic disciplines implemented on the dropping parts of those queuing mechanisms, different schedulers, and so on and so forth. You know, they help, but they’re not a great help.
What’s a big help is how we deal with these on a per-packet basis, bidirectionally, with our own algorithms. Now, they’re part of individual components. I mean, you can look at product sheets that list a whole bunch of algorithms, and that’s great, but that’s not going to solve your problem. So for me to say priority queuing in the core and the Intelligent Packet Distribution Engine on the edge, with tail drop mechanisms and avoidance techniques, that might oversimplify it. The reality is that it’s a system; you need all of the parts, and even QoS doesn’t stand alone with an algorithm. It needs the whole system to support it. Thank you.
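For context, the textbook mechanism Pat names for the core, strict priority queuing with tail drop, looks like this in miniature (a generic classroom sketch, not TELoIP’s scheduler): higher classes always drain first, and a full queue simply drops new arrivals from the tail.

```python
from collections import deque

class PriorityQueues:
    """Strict priority queuing over classes 0-7 with per-class tail drop."""

    def __init__(self, depth: int = 64):
        self.queues = [deque() for _ in range(8)]  # index = class of service
        self.depth = depth

    def enqueue(self, cos: int, packet: str) -> bool:
        q = self.queues[cos]
        if len(q) >= self.depth:
            return False  # tail drop: queue full, packet discarded
        q.append(packet)
        return True

    def dequeue(self):
        # Always service the highest non-empty class first.
        for q in reversed(self.queues):
            if q:
                return q.popleft()
        return None

pq = PriorityQueues()
pq.enqueue(0, "bulk-download")
pq.enqueue(5, "voip-frame")
pq.enqueue(0, "bulk-download-2")
print(pq.dequeue())  # voip-frame: priority traffic drains first
```

Which is exactly why, as Pat says, the scheduler alone is not the story: this queue only helps once something upstream has classified, marked, and steered the packets into the right class.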
Kevin: Understood. So, if we look at the diagram here and I see, you know, pictures of going to the branch, what kind of underlays do we support?
Fig. 6 – TELoIP Approach to Delivering IP QoS
Pat: We’re very transparent. It’s autonomous. I mean, we can take existing MPLS circuits and provide quality over them. We can take cable, DSL, 4G LTE and do the same thing. Because we function on a per-packet basis, it gives us a lot of flexibility. And the fact that it’s done bidirectionally and preemptively opens up the door to many different points. We’ve got customers in the middle of nowhere running on dial-up circuits, believe it or not. We’ve got customers where bandwidth is at a premium, and there are only so many different types of services available, but the budget is not there to backhaul private circuits. So we must make it run just as well as it does, you know, in a large city.
I mean, QoS is an interesting topic for many reasons, and I think it’s because, for me personally, you have to deal with every single part. If one of the parts within the end-to-end communication is not being addressed, you’ve got problems. We’ve seen interesting problems over the years. Have you ever heard of bufferbloat, long fat network problems, and all of that? All of that can contribute to quality issues. So if you’re not sensing that along the network path, then you’re not able to adapt to it. So QoE is key. Thank you.
Kevin: Interesting. So coming back here to the topic of quality of experience, what’s being measured by QoE? Are we measuring path performance or are we measuring application performance?
Fig. 7 – QoE is based on Latency, Jitter & Loss (Like MOS, but tuned for the network)
Pat: We have several levels. We measure the underlays of the data plane themselves on a per-packet basis, bidirectionally, so we know what’s going on at all times for that particular part of the transmission. We measure the overlay that’s built on top of those, so we know what’s happening in that part of the transmission. And we measure between points of presence as well to know that leg of the transmission. And beyond all of that, we measure the application. We score applications for customers. So the application path, we can give you a score to your session border controller, we can give it to your headquarters, to a particular server out on the internet or a server within your data center for that application, and so on and so forth. That, to me, is application path support.
Kevin: So if we look at this, how does this… do I tie classes of service into policies? I mean, how many classes of QoS does the TELoIP solution support within a policy?
Pat: You can customize it. Out of the gate, we would have priority queuing with classes zero through seven for class of service, typical, just like the differentiated services model. And what we now do is put the traffic management within the queues into the hands of the customer, so that they can say, “This is my application, it runs on this port, it goes to these servers, these are the services that run on this application,” and we can start marking those particular packets to fit the queues that we need in the core. Not only that, we’ve dealt with intelligent packet steering coming in and going out of the branch as well. We’re dealing with more… in a failover condition, we morph these QoS policies over to the lines that are left and practice that discipline, and we also shape traffic.
So again, it’s not about how many queues I’ve got. It’s not about what queuing discipline I’m using. It’s about the fact that I’m managing this customer’s packets from end to end against all of the traffic, in every direction, all of the time. And I have sight. I have sight on all the different points that I need to in order to adhere to my quality of experience target for that app or service.
Kevin: So, Pat, that just raised the question from somebody who’s out on the session. And that question is, what class of service or QoS is the management traffic running in? And how much management bandwidth does that take on the active link or links?
Pat: Let me see if I can…I’m going to try to rephrase that question.
So I’m at the branch. I’m a packet. I’m going through the CPE device. I’m going to get to the outside world somehow. But between me and the outside world, the first thing I’ve got to deal with is that data plane; that’s branch to point of entry. And that data plane happens to be made up of, let’s say, a red and a blue connection. Those are my underlays. We encapsulate on those underlays and we build you an overlay.
All right, so now, what was my overhead for that? Essentially none. I mean, we have encapsulation overhead, and within those control mechanisms, within our own protocols that we use in order to build that data plane and aggregate all of the bandwidth, all of the QoE scoring is done on a per-packet basis. It’s measured on the controller and measured on the CPE device, bidirectionally, for every single packet that’s transmitted. It’s already there. It was part of it already, and we just started doing the math on the packets, so there really isn’t much overhead for that.
However, if I started to implement some type of rate limiting and I want the majority of the traffic within that rate limit, I might have some overhead, because I need to create a fictitious bottleneck in order to do that. We have special mechanisms that allow us to avoid that bottleneck for priority traffic. It’s like a queue bypass mechanism, inspired by the problems of bufferbloat throughout the industry. That’s the best way I can answer that question. I hope I’ve answered it.
Kevin: Okay, thanks. I’m sure if it didn’t, we’ll get some follow-up questions coming in on this. That leads me, as you were talking about entering into the point of entry and things like that: how do you handle admission control and configuration?
Pat: Oh, we have role-based access that’s provided through the VINO portal. This is the multi-tenant portal that fosters collaboration between our services and the customer. Order entry is integrated within the portal. Data for voice over IP and other priority applications is gathered during the order entry process, and we’re already sizing up how many phone calls will be at a branch simultaneously, what codec they’re using, and so on and so forth. That goes into our Zero-Touch Provisioning model so that the config will be there when the CPE lands at the site. And QoS rules can be remotely viewed and changed through the portal.
Kevin: Okay. We’ve got about five minutes left. I’m going to go a little bit out of sequence in the list of questions we have left. How do we collect QoE metrics? How often are they collected? How do these metrics enable the system to make some autonomous decisions?
Pat: Sure thing. Let me socialize the five different levels of QoE metrics. Number one, the underlays. These are already handled on a per-packet basis, bidirectionally. Number two, a score to the Home PoP part of the overlay. Let’s call that the data plane. Number three, between all of our points of presence themselves. This is the underlay part of the points of presence.
Number four, your control plane as the customer. So we have a point-to-multipoint over unicast control plane that’s dynamically spawned between points of presence for controllers of your CPE devices, for your customer-protected route domain. That’s your control plane. That control plane will perform priority queuing, so you get to mark your packets how you need into that priority queuing. The packets are marked by rewriting the ToS byte information, which we maintain throughout our whole system. That’s why there’s no overhead, if that adds a little bit of clarity. And number five, from the CPE’s perspective, we can get a QoE score for application one, application two, and so on and so forth.
Now, how do we get that to you…how do you pick all of that up? Well, you can pick up number one through SNMP. You can pick up number two through SNMP. You can pick up number three through SNMP, number four through SNMP, so on and so forth. So all these could be gathered through SNMP or socialized through the portal, where we package it all up for you.
Kevin: Okay, great. Coming back to a question that came from one of the attendees, there was a follow-up that was phrased, “Since you have the ability to change the prioritization on a per-packet basis for maintaining a VoIP or video connection, there’s likely some packet overhead for the management, although small. I just wondered how much that overhead actually is.”
Pat: Well, it really depends on how many control packets we’re sending per second. And so if I have a very aggressive failover mechanism, it means I’m going to send, you know, more control packet data. So we’re talking bits, not bytes, of information overhead.
Kevin: Okay. So we’ve got one minute left here. I think the final question to leave with, I know it’s one that has come up a number of times from end users and partners. What’s an aggregated QoE score?
Pat: So imagine all of these QoE scores coming from all of your points of presence, and each QoE score has a direction. So if I’m going from JFK to LAX, I’ve got a score from JFK to LAX and a score from LAX to JFK, and you put that together across all of our nine points of presence, if you had a CPE at each of them. That has to be parsed, and it has to be simplified. So what we did is come up with an aggregated scoring mechanism which allows you to rate each of those connections from your entire network’s perspective.
So if you can just imagine, from left to right, there are so many PoPs, and they each get a color score according to QoE. It’s possible that one site, crossing that particular trouble point, has a lower score than others. It’s possible that most of them are high. But it’s also possible your application isn’t concerned with that particular leg of the transmission. And so we found a way to visualize that in what we call an aggregated QoE score, to sort of see your network as a whole. I hope that answers the question.
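One plausible way to roll directional pair scores up into a single per-site number is sketched below (TELoIP’s actual aggregation is its own; this version takes the worst leg touching each site, a deliberately conservative choice, with site names and scores purely illustrative):

```python
def aggregate_qoe(pair_scores: dict[tuple[str, str], float]) -> dict[str, float]:
    """Collapse directional (src, dst) QoE scores into one score per site.

    Uses the minimum over every leg that touches the site, so one bad
    direction anywhere is enough to flag that site.
    """
    sites: dict[str, float] = {}
    for (src, dst), score in pair_scores.items():
        for site in (src, dst):
            sites[site] = min(sites.get(site, 5.0), score)
    return sites

scores = {
    ("JFK", "LAX"): 4.4,
    ("LAX", "JFK"): 4.3,   # each direction is scored separately
    ("JFK", "ORD"): 2.1,   # one troubled leg
    ("ORD", "JFK"): 4.2,
}
print(aggregate_qoe(scores))  # JFK and ORD are pulled down by the 2.1 leg; LAX stays high
```

A mean or percentile could be substituted for the minimum if, as Pat notes, your application isn’t concerned with a particular leg of the transmission.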
Kevin: I think it does, Pat. Thank you. So we’re at the end of the time allotted. I’d like to thank everybody for staying on the call, and I’d like to invite you to join our community. Join us on LinkedIn; we have a company page at https://www.linkedin.com/company/teloip-inc. Join us on Twitter at https://twitter.com/teloip. Our blog is easily accessible at blog.teloip.com. You get one or two new pieces of information every week on the blog, and that’s where we’ll post the transcript of this Chat with Pat.
We have monthly demos available, and we encourage you to register and attend those demos to see the product live. It’s one thing for us to talk about it in sessions like this; it’s another to actually take a look at our web demo. And we have some other demos and videos up on the website that you can view. Or just ask us for one-on-one engagement with one of our certified solution engineers. So join our community.
At this point, I think Pat and I would like to thank you very much for attending, and we look forward to seeing you on our next Chat with Pat in about a month’s time. So thank you, everybody. And, Pat, over to you for the last word.
Pat: Thank you, everyone. I appreciate you taking the time. Welcome to TELoIP, and have a good day.