We have implemented the Switchboard components, set up some example VNFs and their services, and deployed them on multiple cloud platforms. We implemented the global switchboard on the OpenDaylight controller as an SDN application. We also built a Switchboard portal, which you saw earlier; you can see the catalog of VNFs on the portal, and so on. We implemented the global message bus using the ZeroMQ messaging library. We have tested Switchboard with several kinds of VNFs, including a NAT, a caching proxy, and a video-processing VNF, which, again, you saw in the demo video. We have also set up example edge services, using Open vSwitch and OpenVPN, for example. These edge services can, for example, label packets as they enter the Switchboard network: we have OpenFlow rules that classify entering packets and affix the correct labels to them using [inaudible] (a rough sketch of such an ingress rule appears below). In our testing and demonstration, we have run Switchboard on multiple cloud platforms: AT&T Universal Customer Premises Equipment (uCPE), which is the box at the customer premises, Amazon EC2, and private OpenStack clouds.

Let's look at some experimental results. The goal of this experiment is to compare Switchboard to some existing wide-area routing schemes. The testbed spans two sites: between the two sites in AWS there is a 150 millisecond delay, and in the private cloud there is an 80 millisecond delay between the two sites.

Hi, how does the Switchboard forwarder know which chain a packet belongs to? There are VNFs that modify packet headers, like a VPN.

Yeah, that's a really good question. The way to handle that is that at the entry point you affix a label to the packet, and the egress removes that label. VNFs have to preserve the label, or tag, while processing the packet, and the forwarder uses the label to send the packet to the right VNF. Let me go back to this example. The ingress is the first element that sees the packet, so it attaches the label, and thereafter it sends the packet to the forwarder. The orange VNF is the one that modifies the packet, so matching on the packet headers alone wouldn't work; instead, the forwarder sends the packet with label one, and the VNF preserves the label contents and sends it back. If a VNF modifies the header, this is the requirement in order to [inaudible]: the VNF has to maintain the packet's label. In fact, in terms of VNF implementation, this is the primary thing a VNF has to support. This idea, I think, has been [inaudible] in another work called FlowTags.

So, in other words, there is nothing special you have to do for this, because FlowTags already takes care of this particular requirement?

Yes, we can use what FlowTags did to support it.

Okay. So the end-to-end latency is 150 milliseconds and [inaudible] milliseconds. The figure at the top shows the routes chosen by the different schemes. With Switchboard, the global switchboard holistically optimizes the routes for the two chains. Then we have Anycast, which does hop-by-hop optimization, meaning it only selects the nearest next instance; in this case, it puts both chains on the first available route.
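To make the label mechanics from the forwarder question above concrete, here is a minimal sketch of the kind of ingress rule an edge switch could install. This is not Switchboard's actual code: the talk describes an OpenDaylight application, whereas this sketch uses the Ryu OpenFlow 1.3 API, and the port number, chain label value, and the choice of carrying the label in the VLAN ID are all assumptions made purely for illustration.

    # Sketch only (not Switchboard's OpenDaylight implementation): install an
    # OpenFlow 1.3 rule on an edge switch that tags packets arriving on a
    # customer-facing port with a chain label carried in the VLAN ID.
    # Meant to be called from a Ryu application with the switch's Datapath.

    CUSTOMER_PORT = 1     # switch port where this customer's traffic enters (invented)
    CHAIN_LABEL = 42      # label identifying this customer's service chain (invented)

    def install_ingress_label(datapath):
        ofp = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything arriving on the customer-facing port.
        match = parser.OFPMatch(in_port=CUSTOMER_PORT)

        # Push a VLAN header and set its VID to the chain label; forwarders and
        # the egress key off this label, and VNFs in the chain must preserve it.
        actions = [
            parser.OFPActionPushVlan(0x8100),
            parser.OFPActionSetField(vlan_vid=(0x1000 | CHAIN_LABEL)),
            parser.OFPActionOutput(ofp.OFPP_NORMAL),
        ]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=100,
                                            match=match, instructions=inst))

At the egress, a matching rule would pop the VLAN tag before handing the packet back, mirroring the "egress removes the label" step described above.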
For a fairer comparison with Anycast, we also use a compute-aware [inaudible] version of Anycast which, instead of simply selecting the nearest instance, also considers the load on the nearest instances. For example, when computing the green route it first chooses the nearest instance, but when computing the orange route it realizes that the instance chosen for the green chain is already heavily loaded, so it goes to the farther instance. Switchboard, on the other hand, jointly optimizes the routes of both chains to both egress destinations, across both site A and site B [inaudible], because it optimizes the routes jointly, whereas Anycast chooses on a hop-by-hop basis. What we see is that, due to the global optimization, Switchboard achieves better throughput and latency than these two schemes (a toy sketch contrasting the two selection policies follows below). It is easy to see why it can do that: it can do both things, keeping latencies low while also distributing the load evenly between the two instances. It is load-balanced as well as latency-optimal. This is a small-scale experiment, but in it we see that holistic route optimization outperforms hop-by-hop route selection. We have a larger-scale simulation coming up later.

To show the responsiveness of the system, we did a dynamic service chaining experiment. We do two things here: we first show that we can update chain routes dynamically, and then we quantify what benefit that gives us. These two circles here represent the ingress and the egress. This was the initial route, through one VNF, the NAT. In the course of the experiment, this route was getting overloaded, which triggered a route computation for the same chain, and the global switchboard then added a new route through a different site. We want to show how long this whole process takes, from adding the new route to sending packet traffic over it. There are three things we want to show here. First, adding a chain route takes only steps 1 through 6 (the details of the different steps are in the paper); it takes about 600 milliseconds to add the route and install it for the chain. I should mention that in this experiment everything was within a single site, just across different zones of that site; if you have wide-area delays, some of these latencies will increase. But it shows that the control-plane overhead of this processing, the local processing at the controller and the forwarder, and the ZeroMQ messaging, is not trivial but is manageable. The second thing is that when you add the new route, it takes a while for traffic to ramp up onto it. During this process, this line here shows the traffic on the original route, and this dotted line shows the traffic on the new one. What we see is that the traffic on the original route remains almost stable, while traffic on the new route increases until it reaches a steady value. We doubled the throughput and were able to relieve the load. The point is that Switchboard lets you react when increasing load would overwhelm the resources at a single site, by creating new wide-area routes in response. This is another advantage of this dynamic approach.
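To make the contrast between hop-by-hop and joint route selection concrete, here is a toy sketch in Python. It is not the LP or dynamic-programming formulation from the paper: the two candidate instances, their delays and capacity, and the M/M/1-style load penalty standing in for the utilization-dependent latency used later in the simulation are all invented for illustration.

    # Toy sketch, not Switchboard's LP/DP: place two chains on one of two VNF
    # instances. Hop-by-hop (Anycast-style) picks the nearest instance per chain;
    # joint optimization picks the assignment with the lowest total latency,
    # where latency grows with instance load. All numbers are invented.
    from itertools import product

    INSTANCES = {"site_A": 10.0, "site_B": 80.0}   # propagation delay (ms) to each instance
    CAPACITY = 2.0                                  # traffic an instance can absorb
    CHAINS = {"green": 1.0, "orange": 1.0}          # traffic demand per chain

    def latency(prop_ms, load):
        # Propagation delay plus a penalty that grows as the instance saturates
        # (an assumed M/M/1-style stand-in for utilization-based latency).
        util = min(load / CAPACITY, 0.99)
        return prop_ms + 100.0 * util / (1.0 - util)

    def total_latency(assignment):
        load = {site: 0.0 for site in INSTANCES}
        for chain, site in assignment.items():
            load[site] += CHAINS[chain]
        return sum(latency(INSTANCES[site], load[site]) for site in assignment.values())

    # Hop-by-hop: each chain independently picks the nearest instance.
    hop_by_hop = {chain: min(INSTANCES, key=INSTANCES.get) for chain in CHAINS}

    # Joint: enumerate all assignments and keep the one with the lowest total latency.
    joint = min((dict(zip(CHAINS, sites)) for sites in product(INSTANCES, repeat=len(CHAINS))),
                key=total_latency)

    print("hop-by-hop:", hop_by_hop, round(total_latency(hop_by_hop), 1), "ms total")
    print("joint:     ", joint, round(total_latency(joint), 1), "ms total")

With these made-up numbers, the hop-by-hop policy piles both chains onto the nearer instance and saturates it, while the joint assignment spreads them across the two sites, which is the same qualitative behavior as the green and orange chains in the experiment above.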
To give a sense of how this would work in a larger scenario, we have a simulation of the routing schemes on a tier-1 ISP network dataset; the dataset includes the locations of the nodes, the link capacities, the link delays, the traffic matrix, and the network routing. This is at a nationwide scale, so that is the granularity here. The experimental parameters relevant to the simulation are as follows: we simulated 100 VNFs, which are part of 10,000 chains (not all VNFs are part of all chains); chain lengths are 3 to 5; and the VNF service coverage is 50 percent, meaning we assume a particular VNF is not present at every site, but only at 50 percent of the sites. There are additional results in the paper that vary some of these parameters, but those are the parameters for the part I'm showing on the screen. Eighty percent of the traffic is Switchboard traffic and 20 percent is non-Switchboard traffic.

Let's start with the figure on the left. Low values of CPU-per-byte correspond to scenarios where the network is the bottleneck; the CPU-per-byte captures the processing cost per packet for the VNFs. At the low end, for example, think of VNFs that do very lightweight processing, such as routers. Towards the right, on the other hand, we are emulating much more heavyweight VNFs, or heavyweight service chains, that do a lot of processing. This is simply intended to cover a range of VNF processing costs. The y-axis shows the relative throughput of the different schemes, where throughput here is network throughput. What you see is that as the compute cost per byte increases, the regime changes: initially the network is the bottleneck, but towards the right compute becomes the bottleneck. We evaluated three schemes here: Switchboard (SB), which is the globally optimal solution for this simulation; the dynamic programming (DP) scheme; and Anycast. It is interesting to note that the dynamic programming scheme actually performs close to the global optimization of the Switchboard LP, and both of them are far better than hop-by-hop routing.

Now let's look at the figure on the right, which shows a load-versus-latency profile. Load here is in terms of the network load, and for the compute processing we are only scaling the volume of traffic. The latency is again calculated from the link utilization values, so latency incorporates the load as well: if the load on a link is high, that translates to a higher latency value in our simulation. How do these schemes do? What you see with Anycast is that even at a very low load, a low volume of traffic, it has a high latency, and beyond a point it cannot sustain the load. Both the Switchboard LP and the DP, in contrast, continue to support a much higher level of load. Also, the DP's latency is only higher than the LP's latency by a small margin. What is encouraging is that the dynamic programming approach looks like an efficient, practical strategy for route optimization, and it gives good results.

Here is the summary. Switchboard is a network architecture for creating customer-specific wide-area service chains spanning heterogeneous clouds. It performs holistic optimization of wide-area chains and uses a scalable wide-area control plane supported by the global message bus. We have also designed a scale-out, deployable, and affinity-preserving data plane, and the goal is to enable a rich ecosystem of VNFs, supporting customized service chaining that offers new capabilities to customers.

We have been working on this for several years now, and our paper came out last year. We can take a brief look back at the project and share what we learned while working on it and talking to people. Some of this is general, and some of it might be useful beyond just this project. The first question is: is a clean-slate architecture the only way to achieve the stated objective?
For example, if you want to do customized service chaining, can you do the customization without actually touching each access network? If you want to do customization for a particular customer, can you do it without a wholesale redesign? The next question is how to make architectures like this easier to adopt. Can the architecture be designed so that fewer stakeholders have to agree? Right now there are perhaps too many parties involved: all the access networks, all the VNF providers, and so on. The forwarder is maybe the easiest piece, but for the other pieces you have many stakeholders that need to agree to this architecture. How can we reduce the number of stakeholders to make it easier to adopt?

The next one is that we introduced the Switchboard forwarder as a way to enable service chaining, but it is a new network element that we are introducing. It gives us a lot of flexibility in terms of scale-out and so on, but you could also do service chaining using a somewhat more static routing approach. So you really have to think before you deploy a technology like this: you have to evaluate the development cost of the forwarder, how many resources it consumes, and how much additional operational overhead it introduces, and then compare that to the value it provides before you actually deploy it. You have to make all of that happen and then weigh the benefits against the costs. Again, that is well beyond the scope of this paper, but it is a useful question to put forward.

Finally, we started with the premise that service customization is valuable, but is it actually valuable to customers? Will customers take up the idea? Our view is that if we have a rich VNF marketplace, then the customization is valuable; but in order to have a rich VNF marketplace, you need to have customization features available in the first place. It is something of a chicken-and-egg problem, and I think Switchboard is an attempt to enable service customization in ISPs, so that we can hopefully get to a point where we have lots of VNFs available to create all these new services. With that, I'll conclude. Thank you.