Let's now dive into the specifics of the NSX and Neutron integration. Neutron is the networking project in OpenStack and is the project that integrates directly with NSX. The picture that you're seeing on this slide summarizes how this integration works. At the top we have OpenStack, specifically OpenStack Neutron. Again, if you're a consumer of the OpenStack APIs, Neutron gives you the ability to consume things like networks, subnets, routers, load balancers, firewalls, security groups, and so on. That's why we see it at the top of the diagram. Via a plugin that is in charge of translating Neutron API calls into NSX API calls, we integrate with NSX Manager, and NSX Manager, in the NSX architecture, is a stateless management plane that exposes a RESTful API for consumption of the network services offered by NSX. So NSX Manager and Neutron are the integration points at the API level.

There's also a control plane in NSX, a stateful component that is in charge of calculating and handling all the logical topologies that are pushed and configured in the data plane. The control plane knows where every VM is, knows the MAC address of every VM in your infrastructure, and with this information, plus other information it can obtain from the infrastructure, it can triangulate the location of every VM and deterministically tell you the topology that VM belongs to and how that VM is situated in your network. Depending on the NSX edition, the control plane is centralized or distributed, but regardless of the NSX edition, this stateful control plane is highly available and fully redundant. Last but not least is the data plane. The data plane in the NSX architecture is understood to be the hypervisor, and that is where we configure all the services for providing layer two switching, layer three routing, and security services to the applications, to the virtual machines that are hosted within that hypervisor. So that is the architecture, but the integration between Neutron and NSX happens at the API level, at the top of the picture there.

This is another representation of the same diagram. Before we get into the network discussion, I want to call your attention to the compute and storage services that integrate with vSphere. Nova Compute, which in a typical OpenStack implementation would run as an agent on each hypervisor, works in a different manner in our implementation. Nova Compute runs in the control plane, in a management cluster, and it is external to the hypervisors in question. Unlike some other OpenStack implementations, a Nova Compute agent maps to a vSphere cluster. You have probably seen this before: the unit of consumption for compute across multiple solutions at VMware is typically a cluster, a cluster being an aggregation of individual hypervisors. So instead of running an agent per hypervisor, we run an agent per cluster, and we don't run it on the hypervisors, we run it external to them. That allows us to scale better and also to leverage intra-cluster services without necessarily affecting, or having to define them at, the OpenStack layer. I'm talking about things like vSphere High Availability and vSphere DRS, our Distributed Resource Scheduler, which is a very sophisticated resource management tool that is widely used by vSphere admins to optimize resource consumption inside a cluster.
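To make the API-level integration point a bit more concrete, here is a minimal sketch, in Python with the requests library, of the kind of RESTful call the Neutron plugin makes against NSX Manager. The manager address, credentials, and the logical-switches endpoint shown are illustrative assumptions for the example, not taken from the slides.

```python
# Minimal sketch: querying NSX Manager's REST API, the same interface the
# Neutron plugin consumes when it translates Neutron calls into NSX calls.
# Hostname, credentials, and endpoint below are illustrative assumptions.
import requests

NSX_MANAGER = "https://nsx-manager.example.local"   # hypothetical address
AUTH = ("admin", "changeme")                         # hypothetical credentials

# List logical switches known to the management plane (NSX-T style endpoint).
resp = requests.get(
    f"{NSX_MANAGER}/api/v1/logical-switches",
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()

for switch in resp.json().get("results", []):
    print(switch["display_name"], switch["id"])
```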
We have other things like host maintenance, upgrades, and so on, that you can leverage without unnecessarily impacting or decommissioning hypervisors from the OpenStack layer. So all these intra-cluster services that are very unique, specific, and valuable in the vSphere implementation are readily available in your OpenStack deployment without necessarily affecting the API layer when you decide to use them. That is the benefit of running Nova Compute with a resolution of a cluster instead of a resolution of a hypervisor.

Similarly, Glance, which is the image catalog service in OpenStack, and Cinder, which is the persistent block storage service in OpenStack, integrate directly with vCenter to consume vSphere datastores. As I mentioned at the beginning of this presentation, these datastores can be NFS, VMFS, or vSAN. The point here is that OpenStack does not integrate directly with the storage vendor or the storage array; we use vSphere as an abstraction, and you have a choice of storage endpoints in your architecture. vSphere has perhaps the most comprehensive storage compatibility matrix in the market, so whatever your storage backend, it can be consumed by OpenStack if you can present its datastores to the ESXi hosts in vSphere. So that is the notion there: no need for direct integration of these OpenStack projects with your specific storage vendor or storage technology.

But really the focus of this presentation is the network integration, and as we mentioned in the previous slide, Neutron, the networking component, has a driver or plugin that is in charge of integrating with NSX Manager, and that is what we're going to cover over the next few slides. It is important to level set and understand what workflows are table stakes in Neutron. The workflows I'm about to show you are offered in any Neutron implementation, and the purpose of this slide is to show you the corresponding service or construct that is leveraged on the NSX side every time you consume a Neutron construct.

They start with networks. If I create a network, for example a web network, then in NSX, depending on how I'm configuring my transport zones and other components, I will create a logical switch that is mapped to an overlay or mapped to a VLAN. So a network in Neutron can be backed by an overlay network in NSX or by a VLAN in NSX. After you create that network you can launch VMs on it; that is what that workflow looks like. If I create another network, I will again leverage another logical switch in NSX. Then there are DHCP services that I can also use, and if my Neutron subnets are going to support DHCP, in NSX I'm leveraging a specific edge service running in the edge cluster for scalable DHCP. This is one of the key components that we replace in the Neutron reference implementation: we don't use the open source implementation of DHCP, which has proven to have some issues at scale. We replace that with our own implementation of DHCP that scales much better and provides native HA, and that runs in the edge cluster. Then there is a very important service in OpenStack clouds provided by Neutron: security groups. Neutron security groups operate just like NSX security groups; they have a resolution of a vNIC on your instance.
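As a rough illustration of the network-plus-DHCP workflow just described, here is a minimal openstacksdk sketch. The cloud name, network name, and CIDR are assumptions for the example; with the NSX plugin the network would be backed by a logical switch and DHCP would be served from the edge cluster rather than the reference agent.

```python
# Minimal sketch of the basic Neutron network/subnet workflow discussed above,
# using openstacksdk. Names and addressing are illustrative assumptions.
import openstack

conn = openstack.connect(cloud="my-cloud")  # cloud entry from clouds.yaml (assumed)

# Create a tenant network; with the NSX plugin this is expected to be backed
# by a logical switch (overlay- or VLAN-backed, depending on transport zone).
web_net = conn.network.create_network(name="web-network")

# Create a subnet with DHCP enabled; in this integration DHCP is served by
# the NSX edge cluster rather than the dnsmasq-based reference implementation.
web_subnet = conn.network.create_subnet(
    network_id=web_net.id,
    name="web-subnet",
    ip_version=4,
    cidr="172.16.10.0/24",
    enable_dhcp=True,
)

print(web_net.id, web_subnet.cidr)
```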
So when in Neutron you create a security group, and you create explicit allow rules to permit east-west or north-south traffic to your instances, that will correspond to an NSX security group and an NSX distributed firewall policy. So everything that you've read and learned about micro-segmentation can be readily applied to an OpenStack cloud by leveraging Neutron security groups, which on the backend consume NSX distributed firewall policies.

Then, if you want these two networks to talk to each other and to the rest of the world, you would connect the Neutron networks to a Neutron router. In our implementation, in VMware Integrated OpenStack, a Neutron router consumes an NSX router, and this router is referred to as the tier-1 router or tenant logical router; that is what the picture is showing there. Then you can connect the interface of this router to an external network. In the case of NSX-T, that external network lives on the external side of the tier-0 router, and the idea is that this is the network that connects me to the rest of my intranet, to the internet, or to another site if I have a multi-site implementation. That is the notion of an external network. Then, depending on the application profile, you may or may not need to leverage NAT in your implementation. By default, and this is a very common use case for OpenStack because overlapping IPs are very popular, with topologies that have overlapping IPs you will need to do source NAT on those Neutron routers, and if you want to access the applications from the outside coming into the environment, you will leverage something called floating IPs, which is nothing more than a DNAT, a destination NAT rule, enforced on the Neutron router, the tier-1 NSX router. And the last basic service in Neutron, a basic workflow that we support with both NSX editions, is Load Balancing as a Service. This is a very popular service in OpenStack, particularly valuable when you're running web applications that require load balancing capabilities. This is also something that we support. In our implementation we leverage the native load balancers in NSX; we don't rely on an open source implementation for that purpose.

Let's dig into each one of these services and explain where and how they run, and let's start with the notion of the edge cluster. VMware has been promoting this notion of functional clusters for a long time: you have your management cluster, you have your edge cluster if you're running NSX, and you have your compute or payload cluster where your applications will eventually reside. The edge cluster serves a very specific purpose in the NSX architecture; that is where you have north-south routing, load balancing services, firewall services for north-south traffic, as well as DHCP, metadata, and so on. These all run on the edge cluster. It is very important to understand that these clusters can be pooled, and depending on the NSX edition, these clusters could actually be delivered as bare metal clusters for enhanced performance and scalability. But the notion is the same: this dedicated cluster hosts all the stateful services for load balancing, routing, and NAT, as well as things like DHCP and metadata. Edge clusters can consist of one or more nodes, meaning that you can provide robust HA capabilities for these critical network services in your OpenStack implementation.
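Here is a minimal openstacksdk sketch of the security-group, router, and floating-IP workflows described above. All names are illustrative assumptions; with the NSX plugin the security group maps to a distributed firewall policy, the Neutron router to a tier-1 router, and the floating IP to a DNAT rule on that router.

```python
# Minimal sketch of security groups, routers, and floating IPs via openstacksdk.
# Resource names are assumptions for the example.
import openstack

conn = openstack.connect(cloud="my-cloud")

# Security group with a single allow rule (HTTP into the instances).
web_sg = conn.network.create_security_group(name="web-sg")
conn.network.create_security_group_rule(
    security_group_id=web_sg.id,
    direction="ingress",
    ethertype="IPv4",
    protocol="tcp",
    port_range_min=80,
    port_range_max=80,
    remote_ip_prefix="0.0.0.0/0",
)

# Tenant (tier-1) router uplinked to the external network that sits behind
# the tier-0 router, then attached to the tenant subnet.
ext_net = conn.network.find_network("external-net", ignore_missing=False)
subnet = conn.network.find_subnet("web-subnet", ignore_missing=False)
router = conn.network.create_router(
    name="web-router",
    external_gateway_info={"network_id": ext_net.id},  # SNAT enabled by default
)
conn.network.add_interface_to_router(router, subnet_id=subnet.id)

# Floating IP: effectively a DNAT rule enforced on the tenant router.
fip = conn.network.create_ip(floating_network_id=ext_net.id)
print(fip.floating_ip_address)
```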
DHCP is a very basic service in every private cloud out there, and unfortunately the reference implementation that we've seen for DHCP in open source OpenStack has some issues: issues at scale, issues with high availability, and so on. We saw a great opportunity to replace and enhance that DHCP implementation with something native to NSX, and as you saw in the previous slide, this runs as a stateful service inside the edge cluster. The edge cluster is where the DHCP services are hosted and consumed.

We also support, in our Neutron plugin, Load Balancing as a Service and north-south Firewall as a Service, and both are fully supported. We don't rely on an open source implementation for either one, and these are very core services for every OpenStack implementation. These services are realized and enforced on the tier-1 routers. Every tenant may decide to enable or disable services on their specific router without necessarily affecting other tenants in the implementation. It's a very granular implementation that we've done here, and very scalable.

Metadata is table stakes; it's a basic service in OpenStack. Nova metadata is a service that allows an instance to interrogate the API layer and find attributes about itself, and it's used in a multitude of ways. Metadata can be used for basic bootstrapping of your applications in the cloud; in some cases it can even be used for some basic initial configuration management of your applications. Every cloud implementation that uses OpenStack will almost certainly need to leverage metadata, and this is implemented with a combination of agents and services in the cloud. In Neutron, the beauty of network virtualization is that we can make metadata work over any IP fabric, completely irrespective of the physical topology. So we've done that here: every time you create a network, we automatically plumb that network to the appropriate constructs to support the metadata service in OpenStack, so VMs can have access to that service at the API layer, which is hosted on the OpenStack controllers. This is one of the benefits of network virtualization: even if you're using VLAN-backed networks in your OpenStack, you will leverage network virtualization to abstract the physical network and create a connectivity service between your instances and Nova metadata, as shown here on the diagram. Again, whether or not you're using overlays for your applications, overlays will be used to connect VMs to the metadata service in OpenStack.

Let's go through some basic topologies that we see out there in the field with OpenStack. Sometimes we see OpenStack clouds that do not use overlays for connecting the applications. Our plugin supports VLAN-backed networks, Neutron networks that are VLAN-backed; that is not a problem. And we see that some customers have embarked on this OpenStack journey in a very similar manner to what we've seen with NSX, where they start with security as the main driver for the use case and for the implementation. They leverage micro-segmentation, but they don't necessarily change the topology that is presented to the applications. This is perfectly possible in OpenStack and, like I said, we support VLAN-backed Neutron networks. But there's also a stopgap approach to going fully layer two and layer three virtualized.
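As a small sketch of how a guest actually consumes the metadata service discussed above, the snippet below queries the standard link-local metadata endpoint from inside an instance. The 169.254.169.254 address and the /openstack/latest/meta_data.json path are the usual Nova metadata conventions; reachability to that address is exactly what the network plumbing described here provides.

```python
# Minimal sketch: reading Nova metadata from inside an instance.
import requests

METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"

resp = requests.get(METADATA_URL, timeout=5)
resp.raise_for_status()
meta = resp.json()

# Typical self-describing attributes an application might use for bootstrapping.
print("instance uuid:", meta.get("uuid"))
print("hostname:     ", meta.get("hostname"))
print("keys:         ", [k.get("name") for k in meta.get("keys", [])])
```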
And that is connecting overlays to VLAN-backed provider networks using a Neutron software bridge, or Neutron bridge. That is also fully supported in the Neutron plugin for NSX, and we see this use case as a stopgap approach to migrating from physical to virtual in some OpenStack implementations. Then the end state is to use overlays across the board. Overlapping IPs are very common in OpenStack, especially when you give OpenStack users the ability to self-service networking; inevitably you're going to end up with overlapping IPs across tenants or across topologies. We solve for this using NAT, which is very straightforward, but this type of topology is better served by overlays, because every time I create a Neutron network, that network is instantiated in software; I don't necessarily need to make any changes to the physical infrastructure. NAT in this case is provided by the tier-1 router, and the connectivity to the external world goes through the tier-0 router, or PLR, the provider logical router.

There are also non-overlapping IPs. We're seeing this more and more with what I call the enterprise use case, and there are a number of other resources that you can consult where we have talked about how OpenStack is resonating with traditional enterprises that are looking for a consistent way to consume infrastructure. This is what we call the IT automation, or IT automating IT, use case, and we see OpenStack having a play in that space, even though it lacks some basic governance services that are almost always required. Many VI admins and IT architecture teams are looking at OpenStack as a way to consume infrastructure consistently; it's one of the contenders, one of the options that we see out there in the market. More often than not, the applications that are put on OpenStack are the so-called pets: applications that are long lived, that require routable IPs, that need to be audited, et cetera. These applications are better served by no-NAT topologies, where the tier-1 routers basically have NAT disabled and all the Neutron networks that you see on tenant A and tenant B, all these different networks, are fully routable within your organization. This is a very common approach to consuming OpenStack that we are seeing.

In summary, the OpenStack view shows you the connectivity of routers, networks, and external networks, and on the NSX side, as we've covered during this presentation, we leverage objects such as logical switches, logical routers in various configurations, logical networks, load balancers, and firewalls. That is the idea.
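To round this out, here is a minimal openstacksdk sketch of the two topologies just discussed: a VLAN-backed provider network and a no-NAT (fully routable) tenant router. The physical network name, VLAN ID, and resource names are illustrative assumptions, and the provider attributes normally require admin credentials.

```python
# Minimal sketch of a VLAN-backed provider network and a no-NAT tenant router
# via openstacksdk. Names, physnet, and VLAN ID are illustrative assumptions.
import openstack

conn = openstack.connect(cloud="my-cloud")

# VLAN-backed Neutron network (no overlay for the application traffic).
vlan_net = conn.network.create_network(
    name="app-vlan-100",
    provider_network_type="vlan",
    provider_physical_network="physnet1",
    provider_segmentation_id=100,
)

# No-NAT tenant router: the external gateway is attached with SNAT disabled,
# so the tenant subnets stay fully routable within the organization.
ext_net = conn.network.find_network("external-net", ignore_missing=False)
no_nat_router = conn.network.create_router(
    name="tenant-a-router",
    external_gateway_info={"network_id": ext_net.id, "enable_snat": False},
)

print(vlan_net.id, no_nat_router.id)
```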