So in general, virtualization causes overhead for I/O processing. And this is particularly bad for network functions, as we said. Fortunately, hardware vendors have been paying attention to this problem, to make I/O processing much more performant in a virtualized setting. And there have been two approaches that hardware vendors have come up with to eliminate the I/O virtualization overhead. I'll briefly mention both of these technologies. Both of these are by Intel: one is called VT-d and the other is called SR-IOV, and I'll talk about each of these technologies in fairly quick succession. So one enabling technology for virtualizing network function I/O in general is Virtualization Technology for Directed I/O, or VT-d. And the idea is, as always, once you see the idea, you say, that makes a lot of sense. I mentioned when I talked about packet processing the levels at which the copying has to happen: first, the network interface card has to DMA a packet into a buffer. Once it comes into that buffer, it has to be handed over to the kernel. The kernel then has to pass it up the protocol stack. And finally, it has to go into user space. So all of these are different levels, and because the address spaces are protected, copying happens at each of them. And that's what leads to the overhead in virtualization. And that's exactly the overhead that Intel tries to eliminate with this VT for Directed I/O. The idea is quite simple. Rather than DMA-ing a packet into a particular piece of memory and then copying it into user space, you want to make it more efficient by having the I/O device, for example a NIC, directly access the memory space where the packet has to finally end up. That way it can avoid the overhead of trap-and-emulate for every I/O access. And the basic mechanism for that is remapping the DMA regions into the guest physical memory.
Rather than there being separate memory for the guest and the host, we're gonna remap the DMA regions into which the NIC brings in a packet into the guest physical memory itself. That way, when a packet comes in, it'll be brought directly into the memory region of the virtual machine. And that way, we are not copying from the kernel space into the buffer that is intended for the virtual machine; the copy from the device can happen directly into the buffer that is designated for the virtual machine. The trick is this DMA-remapping hardware, which makes sure that the guest's physical memory is the one into which the device is going to do the DMA. In the same way, when an interrupt has to be delivered, interrupt remapping delivers it into the guest's interrupt handlers directly, so that there are not two levels of interrupt processing, one in the virtual machine monitor and then again in the VM itself. That we can avoid by doing this. Effectively, it gives direct access to the I/O device for the guest machine, even though it is a virtualized environment. And all that needs to be done is that the guest's virtual memory regions are mapped into the I/O space of the device, so the I/O device can directly access the guest virtual machine's memory. And the benefits of VT-d are, of course, that it avoids the overheads of trap-and-emulate, that the DMA by the NIC is performed directly to and from the memory belonging to the guest VM's buffers, and that interrupts are handled directly by the guest instead of by the hypervisor. Essentially, what VT-d allows a VM to do is to own the NIC, right? So even though it is a virtualized setting, the guest VM is able to own the NIC, because this hardware assist gives you direct access into the virtual machine, both for the buffers being used by the device and for interrupt processing. That's the benefit of VT-d.
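The DMA-remapping idea described above can be sketched as a toy model. To be clear, this is purely illustrative Python, not real IOMMU programming; the class and method names here are made up for the sketch. The real remapping is done in hardware tables set up by the hypervisor. The point the sketch makes is that the translation is installed once, and every subsequent device write lands straight in the guest's buffer with no hypervisor copy:

```python
# Toy model of VT-d-style DMA remapping (illustrative only; names
# like IOMMU.map/dma_write are invented for this sketch).

class IOMMU:
    """Steers device DMA addresses directly onto guest-owned buffers."""
    def __init__(self):
        self.remap = {}  # dma address -> (guest buffer, offset)

    def map(self, dma_addr, guest_memory, offset):
        # The hypervisor installs a translation once, up front.
        self.remap[dma_addr] = (guest_memory, offset)

    def dma_write(self, dma_addr, data):
        # The NIC "DMAs" a packet; the remapping hardware places the
        # bytes straight into the guest's buffer -- no kernel-to-guest copy.
        guest_memory, offset = self.remap[dma_addr]
        guest_memory[offset:offset + len(data)] = data

# A guest VM's packet buffer (standing in for guest physical memory).
guest_buf = bytearray(64)

iommu = IOMMU()
iommu.map(dma_addr=0x1000, guest_memory=guest_buf, offset=0)

# An incoming packet lands directly in the guest's designated buffer.
iommu.dma_write(0x1000, b"packet-payload")
print(bytes(guest_buf[:14]))  # b'packet-payload'
```

Contrast this with the trap-and-emulate path, where the same write would first land in a hypervisor-owned buffer and then be copied over.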
Another technology that has also been proposed is what is called single root I/O virtualization, or the SR-IOV interface. Now, backing up a little bit. What I showed you with the VT-d technology is that you're taking a buffer that belongs to a guest VM and mapping it into the device, so the device can directly do the DMA, right? But in other words, a particular VM is now owning the device. Now, there are multiple VMs in existence, and all of these VMs need to access the same NIC, the same hardware device. Then how do we do that? Well, that's where this single root I/O virtualization, the SR-IOV interface, comes into play. Basically, this is an extension of the PCIe specification. PCIe is, of course, the way by which the I/O devices, the peripheral devices, interact with the system. A PCIe device presents a physical function, but this physical function can be represented as a collection of virtual functions. So this is part of the hardware that actually allows you to do that. In practical deployments, there may be as many as 64 virtual functions per physical function available in a PCIe device. And each virtual function can be assigned to a different VM, or multiple virtual functions can be assigned to a given VM. What that gives you is multi-tenancy for the same hardware device across multiple VMs. The trick is essentially giving configuration registers to each virtual function. Each virtual function has its own configuration-register space, and that way we're not making a particular physical device owned by only one virtual machine. Because a physical device supports multiple virtual functions, each virtual function's owner gets to own that particular device. And so this allows multi-tenancy for a particular device. So that's the idea behind SR-IOV, which is saying that even though there is only one physical device, the physical port can now be shared by multiple virtual ports.
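The physical-function/virtual-function relationship above can also be sketched as a toy model. Again, this is illustrative Python only; the class names, the VM names, and the `assign` method are all invented for the sketch, and real SR-IOV configuration happens through PCIe config space and driver support. What the sketch shows is the structure: one physical function, many virtual functions, each with its own private configuration-register space, each assignable to a VM:

```python
# Toy model of SR-IOV (illustrative only): one physical function (PF)
# on the NIC exposes several virtual functions (VFs), each with its
# own configuration-register space, so multiple VMs share one device.

class PhysicalFunction:
    MAX_VFS = 64  # typical upper bound per PF in practical deployments

    def __init__(self, num_vfs):
        assert num_vfs <= self.MAX_VFS
        # Each VF carries its own private configuration registers.
        self.vfs = [{"vf_id": i, "owner_vm": None, "config": {}}
                    for i in range(num_vfs)]

    def assign(self, vf_id, vm_name):
        # A VF is handed to one VM, which then "owns" its slice of
        # the NIC without conflicting with the other VMs.
        self.vfs[vf_id]["owner_vm"] = vm_name

pf = PhysicalFunction(num_vfs=4)
pf.assign(0, "vm-firewall")
pf.assign(1, "vm-nat")
pf.assign(2, "vm-firewall")   # one VM may own multiple VFs

owners = [vf["owner_vm"] for vf in pf.vfs]
print(owners)  # ['vm-firewall', 'vm-nat', 'vm-firewall', None]
```

The design point is that the sharing is resolved in the device itself rather than in the hypervisor, which is why the VMs don't pay a software multiplexing cost.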
And the magic is happening here, in that it is part of the network interface card that allows these multiple virtual functions to be mapped to the same physical port, with separate device registers and configuration registers for managing each of these virtual functions. And the benefit of SR-IOV is that it allows multiple virtual machines to share the same physical NIC without any conflict. So multiple guest VMs can access the same physical NIC without any conflict. That's the idea behind SR-IOV. So now, each one of these VMs can be a separate network function, for instance. All of these network functions need to access the same NIC, and they can. And they can do their own processing without interfering with one another, all in a virtualized environment.