Okay, Ankit. So, what did we learn in that lesson? I guess the first big point is just that we're at very large scale, with very high bandwidth requirements between hosts, and tens of thousands of them. So we can no longer use just a few scale-up switches forming the backbone of the network. That would be very expensive, and even if we were willing to pay for it, we just can't get the kind of port density we need to network a data center that big.

Absolutely. As a result, data centers these days are commonly built with Clos, or leaf-spine, topologies, where you use a large number of cheaper commodity switches, networked together, to build the large data center. These networks can still be arbitrarily large while using very simple building blocks, with as much bandwidth as we want. It becomes feasible, but it doesn't necessarily become cheap. You're still paying for a lot of equipment, and you still have challenges like cabling: huge numbers of cables. If you really wanted a fully nonblocking network between tens of thousands of hosts, that is, one where a host never waits for the network, that would be a lot of equipment and a lot of cables.

Absolutely. And cable aggregation and physical organization get you a certain distance, but people are evaluating other solutions to this problem, right?

Yeah, so, you know, something I think is interesting is looking toward more radical, as yet unproven, solutions in the research community. Some very cool stuff, like using wireless links among racks in the data center.

Yeah, in fact, one proposal involves using wireless optics to connect the hosts or top-of-rack switches that need to communicate at the moment, based on traffic you're monitoring in the data center, connecting them up dynamically with lasers that are steerable at run time. So you establish a connection in response to the traffic characteristics, and you don't have to run cables between every pair of racks, or even build fabrics as dense as these Clos networks.

Cool. So obviously there are a lot of new challenges to solve there, from the physical layer on up, and we're not sure whether something like that will actually be cost effective in the future, but it's a very cool direction.

Absolutely. And you know, another direction these hyperscale data centers might go is in trying to tackle the tension between getting the economy of scale of a huge data center versus being physically close to your users so you get low latency.

Absolutely. In fact, I think it's very attractive to have some compute, general-purpose compute, closer to the edge, where your customers are, right? Latency-sensitive apps will greatly benefit from that. I will say, though, that will require a lot of re-architecting of both the network and the application. You have to change both.

Cool. It's an interesting space to watch. All right, we'll see you next time.
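
To make the scale argument in the Clos discussion concrete, here is a minimal back-of-the-envelope sketch, assuming a standard three-tier fat-tree (one common folded-Clos construction) built entirely from identical commodity switches. The function name and sample radix values are illustrative, not part of the lesson; the counts follow the usual fat-tree formulas.

```python
# Back-of-the-envelope sizing for a nonblocking three-tier fat-tree fabric
# built from identical commodity switches with `radix` ports each.
# Standard fat-tree counts: hosts = radix^3 / 4, switches = 5 * radix^2 / 4.

def fat_tree_size(radix: int):
    """Return (hosts, switches, cables) for a 3-tier nonblocking fat-tree."""
    hosts = radix ** 3 // 4         # radix pods, radix/2 edge switches each, radix/2 hosts per edge
    switches = 5 * radix ** 2 // 4  # radix^2 edge+aggregation switches plus (radix/2)^2 core switches
    cables = 3 * radix ** 3 // 4    # host-to-edge + edge-to-aggregation + aggregation-to-core links
    return hosts, switches, cables

if __name__ == "__main__":
    for radix in (24, 48, 64):
        hosts, switches, cables = fat_tree_size(radix)
        print(f"{radix}-port switches: {hosts:,} hosts, "
              f"{switches:,} switches, {cables:,} cables")
```

With 48-port switches, for example, this works out to roughly 27,000 hosts, about 2,900 switches, and over 80,000 cables, which is exactly the "feasible but not cheap" point made above.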
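And as a rough sketch of the reconfigurable-optics idea discussed above: the snippet below greedily pairs up the rack pairs with the hottest measured traffic so that each rack gets at most one steerable optical link per reconfiguration interval. The demand matrix and the greedy policy are hypothetical stand-ins for illustration, not the algorithm of any specific published proposal.

```python
# Sketch: given a rack-to-rack traffic estimate, greedily choose which rack
# pairs get a direct reconfigurable optical link for the next interval.
# Each rack is assumed to support at most one such link at a time.

from typing import Dict, List, Tuple

def schedule_optical_links(demand: Dict[Tuple[int, int], float]) -> List[Tuple[int, int]]:
    """Greedy matching: take the largest remaining demand whose racks are both free."""
    links: List[Tuple[int, int]] = []
    busy = set()
    for (a, b), _bytes in sorted(demand.items(), key=lambda kv: kv[1], reverse=True):
        if a != b and a not in busy and b not in busy:
            links.append((a, b))
            busy.update((a, b))
    return links

if __name__ == "__main__":
    # Hypothetical traffic estimate (bytes queued) between four racks.
    demand = {(0, 2): 9e9, (1, 3): 4e9, (0, 1): 3e9, (2, 3): 1e9}
    print(schedule_optical_links(demand))  # -> [(0, 2), (1, 3)]
```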