Welcome to Business Success with Cloud Native on Intel Architecture. My name is Jordan Rogers, and I am part of the Intel Sales and Marketing Group, based out of Europe. In this module, you will learn how Intel architecture is differentiated and delivers business value in a Cloud Native world. First, we are going to go through three real-world examples of how customers are using Cloud Native to power their business. Then we will have a quick history of the move from legacy applications to Cloud Native applications. We will spend the bulk of the time going through two real-world use cases, a 5G connected bus and rapid medical discovery, and lastly we will wrap up with a summary.

Anytime I hear about a trend like microservices and Cloud Native, I ask myself: why is this important? Foundationally, Cloud Native really means the ability to scale an application and run it across many Clouds, with microservices being the technique, and these three digital businesses have embraced Cloud Native applications to power their business. Let us take a quick look at each of them. First, Spotify: they are a very popular streaming company, and they use microservices to reach their 345 million active users. In the context of a bank, HSBC's PayMe business app completes 98 percent of transactions in 500 milliseconds or less, which is basically instantaneous. And Etsy, a very popular craft and do-it-yourself e-commerce site, is able to deploy changes 50 times a day, which is very rapid. What does that mean for all the other enterprises out there that are still embracing digitalization? Data points from Soft Clouds say that 86 percent of developers will be adopting microservices by default over the next five years. That is why we should care.
Let us briefly look at how traditional, or legacy, applications compare to the Cloud Native applications we saw on the previous slide. Starting with the traditional application, or, as this slide says, the monolithic application: you will see there are three very distinct sections. There is the data tier on the bottom, then the service tier and the web tier. These tiers are all connected by logic to create what is ultimately the application. When these applications needed to support, for example, more users, the best way to scale was to add more back-end storage as you needed to serve more users. As our habits have changed over the last 30 years, more devices have come online, whether it is a phone, a sensor, or us as individuals creating data through our smartwatches and so forth. This ultimately puts stress on these systems. The challenge they now face is processing and ingesting amounts and types of data they were not architected for many years ago: the data is not in a row or a column, it is, for example, a picture. Or they need to serve users globally, which is not something these systems were designed for. These challenges are not impossible to deal with in a monolithic approach, but it is a bit like flying: at first we had to go to the counter to get a paper ticket, and if we wanted to change a seat, we had to go up and ask whether a seat was available. Now we can do all of that on our phones, and that is really one example of how Cloud Native architecture has fundamentally changed.
How we interact, and the IT systems behind it, are all [inaudible]. I'm going to hand over to Vivek, who is going to do a deep dive on Cloud Native architecture.

Thank you, Jordan. My name is Vivek Kulkarni, and I am a US-based Cloud Solution Architect within the SMG organization. We heard, at a high level, what the legacy model is and the challenges it sometimes poses. Let's see how a Cloud Native architecture and the software layer can change things. The big change is having an application broken down into function-specific modules that can be instantiated just when needed, where needed, and orchestrated together. In short, this approach can also be referred to as a microservices-based architecture, and it is usually implemented in three steps. The first step is to design the application itself: isolating and modularizing the application functions to be standalone; reducing or eliminating, if possible, the amount of data and the duration for which it is held in processing; identifying an orchestration mechanism and the interconnects needed for the modules; and finally, identifying a process to develop, test, and deploy, as well as the overall monitoring mechanism. The second step is around the necessary infrastructure: identifying the compute capacity, aligned with different form factors depending on the placement environment and the feature capabilities needed, and identifying the data storage and data processing capacity that is needed, including how the data gets collected and aggregated, what type of back-end processing is really needed, and even data archival. The last piece of the infrastructure design covers the connectivity requirements, be it within the data center, some form of mobile connectivity, or the internet itself.
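To make that first design step a little more concrete, here is a minimal Python sketch of what isolating application functions into standalone, stateless modules and orchestrating them together can look like. All names here (validate_order, price_order, orchestrate) are hypothetical and for illustration only; in a real deployment each module would typically be its own independently built, tested, and deployed service.

```python
# Hypothetical sketch: a monolithic order flow split into stateless,
# function-specific modules that an orchestrator invokes in sequence.

def validate_order(order: dict) -> dict:
    """Standalone validation step: no shared state, input in, result out."""
    if order.get("quantity", 0) <= 0:
        raise ValueError("quantity must be positive")
    return {**order, "validated": True}

def price_order(order: dict) -> dict:
    """Pricing as its own module; it could be scaled or replaced separately."""
    return {**order, "total": order["quantity"] * order["unit_price"]}

def orchestrate(order: dict, steps) -> dict:
    """Minimal orchestration: feed each module's output to the next one."""
    for step in steps:
        order = step(order)
    return order

result = orchestrate({"quantity": 2, "unit_price": 5.0},
                     [validate_order, price_order])
print(result["total"])  # 10.0
```

Because each step holds no state of its own, any instance of a module can serve any request, which is what makes the just-when-needed, where-needed instantiation described above possible.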
The last step is around making use of existing independent software vendors' solutions or services that might be useful to enhance or speed up development and testing, whether that is the skill set or the dev/test environments. On top of that, there is the Cloud solution provider of choice, or an affinity for infrastructure as a service, platform as a service, or even SaaS services that might be useful for designing the overall solution.

We saw earlier how an application architecture can be transformed to be Cloud Native. We can group the main characteristics into being portable, lightweight, scalable, rapid-lifecycle, stateless, and decomposed. The associated business benefits are as follows. Portability enables multiple user experiences; you could also say a greater appeal to a large section of end-users. At the same time, it enables more flexible control over the infrastructure capacity, which for businesses takes the form of better utilization of that capacity. Being lightweight means the application can exist anywhere and everywhere; for businesses, that means more ways to provide value to the end-user, which eventually drives up revenue, while also lowering the cost of underutilized infrastructure. The third one, being scalable, opens up choices to house the right infrastructure at the right place, be it on-premise, a single hyperscale Cloud, multi-Cloud, or a hybrid infrastructure. Number 4, the rapid lifecycle, helps control the cost of engineering and maintenance, possibly lowering the cost of development and support, and at the same time it enables, to an extent, savings on operational cost.
Number 5, being stateless, helps isolate the data from the application logic, which provides better control over the data and clear visibility into data compliance and regulatory needs. The very last one, being decomposed, adds a lot of flexibility to add or remove features based on actual usage, something like faster adaptation to user needs, which drives a lot of stickiness in the user base.

Another way to look at where businesses can drive success lies in maximizing return on investment for engineering and technology operations. To expand a little: applications can be built with a canary test mechanism, enabling the engineering for a faster time to market. In terms of orchestration, there are two main things that drive greater operational efficiency. Number 1, there is a built-in ability for faster instantiation, updating, and even rolling back, which directly improves the user experience. For example, think of an application that is available all the time and is up to date with the latest fixes addressing any vulnerabilities. The Number 2 benefit of orchestration is around automation in the form of infrastructure as code. This enables automated scale-out based on need, which is directly tied to reducing underutilized or even idle capacity that you don't typically need, and at the same time it enables the ability to expand into new geographic regions, increasing the user base. Finally, in terms of deployment, it drives engineering costs down and enables faster time to market, and deployment automation directly contributes to responding quickly, scaling up or down based on demand. Next, let's take a look at a few real-life examples.
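As a hedged illustration of the canary mechanism just mentioned, the Python sketch below (all names hypothetical) routes a fixed, deterministic slice of users to a new version of a service; rolling back is then simply setting the canary share to zero.

```python
import hashlib

def canary_route(user_id: str, canary_percent: int) -> str:
    """Deterministically assign a fixed share of users to the canary build.

    Hashing the user id keeps each user's assignment stable across
    requests, so a user never flips between versions mid-session.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < canary_percent else "v1-stable"

# Send roughly 10% of users to the new version.
assignments = [canary_route(f"user-{i}", 10) for i in range(1000)]
canary_share = assignments.count("v2-canary") / len(assignments)

# Rolling back: set the canary share to zero and every user gets v1 again.
rollback = [canary_route(f"user-{i}", 0) for i in range(1000)]
```

If the canary's error rate stays flat, the percentage can be raised step by step until the new version serves all traffic; if problems appear, only the small canary slice of users ever saw them.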
Let's take a look at a business case that is very relevant to what we all went through in 2020 because of the COVID-19 pandemic. It has all the typical characteristics of a workload you can classify as high-performance computing. If we look at the business requirements for the solution: it needs to consume multiple sets of data in large volume; analysis needs to be done and redone on sections of the data alongside the cumulative collection of other datasets; and the technology infrastructure needed to collect, store, and process typically starts small but increases exponentially and very rapidly. Above all, the processes and procedures need to be highly automated and repeatable to drive cost optimization, regardless of which form of infrastructure is used: on-premise or Cloud, hybrid or even multi-Cloud. This is how an actual implementation was done, making use of Intel-optimized offerings in Google Cloud. The key business success here is overall cost control of the solution while delivering an ability to rapidly build and deploy highly automated work streams that do two things. Number 1, use just the right infrastructure to securely collect and store the data, and spin up the right amount of compute capacity only when needed. Number 2, scale the infrastructure rapidly, be it throttling down or up. Finally, the solution is flexible enough to make use of infrastructure wherever it is available. To summarize, the business was able to optimally utilize the infrastructure and scale it to match the exact need of the day.
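To sketch what "spin up the right amount of compute only when needed" can look like in practice, here is a small scale-out rule in the style of the Kubernetes horizontal pod autoscaler (desired = ceil(current × currentMetric / targetMetric)). The numbers and bounds are illustrative assumptions, not details of the actual implementation described above.

```python
import math

def desired_replicas(current: int, current_metric: float,
                     target_metric: float, max_replicas: int = 20) -> int:
    """HPA-style scale-out rule: grow or shrink capacity so the observed
    per-replica metric (e.g. CPU utilization %) converges to the target."""
    desired = math.ceil(current * current_metric / target_metric)
    return max(1, min(desired, max_replicas))  # clamp to sane bounds

# Load above target: 4 replicas at 90% CPU against a 60% target -> scale out.
scale_out = desired_replicas(4, 90, 60)   # 6
# Demand drops: 6 replicas at 20% CPU -> throttle down, cut the idle capacity.
scale_in = desired_replicas(6, 20, 60)    # 2
```

Running this rule on a schedule, driven by real metrics, is what lets the infrastructure start small and then grow exponentially with the workload, while shedding capacity the moment it is no longer needed.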
Overall, we can see how the core principles, or core advantages, of Cloud Native application architecture were put into practice by making each part of the solution decomposable, stateless, and scalable on-demand, and, most importantly, by enabling a rapid lifecycle for development, testing, and deployment, leveraging the best infrastructure wherever it is available. You can also see in this picture where the Intel value propositions can be leveraged, be it on-premise infrastructure or the Cloud.

Let's take a look at another business case, which resonates very well with the present-day talk of the town. Here, the business is looking for a technology solution to build a vehicle that is intelligent enough to drive itself while providing all possible creature comforts to the occupants, and that is safe to its surroundings wherever it goes. The solution needs to be highly capable, adaptable, rapidly deployable, and scalable, and, most importantly, it needs to be developed quickly and in a cost-effective manner without compromising any security and compliance standards. Let's take a look at the software architecture first and then see where and how Intel can bring value to make the solution real. The key guiding principle here is to apply the Cloud Native application architecture to three distinct areas. One, in the vehicle, it could be in the form of small-footprint, function-specific sets of modules serving specific needs, with all these modules orchestrated together in a consistent, controlled, prioritized way, ensuring, number 1, an always-active interface to the drive control systems, and, number 2, that the data is consistent, secure, and usable at all times. In the data center, it will be in the form of function-specific modules orchestrated with always-available connectivity, with the whole infrastructure ready to scale and serve simultaneous service requests at any given time.
The third component, which would be in the Cloud or possibly even with the telco provider, will take a form similar to the data center, but with added automation to rapidly scale to meet the compute needs, to churn through the data, or possibly even to develop reusable data models. The Cloud could also host a tool-set infrastructure for faster development, testing, and deployment roll-outs. With the telcos, there will be options built in to make use of available, efficient, and optimal network bandwidth. Now, if we take a step further into the architecture, the task of identifying the components with Intel value propositions becomes a little easier, be it a component in the vehicle, in the data center, or in the Cloud. Even a partner solution or service that can enhance the Intel affinity can be very clearly identified here. To summarize, with a Cloud Native approach, we can see the business can drive success by, number 1, implementing a solution that can be developed and deployed rapidly, number 2, delivering a great modern user experience as close as possible to the end-user, and then having the necessary infrastructure that scales on-demand. We can also see the explosion of opportunities where Intel components can deliver success to the customer. Thank you for your time and attention, and handing it back to Jordan for the summary.

Thank you for that, Vivek. Through your use cases, you clearly highlighted that business transformation requires a holistic view. What I want you to walk away with, and I hope this was time well spent with us today, is that, first off, over the next five years this is only going to become more and more popular, so focus here. Secondly, when we look overall at where you can focus your time and apply Intel value, it is really around the orchestration and the process.
Then third, the value proposition that the infrastructure brings really comes down to what the workload is and, ultimately, the larger, more holistic use case. Lastly, we hope that those last two use cases, the connected bus and genomics, really showed you where Intel value surfaces up through the application and ultimately enables our digital world. Thank you very much for your time, and best of luck on the rest of Cloud.