In the previous modules, we learned the basics of Machine Learning Operationalization, that is, ML Ops: why it is needed, various ways of implementing it, and its impact. In this module, we will discuss a Google Cloud product, AI Platform Pipelines, that makes ML Ops easy, seamless, and scalable with Google Cloud services. Let's start.

Throughout this module, we will learn what a machine learning pipeline is and what its important parameters are, the basics of AI Platform Pipelines, a Google Cloud product to build ML pipelines, and how it differs from regular ML Ops pipelines. We will also discuss the technical stack behind this product and the ecosystem around it that makes it very collaborative and scalable.

Let's start with an overview of ML Ops. We will briefly discuss its impact on artificial intelligence as a field and try to understand the challenges you may face while building such pipelines.

Here at Google, we believe in democratizing machine learning, and we always target three important factors while building any AI solution. The first one is simple: make solutions so simple that anyone can take advantage of them regardless of their technical capabilities. AI solutions should be simple enough that even a business audience can understand and implement them. The second one is fast: make them fast so you can iterate and succeed more quickly. AI cycles should be as agile as possible. Build fast and fail fast; more experimentation leads to better results and helps in making a robust, production-grade solution. The third and last one is useful: make them useful in solving your real business problems. Without business impact, any technical solution is of very little use.

When we talk about AI solutions, there are three common problems that we usually face. The first one is deployment. Many times, infrastructure is very brittle and doesn't scale as you need to grow.
Complexity in the infrastructure also increases the complexity of deploying machine learning systems or pipelines. You need a robust toolbox to build, manage, and deploy a machine learning pipeline, which can be different from a regular CI/CD pipeline. This toolbox should be compatible and tightly integrated with your infrastructure, and it may need to include model retraining and human validation in the loop. Kubeflow and TensorFlow Extended, TFX, are great solutions for this problem.

The second one is talent. Very limited talent is currently available in machine learning, and you may need to lower the bar to build AI apps. You need a solution that can be reused and re-implemented by anyone. You can also create a tiered architecture in your organization: while one segment is building solutions from the beginning, a second segment is using them and customizing them for various other business problems, and a third segment is just consuming them.

The third common problem is collaboration. You need an ecosystem in your organization, a community to extend support, enable specialized skills to flourish for both publishers and consumers, and seed the market with content. You may also need access to a marketplace where you can publish your solutions or pipelines so that anyone in your organization can take them, modify them as per their needs, and use them. Such a marketplace should be created with the goal of making the developer experience better.

Let's start discussing a few potential solutions. The first one is using a framework to build and deploy your pipelines. Kubeflow and TensorFlow Extended, TFX, are two common ones. They are very easy to start with and exhaustive enough to cater to a range of your needs. Kubeflow provides out-of-the-box support for a lot of common ML frameworks like TensorFlow, PyTorch, Caffe, and XGBoost. TFX is state-of-the-art technology and provides Google's best practices on TensorFlow.
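To make the deployment toolbox idea concrete, here is a minimal conceptual sketch in plain Python of a pipeline with an automated quality gate, model retraining, and human validation in the loop. This is an illustration of the concept only, not the Kubeflow or TFX API; every function name here is an assumption made for the example.

```python
# Conceptual sketch of a quality-gated ML deployment pipeline.
# All functions are illustrative stand-ins, not real framework calls.

def train(data):
    # Stand-in for a real training step; returns a "model" with a score.
    return {"trained_on": len(data), "accuracy": 0.91}

def evaluate(model, threshold=0.9):
    # Automated quality gate: block deployment of weak models.
    return model["accuracy"] >= threshold

def human_approves(model):
    # Human-in-the-loop validation; a real system would pause here
    # and wait for a reviewer's decision.
    return True

def run_pipeline(data):
    model = train(data)
    if not evaluate(model):
        return "retrain"        # trigger model retraining
    if not human_approves(model):
        return "rejected"       # reviewer blocked the release
    return "deployed"

print(run_pipeline([1, 2, 3]))  # -> deployed
```

In a real Kubeflow or TFX deployment, each of these functions would be a containerized pipeline step, and the retraining and approval branches would be orchestrated by the framework rather than by simple return values.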
TFX integrates tightly with TensorFlow and supports end-to-end deployment from data ingestion to serving. These services are swappable and scalable. Scale is very important today: you never know when your computation requirements will increase by 100X or even 1,000X, and users can grow exponentially, so your ML pipeline should be robust enough to handle that. You cannot just rely on fixed virtual machines with dedicated processing power, and running large machines will be costly to deploy as well as to maintain. These frameworks can support different platforms as well: they can work on-premises as well as in the Cloud. Given the requirement to use different models on different platforms and integrate back with your legacy environment, these frameworks fit very well with your infrastructure.

The second solution is building reusable pipelines. The biggest question is: why do I need to build the same components again and again to create my pipeline? Why can't I use pre-built blocks and stitch them together to get my job done? Why can't a machine learning pipeline follow the idea of Lego, where we have predefined blocks and you just need to know which ones suit you best? Pipeline frameworks like Kubeflow or TFX provide you with the capability to reuse already-built components rather than starting from scratch and reinventing the wheel. You can search, learn, and then replicate already successful pipelines. This will help you solve the actual user problem rather than going through unnecessary pain.

The third and last solution is a collaborative ecosystem: a place where you can publish your pipelines or notebooks so that anyone else can take them and use them. AI Hub, which Google Cloud provides, is one example of that. It is a private as well as a public marketplace for your solutions, where you can publish your work either internally to your organization or to the whole world, as you prefer. You can search for already available pipelines and then directly deploy them on your Cloud instance.
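The Lego-block idea above can be sketched in plain Python: a pipeline is just an ordered list of swappable step functions, so you can reuse prebuilt components and replace only the one that differs for your use case. The step names below are assumptions made for illustration, not real Kubeflow or TFX components.

```python
# Conceptual sketch of reusable, swappable pipeline "blocks".
# Each step takes the pipeline state and returns the updated state.

def ingest(state):
    state["rows"] = 100             # stand-in for data ingestion
    return state

def train_churn_retail(state):
    state["model"] = "retail-churn"  # the originally published block
    return state

def train_churn_finance(state):
    state["model"] = "finance-churn"  # customized block for finance
    return state

def serve(state):
    state["served"] = True           # stand-in for model serving
    return state

def run(steps):
    # Stitch the blocks together and run them in order.
    state = {}
    for step in steps:
        state = step(state)
    return state

# Reuse the published pipeline, swapping only the training block:
retail_pipeline = [ingest, train_churn_retail, serve]
finance_pipeline = [ingest, train_churn_finance, serve]
print(run(finance_pipeline)["model"])  # -> finance-churn
```

The point of the sketch is the composition: only one block changed between the two pipelines, which is exactly the reuse story that frameworks like Kubeflow and marketplaces like AI Hub are built around.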
You can also modify a deployed pipeline based on your use case. This reduces the effort of solving a problem from scratch. For example, if you're solving a problem to predict customer churn for your finance business unit, but the available pipeline is trained on retail data, you can take it and modify it to make it useful for you. Then you can publish it internally so that anyone in your organization can actually use it. AI Hub provides you a very clean, simple, and fast implementation: it takes just one click of a button to deploy pipelines on Google Cloud. If you want to make it hybrid, which means making it work on both Cloud and on-premises, you can do that as well.

To make things easier, we can explain what this ML Ops or ML pipeline ecosystem looks like by comparing it with the Android ecosystem. In this analogy, you can consider Google Cloud infrastructure as your hardware, like a Google Pixel phone in the Android ecosystem, where you deploy things and where you actually use things. Google Cloud services are like Google Mobile Services, letting you take advantage of pre-built libraries to build any app. ML pipelines using Kubeflow or TFX compare with the Android operating system and Software Development Kit, SDK, as a high-level abstraction to orchestrate your solution. Finally, you can compare AI Hub to the Play Store, providing you a marketplace or collaboration tool to publish your work and share it with others. That's how the analogy between the Android ecosystem and your Google Cloud ML pipelines ecosystem looks.