The next set of tools and services used in SAP deployments are primary and secondary storage services. Primary storage in this case means the disks that are attached to Compute Engine instances, or VMs, to store application data for running SAP workloads. Secondary storage services look after the long-term retention of data for backup.

Google Cloud offers a range of storage services. Persistent disks, or PD for short, come as either standard disk drives, which are classical spinning disks, or SSDs, the faster solid-state drives. Cloud Filestore offers NFS at a zonal level. These are used as primary storage for SAP installations. For backup on secondary storage, Cloud Storage is the service for blob storage. Machine images back up complete VMs, along with the VM configuration and all attached disks of a VM. There are also persistent disk snapshots that can be used to back up individual disks.

Let's start with persistent disk, or PD for short, which is the native service that provides standard disk drives and solid-state drives in Google Cloud. PDs store your data independently from your virtual machine instances, so you can detach or move your disks to keep your data even after you delete your VM instances. Selecting the right type of PD, sizing the disks, and specifying the disk layout for the volumes required for SAP HANA and application servers are critical to ensure the performance and reliability of an SAP system.

So for SAP deployments, it is important to note a few key features of PD. First, PD IOPS and throughput performance depend on the instance vCPU count, disk size, and I/O block size, and also on other factors like the network egress cap, and so on. For example, a 1 TB PD SSD that is limited to 15K IOPS when operated with a 4-vCPU machine can scale up to 25K IOPS when the same 1 TB PD SSD is operated with a 16-vCPU machine.

Next, PD performance also scales automatically with the size of the disk, so you can just upsize your existing persistent disk to meet increased performance and storage space requirements. Each PD can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes. Of course, if you need to exceed 64 TB on one PD, you can attach additional disks.

You can also attach disks of different types, standard or SSD, to the same virtual machine. When you do this, the SSD limits determine the overall performance, but IOPS and throughput are split equally between the disks, independent of the size of each disk. There are no per-I/O costs associated with persistent disks in Google Cloud. With PD, you don't need to stripe multiple disks to improve performance; your I/O performance scales linearly as you increase the size of your PD. These features are very relevant when you design a disk layout for SAP HANA deployments that meets the disk performance requirements specified by SAP. A short provisioning sketch follows below.

Cloud Filestore is the no-ops NFS service native to Google Cloud. It is fully managed network-attached storage for Compute Engine. Filestore instances live in zones within regions. It is mostly used in scale-out installations of SAP HANA with standby nodes for high availability; in this case, all the scale-out SAP HANA nodes will be in the same zone. Filestore has minimum capacity requirements: for the standard tier, this is 1 TB; if you choose the SSD tier, it is 2.5 TB. Filestore is not suitable for HA installations of SAP application servers across two different zones, because Filestore is a zonal service.
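To make the persistent disk operations above concrete, here is a minimal sketch using the google-cloud-compute Python client library. The project ID, zone, disk name, and size are hypothetical placeholders, not an SAP-certified layout; the point is the general pattern of creating a PD SSD and backing up an individual disk with a snapshot, as described above.

```python
# Minimal sketch, assuming the google-cloud-compute client library
# (pip install google-cloud-compute). Project, zone, names, and sizes
# are illustrative placeholders only.
from google.cloud import compute_v1

PROJECT = "my-sap-project"   # hypothetical project ID
ZONE = "europe-west4-a"      # example zone


def create_pd_ssd(name: str, size_gb: int) -> None:
    """Create a PD SSD; its IOPS/throughput scale with provisioned size
    (and with the vCPU count of the VM it is attached to)."""
    client = compute_v1.DisksClient()
    disk = compute_v1.Disk(
        name=name,
        size_gb=size_gb,
        type_=f"projects/{PROJECT}/zones/{ZONE}/diskTypes/pd-ssd",
    )
    operation = client.insert(project=PROJECT, zone=ZONE, disk_resource=disk)
    operation.result()  # block until the disk is provisioned


def snapshot_disk(disk_name: str, snapshot_name: str) -> None:
    """Back up a single disk with a persistent disk snapshot."""
    client = compute_v1.DisksClient()
    operation = client.create_snapshot(
        project=PROJECT,
        zone=ZONE,
        disk=disk_name,
        snapshot_resource=compute_v1.Snapshot(name=snapshot_name),
    )
    operation.result()


if __name__ == "__main__":
    # Upsizing a single PD (rather than striping several) is the usual
    # way to gain performance: one PD can grow to 64 TB.
    create_pd_ssd("hana-data", size_gb=1024)
    snapshot_disk("hana-data", "hana-data-snap-001")
```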
You will get more details and recommendations for HA in a later module of this course.

Let's now move to options for secondary storage. The first in this area is Cloud Storage. You might already be aware of the features of Cloud Storage, such as its storage classes. For SAP deployments, Cloud Storage buckets are a cost-effective option to store data backups, and later on also archives. Data backups are used to recover from disasters, so their availability across regions is critical. So let's look into Cloud Storage locations in a bit more detail.

First, we have the regional storage location, which is storage within a single region. This is not that relevant in the context of disaster recovery. Second, we have multi-region. Multi-region storage is geo-redundantly replicated across at least two regions within a large geographic area, such as the EU. However, you don't get to choose the regions, and the price is slightly higher. Third, dual-region storage uses a specific pair of regions that you specify; in this example, it's Finland and the Netherlands. If these two regions are your primary and disaster recovery regions, then dual-region is a good option, because your backups would be redundantly replicated and made available across these two geographic regions. If you are going to deploy across two regions such as London and Belgium, and there is no dual-region pair available for these two regions, then we recommend that you deploy buckets in single regions, one in London and one in Belgium in this example, and set up replication of storage across these two, either manually or scripted.

In addition to native services, Google has also partnered with best-of-breed shared file system providers to offer backup solutions and shared file system solutions. These solutions are certified by SAP and are already widely used by SAP customers. Google Cloud partner NetApp offers Cloud Volumes ONTAP for shared file storage, available in select Google Cloud regions, and Cloud Volumes Service for Google Cloud, a fully managed file storage service. For backup and recovery, third-party solutions from Actifio, Dell, and Commvault provide enhanced capabilities for SAP, such as test data management, push-button DR, and distinct and differentiated deduplication capabilities. These solutions are also certified by SAP.

So here are some recommendations for using Cloud Storage for SAP deployments. You can use the Cloud Storage Backint agent for SAP HANA backups, so that you can write backups directly to a Cloud Storage bucket instead of a PD. Provision dual-region storage for backups if the pair matches your regions. Provision regional storage in both primary and DR regions, and set up replication between them, if no dual-region pair matches your regions. Finally, use Archive storage for backups that will be retained for more than a year. A short sketch of these bucket operations follows below.
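As a minimal sketch of these recommendations, here is how the bucket setup might look with the google-cloud-storage Python client library. The project ID and bucket names are hypothetical placeholders, and the copy loop is a simplistic stand-in for a scheduled replication job; EUR4 is the predefined Finland/Netherlands dual-region mentioned in the example above.

```python
# Minimal sketch, assuming the google-cloud-storage client library
# (pip install google-cloud-storage). Project ID and bucket names are
# illustrative placeholders.
from google.cloud import storage

client = storage.Client(project="my-sap-project")  # hypothetical project ID

# Case 1: a dual-region pair matches your primary and DR regions.
# EUR4 pairs europe-north1 (Finland) with europe-west4 (Netherlands).
backup_bucket = client.bucket("my-sap-hana-backups")
client.create_bucket(backup_bucket, location="EUR4")

# Transition backups retained for more than a year to Archive storage.
backup_bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=365)
backup_bucket.patch()

# Case 2: no dual-region pair matches (e.g., London and Belgium), so
# provision one regional bucket per region and replicate yourself.
src = client.create_bucket(client.bucket("sap-backups-london"),
                           location="europe-west2")   # London
dst = client.create_bucket(client.bucket("sap-backups-belgium"),
                           location="europe-west1")   # Belgium
for blob in client.list_blobs(src):
    src.copy_blob(blob, dst, blob.name)  # naive one-shot copy, not incremental
```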