In this demo, you will learn how to install the Migrate for Compute Engine infrastructure components on GCP and vSphere. Once you install all the components, you'll be ready to start migrating virtual machines. Migrate for Compute Engine gets enterprise applications running in Google Cloud within minutes while data migrates transparently in the background. With Migrate for Compute Engine, enterprises can validate, run, and migrate applications into Google Cloud without rewriting them, modifying the image, or changing management processes. Migrate for Compute Engine provides a path for you to migrate your virtual machines running on VMware vSphere to Compute Engine. It can also migrate your physical servers and Amazon EC2 or Azure VMs. In the following lessons, you will see the same functionality with Amazon EC2.

To begin, let's first discuss the primary components of a Migrate for Compute Engine installation. The following three components are deployed within Google Cloud. First, the Migrate for Compute Engine manager manages all components and orchestrates migrations; it also serves the Migrate for Compute Engine web user interface. Cloud extensions handle storage migrations and serve data to migrated workloads during migration; a cloud extension is a pair of cloud edge nodes that runs within Google Cloud. The Migrate for Compute Engine exporter creates Google Cloud persistent disks when detaching disks.

In this scenario, we will also be deploying some components on-premises in our VMware vSphere environment. Those components are the Migrate for Compute Engine on-premises backend virtual appliance, which serves data from VMware to the cloud extension running in Google Cloud, and the Migrate for Compute Engine vCenter plug-in, which connects vCenter to the Migrate for Compute Engine manager.

Migrate for Compute Engine decouples virtual machines from their storage and introduces several capabilities as they are moved to Google Cloud. One such capability is ease of deployment: you can install the Migrate for Compute Engine virtual appliances in just a few steps without installing agents on servers. There is also simple management in VMware vCenter for VMware migrations; the plug-in flattens the learning curve for VMware administrators, and integration with vCenter tasks, events, and alarms provides visibility and control over migrations. Migrate for Compute Engine is also secure by design: data transfers between Migrate for Compute Engine components use TLS and AES-128 encryption, and data at rest is deduplicated, compressed, and encrypted with AES-256. You're also able to boot over the WAN, since Migrate for Compute Engine performs a native boot in the cloud for your virtual machines in a few minutes, regardless of image size. While the image boots, Migrate for Compute Engine adapts it for the target environment; no changes to the application, original image, storage drivers, or networking are necessary. In addition, there's an intelligent streaming capability: Migrate for Compute Engine prioritizes the data necessary for an application to run and moves that data to the cloud first, while other data streams to the cloud when needed. To provide high performance during these migrations, Migrate for Compute Engine provides a multi-tier caching and optimization layer, which includes a read/write layer in the cloud.
The cache stores data needed by the application, and deduplication, prefetching, asynchronous write-back, and network optimizations further accelerate the migration, reducing migration bandwidth by up to 75% in production migrations. Another key component is resiliency. Migrate for Compute Engine cloud extensions use an active/passive configuration across two availability zones. Data is written in both zones and then synced back on-premises to reduce the risk of data loss. Optionally, writes can persist solely in the cloud for development and testing. The recovery point objective, or RPO, is the maximum acceptable length of time during which data might be lost due to an incident. Migrate for Compute Engine's architecture ensures a 30-second RPO for sync to Google Cloud Storage in the case of a dual-zone failure and a one-hour RPO for sync on-premises. Last but not least, Migrate for Compute Engine supports multiple operating systems.

Now that you have a solid understanding of the architectural design of Migrate for Compute Engine, let's go ahead and deploy the product within Google Cloud and within VMware vSphere, which is our demo environment.

To begin, you will want to open Cloud Shell within Google Cloud. Once open, navigate to /google/velostrata. If we take a look in this directory, we will see that there's a Python script, velostrata_sa_roles.py, which we will want to execute. This script will create two individual roles, each with its own permissions, as well as two service accounts, which will be bound to each role. These roles and service accounts are critical so that the product has the appropriate permissions to carry out its functions. When running the script, we will want to provide a -p argument, which contains the project ID of the project in which the service accounts and roles will be created, as well as a -d argument, which consists of a description of up to eight characters for the roles and permissions that we are deploying. When done, you will notice that a Velostrata Manager role and a Velostrata Storage Access role have both been created, as well as the Velostrata Manager service account and the Velostrata Cloud Extension service account. One thing to take note of here is that Velostrata was the previous product name for Migrate for Compute Engine, so if you see it referenced, just understand that it is synonymous with Migrate for Compute Engine.

Now that the necessary roles and service accounts have been created, we are going to go to Google Cloud Marketplace to deploy the Migrate for Compute Engine manager. Once at Marketplace, search for Migrate for Compute Engine (formerly Velostrata). Click on the available option and then click Launch on Compute Engine. We will then want to select a unique name for the deployment. We will also choose the zone in which to deploy the manager; in this scenario, we will be deploying in us-east1-b. You will want to complete the fields that list the manager service account and the cloud extension service account. In addition, you will want to come up with a Migration Manager API password as well as a private key encryption password. For this scenario, I will use something simple, but when you're using this in production, it is recommended to provide much more complex passwords and to make the two passwords different from each other. The final option is a comma-separated list of network tags.
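Before moving on to the network tags, here is a rough sketch of what the earlier Cloud Shell step might look like. The project ID my-demo-project and the description demo are placeholders, and the exact script invocation should be confirmed against the product documentation:

    # Run the helper script that creates the two roles and the two service accounts.
    cd /google/velostrata
    python velostrata_sa_roles.py -p my-demo-project -d demo

    # Optionally verify what was created.
    gcloud iam roles list --project my-demo-project
    gcloud iam service-accounts list --project my-demo-project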
I will use the tag fw-velostrata, which is a network tag that I created earlier as part of a firewall rule that allows ingress from a particular IP range and is necessary for the Migrate for Compute Engine manager to function properly. After all of the values have been filled out for the Migrate for Compute Engine (formerly Velostrata) Deployment Manager template, you can go ahead and click Deploy to deploy the manager. You will want to wait for the Deployment Manager template to complete. Once Deployment Manager has completed, you'll want to click on the Migration Manager UI address. The Migration Manager may take several minutes to start up, so do not be discouraged if it does not come up the first time; simply refresh the page. Next, you will be prompted for a username and password. The username was given to you during the Migrate for Compute Engine Deployment Manager deployment, and that is apiuser. If you've forgotten the password that you set, you can easily find it by clicking on the Migration Manager instance, either within Google Compute Engine or directly from Deployment Manager, as shown. Scroll down to the metadata section and you will find your API password. Upon logging in to the manager for the first time, you will want to select Enable Stackdriver Logging and Enable Stackdriver Metrics. These two settings allow your VMs to collect periodic performance data so that when you perform a right-sizing operation, the data is there to determine the appropriate size of each VM. When done, click OK.

The next step is to deploy the Migrate for Compute Engine backend within your VMware vSphere environment. To do so, click on System Settings and start by creating a token. Make sure to copy the token to your local clipboard. Next, you will download and deploy the on-premises backend OVA appliance. Once downloaded, log in to your vCenter web client. After logging in, deploy the backend OVA: right-click the data center and select Deploy OVF Template. Make sure to choose the recently downloaded file for the backend. Once done, click next and provide a name; in this case, I will use the default name. Choose the compute resource where you want to run the deployed template, review the details provided, and select next. Then select the appropriate storage option; in this case, I'm going to use thin provisioning. Press next and choose the network. You next need to configure several settings within the Customize template menu. Expand the Migrate for GCE backend appliance configuration. Paste the token that you recently created in the manager into this menu. You will then choose an SSH password that will be used for the backend. When done, click next, verify all of your settings, and if everything looks appropriate, click finish to deploy your backend.

Once the Migrate for Compute Engine backend has been installed, you need to power it on. You will want to wait for the backend to fully power on before you go back to the manager to verify that it is properly registered. After several minutes, go back to the Migrate for Compute Engine manager web interface and click on System Settings. You will want to verify that the VMware backend is properly registered by clicking the refresh button. Please note that it can take several minutes before it fully comes online.
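As an aside while the backend registers, recall that the fw-velostrata network tag used earlier belongs to a firewall rule created ahead of time. A rough sketch of such a rule from Cloud Shell might look like the following, where the network, source range, and allowed protocols are placeholders to adapt to your environment and the product's documented port requirements:

    # Illustrative only: an ingress rule applied to instances tagged fw-velostrata.
    gcloud compute firewall-rules create fw-velostrata \
        --network=default \
        --allow=tcp,udp,icmp \
        --source-ranges=10.0.0.0/8 \
        --target-tags=fw-velostrata

    # Confirm the rule exists before continuing.
    gcloud compute firewall-rules describe fw-velostrata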
After clicking refresh, you will now see that the registration status is registered, the connection status is connected, and the backend IP address has been assigned. Our next step is to register the vCenter plug-in. Click on the vCenter plug-in tab and click Register vCenter plug-in. Enter either the DNS name or the IP address of your vCenter as well as the corresponding username and password. Once done, click Register. If everything is done correctly, you will receive a security warning, and you should click Connect to proceed. When done, you will see the button to unregister the vCenter plug-in, which means that everything was properly registered. Let's go back to our VMware vCenter web interface. To ensure that the plug-in is working properly, we're going to log out and log right back in to vCenter. To verify that the vCenter plug-in is properly installed, you can right-click any VM, and towards the bottom, you should see the Migrate for GCE operations menu.

Now that we've deployed the manager within GCP, deployed the backend within VMware, and registered the vCenter plug-in, our last step to set up Migrate for Compute Engine is to deploy cloud extensions within Google Cloud Platform. To do so, go to the manager web interface and click on Target Cloud. We are going to create a new cloud extension. We will select our project, the region, which in this case is us-east1, and the VPC. We then enter the edge node network tags, which, as you may remember, was fw-velostrata. We are then asked for the default network tag for workloads. In advance, I've set up a tag called fw-workload. You can see here that this fw-workload tag is an ingress filter allowing traffic on TCP port 3260 (iSCSI) and the required UDP port. We will then select the default destination project for workloads and the default service account, which we created earlier for our cloud extension. Next, click the plus sign under the cloud extension subsection. We now have to give our cloud extension a name. Next, choose the service account for the edge nodes, which is the other service account that we previously created. We are going to use the small size for the cloud extension, but when you're doing production migrations, you will want to use the large size. Next, we will select the VMware data center. You can proceed by clicking on zones and selecting the zones in which you want to deploy the cloud extension; I will choose us-east1-b and us-east1-d. You will then choose the subnets that apply to each of the zones in which we're deploying edge nodes, as well as the default workload subnet. Next, we will click on labels. Once I enter arbitrary labels and their corresponding values, I will click OK. At this point, the Migrate for Compute Engine manager is reaching out to Google Cloud Platform to deploy the cloud extension's two edge nodes, one in each zone. We will now wait for the cloud extension to be fully deployed.

Now that the cloud extension has been successfully deployed, we can go ahead and start it. Click on the cloud extension and then click the start button. When prompted to confirm that you want to start the cloud extension, click Yes. You will notice that the state of the cloud extension is starting. After several minutes, you will see the state switch from starting to active. You can even click the health checks to confirm that all health checks for the cloud extension have passed.
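If you want to confirm the deployment from the Google Cloud side as well, you can list the edge node instances that the manager created in the two zones. This is purely an illustrative Cloud Shell check; the instance names are chosen by the manager, so the filter below only narrows by zone:

    # List the instances in the two zones used by the cloud extension.
    gcloud compute instances list \
        --filter="zone:( us-east1-b us-east1-d )" \
        --format="table(name, zone.basename(), status)"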
Now that your cloud extension is installed, your vCenter plug-in is registered, your backend has been deployed within vSphere, and your Migrate for Compute Engine manager is deployed in GCP, you're ready to migrate workloads.