(upbeat music) - [Raf] All right, and here we are on the "Instructor's Office Hours" for the AWS Cloud Technical Essentials on digital training platforms. My name is Rafael Lopes and I work with Morgan Willis, who is muted right now, I can see the little icon there, so here we go. Hi, Morgan. Good morning. - [Morgan] Hi, good morning. Thank you so much. So hi, my name is Morgan Willis. I'm also a Senior Cloud Technologist on the DTP team, making digital courses for you all to learn more about AWS on websites like edX, Coursera, and Udemy. So yeah, today, we're here for the "Instructor Office Hours" for a specific course called the AWS Cloud Technical Essentials course. And this is essentially going to be just a place where we can talk and learn and we can hopefully answer some of your questions. So we have some questions that we pulled out of the discussion forums that we would love to go over, but this is supposed to be an interactive session, so don't think of this like a webinar. We want you to utilize the Twitch chat and we'll be keeping an eye on that chat. And so Raf, why don't you go ahead and introduce our moderators as well? - [Raf] Yeah, we have moderators here in the chat room. So please moderators, say hi to our guests. We have Alan, Alana, Morgan, and Jon moderating the chat. So they will keep an eye on the chat and keep posting links to things we show here. And we also have ASL interpreters, American Sign Language interpreters. So we have Whitney here on the screen doing the sign language as well. And he is going to alternate with Danny, the other ASL interpreter. That's very welcome, and thank you a lot. So let's see, where are you talking from? You can post in the chat. So #NotAWebinar. This is supposed to be interactive. So we are looking to see your interaction back in the chat to talk with us. So I see that there are people from Texas in the chat. "Sunrise east of Seattle." That's very good.
Dallas, Texas, that's- - [Morgan] I'm joining from the state of Massachusetts here in the US. We have someone from India as well, Virginia also. - [Raf] Bangladesh, super cool. New York, hello. So, "Greetings from Colombia." - Awesome. - Where are you from in Colombia? Are you in Medellin, Bogota? Those are the only two cities I know. Western New York, thank you very much. So Jon probably is in Canada, eating maple syrup. At this point, I think he drinks maple syrup. - [Morgan] Yeah, I was gonna say I don't think he eats it. He drinks maple syrup, so you know, it's a little bit of a difference there. - [Raf] Cool, very nice. All right- - [Morgan] All right. So I just kind of wanted to talk really quickly about again, why we're here today. We're here for "Instructor Office Hours". So this course that we've put out, the Cloud Technical Essentials course, this is for people that are enrolled in the course or maybe interested in the course, but especially people who are enrolled in the course. So to kind of get everybody warmed up and used to using the chat, I'm gonna go ahead and share my screen and let's do a quick poll. And I want to see how many people that have joined are currently enrolled in the AWS Cloud Technical Essentials course. So the way that you can interact with this is you can just type a one for yes, you are enrolled, or two for no. And don't worry about it if you're not enrolled, you can still hang around and learn. We're gonna cover all sorts of different topics today. So we're mostly gonna be hopefully answering your questions. So I'm thinking of it as your questions in the chat are first come, first served. And then we also have some questions that we've pulled out of the discussion forums to answer some common issues. So the cloud, sorry, I have to turn on the percentages here so we can see it. There we go.
So the Cloud Technical Essentials course covers the basics of AWS, starting with what is AWS, the global infrastructure, then we go through EC2 and networking, then storage and databases. And then finally, we end with scalability using EC2 Auto Scaling and Elastic Load Balancing and CloudWatch. And throughout this course, there are labs, and we've seen a good amount of people getting hung up on the labs, on not being able to reach their EC2 instance, or maybe they don't quite understand a certain concept. So what we're planning on doing here is answering some of these questions. So we actually can see we have 42% right now that are enrolled in the course and 57% that aren't. So for those of you that are in the course, well, you probably recognize me because I'm one of the instructors for that course. Raf is not one of the instructors for that course, but he is on our team. And then Alana, who's the other instructor for that course, is in the chat right now as laytalan. So if you see that in the chat, that is Alana. So yeah, what we wanna do now is just kind of start going through and answering your questions. So again, if you're not enrolled in the course, maybe go check it out. You can enroll on Coursera or edX. So the moderators, if you can, I can see that Molly already posted in the chat the links to the courses. So if you are interested in learning more about AWS and you want to start from a beginner level but geared towards a technical audience, you can do that by joining and enrolling in the Cloud Technical Essentials course. - [Raf] Yeah, and this course has seven labs. So the first lab talks about creating an AWS account to follow the instructions. The second one creates an IAM role. The third one creates an EC2 instance. The fourth one does all the networking scaffolding. The fifth lab talks about storage. The sixth exercise talks about databases. We may call them exercises or labs.
It's just the step-by-step instructions that we give you for providing some hands-on experience. The last lab, the seventh lab, talks about Elastic Load Balancing. So it builds the architecture that Morgan is showing on her screen. And the thing is, all those labs are independent, because we designed the labs in a way that you can stop everything after doing the lab, because you can do the lab on your own time. So you can do one lab today and another next week. So we don't want the resources being charged in your account. And that EC2 instance is something that you spin up and tear down for every lab. So if you get stuck on the creation of that instance, it may make the whole lab experience a little bit harder. So we found the most frequently asked questions, you like that Morgan, the most frequent, like they are very frequent. - [Morgan] Very frequent. - [Raf] The most FAQs on the forums, and by far the biggest one is, "I cannot connect to my EC2 instance, why?" So we prepared, and I'm going to show you, some troubleshooting tips for those specific steps in the lab and some things that may be going on to prevent you from connecting to the instance. So once you tackle that, you can successfully launch the EC2 instance and proceed with all the labs. And also, do not let any exercise step, any lab, stop you from doing the course, because you can complete it all, watching all the videos, reading all the readings, and doing all the quizzes, and leave the labs to the end, right? So if you get stuck on a step in a lab or so, you can continue in the course. So the first question, a very common question, is, "I cannot connect to my EC2 instance, why?" And let me share my screen here to show the exercise instructions, what they look like. This is the first exercise where you will have contact with the EC2 instance. It is the exercise where you will apply a compute resource, in this case, an EC2 instance.
So the previous lab will ask you to create an IAM role. And this IAM role is the S3DynamoDBFullAccessRole, right? So that role will already be created, pretty easy. Go to IAM, create role and everything. Then you will launch an EC2 instance and you will put this user data in this instance. I am going to tell you what these commands do and some things, let me increase the font size here a little bit, and some things that you may get stuck on if you don't do it properly. So the first instructions on the lab ask you to create an EC2 instance using the Amazon Linux AMI, Amazon Linux 2, with the t2.micro instance type, right, which will have one virtual CPU and one gigabyte of memory. Then Network, choose the default subnet. Every AWS account comes with default subnets and an internet gateway attached to the VPC they belong to. Auto-assign Public IP, Enable. Then you're going to choose the role you had created in the previous lab. So if you go to your AWS Management Console and click here on Launch instance, this button here, Launch instances, you will see this instance creation wizard where the first thing you will need to specify is the AMI, the Amazon Machine Image. Do not confuse it with IAM, Identity and Access Management. Here, we are specifying the AMI. Same letters, different order. AMI, Amazon Machine Image, you are going to choose Amazon Linux, right? So this is what the lab asks you to do. Then you are going to choose t2.micro. It's already pre-selected for me. So I just click on review, sorry, I just click on Configure Instance Details here, Configure Instance Details. And here is where I will specify the subnet, which I can leave untouched because it already chooses the default one for me. The IAM role, I will choose the IAM role that was created, which is this one here, right? So for this- - [Morgan] Hey, Raf. Hey, Raf, can I interrupt you real quick? Can you explain the default VPC?
'Cause I think what I've been seeing in the discussion forums and hearing from students is they aren't able to connect to their instance after they launch it. So if you aren't selecting the default VPC, then you might not be getting some of the initial setup that makes it easy to have an instance publicly accessible. So can you kind of just quickly describe what the default VPC is? - [Raf] Yeah, every new AWS account comes with a default VPC, Virtual Private Cloud. So this VPC is what makes the network arrangements and the network locations for your EC2 instances. Every VPC comes with, not every VPC, the VPC that comes with your account comes with default subnets, right, and an internet gateway attached to it. And those subnets are public because they have a route to this internet gateway. So if you place an EC2 instance within that, that EC2 instance will know how to reach the gateway in order to reach the internet. So if you have your users and your users connect to the public IP address of your instances, you will be using that connectivity. So the default VPC, which is the one that is indicated here, the default network which is part of the default VPC, already comes with that routing configuration pre-built, right? So you choose the default one, and then if you enable Auto-assign Public IP, which is something the instructions ask you to do here in this Step Six, you will have that access. You will have that route already built. So you choose the IAM role. We will talk about IAM roles, users, and groups in a moment. This role should be previously created by the previous lab. And here is the user data, here is where you put these commands. So you should copy them and paste them here. Now some important aspects here. When you paste it here, the instructions will ask you to replace and to insert the AWS region you are using right here. Let me see if I can zoom in to make this bigger, right.
So it will ask you to insert the AWS region here. What you should do is this, right? You should type the API name of the region you are referring to. You should not type something like that, right? You should type this, which is the API name for the given region. So if you're in California, you can use the us-west-1 or us-west-2 region. If you are in Asia Pacific, you can use ap-northeast-1. If you look at the AWS documentation, there is a list of the region API names and their corresponding physical locations. So it is also important to have no spaces around this equals sign, right? So you should just replace this with the region API name, okay? - [Morgan] Okay, and then I'm gonna interrupt for a second here as well. So this is, you know, really important whenever you're going through the lab. So just to kind of give some context, I realized I didn't give that for anybody who wasn't in the course. We have this application called the Employee Directory application, and it's essentially just a sample application that's written in Python using Flask. And so this user data script is a bash script that runs whenever the EC2 instance boots up for the first time. It will download the public code like Raf is highlighting right now, the line that's downloading the code from a public repository, and then it unzips it, and then it installs some different dependencies that that code is gonna need to run, and then it goes ahead and sets some environment variables like the name of the DynamoDB table, the bucket for S3, things like that, and the region. And then it goes ahead and it starts that code. So this is a bash script, which, you know, is essentially code. So you wanna make sure you're really careful about having extra characters. It's just like any coding language, right? If you have one extra character somewhere, it can break the whole thing.
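To make the formatting point concrete, here is a minimal sketch of what that region line in the user-data script looks like (the variable name is illustrative, follow the exact name used in your copy of the lab instructions):

```shell
# Set the region using its API name -- no spaces around the equals sign.
export AWS_DEFAULT_REGION=us-west-2
echo "$AWS_DEFAULT_REGION"

# Common mistakes that break the script:
#   export AWS_DEFAULT_REGION = us-west-2   <- spaces around '='
#   export AWS_DEFAULT_REGION=Oregon        <- friendly name, not the API name
```

Because bash treats a space after the variable name as the end of the assignment, the "spaces around the equals sign" mistake makes the whole script fail, which is why the instructions are so strict about it.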
So whenever you're working with EC2, if your user data script doesn't run correctly, it might not be entirely obvious right away. Like you'll go to try to access the EC2 instance and it just won't load. That's because the web server, Flask, isn't running because this script didn't run properly. So that's something that you wanna make sure that you're aware of, is that this needs to be, you know, essentially, quote, unquote, perfect as far as not having any extra characters that would break the execution of the script. - [Raf] Exactly, and sometimes it's harder to notice because the instance will boot up normally. It is just something that is running inside the instance that is not working properly, right? So the EC2 instance will show up as running, but whatever is running on that EC2 instance potentially could be broken because of a missing character here. And there is one interesting thing here, which is this Stress application package that is installed. Yum is the Linux package manager and we are installing an application called Stress on this EC2 instance. So Morgan, let me actually load your screen with the diagram, and can you tell a little bit more about what that Stress package is doing inside that EC2 instance? - [Morgan] Sure, so this is a screenshot of the architecture that we build out throughout the course. So right now, Raf is showing you launching one EC2 instance, and he's launching it in the default VPC. But in the course, we build out a custom VPC, and then we launch our EC2 instance into it, add the database in the backend, which in the course, we end up using DynamoDB. But then after that, we say, "Okay, this is great, right? I have one EC2 instance, but what if I have, you know, too many users that would overwhelm this one instance, right?" Because one EC2 instance doesn't just automatically scale up and down, you have to put scaling mechanisms in place. So what we did is we built this into the app.
We used this Stress package that has a webpage under the Employee Directory application where you can go, you can click a button that will stress the application, and then you can kind of simulate having a lot of users flooding your app, or maybe the popularity of your app is going up. So what that allows you to do is then watch the CPU utilization on that instance go up over a certain amount of time. So whenever you have one instance, right, you don't want that to go down, we want to be scalable. So instead, what we do is we launch instances across multiple Availability Zones. So in this case, here we have two across Availability Zones, and then in order to do Auto Scaling, you put in an Elastic Load Balancer. And in this case, we end up using an ALB. So I kind of skipped right over that, an Application Load Balancer. And then you also are using CloudWatch alarms. So the load balancer is forwarding traffic to the backend EC2 instances, and then we can simulate the stress using that Stress package, and then we can watch the CloudWatch alarm go off, and it will then launch a brand new EC2 instance, and then we can watch that boot, and then you'll watch the CPU utilization come down after a certain amount of time. So that's what that Stress package is. - [Raf] Exactly, so that's why we have this package installed on the instance. You will launch this instance as part of this lab, and after the lab, you will terminate the instance. In the next lab, you will launch it again, and the user data will mostly remain the same, with the difference that in the next lab, you will create an S3 bucket and put the bucket name here. So let's continue and add storage to our instance. An eight-gigabyte root volume is okay. We can add tags to that instance. And the instruction asks you to add a tag with the key Name and the value, let's see here, this is the value, employee-directory-app. So let me copy this from here, right? So employee-directory-app will be the name of my EC2 instance.
And in the Security Group, it asks you to use a Security Group that has port 80 opened. So I will click here, choose Anywhere, and allow port 80, which is the HTTP port, because this will be a public web server with the Employee Directory application running as a Flask application. You'll want to serve port 80. We are not connecting SSL certificates, we're not installing SSL certificates for now. If we were, then we would be opening port 443, right? But for now, we are only working with port 80 for this application, and the security group is a firewall layer that surrounds the instance. So you need to open the port on the security group to allow inbound traffic. Once everything is ready here, according to the instructions, you're going to click on Review and Launch, then you scroll down to this page. You'll review everything, ports open, right AMI, right user data, then you'll click on Launch, and you'll have the option to choose a key pair to connect to the instance in case you need to troubleshoot something. It is not required for this specific lab because everything should run out of the box and you should not need to connect to the EC2 instance to troubleshoot anything if everything goes right. In my case, I will add a key pair because I will want to log on to this instance to show you something when talking about IAM roles. So click on Launch Instance, and then after clicking Launch Instance, the instructions ask you to check if the instance passes the status check. And as soon as it passes the status check, you grab the Public IPv4 address of the instance and you try accessing it. So let's do it, right? Let's click on View Instances. Our instance is being created, let me just make more space here, employee-directory-app, which is this instance here. The instructions will ask you to go to this lower pane here and copy the Public IPv4 address, right? And open a new tab and paste it, and then it may not work. Why might it not work? - Hmm.
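For reference, the same port-80 rule Raf configures in the wizard can be added from the AWS CLI. This is a hedged sketch with a placeholder security group ID, so it requires an AWS account and won't run as-is:

```shell
# Allow inbound HTTP (port 80) from anywhere on an existing security group.
# sg-0123456789abcdef0 is a placeholder ID -- use your own group's ID.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0

# For HTTPS you would open port 443 instead -- not needed for this lab,
# since no SSL certificate is installed on the instance.
```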
- It may not work because, let's see if they come up with something in the chat to suggest why this instance is not working, why my Google Chrome couldn't reach this page yet. - [Morgan] Right- - [Raf] The biggest, yeah, go ahead. - [Morgan] I was gonna say, pay close attention to the address bar. Right, so does anybody have any idea why this won't connect? Hmm, this is more the troubleshooting part. - [Raf] I think that I chose a t2.micro instance that only has one CPU and one gigabyte of memory, and I gave a bunch of instructions to that instance, right? So perhaps the instance is still doing that, right? - [Morgan] Yeah, right. - [Raf] Perhaps the instance is still getting the zip file and decompressing the application, installing MySQL, installing Python 3, installing all the requirements for the application, installing the things. - [Morgan] Yes. - [Raf] So it's still cooking, right? And that's why when you go here, it's not ready yet. But then there is a catch here. Your browser may locally cache that failure, right? And you may try refreshing this page and you may still see the error because you had it cached. And actually, that happened to me yesterday when I was doing the lab to prepare this demo for you. It was cached so I couldn't connect. So a trick that I'll give you is to copy the address bar and open an incognito window or a private window in your browser, or choose another browser, or open it from your cell phone connected to a different network. And then you should see this. This is a result that shows that you correctly followed the instructions to create and launch the EC2 instance. Right? - [Morgan] Right, so we have a couple people chiming in on the chat here as well. So there's a few different things. So one of the things that could have happened is the security group not allowing it, right? So in this particular case, the answer is that it wasn't done bootstrapping, so it wasn't done downloading everything.
But another answer might be maybe you didn't configure the security group correctly. That could be a potential answer. Another one is if you forgot to auto-assign a public IP address. Whenever an EC2 instance launches, unless you auto-assign a public IP address, it will only have a private IP, in which case, you can't access it, so that could be another one. But then we also have a couple of other ideas here posted in the chat as well. So it could be bootstrapping still, that was the problem in this case. And then a secondary problem is if you try to access it while it's bootstrapping and then you keep refreshing the page, like Raf just mentioned, it could be caching that. So make sure you clear your cache. And then also, you might have a network ACL. So we have security groups, which are instance-level firewalls, but then we also have network ACLs, which are subnet-level firewalls. So both of those things need to be in place in order for you to connect and get that connection established with your EC2 instance. Another thing as well is maybe it's your route table. So in this case, we're using the default VPC. But if you were using your own VPC, there's a chance maybe you didn't attach the internet gateway properly, there's a chance that the route table isn't configured properly. So I would make sure that you kind of check all of those networking things before you give up, right? And then the other thing too is this idea of HTTP versus HTTPS. So that's what Raf is pointing to right now. So I'll go ahead and let you talk about that. - [Raf] Yeah, so for convenience, the AWS Management Console allows you to click on "open address", and then it takes that IP address and opens it in your browser in a new tab for convenience. But the thing is that it opens HTTPS. So if you click here on "open address", it will try to load https:// and then the IP address, and we do not have SSL certificates on the instance, right?
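The checks Morgan lists can be sketched as CLI commands. The instance ID below is a placeholder and this assumes the AWS CLI is configured for your account, so treat it as a starting point rather than a recipe:

```shell
# 1. Does the instance actually have a public IPv4 address?
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[0].Instances[0].PublicIpAddress'

# 2. Which security groups are attached (then check their inbound rules)?
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[0].Instances[0].SecurityGroups'

# 3. On the instance itself: did the user-data script finish cleanly?
#    (Amazon Linux 2 writes cloud-init's output here.)
sudo tail -n 50 /var/log/cloud-init-output.log
```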
So the instructions are clear about that: do not click; copy the IP address and go to a new tab and open it, because that's what happens. When you click here, it loads HTTPS, right? Where this link points to is HTTPS, right? So you may also not be able to connect because of that. And also, if you are using a corporate laptop, if you are on a corporate VPN network, sometimes HTTP requests to unknown IP addresses may be blocked by your firewall. So make sure you're not logged on to any VPN when doing that, right? That mostly happens when you try to log over SSH to the instances or remote desktop. HTTP is less common, right? But it may happen. We've seen that happening sometimes. Morgan and I were technical instructors delivering instructor-led lab training. So we had lots of close contact with students doing labs and observing those labs, and on most corporate networks, SSH and remote desktop access was not happening. - [Morgan] Yeah, they don't like it. So with this lab as well, you wanna just make sure that you're going through all the things that we just talked about for troubleshooting. But then also remember that, let's say you did mess up your user data and then you stopped the instance and then restarted it, something that a lot of people don't think about is the fact that user data by default is configured to only run on launch. So not on start, but on the initial boot. So that means if I restart my instance, the user data's not gonna run again. So if you had an issue in your user data and then you went in and you tried to fix it, it might not run again. So you can either change your, oh geez. Sorry, I had a meeting pop up there. You can either change your user data to run on every boot, or you can always terminate the instance and start over with a new one. And for this particular lab, this is just a proof of concept.
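As Morgan notes, user data runs only on the first boot by default. On Amazon Linux 2, one documented way to make it run on every boot is to wrap the script in a MIME multipart document that tells cloud-init to always run user scripts. This is a sketch of that wrapper; for the labs, terminating the instance and launching a fresh one is usually the simpler fix:

```
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"

#cloud-config
cloud_final_modules:
  - [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
# ... your original user-data commands go here ...
--//--
```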
So notice we're not using HTTPS, we're using the default VPC, the database isn't set up, our permissions are probably a little bit overly loose, and all of that is to try to make it easier for you to get it up and running. So you wouldn't wanna do this the exact same way in production, because you would wanna make sure that you have things like certificates in place for HTTPS and things like that. - [Raf] Yeah, there was a question here from the chat asking, "Are Lightsail instances also part of the default VPC?" Thanks for your question, by the way. "Or is that just for EC2 instances? I use Lightsail because it's cheaper." So Lightsail instances are not part of a VPC you choose. Because it is a fully-managed service, they are part of another VPC, but it is a VPC. And there is a feature called VPC peering where, once you have one VPC and another VPC, guess what VPC peering does? It helps you connect those two VPCs. So if you already have some other VPC with private subnets, with databases, with other VPC resources like a Redshift cluster or an EMR cluster, and you want your Lightsail instance to access that, the way you do it, according to this FAQ page, is by doing VPC peering, right? So I hope that answers your question about Lightsail and default VPCs. It is not part of a default VPC in your account. It's part of the Lightsail VPC, but you can do VPC peering to connect to your VPCs, and it's pretty easy to configure VPC peering. You just need the two VPC IDs. You go to the VPC console, create a peering, you specify the ID of VPC one, you specify the ID of VPC two, and it creates a peering request. You accept that peering request, and then it's just routing tables. So why don't we bring up a poll, Morgan? Uh-huh, go ahead. - [Morgan] Before we do that, I do have one more question, about certification. So this was somebody, they already finished the foundational course. So great job on that.
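The peering steps Raf describes look roughly like this in the AWS CLI (all IDs are placeholders, and the two VPCs need non-overlapping CIDR ranges; this requires an account, so it's a sketch rather than something to copy verbatim):

```shell
# 1. Request a peering connection between two VPCs.
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-11111111 \
  --peer-vpc-id vpc-22222222

# 2. Accept the request (run on the accepter's side).
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0123456789abcdef0

# 3. In each VPC's route table, point the other VPC's CIDR range
#    at the peering connection (repeat mirrored on the other side).
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 172.31.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0
```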
And they were wondering if we could talk about the difference between Solutions Architect and SysOps. So I just wanted to quickly pull up the certification page. And so the Solutions Architect Certification is going to validate your ability to design and implement distributed systems. So it's definitely on a larger scale, like let's say you have a particular use case that you need to design a system for, maybe you're troubleshooting a system or redesigning a system to be more optimized. It's more about that aspect of things, whereas the SysOps Administrator side of things is going to be more detailed, down to actually doing the work, meaning can you deploy, manage, and operate on AWS, right? So it's a little bit more detailed. There's a difference between the two of these certifications. So I personally would say that the Cloud Technical Essentials course is more aligned with Solutions Architect. However, it also could be a baseline for you to begin to dive deeper and study for SysOps Administrator. So this Cloud Technical Essentials course is really designed specifically to be a little bit vague in the sense of application. You could take this course and then go on to dive deeper into Developer, SysOps, or Solutions Architect. So I just wanted to cover this question here. - [Raf] Yes, and let me show where they can find the other online courses we have. So this page here contains the courses that we offer and the platforms, and you can click here on the platforms and you are directed to that course on that platform. So we have a Data Analytics course, a Cloud Fundamentals course, a Developer course, Machine Learning, Migration and Transfer. So the course that we are talking about here, Cloud Technical Essentials, is one of those, right? So why don't we bring up a poll? Let me put your screen here. - [Morgan] Yeah, sure.
- [Raf] And there is a poll where our audience can interact back with us. "A network ACL filter-" - [Morgan] Oh wait, actually, sorry, I wanted to show a different one. This one. Okay, let's do this question. All right, sorry about that. "So what does an Amazon EC2 instance type indicate?" And so again, you can just type in the chat one, two, three, or four. Does an Amazon EC2 instance type indicate the instance AMI and networking speed, the instance tenancy and instance billing, instance placement and instance size, or instance family and instance size? - [Raf] Pretty good, yeah. - [Morgan] Looking good. - [Raf] The good thing about those polls, and that's why it is interesting for you to participate, is that if we see that most people are giving the wrong answer, then we double click and we explore more of that, right? So that's a good measurement of how deep we should dive on each topic that we are presenting here. I think we are ready to move forward to the next question. - [Morgan] Yeah, I agree. This is great, so this is the correct answer: the instance family and instance size. So good job, everybody who answered this one. So the next one that I wanna do is this one here. So, "Which of the following pieces of information do you need to create a virtual private cloud, or VPC?" Is it the Availability Zone it will reside in, the subnet it will reside in, the group of subnets it will reside in, or the AWS Region it will reside in? - [Raf] You can type in the chat. Let's give some time for everybody to think. "Which of the following pieces of information do you need to create a virtual private cloud?" - [Morgan] So, there's a few things that you need to know to create a VPC. We'll give a few more seconds here. Let everybody get their answers in, especially 'cause this one is a little bit split, right? So we'll let everybody get their guesses in. I'll go until 60 seconds and then I'll end the question.
All right, three, two, one, ending the question here. Okay, so the correct answer, with 50% of the votes, is the AWS region it will reside in. So I'm gonna go ahead and just do a little bit of a drawing here. So whenever you create a VPC, say you have a VPC here, VPCs live inside of an AWS region. So I'm gonna go ahead and draw my region here. So VPCs live inside of a region and they also can span across as many AZs as exist inside of that region. So let's just say this is AZ1, 2, and 3. So whenever you create a VPC, it's going to give you access, quote, unquote, to all of the Availability Zones inside of a region. So in order to create your VPC, you have to know what region you are creating this VPC in. So the regions don't span, I mean, sorry, the VPCs don't span across regions. They only span across Availability Zones. So you have to know what region you wanna place this VPC in, and then you also have to know the CIDR range, which is like what chunk of IP addresses, how big of a chunk and what numbers do you want for your private IP space. So those are the two things that you need to know whenever you're creating a VPC. And this is something that's really important to understand as you're working and operating within AWS, because VPCs do not automatically span regions; you have to be able to devise solutions to allow for communications between AWS regions, between VPCs. So that's like Raf was talking about peering earlier. That's one of the ways that you can create connectivity between regions, whereas connectivity between Availability Zones is a lot easier to facilitate because a VPC is already spanning all of the Availability Zones within that region. And then one of the other answers that was there was the subnets. Subnets are placed inside of Availability Zones. So those are kind of bound to a specific AZ.
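Those two required pieces of information, the region and the CIDR range, show up directly when you create a VPC from the CLI. A hedged sketch with placeholder values (requires an AWS account):

```shell
# The region comes from --region (or your CLI's default region);
# the CIDR block defines the VPC's private IP space.
aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region us-west-2

# Subnets are created afterwards, each bound to one Availability Zone
# and carved out of the VPC's CIDR range.
aws ec2 create-subnet \
  --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.0.1.0/24 \
  --availability-zone us-west-2a
```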
So you don't need to know the subnets before you create the VPC, because a subnet is actually a subset of the IP addresses that you create for that VPC. Great, so do we wanna do any more polls here or should we move on to the next question? What do we think? - [Raf] Let's do one more poll. - [Morgan] Okay, awesome. All right, let's go ahead and do the network ACL one, since that's something we just went over. So, "A network ACL filters traffic at the EC2 instance level," true or false? - [Raf] I think the main point of this question is network ACL versus security group. Which one operates at which scope, right? So it's good - [Morgan] All right, it's looking good. - [Raf] that we have 10 answers of false. We can move forward, pretty good. Security groups operate at the EC2 instance level. Network ACLs operate at the subnet level, filtering everything that goes in and out of the subnet. - [Morgan] All right, since everybody got that one correct, let's go ahead and do one more on this topic. So, "What must you do to allow resources in a public subnet to communicate with the internet?" And select two, and whenever you're going through and selecting two, you have to type them in as two separate entries in the chat. So type one number, hit Enter, and then type the second number and hit Enter. And that's how you can get them both to count. - [Raf] I think you were giving them the correct answer. - [Morgan] Oh, am I? (laughs) I ended the question instead of starting it, that's my bad. All right, so I'll just tell you the answer for this one. (Raf laughs) But yeah, that's funny. So the two that have the green check marks next to them are the answer. First, you need to attach an internet gateway to your VPC. So that's the first thing, right? The internet gateway is your doorway to the internet to allow traffic to flow into your VPC. So without the internet gateway, you can't allow traffic, which means that there can be no connectivity to your EC2 instance in a public subnet. 
Even if you say it's a public subnet, you need that internet gateway. And then the second thing you need to do is create a route to the internet gateway from that subnet, and that allows that traffic to flow. - [Raf] Yep. - [Morgan] All right. - [Raf] So let's bring up something that was pretty common on the forum, which was not phrased as a question, but I'll treat it as one. "I cannot distinguish among the IAM users and IAM roles and IAM groups and their functionalities." So let me load my screen here now and show you this diagram and get my pen. So an IAM user is an IAM entity, an Identity and Access Management entity, that allows you to have permanent credentials for your AWS Management Console or for the AWS Command Line Interface. If you're using the Management Console, you're going to use a username and password. If you're going to interact programmatically with AWS from within the command line or from within your source code, you will need an access key ID and secret access key. And a user allows you to have permanent ones, right? The thing is, when you write a piece of code, when you have a Flask application, for example, doing things on Amazon DynamoDB or getting objects from Amazon S3, you will need an IAM credential to configure on that application, right? And it is not very elegant, and also not very safe, to take your personal credentials and put them in there. You should not do this, right? You should avoid taking your personal credentials and putting them in your code. So what should we do? Some people create a separate user, call this user "app", and take the credentials from that user to put in the code. Well, although that's a little better than putting your personal credentials there, it is also not recommended, because you are still putting credentials into code. So if someone has access to your source code, they will also have access to the credentials that are used by this code to make API calls to your AWS services. 
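The "no credentials in code" point can be sketched with a toy version of the lookup order that AWS SDKs like boto3 roughly follow when your code creates a client without passing any keys. This function is purely illustrative, not a real SDK API; the point is that application code should pass nothing and let the chain fall through to the instance's IAM role:

```python
import os

def resolve_credential_source(explicit_key=None):
    """Illustrative only: roughly the priority order AWS SDKs use when
    deciding where credentials come from. Real SDKs do this for you."""
    if explicit_key is not None:
        return "hardcoded"      # anti-pattern: keys baked into source code
    if os.environ.get("AWS_ACCESS_KEY_ID"):
        return "environment"    # better, but still a long-lived key
    return "instance-role"      # best practice: temporary role credentials

# Nothing hardcoded here, so the chain falls through past the code:
print(resolve_credential_source())
```

With no keys in the code and none in the environment, the lookup ends at the instance role, which is exactly the best practice the next section describes.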
The best practice in this case is to create what we call an IAM role. So once you create an IAM role, you can associate that role with an AWS service, such as, for example, Amazon EC2. And when you do that, you declare what the permissions of that role are, right? And who can assume that role? Who can assume that role is also called the trust relationship. And the big difference between a role and a permanent IAM user is that the role is a mechanism that provides temporary credentials to whoever is authorized to assume the role. So when you create an IAM role, you can have the EC2 instance act on behalf of this role, right, to access AWS resources, and let me give you an example. The employee-directory-app EC2 instance that we just created has a role, because when we created the instance, we attached an IAM role there, called S3DynamoDBFullAccessRole. So let's connect to this instance using EC2 Instance Connect, which is a very convenient way of having SSH straight out of your browser. - [Morgan] Yeah, if anybody is not familiar with EC2 Instance Connect, this is like one of the coolest things to come out of AWS in my experience in the last few years, as far as making things easier to use, especially for a beginner. Definitely check out EC2 Instance Connect. If you're troubleshooting something on an EC2 instance, this is a super easy way to get SSH access to your EC2 instances. - [Raf] Yeah. So if I go here and type aws configure, which is the command that I use to configure the command line with permanent credentials, you will see that I have none. In other words, you will see that my command line is not configured, right? It does not have credentials. But if I do aws s3 ls, it will work, right? And it does work because this instance is associated with a role that has permission to go to Amazon S3 as well as DynamoDB, right? And this is the role in IAM. 
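The trust relationship mentioned above is itself just a JSON policy document on the role. As a minimal sketch, a trust policy that lets the EC2 service assume a role (so an instance like the lab's can use the role's temporary credentials) looks roughly like this:

```python
import json

# Minimal trust policy: "who can assume this role". Here the principal
# is the EC2 service itself, which is what lets an instance attached to
# the role (e.g. the lab's S3DynamoDBFullAccessRole) get temporary
# credentials on its behalf.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The trust policy answers "who may assume the role"; the role's separate permissions policies answer "what the role may do", which is the part the console demo explores next.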
If I go to my IAM console and click on Roles right here on the sidebar, you can search for that S3DynamoDBFullAccessRole. The instructions ask you to create a full access role just for educational purposes, just to have that POC, that server, up and running quickly, right, for you to get started with AWS. We don't recommend overly permissive roles for production environments. So if you only need to access one specific DynamoDB table and one specific S3 bucket, you can restrict that in the role permissions, right? And you can see that this role's permissions include S3 full access and DynamoDB full access. So if you try to go on the instance and do aws ec2 describe-instances --region us-east-1, this will fail, because this role is not authorized to perform this action, right? So anything that is not explicitly allowed here on IAM is denied, kind of like a security group: unless you go there and open the port, it is closed, right? So a role can be assumed by an EC2 instance, as you just saw, but roles can also be assumed by IAM users. And what would be the reason for allowing a user to assume a role? Well, if you have multiple accounts, let's say you have three AWS accounts: one for dev, one for staging, where you and your QA analysts do the tests, and another account for prod. If you want to have access to those three AWS accounts, option one, you can create one IAM user for yourself on each one of those accounts, which for obvious reasons is not recommended, because if you need to change the password, you would need to change it on three accounts. What if now you want to have one account per application, right? So you see, that doesn't scale well. So a good practice here is to create one separate AWS account just for the users, just for all the users in your organization. And on all the subsequent AWS accounts, you have roles. 
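Coming back to the describe-instances failure for a moment: the "denied unless explicitly allowed" behavior can be sketched as a toy check. Real IAM evaluation is much richer (resources, conditions, explicit Deny statements), so this only illustrates the default-deny idea with the lab role's two service grants:

```python
# Toy sketch of IAM's default-deny: an action is permitted only if some
# statement explicitly allows it. Real IAM also evaluates resource ARNs,
# conditions, and explicit Deny statements.
allowed_actions = {"s3:*", "dynamodb:*"}   # what the lab's full-access role grants

def is_allowed(action):
    service = action.split(":")[0]
    return action in allowed_actions or f"{service}:*" in allowed_actions

print(is_allowed("s3:ListBucket"))          # True: `aws s3 ls` works
print(is_allowed("ec2:DescribeInstances"))  # False: describe-instances fails
```

Nothing ever has to say "deny ec2:DescribeInstances"; the absence of an allow is what closes the door, just like an unopened port in a security group.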
And in those roles' trust relationships, you specify which users can assume the roles. So instead of having permanent credentials on the other AWS accounts, you can just do an assume role, right, for- - [Morgan] Something else that's good to note here is that the roles are shared. So it's not like you have to create like, "Oh, this is," yeah, right. - [Raf] One role per user. - [Morgan] It's not one role per user. If Raf and I were both developers on the same team and we had the same permissions, we would likely be assuming the same role. - [Raf] Yeah, and it gets pretty interesting, because in that AWS account, instead of having lots of users here, sometimes you are on a corporate domain controller, Microsoft Active Directory, and all your users already belong to a Microsoft Active Directory locally on your workspace. So what you can do, instead of creating all the users in this account, is create roles in this account, right? And the sub-accounts will also have roles, and a role can be in another role's trust relationship. And you can do this, right? So that's the advantage of having roles assuming other roles. If you're working with governance and compliance, and you want to concentrate all your users in the current user directory that you have as part of your organization, you can do that with tools like an identity provider connector and even Single Sign-On to jump from one account to another. And that's how enterprises do it. That's how big companies running big workloads on AWS, with complex architectures and multiple accounts, do it securely, right? They do not, or at least should not, create multiple IAM users across multiple AWS accounts, right? - [Morgan] All right, so I do have another question from the chat. And I know it was answered in the chat, but I just wanna answer it live. So does anybody else have a question? 
- a_walk_in_the_clouds, I like your- - [Morgan] Yeah, that is a great username. - [Raf] Yeah. - [Morgan] So a_walk_in_the_clouds asked, "Can VPC peering be done within a region, or is VPC peering always cross-regional?" And the answer is that you can do it within a region. So you can have many VPCs in one account in one region, and you might be using VPCs as network separation between applications. So let's say you have three VPCs in an AWS account, and applications in two of them need to access shared services in the third VPC; you can still do peering even within a region. So I just wanted to answer that one as well. - [Raf] Yeah. One interesting thing is that you can do peering within the region, cross-region, and cross-account. So if I am running, let's say, a machine learning cluster in my account and I want to allow Morgan's private EC2 instances to connect to my cluster to do something, then I can do that cross-account VPC peering as well. I could also use a service called AWS PrivateLink for that. But VPC peering works cross-region, intra-region, and even cross-account, right? - [Morgan] Yeah, and that's really important too, especially if you're looking to be in an architect position for AWS: it's very unlikely that you'll just have one singular AWS account. It's overwhelmingly likely that you will have many AWS accounts for one organization, or maybe even one team, depending on what you're doing. So being able to understand where the boundaries of the services exist and how to reach across accounts or across regions is something that can be very, very helpful. And so then I wanted to cover a couple of other things that I saw in the discussion forums, and this one relates to what Raf was just talking about. "How are the EC2 instance and DynamoDB connected?" So we're talking about, let me exit this, this EC2 instance. "How is this EC2 instance connected to DynamoDB?" Well, like Raf was just mentioning, that's IAM roles. 
But then on top of that, I also saw this question about the DynamoDB table that we're using for the employee-directory-app that I thought was really interesting. So in the DynamoDB table, there is this object_key attribute on the item. And this is essentially pointing you to the S3 image. So let me go ahead and do some whiteboarding here. So the way that the architecture works is we have our EC2 instance, we have an S3 bucket, and this has our images of our employees. So let's just say that's an employee image there. And then we also have our DynamoDB table. And this DynamoDB table has an item for each employee. So the EC2 instance has this app here, and we had to allow full access to both DynamoDB and S3. The reason for that is because we are not actually storing the images in DynamoDB. So we have no images here; instead, all of the images are stored in S3. So one of the questions here is, are we storing the image in DynamoDB or is this a reference to the S3 bucket? And the answer is that we store the image in S3, but then we store a reference to it in DynamoDB. So that reference ties the item, the entry for the employee in DynamoDB, to their image in S3. And the reason for this is DynamoDB has a 400 KB limit per item. So you really don't want to store large items or large objects in DynamoDB. Instead, you would more likely use DynamoDB as an index. - [Raf] A database. - Yeah, as a database and an index referencing your objects. So DynamoDB is not an object store, right? If you have an object associated with your data, you would store it as a reference, and then your code can say this reference over here is related to this image. So I just wanted to bring that up there. And something else that you can do, just to talk about another architecture here, is if you have an EC2 instance, like with our employee-directory-app. 
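As a quick sketch of the reference pattern just described: the employee item carries only a small object_key pointing at the image in S3. The attribute name object_key matches the lab; the bucket name and employee values here are made up:

```python
import json

# The image lives in S3; DynamoDB stores only a small reference to it.
s3_bucket = "employee-photo-bucket-example"   # hypothetical bucket name
employee_item = {
    "id": "emp-001",                          # hypothetical employee
    "name": "Jane Doe",
    "location": "Boston",
    "object_key": "photos/emp-001.png",       # pointer to the S3 object
}

# DynamoDB rejects items over 400 KB, which is why the image itself
# stays in S3 and only the key is stored in the item.
item_size = len(json.dumps(employee_item).encode("utf-8"))
assert item_size < 400 * 1024

# The app reconstructs the object's location from bucket + key:
print(f"s3://{s3_bucket}/{employee_item['object_key']}")
```

The item stays tiny and queryable, and the key is all the app needs to fetch or delete the matching image.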
Right, we have our app here, and then you want to store some information in DynamoDB, and then you also have some images in S3 like we just did. Let's say you wanted to delete an image out of S3, right? Maybe you're deleting an employee. So if you wanted to delete an employee here, you could have the app go ahead and call DynamoDB, delete, and then you could also have it call delete over to S3. And if you have the app do this, then you are essentially relying on the code to keep these two things in sync, right? So you're relying on the developer writing the code to know and understand the data model that you have, so that they know where to delete things from. So that's one way you could do it. Or, if you wanted things to be a little bit more automatic on update or delete, or mostly delete, what you could do is have the app submit the delete here, and then you could set up DynamoDB Streams. So DDB Streams, and then you could have this trigger a Lambda function, which would then delete the item out of S3. So this architecture does the same thing, except instead of having the developer be responsible for it, here you're instead creating an architecture to do it automatically. So the benefit of this is, let's say your code deleted the item in DynamoDB, and then it threw some sort of exception and crashed before it got to S3. You might end up with the entry in DynamoDB no longer there, but the image still is. That might be a problem for you. So with this solution here, you're submitting one delete, and then it kicks off a series of transactions that will then delete it from S3. So this is a little bit more fault-tolerant in that way, where you're not necessarily relying on the code to do it. But it really depends on what you're doing as to which direction you would take. 
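The streams-triggered cleanup just described could be sketched as a small Lambda handler. The event shape follows the DynamoDB Streams record format (eventName, OldImage, and the typed attribute values); delete_from_s3 is a stand-in for the real S3 DeleteObject call you would make with an SDK like boto3, and the bucket name is hypothetical:

```python
# Sketch of the Lambda at the end of the DynamoDB Streams pipeline.

def delete_from_s3(bucket, key):
    # Stand-in for a real S3 DeleteObject call (e.g. via boto3).
    print(f"deleting s3://{bucket}/{key}")
    return key

def handler(event, context=None):
    deleted = []
    for record in event["Records"]:
        if record["eventName"] != "REMOVE":
            continue  # only react to deletions
        # OldImage holds the item as it was before it was removed.
        old_image = record["dynamodb"]["OldImage"]
        key = old_image["object_key"]["S"]   # DynamoDB's typed attribute format
        deleted.append(delete_from_s3("employee-photo-bucket-example", key))
    return deleted

# A minimal fake REMOVE event, shaped like a Streams record:
event = {"Records": [{
    "eventName": "REMOVE",
    "dynamodb": {"OldImage": {"object_key": {"S": "photos/emp-001.png"}}},
}]}
print(handler(event))
```

Because the handler only reads the stream record, the app code never has to know about S3 at all; one DynamoDB delete drives the whole cleanup.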
Like, would you bake these instructions into the code, or would you create an architecture that does these sorts of things automatically? It's also important to note, you would have to have some code here as well. It's just that the code would be really simple and really small. - [Raf] Yeah, on that matter, where you have a Lambda function connecting to Amazon S3, there is a question here that CodeWithSean brought. Let me bring the question on screen here. "I have two Lambdas connecting to DynamoDB. One only needs read-only, the other needs read and write." And that's the thing. "I created two user accounts with key/secret for the Lambdas to connect. Is this okay? Is there a better way?" - [Morgan] All right- - [Raf] And yes. There is a better way. Morgan? - [Morgan] There is a better way. Yeah, so you wanna look at roles. So let me go ahead and delete this, and let's say you have two Lambda functions, one, two, and you have an S3 bucket here, and this one needs read and this one needs read/write. You don't want to create IAM users for this, right? So IAM users have credentials that you could use for programmatic access to call these APIs using the SDKs. However, those keys are not temporary. You can rotate them, but they are permanent in the sense that once you create them, you can use them until you deactivate them. And you would have to bake those into the code in the Lambda function, or the environment variables, or something like that, which is not secure. So we recommend not using IAM users; instead, you would want to use IAM roles, right? So this kind of comes back to that roles discussion. So the Lambda function that needs the read access would have Role 1, and then the Lambda function that needs the read/write would have Role 2. 
So you would have two separate IAM roles, and in the IAM role, you would attach a policy to this first one that says specifically what bucket and what actions you need. So you can get really granular, and we would suggest that if this is for anything beyond experimentation in your own personal account, you are very specific about the types of permissions that you give to code and people. And so we say to follow the "principle of least privilege", where you give people and entities exactly what they need and nothing else. So you would include the S3 bucket here, and you would also include that API call, whereas this role would be a separate role that has a totally separate set of permissions. - [Raf] Yeah, and this is where you configure all that Morgan just showed. So on a Lambda function like this one I have here, my second function, if you click on Configuration right here, then click on Permissions, you can specify the role that you want for that function. So you would create two roles, right? And each function you associate with a different role, which would match Morgan's diagram right there. - [Morgan] All right, so we have another question here. "Is DynamoDB a good option for an online shopping cart? What are other storage options for a shopping cart?" So I would say that DynamoDB is a great option for this. So DynamoDB is a key-value store. So the nice thing about that is, if you have a shopping cart, you would have your user session. So DynamoDB is a great database for storing user sessions in a place that is not necessarily ephemeral. So you can put the session somewhere where it's gonna be stored, and then you can set a time to live where it expires after a certain amount of time, but you know it's safe for that amount of time. Compare that to instance storage: if the instance went down, then your user session would go down as well. 
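Back on the two Lambda roles for a moment: the read-only role's permissions policy might look roughly like the sketch below. The bucket name is hypothetical; the read/write role would get a separate policy that additionally allows actions like s3:PutObject and s3:DeleteObject:

```python
import json

# Least-privilege permissions policy for the read-only Lambda's role:
# one specific bucket, and only the Get/List actions it actually needs.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-bucket",      # hypothetical bucket
                "arn:aws:s3:::example-app-bucket/*",    # and its objects
            ],
        }
    ],
}

print(json.dumps(read_only_policy, indent=2))
```

Scoping both the actions and the resource ARNs like this is what "principle of least privilege" looks like in practice: this role can read that one bucket and do nothing else.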
So we recommend storing things like this, shopping cart information, session information, off of the instance in a database or a cache of some sort. So DynamoDB is a great option for this because you can have items that don't necessarily all have the same data. So one person's session and shopping cart might not look the same as another person's as far as schema, and you can do that with DynamoDB fairly easily. But another option you could use here is maybe ElastiCache. So you could have an ElastiCache cluster as well. It's really up to you: how often are you using this, how much usage do you have? And then you can do a cost analysis. So I personally prefer DynamoDB because it's serverless, it's super easy to use, and you can scale it up and down with usage fairly easily, meaning that you only pay for what you consume. Whereas with ElastiCache, I'm pretty sure it's an hourly rate. So I personally prefer DynamoDB for this one. - [Raf] Yeah. Oops. (chuckles) Alan just said on the chat, "We have a class both on edX and Coursera that focuses on DynamoDB." So if you're looking for more information- - [Morgan] Yeah, I'm actually one of the instructors for that course too. So there's a DynamoDB course on both Coursera and edX, and that one starts off with the basics of what is a table, what are items, what is NoSQL? And then it progresses from there, getting into indexes, local secondary and global secondary indexes, how to provision throughput, things like that. And then in the last week, it goes deep into single-table design. So that's a really great course for a deep dive on DynamoDB, especially if you're a developer type. We mostly use the command line in that course, but it's a great one. - [Raf] Yeah, this is the one we are talking about on Coursera and also on edX. Right, so you can find information right there. 
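Back to the shopping-cart pattern for a second: DynamoDB's time to live works by storing an expiry time as an epoch timestamp in an attribute you designate as the TTL attribute. The attribute and key names below are illustrative, not from the lab:

```python
import time

SESSION_LIFETIME_SECONDS = 30 * 60   # e.g. keep the cart for 30 minutes

# Illustrative shopping-cart session item. DynamoDB's TTL feature
# deletes the item some time after the epoch timestamp in the
# designated TTL attribute has passed.
now = int(time.time())
cart_item = {
    "session_id": "sess-42",                        # hypothetical partition key
    "items": [{"sku": "book-123", "qty": 2}],       # cart contents, any shape
    "expires_at": now + SESSION_LIFETIME_SECONDS,   # the TTL attribute
}

print(cart_item["expires_at"] - now)  # seconds until the item may expire
```

Each session item can carry a differently-shaped cart, and expired sessions clean themselves up without any application code, which is part of why DynamoDB fits this use case so well.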
- [Morgan] All right, I think we are coming up here at the end, but let's see if we have any more questions. All right, I don't know if we have time to answer any more here. Raf, do you have anything else you wanna finish up with? - [Raf] No, no. There is one here. "Is there a tutorial to use AWS SNS to send HTML email?" We can try finding some references on that, but I'm pretty sure- - [Morgan] The moderators might be able to find some stuff here. - [Raf] Yeah. - [Morgan] All right, so I think the last thing here is just, you know, if you are currently enrolled in the Cloud Technical Essentials course, please hop back in, check it out, go through. Hopefully, the things we talked about today gave you a bit of a deeper understanding. And if you're blocked on the labs, hopefully some of the things and tips we talked about will unblock you on those labs. And also remember that the labs themselves are not a blocker to learning the information, right? So if you get stuck on a lab, you can keep going past that lab and then come back to it later. So keep that in mind. And if you are not enrolled in the course, please go check it out; it's a great fundamentals course. It's geared towards people with a technical background. So yeah, thank you all so much for attending. I really appreciate all the questions and the engagement that we were able to have during this last hour. - [Raf] Yeah, thank you everybody- (audio cut drowns out speaker)