We might be interested in more than a point estimate for our parameters. For example, we might also want to build estimates for the distribution of the parameters. To perform this inference, the first step is to call the sample method from within PyMC3. The first parameter that you pass to it is the number of samples (draws) to take with this algorithm. You can also pass additional parameters, such as the sampling algorithm to use and a starting position for that sampling algorithm. We're using the No-U-Turn Sampler here, and we're specifying a starting position based on the MAP value (the maximum a posteriori estimate). Some alternative sampling algorithms we could use are the Metropolis algorithm, slice sampling, and of course the No-U-Turn Sampler we're using here. PyMC3 can automatically determine the most appropriate algorithm to use for most problems, so it is best to leave that decision to PyMC3. As far as the starting position is concerned, I've only specified the MAP here for illustrative purposes; it's generally not recommended to specify a starting position based on the MAP.

If you look at the output, some information regarding the sampling process has been printed. First of all, it tells us that the optimization process terminated successfully. There's also some additional information regarding the function value and the number of iterations the optimization process took. Beneath that, you'll see some information from the sampling algorithm. It tells us that it ran four chains, which corresponds to the number of OpenMP threads we specified earlier. The general rule of thumb is to have as many threads, or chains, as you have cores available. The more samples and chains you have, the more certainty you have regarding the trace information, or the inference process. There's some additional information here regarding the number of divergences and the effective sample sizes. Essentially, these give us an estimate of the quality of our sampling process. We'll look at what these terms actually mean in more detail later in this chapter.

You'll notice that the results of the sampling process were assigned to this object here called trace. That trace object contains information regarding all the parameters of interest. For example, if we wanted to get all the samples for alpha, we would just say trace of alpha, and we'd get back an array with all the samples drawn during the process. Another way to inspect the samples is by plotting them over time, which is what we see here. This is called a trace plot: on one side you have the drawn samples over time, or the number of iterations, and on the left-hand side you have the inferred posterior distribution for each of the parameters: alpha, beta, and sigma. These are drawn per chain, so since we're drawing four chains, we see four inferred posterior distributions. We want to have good agreement between these four chains, which indicates that the sampling process went smoothly. You will also notice that there are two distributions for beta, corresponding to beta 1 and beta 2.

We can also print out a summary of the sampling process by passing the trace to az.summary. Here, az is the ArviZ library, and within it there's a summary method you can call; by passing the trace to it, you can inspect the information regarding the variables in a tabular format. Sketches of these steps follow below.
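To make the sampling call concrete, here is a minimal sketch. It assumes the model context from earlier in this example (with the variables alpha, beta, and sigma already defined as `model`), and the 2,000 draws is just an illustrative number:

```python
import pymc3 as pm

with model:  # the model defined earlier, containing alpha, beta, and sigma
    # MAP estimate used as the starting position (for illustration only;
    # in practice it's better to let PyMC3 choose its own starting point)
    start = pm.find_MAP()
    step = pm.NUTS()  # the No-U-Turn Sampler
    trace = pm.sample(2000, step=step, start=start)
```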
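Inspecting the resulting trace object, as described above, might then look like this (again a sketch, using the same variable names):

```python
# All samples drawn for alpha, across the four chains
alpha_samples = trace['alpha']
print(alpha_samples.shape)

# Trace plot: inferred posterior distributions on the left, the drawn
# samples against iteration number on the right, one line per chain
pm.traceplot(trace)
```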
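And the tabular summary, assuming ArviZ is imported under the usual az alias:

```python
import arviz as az

# Tabular summary of the sampling process: mean, standard deviation,
# highest posterior density interval, and sampling diagnostics per variable
az.summary(trace)
```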
You'll see that all the variables in our model are listed here, along with statistical information such as the mean and the standard deviation, as well as additional information such as the highest posterior density. We will look at what those terms mean in a later section.
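If you only care about some of the variables, you can restrict the table with the var_names argument of az.summary; for example:

```python
# Summarize only alpha and beta, leaving sigma out of the table
az.summary(trace, var_names=['alpha', 'beta'])
```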