Hi folks, it's Matt again. We're going to talk now about dominant strategy implementation. The idea we're looking at is that we have a society, we have to make a decision, and we're trying to design a mechanism that takes people's preferences and gives us outcomes. In particular, let's start with a situation where we've got a full set of alternatives and any possible ranking over those alternatives. So for instance we have a set of candidates that we need to pick from, say four, five, six candidates, and we can think about the voting rules that we've looked at earlier. Now we've got to make a choice over these candidates, and in particular what we'd like is that when we ask people, they have no difficulty deciding what they should announce: they should just be truthful. That's dominant strategy implementation. We want a mechanism under which they want to tell us truthfully exactly what their preference rankings are over the alternatives, and they don't gain at all by trying to mix up the alternatives and manipulate the system. So that's the idea. In particular, we've got our society with N individuals and a finite set of outcomes O, and we begin to think about designing mechanisms in this world where every agent has a dominant strategy for each preference. Here we can invoke the revelation principle. The revelation principle tells us that if we have an indirect mechanism with dominant strategies, we can collapse it into a social choice function, a direct mechanism where people just tell us their preferences directly, and then we give them the outcome they would have gotten through the original mechanism for each announcement of their preferences, had they followed their dominant strategies in the original mechanism.
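The revelation-principle construction just described can be sketched in a few lines. This is a hypothetical illustration, not code from the lecture: `indirect_mechanism` maps strategy profiles to outcomes, and `dominant_strategies` gives each agent's dominant strategy as a function of its preference.

```python
def revelation(indirect_mechanism, dominant_strategies):
    """Collapse an indirect mechanism with dominant strategies into a direct
    social choice function: ask agents for their preferences, plug in the
    strategies they would have played, and return the original outcome."""
    def direct_scf(preferences):
        play = tuple(s(pref) for s, pref in zip(dominant_strategies, preferences))
        return indirect_mechanism(play)
    return direct_scf

# Toy indirect mechanism (an assumption for illustration): each agent
# announces one alternative and the mechanism picks agent 0's announcement.
g = lambda announcements: announcements[0]
# Announcing your top-ranked alternative is then a dominant strategy.
top = lambda ranking: ranking[0]

f = revelation(g, [top, top])
print(f((("a", "b"), ("b", "a"))))  # "a" -- same outcome as playing g truthfully
```

The point is purely mechanical: the direct mechanism hard-wires the dominant strategies, so reporting your true preference reproduces exactly what you would have achieved in the original game.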
This makes truth a dominant strategy, so the revelation principle means that we can, without loss of generality for this exercise, look at social choice functions directly. Now what we want to do is think about which social choice functions a society can have that are going to be dominant strategy truthful and make sense. Such functions are also known as non-manipulable or strategy-proof, and sometimes they're called straightforward mechanisms or social choice functions. The important result in this area, due to Allan Gibbard and Mark Satterthwaite in the early 1970s, says that we're going to have a really hard time doing this in a setting where people can have any possible ranking over the alternatives. So let's have a look at that. This is what's known as the Gibbard-Satterthwaite theorem, and it's another impossibility theorem, similar to what we saw with Arrow's theorem and the Muller-Satterthwaite theorem: situations where we have a set of conditions we'd like to have, and the theorem says it's impossible to have this desirable set of conditions all at once. So what's the setting here? We've got a social choice function, and we'd like one that maps all possible preferences. So people have linear orders; they can have any strict ranking of our candidates. We're going to look at situations with at least three outcomes, so we have at least three candidates to choose from. And we're also going to look at a social choice function which is onto: for every possible outcome, there is a profile of preferences which gives you that outcome. That condition can be satisfied quite easily. For instance, if you just require that your social choice function be unanimous, so that whenever all individuals prefer the same alternative, we all say we love candidate a, the function picks candidate a, then in this domain of preferences c is going to be onto.
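To see concretely how a minimal unanimity condition delivers onto-ness, here's a small brute-force check. The plurality rule below is just a hypothetical example of a unanimous social choice function, chosen to make the point; nothing in the lecture commits to it.

```python
from itertools import permutations, product

alternatives = ("a", "b", "c")
rankings = list(permutations(alternatives))  # all strict rankings (linear orders)

def plurality(profile):
    """Pick the alternative ranked first by the most voters; break ties
    alphabetically. This rule is unanimous by construction."""
    counts = {x: sum(1 for r in profile if r[0] == x) for x in alternatives}
    return min(alternatives, key=lambda x: (-counts[x], x))

profiles = list(product(rankings, repeat=2))  # two voters, all profiles

# Unanimity: when both voters rank x first, x is chosen...
assert all(plurality(p) == p[0][0] for p in profiles if p[0][0] == p[1][0])
# ...and that alone already forces the rule to be onto: every outcome arises.
print(sorted({plurality(p) for p in profiles}))  # ['a', 'b', 'c']
```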
So if we put those conditions in, what does the theorem say? The theorem says that truthful reporting of preferences is a dominant strategy for every agent at every preference profile if and only if c is dictatorial. Again, that means there is some particular individual i for whom the choice function always picks their favorite alternative, the thing that maximizes their preferences, regardless of what anybody else says. We just pick one individual and listen only to that person. Now in terms of the proof: it's clear that if we assign somebody to be a dictator and don't listen to anybody else, that's going to be strategy-proof. Nobody else can make any difference, it doesn't matter what they say, and the person who is the dictator always wants to be truthful because they're getting their favorite alternative. The converse of the theorem is much more difficult, the part saying that if c is strategy-proof then it has to be dictatorial. This can be proven by various means: there are proofs that relate this back to Arrow's theorem and derive similar conclusions from the basic lemmas that were proved there. There's also a very elegant proof by Salvador Barberà which works off of individuals being pivotal, in the sense of changing their preferences and thereby changing the outcome, and shows that you can't have too many individuals being pivotal at once if you're going to satisfy dominant strategies. So there's a basic conflict between allowing multiple people to influence the decision at the same time and making sure that nobody can manipulate things by announcing preferences falsely. We're not going to explicitly offer that proof, but you can find various versions of it in the literature, and we'll leave you to look at that directly.
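The easy direction of the theorem, that dictatorship is strategy-proof, can be verified exhaustively for a small case. Here's a sketch with three alternatives and two agents; the helper names are mine, not the lecture's.

```python
from itertools import permutations, product

alternatives = ("a", "b", "c")
rankings = list(permutations(alternatives))

def dictatorial(profile, dictator=0):
    """Always choose the dictator's top-ranked alternative."""
    return profile[dictator][0]

def is_strategy_proof(scf, n_agents=2):
    """True if no agent, at any preference profile, can obtain a strictly
    preferred outcome by announcing a false ranking."""
    for profile in product(rankings, repeat=n_agents):
        for i in range(n_agents):
            honest = scf(profile)
            for lie in rankings:
                deviated = profile[:i] + (lie,) + profile[i + 1:]
                # Agent i prefers x to y iff x appears earlier in its true ranking.
                if profile[i].index(scf(deviated)) < profile[i].index(honest):
                    return False
    return True

print(is_strategy_proof(dictatorial))  # True
```

The hard direction, of course, is that within this unrestricted preference domain nothing other than dictatorship passes this check (among onto rules with three or more outcomes).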
What this means is that any social choice function we write down and are interested in, assuming we don't want dictatorial functions, is going to be manipulable in some situations. So in a voting setting, where we have a set of candidates to pick among and any possible ordering of those candidates can appear in people's preferences, regardless of what rule we use, we're going to end up having people manipulate that rule in certain situations. That's a very negative result in some sense, a damaging result, and it's different from Arrow's theorem, because what this is doing is really looking at the incentives people have and saying it's going to be difficult to get people to be truthful. Arrow's theorem said nothing about whether people are truthful or not. The conclusions reached in Arrow's theorem were just that some basic independence conditions and Pareto conditions couldn't be satisfied by a social welfare ordering at the same time, presuming that you could see people's preferences as the input. This theorem is saying that if you're going to make a choice, it's going to be difficult to get people to reveal their preferences to you in a truthful manner. One more thing about this, something you can verify yourself: suppose you had only two alternatives. Then it's going to be easy to find a rule. Think of majority rule: is that strategy-proof? Imagine we have just two candidates and you get to vote for whichever one you prefer; there will actually be a variety of strategy-proof rules in that setting. If c is not onto, then we're ignoring certain outcomes.
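You can check the two-alternative claim directly: with only two candidates, simple majority voting (here with three voters, so ties are impossible) is strategy-proof. A brute-force sketch, under the assumption that each voter's announcement is just the candidate it votes for:

```python
from itertools import product

def majority(votes):
    """Two candidates, labeled 0 and 1; the one with more votes wins."""
    return 1 if sum(votes) > len(votes) / 2 else 0

def majority_is_strategy_proof(n_voters=3):
    for favorites in product((0, 1), repeat=n_voters):  # each voter's true favorite
        honest_outcome = majority(favorites)
        for i in range(n_voters):
            lied = list(favorites)
            lied[i] = 1 - favorites[i]  # vote against your own favorite
            # A lie helps only if it flips the outcome to i's favorite.
            if majority(lied) == favorites[i] != honest_outcome:
                return False
    return True

print(majority_is_strategy_proof())  # True
```

Intuitively, voting against your favorite can only reduce its vote count, so it can never flip the outcome in your favor; the Gibbard-Satterthwaite conflict only bites once there are three or more alternatives.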
Then you get some more limited conclusions: things are going to be dictatorial, but only on the range of the outcome function, at least if that range still has at least three alternatives in it. So there are some limitations on the theorem. But in terms of really getting around this kind of negative result, there are various ways we can think of doing it. One is to use a weaker form of implementation. Here we were asking for dominant strategies for all individuals at all preferences. We could work instead with something like Bayes-Nash implementation, Bayesian implementation, where we ask only that it be a best response to announce truthfully given that everyone else is doing so. That'll open some doors for us, though it's much more demanding in terms of an equilibrium concept: the beauty of dominant strategies is that people don't have to worry about what other people are doing, and they don't have to know or have beliefs about what's going on. Bayes-Nash equilibrium, Bayesian Nash equilibrium, is much more demanding in that sense. Another option is to relax the assumption that people have arbitrary preferences and look at more structured settings. Indeed, when we do that, we're going to find much more interesting and positive results, in terms of rules that we can work with. We've already seen one, in fact: voting on a single-peaked domain. As we mentioned there, if you're asked for your preferences, you have no reason to try to distort them: the median peak wins, and the only way to change that outcome is to flip over and announce something on the other side of the median, which will be worse for that individual. You can also take the max of the peaks. Instead of median voting you could have maximum voting, so whoever has the maximum peak, or the minimum peak, or any order statistic.
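Here's a brute-force sketch of that single-peaked claim for a small setting. Alternatives sit on a line, and I'm using distance-to-peak as a stand-in for (symmetric) single-peaked preferences, a special case chosen just to make the check concrete.

```python
from itertools import product
from statistics import median

positions = range(5)  # alternatives 0..4 arranged on a line

def median_rule(reported_peaks):
    return median(reported_peaks)  # odd number of voters, so a genuine median

def truthful_is_dominant(n_voters=3):
    """No voter can ever move the median strictly closer to its true peak
    by reporting a false peak."""
    for peaks in product(positions, repeat=n_voters):
        for i in range(n_voters):
            honest = median_rule(peaks)
            for lie in positions:
                reported = peaks[:i] + (lie,) + peaks[i + 1:]
                if abs(median_rule(reported) - peaks[i]) < abs(honest - peaks[i]):
                    return False
    return True

print(truthful_is_dominant())  # True
```

Swapping `median` for `max` or `min` inside `median_rule` passes the same check, matching the order-statistics remark: you can only pull the max further from your peak, never closer.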
So single-peaked domains are settings where what we've done is limit the set of preferences: not every ordering of candidates is allowed any longer. Once you do that, it's easier to design strategy-proof rules. Another setting: trade. Imagine you have an indivisible good, a buyer and a seller, and they might have private values for how much they value the good, and you're trying to get trade to go on. One mechanism that works in that situation is just to fix a price, say you can trade at a price of 10. So each individual says: at a price of 10, am I willing to trade or not? Now there's a simple decision: you can't influence the price, all you can do is influence whether you actually trade or not, so you have an incentive to say truthfully whether you really want to trade. Then trade happens if both agents want to trade. There you can design a dominant strategy mechanism. There are going to be some inefficiencies associated with it: you're not going to have trade occurring in all the circumstances you'd like, but at least you've got dominant strategies. We'll take a closer look at these things. We're going to see a whole series of other types of rules. We'll look at Groves schemes and other kinds of schemes, and as we narrow the kinds of settings we're looking at, with more structure on preferences, we'll have a whole series of interesting dominant strategy compatible mechanisms. We'll take a look at the particular kinds of environments where that's going to be possible, next up.
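The fixed-price trade mechanism just described can be sketched as follows; the payoff bookkeeping is my own framing of the standard setup, not code from the lecture.

```python
def fixed_price_trade(buyer_value, seller_cost, price=10):
    """Posted-price trade: each side says yes or no at the posted price, and
    trade happens only if both accept. Since no announcement can move the
    price, accepting exactly when you gain from trading is dominant: the
    buyer accepts iff value >= price, the seller iff cost <= price."""
    if buyer_value >= price and seller_cost <= price:
        return {"trade": True,
                "buyer_payoff": buyer_value - price,
                "seller_payoff": price - seller_cost}
    return {"trade": False, "buyer_payoff": 0, "seller_payoff": 0}

print(fixed_price_trade(12, 7))  # trade occurs; both sides gain
# The inefficiency mentioned above: a value of 9 exceeds a cost of 7, so
# trade would be efficient, yet at the posted price of 10 the buyer declines.
print(fixed_price_trade(9, 7))
```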