So far we've talked about the opportunities related to data and its use to capture value in the platform world. We focused on the business models and how they work, deferring all thoughts about the social and ethical implications of these business models to a later time. Now we will address precisely this aspect. We deliberately chose to start with a description of what these companies have been able to see in data and how they have been able to ride these technological opportunities to generate and capture value. Unfortunately, however, not all that glitters is gold, and these opportunities hide quite a few challenges that companies, but also regulators, are still managing.

The use of data has brought to light questions that were not even conceivable a few years ago. Who owns the data? Who can use it? What can be done with it? Where is the line between what companies do for us and what they do because of our data? We don't have unequivocal answers to these questions, and we are entering a field where perceptions and regulations change very rapidly. So we're going to ask questions more than give answers. We'll try to reason critically about the implications of using data, and we're going to start with some facts.

Over the course of the last 10 years, a quote attributed to various members of the tech world has been circulating on the web and in the management world: "If it's free, you're not the customer, you're the product." A simple, short and particularly impactful phrase that has two implications:
- the first is the spread of a certain awareness of the business model behind free services;
- the second is the emergence of a certain reticence and negativity towards the world of Big Tech and digital companies, often portrayed as unethical and unscrupulous.

This second point soon turned into facts, with real outrage born and spread on the web - ironically, on the social networking platforms themselves. This is the case of the #LeaveFacebook movement, born after the Cambridge Analytica scandal.

In 2018, Cambridge Analytica, a British data mining and analysis consulting firm, was involved in a major data scandal. According to Facebook's analysis, Cambridge Analytica used a personality test to harvest personal data from 87 million Facebook users. These data were used to generate psychographic user profiles; the information from each profile suggested which type of advertisement could most effectively persuade the user about certain political events. The impact was considerable: the scandal led to Mark Zuckerberg's appearance and testimony before the US Congress in April 2018, which was followed by an apology and a revision of Facebook's privacy policy.

The dark side of data exploitation has been further analyzed by Carole Cadwalladr, a Guardian and Observer journalist; some of her articles are suggested after this video. She dug into Brexit and gained worldwide popularity with her TED Talk titled "Facebook's role in Brexit – and the threat to democracy." In relation to the Brexit campaign, she found that Cambridge Analytica's use of targeted advertisements may have influenced the referendum's vote. She highlighted the ambiguous way in which Facebook data were leveraged to influence Brexit's outcome and, in this respect, emphasized the responsibility of the technology titans in these episodes, which threatened democracy.
After this scandal, a movement under the #LeaveFacebook hashtag invited users to leave the platform, given what had happened with user data. The movement generated a lot of buzz and all the news outlets talked about it... but if we look at the number of registered users on Facebook, well, after the Cambridge Analytica scandal it continued to grow more or less at the same pace as before.

We should emphasize that this case is quite different from the examples about data we discussed before. Here the data were not used only in an aggregated and anonymous way and, above all, they were used for a particularly subtle purpose related to our psychology.

Staying with the Facebook case, in 2020 there was another big public debate about how the platform uses our data and impacts our lives. The debate opened with the release of "The Social Dilemma," a documentary based on interviews with former employees of tech giants and produced by Netflix - ironically, another major digital platform - that shows the darker side of the algorithms used by Facebook and other major platforms such as Instagram or Google. Again, should you be interested in learning more, we encourage you to watch the documentary and reflect on the darker sides of data usage. And again, despite the strong media coverage, Facebook users did not decrease.

Another particularly interesting case is that of the applications for tracking infections during the Covid-19 pandemic. In many countries these apps experienced a real media storm claiming a great risk to privacy. An unusual case is that of Immuni, the Italian contact-tracing app, which was strongly attacked for the risks related to the use of data. This is particularly surprising, as the app was very transparent about its use of data: monitoring was linked directly to the cell phone device, with no connection to the user's identification data, precisely to safeguard privacy. We conducted a study aimed at understanding the determinants of Immuni's failure. The results showed that privacy concerns were among the main factors explaining the decision not to download the application, basically causing the failure of a platform that - being based on clear network effects - needed the greatest possible diffusion.

It is almost incredible to see how all these scandals and discussions happened on social networks that were gathering data about us in the meantime. We are faced with a paradoxical situation: users are increasingly concerned about their online privacy, but meanwhile they generate an impressive amount of data using that very type of service. Moreover, we have thousands of examples of applications that, thanks to the data collected during the service, offer personalized services to their users... which is precisely the key to their success.

This has often been described as a trade-off: we give up privacy in exchange for a free service, or at least for a high degree of personalization. But are we really facing a trade-off? The answer, in our opinion, is no. Why should we - as users - give up the opportunities that data have offered us in terms of free or personalized services? More importantly, would the world really be willing to go back? Think about how many data-driven digital services we use all the time. But at the same time, isn't there a way for companies to capture the value contained in these data without infringing on the privacy of users?
This problem, rather than a trade-off, should be seen as a dilemma: a complex problem whose solution is anything but obvious.