In our technological journey through artificial intelligence, we saw that machine and deep learning solutions can scale from large data centers down to tiny IoT devices; we commented on the crucial role of hardware accelerators in the processing of AI solutions; and we introduced the machine and deep learning solutions provided in an “as-a-service” manner.

The growing interest in these fields is also confirmed by the skills appearing on top employers’ wish lists. Recently, IEEE Spectrum, the magazine of the Institute of Electrical and Electronics Engineers, published a very interesting ranking and comparison of the tech skills requested by the top US employers in 2014 and 2019. What is really interesting is that the tech skills showing the largest increase are those associated with machine and deep learning solutions and technologies. For example, Python, the programming language of most of the machine and deep learning frameworks available today, ranks third in this list; the first two positions are occupied by SQL and Java, respectively. Python programming is now requested in roughly 20% of tech positions, a 123% increase over 2014. In addition to Python, from 2014 to 2019 the demand for tech skills such as AWS, Machine Learning, Azure and Docker grew by between 400% and 4000%. Let me point out that AWS, Machine Learning, Azure and Docker are all topics we addressed in the videos about machine and deep learning as a service. These impressive numbers make us understand that machine and deep learning solutions and technologies are valuable opportunities to be seized.

But what challenges do we have to face in order to bring these opportunities to the ground? Without trying to be exhaustive, I would like to mention three points: first, the convergence between as-a-service and embedded AI; second, the ability to learn in real-world environments that are possibly time-varying; third, the role of user privacy in machine and deep learning processing.

We saw that the “as-a-service” approach to machine and deep learning is supported by Cloud computing platforms, while embedded AI is meant for tiny IoT objects. These two approaches, which today sit at opposite extremes of the technology spectrum, will naturally converge into a scenario where machine and deep learning processing is seamlessly executed on an ecosystem of heterogeneous hardware platforms, where real-time information is processed locally and where more “strategic” KPIs are sent to the upper levels for further processing and learning. In this scenario, the upcoming 5G will play a crucial role by means of offloading and orchestration mechanisms. We are already seeing this in heterogeneous hardware devices comprising, for example, a micro-controller and a GPU; in the global picture, the challenge is to scale this heterogeneous design to the whole ecosystem of pervasive devices. We are pretty sure that the integration between the “as-a-service” approach and embedded AI will happen in the upcoming years, and we expect, on the one hand, that the efficient and effective computing paradigms of the Cloud will be propagated to the edge to increase performance, scalability and reliability; and, on the other hand, that IoT units will represent the first line of processing of cloud systems. A minimal sketch of this local-first, offload-when-uncertain pattern follows below.
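As a companion to the transcript, here is a minimal sketch of one way this convergence can look in practice: confidence-based offloading, where the IoT unit classifies each sample on device and forwards only the uncertain cases to a remote as-a-service model. Everything here is an illustrative assumption, not part of the lecture: the endpoint URL, the confidence threshold, and the toy linear model standing in for a real embedded network.

```python
import json
import urllib.request

import numpy as np

# Hypothetical cloud endpoint and tuning knob (placeholders, not a real service).
CLOUD_ENDPOINT = "https://ml-service.example.com/classify"
CONFIDENCE_THRESHOLD = 0.8

# Stand-in for a real embedded model: a fixed linear layer plus softmax.
rng = np.random.default_rng(seed=0)
WEIGHTS = rng.normal(size=(3, 8))  # 3 classes, 8 input features


def tiny_model(sample: np.ndarray) -> np.ndarray:
    """Toy on-device model returning class probabilities."""
    logits = WEIGHTS @ sample
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()


def classify(sample: np.ndarray) -> int:
    """First line of processing runs on the IoT unit; only
    low-confidence samples are offloaded to the cloud service."""
    scores = tiny_model(sample)
    label, confidence = int(np.argmax(scores)), float(np.max(scores))
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # handled locally, in real time
    # Offload the hard case to the remote as-a-service model.
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps({"sample": sample.tolist()}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return int(json.loads(response.read())["label"])
```

In a real deployment, the local model would be a quantized network running on a micro-controller or embedded GPU, and the offloading decision could also account for latency, bandwidth and energy budgets rather than confidence alone.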
Moving machine and deep learning to the real world must also account for the fact that the real world changes and evolves over time. This is due, for example, to seasonality or periodicity effects, changes in users’ behavior, evolutions of the stock markets, or faults and malfunctions in sensors or actuators. The effect of these variations, also called concept drift, is that previously learned models become obsolete, hence no longer effective in processing new data. From a technical point of view, most machine and deep learning algorithms assume that data are i.i.d., that is, independent and identically distributed. This assumption rarely holds in real-world scenarios, with potentially catastrophic effects on the envisaged applications. We had a striking example with the occurrence of the COVID-19 pandemic in 2020: what we knew about our world, encoded in our machine and deep learning models, suddenly became obsolete, and we had to learn new models to predict, for example, economic trends or the behavior of users. Learning in the presence of concept drift is hence a promising research area, aiming at removing the stationarity hypothesis from machine and deep learning algorithms so as to make them able to acquire new knowledge over time and discard the obsolete one, guaranteeing the highest accuracy even in time-varying scenarios (a minimal sketch of a drift detector is given at the end of this transcript).

Last, but not least, privacy. We already commented that the technological evolution of Cloud-based computing infrastructures intercepted the ever-growing demand for machine and deep learning solutions through the “machine-and-deep-learning-as-a-service” approach. Unfortunately, to be effective, this approach requires processing on the Cloud of possibly sensitive user data: for example, personal pictures or videos and medical diagnoses, as well as data that might reveal ethnic origin or political opinions, and genetic, biometric and health data. The challenge we have to face here is to preserve the privacy of users in the machine-and-deep-learning-as-a-service computing scenario or, more generally, in any scenario where acquired information is processed by remote technological actors. This is a novel and promising research and technological area of machine and deep learning, based, for example, on homomorphic encryption and differential privacy (a toy example of differential privacy also closes this transcript). There is still a lot of work to do in this area, and we will surely see great advances in this direction in the next few years.

We commented on three of the many challenges we have to face in the technological journey of AI. In the next two videos of this MOOC we will see the perspectives of two companies working in the field: STMicroelectronics and Alkemy Lab. They will help us understand the next big steps in the field of embedded AI and Cloud-based machine and deep learning for social media. Stay tuned.
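Below is the minimal drift-detector sketch referenced above. It is an illustrative assumption, not the lecture’s method: it simply compares the model’s recent error rate on a sliding window against a reference error rate measured when the stream was assumed stationary, and flags drift when the gap exceeds a fixed threshold. Production detectors such as DDM or ADWIN rely on statistically grounded bounds instead.

```python
from collections import deque


class WindowDriftDetector:
    """Minimal sketch: flag concept drift when the recent error rate
    exceeds a reference error rate by more than a fixed threshold."""

    def __init__(self, window_size=200, threshold=0.1):
        self.window = deque(maxlen=window_size)   # 1 = error, 0 = correct
        self.reference_error = None               # error rate when stationary
        self.threshold = threshold                # assumed tolerance, tune per task

    def update(self, prediction, truth):
        """Feed one (prediction, ground-truth) pair; return True when the
        accumulated evidence suggests a concept drift."""
        self.window.append(int(prediction != truth))
        if len(self.window) < self.window.maxlen:
            return False                          # not enough evidence yet
        current_error = sum(self.window) / len(self.window)
        if self.reference_error is None:
            self.reference_error = current_error  # calibrate on first full window
            return False
        return current_error - self.reference_error > self.threshold


# Usage (hypothetical stream): when drift is flagged, retrain the model
# on recent data and re-calibrate the detector.
detector = WindowDriftDetector(window_size=100, threshold=0.15)
# for x, y in stream:
#     if detector.update(model.predict(x), y):
#         ...retrain the model and reset the detector...
```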
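And here is the toy differential-privacy example referenced above: the Laplace mechanism answers a counting query on sensitive data by adding calibrated noise, so that the presence or absence of any single record is statistically masked. The data and the epsilon value are illustrative placeholders.

```python
import numpy as np


def private_count(values, predicate, epsilon=1.0):
    """Answer "how many records satisfy predicate?" with epsilon-differential
    privacy via the Laplace mechanism. A counting query has sensitivity 1
    (adding or removing one record changes the count by at most 1),
    so the noise scale is 1 / epsilon."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise


# Illustrative data: how many patients are older than 60, answered privately?
ages = [34, 71, 65, 50, 80, 42, 67]
print(private_count(ages, lambda age: age > 60, epsilon=0.5))
```

Smaller epsilon values give stronger privacy at the cost of noisier answers; homomorphic encryption, the other technique mentioned above, instead lets the Cloud compute directly on encrypted data and requires dedicated libraries.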