Machine learning systems used in Clinical Decision Support Systems (CDSS) require external validation, calibration analysis, and assessment of bias and fairness. In this course, the main concepts of machine learning evaluation adopted in CDSS will be explained. Furthermore, decision curve analysis will be discussed, along with human-centred CDSS that need to be explainable. Finally, privacy concerns of deep learning models and potential adversarial attacks will be presented, along with the vision for a new generation of explainable and privacy-preserving CDSS.
This course is part of the Specialization: Informed Clinical Decision Making using Deep Learning
About this Course
What you will learn
Evaluating Clinical Decision Support Systems
Bias, Calibration and Fairness in Machine Learning Models
Decision Curve Analysis and Human-Centred Clinical Decision Support Systems
Privacy concerns in Clinical Decision Support Systems
Skills you will gain
- Calibration in machine learning models
- Human-centred clinical decision support systems
- Privacy concerns in clinical decision support systems
- Bias and fairness in machine learning models
- Clinical decision support systems
Offered by

University of Glasgow
The University of Glasgow has been changing the world since 1451. It is a world top 100 university (THE, QS) with one of the largest research bases in the UK.
Syllabus - What you will learn in this course
From machine learning models to clinical decision support systems
Adopting a machine learning model in a Clinical Decision Support System (CDSS) requires several steps: external validation, bias assessment, calibration, 'fairness' assessment, evaluation of clinical usefulness, the ability to explain the model's decisions, and privacy-aware machine learning models. In this module, we discuss these concepts and provide several examples from state-of-the-art research in the area. External validation and bias assessment have become the norm for clinical prediction models; further work is required to assess and adopt deep learning models under the same conditions. Research in 'fairness', human-centred CDSS and the privacy concerns of machine learning models, on the other hand, remains an area of active investigation. The first week covers the difference between reproducibility and generalisability, explores calibration assessment in clinical prediction models, and discusses how different deep learning architectures affect calibration.
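The kind of calibration assessment described above asks whether predicted risks match observed event rates. A minimal sketch with scikit-learn, using a synthetic dataset and a logistic regression standing in for a clinical risk model (both are illustrative assumptions, not course materials):

```python
# Sketch: calibration assessment of a risk model (synthetic data).
# The dataset and model here are placeholders, not a real clinical cohort.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]

# Reliability-diagram data: observed event rate per predicted-probability bin.
frac_pos, mean_pred = calibration_curve(y_te, probs, n_bins=10)
print("Brier score:", brier_score_loss(y_te, probs))
for p, f in zip(mean_pred, frac_pos):
    print(f"predicted {p:.2f} -> observed {f:.2f}")
```

A well-calibrated model shows observed rates close to the predicted ones in each bin; the Brier score summarises both calibration and discrimination in a single number.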
'Fairness' in Machine Learning Models
Naively, machine learning can be thought of as a way to reach decisions free from prejudice and social biases. However, recent evidence shows that machine learning models learn biases present in historic data and reproduce unfair decisions in similar ways. Detecting biases against subgroups in machine learning models is challenging, partly because these models were never deliberately designed or trained to discriminate. Defining 'fairness' metrics and investigating ways of ensuring that minority groups are not disadvantaged by machine learning models' decisions is an active research area.
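Two of the commonly studied 'fairness' metrics can be computed directly from model outputs and a sensitive attribute. The sketch below uses random synthetic decisions and a hypothetical binary subgroup indicator `g`; the metric definitions (demographic parity, equal opportunity) are standard, everything else is illustrative:

```python
# Sketch: two common group-fairness metrics on synthetic model outputs.
# `g` is a hypothetical sensitive-attribute indicator, not real patient data.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)   # actual outcomes
y_pred = rng.integers(0, 2, size=1000)   # model decisions
g = rng.integers(0, 2, size=1000)        # subgroup membership (0 or 1)

def demographic_parity_diff(y_pred, g):
    """Absolute difference in positive-decision rates between subgroups."""
    return abs(y_pred[g == 0].mean() - y_pred[g == 1].mean())

def equal_opportunity_diff(y_true, y_pred, g):
    """Absolute difference in true-positive rates between subgroups."""
    tpr = lambda grp: y_pred[(g == grp) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

print("Demographic parity gap:", demographic_parity_diff(y_pred, g))
print("Equal opportunity gap: ", equal_opportunity_diff(y_true, y_pred, g))
```

A gap near zero on either metric does not by itself establish fairness; the metrics can conflict with each other, which is part of why this remains an open research question.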
Decision Curve Analysis and Human-Centered CDSS
Decision curve analysis is used to assess the clinical usefulness of a prediction model by estimating its net benefit, which trades off the benefit of true positives against the harm of false positives at a chosen risk threshold. Under this approach, the strategies of 'intervention for all' and 'intervention for none' are compared against the model's net benefit. Decision curve analysis is a human-centred approach to assessing clinical usefulness, since it requires experts' opinion on the relevant threshold probabilities. Ethical Artificial Intelligence initiatives indicate that a human-centred approach in clinical decision support systems is required to enable accountability, safety and oversight while ensuring 'fairness' and transparency.
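The net benefit calculation behind a decision curve is short enough to sketch. At a threshold probability pt, net benefit is TP/N - (FP/N) * pt/(1 - pt); 'intervention for all' reduces to prevalence - (1 - prevalence) * pt/(1 - pt), and 'intervention for none' is always zero. The risk scores below are synthetic placeholders:

```python
# Sketch: net benefit as used in decision curve analysis.
# Outcomes and risk scores are synthetic; thresholds are illustrative.
import numpy as np

def net_benefit(y_true, probs, pt):
    """Net benefit of intervening on patients with predicted risk >= pt."""
    n = len(y_true)
    treat = probs >= pt
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - (fp / n) * pt / (1 - pt)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=500)
p = np.clip(y * 0.6 + rng.normal(0.2, 0.2, size=500), 0, 1)  # toy risk scores

for pt in (0.1, 0.2, 0.3):
    prev = y.mean()
    nb_all = prev - (1 - prev) * pt / (1 - pt)   # 'intervention for all'
    print(f"pt={pt}: model={net_benefit(y, p, pt):.3f}, "
          f"treat-all={nb_all:.3f}, treat-none=0")
```

Plotting net benefit across a clinically plausible range of thresholds, for the model and for both default strategies, yields the decision curve itself.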
Privacy Concerns in CDSS
Deep learning models have a remarkable ability to memorise data even when they do not overfit. In other words, the models themselves can expose information about patients and compromise their privacy. This can result in unintentional data leakage at inference time and can also provide opportunities for malicious attacks. We will review common privacy attacks and defences against them. Finally, we will discuss adversarial attacks against deep learning explanations.
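One of the simplest privacy attacks of the kind mentioned above is confidence-thresholding membership inference: if a model is unusually confident on a record, guess that the record was in the training set. The sketch below demonstrates the idea with a deliberately memorising random forest on synthetic data; all names and the threshold are illustrative assumptions, not the course's attack:

```python
# Sketch: confidence-threshold membership inference on synthetic data.
# A fully-grown random forest stands in for a memorising deep model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

def confidence(model, X, y):
    """Model's predicted probability for each record's true label."""
    return model.predict_proba(X)[np.arange(len(y)), y]

# Attack: flag a record as a training-set member if confidence is very high.
threshold = 0.9
members = confidence(model, X_tr, y_tr) > threshold      # training records
non_members = confidence(model, X_te, y_te) > threshold  # unseen records
print("flagged as members (train):", members.mean())
print("flagged as members (test): ", non_members.mean())
```

The gap between the two flagged rates is the leakage the attacker exploits; defences such as differential privacy aim to shrink exactly this gap.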
About the Specialization: Informed Clinical Decision Making using Deep Learning
This specialisation is for learners with programming experience who are interested in expanding their skills by applying deep learning to Electronic Health Records, with a focus on how to translate their models into Clinical Decision Support Systems.

Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
More questions? Visit the Learner Help Center.