
Learner reviews and feedback for Get Familiar with ML basics in a Kaggle Competition by Coursera Project Network

4.4 stars · 14 ratings · 3 reviews

About the Course

In this 1-hour long project, you will learn how to predict which passengers survived the Titanic shipwreck and make your first submission to a Machine Learning competition on the Kaggle platform. As a beginner in Machine Learning applications, you will also get familiar with, and gain a deeper understanding of, how to start a prediction model using basic supervised Machine Learning models. We will choose classifiers to learn and predict, and perform an Exploratory Data Analysis (also called EDA). By the end, you will know how to measure a model's performance, submit your model to the competition, and get a score from Kaggle. This guided project is for beginners in Data Science who want to do a practical application using Machine Learning. You will get familiar with the methods used in machine learning applications and data analysis. In order to be successful in this project, you should have an account on the Kaggle platform (free of charge). You should also be familiar with some basic Python programming; we will use the numpy and pandas libraries. Some background in Statistics, such as knowledge of probability, is appreciated, but it is not a requirement.
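As an illustration of the workflow described above, here is a minimal sketch, not the instructor's notebook, assuming the standard Kaggle Titanic files (train.csv, test.csv), a small hand-picked feature set, and a scikit-learn classifier:

```python
# Minimal sketch of the described workflow: load the Titanic data,
# do basic preprocessing, train a classifier, check validation
# accuracy, and write a submission file for Kaggle.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Basic preprocessing: a few numeric/categorical features,
# median-fill missing values, and encode 'Sex' as 0/1.
features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]
for df in (train, test):
    df["Age"] = df["Age"].fillna(df["Age"].median())
    df["Fare"] = df["Fare"].fillna(df["Fare"].median())
    df["Sex"] = df["Sex"].map({"male": 0, "female": 1})

X = train[features]
y = train["Survived"]

# Hold out part of the training data to estimate model performance.
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_tr, y_tr)
print("Validation accuracy:", accuracy_score(y_val, model.predict(X_val)))

# Predict on the test set and write the file to upload to Kaggle.
submission = pd.DataFrame({
    "PassengerId": test["PassengerId"],
    "Survived": model.predict(test[features]),
})
submission.to_csv("submission.csv", index=False)
```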

Top reviews


1 - 4 of 4 reviews for Get Familiar with ML basics in a Kaggle Competition

By 121910303051 V S T

Mar 6, 2021

Great to start with the basics but needed a little more explanation on libraries

By Isara S

Sep 17, 2021

This is a really good guided project to start with a Kaggle competition. I learnt all the basics required to start with Kaggle.

By Mustak A

Aug 5, 2021

Need to add some more explanation about Kaggle

By Hideki O

Oct 22, 2021

This course should be called the basics of how to use JupyterLab rather than ML basics. The instructor goes through some rudimentary data preprocessing, but there is very little theoretical explanation as to why the preprocessing should be done, and for a beginner it would be difficult to understand why the instructor did that. For example, there was no explanation as to why the "stratify" option was used when splitting the training and test data with the train_test_split() function. I was able to figure out the meaning of the option and why it matters by Googling it, but I think it should have been explained in the lecture. This is just one example. Overall, there was too little explanation of the theoretical background in this class.
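For readers who hit the same question, here is a small sketch, not taken from the course and using a toy label column rather than the Titanic data, of what the stratify parameter of scikit-learn's train_test_split does: it keeps the class proportions of y roughly the same in both splits, which matters for an imbalanced target such as survival.

```python
# Compare a stratified split with an unstratified one on a toy,
# imbalanced label column (70% class 0, 30% class 1).
import pandas as pd
from sklearn.model_selection import train_test_split

y = pd.Series([0] * 70 + [1] * 30)
X = pd.DataFrame({"feature": range(100)})

# Stratified: both splits keep roughly 30% positives.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
print(y_tr.mean(), y_te.mean())

# Unstratified: the positive rate can drift between the splits.
X_tr2, X_te2, y_tr2, y_te2 = train_test_split(
    X, y, test_size=0.25, random_state=0
)
print(y_tr2.mean(), y_te2.mean())
```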