
Learner reviews and feedback for Sample-based Learning Methods by the University of Alberta

798 ratings
160 reviews

About the Course

In this course, you will learn about several algorithms that can learn near-optimal policies through trial-and-error interaction with the environment---learning from the agent's own experience. Learning from actual experience is striking because it requires no prior knowledge of the environment's dynamics, yet can still attain optimal behavior. We will cover intuitively simple but powerful Monte Carlo methods, and temporal-difference learning methods including Q-learning. We will wrap up this course by investigating how we can get the best of both worlds: algorithms that combine model-based planning (similar to dynamic programming) with temporal-difference updates to radically accelerate learning. By the end of this course you will be able to:
- Understand Temporal-Difference learning and Monte Carlo as two strategies for estimating value functions from sampled experience
- Understand the importance of exploration when using sampled experience rather than dynamic programming sweeps within a model
- Understand the connections between Monte Carlo, Dynamic Programming, and TD
- Implement and apply the TD algorithm for estimating value functions
- Implement and apply Expected Sarsa and Q-learning (two TD methods for control)
- Understand the difference between on-policy and off-policy control
- Understand planning with simulated experience (as opposed to classic planning strategies)
- Implement a model-based approach to RL, called Dyna, which uses simulated experience
- Conduct an empirical study to see the improvements in sample efficiency when using Dyna
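To make the description above concrete, here is a minimal sketch of tabular Q-learning, one of the TD control methods the course covers. The toy chain environment, hyperparameters, and all names below are illustrative assumptions for this sketch, not course materials.

```python
import random

# Minimal tabular Q-learning sketch on an assumed toy 5-state chain MDP.
# States 0..4; actions: 0 = left, 1 = right; reaching state 4 gives reward 1
# and ends the episode. Parameters are illustrative, not from the course.

N_STATES = 5
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

def step(state, action):
    """Deterministic chain dynamics: moving right heads toward the goal."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

Q = [[0.0, 0.0] for _ in range(N_STATES)]
random.seed(0)

for _ in range(300):  # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy behavior policy over the current action values
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the greedy (off-policy) target
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy should prefer "right" (action 1)
# in every non-terminal state.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES - 1)]
print(policy)
```

Note the off-policy character mentioned in the description: the agent behaves epsilon-greedily, but the update target `max(Q[s2])` evaluates the greedy policy.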

Top reviews

Jan 9, 2020

Really great resource to follow along with the RL book. Important suggestion: do not skip the reading assignments; they are really helpful, and following the videos and assignments becomes easy.

Oct 2, 2019

Great course! The notebooks are a perfect level of difficulty for someone learning RL for the first time. Thanks Martha and Adam for all your work on this!! Great content!!

Filter by:

151 - 156 of 156 reviews for Sample-based Learning Methods

by Duc H N

Feb 2, 2020

The last test is a little bit tricky

by Sanat D

Jul 29, 2020

The reading material is great (as are the lectures), but frankly, the hypersensitive autograder is a real hindrance. Correct implementations don't get full points because the grader is sensitive to things like the order of random number generator calls, rather than accepting a correct range of solutions. To make things worse, the autograder gives poor feedback - I often had to rely on assignment discussions with people who had received similarly unhelpful feedback to debug my solutions.

by Vasileios V

Jun 15, 2020

Some explanations should be broken down into smaller pieces

by Chungeon K

May 24, 2020

The content is too condensed. It seems the lecture time needs to be increased.

by Andreas B

Aug 22, 2020

I give the course a low rating for several reasons, the first being the most important one. First: the instructors are basically completely absent. Having issues or problems? They don't bother. Not a single reply from either instructor in the forums for months or years. Second: flawed and imprecise notebooks. Well-known issues with random numbers, but no updates. Incorrect book references that will lead you to implement formulas other than the intended ones. Third: tons of short videos that are 30% summary and "what you will learn," which is ridiculous for 3-minute videos. Fourth: mathematical depth is missing after the first sub-course. Suggestion: watch the David Silver and Stanford YouTube lessons instead. Free and better explained. Compared to, for instance, Andrew Ng's specialization, this one is really bad, mostly thanks to the complete disinterest of the instructors.

by Mansour A K

May 18, 2020

This is one of the worst courses I have ever taken in my life. The videos don't contain much content, and the presenters just read them off with no clarification or explanation. Furthermore, the book is also terrible (despite the fact that it's the gospel of RL). The writers of the book, who are two well-respected scientists, really struggle at writing books. There is another course (or specialization) from the National Research University Higher School of Economics called "Practical Reinforcement Learning". You probably should check it out before you take this one.