Underactuated Robotics
Robots today move far too conservatively, using control systems that attempt to maintain full control authority at all times. Humans and animals move much more aggressively, routinely executing motions that involve a loss of instantaneous control authority. Controlling nonlinear systems without complete control authority requires methods that can reason about and exploit the natural dynamics of our machines. This course discusses nonlinear dynamics and control of underactuated mechanical systems, with an emphasis on machine learning methods. Topics include nonlinear dynamics of passive robots (walkers, swimmers, flyers), motion planning, partial feedback linearization, energy-shaping control, analytical optimal control, reinforcement learning/approximate optimal control, and the influence of mechanical design on control. Discussions include examples from biology and applications to legged locomotion, compliant manipulation, underwater robots, and flying machines.

Acknowledgements

Professor Tedrake would like to thank John Roberts for his help with the course and for videotaping the lectures.
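To make the idea of exploiting natural dynamics concrete, here is a minimal sketch of one technique named above, energy-shaping control, applied to a simple pendulum (the subject of Lesson 2). This is illustrative code, not course material: all parameter values and the Euler integration scheme are assumptions chosen for the demo, and the stabilizing controller one would normally add near the upright is omitted.

```python
# Illustrative sketch (not course code): energy-shaping swing-up for a
# simple pendulum. Instead of fighting the dynamics, the controller
# pumps energy in or out until the total energy matches that of the
# upright equilibrium; the pendulum's own dynamics do the rest.
import math

# Assumed demo parameters: mass, length, gravity, energy-shaping gain.
m, l, g = 1.0, 1.0, 9.81
k = 0.5

def energy(theta, thetadot):
    """Total energy, with zero potential at the pivot height."""
    return 0.5 * m * l**2 * thetadot**2 - m * g * l * math.cos(theta)

E_desired = m * g * l  # energy of the upright (unstable) equilibrium

def control(theta, thetadot):
    """Torque that drives the energy error to zero: u = -k*thetadot*(E - E_d)."""
    return -k * thetadot * (energy(theta, thetadot) - E_desired)

# Semi-implicit Euler simulation from near the hanging position.
theta, thetadot, dt = 0.0, 0.5, 1e-3
for _ in range(20000):
    u = control(theta, thetadot)
    thetadotdot = (u - m * g * l * math.sin(theta)) / (m * l**2)
    thetadot += thetadotdot * dt
    theta += thetadot * dt

print(abs(energy(theta, thetadot) - E_desired))  # energy error shrinks toward 0
```

The key design choice is that the feedback law acts on the energy error rather than the state error, so a small torque applied at the right phase of the swing can reach a configuration that no bounded torque could hold against gravity directly.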
Course Features
- Lectures: 22
- Quizzes: 0
- Duration: 4 hours per week
- Skill level:
- Language: English
- Students: 637
- Certificate: No
- Assessments: Self
Lesson 1
- Lecture 1.1 Introduction
Lesson 2
- Lecture 2.1 The Simple Pendulum
Lesson 3
- Lecture 3.1 Optimal Control of the Double Integrator
Lesson 4
- Lecture 4.1 Numerical Optimal Control (Dynamic Programming)
Lesson 5
- Lecture 5.1 Acrobot and Cart-pole
Lesson 6
- Lecture 6.1 Swing-up Control of Acrobot and Cart-pole Systems
Lesson 7
- Lecture 7.1 Dynamic Programming (DP) and Policy Search
Lesson 8
- Lecture 8.1 Trajectory Optimization
Lesson 9
- Lecture 9.1 Trajectory Stabilization and Iterative Linear Quadratic Regulator
Lesson 10
- Lecture 10.1 Walking
Lesson 11
- Lecture 11.1 Running
Lesson 12
- Lecture 12.1 Feasible Motion Planning
Lesson 13
- Lecture 13.1 Global Policies from Local Policies
Lesson 14
- Lecture 14.1 Introducing Stochastic Optimal Control
Lesson 15
- Lecture 15.1 Stochastic Gradient Descent
Lesson 16
- Lecture 16.1 Temporal Difference Learning
Lesson 17
- Lecture 17.1 Temporal Difference Learning with Function Approximation
Lesson 18
- Lecture 18.1 Policy Improvement
Lesson 19
- Lecture 19.1 Actor-critic Methods
Lesson 20
- Lecture 20.1 Case Studies in Computational Underactuated Control
Exams
- Lecture 21.1 Exams
Projects
- Lecture 22.1 Projects