Robust control strategies for musculoskeletal models using deep reinforcement learning

The OpenSim Project and the National Center for Simulation in Rehabilitation Research (NCSRR) at Stanford invite you to join our next webinar, featuring Łukasz Kidziński from Stanford University. He will give an introduction to reinforcement learning and its application to developing control strategies.
 

DETAILS 
Title: Robust control strategies for musculoskeletal models using deep reinforcement learning
Speaker: Łukasz Kidziński from Stanford University 
Time: Tuesday, August 7th at 10:00 a.m. Pacific Daylight Time 
Registration: https://simtk.webex.com/mw3300/myweb...imtk&service=6

ABSTRACT 
Predicting how the human motor control system adapts to new conditions during gait is a grand challenge in biomechanics. Computational models that emulate human motor control could assist in many applications, such as improving surgical planning for gait pathologies and designing devices to restore mobility for lower-limb amputees. Deep reinforcement learning is a promising approach for modeling motor control and its adaptation to new conditions, but it has not been widely explored in biomechanics research. In this webinar, we will provide an introduction to reinforcement learning and highlight its use for biomechanical applications. 

Traditional physics-based biomechanical simulations track experimental data, such as joint kinematics and ground reaction forces (GRFs), which prevents these studies from investigating how kinematics and GRFs would adapt to a new control strategy. Generating simulations de novo is currently difficult because of the large optimization space and the need for movement-specific controllers, but recent developments in machine learning can search large spaces efficiently. One such technique, reinforcement learning, trains an agent to take actions that maximize a user-defined performance metric, or reward, making it possible to build a complex controller for any movement without movement-specific domain knowledge.
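As a rough illustration of this idea (a minimal sketch, not part of the webinar materials; the toy environment and reward below are invented for illustration), a reinforcement learning loop repeatedly observes a state, chooses an action, and collects a reward that the learner tries to maximize:

    import random

    def step(position, action):
        # Toy "environment": the agent moves left or right along a line;
        # the reward is higher the closer it ends up to a target position.
        new_position = position + action
        reward = -abs(new_position - 10)
        return new_position, reward

    position = 0
    total_reward = 0
    for t in range(20):
        action = random.choice([-1, 1])   # a real agent would learn this choice
        position, reward = step(position, action)
        total_reward += reward            # the quantity the learner maximizes

In a musculoskeletal simulation the state would instead be joint angles and velocities, the actions would be muscle excitations, and the reward would encode the desired walking behavior.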

We have developed osim-rl (http://osim-rl.stanford.edu/), an OpenSim-based platform that enables anyone to easily develop and test new reinforcement-learning-based control strategies with a physiologically accurate musculoskeletal model. In the webinar, we will introduce this platform, which is being used in a challenge to develop a controller that enables a model with a prosthetic leg to walk at requested speeds and in requested directions. We encourage the biomechanics community to participate and will discuss how to get started. For details about the Conference on Neural Information Processing Systems (NIPS) 2018 challenge, visit https://www.crowdai.org/challenges/n...tics-challenge.
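For those who want to try the platform before the webinar, the interaction loop looks roughly like this (a sketch assuming a Gym-style interface as described in the osim-rl documentation; the exact class name, installation steps, and observation format may differ between challenge rounds):

    # Minimal random-policy rollout against the osim-rl prosthetics environment.
    # Assumes osim-rl is installed per the instructions at http://osim-rl.stanford.edu/.
    # ProstheticsEnv was used for the NIPS 2018 challenge; check the current docs.
    from osim.env import ProstheticsEnv

    env = ProstheticsEnv(visualize=False)    # set True to open the OpenSim visualizer
    observation = env.reset()

    total_reward = 0.0
    done = False
    while not done:
        action = env.action_space.sample()   # random muscle excitations; replace with a learned policy
        observation, reward, done, info = env.step(action)
        total_reward += reward

Replacing the random action with the output of a learned policy network is exactly the task posed by the challenge.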

For more information or to find links to recordings of past webinars, visit http://opensim.stanford.edu/support/webinars.html. The OpenSim Webinar Series is funded by the NIH National Center for Simulation in Rehabilitation Research (NCSRR). Find out more about the NCSRR and the webinar series by visiting our website http://opensim.stanford.edu.
