Tuesday, 25 September 2018

The field of motor control has been steadily moving towards the idea that there is no such thing as an ideal movement. The system is not trying to reliably produce a single, stable, perfect form, and movement variability has accordingly gone from being treated as noise to being studied as a key feature of a flexible, adaptive control process. This formalises Bernstein's notion of 'repetition without repetition' in movement, and recognises that the redundancy in our behavioural capabilities relative to any given task makes multiple solutions to that task legitimate options.
There are many analysis techniques within this 'motor abundance' framework, and I've reviewed most of them already: uncontrolled manifold analysis, stochastic optimal control theory and goal equivalent manifolds are the three big ones, along with nonlinear covariation analysis. The essence of all these methods is that they take variability in the execution or outcome of a movement and decompose it into a component that does not interfere with achieving the outcome and a component that does.
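To make that decomposition concrete, here is a minimal sketch in the spirit of these methods, using a toy task I've made up for illustration (not any paper's published procedure): two execution variables must sum to a target, so variability along the solution manifold is harmless while variability orthogonal to it costs accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)
target = 10.0

# Simulated trials: lots of compensated variability along the solution
# manifold, much less variability orthogonal to it.
u_along = np.array([1.0, -1.0]) / np.sqrt(2)  # leaves x1 + x2 unchanged
u_ortho = np.array([1.0, 1.0]) / np.sqrt(2)   # changes x1 + x2
trials = (np.array([target / 2, target / 2])
          + np.outer(rng.normal(0, 1.0, 200), u_along)
          + np.outer(rng.normal(0, 0.2, 200), u_ortho))

# Decompose deviations from the mean into the two subspaces and
# compare the variance in each.
dev = trials - trials.mean(axis=0)
v_ucm = np.var(dev @ u_along)  # task-irrelevant ('good') variability
v_ort = np.var(dev @ u_ortho)  # task-relevant ('bad') variability
print(f"V_UCM = {v_ucm:.2f}, V_ORT = {v_ort:.2f}")  # V_UCM >> V_ORT
```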
This post will explain the variability decomposition process in Cohen & Sternad's (2009) Tolerance, Noise and Covariation (TNC) analysis, which my students and I are busily applying to some new throwing data from the lab. I have talked a little about this analysis here, but I focused on the part of the analysis that involves a task dynamical analysis identical to the one I did for my throwing paper in 2016. In this post, I want to explain the TNC analysis itself. I will be relying on Sternad et al. (2010), which I've found to be a crystal clear explanation of the entire approach; you can also download MATLAB code implementing the analysis from her website.
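As a preview of where the analysis goes, here is a hedged sketch of the three TNC costs as I read Sternad et al. (2010): each cost estimates how much mean performance would improve under one specific transformation of the observed data. The function names, the search over candidate relocations and the permutation scheme are my stand-ins, not the published MATLAB implementation; `result_fn` maps a point in execution space to an error score, lower being better.

```python
import numpy as np

def mean_result(data, result_fn):
    """Mean error score over trials (rows of data); lower is better."""
    return np.mean([result_fn(x) for x in data])

def t_cost(data, result_fn, shifts):
    """Tolerance: gain from relocating the whole data set to the most
    error-tolerant region of execution space (shifts = candidate
    relocations to search over, e.g. a grid of offsets)."""
    best = min(mean_result(data + s, result_fn) for s in shifts)
    return mean_result(data, result_fn) - best

def n_cost(data, result_fn):
    """Noise: gain from shrinking the data's dispersion onto its mean."""
    return mean_result(data, result_fn) - result_fn(data.mean(axis=0))

def c_cost(data, result_fn, n_perm=500, seed=0):
    """Covariation: gain from optimally re-pairing each execution
    variable's observed values across trials (dispersion preserved)."""
    rng = np.random.default_rng(seed)
    baseline = mean_result(data, result_fn)
    best = baseline
    for _ in range(n_perm):
        shuffled = np.column_stack([rng.permutation(col) for col in data.T])
        best = min(best, mean_result(shuffled, result_fn))
    return baseline - best
```

Because each cost is a difference between the observed mean result and the result after one transformation, all three come out in the units of the result measure.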
Saturday, 21 October 2017
What Limits the Accuracy of Human Throwing?
Throwing a projectile to hit a target requires you to produce one of the set of release parameters that result in a hit: release angle, release velocity (speed and direction) and release height (relative to the target). My paper last year on the affordances of targets quantified these sets using a task dynamical analysis.
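Here is a minimal sketch of that hit condition, using a toy drag-free setup of my own rather than the task dynamics from the paper: given release height, speed and angle, compute where the projectile crosses the target plane and check whether it arrives within the target's extent.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def height_at_target(release_height, speed, angle, target_distance):
    """Projectile height when it reaches the target plane (no drag)."""
    vx = speed * np.cos(angle)
    vy = speed * np.sin(angle)
    t = target_distance / vx  # time to reach the target plane
    return release_height + vy * t - 0.5 * G * t**2

def hits(release_height, speed, angle, target_distance,
         target_height, target_radius):
    """True if the projectile arrives within the target's extent."""
    y = height_at_target(release_height, speed, angle, target_distance)
    return abs(y - target_height) <= target_radius

# One (height, speed, angle) combination out of the many in the hit set:
print(hits(1.8, 9.0, np.radians(15), 5.0, 1.5, 0.15))  # -> True
```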
There is one additional constraint: these release parameters have to occur during a very short launch window. This window is the part of the hand's trajectory during which the ball must be released in order to intercept the target. It is very easy to release slightly too late (for example) and drill the projectile into the ground.
How large is this launch window? It is surprisingly, terrifyingly small; Calvin (1983) and Chowdhary & Challis (1999) have suggested it is on the order of 1 ms. Those papers used sensitivity analyses on simulated trajectories to show that accuracy is extremely sensitive to timing errors and that this millisecond-level precision is required to produce an accurate throw.
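You can get a feel for this with a back-of-the-envelope simulation; this is my own toy arm, not Calvin's or Chowdhary & Challis's models. The hand travels a circular arc at a fixed angular velocity, the ball leaves tangentially at the chosen release time, and we ask how far the throw misses as the release slips by a few milliseconds.

```python
import numpy as np

G = 9.81
R = 0.5           # arm radius (m)
OMEGA = 20.0      # angular velocity (rad/s) -> hand speed 10 m/s
SHOULDER_H = 1.4  # shoulder height (m)
TARGET = np.array([5.0, 1.5])  # target distance and height (m)

def miss_distance(t_release):
    """Vertical miss at the target plane for a given release time."""
    phi = OMEGA * t_release  # hand angle at release (0 = bottom of arc)
    pos = np.array([R * np.sin(phi), SHOULDER_H - R * np.cos(phi)])
    vel = OMEGA * R * np.array([np.cos(phi), np.sin(phi)])  # tangential
    t_flight = (TARGET[0] - pos[0]) / vel[0]
    y_at_target = pos[1] + vel[1] * t_flight - 0.5 * G * t_flight**2
    return y_at_target - TARGET[1]

# Find the release time that hits, then slip it by a few milliseconds.
ts = np.linspace(0.010, 0.040, 3001)
t_hit = ts[np.argmin(np.abs([miss_distance(t) for t in ts]))]
for dt in (0.0, 0.001, 0.002, 0.005):
    print(f"{dt * 1000:4.1f} ms late -> miss by {miss_distance(t_hit + dt):+.2f} m")
```

With these made-up numbers the throw drifts off target by roughly 10 cm per millisecond of release error, so a target only a few centimetres across really does demand millisecond-scale timing.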
Smeets, Frens & Brenner (2002) tested this hypothesis with dart throwing. If this intense pressure on timing the launch window determines accuracy, then throwers should organise their throws so as to make the launch window as tolerant of timing errors as possible. They replicated the sensitivity analyses on human data to see whether people try to give themselves the maximum error tolerance in the launch, or whether they instead accommodate errors in other variables.
What they found is that launch window timing is not the limiting factor. Their throwers (who were not especially expert) did not throw so as to minimise the sensitivity of the launch window to timing errors. Quite the contrary: they lived in a fairly sensitive region of the space, and then simply didn't make timing errors. They did throw so as to reduce sensitivity to speed errors, however, and targeting errors came from errors in the spatial path of the hand that the system did not adequately compensate for, rather than from the timing of the release. (The authors saw some evidence that the position, speed and direction of the hand trajectory were organised into a synergy, which aligns nicely with the motor abundance hypothesis.)
I would like to replicate and extend this analysis using more detailed simulations and data from better throwers; I've become convinced it's a very useful way to think about what is happening during a throw. I also think these results point to some interesting things about throwing. Specifically, while timing and speed must both be produced with great accuracy, the system has developed two distinct solutions for coping with errors. Timing errors are reduced by evolving neural systems that can reliably produce the required precision. Speed errors have been left to an online perception-action control process which adapts the throw to suit local demands. The latter is the more robust solution; so why was timing solved with brain power?
Labels: affordances, launch window, motor abundance hypothesis, motor control, Smeets, throwing, timing, UCM
Thursday, 13 October 2016
Optimal Feedback Control and Its Relation to Uncontrolled Manifold Analysis
Motor control theories must propose solutions to the degrees of freedom problem: the movement system has more ways to move than any given task ever requires. This creates a problem for action selection (which of the many ways to do something do you choose?) and a problem for action control (how do you create stable, repeatable movements with such a high-dimensional system?).
Different theories have different hypotheses about what the system explicitly controls or works to achieve, and what is left to emerge (i.e. to happen reliably without being explicitly specified in the control architecture). These hypotheses are typically about controlling trajectory features such as jerk: are you working to make movements smooth, or does smoothness pop out as a side effect of controlling something else? This approach solves the degrees of freedom control problem by simply requiring the system to implement a specific trajectory that satisfies some constraint on the feature being controlled (e.g. by minimising jerk; Flash & Hogan, 1985). These models internally replace the many solutions afforded by the environment with a single desired trajectory.
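To make the 'desired trajectory' idea concrete: for point-to-point movements that start and end at rest, the minimum-jerk trajectory has a well-known closed form (Flash & Hogan, 1985), which this sketch simply evaluates.

```python
import numpy as np

def min_jerk(x0, xf, duration, t):
    """Position at time t on the minimum-jerk path from x0 to xf,
    for a movement starting and ending at rest."""
    tau = np.clip(t / duration, 0.0, 1.0)  # normalised time in [0, 1]
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# A 30 cm reach over 0.5 s, sampled at six time points; the resulting
# positions trace the smooth, bell-shaped velocity profile.
t = np.linspace(0, 0.5, 6)
print(min_jerk(0.0, 0.3, 0.5, t))
```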
Todorov and Jordan (2002a, 2002b; thanks to Andrew Pruszynski for the tip!) propose that the system is optimising not performance but the control architecture. This is a cool way to frame the problem, and it leads them to an analysis that is very similar in spirit to uncontrolled manifold analysis (UCM) and to the framework of motor abundance. In these papers, they apply the mathematics of stochastic optimal feedback control and propose that working to produce optimal control strategies is a general principle of motor control from which many common phenomena naturally emerge. They contrast this account (both theoretically and in simulations) with the more typical 'trajectory planning' models.
The reason this ends up in UCM territory is that, wherever possible, the optimal control strategy for solving motor coordination problems turns out to be a feedback control system in which control is deployed only as required. Specifically, you only work to control task-relevant variability, i.e. noise that is dragging you away from performing the task successfully. The net result is the UCM pattern: task-relevant variability (V-ORT) is clamped down by a feedback control process and task-irrelevant variability (V-UCM) is left alone. The solution to the degrees of freedom control problem is simply to deploy control strategically with respect to the task; no degrees of freedom must be 'frozen out', and variability can be recruited at any point in the process if it suddenly becomes useful - you can be flexible.
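Here is a toy simulation of that 'minimal intervention' idea; it is my own construction, not Todorov & Jordan's actual stochastic optimal control model. Two noisy effectors must keep their sum on target, feedback corrects only the sum, and the difference is left to drift. The UCM signature - much more variability in the task-irrelevant direction than the task-relevant one - falls straight out.

```python
import numpy as np

rng = np.random.default_rng(0)
target, gain, n_steps = 10.0, 0.5, 5000
x = np.array([target / 2, target / 2])  # two effectors; x1 + x2 is the task
history = np.zeros((n_steps, 2))

for i in range(n_steps):
    x = x + rng.normal(0, 0.1, 2)  # noise perturbs both effectors
    error = x.sum() - target       # only the sum matters to the task
    x = x - gain * error / 2       # correct the sum, ignore the difference
    history[i] = x

# Variability along the task-relevant and task-irrelevant directions.
dev = history - history.mean(axis=0)
v_ort = np.var(dev @ np.array([1.0, 1.0]) / np.sqrt(2))   # task-relevant
v_ucm = np.var(dev @ np.array([1.0, -1.0]) / np.sqrt(2))  # task-irrelevant
print(f"V_ORT = {v_ort:.3f}, V_UCM = {v_ucm:.1f}")  # V_UCM >> V_ORT
```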
What follows is me working through this paper and trying to figure out how exactly this relates to the conceptually similar UCM. If anyone knows the maths of these methods and can help with this, I would appreciate it!