Complex Systems in Sport - Keith Davids
Complex systems in nature are those with many interacting parts, all capable
of influencing global system outcomes. There is a growing body of research
that has modelled sport performance from a complexity sciences perspective,
studying the behaviour of individual athletes and sports teams as emergent
phenomena which self-organize under interacting constraints.
This book is the first to bring together experts studying complex systems in
the context of sport from across the world to collate core theoretical ideas,
current methodologies and existing data into one comprehensive resource. It
offers new methods of analysis for investigating representative complex sport
movements and actions at an individual and team level, exploring the
application of methodologies from the complexity sciences in the context of
sports performance and the organization of sport practice.
Complex Systems in Sport is important reading for any advanced student
or researcher working in sport and exercise science, sports coaching,
kinesiology or human movement.
Keith Davids is Professor of Motor Control at the Centre for Sports
Engineering Research, Sheffield Hallam University, UK.
Robert Hristovski is Professor in the Faculty of Physical Education at the
University of Ss. Cyril and Methodius, Republic of Macedonia.
Duarte Araújo is Associate Professor in the Faculty of Human Kinetics at
University of Lisbon, Portugal.
Natàlia Balagué Serre is Professor of Exercise Physiology in the INEFC at
the University of Barcelona, Spain.
Chris Button is Associate Professor of Motor Learning at the School of
Physical Education, Sport and Exercise Sciences, University of Otago,
Dunedin, New Zealand.
Pedro Passos is Assistant Professor of Motor Control in the Faculty of
Human Kinetics at the University of Lisbon, Portugal.
Routledge Research in Sport and Exercise Science
The Routledge Research in Sport and Exercise Science series is a showcase
for cutting-edge research from across sport and exercise sciences, including
physiology, psychology, biomechanics, motor control, physical activity and
health, and every core subdiscipline. Featuring the work of established and
emerging scientists and practitioners from around the world, and covering the
theoretical, investigative and applied dimensions of sport and exercise, this
series is an important channel for new and groundbreaking research in the
human movement sciences.
Also available in this series:
1. Mental Toughness in Sport
Developments in Theory and Research
Daniel Gucciardi and Sandy Gordon
2. Paediatric Biomechanics and Motor Control
Theory and Application
Mark De Ste Croix and Thomas Korff
3. Attachment in Sport, Exercise and Wellness
Sam Carr
4. Psychoneuroendocrinology of Sport and Exercise
Foundations, Markers, Trends
Felix Ehrlenspiel and Katharina Strahler
5. Mixed Methods Research in the Movement Sciences
Case Studies in Sport, Physical Education and Dance
Oleguer Camerino, Marta Castaner and Teresa M. Anguera
6. Complexity and Control in Team Sports
Dialectics in Contesting Human Systems
Felix Lebed and Michael Bar-Eli
7. Complex Systems in Sport
Edited by Keith Davids, Robert Hristovski, Duarte Araújo,
Natàlia Balagué Serre, Chris Button and Pedro Passos
Complex Systems in Sport
Edited by
Keith Davids, Robert Hristovski, Duarte Araújo,
Natàlia Balagué Serre, Chris Button and Pedro
Passos
First published 2014
by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
711 Third Avenue, New York, NY 10017
© 2014 Keith Davids, Robert Hristovski, Duarte Araújo, Natàlia Balagué Serre, Chris Button and
Pedro Passos
The right of the editors to be identified as the authors of the editorial material, and of the authors for
their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright,
Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by
any electronic, mechanical, or other means, now known or hereafter invented, including photocopying
and recording, or in any information storage or retrieval system, without permission in writing from the
publishers.
Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.
PART 1
Theoretical bases for understanding complex systems in sport
PART 2
Methodologies and techniques for data analyses in investigating complex
systems in sport
PART 3
Complexity sciences and sport performance
PART 4
Complexity sciences and training for sport
Index
Illustrations
1.1 Top: a stable linear system with one attractor and one basin of
attraction (see converging arrows). Bottom: unstable linear system with
one repeller and diverging flows toward infinity. There are no
alternative stable states (attractors) for the behavioural variable Y
1.2 Nonlinear systems can possess repeller(s), but also two or more stable
states of the behavioural variable Y and associated basins of attraction
shown by converging arrows toward the attractors
1.3 Left panel: the increase in the inter-strike time interval as the control
parameter D increases (right-to-left direction). The duration of the whole
cycle of leaning forward to strike – upright posture restoration – leaning
forward to strike increases, showing an increase in the relaxation time
toward the upright parallel-foot stance. For D0 − D < 0.05, the
relaxation time towards the parallel stance tends to become infinite; that is,
no relaxation toward the parallel stance exists anymore. The parallel
stance loses its stability and the system transits to a more stable diagonal
stance; see right panel
3.1 The interpretation of coupling angles; arrows represent the direction of
a vector connecting data points on an angle-angle diagram
4.1 Feedback (A) and feed-forward (B) control of voluntary movement. A
sign of correction is created in both mechanisms to change the action
of the muscles according to the difference between the desired and
achieved states
4.2 Quasi-static arm-curl exercise holding an Olympic bar with an elbow
flexion of 90° at 80% of the one-repetition maximum until the
fatigue-induced spontaneous termination point
4.3 A) Time series of the elbow-angle data for participant 1; B) A typical
difference in the power spectral density values for the online
fluctuations of the elbow angle in the first (a) and the last (b) third of
the quasi-static exercise. The difference of the elbow angle variability
between the first and the third phase spans over sub-second and
seconds time scale. This signifies a correlated instability of the system
under fatigue
4.4 Differences in means for the spectral degrees of freedom; horizontal
axis: the first and third part of exercise; vertical axis: spectral degrees
of freedom
4.5 Volition-state dynamics in six participants during the quasi-static
elbow angle exercise
4.6 Cycle ergometer exercise
4.7 (Upper panel): standardized fluctuations from the data on revolutions
per minute; (middle panel): power spectrum for the first half of the
exercise with its slope of −1.5 showing anti-persistent fractional
Brownian motion (fBm); (lower panel): power spectrum for the second
half of the exercise with a slope of −2.2 showing persistent fBm
4.8 A: Treadmill exercise with a participant reporting through signs; B:
sample of 11 individual time series of task-unrelated–task-related
thought dynamics. Starting with the task-unrelated thoughts (TUT)
state, one can observe the switches between TUT and task-related
thoughts (TRT). Eventually, the TRT state becomes the one that
precedes the exhaustion point. Numbers on the left signify participants
5.1 Simulated time series to illustrate the distinction between measures of
central tendency and measures sensitive to the sequential properties of
the time series. Panel A depicts random uncorrelated noise, while Panel
B shows ‘pink’ or 1/f noise where each successive observation is
positively correlated with the others
5.2 The phase space plot of the Lorenz attractor; we used the parameters σ
= 10, ρ = 28 and β = 8/3 and initial conditions of x = 1, y = 1, z = 1
5.3 Panel A shows the simulated time series used in the tutorial calculation
of SampEn; panel B shows the same time series with added patterns of
[1, 2, 3] values (shown in squares) to increase the regularity of the time
series
5.4 Illustration of the digitization (quantization) effect on the SampEn
calculation
6.1 The calculation of the coordinative variable
6.2 Angle between geometrical centre of each team and the goal position
6.3 Exemplar data of a network in water polo
7.1 Exemplar data of the distance from a sailing boat to a fixed starting
position before the start of the regatta; the length of this time series is
579 data points
7.2 Simulated time series of length n = 200 of a stationary process (top),
non-stationary process with non-constant mean (middle) and a
non-stationary process with non-constant variance (bottom)
7.3 Simulated time series of length n = 200 of a white noise (top) and
respective sample autocorrelation function at lags k = 0, …, 50
(bottom) with critical bounds (dashed lines)
7.4 Exemplar data of length n = 579 of the distance from a sailing boat to a
fixed starting position before the regatta starting (top) and respective
sample autocorrelation function at lags k = 0, …, 350 (bottom) with
critical bounds (dashed lines)
7.5 Simulated time series of length n = 200 of a white noise (top) and
respective normalized periodogram at frequencies λr = 2πr with r = 0,
…, 0.5 (bottom)
7.6 Exemplar data of length n = 579 of the distance from a sailing boat to a
fixed starting position before the regatta starting (top) and respective
normalized periodogram at frequencies λr = 2πr with r = 0, …, 0.5
(bottom)
7.7 Simulated time series of length n = 200 of a short-range correlation
process (top), its sample autocorrelation function at lags k = 0, …, 50
(middle) and normalized periodogram at frequencies λr = 2πr with r =
0, …, 0.5 (bottom)
7.8 Simulated times series of length n = 400 of a long-range correlation
process (top), its sample autocorrelation function at lags k = 0, …, 50
(middle) and normalized periodogram at frequencies λr = 2πr with r =
0, …, 0.5 (bottom)
7.9 Example of a set of points in a plane (A) and corresponding Voronoi
diagram (B)
7.10 Spatial configurations: (A) attacker (grey areas) and defender (white
areas) teams and; (B) attacker player breaking defence organization;
the arrow indicates the direction of the attack
7.11 Example (one play) of the mean Voronoi area (VA) across time for
each team; error bars represent the standard deviation
7.12 Comparison of the mean entropy of Voronoi area (VA) between teams
in the same trial; error bars represent the standard deviation
7.13 Construction of the superimposed Voronoi diagram (bottom) from
considering, separately, the Voronoi diagrams for team A (black dots)
and team B (white dots)
7.14 Construction of the superimposed Voronoi diagram for (A) exclusively
paired opponents and (B) randomly located individuals
7.15 Measures from the superimposed Voronoi diagram: (A) maximum
percentage of overlapped area for each individual of the group marked
with black dots; and (B) percentage of free area (in black)
7.16 Example (one play) of the observed percentage of free area (%FA)
across time (solid line) and the 95% confidence interval for spatial
random distribution (dashed lines)
8.1 Clustering dendrogram resulting from an average linkage algorithm;
horizontal lines indicate the level of the rescaled distance at which the
respective movements are grouped into one cluster
8.2 Linear versus nonlinear separation
8.3 Variations of data processing by means of self-organizing maps
9.1 Schematic representation of single-camera video motion capture
9.2 The TACTO 8.0 device window; manual tracking of a selected
working point with a computer mouse allows virtual coordinates of the
tracked player/object to be obtained
9.3 The direct linear transformation (2D-DLT) method for camera
calibration and bi-dimensional reconstruction
9.4 Converted pitch coordinates (metres) allow the reproduction of
movement displacement trajectories of players in the space of action
9.5 The two behavioural states of the order parameter; top panel shows the
distance of the single centroid to the defensive boundary line of all the
analyzed trials, synchronized by the assistance pass instant; bottom
panel shows the order parameter with its own mean subtracted to
highlight the qualitative changes associated with perturbations of the
initial stability of teams
9.6 Identification of the nonlinear qualitative changes between the two
behavioural states; top panel displays an exemplar play with increased
slope in the transition between the two system states (down-sampled to
1 Hz); bottom panel shows the average and standard deviation bands of
the first derivative of the order parameter in all plays, highlighting with
an ellipse the high rate of change of the order parameter; the slope or
high rate of change indicates the order−order transition
9.7 Moving average and standard deviation values of the inter-centroid
distances (top panel) and relative stretch index (bottom panel); vertical
dashed lines highlight the instant of the instabilities corresponding to
the zero crossing of the order parameter previously identified in Figure
9.5
10.1 Illustrative immersive interactive virtual reality apparatus in the
Movement Innovation Laboratory at Queen's University, Belfast. It
shows a participant wearing a pair of gloves with attached hand
trackers, a head-mounted display with attached head tracker and a
back-pack housing the control unit
10.2 A schematic representation of what the ball-carrying participant could
see in front of him/her (i.e. the defensive line) in a virtual environment
simulated three-versus-three rugby task
11.1 (left) Self-organizing map (SOM): neurons are grouped into clusters
which form the output; (right) feed-forward network (FFN): the
comparison of expected output Oexp and computed output Ocomp is fed
back and so changes the neuron connections to minimize the
difference between expected and computed output
11.2 Network with a process trajectory and the corresponding type profile
11.3 (left): TeSSy input interface with video control (bottom), animation
interface (left), attribute selection and input panel (top); (right):
examples of a stroke frequency matrix and a stroke success matrix
representing tactical concepts and technical skills, respectively
11.4 Squash court with game process BR–FR–BL (left); trained net
representing the most frequent 4-position-sequences (right); BL =
backhand left side, BR = backhand right side, FR = forehand right side
11.5 Tactical patterns of players depending on their opponents; A, B and D
are the players. The squares represent the corresponding networks,
where the circles represent the players' tactical concepts. Each circle
corresponds to a stroke sequence, the frequency of which is encoded by
the diameter of the circle
11.6 The network was trained with game processes from volleyball,
presenting the most important sequences as circles, whose diameters
represent the frequencies of the corresponding sequences, three of them
being explained in more detail
11.7 (left): Trained net with a trajectory representing a sequence of player
formations starting at the ‘O' and ending at the ‘X'; (right): scheme of a
configuration prototype of a team formation, which could, for example,
correspond to the marked cluster
11.8 Trajectories showing the preparation of the defence against the
opponent's service
11.9 A two-dimensional replication of a match situation by means of
position data
11.10 Net-based recognition of formation types and the recombination with
position and time information
11.11 Example of the user interface of a tool for the combined quantitative
and qualitative analysis of formations in football
11.12 Trained neural network with grey shaded areas that illustrate different
quality levels (top left) and a representation of the trajectories of
hockey training. The learning process begins in the dark grey square
and ends in the light grey square; the colours of the neurons correspond
to those in the large net graphic (top left)
12.1 Illustration of football as a complex dynamical system
12.2 Phase space for two players in a tennis rally; Serena Williams (left)
Justine Henin (right); ▵ , □ = strokes of Williams; ▪, ♦ = strokes of
Henin; going for the ball to strike and returning to a neutral position
results in cyclical structures in a speed/position phase space
12.3 Longitudinal team centres, differences and relative phase for Italy and
France during the first half of the 2006 World Cup final game
12.4 Separating a constellation of players on the playground into its
formation and position
12.5 A trajectory of formations on the net and its reduction to a formation
type trajectory
12.6 Frequencies of formations and their correlations
12.7 Distribution of a typical pair of formations between minutes 21 and 30
12.8 Example of a typical tactical pattern produced between the two teams
12.9 Prototype of the tactical pattern from Figure 12.5, together with
success values
13.1 The constraint of goal location on coordination processes in dyadic
systems presented in decomposed format: (left column) distances of
each player to the centre of the goal; (right column) angles of each
player to the centre of the goal; (a) and (b) exemplar data from attacker
five [A5] and nearest defender [Def]; (A) and (B) exemplar data from
attacker five [A5] and nearest defender [D]; (C) and (D) dynamics of
the relative phase of the exemplar data from A5 and nearest D; (e) and
(f) frequency histograms of the relative phases of all A–D dyadic
systems (n = 52)
13.2 Mean values and standard error of the required velocity for intercepting
the ball of (A) defender and (B) goalkeeper in shots that ended in a
defender's interception, in a goalkeeper's save and in a goal. The
represented levels of statistical significance are P < 0.05 (*), P < 0.01
(**) and P < 0.001 (***). Note that the required velocity of the
goalkeeper was not measured when the defender intercepted the ball,
since it is impossible to compute the goalkeeper's interception point
15.1 Schematic (i.e. one-dimensional) presentation of the corrugated
hierarchically soft-assembled potential landscape with two confining
barriers on both sides
15.2 (Top) snapshots of the four metastable states of dancers for the first 35
seconds of improvisation; (middle): overlaps q of the four metastable
body configurations with principal components (axis Y) and pathway of
their dynamics. Overlap values are given in the legend on the right;
(bottom): reconfigurations are given as nonzero Hamming distances of
the dancer's action system; after some reconfigurations take place, the
action system relaxes and dwells in a state where no further
reconfigurations occur for some time; i.e. zero reconfiguration; this
process represents the metastable attractor configurations
(movement/posture pattern)
15.3 The profile of the average dynamic overlap qd(t) for different time lags;
its dynamics proceed on three timescales (from seconds to several
minutes) and do not converge to zero during the observation time
scale
16.1 Continuous relative phase (CRP) between elbow and knee through a
complete cycle for 24 beginners (left panel) and for 24 expert
swimmers (right panel), showing lower inter-individual variability for
experts
16.2 Angle between horizontal, left limb and right limb (left panel); modes
of limb coordination as regards the angle value between horizontal, left
limb and right limb (right panel). The angle between the horizontal line
and the left and right limbs was positive when the right limb was above
the left limb and negative when the right limb was below the left limb
16.3 Trajectory plots from O'Donovan et al. (2011)
17.1 Waddington's (1957) schematic of the epigenetic landscape
17.2 The landscape dynamics of the basic equation of the HKB (Haken-
Kelso-Bunz) model, with the same parameters as in Kelso (1995,
Figure 2.7)
17.3 (a) Landscape of learning the 90-degree phase task of the HKB model;
at the beginning of practice (C = 0.4) only temporary stabilization of
the target phase x0 = 0.25 can be achieved when starting from special
initial conditions close to C; (b) right at the transition (C = 0.425) the
target phase x0 = 0.25 shows one-sided stability: initial conditions
close to C will be attracted to the new attractor. Note that, in this
situation, the system is very sensitive to noise perturbations; (c) after
sufficient practice (C = 0.525), all initial conditions close to the target
attractor x0 = 0.25 will converge to the fixed point
17.4 A two-timescale landscape model associated with Snoddy's (1926)
score data (black dots) as elevation levels. The four clusters correspond
to the four practice sessions. The x behavioural variable corresponds to
the slow timescale (shallow dimension), whereas the y variable
corresponds to the fast timescale (steep dimension)
17.5 Initial and final performance of the mirror tracing task; (A) breakdown
of one-dimensional performance score measure into movement time
and spatial error (movement time) components. Filled and open
symbols indicate outcome score on trial 1 and trial 50, respectively.
Triangle, circle and square symbols reflect movement time (MT),
mixed (MIXED) and spatial error (SE) group assignment (see text for
detailed explanation); (B) performance score on trial 1 and trial 50 as a
function of group; error bars indicate standard error
18.1 (left) Angles identified for the horizontal planes of the left and right
limbs in the upper and lower body of ice climbers; (right) modes of
limb coordination as regards the angle value between horizontal, left
limb and right limb
19.1 The role of Brunswik's lens model in understanding informational
variables for complex systems in sport – analysis of a tennis serve
19.2 Principles for the assessment of representative learning design
Tables
3.1 Categories of coordination and their associated coupling angle (ã)
ranges
4.1 Summary of the four experiments performed until the fatigue-induced
spontaneous termination point
11.1 Summary of the results of all trajectories of all three groups (hockey,
soccer, control)
Contributors
Alexandar Aceski is currently Assistant Professor at the Faculty of Physical Education, University of
Ss. Cyril and Methodius, Skopje, Republic of Macedonia. He has research interests in methods of
qualitative analysis and modelling used in biomechanics, particularly as applied to transfer in motor
learning.
Daniel Aragonés is a PhD student at the National Institute of Physical Education of Catalonia,
University of Barcelona, Spain. His research is currently focused on the psychobiological integration
of exercise-induced fatigue.
Duarte Araújo is Associate Professor at the Faculty of Human Kinetics of the University of Lisbon,
Portugal. He is the Director of the Laboratory of Expertise in Sport. He is the President of the
Portuguese Society of Sport Psychology and a member of the National Council of Sports. His
research on ecological dynamical approaches to expertise and decision making in sports has been
funded by the Fundação para a Ciência e Tecnologia.
Natàlia Balagué Serre is Professor of Exercise Physiology at INEFC University of Barcelona, Spain.
Her field of research is complex systems in sport, with special focus on dynamic integrative
approaches to exercise-induced fatigue and the nonlinear psychobiological integration during
exercise. In 2003, she organized the first Complex Systems in Sport Congress, which was held in
Barcelona. Co-author of Complejidad y Deporte and papers relating to the effects of exercise-
induced fatigue on attention focus, perceived exertion and exercise termination.
Scott Bonnette received a BA in psychology from Wheeling Jesuit University, USA, in 2009 and a
MA in experimental psychology from the University of Cincinnati, USA, in 2013. He is currently a
graduate student at the University of Cincinnati. He has been involved with research and publications
concerning the nonlinear analysis of postural control and with research on how exploratory
movements facilitate perception.
Eric Brymer is a psychologist and Senior Lecturer in the Faculty of Health at Queensland University
of Technology, Australia. His research focuses on investigating nature-based activities, adventure
and extreme sports. Eric is interested in the broad psychological understanding of the experience, the
development of skill and how such activities enhance positive health and wellbeing. He is
particularly interested in the role of the physical environment and how to design and facilitate nature-
based experiences so that positive outcomes for the environment and people are optimised.
Chris Button is Associate Professor of Motor Learning at the University of Otago, New Zealand. His
research interests include the ecological dynamics approach to motor learning and human behaviours
in relation to water safety.
Jia Yi Chow is Assistant Professor at the Physical Education and Sports Science Department, National
Institute of Education, Nanyang Technological University, Singapore. His area of specialization is in
motor control and learning. Jia Yi's key research work includes nonlinear pedagogy, investigation of
multiarticular coordination changes, analysis of team dynamics from an ecological psychology
perspective and examining visual–perceptual skills in sports expertise.
Vanda Correia is Assistant Professor at the University of Algarve in Faro, Portugal. She did her PhD
in Sport Sciences at the Faculty of Human Kinetics of the University of Lisbon, Portugal.
Specializing in decision making in team sports, Vanda is particularly concerned with understanding
how the dynamics of players' interactions express adaptive behaviours to performance constraints
and are coupled with key information sources. She has been conducting research both in the field and
in virtual reality settings.
Cathy Craig is Head of the School of Psychology of Queen's University Belfast, Northern Ireland, and
Director of the Movement Innovation Laboratory of that institution. She completed her PhD at the
University of Edinburgh under the supervision of Professor Dave Lee in the Perception in Action
Laboratories.
Keith Davids is Professor of Motor Learning at the Centre for Sports Engineering Research at
Sheffield Hallam University in the UK. He currently holds additional appointments at the University
of Jyväskylä in Finland (FiDiPro) and at the Queensland University of Technology in Australia. He
is a graduate of the University of London and gained a PhD at the University of Leeds in 1986.
Between 1993 and 2001, he led the Motor Control group at the Department of Exercise and Sport
Science at Manchester Metropolitan University, UK. In 2002 he moved to the University of Otago in
New Zealand before taking up an appointment at the Queensland University of Technology in
Australia. Currently he supervises doctoral students from Portugal, UK, Australia and New Zealand.
His major research interest involves the study of movement coordination and skill acquisition in
sport. He is particularly focused on understanding how to design representative learning and
performance evaluation environments in sport.
Ana Diniz is a teacher and researcher at the Department of Mathematics of the Faculty of Human
Kinetics, University of Lisbon, Portugal. Her investigation includes mathematical methods and
models related to motor control and interpersonal coordination processes.
Ricardo Duarte is Lecturer in Training Methods of Soccer at Faculdade de Motricidade Humana,
Portugal. His academic career involves mentoring young football coaches, developing research on
tactical behaviours in soccer with applications to training and performance analysis and presenting
his perspective in applied soccer training courses, congresses and seminars around the world.
Orlando Fernandes is Assistant Professor at the Department of Sport and Health of the University of
Évora, Portugal. He is an expert in biomechanics of human movement, with particular interest in
signal processing and nonlinear analysis of time-series data. His expertise has been also transferred
to the preparation of some elite athletes in track and field sports.
Hugo Folgado is Lecturer at the Department of Sport and Health, University of Évora, Portugal, and
collaborator at the Research Center for Sport Sciences, Health and Human Development, Portugal.
He is currently working on his PhD studies about football players' movement synchronization. His
research interests are performance analysis and expertise in team sports.
Sofia Fonseca has a degree in statistics from the Faculty of Sciences of Lisbon (1999), a PhD in
statistics from the University of Aberdeen (2004) and a PhD in sports science from the University of
Lisbon (2012). Sofia has been Assistant Professor at the Faculty of Physical Education and Sports,
Lusofona University, Lisbon, Portugal, since 2006. Her research interests are team sports behavior,
particularly modelling players' and teams' spatial organization.
Paul Glazier is a Research Fellow at the Institute of Sport, Exercise and Active Living, Victoria
University, Melbourne, Australia. He has expertise in sports biomechanics, motor control, skill
acquisition and performance analysis of sport. He has authored or co-authored over 40 peer-reviewed
journal articles, invited book chapters and published conference papers in these areas. Paul also has a
wealth of practical experience, having provided sports biomechanics and performance analysis
services to a wide range of athletes and teams, from regional juniors to Olympic and World
Champions, in a variety of sports.
Jonathon Headrick is a PhD scholar at the School of Exercise and Nutrition Sciences, Queensland
University of Technology, Australia. His research interests include the application of an ecological
dynamics approach for studying the role of emotion in learning and skill acquisition in sport.
Robert Hristovski is currently Professor at the Faculty of Physical Education, University of Ss. Cyril
and Methodius, Skopje, Republic of Macedonia. He obtained his MSc degree in 1994 and a PhD
degree in 1997 at the Faculty of Physical Education. In 2001 he had a five-month research visit at the
Institute of Nonlinear Science at the University of California, San Diego. He has research interests in
methods of analysis and modelling in nonlinear dynamics, particularly as applied to human action
selection and adaptation during training. He is also an invited lecturer on masters and doctoral
courses at several European universities.
J. A. Scott Kelso is a neuroscientist and Glenwood and Martha Creech Chair in Science, Professor of
Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, Florida, USA, and
Professor of Psychology, Biological Sciences and Biomedical Science at the University of Ulster
(Magee Campus), Derry, Northern Ireland. He has worked on coordination dynamics, the science of
coordination and on fundamental mechanisms underlying voluntary movements and their relation to
the large-scale coordination dynamics of the human brain. His experimental research in the late
1970s and early 1980s led to the HKB model (Haken–Kelso–Bunz), a mathematical formulation that
quantitatively describes and predicts how elementary forms of coordinated behaviour arise and
change adaptively as a result of nonlinear interactions among components.
Nikita Kuznetsov received a BA in psychology from California State University, Northridge, in 2008,
and a PhD in experimental psychology from the University of Cincinnati in 2013. He is currently a
post-doctoral associate at Northeastern University, USA. His research focuses on perception–action
from the complex/dynamical systems perspective and the application of nonlinear methods in motor
control.
Peter Lamb was born in Canada and obtained a PhD in biomechanics from the University of Otago,
New Zealand, in 2010. Currently, Peter is Associate Researcher at the Technische Universität
München (TUM), Germany. As head of research and diagnostics at the TUM Golf Laboratory, Peter
works with national-level golf teams and their coaches, as well as private golfers. Part of his work
includes applying the constraints-led perspective to coordination of the golf swing in search of
critical boundaries of stability. The implications are both for theoretical aspects of human movement,
as well as golf-specific applications to training and on-course strategies. Peter is an avid golfer, skier,
cyclist and ice-hockey player.
Martin Lames is full Professor at the Faculty of Sports and Health Science at the Technical University
of Munich, Germany. His main research interests are modelling of sports performances, talent
research and top-level sports, with special focus on information technology support. He has served as
President of the International Association of Computer Science in Sport (IACSS) since 2013.
Yeou-Teh Liu was born and raised in Taipei. She received her PhD in kinesiology from the University
of Illinois at Urbana-Champaign, USA, and is currently Professor in the Department of Athletic
Performance, National Taiwan Normal University. Her research focuses on the dynamics of motor
skill acquisition, movement adaptation and motor control. Her other research interest is in the
performance analysis of competitive sports, including both team and individual sport events.
Tim McGarry is Associate Professor in the Faculty of Kinesiology, University of New Brunswick,
Canada. He has published many journal articles and book chapters on various aspects of movement
control and sports performance. He serves as an advisory editorial board member on the Journal of
Sports Sciences and the International Journal of Performance Analysis in Sport and is coeditor of the
2013 Routledge Handbook of Sports Performance Analysis.
Daniel Memmert is Professor and Head of the Institute of Cognitive and Team/Racket Sport Research,
German Sport University of Cologne. His research interests are cognitive science, human movement
science, computer science and sport psychology. He has 15 years of teaching and coaching
experience, has published more than 100 publications, 20 books or book chapters and is a recognised
figure through his many keynote presentations on football. He is a reviewer for several international journals and
has transferred his expertise to business and to several professional soccer clubs within the
Bundesliga.
Karl M. Newell PhD is the Marie Underhill Noll Chair of Human Performance and Professor,
Department of Kinesiology, Pennsylvania State University, State College, USA. His research focuses
on the coordination, control and skill of normal and abnormal human movement across the life span;
developmental disabilities and motor skills; and the influence of drug and exercise on movement
control. One of the specific themes of his research is the study of variability in human movement and
posture, with specific reference to the onset of aging and Parkinson's disease. His other major
research theme is processes of change in motor learning and development, which is the focus of his
chapter contribution to this book.
David O'Donovan is a postgraduate student who currently works for High Performance New Zealand
as a knowledge editor. He completed his Masters in 2011 in which he examined the throwing
kinematics of elite athletes with cerebral palsy. He has worked as a coach and sport science provider
for Boccia New Zealand and he has supported athletes participating in World Championships,
Commonwealth and Paralympic Games.
Dominic Orth is a PhD student at the University of Rouen, France, and Queensland University of
Technology (QUT), Australia. He completed his undergraduate and Masters degree by research at
the School of Exercise and Nutrition Science at QUT. His research programme examines the role of
adaptive movement variability in skilled climbers.
Pedro Passos is Assistant Professor at the Faculty of Human Kinetics, University of Lisbon, Portugal.
He gained his PhD in sport sciences in 2008. His research involves the study of the dynamics of
interpersonal coordination in team sports, which led him to produce several papers accepted for
publication in scientific journals, book chapters, as well as communications in scientific meetings.
He currently maintains his research work regarding interpersonal coordination in social systems such as
team sports, extending the paradigm of analysis to cooperative tasks in video games, searching for
new methods of analysis and extending his collaboration with researchers in Portugal, across Europe,
Singapore, Australia and New Zealand. He supervises masters' and doctoral students from Portugal.
Parallel with his research activity, he is also technical coordinator of a rugby union club. In his
leisure time, he practices mountain biking, surfing and alpine skiing.
Jürgen Perl is Professor Emeritus of Computer Science, University of Mainz, Germany. His main
research interests using modelling and simulation methods include pattern recognition of game
behaviours and player movements, as well as physiological load performance dynamics. He is a
founding member of the International Association of Computer Science in Sport, serving as
President from 2003–2007 and as Honorary President thereafter.
Elissa Phillips is Senior Biomechanist at the Australian Institute of Sport. Elissa's responsibility
includes implementing a programme of biomechanical services and research for athletes and coaches
to enhance performance. Recent research has focussed on feedback technology and coordination
profiling in expert performance.
Ross Pinder is Lecturer in Sport and Exercise Sciences at the University of the Sunshine Coast,
Australia. He is primarily interested in maximising skill learning in sport through the design of
representative experimental and practice environments. He currently works as a skill acquisition and
high-performance consultant for the Australian Paralympic Committee.
Ian Renshaw is Senior Lecturer, Queensland University of Technology, Brisbane, Australia. Ian's
teaching and research interests are centred on applications of ecological dynamics to sport settings.
Given Ian's background in physical education and coaching, he is particularly interested in enhancing
pedagogical practice. Recent research has focussed on developing the links between sport
psychology and skill acquisition; implementing constraints-led approaches in physical education;
emotions and learning in sport; talent development; developing expertise in cricket and visual
regulation of run-ups. Ian has worked with numerous sports providing coach education and skill
acquisition advice.
Michael A. Riley received a BA in psychology from the University of Louisiana-Monroe, USA, in
1994 and a PhD in experimental psychology from the University of Connecticut, USA, in 1999. He
is currently Professor of Psychology and Director of the Center for Cognition, Action and
Perception, University of Cincinnati, USA, where he has been on the faculty since 2000. His
research on ecological and complex dynamical systems approaches to perception–action has been
funded by the National Science Foundation and the US Army Medical Research and Materiel
Command.
Wolfgang I. Schöllhorn is Professor of Movement and Training Science and Director of the Institute
of Sport Science at the University of Mainz, Germany. With a background in physics, sports,
pedagogy and neurophysiology, his research areas include dynamic systems, adaptive behaviour,
learning and brain states, biomechanics, signal analysis and pattern recognition.
Ludovic Seifert is Associate Professor at the Faculty of Sport Sciences, University of Rouen, France.
He conducts his research in the field of motor learning and motor control regarding expertise in
sport, movement variability and temporal dynamics of learning. He gained a PhD in expertise and
coordination dynamics in swimming at the University of Rouen in 2003, then a certification to direct
research in 2010 entitled ‘Motor coordination and expertise: A complex and dynamical system
approach of sport and physical education', for which he exhibited numerous publications in this field.
He is also a mountain guide certified by the International Federation of Mountain Guides
Associations and now investigates expertise and motor learning in climbing.
Carlota Torrents is a teacher of expressive movement and dance and a researcher at the Human Motor
Behavior and Sport Laboratory, National Institute of Physical Education of Catalonia, University of
Lleida, Spain. She completed her PhD in complex systems applied to training methods and has
published international books and papers related to complex systems, dance and sport.
Bruno Travassos is Assistant Professor at the Department of Sport Sciences, University of Beira
Interior, Portugal, and member of the group of performance analysis at CIDESD – Research Centre
in Sports, Health Sciences and Human Development, Portugal. His research interests are in the area
of game analysis and also in the learning processes and decision-making behaviour of players in
team sports with special emphasis in futsal and soccer.
Alexandar Tufekcievski is Professor at the Faculty of Physical Education, University of Ss. Cyril and
Methodius, Skopje, Republic of Macedonia. He has research interests in methods of qualitative
analysis and modelling used in biomechanics, particularly as applied to transfer in motor learning.
Alfonsas Vainoras is Professor at the Lithuanian University of Health Sciences. He investigates novel
methods of analysis of the electrocardiogram using a complex systems approach and analysis tools.
He has participated in the development of the E-Health programme in Lithuania and Europe.
Pablo Vázquez is a PhD student at the National Institute of Physical Education of Catalonia,
University of Barcelona, Spain. The focus of his research is the application of complex systems
principles on processes related to motor and performance changes under fatigue.
Luis Vilar completed his PhD in sports sciences, investigating the informational constraints on attacker
and defender performance in futsal. Currently, he is Assistant Professor at the Faculty of Human
Kinetics, University of Lisbon, Portugal, and at the Faculty of Physical Education and Sports,
Lusófona University of Humanities and Technologies, Portugal. He teaches UEFA-pro courses for
coaches. He is also head of youth football department and coach at Colégio Pedro Arrupe. He was a
football and a futsal player.
Gareth Watson gained a BSc in psychology from Queen's University Belfast, Northern Ireland, and
concluded his PhD in mechanical and aeronautical engineering and psychology at the same
university.
Jon Wheat is a Principal Research Fellow in Biomechanics, Centre for Sports Engineering Research
(CSER), Sheffield Hallam University, UK. He gained his undergraduate degree in sport and exercise
science from Manchester Metropolitan University before completing his PhD at Sheffield Hallam
University. Jon works on biomechanics research and consultancy projects in CSER and teaches on
the MSc sports engineering and MSc sport and exercise science degrees. His work is influenced by
the ecological approach to motor control and dynamical systems theory and he has a keen interest in
the development and application of biomechanics measurement systems for use outside the
laboratory, in more representative settings. He leads the Biomechanics Research Group in CSER
which has several research and consultancy projects in this area.
Acknowledgements
We give our profound thanks to all the chapter authors for their willingness to
share their comprehensive knowledge of the complexity sciences. We also
thank all the individuals who helped with reviewing the text and compiling
the index, especially Dominic Orth, José Pedro Silva, Rens Meerhoff, Pablo
Vázquez Justes and Sergi Garcia Retortillo.
Keith Davids: I acknowledge the efforts of my fellow co-editors for their
wonderful professionalism in seeing this project through from
conceptualisation to completion. They showed exemplary patience in dealing
with my endless requests. Finally, as always, I dedicate this book to my
family (Anna, Mike, Jake, Charlie and India) for their love and support.
Robert Hristovski: I dedicate this book to my family who supported me
each step of the way.
Duarte Araújo: This book is dedicated to those students who taught me the
meaning of inter-independence: Bruno Travassos, Vanda Correia, Ricardo
Duarte, Luis Vilar, Pedro Esteves and João Carvalho. I also acknowledge the
students of SpertLab for stretching their autonomy to offer me more time for
the book.
Natàlia Balagué Serre: I dedicate this book to Dani, Gerard and Pau.
Chris Button: This book has been an amazing act of teamwork. As such I
give my thanks to my offsiders, the chapter reviewers, and the authors.
You've made my editorial role a pleasure. And to the most important dyad in
my life: Ange and Melanie, thanks for your support.
Preface
Complex systems in nature are those with many interacting parts, all capable
of influencing global system outcomes. There is a growing body of research
that has modelled sport performance from a complexity sciences perspective,
studying the behaviour of individual athletes and sports teams as emergent
phenomena which self-organize under interacting constraints. This literature
has been published in journals covering physical education, sport science and
sports medicine, coaching science, psychology and human movement
science.
This book was conceived over many years of discussion and research,
when it became apparent that there was a need to bring together the
conceptual creativity and innovative ideas of many experts studying complex
systems in the context of sport performance from across the world. The
intention is to provide a coherent summary of where we currently stand with
regards to theory, current methodologies and empirical data concerning the
understanding of sport performance from a complexity sciences perspective.
Our rationale is to complete a comprehensive overview of complex systems
in sport for advanced undergraduate students, postgraduate students and
academics in a range of disciplinary areas. The authors contributing chapters
to this edited textbook have undertaken comprehensive overviews to
summarize the key ideas that have appeared in many excellent empirical
reports, reviews and theoretical position papers in the extant literature. The
aims of the various chapters in this book include the presentation of key ideas
from the complexity sciences and a summary of how these ideas might be
adopted in the organization of sport practice. In this way, this textbook builds
on existing material to provide a comprehensive foundation for students of
complex systems in sport.
This is a timely endeavour, since the study of complex systems in sport has
gained increasing prominence in recent years, leading to a deep interest from
students from a range of different disciplinary backgrounds. For example,
there is an increasing number of physicists and mathematicians involved in
this field of work and this book showcases research in sport science and
performance analysis that will enhance their understanding of the
applications of methodologies from the complexity sciences. Conversely,
within the sport sciences, there is a need to document the range of new
methods of analysis used for investigating representative complex sport
movements and actions at an individual and team level. These requirements
cannot be adequately captured by experimental designs and methods of
analysis that already exist in the basic movement sciences.
The structure of the book has been carefully designed to equip
readers with the basic theoretical knowledge of complex, nonlinear
dynamical systems (Part 1: Theoretical bases for understanding complex
systems in sport) prior to delivering an understanding of current methods for
studying such systems in sport performance environments (Part 2:
Methodologies and techniques for data analyses in investigating complex
systems in sport). In the final parts of the book (Part 3: Complexity sciences
and sport performance and Part 4: Complexity sciences and training for sport), the
emphasis is on expanding knowledge of practical applications of ideas and
methods in the study of training and performance in individual and team
sports.
Many of the chapters deliberately explore common themes, although each
chapter attempts to provide a unique and detailed contribution to the topic of
complexity in sports. The interaction of many authors across multiple
chapters allows the overall book to develop collective themes, which emerge
throughout the whole text. This collective approach has been a deliberate
strategy of the editors, meaning that comprehension of the whole book
provides much more than the sum of each chapter in isolation. Typical
complexity science thinking!
This book supports a burgeoning area of academic interest. There is now a
biennial international scientific congress dedicated to this area of study,
which attracts around 250 delegates to each meeting. This book
neatly complements the research presented at these meetings. There are also
academic journals newly emerging to support this field of work. The editors
acknowledge the powerful and influential role that every one of the chapter
authors of this book has played in bringing together theoretical material and
practical applications which will further develop our understanding of
complex systems in sport.
Keith Davids
Robert Hristovski
Duarte Araújo
Natàlia Balagué Serre
Chris Button
Pedro Passos
Part 1
Theoretical bases for
understanding complex
systems in sport
1 Basic notions in the science of
complex systems and nonlinear
dynamics
Robert Hristovski, Natàlia Balagué Serre
and Wolfgang Schöllhorn
The historical roots of the complexity sciences can be traced back to ancient
philosophers such as Aristotle (384–322 BC), whose famous saying, ‘The
whole is more than the sum of its parts', indicated the duality of holism versus
reductionism in science. The beginning of modern Western science is mostly
associated with the development of a mechanistic world view, originating in
contributions from Galileo, Kepler and Newton in the seventeenth century.
The mathematico-experimental method became trend-setting and, in the same
period, Newton created the mathematical basis of dynamical systems theory.
By showing explicitly that celestial mechanics, Earthly tides and falling
bodies were governed by the same law of universal gravity, he actually paved
the way to what later became a foundation of general systems theory and
particularly synergetics: the search for the same principles acting at different
levels in the organization of matter. This world view may be conceived as a
special kind of holism where general principles manifest themselves through
different contexts, i.e. levels of organization. The whole manifests itself
through different partial phenomena, owing to different contexts in which
these phenomena are embedded.
These ideas have been influential in the movement sciences and, during the
1970s and 1980s, concepts of dissipative structures and self-organization
were incorporated in explanations of movement coordination (e.g. Kugler et
al. 1980). General predictions of this theoretical approach, such as non-
equilibrium phase transitions and the enhancement of critical fluctuations in cyclic
movements, were corroborated (Kelso 1984) and modelled (Haken et al.
1985; Schöner et al. 1986) with great success. These papers became
milestones in the search for principles of motor behaviour from a complexity
sciences and dynamical systems perspective, which made direct contact with
theory in sport science. Principles of self-organization were successfully
applied to multi-limb cyclic movements before they were experimentally
corroborated (Fuchs et al. 1992) and mathematically modelled (Jirsa et al.
1994) at the level of the central nervous system, as well as in learning
processes (Zanone and Kelso 1992). Self-organizing phenomena were also
discovered in studies of social coordination (Schmidt et al. 1990). In 2005, a
unified model of rhythmic and discrete movements was published (Jirsa and
Kelso 2005), predicting as a generic consequence the possibility of the
emergence of false starts. In the past two decades, the complex dynamic
systems paradigm became a fruitful experimental and theoretical approach in
capturing and explaining many phenomena of motor behaviour that are
closely related, although not equal, to problems in sports science. This
relatedness and prospects of the dynamical systems approach to sports
science problems were advocated in the works of Davids and colleagues (see
e.g. Davids et al. 1994, 2003).
In the following sections, we define complex systems and point to some
main differences between non-living and living systems. We then discuss in
more detail the differences between linear and nonlinear dynamical systems
and point to some necessary concepts important for understanding why
nonlinear dynamics is important in explaining sports phenomena. The
material is presented in a way that allows readers unfamiliar with the field to become acquainted
with basic terms and meanings from the complex dynamical systems
approach to sports.
What are complex systems?
Complex systems consist of many components which interact among
themselves and, as a whole, interact with their environments. Complex
systems may be homogenous or heterogeneous. For example, a piece of ice
contains innumerable interacting components, i.e. water molecules. These are
complex but homogenous systems. Living complex systems, besides having
many interacting elements, consist of structurally and functionally
heterogeneous (neural, muscle, tendinous, etc.) components, so they belong
to the class of heterogeneous complex systems. Biological systems also
contain parts existing in different physical phases: fluid (e.g. blood), semi-
rigid (muscles have properties of liquid crystals) and rigid (e.g. bones). Social
systems, as well, consist of interactions between heterogeneous agents. Thus,
whereas between water molecules there is one kind of interaction, i.e.
hydrogen bonds, between heterogeneous components there may be different
kinds of interactions (generally informational and/or mechanical). These may
have varying intensities, and span different spatiotemporal scales, which
immensely increases the level of complexity of description of such systems.
In such systems, each single component can ‘perceive’ a different
environment. There is another important difference between non-living and
living complex systems. In non-living systems one can isolate a large portion
of the larger system and study it because the behaviour of the system will be
the same. This is one of the main advantages that make statistical physics
feasible. In living systems, however, this is not possible: one cannot isolate an
organ that will function independently of the organism. Living complex
systems are also adaptive and goal directed, while one cannot find an
argument to claim the same for the non-living systems. Adaptive systems are
those which evolve, develop and learn to negotiate with their environments
by changing and fitting their behaviour to emerging constraints.
Besides these qualitative differences, there are universal features that are
valid for either living or non-living complex systems. Both kinds of systems
possess mutual interactions and interdependence between constituent
components. It seems that interactions are largely responsible for the
possibility of capturing both kinds of systems within similar formal
frameworks because mutual interactions, and recursive self-interactions that
result from these, form the nonlinear character of such systems. As a
consequence, complex systems, living or non-living, exhibit nonlinear
dynamical properties and form the class of complex nonlinear dynamical
systems. How these interactions change depends largely on the constraints
embedded within the complex systems. Under some constraints, new forms
of behaviour emerge spontaneously, without being previously designed and
imposed on the system's behaviour, and this is a property of all complex
systems, regardless of whether they are living or non-living. Complex systems
may exhibit complex or simple behaviour. An athlete may perform simple
arm-curl rhythmic movements but also may be able to perform complex
sequences of dribbling actions. On the other hand, simple systems like a
single-component nonlinear pendulum may produce simple oscillatory
behaviours but also a very complex pattern of chaotic behaviour. Hence, the
complexity of behaviour should not be confused with the complexity of the
system. Complex systems may behave in a simple fashion because their
interacting components, under certain constraints, may form large coalitions
of cooperative elements, which reduces the dimensionality of the behaviour.
In this way, a complex system attains simple behaviour and may be treated as
a simple system on a macroscopic level. We get simplicity from complexity.
There are unifying principles that make it possible to treat complex systems in
a relatively simple fashion.
Linear and nonlinear complex dynamical systems
Dynamical systems are systems that change over time. Because all systems
change over time, although on different timescales, it follows that all systems
are dynamical. They are usually represented by differential or difference
equations but also by cellular automata and networks, or even by a mixture of
some of these. Dynamical or behavioural variables converge in their
evolution to a stable state in which they can dwell indefinitely under given sets
of constraints. This stable state is called an attractor, because it attracts all
nearby initial states of the system (Figure 1.1).
If the system is placed into different initial positions, it will converge to
one state that is stable, i.e. the attractor. The set of initial states that converge
toward the attractor forms its basin of attraction. The attractor can be
conceived as a source of forces that pull all initial states toward it. Its
antipode is the unstable state called a repeller. The repeller repels all the
nearby initial states further away from it. If the system is placed into different
initial positions close to the repeller, it will diverge away from it (see Figure
1.1).
Dynamical systems consist of two broad classes: linear and nonlinear.
Linear dynamical systems are those whose rate of change of the relevant
behavioural variable is a linear function of that same variable. These systems
are proportional, in the sense that a small change of the constraints
influencing them brings about a small change in the behavioural variable. A
large change in constraints is needed to produce a large change in the
behavioural variable. In a sense, linear systems are overly flexible because of
their proportional response to changing constraints. However, they are
monostable (see Figure 1.1); i.e. under any set of constraints they either
converge to a well-defined attractor or diverge to infinity if they become
unstable. In this sense, linear systems are too rigid: they cannot switch between qualitatively different stable states.
Figure 1.1 Top: a stable linear system with one attractor and one basin of attraction (see converging
arrows). Bottom: unstable linear system with one repeller and diverging flows toward infinity. There
are no alternative stable states (attractors) for the behavioural variable Y
Figure 1.2 Nonlinear systems can possess repeller(s), but also two or more stable states of the
behavioural variable Y and associated basins of attraction shown by converging arrows toward the
attractors
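The contrast between the monostable linear case and the multistable nonlinear case can be illustrated with a minimal numerical sketch (Python; the two toy systems and their parameter values below are purely illustrative and are not taken from the chapter). A linear system dx/dt = –kx always settles to its single attractor at zero, whereas the nonlinear system dx/dt = x – x³ settles to one of two attractors (at –1 or +1) depending on which basin of attraction the initial state lies in, with a repeller at zero separating the basins.

import numpy as np

def simulate(f, x0, dt=0.01, steps=2000):
    """Euler-integrate dx/dt = f(x) from initial state x0."""
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)
    return x

linear = lambda x: -1.0 * x          # monostable: single attractor at x = 0
nonlinear = lambda x: x - x ** 3     # bistable: attractors at x = -1 and x = +1, repeller at x = 0

for x0 in [-2.0, -0.5, 0.5, 2.0]:
    print(f"x0 = {x0:+.1f} -> linear: {simulate(linear, x0):+.3f}, "
          f"nonlinear: {simulate(nonlinear, x0):+.3f}")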
Collective variables, instability and bifurcations (phase
transitions)
Consider a complex system comprising many, say thousands, of components
and their connections, enabling a vast set of interactions among them. If we
seek to capture the dynamics of that system we have to formulate thousands
of equations describing the dynamical laws governing their behaviour and
then solve them to deduce the behaviour of each of those components. This
kind of microscopic approach to capturing complex systems behaviour seems
quite unreasonable. Think of the complexity of description if we are to
deduce the macroscopic behaviour of a biological system, say running,
starting from microscopic biochemical processes in each cell of the organism.
Fortunately, large masses of cells in living systems perform in coherent and
cooperative ways so that they create much smaller numbers of mesoscopic
and macroscopic behavioural variables, which render their comprehension
easier. These macroscopic variables are those which are essential for
describing the coordinated behaviour of the system as a whole and, because
they emerge from the cooperative behaviour of collectives of components,
they are called collective variables. Since they arise from the collective, task-dependent cooperation among components, they capture the order, i.e.
coordination, present within the system and hence they are called order
parameters.
Now, how are these collective variables or order parameters connected to
stability and instability properties of the system? It happens that these
variables are best detected in the vicinity of the instability points of the
system. In this region, as some parameter is varied, the system returns
(relaxes) to the attractor increasingly slowly after a perturbation, a property known
as critical slowing down. The increase of the local relaxation time shows
that components of the system behave less cooperatively, i.e. they are losing
their coherent synergic action and attain a larger degree of independence. As
the control parameter nears a critical value, any initial perturbation grows and
leaves the previous stable state. This point is called a critical point. At this
point, the system suffers a loss of stability and the local relaxation time
becomes infinite, since the system never relaxes back to the previous
attractor. This growth is due to the self-enhancing, positive feedback process.
Positive feedback exists when the subsequent influences enhance the initial
change. At critical points, a qualitative discontinuous change in a system's
behaviour occurs – a bifurcation or a phase transition – and the values of
influential parameters at those points are called bifurcation or critical values.
An example of this critical phenomenon in the sports domain concerns the
interstrike time intervals of phase-free boxing actions used to strike a target
(Chow et al. 2009). Consider when performers initiate strikes in a parallel
stance from different scaled distances to the target. Scaled distance D may be
measured as a ratio between the physical distance of a performer's tip of the
toes from the target and their arm's length. The performer's forward strikes
toward the target act as unidirectional perturbations on their centre of mass,
tentatively considered as a collective variable. For increasing scaled distances
D from the target, the forward lean is restored increasingly slowly, so
the inter-strike time intervals increase too. The restoration time of the
forward-leaned trunk position back to an upright two-foot parallel stance
slows down and tends to infinity in the critical region of scaled performer–target
distances D > 1.35, i.e. D0 – D < 0.05, with D0 = 1.4; that is, it exhibits a critical
slowing-down effect (Figure 1.3, left panel). In fact, the curve has a typical
critical behaviour form: ⟨T⟩ = A(D0 – D)^(–α) + B, where ⟨T⟩ is the
average inter-strike time interval, A = 0.63, B = 0.19 and the critical
exponent α = 0.456.
In other words, within the critical region (D0 – D < 0.05), a qualitative coordination change from
the parallel to the more stable diagonal stance, by stepping forward, takes
place (Figure 1.3, right panel), settling the centre of mass into a more stable
state. This posture-to-posture transition, which places the centre of mass in a
more stable state, is obviously preceded by an increase in the relaxation (i.e.
restoration) time of the forward lean back toward the upright stance as the scaled distance
D was increased. In the newly formed coordination, not only does the centre
of mass become more stable but the new stance is also more functional and
affords a more stable position adjacent to the target-striking position.
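As a rough numerical illustration of how such a critical slowing-down curve behaves, the sketch below (Python) evaluates the fitted form ⟨T⟩ = A(D0 – D)^(–α) + B with the constants reported above (A = 0.63, B = 0.19, α = 0.456, D0 = 1.4). It illustrates the divergence of the fitted curve only and is not a reanalysis of the original data.

import numpy as np

# Constants reported in the chapter for the boxing inter-strike interval data
A, B, alpha, D0 = 0.63, 0.19, 0.456, 1.4

def mean_interstrike_interval(D):
    """Average inter-strike time interval <T> = A*(D0 - D)**(-alpha) + B."""
    return A * (D0 - D) ** (-alpha) + B

for D in [1.0, 1.2, 1.3, 1.35, 1.39, 1.399]:
    print(f"D = {D:.3f}  <T> = {mean_interstrike_interval(D):6.2f} s")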
In general, it has been shown (Haken 1987) that, at a transition point, only
one or few collective modes of the system become unstable and grow (like
the centre of mass). The other system degrees of freedom, such as the leg
components, become dependent on these collective modes and start to be
governed by them. In other words, collective modes enslave the rest of the
components and force them to organize in a certain way (e.g. a step forward).
This is the well-known slaving principle introduced by Herman Haken (e.g.
Haken 1987). These enslaved components stabilize the value of the collective
variables by nonlinear interactions to a finite stable value (the newly formed
body position). That is why, instead of growing infinitely, the behavioural
variable converges to a finite value, which is the new stable state of
organization, i.e. the attractor, of the system. A temporal hierarchy is
established spontaneously. The collective variable (e.g. the centre of mass) takes
the role of a slowly varying top-down influence and the enslaved elements,
i.e. the leg components, adopt the role of fast variables that follow the
behaviour of the collective variable. Enslaved components, by their
cooperative behaviour, maintain the collective variable and the collective
variable governs the components. The system spontaneously splits into a two-
level hierarchy, i.e. into variable(s) that govern and those that are
governed. This is the meaning of the circular causality present in complex
systems. In this way, a pattern emerges from the interaction of components
that is greater than and different from the individual components themselves.
Emergence means that the macroscopic pattern has properties that cannot be
found in the components that form it. A parallel or diagonal stance cannot be
reduced to properties of individual motor units, metabolic processes or single
neural firings. Motor units, metabolic processes and neurons do not possess a
stance themselves. This is how synergetics solves the problem of the part and
the whole, which we have already discussed briefly.
Figure 1.3 Left panel: the increase of the inter-strike time interval for increasing of D control parameter
(right-to-left direction). The whole cycle of leaning forward to strike – upright posture restoration –
leaning forward to strike, increases, showing an increase in the relaxation time toward the upright
parallel-foot stance. For D0 – D < 0.05, the relaxation time towards the parallel stance tends to become
infinite; that is, no relaxation toward the parallel stance exists anymore. The parallel stance loses its
stability and transits to a more stable diagonal stance; see right panel (modified from Chow et al. 2009,
with kind permission from Nonlinear Dynamics, Psychology and Life Sciences)
Synergies
Synergies are functional groupings of components which are temporarily
assembled as a single unit. Synergies reflect nature's propensity for animate
objects to reduce their organizational complexity. For example, brain
synergies can be identified in the presence of a perturbation to one part of the
synergy (or network) and the subsequent reorganization of putatively linked
brain areas (Jantzen et al. 2008). Bernstein (1967) recognized the existence of
synergies in all forms of human movement, preferring the term ‘coordinative
structure’ in explaining how humans solve the degrees of freedom problem.
That being said, given its complexity, how can the human movement system
arrive at a single solution for a task from the infinite possible solutions?
Self-organization
Complex systems have many independent parts which communicate in
different ways but primarily in terms of information exchange. Such systems
have a tendency to form patterned behaviour (synergies) which is not
prescribed externally nor indeed controlled by a central manager. Instead,
behaviours are said to spontaneously organize at a macroscopic level as a
result of the microscopic fluctuations of individual components. Whilst sports
teams are influenced to varying extents by the instructions of key figures
such as a coach or captain, the spatiotemporal trajectories of each player can
be thought of as a product of self-organization (Duarte et al. 2012a).
Metastability
Complex systems are typically composed of multiple stable states.
Consequently, they exhibit periods of stability and instability, as they transit
between them in response to changing control parameter dynamics. A
particularly important property arises when pre-existing stable states dissolve
(i.e. bifurcations) to create remnants or ‘ghosts’ of these stable states. In a
metastable performance region, one or several movement patterns are weakly
stable (when there are multiple attractors) or weakly unstable (when there are
only attractor remnants) and switching between two or more movement
patterns occurs according to interacting constraints. A metastable system can
simultaneously realize a number of different competing patterns and thus has
the potential to exhibit novel and independent solutions (i.e. creativity) as
well as stable, coordinated behaviour. The neural dynamics of the brain
capitalize upon the metastability of the system, to flexibly reorganize
thoughts, memories and intentions on a moment-to-moment basis. This
allows appropriate online guidance of our actions.
Particular impetus to the field of coordination dynamics has been provided
by studies of bimanual coordination in identifying the role of key constructs
of self-organization, collective variables and control parameters, as well as
transitions between stable states of neurobiological organization (see Kelso
1984; Schöner and Kelso 1988). The construction and adaptation of
movement patterns has been successfully modelled and investigated by
means of synergetic theoretical concepts since Haken, Kelso and Bunz
(HKB; 1985) applied them in investigations of brain and behaviour. In their
pioneering HKB model and its subsequent development (e.g. Schöner et al.
1986), abrupt changes in bimanual and multi-limb oscillatory movement
patterns (Jeka and Kelso 1995) were explained by a ‘loss of stability’
mechanism, which produced spontaneous phase transitions from less stable to
more stable states of motor organization with changes in critical control
parameters.
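The loss-of-stability mechanism in the HKB model can be sketched numerically (Python; the parameter values below are illustrative only). In the model, the relative phase φ between the two oscillating limbs evolves as dφ/dt = –a sin φ – 2b sin 2φ, and the anti-phase pattern (φ = π) loses stability as the ratio b/a falls, which is what happens as movement frequency increases. The sketch integrates this equation and shows the anti-phase solution persisting for larger b/a but switching spontaneously to in-phase (φ = 0) once b/a drops below the critical value of 0.25.

import numpy as np

def hkb_rate(phi, a, b):
    """HKB relative-phase dynamics: d(phi)/dt = -a*sin(phi) - 2*b*sin(2*phi)."""
    return -a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi)

def settle(phi0, a, b, dt=0.01, steps=20000):
    """Euler-integrate the relative phase from phi0 until it settles."""
    phi = phi0
    for _ in range(steps):
        phi = phi + dt * hkb_rate(phi, a, b)
    return phi

a = 1.0
for b_over_a in [1.0, 0.5, 0.3, 0.2, 0.1]:   # decreasing b/a mimics increasing movement frequency
    phi_final = settle(np.pi - 0.1, a, b_over_a * a)
    print(f"b/a = {b_over_a:.2f} -> relative phase settles near {phi_final:.2f} rad")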
Together, these theoretical and empirical advances have provided a sound
rationale for a coordination dynamics-based explanation of how processes of
perception, cognition, decision making and action underpin intentional
movement behaviours in dynamic environments (e.g. van Orden et al. 2003).
This framework proposes that the most relevant information for decision
making and regulating action in dynamic environments is emergent during
performer–environment interactions.
Traditional investigations of actions with limited degrees of freedom have provided
some useful models for understanding how control systems may operate
during neurobiological action. But they have shed less light on
how the many biomechanical degrees of freedom are managed in
complex actions prevalent in dynamic performance environments common to
sport (Davids et al. 2006). Although many initial studies of coordination
dynamics tended to favour analysis of actions involving a limited number of
degrees of freedom, over multi-articular movement patterns (for a review of
that body of work see Davids et al. 1999), investigation of complex multi-
articular movements has proceeded rapidly in the last two decades (see for
example: Chen et al. 2005 on learning a pedalo task; Forner-Cordero et al.
2007 on postural control). Interesting issues in neurobiological coordination
and control concern the specific order parameter/collective variable dynamics
that have been studied in this body of work and how coordination training
shapes its manifestation over time.
How has coordination dynamics been analyzed in movement
science?
Coordination dynamics have been studied in the movement sciences at
several scales of analysis. These scales range from the study of intralimb
coordination to the examination of interactions between players in sports. Various
methods and techniques have been used to analyze coordination dynamics at
these different scales. In addition to providing a brief account of the details of
the methods and techniques in this section, pertinent issues related to their
application in different contexts are highlighted.
Cross-correlations
Similar to commonly used correlation techniques such as Pearson's
product-moment, cross-correlations assume that a linear relationship exists
between the two time series under analysis – for example hip and knee
flexion–extension angles during gait. However, unlike other correlation
techniques, cross-correlations do not assume that the variables change in
synchrony during motion (Mullineaux et al. 2001). Rather, by time shifting
one relative to the other, the ‘time lag’ at which the correlation between two
time series is greatest can be identified. As such, in addition to identifying the
strength of the relationship or degree of linkage between the time series,
cross-correlation analyses reveal the type of relationship (the degree to which
the time series are in-phase or anti-phase). Indeed, if expressed relative to the
period of the motion, the time lag associated with the greatest correlation
coefficient indicates the phase relationship between the two segments
(Temprado et al. 1997) and is analogous to the measure of discrete relative
phase – discussed later in this section.
As Mullineaux et al. (2001) highlighted, cross-correlations have been
suggested as being particularly suited to the study of human movement, as
the coordinated actions of body segments and joints are often time shifted
relative to each other. However, several issues need to be considered before
conducting cross-correlation analyses in the study of coordination dynamics.
Firstly, cross-correlations are not suited to the analysis of two time series that
have a non-linear relationship (Sidaway et al. 1995). It is prudent that cross-
correlations are interpreted alongside more qualitative indications of the
relationship between the time series, such as variable–variable plots (known
as angle–angle plots if the time series concerns body segment/joint angles).
Secondly, as Pohl and Buckley (2008) highlighted in their study of foot and
shank motion during running, cross-correlation techniques provide
information about only the temporal similarity of, but not the ratio of the
coupling between, two time series. In other words, cross-correlation
techniques do not take account of excursion magnitudes (Pohl and Buckley
2008). Thirdly, there is disagreement regarding the recommended maximum
number of time lags that should be applied when estimating cross-
correlations. Mullineaux et al. (2001) highlighted that the probability of type-
I statistical errors inflates with an increasing number of lags. To reduce the
risk of a type-I error, they cited a recommendation that a maximum of plus or
minus seven lags be used. Alternatively, as a general rule, Derrick and
Thomas (2004) recommended a maximum time lag of n/2 (where n is the
number of data points in the time series) but acknowledged that factors specific to
the time series under investigation should be considered when defining a
maximum number of time lags. A final point to consider before using cross-
correlations in the study of coordination dynamics is that they provide only
one, discrete, measure of coordination per movement cycle.
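A minimal sketch of a time-lagged cross-correlation analysis is given below (Python; the 'hip' and 'knee' signals are simulated purely for illustration, and the maximum lag of 20 samples is an arbitrary choice kept well below n/2). The function shifts one series relative to the other and reports the lag at which the Pearson correlation peaks.

import numpy as np

def cross_correlation(x, y, max_lag):
    """Pearson correlation between x and y for lags -max_lag..+max_lag.
    A positive lag means y lags behind x by that many samples."""
    lags = range(-max_lag, max_lag + 1)
    out = []
    for lag in lags:
        if lag < 0:
            r = np.corrcoef(x[-lag:], y[:lag])[0, 1]
        elif lag > 0:
            r = np.corrcoef(x[:-lag], y[lag:])[0, 1]
        else:
            r = np.corrcoef(x, y)[0, 1]
        out.append(r)
    return np.array(list(lags)), np.array(out)

# Illustrative data: a 'knee' angle that follows the 'hip' angle with a 10-sample delay
t = np.linspace(0, 4 * np.pi, 400)
hip = np.sin(t) + 0.05 * np.random.randn(t.size)
knee = np.roll(np.sin(t), 10) + 0.05 * np.random.randn(t.size)

lags, r = cross_correlation(hip, knee, max_lag=20)   # keep max_lag well below n/2
best = lags[np.argmax(r)]
print(f"peak correlation r = {r.max():.2f} at lag = {best} samples")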
Vector coding
Angle–angle diagrams provide a convenient means for qualitatively
analyzing coordination dynamics. The shape of the angle–angle trace reveals
important information about the interaction and coupling between the body
joint/segment angles of interest. Several techniques have been developed to
provide a quantitative measure of the shape of the angle–angle trace; the
techniques are collectively referred to here as vector coding methods.
An early vector coding technique was presented by Freeman (1961). By
superimposing a grid on to the angle–angle curve and defining an eight-
element direction convention, a chain of integers (0–7) is established to
represent the shape of the curve. Although the integer chains capture the
shape of the trace and this technique has been used to study human
movement (e.g. Hershler and Milner 1980; Whiting and Zernicke 1982), a
limitation of the approach is that ratio data (joint/segment angles) are reduced
to the nominal scale (Tepevac and Field-Fote 2001). More recent vector
coding methods have addressed this issue. Hamill et al. (2000) reported a
modification of a technique presented by Sparrow et al. (1987), in which the
shape of the angle–angle trace is quantified by calculating a ‘coupling angle’;
the angle formed by the vector connecting two adjacent data points on an
angle–angle trace and the right horizontal.
The coupling angle provides information about the shape of the angle–
angle trace and the interaction/coupling between body segments. Figure 3.1
illustrates how coupling angles can be interpreted. Coupling angles of 0° and
180° indicate movement solely in the body segment/joint angle represented
on the x axis of the angle–angle diagram – where 0° indicates positive and
180° indicates negative, angular motion. Similarly, coupling angles of 90°
and 270° indicate movement solely in the body segment/joint angle represented
on the y axis of the angle–angle diagram (90° indicates positive and 270°
indicates negative, angular motion). Coupling angles of 45° and 225° indicate
that both body segment/joints are moving at the same rate in the same
direction (in-phase). Finally, coupling angles of 135° and 315° indicate that
the body segments/joints are moving at the same rate but in opposite
directions (anti-phase).
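A minimal sketch of the coupling-angle calculation, in the spirit of the vector coding approach described above, is given below (Python; the two angle traces are simulated and serve only to show that two segments rotating at the same rate in the same direction yield coupling angles near 45°, i.e. in-phase coordination).

import numpy as np

def coupling_angle(theta_x, theta_y):
    """Coupling angle (deg, 0-360) of the vector joining adjacent points of an
    angle-angle trace, measured from the right horizontal (x axis)."""
    dx = np.diff(theta_x)
    dy = np.diff(theta_y)
    gamma = np.degrees(np.arctan2(dy, dx))
    return np.mod(gamma, 360.0)

# Illustrative traces: segment Y moving at the same rate and in the same direction as segment X
theta_x = np.linspace(0, 30, 101)            # e.g. proximal segment angle (deg)
theta_y = np.linspace(10, 40, 101)           # e.g. distal segment angle (deg)
print(coupling_angle(theta_x, theta_y)[:5])  # ~45 deg throughout -> in-phase coordination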
Figure 3.1 The interpretation of coupling angles; arrows represent the direction of a vector connecting
data points on an angle-angle diagram
Coupling angles have been used to analyze coordination in a variety of
contexts. For example, Wilson et al. (2009) used vector coding methods to
determine the degree to which specific training drills represented lower-
extremity coordination patterns seen during triple jumping. Also, many
studies have used coupling angles to investigate coordination in gait (e.g.
Pohl and Buckley 2008; Ferber et al. 2005; Ong et al. 2011; Chang et al.
2008). For example, Ferber et al. (2005) investigated the effect of foot
orthotics on the coupling between the rearfoot and tibia during running. In
many studies, interpretations of coupling angles regarding coordination are
made directly using the approach outlined in Figure 3.1. However, it is rare
for coupling angles to be exactly equal to those highlighted in Figure 3.1,
making interpretation more complex. Recently, Chang et al. (2008)
introduced an approach to aid the interpretation of coordination, whereby the
range of possible coupling angles (0–360°) represented in Figure 3.1 is split
into four quadrants. The quadrant within which a particular coupling angle
lies is used to categorize coordination patterns and indicate the type of
coordination present (Table 3.1).
Table 3.1 Categories of coordination and their associated coupling angle (γ)
ranges; x-axis phase and y-axis phase denote the phases in which
movement is predominantly associated with the joint/segment
rotations represented on the x and y axes of the angle-angle
diagram, respectively
Type of coordination Coupling angle ranges
Relative phase
Coordination dynamics have also been investigated by calculating the relative
phase between oscillating system components. Both discrete relative phase
(DRP) and continuous relative phase (CRP) methods have been used. DRP
estimates the latency of the motion of one system component relative to
another (Kelso 1995). It is calculated by estimating the time difference
between an event common to both system components, relative to the period
of oscillation. For example, where the system components of interest are joint
angles, the common event might be the time at which the maximum joint
angles occur and the period would be the time to complete a joint rotation
cycle. DRP has been used to investigate the coordination between, for
example, respiration and stride rate during gait (O’Halloran et al. 2012),
pelvis and thorax rotations during treadmill walking (Lamoth et al. 2002) and
upper-arm segments during field hockey drives (Brétigny et al. 2011). DRP is
simple to calculate and, generally, does not require data normalization
(Hamill et al. 2000; Wheat and Glazier 2006; Krasovsky and Levin 2010).
However, similar to the coupling angle, DRP is a circular variable and should
be analyzed using directional statistics (Batschelet 1981). Finally, a
disadvantage of DRP is that it provides only one measurement of
coordination per movement cycle.
A continuous measure of relative phase has also been used in the study of
coordination dynamics: CRP. CRP indicates the phase relation between two
oscillating system components at each time point of a cycle of movement.
Based on phase plane plots – a plot of velocity against position – phase
angles are calculated by obtaining the arctangent of the ratio between velocity
and position. In other words, the angles between vectors connecting the
origin and each data point on the phase plane and the right horizontal identify
phase angles time series for each system component. The CRP between two
system components is then calculated as the difference between their phase
angles (0° represents in-phase and 180° represents anti-phase coupling).
In addition to providing a continuous measure of coordination, as velocity
is included in the calculation of phase angles, CRP offers the potential for a
rich and more detailed analysis of coordination (Hamill et al. 1999). Many
studies have used CRP to study coordination dynamics. For example, Silfies
et al. (2009) used CRP to investigate differences in movement strategies
during reaching in participants with and without lower-back pain. Irwin and
Kerwin (2007) used CRP to identify effective skill progressions for
developing the long swing on the high bar in men's gymnastics. Also,
numerous studies have used CRP in the study of bimanual coordination and
nonlinear phase transitions originating from the seminal studies of Kelso et
al. (1981, 1984). In addition to studying intra- and intersegmental
coordination, CRP has also been used on a macro scale to investigate
interactions between players in sports such as squash (McGarry et al. 1999)
and tennis (Palut and Zanone 2005).
There are important factors to consider before CRP is used. First, position
and velocity data used in the calculation of phase angles can require
normalization and the resulting CRP values vary dependent on the
normalization used – see Peters et al. (2003) for more information. Related to
this, an assumption of CRP is that the time series used in the calculation are
sinusoidal. Data that violate the sinusoidal assumption can make
interpretation of CRP difficult (Peters et al. 2003). Alternative methods have
been developed to address this limitation, including those based on the
Hilbert transform (Rosenblum and Kurths 1998) and relative Fourier phase
(Lamoth et al. 2002).
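The sketch below illustrates one possible CRP calculation (Python). The normalization of position and velocity to [–1, 1] is only one of several options discussed by Peters et al. (2003), and the two sinusoidal 'joint angles' are simulated purely for illustration; a quarter-cycle offset between them yields a CRP of about 90°.

import numpy as np

def phase_angle(angle, dt):
    """Phase angle (rad) from a phase-plane plot of velocity against position.
    Position and velocity are normalized to [-1, 1] before the arctangent is taken,
    one common choice among several (see Peters et al. 2003)."""
    pos = 2 * (angle - angle.min()) / (angle.max() - angle.min()) - 1
    vel = np.gradient(angle, dt)
    vel = vel / np.abs(vel).max()
    return np.arctan2(vel, pos)

def continuous_relative_phase(angle_1, angle_2, dt):
    """CRP time series (deg): difference between the two phase angles."""
    crp = np.degrees(phase_angle(angle_1, dt) - phase_angle(angle_2, dt))
    return np.mod(crp + 180.0, 360.0) - 180.0   # wrap to (-180, 180]

# Two sinusoidal 'joint angles' a quarter-cycle apart
t = np.arange(0, 5, 0.01)
crp = continuous_relative_phase(np.sin(2 * np.pi * t),
                                np.sin(2 * np.pi * t - np.pi / 2), dt=0.01)
print(f"mean |CRP| = {np.abs(crp).mean():.1f} deg")   # ~90 deg for this example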
Self-organizing maps
Self-organizing maps (SOMs) are a specific type of artificial neural network
most commonly used for dimensionality reduction and pattern classification
(Kohonen 2001). A SOM is commonly visualized as two layers of
information: an input layer and an output layer. The input layer consists of a
series of nodes, each of which represents an input vector (e.g. a single
multivariate time sample of a movement trial). The output layer consists of a
grid of nodes, which are each associated with a respective weight vector. The
weight vectors in the output layer compete to best represent nodes in the
input layer and, as a result, collectively model the input layer. Because of the
competitive learning algorithm, SOMs are able to model high-dimensional,
time-series data with a simple low-dimensional, visualizable mapping, while
maintaining the original topology of the input. Therefore, similar to principal component analysis (PCA),
SOMs provide an opportunity to study complex coordinated movement.
Additionally, SOMs are also able to model nonlinear relationships in the
input – a prevalent feature of neurobiological systems. The nonlinear input–output
mapping is possible owing to the competitive learning strategy and
the neighbourhood function (see Kohonen 2001), which together tend to
cluster similar data to similar map regions.
In the two-layer SOMs, the output layer represents postures or
coordination states during the movement. By connecting the sequence of
best-matching nodes for a single movement trial and superimposing the
trajectory on a mapping of the output layer, the time-series change in
coordination can be visualized. SOMs have been used to identify changes in
coordination for trials of gait (e.g. Barton et al. 2006; Lamb et al. 2011a) and
various sports actions, including discus throwing (Bauer and Schöllhorn
1997), football kicking (Lees and Barton 2005) and golf chipping (Lamb et
al. 2011b).
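To make the idea of a best-matching-node trajectory concrete, the sketch below implements a deliberately minimal SOM from scratch (Python with NumPy; it is not the implementation used in the studies cited above, and the 'movement trial' is simulated random data). Training shapes the output-layer weight vectors, and the sequence of winning nodes across the time samples of a trial then serves as a low-dimensional trace of the evolving coordination.

import numpy as np

class TinySOM:
    """Minimal self-organizing map for illustration (not a production implementation)."""
    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(rows * cols, dim))
        r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
        self.grid = np.column_stack([r.ravel(), c.ravel()]).astype(float)

    def bmu(self, x):
        """Index of the best-matching (winning) node for input vector x."""
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def train(self, data, epochs=20, lr0=0.5, sigma0=2.0):
        n = epochs * len(data)
        step = 0
        for _ in range(epochs):
            for x in data:
                lr = lr0 * (1 - step / n)                 # decaying learning rate
                sigma = sigma0 * (1 - step / n) + 0.5     # shrinking neighbourhood
                win = self.grid[self.bmu(x)]
                d2 = np.sum((self.grid - win) ** 2, axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian neighbourhood function
                self.weights += lr * h[:, None] * (x - self.weights)
                step += 1

# Illustrative 'movement trial': 200 time samples of 6 kinematic variables
rng = np.random.default_rng(1)
trial = np.cumsum(rng.normal(size=(200, 6)), axis=0)
trial = (trial - trial.mean(axis=0)) / trial.std(axis=0)   # z-score normalization

som = TinySOM(rows=8, cols=8, dim=6)
som.train(trial)
trajectory = [som.bmu(x) for x in trial]   # best-matching-node sequence over the trial
print(trajectory[:10])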
Several authors have employed a second SOM or a third layer, which is
trained on the output of the original SOM (e.g. Barton 1999; Lamb et al.
2011b). The sequence of the best-matching nodes is used as input for the
second SOM, which represents the movement pattern as a whole, rather than
the various states of coordination during the movement. Classifying the
movement as a whole can often be used to complement the findings of the
original SOM, especially when the coordination patterns underlying the
classification are of interest. In particular, and relevant to the coordination
dynamics framework, Lamb et al. (2011b) used the SOM trajectory of best-
matching nodes to study the coordination dynamics of the golf chip. Since the
SOM trajectory represents the evolution of coordination throughout the
movement, the authors used the trajectory as a collective variable, which they
then used to train a second SOM and subsequently to identify coordination
stability and transitions to new stable patterns.
SOMs represent a powerful tool for studying human movement,
particularly from a coordination dynamics perspective. Some issues that
remain contentious are whether to train one SOM on the data of several
subjects, at the risk of masking intra-individual changes in coordination, or to
introduce several SOMs unique to each subject, at the risk of limiting generalization
between subjects; how to determine the training parameters; and which
normalization procedures are most appropriate. General solutions to these
issues are difficult and should instead be considered with respect to the
specific research question. For example, data normalization should be treated
as Hamill et al. (2000) outline for the continuous relative phase. Training
parameters should be data driven: the principal components of the input data
give an objective method for determining the number of nodes, the
dimensions of the map and the training length (both rough and fine tuning;
Vesanto et al. 2000). With continued use by researchers, SOM analyses
should become more familiar and their use more uniform across different
working groups.
Why coordination dynamics are relevant for sports
performance
Having introduced some of the key concepts underpinning coordination
dynamics and the various techniques used to examine these characteristics,
we finally consider what this field of study can offer for our understanding of
sport performance. Athletes in sports exemplify self-organizing, complex
systems, using specific information sources to coordinate their actions with
respect to important environmental objects, surfaces, and events (Turvey
1990). As they train, athletes educate their attention by becoming better at
detecting key information variables that specify movements from the myriad
variables that do not (see Chapter 4). In addition, learners calibrate actions by
tuning existing coordination patterns to critical information sources and,
through practice, establish and sustain functional information–movement
couplings to regulate coordinated activity with the environment (see Chapter
18). Coordination dynamics may be conceived as the manifestation of these
processes, whereby athletes match their intrinsic dynamics to the
requirements of a task (i.e. behavioural information).
Performance enhancement
With knowledge of the coordination dynamics of a system, it is theoretically
possible to model a task space and to determine potential solution spaces
(‘hot spots') within that area in which high levels of performance are most
likely to surface (Cohen and Sternad 2009). Furthermore, manipulation of
control parameters that move a system towards these hot spots would enable
sports practitioners to systematically improve an athlete's performance
(McGinnis and Newell 1982). In other words, understanding the stability
attributes of the athlete–environment system allows one to identify potential
strengths and weaknesses and, hence, these can be used to develop strategies
and tactics in an objective fashion (as opposed to relying on the intuition of a
coach). For instance, in team sports such as soccer and basketball, the
dynamics of the team centroid (including its position and stretch index) can
be used to predict and potentially influence critical moments in a game such
as shots or turnovers in possession (see Chapter 19).
A number of applied studies exemplify these practical suggestions. In
association football (soccer), Duarte et al. (2012b) demonstrate how the
stability of a defending group of players is upset by the collective movements
of attacking players as they converge toward the goal. Moreover, a series of
studies in futsal have recently shown how key parameters such as
interpersonal distance, phase angle between attacker and defender relative to
goal and the distance from goal influence the likelihood of success in
attacking scenarios (for an overview see Button et al. 2012). Switching to
individual sports, Barbosa et al. (2010) identified a number of critical factors
related to optimizing swimming performance. They noted that a swimmer's
segmental mechanics and centre of mass kinematics are strongly related to
energetics and ultimately to optimal performance. Cignetti et al. (2009)
examined how the coordination dynamics of cross-country skiing were
adapted under varying degrees of slope steepness (e.g. 0–7°). Common to
many other studies of bimanual coordination, a number of stable modes of
coordination were revealed and transitions between them were marked by
temporary losses in stability.
Figure 4.3 A) Time series of the elbow-angle data for participant 1; B) A typical difference in the
power spectral density values for the online fluctuations of the elbow angle in the first (a) and the last
(b) third of the quasi-static exercise. The difference of the elbow angle variability between the first and
the third phase spans sub-second and second timescales. This signifies a correlated instability of
the system under fatigue (Hristovski et al. 2010)
At the behavioural level, these dynamics of order parameter variability
may be explained in terms of fatigue-induced dynamic competition between
two global processes at the neuromuscular level: the increasingly cooperative
protective inhibition and the goal-directed, intermittent bursting excitation,
the aim of which is to match more closely the task constraints, i.e. to keep
elbow flexion closer to the task-goal value of 90°. Under task constraints, the
increasingly cooperative protective inhibitory component processes, acting in
compliance with the pull of gravity, have to be counteracted by increasingly
cooperative excitatory component processes. Whereas in the initial phase of
exercise these processes compete over shorter time scales, resulting in a
stabilizing effect (small fluctuations) on the goal variable (elbow angle), as
exercise proceeds, they begin to compete over longer time scales, i.e. seconds
(Figure 4.3A), leading to larger fluctuations.
Figure 4.4 Differences in means for the spectral degrees of freedom; horizontal axis: the first and third
part of exercise; vertical axis: spectral degrees of freedom
The time series of the RPM variable were analyzed by time and frequency
domain methods (autocorrelation and spectral analysis). The spectral indexes
were calculated by estimating the linear fit slope of the power spectrum with
respect to frequency in logarithmic coordinates (Figure 4.7).
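A sketch of this spectral-index estimation is given below (Python; the white-noise and random-walk test signals are simulated stand-ins, not the RPM data). The slope of a straight-line fit to the power spectrum in log–log coordinates serves as the spectral index: roughly 0 for uncorrelated noise and roughly –2 for Brownian-motion-like signals.

import numpy as np

def spectral_slope(x, fs=1.0):
    """Slope of the power spectrum in log-log coordinates (the spectral index)."""
    x = np.asarray(x) - np.mean(x)
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    freqs, psd = freqs[1:], psd[1:]               # drop the zero-frequency bin
    slope, _ = np.polyfit(np.log10(freqs), np.log10(psd), 1)
    return slope

# Illustration: white noise (slope ~ 0) versus a random-walk-like signal (slope ~ -2)
rng = np.random.default_rng(0)
white = rng.normal(size=4096)
walk = np.cumsum(white)
print(f"white noise slope: {spectral_slope(white):+.2f}")
print(f"random walk slope: {spectral_slope(walk):+.2f}")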
Figure 4.7 (Upper panel): standardized fluctuations from the data on revolutions per minute; (middle
panel): power spectrum for the first half of the exercise with its slope of −1.5 showing anti-persistent
fractional Brownian motion (fBm); (lower panel): power spectrum for the second half of the exercise
with a slope of −2.2 showing persistent fBm
A scale-invariant relationship was found between the spectral power of
RPM variability and the frequency. The values of the spectral indexes were in
the intervals between –1 and –2.5, pointing to the presence of a fractal time
structure for RPM variability (from anti-persistent to persistent fractional
Brownian motion (fBm)) as fatigue develops. The results of this experiment
corroborate the results presented in experiment one for a dynamic type of
exercise. The scale invariance in the power spectra suggests that there may be
no specific site and associated timescale of their dynamics that would
dominate the cycling frequency variability. In other words, the system seems
to be dominated not by the components but rather by interactions between
processes, e.g. control loops dwelling in different time scales.
The power spectra slopes showed a dominantly anti-persistent profile
(between –2 and –1) for the RPM variable in the first half of the exercise
(Figure 4.7, middle panel). In the second half, there was a clearly persistent or super-
diffusive fBm profile (spectral slope between –2 and –3) in participants who
performed to exhaustion and whose time series at the end were dominated by
high-amplitude non-stationary fluctuations of the RPM variable (lower
panel). Performers who stopped at accumulated effort values which did not
produce such a fluctuation profile attained fBm spectral-slope values of around –2 and less.
These results are consistent with those of the previously discussed quasi-
static exercise study from a different perspective. Anti-persistent fBm is
characterized by dynamics in which increments are anticorrelated, meaning
that the present trend is more likely to be followed by an opposite trend. This
characteristic points to a stabilizing synergy for constant continuous tasks,
such as maintaining cycling frequency at 70 RPM. On the other hand,
persistent fBm is characterized by increments which are positively correlated
in time; in other words, the present trend is more likely to be followed by the
same trend rather than the opposite. This tendency puts the system in a state
that is dominated by inappropriately low-frequency variability at the expense
of high-frequency, shorter timescale corrections, as well as by the inability to
maintain the mean value constant around 70 RPM. Such a profile clearly
points to a system whose stability is disrupted. These findings show that
when highly motivated performers are close to the termination point the
fluctuation profile closely resembles the one discussed earlier and points to a
change in the neuromuscular cooperative processes prior to stopping.
Attention focus during a dynamic accumulated effort
To investigate the emergent nature and dynamics of task-related thoughts
(TRT) during accumulated effort, 11 participants ran twice on a treadmill at
an intensity of 80% of their maximum heart rate until voluntary exhaustion,
while self-monitoring and reporting, through signs, the changes in their
thoughts (Figure 4.8). During the first run, the intrinsic dynamics of their
thought processes was established. As no participant reported an emergence
of task-unrelated thoughts (TUT), only TRT, they were asked during the
second run to intentionally maintain TUT and to report back about
spontaneous switches from TUT to TRT and vice versa (for more details, see
Balagué et al. 2012). As can be seen in Figure 4.8B, the results revealed that
the intentionally imposed TUT was stable at the beginning of the exercise but
switched spontaneously to TRT with accumulated effort. Close to voluntary
exhaustion the TUT and TRT competed, showing a fully developed
metastability until the final TRT state prevailed. In summary, a nonlinear
dynamic effect of thought processes (loss of stability of TUT, spontaneous
emergence of TRT, spontaneous switches from TUT to TRT (a metastable
dynamical regime) and, finally, an absolute destabilization of TUT and
spontaneous transition to TRT) during the dynamic exercise was noted until
the termination of effort. This is a further demonstration that intentional
systems are subject to different constellations of peripheral and central
constraints (like attention focus) as exertion and fatigue accumulate.
Figure 4.8 A: Treadmill exercise with a participant reporting through signs; B: sample of 11 individual
time series of task-unrelated/task-related thought dynamics. Starting with the task-unrelated thoughts
(TUT) state, one can observe the switches between TUT and task-related thoughts (TRT). Eventually,
the TRT state becomes the one that precedes the exhaustion point. Numbers on the left signify
participants
These results illustrate that performers were not able to impose TUT
deliberately with equal efficiency during the exercise. Rather, the thought
states were constrained by the accumulated effort. The intention and attention
focus spontaneously self-organized into a different, more stable solution, i.e.
the TRT state of mind.
Sequential dependence
In sport science, the time series we capture will usually have sequential
dependence – the order of the data points matters. This occurs when the
underlying process that generated the data is not random. The position of a
player on the pitch at one point in time is influenced by that player's position
at some earlier time or by the position of the ball, for example. These
dependencies can be strong or weak. Sometimes, when the dependency is
strong, we can discern a highly specific, deterministic rule that describes how
our measured quantity (the ‘output’ of the behavioural system under study)
varies over time as a function of some inputs and some parameters. This is
often not possible, however, and more often we can identify some general
properties about how the measurements change over time but not an exact
rule. In either case, though, pinpointing the nature of sequential dependence
in the data helps us to understand what kinds of laws or constraints shape the
behaviour of the system we are studying.
Figure 5.1 shows two artificial time series. Both have means of about zero
but they clearly differ visually. The bottom series is more structured, with
upward and downward trends, while the top is more erratic and
unpredictable. Typical measures of variability, like the standard deviation,
just measure the spread of observations around the mean and are insensitive to
the temporal ordering of the data points. The two series' means and standard
deviations would not differ.
The more interesting differences between the time series in Figure 5.1 lie
in the patterns of change over time (i.e. the dynamics of the behaviour).
These dynamics are a consequence of the sequential dependence (or lack
thereof) of the time series. If you randomly rearranged the order of data
points in a time series, the mean and variance would not change but doing
this would destroy any dependence among the data points and thus would
alter the dynamics. Time-series analyses quantify the dynamics of behaviour.
Using a method described later (detrended fluctuation analysis; DFA), it can
be shown that the two series evolve over time very differently. The bottom
series exhibits a type of sequential dependence termed 1/f scaling and the
data points are correlated with each other over time, whereas the top series
lacks this temporal structure and evolves over time randomly (it lacks
sequential dependence altogether).
Figure 5.1 Simulated time series to illustrate the distinction between measures of central tendency and
measures sensitive to the sequential properties of the time series. Panel A depicts random uncorrelated
noise, while Panel B shows ‘pink’ or 1/f noise where each successive observation is positively
correlated with the others
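The point that conventional summary statistics ignore sequential structure can be illustrated with simulated signals (Python sketch below; the pink-noise generator is a simple spectral-shaping approximation). White and pink noise series standardized to the same mean and standard deviation differ clearly in their lag-1 autocorrelation, and randomly shuffling either series destroys whatever temporal structure was present.

import numpy as np

def pink_noise(n, seed=0):
    """Approximate 1/f ('pink') noise by shaping the spectrum of white noise."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.normal(size=n))
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                      # avoid division by zero at f = 0
    series = np.fft.irfft(spectrum / np.sqrt(freqs), n)
    return (series - series.mean()) / series.std()

def lag1_autocorr(x):
    """Correlation between each observation and the next one."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

rng = np.random.default_rng(1)
white = rng.normal(size=1024)
pink = pink_noise(1024)

for name, x in [("white", white), ("pink", pink)]:
    shuffled = rng.permutation(x)
    print(f"{name}: mean={x.mean():+.2f} sd={x.std():.2f} "
          f"lag-1 r={lag1_autocorr(x):+.2f} (shuffled: {lag1_autocorr(shuffled):+.2f})")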
Nonlinearity
There are many kinds of sequential dependence. As noted, one distinction is
strength. A system with weak dependence is impacted more strongly by
random factors than one with stronger sequential dependence. Another basic
distinction is whether the sequential dependence is linear or nonlinear. This is
a fundamentally important distinction.1
Nonlinear is a term used to describe a type of physical system or
mathematical equation for which the ‘output’ is not directly proportional to
the ‘input’. Any system, whether it is the movement of an entire football
(soccer) team or repeated finger-tapping movements, is assumed to have
components or mechanisms responsible for the process (see Carello and
Moreno 2005). Linear and nonlinear analyses assume that the components of
a system interact in fundamentally different ways. For a linear system, the
components interact additively, so their behaviour adds up to the system's
behaviour – the whole is the sum of its parts. For nonlinear systems, the
components interact multiplicatively rather than just additively, which is why
the output is disproportional to the input and the whole can differ from the
sum of its parts. Nonlinearity is more general than linearity. A practical
implication of this is that nonlinear time-series analyses can work if the
system under study is linear but linear time series analyses may not work for
studying nonlinear systems. This is a reason to strongly prefer nonlinear time
series analyses over linear. (This does not imply, however, that linear
methods are not useful or that they are never preferred. Many linear methods,
such as spectral analysis, have well-developed mathematical foundations that
are not yet available for some nonlinear methods, for example.)
Dynamical systems
Nonlinear time-series analyses stem from a field of mathematics known as
nonlinear dynamical systems. For good introductions to the topic see Kaplan
and Glass (1995) and Kantz and Schreiber (2004). Dynamical systems are
simply those systems whose states change over time. To better appreciate
nonlinear time-series methods, it is useful to understand a few basic concepts
of dynamical systems.
The mathematical study of dynamical systems usually employs well-
known equations rather than empirical data. An example is the Lorenz
system, which is used to model processes of heat transfer known as thermal
convection, given by:

ẋ = σ(y – x)
ẏ = x(ρ – z) – y          (1)
ż = xy – βz

The Lorenz system has three state variables (x, y and z) and three parameters
(σ, ρ and β). The values of the state variables change over time according to
equation 1. The dot over the variables on the left side of the equal sign
signifies change over time in the variable (i.e. the derivative of the variable)
and the terms on the right-hand side specify the exact rule according to which
this change occurs. Although we have written three separate equations (one
for each state variable), the Lorenz system is considered to be a unified
system because the state variables are coupled to each other – the equation
for each state variable contains at least one of the other state variables,
meaning that the evolution of each state variable depends on that of the
others. We can simulate these equations and plot time series of each state
variable (Figure 5.2) but none of those individual time series fully
characterizes the total behaviour of the system. The proper way to visualize
the system is a plot of the phase space – a three-dimensional (in this case)
space whose coordinates are the state variables x, y and z.
A graph of the phase space (Figure 5.2, left) shows that the Lorenz system
gravitates toward certain regions and never enters others; the trajectory is
drawn to a subset of the phase space. This subset is called an attractor and it
is the solution to the underlying equations. The attractor is termed such
because whatever the initial values of the state variables, the trajectory will
eventually be drawn to the attractor. The Lorenz attractor is called a ‘strange’
attractor because the Lorenz system exhibits deterministic chaos – a small
change in initial conditions or a small perturbation will become amplified so
that the long-term behaviour of the system is unpredictable, even though
there is no element of actual randomness (the system is completely
deterministic but unstable). Trajectories in the phase space never actually
cross each other, because of a mathematical rule known as the uniqueness
theorem (differential equations must have unique solutions; if the trajectories
crossed, it would mean two solutions existed at once).
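A numerical sketch of the Lorenz system is given below (Python; fixed-step Runge–Kutta integration with the classic chaotic parameter values σ = 10, ρ = 28, β = 8/3). Two trajectories started from almost identical initial conditions are seen to diverge, illustrating the sensitive dependence on initial conditions described above.

import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations (classic chaotic parameter values)."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state0, dt=0.01, steps=5000):
    """Fixed-step fourth-order Runge-Kutta integration of the Lorenz system."""
    traj = np.empty((steps, 3))
    s = np.array(state0, dtype=float)
    for i in range(steps):
        k1 = lorenz(s)
        k2 = lorenz(s + 0.5 * dt * k1)
        k3 = lorenz(s + 0.5 * dt * k2)
        k4 = lorenz(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i] = s
    return traj

# Sensitive dependence on initial conditions: two nearly identical starting states diverge
a = integrate([1.0, 1.0, 1.0])
b = integrate([1.0, 1.0, 1.0 + 1e-8])
print(f"separation after 10 time units: {np.linalg.norm(a[1000] - b[1000]):.2e}")
print(f"separation after 50 time units: {np.linalg.norm(a[-1] - b[-1]):.2e}")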
How does this relate to using nonlinear time series analyses on behavioural
sequences? Usually, we do not know the underlying equations that govern the
behavioural sequences that we analyze. Nor do we usually even know what
the state variables are, or what the attractor may look like. Presumably, the
variables that we can measure are relevant state variables but, even then, we
are left with a one-dimensional view of a potentially high-dimensional
system. This one-dimensional signal may contain distortions that result from
projecting the data from a higher-dimensional space to the single dimension
of the measured variable (just like a two-dimensional map of the three-
dimensional Earth contains distortions in the shape and size of land masses).
However, thanks to another mathematical theorem called the embedding
theorem (Takens 1981; also see Webber and Zbilut 2005), it turns out that
measuring a single-state variable (i.e. a single time series) is sufficient to
allow us to understand the underlying dynamics of the system. Essentially,
this theorem permits us to reconstruct a phase space that preserves the
dynamical properties of the system and that is free of projection distortions,
provided that the underlying system is nonlinear (a safe assumption in
practice) and the state variables are coupled. Phase-space reconstruction
(Abarbanel 1996) is a first step for several nonlinear methods, such as
recurrence quantification analysis (RQA). Webber and Zbilut (2005) and
Pellecchia and Shockley (2005) discuss phase-space reconstruction
extensively in the context of RQA, so we do not go into more detail here.
Figure 5.2 The phase space plot of the Lorenz attractor; we used the parameters σ = 10, ρ = 28 and β =
8/3 and initial conditions of x = 1, y = 1, z = 1
Fractal measures
Fractal methods have been widely used for investigating dynamical systems
in physiology, movement science, psychology and other disciplines (e.g.
Bassingthwaighte et al. 1994; Holden 2005; Liebovitch 1998). Most of these
methods describe how a measure of variability scales with sample size (i.e.
the amount of data over which the measure is computed). Many people are
familiar with visual images of fractals that repeat geometric patterns (i.e. they
are self-similar) across different spatial scales. Fractal signals are also self-
similar across scales – this is what is meant by ‘variability scales with sample
size’ – but, in this case, we refer to time scales rather than spatial scales and
the self-similarity is statistical rather than exact. The various fractal analyses
usually provide a single measure, the scaling exponent, which describes the
relation between the measure of variability and time scale. Brown and
Liebovitch (2010) provide a good introduction to practical uses of fractal
methods and Chapter 7 discusses the closely related concept of long-range
correlations.
There are many kinds of fractal analyses, each with unique procedures and
assumptions about the data being analyzed (Eke et al. 2000; Gao et al. 2006).
Spectral analysis (see Chapter 7) – a linear method – can be used to measure
fractal scaling in a time series because fractals exhibit what is termed a 1/f
power spectrum wherein spectral power scales inversely with frequency, f. In
this case, the scaling exponent is the slope of a linear fit (in log–log
coordinates) of the power spectrum. For certain types of data, spectral
analysis can be a preferred method but it places limiting assumptions on the
data (particularly stationarity).
A robust (especially with regard to non-stationarity) and widely used
fractal method is DFA (Peng et al. 1994). DFA yields a scaling exponent, α,
which describes how a variability measure called the detrended fluctuation
function scales with the size of a time window over which it is computed.
Like scaling exponents derived from other fractal analyses, α can be used to
classify the type of sequential dependence in a time series. For α = 0.5, the
time series lacks sequential dependence; the data are random white noise and
each point is independent of the others. For α = 1.0, the time series possesses
sequential dependence; the data are correlated. The particular sequential
dependence indicated by this α value is pink noise, which is another term for
1/f noise. For α = 1.5, the data are more strongly structured brown noise.
DFA and related fractal methods have been frequently used to study motor
control, where changes in α may indicate changes in neuromuscular control,
such as those that result from learning. One recent study, for example,
showed that people's hand movements become more pink (closer to ideal 1/f
noise) with practice on a Fitts’ law task (Wijnants et al. 2009).
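A compact sketch of the DFA procedure is given below (Python; the window sizes and test signals are illustrative choices, not prescriptions). The series is integrated, divided into windows, linearly detrended within each window, and the root-mean-square fluctuation is regressed against window size in log–log coordinates; the slope of that fit is the scaling exponent α, close to 0.5 for white noise and close to 1.5 for brown noise.

import numpy as np

def dfa_alpha(x, window_sizes=(8, 16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis scaling exponent (alpha) of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated ('profile') series
    fluctuations = []
    for n in window_sizes:
        n_windows = profile.size // n
        f2 = []
        for w in range(n_windows):
            segment = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, segment, 1), t)   # local linear detrending
            f2.append(np.mean((segment - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    alpha, _ = np.polyfit(np.log10(window_sizes), np.log10(fluctuations), 1)
    return alpha

rng = np.random.default_rng(0)
white = rng.normal(size=2048)                   # expect alpha close to 0.5
brown = np.cumsum(rng.normal(size=2048))        # expect alpha close to 1.5
print(f"white noise: alpha = {dfa_alpha(white):.2f}")
print(f"brown noise: alpha = {dfa_alpha(brown):.2f}")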
Approximate entropy
Entropy is a measure of the amount of disorder in a system. To get an
intuitive feel for the meaning of the original usage of the concept of entropy
(from statistical mechanics), imagine a room with a bottle of perfume in the
middle. There is no draft or exchange of air with the outside world – the
room is a closed system. In the initial state, the molecules of the perfume are
all concentrated in the bottle and therefore are in a state of low entropy (high
order) with respect to the total positions that the room allows them to take.
Low entropy reflects the fact that the distribution of perfume molecules is not
uniform across the room such that one particular location contains a
disproportionally large quantity of molecules. When the bottle is opened, the
perfume molecules spread in the room and will fill the whole room uniformly
given enough time. In this process, the perfume molecules reach a state of
high entropy (high disorder), wherein all parts of the room have the same
probability of housing perfume molecules.2
There are similar notions about the amount of ‘disorder’ in observed
measurements. This sense of entropy is rooted in information theory
(Schneider and Sagan 2005). The information-theoretical definition of
entropy (Kolmogorov–Sinai entropy; KS) indexes the predictability of a time
series (Gao et al. 2007). Entropy in this case is related to the following
question: if we measure a value of a system (a value in a time series) at a
particular moment in time, how much can we predict about the next state of
the system (Kantz and Schreiber 1994) or how much information is generated
about the system with each measurement? For deterministic periodic systems,
prediction of future states is easy – we only need to know a few points to be
able to perfectly predict their evolution; these systems have low entropy. For
example, knowing only one cycle of a sine wave is enough to fully describe
the underlying system that produced it (Gao et al. 2007). Things become
more interesting when irregularity and randomness appear because the rate of
information obtained about the system underlying these time series increases
with each measurement. Such systems have higher entropy.
Pincus (1991) introduced approximate entropy (ApEn) with the intention
of providing a practical method of calculating the regularity or repeatability
of relatively short and noisy empirical time series. This measure is
conceptually related to KS entropy, but KS entropy requires a very large N to
be accurate. The logic of ApEn is simple: what is the average probability that
a sequence of m + 1 data points finds a match in the time series given that it
has already found a match for m data points? (see Pincus 1991, for a
mathematical definition). Matches do not have to be exact; matching
sequences are identified within a tolerance defined by r. The probability of
finding matches is expressed as a negative logarithm to yield the ApEn value.
If ApEn is closer to zero, the signal is very regular, predictable, and less
complex – the next observation can be readily predicted from the previous m
observations. If ApEn is high (closer to two), the signal is more
unpredictable, random and, consequently, more complex. We suggest Pincus
and Goldberger (1994) for a detailed step-by-step visual introduction to
ApEn.
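For readers who want to experiment with the measure, a compact Python (numpy) sketch of the ApEn logic just described (template vectors of length m and m + 1, a tolerance r, self-matches included, and a logarithmic summary) is given below. The function name and the default parameters are our illustrative choices; Pincus (1991) gives the formal definition.

import numpy as np

def apen(x, m=2, r=0.2):
    """Approximate entropy of a 1-D series; r is an absolute tolerance."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def phi(length):
        # all overlapping template vectors of the given length
        templates = np.array([x[i:i + length] for i in range(N - length + 1)])
        # maximum (Chebyshev) distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # proportion of templates within r of each template (self-matches included)
        C = np.mean(dist <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

x = np.random.randn(300)
print(apen(x, m=2, r=0.2 * np.std(x)))                           # irregular series: higher ApEn
print(apen(np.sin(np.linspace(0, 30 * np.pi, 300)), m=2, r=0.2))  # regular series: lower ApEn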
ApEn has been applied to cardiovascular dynamics (Pincus and Viscarello
1992; Tulppo et al. 2001), postural control (Cavanaugh et al. 2005), isometric
force production (Slifkin and Newell 1999) and psychological time series
(Bauer et al. 2011; Yeragani et al. 2003). A general finding from the
application of this and other measures of complexity to the cardiovascular
system is that less-healthy systems show more regular, less-complex
dynamics (lower ApEn). In the context of cardiac physiology, Pincus (1994,
2006) hypothesized that a decrease in complexity corresponds to a
breakdown of communication between the subsystems participating in
cardiovascular control. Conversely, increased complexity (higher ApEn)
suggests greater coupling between the subsystems and fast and efficient
communication. This hypothesis has been supported using numerical
simulations from a variety of different types of mathematical models (Pincus
1994). These and similar observations using DFA have led to a general
perspective (Goldberger et al. 2002) that associates health with complex
fluctuations in physiological systems and pathology with a loss of
complexity. However, the pattern of changes in complexity with disease does
not always follow this trend (Vaillancourt and Newell 2002).
ApEn has also been applied to behavioural time series. Cavanaugh et al.
(2005) found that the regularity of centre of pressure (COP) fluctuations
exhibited by healthy young adults decreased with the removal of sensory
information. Following Pincus (1994), they interpreted the decrease in
complexity as an indication of restriction of the interactions among
components of the postural system. Cavanaugh et al. (2005) also reported
that athletes, following concussion, have more regular sway compared with
their own preinjury baseline values. In terms of recovery rates from
concussion, Cavanaugh et al. (2006) found that, while the traditional measure
of postural stability (equilibrium score) returned to preinjury levels within
three to four days, ApEn remained lower than the baseline level beyond this
period. The major implication is that the recovery is not as fast as has been
previously thought and that these athletes should not be allowed to return to
sport participation so soon.
Despite its success, ApEn is susceptible to some shortcomings related to
the specifics of the calculation of the conditional probabilities of repeating
patterns (Richman and Moorman 2000). The original ApEn algorithm
requires that each pattern of length m and m + 1 (called template vectors) find
at least one match in the time series, because the algorithm involves taking a
logarithm and the log of 0 is undefined. The algorithm therefore counts self-
matches in the estimates of conditional probability to make sure that at least
one match is present. This means that ApEn is a biased statistic and does not
estimate the population value of entropy well, especially for short time series.
To alleviate the bias, Richman and Moorman introduced an improved
algorithm called sample entropy, SampEn (described in detail below), which
does not count self-matches and uses a slightly different procedure to
quantify regularity of the time series. Additionally, ApEn is more affected by
measurement noise and sampling frequency than SampEn (Rhea et al. 2011).
Practically, this means that SampEn may be more reliable than ApEn when
comparing across studies that use different equipment or sampling rates.
Because of these limitations of ApEn, we focus on SampEn as a measure of
complexity.
Sample entropy
SampEn also quantifies signal regularity and is conceptually similar to ApEn.
Here, we demonstrate the SampEn calculations based on the description
provided by Richman and Moorman (2000). It is possible to do these
calculations using Microsoft Excel® but Matlab® (MathWorks Inc.) is better
because it is well suited for working with vectors. The Excel and Matlab files
used in the example are posted online at
http://homepages.uc.edu/~rileym/pmdlab/nonlinear/index.html. The sample
code is for illustration purposes only and we encourage you to use the more
robust code by Lake, Moorman and Hanqing available on PhysioNet
(www.physionet.org) for data analysis.
Assume that we continuously sampled a time series from a system of
interest. A plot of these data (Figure 5.3A) shows that they are relatively
stationary (the mean is about 5 and the variance does not change drastically
over the measurement period). The presence of stationarity is an important
requirement for SampEn (Govindan et al. 2007). Our example data series is
[0, 4, 8, 0, 4, 2, 0, 10, 8, 10, 7, 2, 3, 6, 1, 6, 7, 9, 0, 10, 8, 7, 4, 1, 8, 5, 9, 8, 2].
We first define all possible vectors of length m = 2 from the original N =
30 time series. A vector in this case is simply an array of numbers taken from
the original time series. We combine two consecutive measurements into a
vector and then move ahead one point to define the next two-element vector
until the last data point is reached. These vectors are fundamental to the
algorithm because all other calculations are based on them. Following the
algorithm for SampEn introduced by Richman and Moorman (2000), the total
number of vectors will be equal to N–m; in this case, it is 28. The vectors are
(where i stands for the vector number):
Figure 5.3 Panel A shows the simulated time series used in the tutorial calculation of SampEn; panel B
shows the same time series with added patterns of [1, 2, 3] values (shown in squares) to increase the
regularity of the time series (see text for details)
Xm = 2 (i = 1) = [0, 4],
Xm = 2 (i = 2) = [4, 8],
Xm = 2 (i = 3) = [8, 0],
Xm = 2 (i = 4) = [0, 4],
Xm = 2 (i = 5) = [4, 2],
Xm = 2 (i = 6) = [2, 0],
Xm = 2 (i = 7) = [0, 10],
…
Xm = 2 (i = 28) = [9, 8].
Now we designate one vector as a template with which all other vectors are
compared. We will take Xm = 2 (i = 1) as a template and find the number of
other vectors whose respective elements (i.e. values in the time series) differ
from the template by an amount less than the matching threshold parameter r.
For this example, we set r = 1. We exclude self-matches; they do not add any
new information about the regularity of the vectors extracted from the time
series (self-matches are counted in ApEn). The calculation below applies to
the first vector Xm = 2 (i = 1) but the procedure is exactly identical for all other
vectors:
Xm = 2 (i = 1) – Xm = 2 (i = 2) = [–4, –4],
Xm = 2 (i = 1) – Xm = 2 (i = 3) = [–8, 4],
Xm = 2 (i = 1) – Xm = 2 (i = 4) = [0, 0],
Xm = 2 (i = 1) – Xm = 2 (i = 5) = [–4, 2],
Xm = 2 (i = 1) – Xm = 2 (i = 6) = [–2, 4],
Xm = 2 (i = 1) – Xm = 2 (i = 7) = [0, –6],
…
Xm = 2 (i = 1) – Xm = 2 (i = 28) = [–9, –4].
Out of all possible 27 comparisons, only one vector, Xm = 2 (i = 4), is within r =
1 of the template vector Xm = 2 (i = 1). We record that fact as a match for
this particular vector i of length m = 2: MATCHm = 2 (i = 1) = 1. We then calculate
the probability of having a matching vector for our template by dividing the
number of observed matches by the number of possible matches:
Bm = 2 (i = 1)(r) = MATCHm = 2 (i = 1)/(N–m–1) = 1/27 = 0.037.
We repeat these steps for all vectors (i from 1 to 28) and find the average
probability of finding a matching vector, Bm = 2(r), for vectors of length m = 2
in the whole time series:
Bm(r) = sum(Bm = 2 (i = 1:28)(r))/(N–m).
For this particular time series, Bm=2(r) = 0.00793651 represents the
probability of finding a vector of length m = 2 that matches the template
vector within the radius r in the time series.
We then repeat these steps for vector length m+1 to find the probability of
finding matching vectors of length 3. This quantity is exactly equal to Bm = 3(r) as
defined above but, in the literature, it is sometimes referred to as Am(r). In the
case of this time series, Am(r) = 0.00284903.
We then convert the probabilities of observing recurrences of vectors of
length m and m+1 into the numbers of actual recurrences denoted by B and A
for Bm(r) and Am(r), respectively. Using the formula provided by Richman
and Moorman (2000):
B = [((N–m–1) × (N–m))/2]×Bm(r)
A = [((N–m–1) × (N–m))/2]×Am(r).
This calculation is also warranted because simply counting all possible
matches between the vectors overestimates the number of real matches.
For example, vector 1 may be recurrent with vector 2 and the algorithm
would automatically count the match of vector 2 with vector 1 as well, thus
introducing redundancy. This formula removes the redundancy – it
effectively forces only forward matches to be counted.
We then find the ratio between the number of matches of length m+1 (A)
and the number of matches of length m (B). The ratio of A to B is
the conditional probability that two sequences within a tolerance r for m
points remain within r of each other at the next point (Richman and Moorman
2000, p. 2042). We take the negative natural logarithm (ln) of this ratio to
make the final value positive, since the number of matches of length m+1 will
always be less than or equal to the number of matches of length m. If A is exactly the
same as B, then we have a limiting case in which the SampEn of the system is
0. As the time series becomes less predictable, the number of m+1 matches
becomes smaller, making the ratio closer to 0 while the –ln of the ratio
increases. The SampEn of our sample data time series is thus:
SampEn(m,r,N) = –ln(A/B) = –ln(1/3) = 1.0986
This rather high SampEn value suggests the time series has few repeatable
patterns that remain close to one another. This is not surprising because these
data were actually generated using uncorrelated samples from a Gaussian
distribution. The smallest non-zero value that the ratio A/B can take is
2/[(N–m–1) × (N–m)], so the maximum possible SampEn value is
ln(N–m)+ln(N–m–1)–ln(2) (Richman and Moorman 2000).
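A Python (numpy) sketch that follows the counting logic of this worked example is given below: it builds the N – m template vectors of lengths m and m + 1, counts matching pairs whose elements all differ by less than r (excluding self-matches and counting each pair once), and returns –ln(A/B). This is an illustrative re-implementation, not the Excel/Matlab files or the PhysioNet code mentioned above; it uses a strict 'less than r' criterion because that is how the tutorial describes matching, whereas many published implementations use 'less than or equal to r'. Applied to the example series as printed above (29 values, whereas the tutorial text works with N = 30), it should return approximately 1.0986, in agreement with the value computed in the text.

import numpy as np

def sampen(x, m=2, r=1.0):
    """Sample entropy with an absolute tolerance r (matches differ by less than r)."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def count_matches(length):
        # the N - m overlapping template vectors of the given length
        templates = np.array([x[i:i + length] for i in range(N - m)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # pairs whose elements all differ by less than r, excluding self-matches,
        # with each pair counted only once
        return (np.sum(d < r) - len(templates)) / 2.0

    B = count_matches(m)        # number of matches of length m
    A = count_matches(m + 1)    # number of matches of length m + 1
    return -np.log(A / B)       # undefined (infinite) if A is zero

series = [0, 4, 8, 0, 4, 2, 0, 10, 8, 10, 7, 2, 3, 6, 1, 6, 7, 9, 0,
          10, 8, 7, 4, 1, 8, 5, 9, 8, 2]
print(sampen(series, m=2, r=1))    # about 1.0986 = -ln(1/3) for this series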
To gain additional intuitions about what SampEn measures, we
deliberately increased the degree of predictability of the measurements by
introducing sequences of values [1, 2, 3] in the early and late parts of the time
series (Figure 5.3B). After performing the calculation described above, we
find that SampEn is 0.559 – a lower value than that obtained for the original time
series.
It is always possible to get a number for SampEn. It is up to the researcher
to make sense of the results. In general, the interpretation is easier if one has
a good intuitive feel for the structure in the data, which can be enhanced by
always inspecting a plot of the data.
Parameter selection
Typical parameter settings are m = 1 or 2 and r between 0.1 and 0.25 of the
standard deviation of the time series (Pincus and Goldberger 1994).
However, there are more involved selection criteria that rely on the
calculation of autoregression parameters of the time series (Lake et al. 2002).
Alternatively, Ramdani et al. (2009) proposed to estimate m by plotting
median SampEn values for the time series as a function of different values of
r. The m value at which the SampEn-r curves become similar should be
selected. They also recommended selecting r based on estimates of SampEn
relative error in entropy estimation (Richman and Moorman 2000). The r
value at which relative error is minimal should be chosen for the analysis.
One thing to keep in mind is that these parameters should be kept constant
between all compared conditions once they are selected (Pincus 1991). A
practical piece of advice is to perform the analysis with a range of different r
values to make sure that the results are not just an artefact of parameter
selection.
Data length
The appropriate length of the time series for regularity classification depends
on the quality of the measurements. In some cases, signals as short as 60
points may work (Pincus 2006). Richman and Moorman (2000) suggested
using time series of the order of 100 to 20,000 data points.
There are situations in which the experimenter expects the lengths of the time
series to differ across conditions because of natural differences in
the durations of the measured behaviour. For example, one may be interested in
the complexity of the attacker's movement trajectory between successful and
unsuccessful attacker–defender situations. The duration of the measured time
series will differ from one behavioural sequence to another, owing to a host
of factors. Despite the fact that the length of the time series may distort the
results of SampEn, it is still possible to compare the regularity across these
qualitatively similar conditions. To minimize the effects of data length, it
is advisable to standardize the length of the time series to some reasonable
value (e.g. average length of all recorded time series).3 In other situations, it
may not be possible to use the same strategy, especially when the behaviour
of interest qualitatively changes as a function of measurement length. For
example, comparing the complexity of heart rate between a four-minute and a
ten-minute practice period is not appropriate because the players may be
using different strategies for conserving energy between the two.
Outliers
SampEn is not very susceptible to singular outliers. However, one way
outliers may affect estimation of SampEn is by biasing the value of r. A
typical suggestion is to set r as a percentage of the standard deviation of the
series (e.g. 10–25%). However, the standard deviation (SD) is likely to be
inflated owing to outliers, leading to an increase in r and a consequent
decrease in SampEn. One can remove outliers if appropriate or use the
median instead of the mean for the SD calculation.
Non-stationarity
SampEn is designed to work with stationary data and it is likely to
malfunction when strong non-stationarity is present. Positively establishing
stationarity is tricky in nonlinear systems (Kantz and Schreiber 2004). One
practical solution is to use RQA to check for stationarity prior to SampEn
analysis using the trend parameter. As an example of poor applicability of
SampEn to non-stationary signals, we take the results of Rhea et al. (2011),
who showed that SampEn of non-differenced COP position data decreased
with faster sampling rate whereas differenced, stationary COP signals were
not subject to this artefact. Ramdani et al. (2009) showed that SampEn
discriminated between standing with eyes open compared with closed only
when analyzing the differenced (stationary) COP. Of course, differencing
removes non-stationarity but it should only be done if the differenced signal
is theoretically meaningful as in the case of postural control (Delignières et
al. 2011).
Long-range correlations
A time series has long-range correlations when the autocorrelation function
decays very slowly (more slowly than exponentially) as a function of time lag (Diniz et al. 2011). This
is typical of fractal processes described earlier and in Chapter 7. The presence
of long-range correlations reduces estimates of system complexity provided
by SampEn, potentially biasing the estimate (Govindan et al. 2007; Richman
and Moorman 2000). SampEn assumes that there are only significant lag-1
autocorrelations because the template vectors are created from consecutive
values from the time series. If there are correlations, then the template vectors
need to be defined from non-consecutive values (e.g. take every fifth value)
to minimize the correlations between the template vectors (Govindan et al.
2007). However, for truly long-range correlated signals it is impossible to
find an appropriate lag for creating the vectors, so Govindan et al. (2007) also
suggested differencing the time series and conducting the analysis on the
increments.
Periodic data
The method proposed by Govindan et al. (2007) will be useful for periodic,
continuously sampled data such as breathing and gait kinematics. In such
cases, it is still possible to use SampEn but now we need to introduce a delay
into the definition of the template vectors. Another possibility is to difference
the data as suggested above (Bruce 1996) or define the events of interest and
do the analysis on the differences between these events along the time or
magnitude dimensions. One apparent disadvantage of increasing the delay
time for vector embedding is a reduction in the number of template vectors.
Sampling rate
Time series collected using A/D converters are digitized versions of the
recorded continuous processes. When the resolution of the A/D converter (or
any other measurement procedure) is not fine enough to capture the
continuous dynamics of the phenomenon of interest, SampEn of the system
will become an ordinal variable (Stevens 1946). Consider the time series of
COP velocity presented in the left panel of Figure 5.4. This time series was
obtained by differencing the COP recorded from a person standing on a force
plate sampled at 100 Hz for three seconds. There is a clear discretization
effect such that the values of the velocity vary between a limited number of
states (especially –0.006, 0.003, 0.004, and 0) – this is also known as
quantization error. In such cases, entropy does not change continuously with
r. To illustrate this, we calculated SampEn as a function of r varying between
0.05 and 1 in steps of 0.05 and plotted the SampEn–r curve in the right-hand
panel of Figure 5.4. As expected, increasing the radius decreases SampEn but
there are also apparent plateaus of constant entropy. The effect of
discretization seems to be more apparent in short time series. Plateaus and
discontinuity in the entropy estimates on SampEn–r plots make SampEn an
ordinal dependent variable, with the implication that only non-parametric
statistics should be used for data analysis. Therefore, we recommend
examining how SampEn changes as a function of r in a representative subset
of the data before doing the statistical analysis to establish whether
parametric or non-parametric statistics should be used. If SampEn changes
continuously with r for all experimental time series, then parametric statistics
are appropriate.
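A practical way of running this check is to compute the SampEn–r curve directly, as in the self-contained Python (numpy) sketch below; the compact entropy function mirrors the SampEn sketch given earlier in this chapter, and the white-noise placeholder series stands in for a real COP velocity record.

import numpy as np

def sampen(x, m, r):
    # compact sample entropy (see the earlier sketch): strict 'less than r' matching
    x = np.asarray(x, dtype=float)
    T = lambda L: np.array([x[i:i + L] for i in range(len(x) - m)])
    count = lambda t: (np.sum(np.max(np.abs(t[:, None] - t[None, :]), axis=2) < r) - len(t)) / 2.0
    return -np.log(count(T(m + 1)) / count(T(m)))

x = np.random.randn(1000)                      # placeholder for a real COP velocity series
for r in np.arange(0.05, 1.01, 0.05):
    print(f"r = {r:.2f}   SampEn = {sampen(x, 2, r):.3f}")
# flat segments (plateaus) in this curve, rather than a smooth decrease,
# suggest quantization effects and an ordinal-scaled SampEn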
Figure 5.4 Illustration of the digitization (quantization) effect on the SampEn calculation
Filtering
Filtering usually makes the time series more regular so it will lower SampEn.
As long as all conditions are filtered similarly, this should not affect the
overall pattern of results within a study. But differences in the filtering
procedures need to be considered when making comparisons across studies
(Rhea et al. 2011). In general, a good practice is to filter the signal as little as
possible because excessive filtering may remove the legitimate aspects of the
dynamics in the time series instead of measurement artefacts (Abarbanel
1996).
Implications for sport science and conclusions
The dynamical systems approach and the time-series analyses described in
this chapter can potentially provide useful information about the phenomena
of interest to sport scientists beyond that of the standard measures of central
tendency and variability. Time-series measures take into consideration time-
dependent properties of the data and therefore are sensitive to changes in
evolution of the time series. Our hope is that this will allow researchers to
develop more sensitive measures of performance, more efficient methods of
learning new skills and more effective treatment evaluation protocols for
injuries. Measures of signal complexity such as SampEn are especially
promising because they can be readily applied to short and noisy sequences
of data that are typical of sport science.
Notice that the sample autocorrelation function (.) can be computed for any
time series, even for series that are not realizations of stationary processes.
Some authors suggest that these sample autocorrelations should only be
used if the series length n satisfies n ≥ 50 and the time lag k satisfies k ≤ n/4.
The simplest kind of stationary process is the white noise process. A
stochastic process {Yt, t∈T} is said to be white noise if the random variables
have constant mean, usually equal to zero, constant variance and are
uncorrelated (see also Chapter 5). This implies that the autocorrelation
function Ρ(.) is zero everywhere, except at k = 0, and is therefore defined as
Ρ(0) = 1 and Ρ(k) = 0 for k ≠ 0.
Figure 7.3 Simulated time series of length n = 200 of a white noise (top) and respective sample
autocorrelation function at lags k = 0, …, 50 (bottom) with critical bounds (dashed lines)
Note that the time series fluctuates randomly around the mean and the
sample autocorrelations are very close to zero lying between the critical
bounds.
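A minimal Python (numpy) sketch of this check, computing the sample autocorrelations up to lag 50 for a simulated white-noise series and comparing them with the approximate ±1.96/√n critical bounds, is given below; the function name and the simulated data are illustrative.

import numpy as np

def sample_acf(y, max_lag=50):
    """Sample autocorrelation at lags 0..max_lag."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    y = y - y.mean()
    denom = np.sum(y ** 2)
    return np.array([np.sum(y[:n - k] * y[k:]) / denom for k in range(max_lag + 1)])

y = np.random.randn(200)                 # simulated white noise, as in Figure 7.3
acf = sample_acf(y, max_lag=50)
bound = 1.96 / np.sqrt(len(y))           # approximate 95% critical bounds
print(np.sum(np.abs(acf[1:]) > bound))   # for white noise only a few lags should exceed the bounds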
With respect to the sailing example shown earlier in this chapter, Figure
7.4 presents the time series {Y1,…,Yn} of length n = 579 relative to the
distance from a sailing boat to a starting position. It also provides the sample
autocorrelation function (.) at lags k = 0, …, 350 with the bounds ± 1.960/
√579 = ±0.081.
Observe that the time series has a cyclical feature and the sample
autocorrelations also exhibit a cyclical pattern lying outside the critical
bounds. This implies that the sample autocorrelations are significant and take
positive values at certain lags and negative values at other lags. Although it is
unadvisable to compute the sample autocorrelations at lags larger than n/4,
some of these values are represented here merely to show their periodicity.
If the process {Yt, t∈T} is stationary, then its spectral density function f(.)
has some important properties, such as:
f(λ) ≥ 0, for all λ,
f(−λ) = f(λ), for all λ.
The spectral density function f(.) measures the intensity of each frequency in
the process over a range of frequencies. The larger the value of f(.) at a given
frequency, the larger the intensity (or power) of that frequency.
Given a time series {Y1,…,Yn}, the usual estimator of the spectral density
function f(.) is the normalized periodogram In(.) defined by:
For the sailing example, Figure 7.6 presents the time series {Y1,…,Yn}of
length n = 579 relative to the distance from a sailing boat to a starting
position. It also provides the normalized periodogram In(.) at frequencies λr =
2πr with r = 0, …, 0.5. Observe that the time series has a cyclical feature and
the periodogram ordinates exhibit a high peak at a given frequency. The peak
is significant and occurs at the frequency 0.00518, which implies a period of
1/0.00518 = 193. This means that the series has indeed a cyclical pattern,
repeating itself roughly every 193 time units, i.e. every 7.72 seconds.
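A rough Python (numpy) sketch of this kind of spectral check is shown below: it computes periodogram ordinates with the FFT, locates the dominant peak and converts it into a period. The stand-in sinusoidal series, the 25 Hz sampling rate and the simple 1/n normalization are assumptions made for illustration; they are not the chapter's normalized periodogram In(.) nor the actual sailing data, although the location of the peak does not depend on the normalization.

import numpy as np

def periodogram_peak(y, fs=25.0):
    """Locate the dominant frequency of a time series via the periodogram."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    y = y - y.mean()
    power = np.abs(np.fft.rfft(y)) ** 2 / n       # periodogram ordinates (simple normalization)
    freqs = np.fft.rfftfreq(n, d=1.0)             # frequency in cycles per sample
    k = np.argmax(power[1:]) + 1                  # skip the zero-frequency ordinate
    period_samples = 1.0 / freqs[k]
    return freqs[k], period_samples, period_samples / fs

# stand-in for the sailing series: a noisy oscillation of period 193 samples, n = 579, 25 Hz
y = np.sin(2 * np.pi * np.arange(579) / 193.0) + 0.2 * np.random.randn(579)
f, period, seconds = periodogram_peak(y)
print(f, period, seconds)   # should locate the peak near 0.00518 cycles/sample (193 samples, about 7.7 s)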
Figure 7.6 Exemplar data of length n = 579 of the distance from a sailing boat to a fixed starting
position before the regatta starting (top) and respective normalized periodogram at frequencies λr = 2πr
with r = 0, …, 0.5 (bottom)
Figure 7.7 Simulated time series of length n = 200 of a short-range correlation process (top), its sample
autocorrelation function at lags k = 0, …, 50 (middle) and normalized periodogram at frequencies λr =
2πr with r = 0, …, 0.5 (bottom)
Formally, a stationary process {Yt, t∈T} is said to have long-range
correlation (or long memory) if its autocorrelation function Ρ(.) satisfies the
power law:
Ρ(k) ~ c k^(−(1−2d)), k → ∞,
where c and d are two constants such that c ≠ 0, d ≠ 0, and d < 0.5, and k is
the lag. This means that the function Ρ(.) decays to zero very slowly with a
hyperbolic decay. Moreover, the process is said to have persistent long-range
correlation if 0 < d < 0.5, so that ∑k Ρ(k) = ∞, reflecting the fact that the
remote past has an influence on the present.
In the frequency domain, a long-range correlation process can be defined
as a process whose spectral density function f(.) satisfies the power law:
f(λ) ~ c λ^(−2d), λ → 0,
where c and d are two constants such that c ≠ 0, d ≠ 0, and d < 0.5, and λ is
the frequency. This means that the function f(.) has a pole at zero if 0 < d <
0.5, that is f(0) = ∞, signifying that the low frequencies predominate and
therefore longterm oscillations are expected. These processes whose function
f(.) has the form f(λ) ~ λ −α, where α is a constant, are usually known as 1/f α
noise. This property of the spectral density function f(.) can be used to
distinguish different types of noise in terms of colour: α = 0 (white noise) and
α = 1 (pink noise), amongst others. It is worth mentioning that pink noise has
been found in a number of time series from human movement systems and
some stochastic models have been proposed for explaining these phenomena
(e.g. Diniz et al. 2010, 2011).
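The 1/f^α characterization suggests a simple, if crude, way of estimating α: regress the log periodogram ordinates on log frequency over the low-frequency range. The Python (numpy) sketch below does this; the choice of frequency range and the function name are arbitrary illustrative choices, and more principled estimators (such as DFA, discussed in Chapter 5, or the methods in Palma 2007) are preferable in practice.

import numpy as np

def spectral_exponent(y, low_freq_fraction=0.25):
    """Estimate alpha in f(lambda) ~ lambda^(-alpha) from the low-frequency periodogram."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    power = np.abs(np.fft.rfft(y)) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0)
    keep = (freqs > 0) & (freqs <= low_freq_fraction * freqs.max())
    slope = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)[0]
    return -slope        # alpha near 0 for white noise, near 1 for pink noise

print(spectral_exponent(np.random.randn(4096)))              # near 0 (white noise)
print(spectral_exponent(np.cumsum(np.random.randn(4096))))   # near 2 (brown noise)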
Figure 7.8 represents a realization {Y1,…,Yn} of length n = 400 of a long-
range correlation process. It also shows the sample autocorrelation function
(.) at lags k = 0, …, 50 and the normalized periodogram In(.) at frequencies λr
= 2πr with r = 0, …, 0.5. It is clear that the time series exhibits long non-
periodic oscillations around the mean, the sample autocorrelation function
has large and positive values with hyperbolic decay and the normalized
periodogram has larger values at low frequencies (which explains the series
fluctuations).
For distinguishing between short- and long-range correlations, the length
of the time series is an important consideration. In fact, a realization of a stationary
process with long-range correlation can easily be mistaken for a realization of
a non-stationary process if the series is short. The statistical
discrimination between short- and long-range correlations can be done using
several tests, such as the rescaled range method (R/S), the detrended
fluctuation analysis (DFA) and others (e.g. Palma 2007; but see Chapter 5).
With the sailing example, we illustrated how the distance between a
sailing boat and an optimal point for the start of the regatta can be analyzed in
terms of its time structure. In a sense, space was analyzed as a function of
time. It is also possible to analyze changes in the spatial structure across time.
After presenting some bases for performing time series analysis, we then
describe a method for performing spatial patterns analysis – Voronoi
diagrams – which is starting to be used in sport sciences.
Voronoi-based models for team sports analysis
Studying spatial patterns formed by a group of individuals gives us some
insight into the understanding of interactive behaviour, in both the individual
and collective dimensions. For example, spatial patterns characterized by
large distances between all individuals will indicate some source of
inhibition, whereas small distances will indicate some source of attraction. In
addition, larger interpersonal distances associated with a single individual
may reflect some source of avoidance. In the sports context, spatial pattern
analysis is thought to be a useful approach for characterizing and evaluating
players' space management, which is associated with patterns of interacting
behaviour.
Figure 7.8 Simulated time series of length n = 400 of a long-range correlation process (top), its sample
autocorrelation function at lags k = 0, …, 50 (middle) and normalized periodogram at frequencies λr =
2πr with r = 0, …, 0.5 (bottom)
Figure 7.9 Example of a set of points in a plane (A) and corresponding Voronoi diagram (B)
This spatial construction has already been suggested by other authors for
studying players' spatial distribution in team sports, having been applied in a
variety of game settings, namely, electronic soccer games (Kim 2004),
robotic soccer (Law 2005), on-field hockey games (Fujimura and Sugihara
2005) and on-field soccer games (Taki et al. 1996). The Voronoi cells that
define the individual dominant region of each player (Taki et al. 1996;
Fujimura and Sugihara 2005) change continuously over time, owing to
continuous adjustments of the players' positions, which implies permanent
changes in the global spatial configuration. Thus, for an adequate application
of such markers in any team sports, these areas should be analyzed
throughout the duration of the game or trial. For instance, in the work by Kim
(2004), the spatial markers considered were area and number of vertices of
each Voronoi region; however, these were averaged, eliminating the temporal
component of the phenomenon, which in this particular study limits the
understanding and explanation of players' spatial relation.
The models described here allow the study of performance at different
levels of analysis: model 1 – player and team individual behaviour; and
model 2 – intra- and interteam interaction behaviour. Here, we present some
examples that, with some adaptations, can be applied to other sports. A
particular aspect to be considered in such an adaptation is the number of
players involved and the field dimensions. Codes are available upon request
from the lead author of this chapter.
where I(i,j) is a Boolean function that takes value 1 if player k is the closest
player to the grid position (i,j) and 0 otherwise; the area of player k's region is
then obtained by summing I(i,j) over all grid points.
Figure 7.10 Spatial configurations: (A) attacker (grey areas) and defender (white areas) teams and; (B)
attacker player breaking defence organization; the arrow indicates the direction of the attack
Grid points that are equidistant between two or more players constitute the
boundaries of their respective regions and therefore are not added to the
corresponding areas.
The calculated areas can be used to investigate how the size of the Voronoi
cells changes over time for each team and/or for each player and related to
specific phases, events and/or characteristics of the game.
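A grid-based Python (numpy) sketch of this dominant-region calculation is given below: every grid point is assigned to its nearest player, as the Boolean indicator I(i,j) describes, and a player's area is the count of assigned points multiplied by the grid-cell area. The pitch dimensions (a 40 m by 20 m futsal court), the 0.1 m grid resolution and the random player positions are illustrative assumptions, and ties at equidistant grid points are here assigned to the lower-indexed player rather than excluded.

import numpy as np

def voronoi_areas(players_xy, pitch_length=40.0, pitch_width=20.0, cell=0.1):
    """Approximate each player's Voronoi (dominant-region) area on a grid."""
    players_xy = np.asarray(players_xy, dtype=float)       # shape (n_players, 2)
    xs = np.arange(0.0, pitch_length, cell) + cell / 2.0
    ys = np.arange(0.0, pitch_width, cell) + cell / 2.0
    gx, gy = np.meshgrid(xs, ys)
    grid = np.column_stack([gx.ravel(), gy.ravel()])        # grid-point coordinates
    # distance of every grid point to every player
    d = np.linalg.norm(grid[:, None, :] - players_xy[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)                          # index of the closest player
    counts = np.bincount(nearest, minlength=len(players_xy))
    return counts * cell ** 2                               # area in square metres

# ten players (two futsal teams) placed at random on a 40 x 20 m pitch, for illustration
rng = np.random.default_rng(1)
positions = np.column_stack([rng.uniform(0, 40, 10), rng.uniform(0, 20, 10)])
print(voronoi_areas(positions).round(1))    # one area per player; they sum to about 800 square metres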
The model described was applied to data from futsal (Fonseca et al.
2012b). We considered 19 trials from five versus four plus goalkeeper plays,
all starting with similar conditions and each ending when the attack lost ball
possession. On average, plays lasted 848 (± 374) frames (corresponding to
approximately 0.57 (± 0.249) minutes), a minimum of 315 and maximum of
1558 frames (approximately 0.21 and 1.04 minutes, respectively). The main
results are presented below.
With model 1, it was possible to verify the following: (1) on average and
for all plays, the attacker team had Voronoi regions with larger areas in
comparison with those defined by the defender team (Figure 7.11); (2) the
area of these regions was more variable for the attacker team, which
presented, in each frame and across the duration of all trials, a larger standard
deviation of the Voronoi area (Figure 7.11). In addition, (3) the spatial
behaviour of the attacking players was more stable during each trial,
presenting Voronoi areas with smaller approximate entropy (see Chapter 5;
see also Fonseca et al. (2012a) for criteria for normalizing time series for
performing entropy analysis), as illustrated in Figure 7.12.
It is also interesting to study intra- and interteam behaviour, as we next
describe.
Figure 7.11 Example (one play) of the mean Voronoi area (VA) across time for each team; error bars
represent the standard deviation (adapted from Fonseca et al. 2011a)
Figure 7.12 Comparison of the mean entropy of Voronoi area (VA) between teams in the same trial;
error bars represent the standard deviation (*** P < 0.001); adapted from Fonseca et al. (2011a)
Figure 7.13 Construction of the superimposed Voronoi diagram (bottom) from considering, separately,
the Voronoi diagrams for team A (black dots) and team B (white dots)
Figure 7.15 Measures from the superimposed Voronoi diagram: (A) maximum percentage of
overlapped area for each individual of the group marked with black dots; and (B) percentage of free
area (in black)
The max%OA is calculated for each player and represents the maximum
percentage of the Voronoi cell that is covered by the cell of an opponent; the
smaller this measure the greater the number of opponents in the
neighbourhood (Figure 7.15A). The %FA summarizes the degree of
similarity between the superimposed Voronoi diagrams and is calculated by
subtracting from the total playing area the sum of the max%OA values calculated for one of the
groups (Figure 7.15B).
The %FA is inversely proportional to the degree of agreement between the
spatial configurations of both groups and therefore can be used to analyze
and characterize the interaction behaviour between them. This construction is
of particular interest for studying interaction behaviour in team sports, where
the two groups correspond to two competing teams.
Consequently, in team sports, when analyzing the spatial distribution of
two opponent teams playing in a well-defined area of known dimensions, it is
possible to consider reference values associated with two specific spatial
relations: i) when each player of the defender team assumes an exclusive
pairing with an opponent, which can be linked to a man-to-man defensive
method; and ii) when players from both teams are randomly distributed in the
playing area, which, while it may not appear to be a reasonable assumption for
players' behaviour, provides a reference for the interteam spatial arrangement
when the location of each player is chosen regardless of the locations of the
others. To exemplify this, model 2 was applied to data from futsal (Fonseca
et al. 2011). We considered 19 trials from five versus four plus goalkeeper
plays, all starting with similar conditions and each ending when the attack
lost ball possession.
Reference values for this specific setting, ten players in a play area of 20
m2, were generated for randomly distributed players and exclusively paired
at different interpersonal (inter-pair) distances. For situations where players
are exclusively paired with an opponent, the percentage of free area increases
as the maximum distance allowed between the pairs increases. The lower and
upper reference values obtained for complete spatial randomness (CSR), i.e.
when all players are randomly allocated in space, were 0.22 and 0.50,
corresponding to the lower and upper limits of a 95% confidence envelope
for the %FA, [%FA25:1000, %FA975:1000],
where %FA25:1000 and %FA975:1000 represent the 25th and 975th ordered values,
out of 1000, of the %FA obtained in simulated patterns of CSR.
Based on these reference values, it is possible to classify at each frame of a
trial how players are spatially distributed; in particular, observed values
below the lower limit of the 95% confidence envelopes for CSR suggest that
the defender team is likely to be applying a man-to-man defence method
(exclusive pairing). Figure 7.16 shows the %FA observed across the duration
of one trial in this study.
This spatial analysis of futsal players' collective behaviour in a five versus
four plus one played in midfield suggests that players are not adopting a
man-to-man defence strategy, which is what was expected, given the setting of
this study. It also indicates that the zone defence method used is spatially close to
an absence of interaction between players, as the observed value of %FA is,
during most of each trial, within the corresponding CSR bands, [0.22, 0.50].
Figure 7.16 Example (one play) of the observed percentage of free area (%FA) across time (solid line)
and the 95% confidence interval for spatial random distribution (dashed lines)
Conclusion
In sport, space and time are key parameters to consider if individual or
collective behaviours are to be understood. However, space and time should
not be considered in abstract terms but embedded in the variables that capture
the interaction between the performer and the environment. In this chapter,
we have presented ways in which such analysis could be performed. First, we
described how time series could be analyzed in the time domain and in the
frequency domain. Moreover, we argued that the analysis of spatial patterns
formed by a group of individuals could give insights about individual and
collective behaviours. For this type of analysis, we presented a technique
called the Voronoi diagrams, a two-dimensional spatial decomposition of
geometrical space. The techniques described focus on the individual
behaviour of the player and team and on the intra- and interteam interaction
behaviour. These are possibilities that open new ways to understand the
complexity of sport behaviour.
Exploratory approaches
One of the first exploratory applications of cluster analysis in the context of
movement pattern analysis in sports was provided by Müller (1986). Several
kinematic variable time courses and muscular activities were measured
during the demonstration of different skiing techniques for downhill skiers.
Discrete biomechanical parameters were extracted and classified by means of
a cluster analysis. It was found that irrespective of the snow and slope
conditions, the techniques could be classified as similar.
An explorative cluster analysis on the basis of time-continuous variables in
discus-throwing movements during practice of a single high-performance
athlete provided evidence for a successful learning process that led to
lasting qualitative changes of the throwing technique (Schöllhorn
1993). Eight discus-throwing trials before and after a biomechanical feedback
intervention were described by means of the time courses of 40 joint angles
and angular velocities during the final throwing phase. Three trials were
performed within one competition before the intervention and five trials
during different competitions following the intervention. For data reduction,
these high-dimensional data were factor analyzed. The resulting factor-
loading matrices were compared by means of a structure comparison
algorithm, which led to a distance matrix. The subsequent cluster analysis
(Figure 8.1) clearly separated three trials (T791–T793) before a specific
training intervention from five trials (T84–T88) performed after the
biomechanical feedback. Figure 8.1 displays exemplarily the history of a
hierarchical cluster analysis applied to eight discus throws during a learning
process. The cluster analysis begins with determining the distances between
all trials and is followed by determining the two most similar (smallest
distance) trials (T791 and T793). The clustered trials are then considered as a
single new trial. Subsequently, the next two trials with the smallest distance
are clustered together and so on until all trials are clustered together. The
cluster analysis of the same data revealed similar results when data reduction was
performed by means of orthogonal reference functions (Schöllhorn 1995).
Figure 8.1 Clustering dendrogram resulting from an average linkage algorithm; horizontal lines
indicate the level of the rescaled distance at which the respective movements are grouped into one
cluster
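For readers wanting to reproduce this kind of analysis, an average-linkage hierarchical clustering of a trials-by-features matrix can be run in a few lines of Python with scipy, as sketched below; the random 'pre' and 'post' feature matrix is a placeholder standing in for the reduced variables (e.g. factor loadings) used by Schöllhorn (1993), not the original data.

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
from scipy.spatial.distance import pdist

# placeholder: 8 trials described by a reduced set of features
rng = np.random.default_rng(0)
trials = np.vstack([rng.normal(0.0, 1.0, (3, 5)),     # three pre-intervention trials
                    rng.normal(3.0, 1.0, (5, 5))])    # five post-intervention trials
labels = ['T791', 'T792', 'T793', 'T84', 'T85', 'T86', 'T87', 'T88']

distances = pdist(trials, metric='euclidean')          # condensed distance matrix
Z = linkage(distances, method='average')               # average-linkage clustering
clusters = fcluster(Z, t=2, criterion='maxclust')      # two-cluster solution
print(dict(zip(labels, clusters)))                     # pre and post trials should separate
# dendrogram(Z, labels=labels) would draw the clustering history as in Figure 8.1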
Confirmatory approaches
A confirmatory approach of cluster analysis with a single discrete duration
parameter was examined by Lames (1992). Using ideas from dynamical
systems theory and with a focus on investigating the hysteresis effects (see
Kelso 1995), the duration of the driving and pitching movement of several
golfers was measured for different distances. In order to initiate a hysteresis
effect, the golfers performed shots from 100 metres first, then decreasing
down to ten metres in five-metre steps and then increasing back to 100
metres. The subsequent cluster analysis of the movement durations led to
two, three, four or five clusters, of which only the three-cluster solution
could be interpreted plausibly in accordance with the assumed hysteresis
phenomenon.
In a more recent study by Chow et al. (2008), which examined
coordination changes of novice participants in a football (soccer)-kicking
task, cluster analysis was also effectively used to determine intra-individual
differences in kicking patterns. Key kinematic data of various joint motions
were captured over a four-week intervention period (40 trials per session over
12 sessions) and the kinematic data were used as input for a cluster analysis
procedure. Differences in coordination patterns within individuals for each
session were effectively determined and the information provided valuable
insights about movement pattern variability, as well as the presence of
preferred kicking patterns. Such information is critical in helping
researchers to understand the learning processes and to investigate
nonlinearity in learning evidenced by sudden transitions from one movement
pattern to another, as well as the emergence of pattern variability prior to a
transition.
Another confirmatory application of cluster analysis was performed by
Ball and Best (2007) to determine the presence of weight transfer for two
styles of golf swing. Sixty-two golfers, from professionals to high-handicap
players, performed simulated drives, hitting a golf ball into a net. While
standing on two force plates, the centre of pressure position relative to the
feet was quantified at eight swing events identified from a 200-Hz video.
Cluster analysis on the basis of these time-discrete parameters revealed two
major styles of golf swing: a front-foot style and a reverse style.
Nevertheless, validation procedures were required.
Validation of clusters
As cluster techniques will always identify groups of data depending on the
identification parameters, it is important to consider additional procedures to
validate them. For supporting and providing the statistical proof of the
resulting clusters, different approaches have been suggested. According to
Handl et al. (2005), cluster validation measures can be distinguished into
internal and external measures:
External validation measures comprise all those methods that evaluate a
clustering result based on the knowledge of the correct class labels . . .
Internal validation techniques do not use additional knowledge in the
form of class labels, but base their quality estimate on the information
intrinsic to the data alone.
(Handl et al. 2005: 3203)
Extending the work of Lames (1992), Rein et al. (2010b)
applied an internal validation approach to their cluster analysis of basketball
shooting. They investigated the phenomenon of phase transitions in basketball hook shots with
decreasing and increasing distances from nine metres to two metres and back
to nine metres, in one-metre increments. The input
variables for the cluster analysis were 12 angle variables derived from a 13-
segment, rigid, three-dimensional body model. The clusters were interpreted
in the terminology of systems dynamics as attractors with certain criteria.
Only two of eight participants showed a clear expected phase transition
behaviour. Importantly, in this preliminary study, it was possible to identify
three distinctive shooting patterns with varying frequencies at different
shooting distances.
As three different shooting patterns had been previously established, the
external validation procedures were adopted by Rein et al. (2010a). Two
studies in basketball served for testing the sensitivity of cluster analysis to
pre-processing and for testing the phenomenon of phase transitioning in
hook-shot technique. For the first experiment, four professional basketball
players had to throw from three different distances with three different
techniques. To assess the impact of data normalization, the same analysis was
performed with z-transformed (mean = 0, standard deviation = 1) and raw
data. Both sets were validated by means of bootstrapping and the Hubert–Gamma
method. Overall, in the first experiment, the cluster analyses led to ‘entirely
feasible’ results and were able to reproduce a priori known differences
between diverse movement patterns. In the second validation study, two
basketball players were instructed to shoot baskets by means of a hook-shot
technique from distances between two to nine metres. In contrast to Rein et
al. (2010b), the task was constrained by a lowered ceiling to force flight curves of
the ball that are dominated more by the velocity of release than by the angle of
release, with the aim of causing a phase transition in the movement pattern
with increasing or decreasing distance. Only one participant showed strong
indications of the use of two distinct patterns whereas another participant
displayed ‘distinctively fewer differences’ as shooting distance was
manipulated. Bootstrapping and Hubert-Gamma values showed that a
validation procedure is necessary for the confirmatory approach of cluster
analysis.
The external validation approach assesses the probability of ending up with
certain clusters relative to arbitrary or random data. The internal approach
compares the variation within a cluster relative to the variation between
clusters (Bauer and Schöllhorn 1997). Both approaches demonstrate the
problem and importance of variable selection and preparation that is known
in the context of analyzing complex self-organizing systems (Haken et al.
1995). According to Haken et al. (1995), the selection of collective variables
or order parameters is highly dependent on the investigator's intuition. The
problem seems to become even greater with increasing complexity of the
movement task and the possibilities for compensation. As yet, there are no
general rules for the choice of variables or for the
preparation of the data before cluster analysis.
In summary, cluster analysis provides a powerful dimension reduction tool
which can serve different purposes depending on the nature (i.e. explorative
or confirmatory) of the research undertaken. In contrast to cluster analysis,
ANNs have the ability to separate two classes nonlinearly (Figure 8.2) and
therefore lead to higher recognition rates and potentially more flexibility in
their use (Haykin 1994; Schöllhorn and Jäger 2006). As we discuss in the
next section, ANNs can be administered independently or in combination
with cluster analysis.
Self-organizing maps
SOMs, also known as Kohonen maps, are a specific type of ANN that can be
used to mathematically model specific characteristics of neuronal cell
assemblies. In contrast to supervised learning ANNs such as multi-layer-
perceptrons (MLPs), SOMs are trained using unsupervised learning to
typically produce a two-dimensional discrete representation of the input
space of the training samples, called a map. A big advantage of SOMs is the
way in which they capture low-dimensional views of high-dimensional data,
akin to multidimensional scaling. SOMs are different from other ANNs
because of their usage of a neighbourhood function that preserves the
topological properties of the input space. Similar to most ANNs, SOMs
operate in two phases. In the first phase, the map is trained using input
examples. The second phase, called mapping, automatically classifies a new
input vector. The competitive working process is also called vector
quantization. Components of a SOM are called nodes or neurons that are
associated with a weight vector of the same dimension as the input data
vectors and a position in the map space. The usual arrangement of nodes is a
hexagonal or rectangular grid. The procedure for placing a vector from input
data space on to the map is to first find the node with the closest weight
vector to the vector taken from input data space. Once the closest neuron is
located, it is assigned the values from the vectors taken from the input data
space. Mathematically, SOMs are sometimes associated with nonlinear forms
of principal component analysis.
Figure 8.2 Linear versus nonlinear separation
Single-camera recordings
An important methodological issue of the present method is that camera
positioning does not need to be perpendicular to the field of performance;
i.e. the camera lens does not need to be perpendicular in relation to the pitch
plane of motion. Particularly in this experimental task, participants’ on-field
movement displacements were recorded using a regular digital video camera
statically positioned at approximately 30 metres from the pitch, to capture the
whole playing area. Owing to the facilities available in the stadium, the
camera was placed five metres above the ground, perpendicular to the
longitudinal component of the pitch and with an angle of depression of
approximately ten degrees. However, this method can be used with other
camera placements and specifications without losing accuracy (e.g. Duarte et
al. 2010b; Correia et al. 2012; Travassos et al. 2012). Before the beginning of
the experiment, several non-collinear control points corresponding to specific
landmarks visible in the video camera were measured for later calibrations
(see the camera calibration and object-plane reconstruction section below).
Previous studies showed that seven control points were sufficient to obtain
adequate accuracy levels in movement data (Fernandes et al. 2007).
Figure 9.1 Schematic representation of single-camera video motion capture
Image treatment
Video-recorded images of the small-sided game were transferred to digital
support, coded and saved as ‘.avi’ format and goal-scoring opportunities
identified throughout the game. Of the 21 goal-scoring opportunities that
occurred during the game, only ten plays were analyzed further, in which: (i)
the ball was not projected into an aerial trajectory; and (ii) there were no
changes in ball possession between teams. The video recordings of these
situations were split into separate video files, kept as small as possible to
improve computational performance during the extraction of positional
variables and subsequent analysis.
The software package TACTO 8.0 (Figure 9.2; Fernandes et al. 2010) was
used to extract the virtual positional coordinates (measured in pixel units)
from participants' movement displacement trajectories. The procedure
consisted of following with a computer mouse cursor the middle point
located between the feet of each participant (chosen as working point). This
working point was used because it represents an estimate of the projection of
the player's centre of gravity on the pitch (Duarte et al. 2010a). Data were
obtained at 25 Hz as recommended by Fernandes and Caixinha (2004). The
TACTO package was also used to assess the virtual coordinates of the seven
control points previously selected that afterwards were used for calibration.
During these procedures, the computer resolution was set at 1280 × 800
pixels and the TACTO 8.0 window was fixed on a permanent position on
screen. It is necessary to keep these procedures unchanged during the
digitization of all trials to avoid improper data transformation owing to
changes in the image plane reference frame.
Figure 9.2 The TACTO 8.0 device window; manual tracking of a selected working point with a
computer mouse allows virtual coordinates of the tracked player/object to be obtained
Figure 9.3 The direct linear transformation (2D-DLT) method for camera calibration and bi-
dimensional reconstruction
Figure 9.4 Converted pitch coordinates (metres) allow the reproduction of movement displacement
trajectories of players in the space of action
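A minimal Python (numpy) sketch of the 2D-DLT step shown in Figure 9.3 is given below: the eight parameters of the plane-to-plane projective mapping are estimated by least squares from the digitized image coordinates of the control points and their known pitch coordinates, and are then applied to any digitized point. The seven control-point correspondences listed here are invented placeholders, not the calibration values used in the study.

import numpy as np

def dlt2d_fit(image_pts, pitch_pts):
    """Estimate the eight 2D-DLT parameters from control-point correspondences."""
    rows, rhs = [], []
    for (u, v), (X, Y) in zip(image_pts, pitch_pts):
        rows.append([u, v, 1, 0, 0, 0, -u * X, -v * X]); rhs.append(X)
        rows.append([0, 0, 0, u, v, 1, -u * Y, -v * Y]); rhs.append(Y)
    params, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)
    return params

def dlt2d_apply(params, image_pts):
    """Convert digitized image coordinates (pixels) into pitch coordinates (metres)."""
    a, b, c, d, e, f, g, h = params
    u, v = np.asarray(image_pts, dtype=float).T
    denom = g * u + h * v + 1.0
    return np.column_stack([(a * u + b * v + c) / denom, (d * u + e * v + f) / denom])

# placeholder correspondences: pixel coordinates of seven control points and their pitch positions
image = [(102, 640), (580, 655), (1130, 648), (140, 330), (620, 340), (1080, 335), (615, 120)]
pitch = [(0, 0), (20, 0), (40, 0), (0, 10), (20, 10), (40, 10), (20, 20)]
P = dlt2d_fit(image, pitch)
print(dlt2d_apply(P, [(620, 340)]))   # estimated pitch position (metres) of a digitized point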
Accuracy
To test the accuracy and validity of the instrument for the specific task under
analysis, we used the procedures suggested by Fernandes and Caixinha
(2004). We compared the distances obtained from digitization of a predefined
path of a player with the known real distances (i.e. a circuit on the
pitch with changing directions). The mean absolute percentage error showed
values less than five per cent for all the measurements, which were taken as
indicative of high accuracy (Fernandes and Caixinha 2004).
Variables computation
For the study of team sports as multi-agent dynamical systems, there are
some compound-motion variables that might capture the complex dynamics
of players' interpersonal interactions during competitive performance (see
Chapter 6). With the individual kinematic data of each player obtained by the
aforementioned procedures it is possible to obtain specific compound motion
variables. A previous study identified a high coupling tendency between the
movements of attacking and defending players in the three-versus-three sub-
phase of association football (Duarte et al. 2012b). Thus, based on players’
positional data, a single centroid value (i.e. the geometrical centre) for the six
outfield players was calculated. The centroid or group ‘centre of mass’ was
calculated as the mean position of the six outfield players over time
(Frencken et al. 2011). Next, using the longitudinal component of motion, we
calculated for each time instant the smallest distance of this centroid to the
defensive line (i.e. the boundary line between the central performance space
and the scoring area, see Figure 9.1) as a potential order parameter.
The inter-centroid distance (i.e. distance between the centroid of the
attacking and defending players; Folgado et al. 2012) and the relative stretch
index (i.e. the mean vectorial distances of players to their team centroid;
adapted from Bourbousson et al. 2010) were tested as control parameter
candidate variables. As suggested by Frencken et al. (2011) the inter-centroid
distances tend to decrease immediately before the creation of a shooting
opportunity, suggesting that the closeness of the two sub-groups of players
can influence the stability of the competing team relations. Our purpose was
also to assess whether differences in the relative stretch index of teams
influenced the stability of the relative positioning in reference to the goals.
Purpose-written MATLAB routines were used to compute these time-series
data.
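A Python (numpy) sketch of these compound variables is given below; the array shapes, the random placeholder positions and the assumed longitudinal coordinate of the defensive line are illustrative, and the original computations were carried out in MATLAB as noted above.

import numpy as np

def team_centroid(xy):
    """Mean position of a group of players at each time instant; xy has shape (n_frames, n_players, 2)."""
    return xy.mean(axis=1)

def stretch_index(xy):
    """Mean distance of the players to their team centroid at each instant."""
    centroid = team_centroid(xy)[:, None, :]
    return np.linalg.norm(xy - centroid, axis=2).mean(axis=1)

def inter_centroid_distance(xy_a, xy_b):
    """Distance between the two team centroids at each instant."""
    return np.linalg.norm(team_centroid(xy_a) - team_centroid(xy_b), axis=1)

# placeholder positional data: 25 Hz, 10 s, two teams of three outfield players
frames = 250
attackers = np.random.rand(frames, 3, 2) * [30.0, 20.0]
defenders = np.random.rand(frames, 3, 2) * [30.0, 20.0]

defensive_line_x = 30.0      # assumed longitudinal coordinate of the defensive line
centroid_all = team_centroid(np.concatenate([attackers, defenders], axis=1))
dist_to_line = defensive_line_x - centroid_all[:, 0]      # candidate order parameter
print(dist_to_line[:5])
print(stretch_index(attackers)[:5])
print(inter_centroid_distance(attackers, defenders)[:5])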
Illustrative data on pattern-forming dynamics in association
football
The exploratory analyses of the centroid distances to the defensive line
suggested the existence of two behavioural states, before and after the instant
of an assisting pass (i.e. the last pass before a shot occurred; see top panel of
Figure 9.5).
Using normalization procedures that did not alter the structure of the
time-series data, we subtracted the mean of the order parameter from each of its
values (Rosenblum et al. 2001). This procedure allowed us to find a mean
point between the two states corresponding to values of zero (see bottom
panel of Figure 9.5). Thus, the first state displayed positive values of the
order parameter corresponding to the initial stable relations between teams
(i.e. where the defending sub-group successfully interacted together to
prevent shooting opportunities). The second state displayed negative values
of the order parameter that emerged from an arrangement of interpersonal
relationships that led to defensive system instabilities and goal-scoring
opportunities. The qualitative changes between the two identified states
observed in the bottom panel of Figure 9.5 by the shift from positive to
negative values suggested that the emergence of instabilities within these
small-groups (i.e. changes in their relative positioning leading to penetrations
in the scoring areas) may be characterized by some nonlinear properties such
as order–order transition (Kugler and Turvey 1987).
Analysis of first derivative values of the order parameter also revealed a
high rate of change at the moment prior to the appearance of system
instabilities (Figure 9.6). These results reinforced the idea that transitions
between the two identified states occurred suddenly from one state to
another, and not by a cumulative linear process.
Figure 9.5 The two behavioural states of the order parameter; top panel shows the distance of the single
centroid to the defensive boundary line of all the analyzed trials, synchronized by the assistance pass
instant; bottom panel shows the order parameter with its own mean subtracted to highlight the qualitative
changes associated with perturbations of the initial stability of teams (see text for details)
Figure 10.2 A schematic representation of what the ball-carrying participant could see in front of
him/her (i.e. the defensive line) in a virtual environment simulated three-versus-three rugby task
Implications of virtual environments for sport performance
research and practice organization
There is still a need to bridge the natural–virtual context gap. For instance,
descriptive or experimental investigations undertaken in the field of
performance may provide evidence of how the key constraints in natural
contexts influence behaviour. By way of virtual contexts (use of immersive
and interactive virtual reality technology), a coach, a psychologist or
player/athlete may reproduce and experience those performance challenges
and thus enhance not only the diagnosis of performance but also the
subsequent intervention (Craig et al. 2011; Watson et al. 2011). Virtual
environments could be helpful in off-field training in cases of injury, to
increase training load without significant increases in fatigue or even when
there are no facilities available for training because of weather conditions.
Key constraints on performers' behavioural events and outcomes found in
natural contexts of performance, such as interpersonal distance and relative
velocity (Duarte et al. 2010; Passos et al. 2008) and initial interpersonal distance
(Correia et al. 2012a), could be reproduced and manipulated in virtual
contexts. This potentially facilitates practising off the field some aspects of
performance which, for various reasons (e.g. the risk of being tackled in the
natural context restricts the number of trials that can be performed without
injury and excessive fatigue), cannot be performed with such a high
frequency on the field.
In tasks designed for both research and practice, the possibility of actually
intercepting or running through spaces, for example, must be provided to the
participants. Designed tasks must also contain objects in the virtual
environment context with properties that are relevant to an individual's
purpose within the task (i.e. that afford desired skills and adaptive and
functional behaviours).
As mentioned before, the types of cross-modal interactions that take place
during direct perception in the natural context should also be exploited.
Moreover, to our knowledge, immersive and interactive virtual environment
tasks applied to sports research have focused on the study of one participant
interacting with an a priori, entirely controlled simulated performance
context (including the behaviour of other players). Taking advantage of
advances in technology, such as tracking systems and vision display devices,
further research could use more elaborate virtual worlds and also
investigate the interpersonal coordination between players and teams by
immersing more than one participant in the simulated sport task, i.e.
investigating interactions between teammates or opposing players facing
pre-established movements of the other virtual players involved. Although
this might not be possible at the moment, it is certainly being worked on
(e.g. even a simple Kinect® or Nintendo Wii® allows interaction between
players/participants) and this is of great interest for furthering this
area of research.
Considering that the dynamics of the individual–environment relationship
are scaled to the individual (Mark 1987; Oudejans et al. 1996; Warren 1984;
Warren and Whang 1987; Turvey 1999), we may incorporate these
characteristics within virtual environments. For instance, when catching a
ball in a virtual context, the participant could see a longer arm or a faster arm
movement, and participants' height or jumping reach could be changed. This
would not be possible except through virtual means.
Conclusion
The potential for studying behaviour in sport through virtual environment
technology has been outlined. One of the main advantages it offers is to
experimentally control the information available and examine how this
information guides action in sport. Although such technologies are not yet
widely accessible, large displays and body-based interactive devices are
becoming increasingly advanced and accessible (e.g. Kinect®, Nintendo
Wii®), and immersive and interactive sport situations will be easier to
simulate while maintaining their functional fidelity.
DyCoN-approach
Because SOM training is controlled by an external algorithm whose
parameters run down to final values and so eventually end the training
process, a SOM that has been trained once cannot be reactivated for further
training. Continuing training would therefore require a complete
rearrangement of the net and all controlling parameters, which cannot be
done satisfactorily.
The DyCoN concept, however, is different: each neuron contains an
internal memory and a self-controlling algorithm. The effect of the individual
neural self-control is that a DyCoN has no final state but it can always adjust
its internal memory to new input and can therefore learn continuously as well
as in separate phases (see Perl 2002a; Perl and Dauscher 2006).
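The contrast can be illustrated with a toy update rule. In a classical SOM the learning rate is driven towards zero by an external schedule, after which the net is effectively frozen; in a DyCoN-like scheme each neuron keeps its own adaptive state and therefore remains trainable. The sketch below is a deliberate simplification, not Perl's DyCoN implementation; neighbourhood updates are omitted and only the best-matching unit (bmu) is adjusted:

```python
import numpy as np

def som_step(weights, x, bmu, t, t_max, eta0=0.5):
    """Classical SOM update: the learning rate decays on an external,
    global schedule and eventually runs down, ending the training process."""
    eta = eta0 * (1.0 - t / t_max)           # externally controlled decay
    weights[bmu] += eta * (x - weights[bmu])
    return weights

def dycon_like_step(weights, local_rate, x, bmu, gain=0.1):
    """DyCoN-flavoured update (simplified): each neuron carries an internal
    'memory' (here, a per-neuron learning rate) that is adjusted from the
    mismatch with new input, so learning can continue in separate phases."""
    error = np.linalg.norm(x - weights[bmu])
    # Self-control: the neuron raises its own rate when input is surprising
    # and lowers it when the input is already well represented.
    local_rate[bmu] = (1 - gain) * local_rate[bmu] + gain * np.tanh(error)
    weights[bmu] += local_rate[bmu] * (x - weights[bmu])
    return weights, local_rate
```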
Figure 11.2 Network with a process trajectory and the corresponding type profile
Figure 11.3 (left): TeSSy input interface with video control (bottom), animation interface (left),
attribute selection and input panel (top); (right): examples of a stroke frequency matrix and a stroke
success matrix representing tactical concepts and technical skills, respectively
Game complexity: from simple to complex game dynamics
The first approaches to video-based game data recording and analysis date
back to the late 1970s. While many modern concepts and methods of game
analysis had already been developed, the weakness was – and still is – the
data extraction process, which normally had to be done manually. Thus, the
first case studies were restricted to structurally simple two-person games such
as tennis or squash, later followed by games like volleyball, in which the
teams are separated, acting like (abstract) super-players.
During the past five years or so, the combination of automatic position data
extraction and net-based process pattern analyses has made great progress,
with analyses of complex game processes such as those from handball,
basketball or even soccer (Perl and Memmert 2011).
Tennis
The first computer-based tennis analyses were restricted to strokes,
which were simply described by their ‘from’ and ‘to’ positions.
Figure 11.3 shows the interface of TESSY® (Razorcat Development GmbH,
Berlin), a video- and computer-based tennis analysis tool that was developed
during the 1990s, offering a collection of complex game-process analysis
features (Mussel et al. 2001).
TESSY enabled statistical evaluations of game situations and stroke
sequences, including stroke combinations within single rallies. Moreover,
tactical patterns, e.g. position clouds or stroke bundles, could be presented
graphically. Finally, simulations of games and their tactical concepts were
possible, as is shown in Figure 11.3. A player's technical skills (against a
fixed opponent or as mean values) can be characterized by a matrix of
success values of those ‘from–to’ strokes, while tactical concepts are
represented by a similar matrix containing the frequencies of strokes.
Obviously, as was in fact done in practice, such matrices can be used to
simulate the effects of tactical concepts and technical skills by just changing
the corresponding matrix values and analysing the changing game dynamics.
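For illustration, a stroke can be coded by its ‘from’ and ‘to’ court cells, so that a tactical concept becomes a matrix of from–to frequencies and technical skill a matching matrix of success probabilities; simulating a tactical change then amounts to editing matrix entries and re-sampling rallies. The sketch below makes these structures explicit; the cell coding and all values are invented for illustration and are not taken from TESSY:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 6                       # coarse court discretization (illustrative)

# Tactical concept: relative frequency of playing from cell i to cell j.
freq = rng.random((n_cells, n_cells))
freq /= freq.sum(axis=1, keepdims=True)      # each row is a probability vector

# Technical skill: probability that a stroke from cell i to cell j succeeds.
success = rng.uniform(0.6, 0.95, size=(n_cells, n_cells))

def simulate_rally(freq, success, start_cell=0, max_strokes=50):
    """Sample strokes until one fails; return the rally length."""
    cell, length = start_cell, 0
    while length < max_strokes:
        target = rng.choice(n_cells, p=freq[cell])
        length += 1
        if rng.random() > success[cell, target]:
            break                  # a stroke error ends the rally
        cell = target
    return length

# 'Simulating a tactical innovation' = changing matrix values and re-running.
freq[0, 3] *= 2.0
freq /= freq.sum(axis=1, keepdims=True)
lengths = [simulate_rally(freq, success) for _ in range(1000)]
print(np.mean(lengths))
```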
Squash
A player-specific process in squash can be defined as a sequence of stroke
positions (cf. tennis) of that player (see McGarry et al. 1999). Figure 11.4 shows
an example of two strokes (left) and a net trained with the positions of four-
stroke sequences, taken from games of an international tournament in 1988.
Figure 11.4 Squash court with game process BR-FR-BL (left); trained net representing the most
frequent 4-position-sequences (right); BL = backhand left side, BR = backhand right side, FR =
forehand right side
Volleyball
The first net-based approach to volleyball from 1999 (Perl and Lames 2000)
was similar to the squash approach above, resulting in a characterization of
the most important sequences and their frequencies, as presented in Figure
11.6.
Figure 11.6 The network was trained with game processes from volleyball, presenting the most
important sequences as circles, whose diameters represent the frequencies of the corresponding
sequences, three of them being explained in more detail
Testing original game data on the trained net again (as in squash) results in
activations of neurons, representing the types and frequencies of the activated
formations. Moreover, the order of appearance of the configurations can also
be transferred onto the net and results in a trajectory (grey line).
By way of example, Figure 11.8 shows the differences between two top
teams in the game of an international women's tournament: both nets show
the defence's preparation for the expected service of the opponent team. It can
easily be seen that Germany and Italy prefer quite different formations when
it comes to taking the service. Moreover, there is another remarkable
difference: while the Italian team obviously finds the optimal formation
comparably fast, only adjusting the formation itself (most of the moves are
inside the marked clusters), the German team prepares by changing the whole
formation (most of the moves are between the marked clusters), which means
much more movement and a certain restlessness that later affects their actions
negatively.
Figure 11.8 Trajectories showing the preparation of the defence against the opponent's service
Football (soccer)
Neural networks can already be used for the computer-supported analysis of
several more complex tactical performance factors in soccer. Based on
position data (Figure 11.9), soccer-specific situations can be objectivized by
using analysis software for the assessment of match situations (see Grunz et
al. 2009, 2012; Memmert and Perl 2006, 2009a,b; Memmert et al. 2011; Perl
et al. 2011).
The basis for the generation of the data is the collection of the x–y
coordinates of all 22 players and the ball for the entire match time of 90
minutes (‘tracking’). Using a sampling rate of 25 frames per second, 135,000
x–y coordinate pairs are generated per player, which amounts to a total of
3,105,000 x–y pairs per game, including the ball. The basic idea is
that the developed neural networks make it possible to compare match scenes
from one or more games, to discover which sequences have led to which
results. As described for volleyball, the neurons activated by the single
datasets are connected to trajectories that represent the two-dimensional
patterns of the match sequence.
Similar patterns of such match sequences are then assigned to a common
neuron or a cluster of adjacent neurons on a neural net of the second level and
form a characteristic type. The aim is to group sequences, e.g. a ‘quick
build-up’, into corresponding clusters on the net during training, so that the
respective realizations can be detected automatically during game analysis. In this way, it is
possible to analyze extensive amounts of data online and to classify them
with regard to differences and similarities.
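Under the stated recording conditions the data volume follows directly: 90 min × 60 s × 25 Hz = 135,000 x–y pairs per player, and 23 tracked objects (22 players plus the ball) give 3,105,000 pairs per game. The subsequent step, mapping each frame's formation onto the best-matching neuron of an already-trained net to obtain a trajectory, can be sketched as below; this is a plain best-matching-unit lookup for illustration, not the two-level architecture of the cited work:

```python
import numpy as np

frames_per_game = 90 * 60 * 25            # 135,000 frames
pairs_per_game = frames_per_game * 23     # 3,105,000 x–y pairs (incl. ball)

def match_trajectory(positions, net):
    """Map a sequence of formations onto a trained net.

    positions : array (n_frames, n_features), e.g. flattened x–y coordinates
                of the players involved in the analyzed group tactic
    net       : array (n_neurons, n_features) of trained neuron weights
    Returns the index of the activated (best-matching) neuron per frame;
    consecutive indices form the trajectory drawn on the net.
    """
    # Squared Euclidean distance from every frame to every neuron.
    d2 = ((positions[:, None, :] - net[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```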
First validation studies show that approximately 90% of the match events
that are collected by means of traditional game analysis can also be detected
by neural networks (Grunz et al. 2012). These events include various group
tactics such as build-ups and set pieces (further differentiated into throw-ins,
free kicks and corner kicks) as well as goal scoring.
From quantitative to qualitative analysis – and back
Analyses of complex behavioural processes – such as team sports – are
intended to map the time-oriented flow of situations and actions to a time
series of data packages, which then have to be transferred and condensed into
a sequence of the characteristic and/or relevant pieces of information.
Obviously, statistical numbers like mean values or even distributions of types
of activities are neither sufficient nor adequate to characterize or analyze such
complex behaviour. In 1971, Günter Hagedorn formulated the problem of the
high time- and space-related meaning of qualitative game situations
(Hagedorn 1971), also called the context orientation of activities (e.g. see Perl
1999). It turns out that patterns are, for instance, useful for the necessary
quantitative–qualitative transfer (see Pattern recognition, below) and, as
described above, that self-organizing neural networks can be used to find,
characterize and analyze those patterns (e.g. Perl and Dauscher 2006).
Owing to the massive lack of game data, these first steps were mostly
case studies, demonstrating methods and addressing particular aspects and basic
phenomena. Of course, well-founded research requires information not only
about one game but about a collection of games, in order to recognize standards,
invariants, opponent-dependent behaviour and so on. By now, automatic
position recording is one very important step towards developing standard
routines for analyzing games and collecting information about them, which
opens the way to a high-level empirical analysis. Not only is this the
necessary step from data-based qualitative analysis to information-based
quantitative statistics; the approaches that have been developed in the
meantime allow much more: the combination of behavioural patterns and
statistical distributions of frequencies and success can be used for simulation
of training effects as well as tactical innovations (see Simulation, below).
Moreover, the combination of advanced network skills and statistical
analyses can help to recognize tactical creativity – and to improve it by
means of net-based simulation. Two examples are given in the next chapter.
Pattern recognition
Based on the approach depicted in Figure 11.10, Figure 11.11 shows, by way
of example, some of the possibilities of formation-based quantitative and
qualitative pattern analysis of game processes in soccer. The left of Figure
11.11 shows the interaction analysis with the selection tool for the data of both
teams (here, the back four of Team A, in light grey, and four offensive
players of Team B, in dark grey), the scrollbar for reviewing the entire
match, the window displaying the respective formation and the synoptic table
listing the number of formations as well as the coincidence frequencies
for the entire match. Very frequently occurring formations are framed with a
dotted line, very rarely occurring ones in solid black. The top right shows the
window of a simplified formation, the group's field of attention and the
combined formation of the offence and defence groups. The bottom right shows
the window with the interaction process of the observed offence/defence
formation for a certain period of the match (Perl and Memmert 2011).
Figure 11.11 Example of the user interface of a tool for the combined quantitative and qualitative
analysis of formations in football (see explanation in the text)
Figure 11.12 Trained neural network with grey shaded areas that illustrate different quality levels (top
left) and a representation of the trajectories of hockey training. The learning process begins in the dark
grey square and ends in the light grey square (Memmert and Perl 2009b); the colours of the neurons
correspond to those in the large net graphic (top left)
Table 11.1 Summary of the results of all trajectories of all three groups (hockey, soccer, control); the
five different types of learning behaviour are outlined in the second column
Simulation
An advanced application of neural networks is the simulation of tactical
behaviour, creative actions and dynamic learning in games. The current step
of the game process is tested on the network, activating the corresponding
neuron, which then returns information in different semantic categories such
as type of activity, degree of creativity, probability of success or probability
of transition to other activities. The idea is to replace the current activity by a
simulated one, which could be more creative or successful, and to further
simulate the resulting process by means of transition and success matrices
(see Figure 11.3) and then to analyze the resulting simulated process with the
intention of improving the team's tactical behaviour. Mapped to a network,
this means that neurons should have the ability to represent not only frequent
but also – and in particular – rare actions. If such a net is calibrated with
respect to success or adequacy, the time series of a process is mapped to a
trajectory on which the neurons corresponding to creative actions can be
recognized (Grunz et al. 2009).
Figure 12.1 Illustration of football as a complex dynamical system (Lames and McGarry 2007)
Net games
Tennis
Palut and Zanone (2005) first presented relative phase analysis in tennis. Four
tennis players of national level were instructed to play a rally from the
baseline while not trying to win the point in the first seven strokes. Two-
dimensional coordinate positions of the players on the tennis court were
recorded at 25 Hz for 40 rallies, from which lateral distances from the centre
line of the tennis court (i.e. the longitudinal line that divides the tennis court
into two equal parts) were obtained. Relative phase of the two players in the
lateral direction was then computed using the Hilbert transform procedure.
The results showed a bimodal distribution demonstrating approximate anti-
phase and in-phase behaviours, corresponding with line and cross-court shots
in baseline exchanges, respectively. For example, a player
producing a line shot thereafter moves in the direction of the midline while
the opponent retrieving the shot moves from the midline in the opposite
direction to the player, thus yielding anti-phase. Alternatively, a player
producing a cross-court shot once more travels towards the midline following
the shot, whereas the opponent this time leaves the midline in the same
direction as the player to retrieve the shot, thereby producing in-phase. As
such, the rallies exhibited intermittent phase transitions between the generally
stable properties of in-phase and anti-phase, indicating phase attractions
within the tennis dyads as hypothesized by virtue of shared information
exchange between the players.
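A hedged sketch of the Hilbert-transform procedure for relative phase is given below, assuming two equally sampled lateral-displacement series that have already been centred around the court midline; it follows the general method rather than the exact preprocessing of Palut and Zanone (2005):

```python
import numpy as np
from scipy.signal import hilbert

def relative_phase(x1, x2):
    """Continuous relative phase (degrees) between two lateral
    displacement series via the analytic signal.

    x1, x2 : 1-D arrays, mean-centred lateral positions of the two players
    """
    phi1 = np.angle(hilbert(x1))
    phi2 = np.angle(hilbert(x2))
    dphi = np.unwrap(phi1 - phi2)
    # Wrap back into [-180, 180) degrees: ~0 is in-phase, ~±180 is anti-phase.
    return (np.degrees(dphi) + 180.0) % 360.0 - 180.0
```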
Lames and Walter (2006) analyzed a single rally in a top women's tennis
game with the aim of investigating relative phase in reference to game
behaviour. Firstly, it was demonstrated that a rally in tennis produced cyclical
behaviours as demonstrated in the circles observed in the phase plane results
(Figure 12.2). Secondly, transitions between in-phase and anti-phase as
reported by Palut and Zanone (2005) were once more observed (not shown).
Since these phase transitions occurred by regular switching between cross
play and line play, however, the more important challenge for game
understanding is that of identifying the ‘critical fluctuations’ in relative phase
(cf. perturbations), indicating the destabilizing of a phase relation before a
possible phase transition. In addition, relative phase measures in both lateral
and longitudinal directions are required for a more complete accounting of
tennis behaviour, as information pertaining to important tactical aspects of
game behaviour such as approaching the net cannot be obtained from lateral
data.
Figure 12.2 Phase space for two players in a tennis rally (Lames and Walter 2006); Serena Williams
(left) Justine Henin (right); △, □ = strokes of Williams; ▪, ♦ = strokes of Henin; going for the ball to
strike and returning to a neutral position results in cyclical structures in a speed/position phase space
Squash
McGarry et al. (1999) used radial distance from the T to investigate
interaction among squash dyads and consequently reported single anti-phase
coordination for all four squash rallies investigated. Radial distance was
selected on the reasoning that it offers a single metric to express the two-
dimensional movement kinematics of both players at any instant. As noted,
however, when investigating baseline tennis behaviour as a dynamical self-
organizing system, Palut and Zanone (2005) restricted analysis to the lateral
direction only and, moreover, selected velocity instead of displacement as the
kinematic metric of choice. To address these issues, McGarry and Walter
(2012) applied Hilbert analysis to squash game behaviour for purposes of
investigating the movement kinematics of squash dyads separately in lateral,
longitudinal and radial directions using displacement and velocity metrics,
with the aim of reporting on the similarities and dissimilarities that exist
between these various selected measures. Speaking generally, the findings
demonstrated strong phase attractions within squash dyads with varying
combinations of direction and kinematic metrics producing varying results.
More specifically, bimodal phase attractions were reported for both lateral
and longitudinal directions for both displacement and velocity metrics,
although the phase attraction values differed depending on the kinematic
metric, whereas the radial direction produced only single anti-phase attraction
for both metrics. As with the results from Palut and Zanone (2005), the
bimodal phase relations of the squash dyad in the lateral directions are
attributed to transitions between line and cross-court shots and, similarly, to
transitions between short and long shots in the longitudinal direction. These
results furthermore indicate that additional information for understanding
game behaviour is derived from analyzing movement kinematics in both
directions instead of a single direction which results necessarily in some loss
of information.
The different results reported by McGarry and Walter (2012) were
interpreted to indicate that selection of direction and kinematic metrics are
important considerations when investigating coordination dynamics of game
sports. Of more importance, however, was the observation that the varying
outcomes from the varying initial conditions for analysis nonetheless
conformed to common dynamical system descriptions, as expected given the
universal underpinnings of self-organizing complex systems.
Invasive games
Basketball
Team sport behaviour in basketball resulting from dynamical interactions of
dyads comprising players (Bourbousson et al. 2010a) and teams
(Bourbousson et al. 2010b) was investigated. These authors recorded the
kinematic trajectories of individual players and then undertook relative phase
analysis of all possible player combinations, yielding dyads comprising
players from the same team and from opposing teams. The results indicated
in-phase coordination between dyads, with stronger attractions observed in
the longitudinal direction (basket-to-basket) than the lateral (side-to-side)
direction. Moreover, the phase attractions were influenced by the particular
make up of the playing dyad, with dyads comprising direct opponents
identified from playing position reporting stronger phase attractions than
other permutations. This result is explained by the basketball teams using
individual marking defensive strategy rather than zone defence. Other phase
attractions reported were anti-phase in the lateral direction observed for the
playing dyads comprising the wing players from the same teams, a result
attributed to these players working in concert to increase width when
attacking and decrease width when defending.
The kinematic data for each team were obtained from the geometric
means of the individual players' data. These data were then subjected to
relative phase analysis as before, thereby investigating game dynamical
behaviour at the level of teams instead of the level of players. As expected,
similar results regarding in-phase attractions between teams were reported,
with stronger phase locking in the longitudinal direction than the lateral
direction. In-phase attraction between teams was furthermore stronger than
between players as anticipated from statistical considerations.
Football
Frencken et al. (2011) also used team centroids (geometric means) as well as
surface areas to analyze playing behaviours in small-sided (five versus five)
football games. The distance between team centroids was taken as an
indication of game pressure with lesser distances between teams indicating
higher pressure exerted by one or both teams on the other. The surface area
contained within the perimeter of a team configuration was interpreted as an
index of player (or team) distribution with higher values indicating higher
dispersion of players (Frencken and Lemmink 2008). Visual inspection
indicated strong in-phase couplings between teams on both variables. A
crossing of team centroids was also reported before some of the goals were
scored, possibly representing a behavioural perturbation in these instances.
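A minimal sketch of the two collective variables used in that study, computed from player coordinates at one instant, is given below; it assumes at least three non-collinear players per team, and the function names are illustrative:

```python
import numpy as np
from scipy.spatial import ConvexHull

def centroid_and_area(team_xy):
    """team_xy : array (n_players, 2) of x–y positions at one frame.

    Returns the team centroid (geometric mean position) and the surface
    area of the convex hull spanned by the players (dispersion index).
    """
    centroid = team_xy.mean(axis=0)
    area = ConvexHull(team_xy).volume   # in 2-D, .volume is the hull area
    return centroid, area

def inter_centroid_distance(team_a_xy, team_b_xy):
    """Distance between team centroids, used as an index of game pressure."""
    ca, _ = centroid_and_area(team_a_xy)
    cb, _ = centroid_and_area(team_b_xy)
    return float(np.linalg.norm(ca - cb))
```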
Lames et al. (2010) also reported dynamical analysis of a football game
using relative phase (see also Cordes et al. 2011; Siegle and Lames 2013).
Position data of all players from the 2006 FIFA (International Federation of
Association Football) World Championship final were recorded at 25 Hz
using automated image processing techniques (Beetz et al. 2005) and relative
phase between teams was obtained from centroid data aggregated to 1 Hz
using Hilbert transform. Figure 12.3 presents data for both team centroids
from the first half and demonstrates strong in-phase coupling in the
longitudinal (forward–backward) direction (Xrp = 0.002° ± 5.254°). Three
main perturbations from in-phase are noted from data inspection, each of
which is associated with significant breaks from open play. The first
perturbation (minute 6) was associated with a penalty kick resulting in a goal
to France, the second perturbation (minute 19) corresponded to the equalizing
goal by Italy and may well be a result of the time taken to restart the game,
and the third perturbation (minute 33) was attributed to game injury lasting
more than a minute. These results highlight the strong in-phase couplings
attributed to behavioural interactions between teams produced in open play,
as contrasted against the weaker coupling behaviours observed during periods
of inactive game behaviour, as expected. As before, strong in-phase
couplings between teams are predicted on the basis of shared information
exchanges between players and teams as they contest the game using
common, if competing, objectives.
The coupling of teams in the lateral (side-to-side) direction was marginally
stronger than in the longitudinal direction, as indicated by reduced phase
variability (Xrp = 0.010° ± 3.844° versus Xrp = 0.002° ± 5.254°). In contrast
to the results from the team centroids just noted, the coupling of the team
ranges was weaker in both lateral (Xrp = 0.130° ± 18.250°) and longitudinal
(Xrp = 0.128° ± 18.319°) directions as represented by increased phase
variability. Since the lateral and longitudinal range values for a team are
determined from the maximum and minimum player coordinates within a
team configuration, this finding is expected, as the range is more sensitive
than the centroid (mean) to changes in player movements. Regarding game
behaviour, the result indicates that the teams are less coupled on measures of
dispersion (cf. surface area) than on measures of central tendency.
Figure 12.3 Longitudinal team centres, differences and relative phase for Italy and France during the
first half of the 2006 World Cup final game
Figure 12.4 Separating a constellation of players on the playground into its formation and position
Figure 12.8 Example of a typical tactical pattern produced between the two teams
Figure 12.9 Prototype of the tactical pattern from Figure 12.5, together with success values
Figure 13.2 Mean values and standard error of the required velocity for intercepting the ball of (A)
defender and (B) goalkeeper in shots that ended in a defender's interception, in a goalkeeper's save and
in a goal. The represented levels of statistical significance are P < 0.05 (*), P < 0.01 (**) and P < 0.001
(***). Note that the required velocity of the goalkeeper was not measured when the defender
intercepted the ball, since it is impossible to compute the goalkeeper's interception point
The mean values of the required velocity of the defender to intercept the
ball were significantly lower in plays ending in a defender's interception
(mean [M] = 3.29, standard error [SE] = 0.39) than in plays ending in a
goalkeeper's save (M = 32.16, SE = 10.11) and in plays ending in a goal. This
finding suggests that the ball took longer to arrive at the interception point
than the defender needed to move there. That is, the time allowed for a
defender to close the gap between himself and the interception point and to
intercept the ball was within the defender's action capabilities. In this sense,
to score goals, attackers need to move in order to ‘pull’ the opponents away
from an imaginary line between themselves and the centre of the goal.
Similarly, the mean values of the required velocity of the goalkeeper to
intercept the ball were significantly lower in plays ending in a goalkeeper's
save (M = 3.29, SE = 0.39) than in plays ending in a goal (M = 12.97, SE =
4.41). This result suggests that the time for the ball to arrive at the
interception point was greater than the time needed for the goalkeeper to
arrive at the same point. These data suggest that attackers need to be able to
identify an opportunity in the performance environment to shoot the ball
without allowing the immediate defender and the goalkeeper to move fast
enough to intercept the ball. Such opportunities emerge not only from the
information from the performance environment but also from attackers'
action capabilities. That is, attackers should scale information they perceive
according to their own capabilities. Decisions about where, when and how to
shoot should always be placed in a performance context and should be guided
by information about the time for both the ball and the opponents to arrive at
potential interception points (Watson et al. 2011).
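The required velocity underlying these comparisons can be expressed as the distance from the defender (or goalkeeper) to the projected interception point divided by the time the ball needs to reach that point. A hedged sketch follows, assuming straight-line ball flight at constant speed; the published analysis may use a different interception-point model:

```python
import numpy as np

def required_velocity(player_xy, interception_xy, ball_xy, ball_speed):
    """Velocity a player would need to reach the interception point
    at the same time as the ball (m/s).

    player_xy, interception_xy, ball_xy : 2-D positions (metres)
    ball_speed : ball speed along its path (m/s), assumed constant
    """
    time_ball = np.linalg.norm(interception_xy - ball_xy) / ball_speed
    dist_player = np.linalg.norm(interception_xy - player_xy)
    return dist_player / time_ball

# Example: a value well within the player's maximal running speed suggests
# the interception lies within his or her action capabilities.
v_req = required_velocity(np.array([2.0, 1.0]), np.array([0.0, 3.0]),
                          np.array([6.0, 3.0]), ball_speed=15.0)
```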
Figure 15.1 Schematic (i.e. one-dimensional) presentation of the corrugated hierarchically soft-
assembled potential landscape with two confining barriers on both sides (reproduced with kind
permission from Nonlinear Dynamics, Psychology and Life Sciences, Springer)
The main idea behind this model is the absence of any simple symmetries
of potential order parameters (see Chapters 1 and 2) governing discrete and
whole body movements. In other words, in principle, it is hard to relate two
or more actions through simple symmetry transformations. For example, in
the HKB (Haken, Kelso and Bunz) model (Haken et al. 1985), the relative
phase order parameter exhibits mirror (change of the sign) symmetry, which
allows one to relate a set of patterns; i.e. values of the order parameter, with
those symmetries. However, a set of whole-body movements/postures or
multiarticular discrete movements in sport or dancing, generally, cannot be
related by simple symmetries and their rearrangements. Hence, the
relatedness of any set of discrete and whole-body movements may be captured
by the so-called overlap order parameter q, which measures the mutual
correlation of such actions. It can be defined as the cosine of the angle
between two random vectors, i.e. replicas, in real or formal space of any
finite dimension (e.g. Domany et al. 2001). Under absolutely relaxed
constraints, one may expect that these replicas are totally uncorrelated and
their average correlation is <q> = 0. We can say that the system is replica
symmetrical. Any replica, i.e. performer–environment configuration, can
emerge. However, under the constraining influences of the task, the properties
of the individual performer and the environment, this symmetry and ergodicity
are broken and clusters of correlated actions arise (for detailed examples, see
Hristovski et al. 2011, 2012). Some similar replicas, i.e. configurations, are
more likely to occur and some are unlikely. Constraints break system
symmetry, produce a phase transition and also create ergodicity-breaking
high barriers on both sides of the potential landscape, confining all
available actions within the internal space (Figure 15.1). It can be observed
that the action landscape soft-assembles under specific configurations of
constraints.
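A small sketch of the overlap order parameter is given below, treating two replicas as vectors in a formal configuration space and q as the cosine of the angle between them; under relaxed constraints independent random replicas give ⟨q⟩ ≈ 0. The dimensions and sampling are illustrative, not the empirical coding scheme of Hristovski et al.:

```python
import numpy as np

rng = np.random.default_rng(1)

def overlap(a, b):
    """Overlap order parameter q: cosine of the angle between two replicas."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Relaxed constraints: replicas are independent random configurations,
# so the average overlap <q> is close to zero (replica symmetry).
dim, n_pairs = 50, 2000
qs = [overlap(rng.standard_normal(dim), rng.standard_normal(dim))
      for _ in range(n_pairs)]
print(np.mean(qs))    # ~0

# Constrained case (toy): configurations are pulled towards a common
# template, so overlaps cluster at positive values (broken symmetry).
template = rng.standard_normal(dim)
qs_c = [overlap(template + 0.5 * rng.standard_normal(dim),
                template + 0.5 * rng.standard_normal(dim))
        for _ in range(n_pairs)]
print(np.mean(qs_c))  # well above zero
```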
These correlated clusters form a hierarchical landscape. Actions lying in
one attractor basin (see Chapter 1 for a definition), separated by smaller
barriers, are more correlated than those separated by higher barriers. Thus, we
can define order parameters at each level, with those defining lower levels
depending on increasingly subtle constraints. These order parameters form a
tree-like structure. Note that, while slowly changing (i.e. quasi-stationary)
interacting constraints form the deterministic structure of the landscape (see
Figure 15.1 and the preceding text), there is also a need for a stochastic
agitation force within the performer–environment system, contributed mainly
by the interaction between the performer's intention to move and the
unpredictable, quickly changing physical and informational constraints,
which is the driving force of reconfigurations. Hence, both the deterministic
structure and the stochastic drive interact in reconfiguring the system's
dynamics. The exploratory dynamics within this landscape may be seen as
hopping of the system from one basin to another or, equivalently, as a random
walk on a tree. The hopping, or random walk, corresponds to reconfigurations
taking place within the system. Hopping over larger barriers means larger
reconfigurations and vice versa.
This modelling shows how exploration is a requisite of creative behaviour,
i.e. of inventing novel movement forms and actions, or innovating extant ones,
with respect to the existing sociocultural milieu (for details see Hristovski et al.
2011). Without it, a complex neurobiological system cannot find novel
functional behavioural solutions. In Hristovski et al. (2011), exploratory
breadth Q was defined as the average escape probability over
all possible state basins of attraction (see Saxton 1996), Q = ⟨We⟩. Escape
probabilities for each movement mode are defined as We = 1 – Wc, where Wc
is the conditional probability of staying inside the same attractor (Hristovski
et al. 2009). In other words, Wc measures the trapping strength of the
attractor; i.e. the probability of being able to achieve the same performance
outcomes sequentially. The larger the average escape probability ⟨We⟩, the
larger the exploratory breadth Q of the system and vice versa. In general, it
can be said that: for any performer–environment system containing a large
amount of degrees of freedom, there always exists a set of constraints which
maximizes the functional action versatility, i.e. exploratory breadth, defined
as maximum action entropy (see e.g. Hristovski et al. 2006; Pinder et al.
2012). A more thorough exposition of this model and an experimental
example of novel action emergence in martial arts can be found in Hristovski
et al. (2011) and Hristovski et al. (2012).
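These definitions translate directly into estimates from a recorded sequence of discrete movement modes: Wc for a given mode is the conditional probability that the next action stays in the same attractor, We = 1 − Wc, and Q is the average of We over the observed modes. A minimal sketch, with an invented categorical coding purely for illustration:

```python
from collections import Counter

def exploratory_breadth(sequence):
    """Estimate Q = <We> from a sequence of discrete movement modes.

    sequence : list of hashable mode labels, ordered in time
    """
    stay, total = Counter(), Counter()
    for current, nxt in zip(sequence[:-1], sequence[1:]):
        total[current] += 1
        if nxt == current:
            stay[current] += 1          # stayed in the same attractor
    # Wc per mode = P(stay | mode); We = 1 - Wc; Q = mean of We over modes.
    escape = [1.0 - stay[m] / total[m] for m in total]
    return sum(escape) / len(escape)

# Example: a performer who rarely repeats the same action has a high Q.
print(exploratory_breadth(['jab', 'hook', 'jab', 'kick', 'kick', 'jab']))
```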
In the following section, we further illustrate how this theoretical model
provides an analysis of movement exploration structure and dynamics applied
in the context of dance improvisation. The general predictions of a hierarchical
structure and softly assembled dynamics were preliminarily tested. A
conceptual model of creativity in team sports within the general framework
of dynamical systems is also presented.
How creative behaviours emerge under ecological constraints
in dance and team sports
Figure 15.3 The profile of the average dynamic overlap qd(t) for different time lags; its dynamics
proceed on three timescales (from seconds to several minutes) and do not converge to zero during the
observation timescale
In this section we have shown how the exploratory (i.e. process) and product
aspects of creative behaviour can be analyzed within the model of a soft-
assembled hierarchy. The creative process unfolds on multiple timescales,
owing to the hierarchical structure of the perception–action landscape. The
soft-assembled hierarchical landscape model also predicts other interesting
phenomena, such as aging: the more time the learner spends in a confined
and thus correlated region of the landscape, the less responsive s/he becomes
to a change. Based on the obtained consistency of the experimental results
with respect to the predictions of the theoretical model, we further
hypothesize that, together with Q, the slope of initial relaxation â and the
value of the <qd(t)> plateau, are good candidates for assessing the creative
capacity of performer–environment systems under different types and
strengths of constraints (for details see Hristovski et al. 2011). Future work is
needed to test other predictions of the model.
Conceptual modelling of creative behaviour in team sports: the
need to play within critical regions of interpersonal distance
Attackers' interactions aim to actively explore space–time windows that
emerge because of defender displacements. On the other hand, defender
displacements aim to cover the possible paths to goal, which demands high
levels of interpersonal coordination among the players in defence. However,
space–time windows will only emerge if the attackers' movements are
powerful enough to disturb the defenders' interpersonal coordination and, to
do that, attackers' actions must be performed at short attacker–defender
interpersonal distances (Passos et al. 2008).
Thus, sudden changes in the attacker–defender structural organization can
only happen when the attacker–defender system moves towards regions of
very short interpersonal distances, where contextual dependency among
players emerges, characterizing the performance region as critical. Within
these critical regions, the players' contextual dependency moves the system
from equally poised options to a single option that emerges under the
influence of task and environmental constraints. In other words, within these
critical regions, creativity occurs as ongoing performance solutions emerge
and are annihilated, until a sudden change occurs in which a single (i.e.
creative) solution emerges (Passos et al. 2009).
exploratory metastable behaviour emerges as a precursor to the creative
product, i.e. the single solution. Hence, both the exploratory and product
phases of creative behaviour exist (Drazin et al. 1999; Sternberg and
Lubart 1996). The players' contextual dependency creates local information
that originates at a specific moment in time and space where a gap in the
defensive system emerges and the attackers exploit it to move closer to the
goal or even score. This process underlines the notion that creativity in team
sports is based upon a self-organization mechanism that only occurs within
critical regions.
Creativity in team sports is sustained by the nonlinear interactions among
players, which create a nonproportional, abrupt and unpredictable environment
for the opponents. As in any other social system, the way that each player
interacts with others in the neighbourhood of play influences the behaviours
of players within the same team and this is a requisite to disturb the actions of
opponents (Fajen et al. 2009). From an attacker's perspective, the decisions of
the ball carrier and support players are based on the perceptions that they
have created of the defenders' relative positioning, running-line trajectories
and proximity to each other (Passos et al. 2008). On the other hand, the
decision making of defenders depends on the perception that they have of the
ball carrier's actions as well as the behaviours of the support players (Passos
et al. 2008). These variables include interpersonal distances, the speed and
running-line trajectories that contain important information concerning the
attackers' ability to perform different actions. These variables contain
information that is perceived by the players and specifies the action
possibilities of each opponent or teammate (Gunns et al. 2002; Weast et al.
2011). This is where creativity emerges, with the need for attackers to
perform deceptive actions that create the impression of multiple different
possibilities for action. These deceptive actions can also be characterized by
intrateam coordination, where attackers perform a set of previously established
movements that are intended to open a space–time window against a stable
opposing team. This is when creativity is needed again and players need to
reorganize, avoiding defenders. This reorganization process is grounded on
situational information concerning defenders' relative positions, number,
speed and distance to goal (Travassos et al. 2011; Cordovil et al. 2009;
Passos et al. 2011). These sources function as task constraints that attackers
use to avoid defenders. The reorganization of attackers is grounded on
situational information that emerges because of opponent players' nonlinear
interactions and is self-organized.
Typically, attacker–defender interactions are characterized by many
subtle fluctuations in the attacker–defender balance but also by a few abrupt
changes in the attacker–defender structural organization, meaning that
the attackers suddenly gain an advantage and are in a crucial position to
score.
To summarize, in this chapter, we have outlined a model of creative
exploration and solution (product) emergence as a soft-assembly of actions
under ecological constraints. The obtained structural and dynamical hierarchy
of the creative behaviour, consistent with the model predictions, was
demonstrated in a contact improvisation dance example. The creative process
unfolds on different timescales, owing to the hierarchical structure of the
perception–action landscape. Hence, creativity is a nested process, both
structurally and temporally. The experimental examples provided here are
extensions of those in Hristovski et al. (2011), where an emergence of a
novel punch in martial arts was treated within the framework of the same
theoretical model. The role of slowly changing or constant environmental and
personal constraints (e.g. gravity and morpho-anatomical structure) in creative
exploratory activity is their slow contextualizing function. Quickly changing
and stochastic physical and informational constraints form the basis of the
unpredictability function in creativity. Through this process
they mould the goal-directed activity of complex systems potentially leading
to inventions of new movement forms or team actions. By manipulation of
these key constraints, athletes structure different types of contexts which
eventually lead to soft-assembly of novel forms of actions. In this way,
highly motivated athletes, by self-generated experimentation in the full space
of constraints, may facilitate the emergence of new and functional action
forms.
References
Araújo, D., Davids, K., Bennett, S., Button, C. and Chapman, G. (2004)
Emergence of sport skills under constraints, in A. M. Williams and N. J.
Hodges (ed.) Skill Acquisition in Sport: Research, Theory and Practice.
London: Taylor and Francis, pp. 409–33.
Araújo, D., Davids, K. and Hristovski, R. (2006) The ecological dynamics of
decision making in sport. Psychology of Sport and Exercise, 7: 653–76.
Bartlett, R. M. (2007) Introduction to Sport Biomechanics: Analysing Human
Movement Patterns, 2nd edn. London: Routledge.
Bressler, S. L. and Kelso, J. A. S. (2001) Cortical coordination dynamics and
cognition. Trends in Cognitive Sciences, 5 (1): 26–36.
Bril, B., Roux, V. and Dietrich, G. (2005) Stone knapping: Khambhat (India),
a unique opportunity? in V. Roux and B. Bril (eds) Stone Knapping, the
Necessary Conditions for an Uniquely Hominid Behaviour. McDonald
Institute Monograph Series, Cambridge: McDonald Institute for
Archaeological Research, pp. 53–72.
Castañer, M., Torrents, C., Anguera, M. T., Dinušova, M. and Jonsson G. M.
(2009) Identifying and analyzing motor skill responses in body movement
and dance. Behavior Research Methods, 41 (3): 857–67.
Challet, D., Marsili, M. and Zecchina, R. (2000) Statistical mechanics of
systems with heterogeneous agents: minority games. Physical Review Letters,
84 (8): 1824–7.
Chow, J. Y., Davids, K., Button, C., Shuttleworth, R., Renshaw, I. and
Araújo, D. (2006) Nonlinear pedagogy: a constraints-led framework to
understand emergence of game play and skills. Nonlinear Dynamics,
Psychology and Life Sciences, 10 (1): 74–104.
Chow, J. Y., Davids, K., Button, C. and Rein, R. (2008) Dynamics of
movement patterning in learning a discrete multiarticular action. Motor
Control, 12 (3): 219–40.
Cordovil, R., Araújo, D., Davids, K., Gouveia, L., Barreiros, J., Fernandes,
O. and Serpa, S. (2009) The influence of instructions and body-scaling as
constraints on decision-making processes in team sports. European Journal
of Sport Science, 9 (3): 169–79.
Davids, K., Araújo, D., Shuttleworth, R. and Button, C. (2003) Acquiring
skill in sport: A constraints-led perspective. International Journal of
Computer Sciences in Sport, 2: 31–9.
Davids, K., Renshaw, I. and Glazier, P. (2005) Movement models from
sports reveal fundamental insights into coordination processes. Exercise and
Sport Science Reviews, 33: 36–42.
Davids, K., Button, C. and Bennett, S. (2008) Dynamics of Skill Acquisition:
A Constraints-led Approach. Champaign, IL: Human Kinetics.
Davids, K., Araújo, D., Vilar, L., Renshaw, I. and Pinder, R. (2013) An
ecological dynamics approach to skill acquisition: implications for
development of talent in sport. Talent Development and Excellence, 5 (1):
21–34.
Domany, E., Hed, G., Palassini, M. and Young, A. P. (2001) State hierarchy
induced by correlated spin domains in short-range spin glasses. Physical
Review B, 64: 224406.
Drazin, R., Glynn, M. A. and Kazanjian, R. K. (1999) Multilevel theorizing
about creativity in organizations: a sensemaking perspective. Academy of
Management Review, 24: 286–307.
Fajen, B., Riley, M. and Turvey, M. (2009) Information, affordances and the
control of action in sport. International Journal of Sport Psychology, 40: 79–
107.
Gunns, R. E., Johnston, L. and Hudson, S. M. (2002) Victim selection and
kinematics: a point-light investigation of vulnerability to attack. Journal of
Nonverbal Behavior, 26 (3): 129–58.
Haken, H., Kelso, J. A. S. and Bunz, H. (1985) A theoretical model of phase
transitions in human hand movements. Biological Cybernetics, 51: 347–56.
Hristovski, R., Davids, K. and Araújo, D. (2006) Affordance-controlled
bifurcations of action patterns in martial arts. Nonlinear Dynamics,
Psychology and Life Sciences, 10 (4): 409–44.
Hristovski, R. and Davids, K. (2008) Metastability and situated creativity in
sport. Paper presented at the 2nd International Congress of Complex Systems
in Sport, 4–8 November, Madeira, Portugal.
Hristovski, R., Davids, K. and Araújo, D. (2009) Information for regulating
action in sport: metastability and emergence of tactical solutions under
ecological constraints, in D. Araújo, H. Ripoll and M. Raab (eds)
Perspectives on Cognition and Action in Sport. New York: Nova Science
Publishers, pp. 43–57.
Hristovski, R., Davids, K., Araújo, D. and Passos, P. (2011) Constraints-
induced emergence of functional novelty in complex neurobiological
systems: a basis for creativity in sport. Nonlinear Dynamics, Psychology and
Life Sciences, 15 (2): 175–206.
Hristovski, R., Davids, K., Passos, P. and Araújo, D. (2012) Sport
performance as a domain of creative problem solving for self-organizing
performer–environment systems. Open Sports Science Journal, 5 (Suppl 1-
M4): 26–35.
Jolliffe, I. T. (2002) Principal Component Analysis, 2nd edn. New York:
Springer.
Kello, C. T., Anderson, G. G., Holden, J. G. and Van Orden, G. C. (2008)
The pervasiveness of 1/f scaling in speech reflects the metastable basis of
cognition. Cognitive Science, 32 (7): 1217–31.
Newell, K. M. (1986) Constraints on the development of coordination, in M.
G. Wade and H. T. A. Whiting (eds) Motor Development in Children.
Aspects of Coordination and Control. Dordrecht, Netherlands: Martinus
Nijhoff, pp. 341–60.
Parisi, G. (1979) Infinite number of order parameters for spin-glasses.
Physical Review Letters, 43: 1754–6.
Passos, P., Araújo, D., Davids, K., Gouveia, L., Milho, J. and Serpa, S.
(2008) Information-governing dynamics of attacker–defender interactions in
youth rugby union. Journal of Sports Sciences, 26 (13): 1421–9.
Passos, P., Araújo, D., Davids, K., Gouveia, L., Serpa, S., Milho, J. and
Fonseca, S. (2009) Interpersonal pattern dynamics and adaptive behavior in
multi-agent neurobiological systems: a conceptual model and data. Journal of
Motor Behavior, 41 (5): 445–59.
Passos, P., Milho, J., Fonseca, S., Borges, J., Araújo, D. and Davids, K.
(2011) Interpersonal distance regulates functional grouping tendencies of
agents in team sports. Journal of Motor Behavior, 43 (2): 155–63.
Pinder, R., Davids, K. and Renshaw I. (2012) Metastability and emergent
performance of dynamic interceptive actions, Journal of Science and
Medicine in Sport, 15 (1): 1–7.
Reader, S. M. and Laland, K. N. (2001) Primate innovation: sex, age, and
social rank differences. International Journal of Primatology, 22: 787–805.
Saxton, M. (1996) Anomalous diffusion due to binding: a Monte Carlo study.
Biophysical Journal, 70: 1250–62.
Sternberg, R. J. and Lubart, T. I. (1996) Investing in creativity. American
Psychologist, 51: 677–88.
Taylor, A. H., Elliffe, D., Hunt, G. R. and Gray, R. D. (2010) Complex
cognition and behavioural innovation in New Caledonian crows. Proceedings
of the Royal Society B Biological Sciences, 277 (1694): 2637–43.
Torrents, C., Castañer, M., Dinušova, M. and Anguera, M. T. (2010)
Discovering new ways of moving: observational analysis of motor creativity
while dancing contact improvisation and the influence of the partner. Journal
of Creative Behavior, 44 (1): 45–61.
Travassos, B., Araújo, D., Vilar, L. and McGarry, T. (2011) Interpersonal
coordination and ball dynamics in futsal (indoor football). Human Movement
Science, 30 (6): 1245–59.
Weast, J. A., Shockley, K. and Riley, M. A. (2011) The influence of athletic
experience and kinematic information on skill-relevant affordance perception.
Quarterly Journal of Experimental Psychology, 64 (4): 689–706.
Part 4
Complexity sciences and
training for sport
16 Variability in neurobiological
systems and training
Chris Button, Ludovic Seifert, David
O'Donovan and Keith Davids
Variability in achieving consistent performance outcomes has become
increasingly recognized as important in preparing for dynamic performance
contexts like sport. Here, we examine the theoretical basis for viewing
variability as functional and examine the implications for understanding
training for sport in individual and team sports. The issue of a putative
optimal movement pattern, common to all learners, is challenged. In this
chapter, we describe motor expertise as the capacity to functionally adapt
behaviours to satisfy key constraints in order to achieve intended
performance outcomes. Darwin, amongst many others, recognized the
fundamental value of individual variation and adaptation for the functional
behaviour and long-term survival of biological organisms. Similarly, at the
timescale of perception and action, success in sport is underpinned by the
capacity of athletes to capitalize on their individual strengths and to respond
appropriately to different challenges. Indeed, the principles of overload in
physical training (i.e. frequency, intensity, duration, type) are built upon
biological adaptation through which the human body becomes increasingly
prepared to function more efficiently and to produce sufficient energy to
attain higher performance goals. The importance of adaptability in motor
control was originally raised by Bernstein (1996) when he conceptualized the
notion of resourcefulness (i.e. stability and initiative) as an important
property of an organism's dexterity (p. 221). Whilst performing a complex
coordination pattern may be difficult for a learner in sport, more challenging
is its functional adaptation to a specific performance context; i.e. in
responding to interacting constraints (task, environmental, or organismic) that
continually change over time (Newell 1986). Biryukova and Bril (2002)
commented that ‘the dexterity is not movements in themselves, but their
ability to adapt to external constraints’ (p. 65).
A corollary of these theoretical ideas is the acceptance that there is not one
optimal pattern of coordination towards which all developing learners should
aspire but, instead, that expertise concerns ‘individual-constraint coupling’
(Seifert et al. 2013). Davids and Glazier (2010) postulated that this requisite
adaptability of complex movement systems was founded on pertinent
neurobiological system properties including degeneracy, defined as ‘the
ability of elements that are structurally different to perform the same function
or yield the same output’ (Edelman and Gally 2001: 13763). They proposed
that evidence of intra-individual movement variability often observed in
experts could play a functional role; for instance, it could correspond to
several types of movement and/or to the ability to use co-existing modes of
coordination (i.e. exploit multistability and metastability in complex
neurobiological systems). This relationship between multistability and
metastability in expert performance illustrates the requisite subtle balance
needed between stability and variability in sport performance (Jantzen et al.
2008), which arises from the complex interactions between intentions, actions
and perceptions of individual performers.
Traditionally, there has been an overriding tendency to define expertise as
the capacity to both reproduce a specific movement pattern consistently and
to increase the automaticity of movement (Seifert et al. 2013). In
representational accounts of expertise, movement variability is seen as noise
(i.e. an artefact limiting the system's processing of data input and output),
which should be minimized (Summers and Anson 2009). Interestingly, in
recent times, several computational models of control have surfaced,
attempting to cast the problem of variability for motor control in a more
positive light (i.e. computational modelling, Wolpert et al. 2003; optimal
control theory, Todorov and Jordan 2002). However, for some time, research
from a dynamical systems theoretical orientation has shown that movement
system variability is not noise detrimental to performance, nor an error or
deviation from a putative expert model that should be corrected in the beginner.
Movement system variability instead indicates the functional flexibility
needed to respond to dynamic performance constraints (Davids et al. 2003).
In this context, Schöllhorn et al. (2009) have more radically argued for the
value of adding noise to the initial conditions of performance to stimulate
learning by forcing the individual to adapt to varying constraints of the
context.
The functional role of variability has been historically overlooked by
researchers because of the types of tasks studied (e.g. pursuit tracking) and
limitations of data collection and analysis tools. For example, Newell and
Slifkin (1998) indicated that the magnitude of performance variability has
traditionally been assessed by the standard deviation or variance over trials;
these statistical indicators attempt to characterize the data distribution and the
amount of noise in a single measurement. The standard deviation measure
indicates the magnitude of variability (i.e. the amplitude, the spatial aspect of
the distribution of performance over trials) but not the structure of system
variability (Newell and Slifkin 1998). Instead, studying the temporal structure
of variability by spectral analysis of noise provides information on its
deterministic or stochastic nature. In this chapter, we overview empirical
research from sports science employing a range of emerging data analysis
tools more suited to examining the structure of movement variability (for a
review of variability analysis in medicine, see Bravi et al. 2011). As we will
demonstrate in a later section, the functional role of movement variability has
typically been explored in performance of ball skills. Our aim for this chapter
is to extend our breadth of understanding by exemplifying how intra-
individual and inter-individual movement variability could play a functional
role in a range of physical activities common to sport, such as different forms
of locomotion, object manipulation and team invasion games.
Functional movement variability in sport
Gait
Human gait is seemingly characterized by smooth, regular, repeating patterns.
The ‘control problem’ in terms of mechanical regulation of gait is that there
are many more muscle actuators involved than independent equations that
define the system (Vaughan 1996). In fact, when analyzed in detail, there are
notable stride-to-stride fluctuations even under constant environmental
conditions (Hausdorff 2007) and these fluctuations seem to be a prominent
feature when key parameters such as gait speed and balance are considered
(i.e. Jordan et al. 2006, 2007; van Emmerik and Wagenaar 1996). Hausdorff
(2007) suggests that fluctuations in the stride interval exhibit long-range,
fractal-like correlations similar to those found in heartbeat fluctuations.
Put simply, in the short term, the stride interval is dependent on other nearby
cycles but this dependency weakens in a power-law fashion. Interestingly, the
fractal scaling underpinning gait dynamics differs significantly for subgroups
who exhibit much less stable and effective gait patterns (e.g. infants and the
elderly; Hausdorff 2007).
These ideas were demonstrated in a study by Jordan et al. (2007) who
required participants to walk for 12-minute trials at 80%, 90%, 100%, 110%
and 120% of their preferred walking speed. A range of gait-cycle variables
was investigated, including the intervals and lengths of steps and strides and
the impulse from the vertical ground reaction force profile. Detrended
fluctuation analysis revealed long-range correlations in the gait-cycle
variables whose strength followed a U-shaped function of walking speed. The
reduced strength of long-range correlations at speeds close to preferred
(100–110%) was interpreted as reflecting enhanced stability and adaptability
at preferred speeds (see also
Li et al. 2005).
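A minimal, illustrative implementation of detrended fluctuation analysis of the kind used in these gait studies might look as follows (Python with NumPy; the window sizes and test signal are illustrative, not those of Jordan et al. 2007). The scaling exponent alpha is roughly 0.5 for uncorrelated series and approaches 1.0 for the 1/f-like, long-range correlated fluctuations reported for stride intervals.

```python
import numpy as np

def dfa_alpha(x, scales=None):
    """Minimal detrended fluctuation analysis: returns the scaling exponent alpha."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                 # integrated profile
    if scales is None:
        scales = np.unique(np.geomspace(4, len(x) // 4, 12).astype(int))
    fluct = []
    for n in scales:
        n_windows = len(y) // n
        segs = y[:n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)        # linear detrending per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        fluct.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(1)
white = rng.standard_normal(2048)               # uncorrelated 'stride intervals'
print("alpha (white noise) ~", round(dfa_alpha(white), 2))   # expect ~0.5
```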
Button et al. (2010) pointed out that stable attractor states in healthy and
pathological gaits are an important, functional feature of coordination, and
that intra-individual and inter-individual variability amongst these patterns
is discernible with the appropriate analysis tools. It seems that such
differences are more likely to be identified when using time-continuous
analysis tools (i.e. self-organizing Kohonen maps) rather than summative,
discrete statistics (Schöllhorn et al. 2002). For example, a number of
discernible patterns can be detected from the phase relations of pelvis and
thorax segments as gait speed changes (van Emmerik and Wagenaar 1996).
Stability analyses of relative phase and range-of-motion data have also been
used to identify hysteresis effects in transitions between stable patterns (i.e.
transitions that depend on the direction of the change in locomotion speed).
A particularly distinctive form of gait (i.e. competitive race walking) has
also been considered in relation to functional variability. Donà et al. (2009)
explored the use of functional principal component analysis for assessing and
classifying the kinematics of the knee joint in competitive race walkers.
Functional principal component analysis was applied bilaterally to the sagittal
knee angle data, because knee joint motion is fundamental to race walking
technique. Scatterplots of principal component scores provided evidence of
athletes' technical differences and asymmetries, even when traditional
analysis (mean plus or minus standard deviation curves) was not effective.
Whilst there were certain features, such as the absence of a flight phase, that
were common for all seven participants, principal components provided
indications for the classification of race walkers and identified potentially
important technical differences between higher- and lower-skilled athletes
(Donà et al. 2009).
Bradshaw et al. (2007) were also interested in whether skilled athletes
showed evidence of functional variability, in this case related to the sprint
start. Indeed, high biological movement variability (in comparison with
systematic error) was observed for the joint velocities of ten track sprinters.
Of particular interest, regression analysis indicated that a decrease in ten-
metre sprint time was associated with an increase in the variability of the lead
ankle step velocity.
From this brief overview, it is apparent that functional variability
manifested as stride-to-stride fluctuations is a consistent feature of several
types of gait patterns (e.g. walking, running, sprinting). Movement variability
is most noticeable around transitions in speed and appears to be an underlying
feature of all types of gait, contributing to balance and stability. It is relevant to
note that only certain kinds of data analysis tools were suited to detecting the
functional fluctuations that subserve gait dynamics.
Breaststroke swimming
Whether locomoting on ground, over an object or through aquatic
environments, it seems that biological organisms exhibit strong preferences
and many global similarities in terms of the cyclical patterns they use to
move. In skilled breaststrokers, one cycle is composed of an alternation of
propulsions (i.e. arm propulsion during the leg glide; leg propulsion during
the arm glide), a brief glide time with the body fully extended and a
synchronization of arm and leg recoveries (Chollet et al. 2004). In
comparison with beginners, during performance, skilled swimmers display a
high level of intra-individual coordination pattern variability, exemplified by
a high intracyclic knee and elbow angular variability and several modes of
arm–leg coordination, depending on swim speed (Seifert et al. 2010). Expert
swimmers need to organize different coordination patterns for each
performance phase. They display an out-of-phase pattern of coordination of
their arms and legs during propulsion (i.e. flexion or extension of a pair of
limbs while the other pair of limbs is fixed in extension), an in-phase
coordination mode during glide (i.e. extension of the arms and legs) and an
anti-phase coordination mode during recoveries (i.e. extension of the arms
during leg flexion) (Seifert et al. 2010; Figure 16.1). In a cycle of 1.5–2.0
seconds, expert swimmers are able to alternate between these three modes of
coordination.
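One common way to quantify such arm–leg coordination is the continuous relative phase between elbow and knee angles; the following sketch estimates it from the analytic (Hilbert-transformed) signals. The signals and the 90-degree offset are purely illustrative and are not taken from Seifert et al. (2010).

```python
import numpy as np
from scipy.signal import hilbert

def continuous_relative_phase(theta_elbow, theta_knee):
    """Continuous relative phase (degrees) between two joint-angle time series,
    estimated from the analytic signal of each mean-centred angle."""
    def phase(theta):
        centred = theta - np.mean(theta)
        return np.unwrap(np.angle(hilbert(centred)))
    return np.degrees(phase(theta_elbow) - phase(theta_knee))

# Illustrative signals only: two 0.5 Hz oscillations with a 90-degree offset
t = np.linspace(0, 4, 400)
elbow = np.sin(2 * np.pi * 0.5 * t)
knee = np.sin(2 * np.pi * 0.5 * t - np.pi / 2)
crp = continuous_relative_phase(elbow, knee)
print(np.round(np.mean(crp[50:-50]), 1))        # ~90, trimming edge effects
```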
In contrast, owing to their different mix of intentions, perceptions and
actions, learner swimmers typically demonstrate low intra-individual
coordination variability but very high inter-individual coordination variability
(Figure 16.1), notably a bi-stability of the arm–leg coordination modes that
could lead to several intermediate profiles. The first coordination mode
corresponds to an isocontraction of the nonhomologous limbs: that is, the in-
phase muscle contraction of the arms and legs (Baldissera et al. 1991). One
way to enhance system stability is to synchronize the flexion movements of
both arms and legs, as well as the extension movements like an ‘accordion’,
supporting emergence of low intracyclic arm–leg coordination variability
(Figure 16.1). The accordion mode of coordination corresponds to a
superposition of two contradictory actions (Seifert et al. 2010): leg
propulsion during arm recovery and arm propulsion during leg recovery. It is
not mechanically effective and does not provide high swim speed because
each propulsive action is thwarted by a recovery action. As observed in other
studies of interlimb coordination, this coordination mode appears to be the
most stable and the easiest to perform for learners (for an overview, see Kelso
1995).
Figure 16.1 Continuous relative phase (CRP) between elbow and knee through a complete cycle for 24
beginners (left panel) and for 24 expert swimmers (right panel), showing lower inter-individual
variability for experts
Ice-climbing
The way in which intentions, actions and perception of information guide
movement pattern formation has also been shown in a study of ice climbers
(Seifert et al. 2011a). Ice climbing involves climbing with ice tools in each
hand and crampons on each foot on frozen waterfalls, the properties of
which vary stochastically in shape, steepness, temperature, thickness and ice
density. Since these environmental constraints are neither predictable nor
controllable, this task requires successful climbers to use numerous types of
movements (e.g. swinging, kicking and hooking actions) and patterns of
interlimb coordination (e.g. horizontally, diagonally and vertically located
angular positions) during performance by exploiting complex neurobiological
system properties of degeneracy and multistability. For instance, climbers
could either swing their ice tools to create their own holes or hook an existing
hole (owing to the actions of previous climbers or by exploiting the presence
of natural holes), supporting the functional role of intra-individual variability.
Seifert et al. (2011a) examined the performance of beginners and skilled
climbers as they climbed a frozen waterfall. They assessed interlimb
coordination patterns by using the angle between the horizontal line and the
displacement of the heads of the left- and right-hand ice tools for the upper-
limb coordination. Lower-limb coordination patterns corresponded to the
angle between the horizontal line and the displacement of the left and right
crampons (Figure 16.2).
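A simple, hypothetical implementation of this coordination measure is sketched below (Python with NumPy); the coordinates are invented, and the sign convention follows the description of Figure 16.2 (positive when the right limb is above the left limb).

```python
import numpy as np

def horizontal_limb_angle(left_xy, right_xy):
    """Angle (degrees) between the horizontal and the line joining the left and
    right ice-tool heads (or crampons). Positive when the right limb is above
    the left limb, negative when below, ~0 for a 'ladder-like' position."""
    dx = right_xy[0] - left_xy[0]
    dy = right_xy[1] - left_xy[1]        # y measured upwards
    return np.degrees(np.arctan2(dy, abs(dx)))

print(horizontal_limb_angle((0.0, 1.2), (0.5, 1.2)))   # 0.0 -> horizontal mode
print(horizontal_limb_angle((0.0, 1.0), (0.5, 1.4)))   # >0  -> right limb higher
```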
When both groups of climbers climbed an ice fall with a quasi-vertical
slope (ranging between 80 and 90 degrees), beginners showed low levels of intra-
individual movement and interlimb coordination variability, as they varied
their upper-limb and lower-limb coordination patterns much less frequently
and extensively than the experts. As in the study of the novice breaststroke
swimmers, Seifert et al. (2011a) observed patterns of movement that were
indicative of the intentions idiosyncratic to this group. Beginners mostly used
horizontally and diagonally located angular positions (since limb anchorages
are at the same level for the horizontal angle, the arms or legs appear in an in-
phase coordination mode). This highly stable behaviour resembled climbing
up a ladder and led them to maintain a static ‘X’ body position with arms and
legs extended or with arms flexed and legs extended, corresponding to a
freezing of the degrees of freedom of the motor system. Owing to their lack
of attunement to information from properties of the ice fall, novices tended to
swing their ice tools and kick with their crampons more frequently than
experts, in patterns consistent with seeking deep anchorages, instead of
exploiting existing holes in the ice fall. Like the novice breaststrokers, the
novice ice climbers tended to prioritize stability and security of posture in
interacting with environmental constraints, rather than speed and efficiency
of movement.
Figure 16.2 Angle between horizontal, left limb and right limb (left panel); modes of limb coordination
as regards the angle value between horizontal, left limb and right limb (right panel). The angle between
the horizontal line and the left and right limbs was positive when the right limb was above the left limb
and negative when the right limb was below the left limb
Throwing: Boccia
Goal-directed throwing actions typically need to balance the requirements of
speed and accuracy and the manipulation of task goals is a powerful
constraint on emergent throwing patterns. For example, in throwing tasks, the
alteration of target location simultaneously imposes constraints on both
movement speed (and by extension movement force) and accuracy.
Variability in movement kinematics has been demonstrated in throwing tasks
including javelin (Bartlett et al. 1996), basketball (Button et al. 2003),
underarm (Kudo et al. 2000) and overarm throwing (Wagner et al. 2012) and
a Frisbee®-throwing task (Yang and Scholz 2005).
This body of work confirms that kinematic variability is influenced by task
constraints. In precision throwing tasks, the relative invariance in inter-trial
spatial release points (Hirashima et al. 2002) of thrown objects and decreases
in kinematic variability closer to object release (Wagner et al. 2012) point to
the observed variability in joint parameters serving a functional purpose. For
example, increased variability in distal joints like the elbow and wrist
probably compensates for variations in proximal joints like the shoulder, in
that individuals can use a number of joint configurations to maintain desired
sets of release parameters (e.g. height, speed, and angle) to satisfy various
task constraints.
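The redundancy between release parameters can be made explicit with elementary projectile mechanics: ignoring air resistance, many combinations of release speed and angle produce essentially the same landing distance, which is precisely the kind of degeneracy that compensatory joint variability can exploit. The numbers below are illustrative only.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def throw_distance(speed, angle_deg, release_height):
    """Horizontal distance travelled by a ball released at the given speed (m/s),
    angle above horizontal (degrees) and height (m), ignoring air resistance."""
    theta = np.radians(angle_deg)
    vx, vy = speed * np.cos(theta), speed * np.sin(theta)
    # time of flight from h + vy*t - 0.5*G*t^2 = 0 (positive root)
    t = (vy + np.sqrt(vy ** 2 + 2 * G * release_height)) / G
    return vx * t

# Two different release solutions landing at (almost) the same distance
print(round(throw_distance(5.00, 30.0, 0.5), 2))   # -> ~2.87 m
print(round(throw_distance(5.44, 20.0, 0.5), 2))   # -> ~2.87 m
```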
Compensatory variability has also been observed in a three-dimensional
kinematic analysis of athletes with motor system disorders, such as cerebral
palsy. Cerebral palsy is a clinical term summarizing a group of non-
progressive conditions arising from damage to the brain during early
development. Individuals with cerebral palsy display obvious movement
difficulties in performing tasks of everyday living, typically showing longer,
slower movements, with disturbed speeds and trajectories, particularly at the
elbow, wrist, and fingers (Jaspers et al. 2009).
Boccia is a precision throwing sport in the Paralympics, similar in nature to
lawn bowls or the French game of pétanque. It is played indoors on a firm,
flat surface, where the aim of the game is to land six balls closer to the target
ball than the balls thrown by an opponent. The sport requires a high degree of
muscle control, concentration and tactical awareness. O'Donovan et al.
(2011) recorded the kinematics of four cerebral palsy athletes who had
competed at the Paralympics. In this study, the athletes were required to
throw to four different distances (three metres, five metres, seven metres and
nine metres). Three-dimensional movement analysis showed that the athletes
displayed kinematic variability comparable to typically developed persons
but in the context of distinctly individual movement patterns arising from the
organismic constraints imposed by their type of cerebral palsy. Adopting an
underarm, typically pendulum-like throwing style, all four athletes produced
relatively invariant ball release locations, despite systematically adjusting key
release parameters (primarily adapting release angle and speed to changes in
target distance) to modify the distance thrown. Distal joints (e.g. elbow and
wrist) typically showed increased relative variability compared with proximal
joints (e.g. shoulder), indicating that kinematic variability is necessary to
adapt to task constraints and preserve release parameters in throwing (Figure
16.3).
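A simple way to express this proximal–distal difference is the coefficient of variation of a joint-level measure across trials, as in the illustrative sketch below; the values are invented and are not O'Donovan et al.'s (2011) data.

```python
import numpy as np

def coefficient_of_variation(values):
    """Relative (percentage) variability of a release-event measure across trials."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Illustrative angular velocities (deg/s) at ball release over five throws
shoulder = [210, 214, 208, 212, 211]   # proximal joint: tightly clustered
wrist    = [540, 610, 480, 650, 520]   # distal joint: larger relative spread

print("CV shoulder: %.1f%%" % coefficient_of_variation(shoulder))
print("CV wrist:    %.1f%%" % coefficient_of_variation(wrist))
```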
Figure 17.3 shows the hypothetical landscape change for the 90-degree
relative-phase learning (Newell et al. 2009). The top panel of Figure 17.3
depicts the landscape at the beginning stage of the 90-degree relative-phase
learning where, although the trajectory from initial position C shows a
temporarily stable target pattern of 90-degree relative phase, the trajectories
from all initial positions still tend toward the stable in-phase pattern.
Continuing practice results in further change in the layout of the landscape.
The middle panel of Figure 17.3 shows the landscape at transition where
selected initial positions will tend to the stable 90-degree relative phase
target pattern but the majority of the initial positions still converge to the in-
phase pattern. The landscape continues to deform with further practice and
the probability of converging to the stable 90-degree relative phase from any
initial position is greatly increased. Practice contributes to the change of
landscape and the change of landscape creates a new stable coordination
pattern.
Figure 17.3 (a) Landscape of learning the 90-degree phase task of the HKB model; at the beginning of
practice (C = 0.4) only temporary stabilization of the target phase x0 = 0.25 can be achieved when
starting from special initial conditions close to C; (b) right at the transition (C = 0.425) the target phase
x0 = 0.25 shows one-sided stability: initial conditions close to C will be attracted to the new attractor.
Note that, in this situation, the system is very sensitive to noise perturbations; (c) after sufficient
practice (C = 0.525), all initial conditions close to the target attractor x0 = 0.25 will converge to the
fixed point (reproduced from Newell et al. 2001)
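The deformation of the landscape with practice can be illustrated with a toy potential of the HKB type, to which a term of growing strength pulls behaviour toward the required 90-degree relative phase. The functional form and parameter values below are purely illustrative and are not the model of Newell et al. (2001, 2009); the sketch merely shows how increasing the learning parameter shifts and eventually replaces the intrinsic attractors.

```python
import numpy as np

def landscape(phi, c_learn, a=1.0, b=1.0, target=np.radians(90)):
    """Illustrative attractor landscape: an intrinsic HKB-like potential
    (-a*cos(phi) - b*cos(2*phi)) plus a learning term of strength c_learn
    pulling behaviour toward the required 90-degree relative phase.
    Parameter names and values are illustrative, not Newell et al.'s."""
    return -a * np.cos(phi) - b * np.cos(2 * phi) - c_learn * np.cos(phi - target)

phi = np.linspace(-np.pi, np.pi, 720, endpoint=False)
for c in (0.0, 1.0, 10.0):       # no practice, early practice, extensive practice
    v = landscape(phi, c)
    # local minima of the periodic landscape = stable coordination patterns
    stable = phi[(v < np.roll(v, 1)) & (v < np.roll(v, -1))]
    print("c = %4.1f  attractors near (deg):" % c, np.round(np.degrees(stable)))
```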
Averaging masks the individual learning processes
The existing structure of the motor landscape determines the ability to
perform particular coordination patterns (whether there is an attractor at the
particular location); performing a particular coordination pattern will also
modify the structure of the motor landscape. When the current landscape
conforms to the required coordination pattern of the target task, the learning
curve usually shows improvement in the precision and/or stability of the
performance and the learning dynamics may be modelled as approaching a
fixed-point attractor (Newell et al. 2009). Theoretically, the dynamics close
to the fixed point show an exponential function that provides a single
timescale as the basis of the improvement rate. However, if the learning curves
are averaged among different individual learners, it is likely that the resulting
improvement rate does not capture any individual's learning rate. The
exponential nature of the learning curve is also lost in the averaging process.
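This point can be demonstrated in a few lines of code: averaging exponential learning curves with different individual rates yields a curve whose apparent rate matches none of the learners and which is no longer a single-timescale exponential. The rates below are arbitrary.

```python
import numpy as np

trials = np.arange(1, 101)
individual_rates = np.array([0.02, 0.08, 0.30])     # one timescale per learner

# Exponential individual learning curves: error decays at each learner's own rate
curves = np.exp(-np.outer(individual_rates, trials))
group_mean = curves.mean(axis=0)

# Fitting a single exponential to the averaged curve (log-linear regression)
fitted_rate = -np.polyfit(trials, np.log(group_mean), 1)[0]
print("individual rates:", individual_rates)
print("rate fitted to the averaged curve: %.3f" % fitted_rate)
# The averaged curve is a mixture of exponentials: its apparent rate matches
# none of the learners and it is no longer a single-timescale exponential.
```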
Within the class of fixed-point-attractor learning, although the dynamics of
learning can be approximated by an exponential function, there may be other
dynamics involved in the process. Based on the frameworks of multiple
timescales of learning and development, a two-timescale landscape model of
a sensory–motor learning task has been established to examine the co-
existing adaptation and learning dynamics (Newell et al. 2009). The two-
timescale model of adaptation and learning was derived from a
decomposition of the performance dynamics into separate adaptation and
learning processes.
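In schematic form, such a decomposition treats observed performance as the sum of a slow, persistent learning process and a fast adaptation process that restarts at the beginning of each practice session; the sketch below, with arbitrary parameter values, illustrates the idea rather than the specific model of Newell et al. (2009).

```python
import numpy as np

def two_timescale_performance(n_sessions=4, trials_per_session=20,
                              tau_slow=60.0, tau_fast=5.0):
    """Illustrative two-timescale performance curve: a slow exponential
    improvement that persists across sessions plus a fast adaptation
    ('warm-up') process that restarts at the start of every session."""
    scores = []
    for session in range(n_sessions):
        for trial in range(trials_per_session):
            total_trial = session * trials_per_session + trial
            slow = 1.0 - np.exp(-total_trial / tau_slow)      # persistent learning
            fast = 0.3 * np.exp(-trial / tau_fast)            # resets each session
            scores.append(slow - fast)
    return np.array(scores)

perf = two_timescale_performance()
# The first trials of each session dip (re-adaptation) even though the slow
# learning component keeps improving across the whole practice sequence.
print(np.round(perf[[0, 19, 20, 39, 40, 59, 60, 79]], 2))
```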
Figure 17.4 shows the graphical illustration of the two-timescale model.
The fast timescale, indicating a large change of performance level, describes
the adaptation phenomenon that is observed at the beginning of each practice
session. The slow timescale, on the other hand, representing the small
changes over time, reflects the persistent change of learning. Other processes
of different timescales, such as fatigue/inhibition, may also be identified and
included to make the model more comprehensive (e.g. Newell et al. 2010).
These different processes dominate different parts of the practice sequence
and will be masked if averaging over the sequence of trials is implemented in
data processing. Contemporary work in neuroscience has also shown that
memory consolidation involves multiple processes, each with its own
timescale of change in the performance dynamics (Kandel 2006; Shadmehr
and Holcomb 1997; Tse et al. 2007). The averaging technique has been
widely practised in the processing of learning data. However, the analyses
based on averaged data do not reflect well the characteristics of the individual
learning performance.
Self-organization in learning and development
Practice is one of the most important factors in motor skill learning. Through
practice, learners try to produce the target movement under the organismic,
environmental and task constraints (Newell 1986). Practice sessions provide
the opportunity for learners to organize their movement systems and
perform the target movement repeatedly; as they do so, the movement
subsystems interact with one another en route to the equilibrium state of the
system. Consequently, the emergent movement performance reflects the
results of a self-organization process in a complex dynamical system.
Figure 17.4 A two-timescale landscape model associated with Snoddy's (1926) score data (black dots)
as elevation levels. The four clusters correspond to the four practice sessions. The x behavioural
variable corresponds to the slow timescale (shallow dimension), whereas the y variable corresponds to
the fast timescale (steep dimension)