Markov Chain Generators Applied to Motion Capture Data

Simulating human-like movement is an active area of research in robotics. If we could mathematically characterize the qualities and patterns of human movement, we could build machines that integrate better into human workflows, such as improved prosthetic limbs and wearable devices. My long-term goal for this research is to use neural networks to identify movement patterns and qualities. As an initial attempt to apply predictive models to human movement, I created several Markov models using recorded motion capture data.

My first movement Markov chain generator considered only the x and y positions of each point, uncoupled, with no body-specific information. The result was excessively random: any point could jump to any recorded x location and an independently chosen recorded y location, drawn from any body part at any time. In an effort to produce more continuous data, I coupled x and y, so that a point must move from an (x,y) position to an observed subsequent (x,y) position. I tried coupling z as well, but the three-dimensional states almost never repeated, so the chain had nowhere to branch and the generated movement matched the recorded movement almost exactly. I also stipulated that each point on the body can only draw on recorded data for the same point. Thus, a hand always moves as a hand, and a right knee always moves as a right knee. The final measure taken to produce human-like data was to increase the order of the Markov generator to third order, so that the previous three positions are considered in predicting the next position. The resulting generator is used to animate the red dots in Self-Portrait 2015, posted on the lower left of this page.
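The third-order, per-point generator can be sketched roughly as follows. This is a minimal illustration, not the original code: the toy trajectory and names are mine, and each body point would get its own table built only from that point's recorded positions.

```javascript
// Build a transition table for ONE body point: keys are the last 3 observed
// (x,y) positions joined into a string, values are lists of observed next
// positions. Third order means the key spans three consecutive positions.
function buildChain(positions, order = 3) {
  const chain = {};
  for (let i = order; i < positions.length; i++) {
    const key = positions.slice(i - order, i)
      .map(p => `${p.x},${p.y}`).join('|');
    (chain[key] = chain[key] || []).push(positions[i]);
  }
  return chain;
}

// Generate by looking up the current 3-position history and picking one of
// the observed continuations at random.
function step(chain, history) {
  const key = history.map(p => `${p.x},${p.y}`).join('|');
  const options = chain[key];
  if (!options) return null; // dead end: this history was never continued
  return options[Math.floor(Math.random() * options.length)];
}

// Toy recorded trajectory for a single point (e.g. the right hand).
const recorded = [
  {x: 0, y: 0}, {x: 1, y: 0}, {x: 1, y: 1},
  {x: 2, y: 1}, {x: 1, y: 0}, {x: 1, y: 1}, {x: 2, y: 2},
];
const chain = buildChain(recorded);
const next = step(chain, recorded.slice(0, 3));
console.log(next); // an observed continuation of that 3-position history
```

Because each point only ever transitions to positions actually observed after the same three-position history, the output stays close to recorded movement while still branching wherever the history repeats.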

In another experiment, I tried looping through the points and generating each point's data from its position relative to the previous point. In other words, I generated points from adjacent points in space rather than from previous points in time. I hoped to preserve the human form by preserving the spatial relationships between adjacent points, but with no coupling between a point's previous and current positions, the result was disappointingly random.
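A rough sketch of that spatial-neighbor idea, under my own assumed data layout (frames as arrays of points in a fixed skeleton order): each point is placed using an offset sampled from the offsets observed between it and its predecessor, with no dependence on the point's own past.

```javascript
// Collect every observed offset from point i-1 to point i across all frames.
function buildOffsetTable(frames) {
  const table = {};
  for (const frame of frames) {
    for (let i = 1; i < frame.length; i++) {
      const dx = frame[i].x - frame[i - 1].x;
      const dy = frame[i].y - frame[i - 1].y;
      (table[i] = table[i] || []).push({dx, dy});
    }
  }
  return table;
}

// Place each point relative to the one just generated, walking the skeleton
// order in space; time never enters the picture, which is why the motion
// between frames comes out random.
function generateFrame(table, start, numPoints) {
  const frame = [start];
  for (let i = 1; i < numPoints; i++) {
    const offs = table[i];
    const o = offs[Math.floor(Math.random() * offs.length)];
    const prev = frame[i - 1];
    frame.push({x: prev.x + o.dx, y: prev.y + o.dy});
  }
  return frame;
}

// Two toy frames of a three-point "body".
const frames = [
  [{x: 0, y: 0}, {x: 0, y: 2}, {x: 1, y: 3}],
  [{x: 1, y: 0}, {x: 1, y: 2}, {x: 2, y: 3}],
];
const table = buildOffsetTable(frames);
const frame = generateFrame(table, {x: 0, y: 0}, 3);
console.log(frame);
```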

In the upper left is a model that generates velocity data rather than position data and adds each change in position to the previous position. The movement is smooth and lifelike, but the form of the body is completely lost. The points looked a lot like little goldfish, so I decided to run with the theme.
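The velocity variant can be sketched like this; again the toy data and names are mine, and I assume a first-order chain over deltas for brevity. The chain is built over frame-to-frame position deltas, and generation integrates a sampled delta onto the previous generated position.

```javascript
// Convert a recorded trajectory into frame-to-frame velocities (deltas).
function toVelocities(positions) {
  const v = [];
  for (let i = 1; i < positions.length; i++) {
    v.push({dx: positions[i].x - positions[i - 1].x,
            dy: positions[i].y - positions[i - 1].y});
  }
  return v;
}

// First-order chain keyed on the previous velocity.
function buildVelocityChain(vels) {
  const chain = {};
  for (let i = 1; i < vels.length; i++) {
    const key = `${vels[i - 1].dx},${vels[i - 1].dy}`;
    (chain[key] = chain[key] || []).push(vels[i]);
  }
  return chain;
}

// Sample a next velocity and add it to the current position. Because only
// deltas are constrained, motion is smooth but absolute positions (and so
// the body's form) drift freely.
function nextPosition(chain, pos, lastVel) {
  const options = chain[`${lastVel.dx},${lastVel.dy}`] || [lastVel];
  const v = options[Math.floor(Math.random() * options.length)];
  return {pos: {x: pos.x + v.dx, y: pos.y + v.dy}, vel: v};
}

const recorded = [{x: 0, y: 0}, {x: 1, y: 0}, {x: 2, y: 1},
                  {x: 3, y: 1}, {x: 4, y: 2}];
const vels = toVelocities(recorded);
const chain = buildVelocityChain(vels);
let state = {pos: {x: 0, y: 0}, vel: vels[0]};
state = nextPosition(chain, state.pos, state.vel);
console.log(state.pos);
```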

This research was done using motion capture data gathered with the help of Javier Molina at the NYU Integrated Digital Media Motion Capture Lab. The data was saved in .csv format, and the analysis was done in JavaScript with a few functions from the p5.js library.
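For context, loading the .csv might look something like the sketch below. This is plain JavaScript rather than the p5.js loadTable call the project actually used, and the paired "name_x,name_y" column convention is an assumption about the export format, not the real schema.

```javascript
// Parse a mocap-style CSV into an array of frames, where each frame maps an
// assumed point name to its {x, y} position. Column names here are invented
// for illustration.
function parseMocapCsv(text) {
  const [header, ...rows] = text.trim().split('\n');
  const cols = header.split(',');
  return rows.map(row => {
    const vals = row.split(',').map(Number);
    const frame = {};
    // Columns are assumed to come in x/y pairs like "hand_x,hand_y".
    for (let i = 0; i < cols.length; i += 2) {
      const name = cols[i].replace(/_x$/, '');
      frame[name] = {x: vals[i], y: vals[i + 1]};
    }
    return frame;
  });
}

const sample = 'hand_x,hand_y,knee_x,knee_y\n1,2,3,4\n5,6,7,8';
const mocapFrames = parseMocapCsv(sample);
console.log(mocapFrames[0].hand); // {x: 1, y: 2}
```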

Here is the code: