Points of Origin

Points of Origin is a series of vfx-driven dance films on the subject of computer vision, mathematics and machine learning, which I am creating in collaboration with dancers in Naples and London, with mentorship from Fiona Zisch, course leader of the Bartlett School of Architecture's MArch Design for Performance and Interaction, and from tutor and choreographer Alexander Whitley.

The project is a year-long development of short films funded by a Develop Your Creative Practice grant from the Arts Council, titled "Developing screendance artworks through vfx, reality capture and game engines".

Above is a short excerpt exploring the use of machine learning for separating a dancer from their environment and estimating material maps which allow for complex relighting and shading effects.
Singular Value Decomposition

In the images and videos below, I am using Singular Value Decomposition (SVD), a standard matrix factorisation available in numpy and pytorch for breaking matrices down into their components. It has uses in data science, machine learning and image compression. Here I am using it for stylistic effect, seeing how far I can reduce the data and still have the dancer's body recognisable.

Reducing the image's red, green and blue matrices down to an SVD value of 1 gives us a very abstract representation reminiscent of colour field painting. As we recompose the image by combining higher singular values, we gradually get something resembling our original input.
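For those curious about the mechanics, here is a minimal numpy sketch of that per-channel reconstruction (the function name, the uint8 frame format and the stand-in frame are my own illustration, not the project's actual pipeline):

```python
import numpy as np

def svd_reconstruct(image, k):
    """Recompose an RGB image from the first k singular values of each channel."""
    out = np.zeros(image.shape, dtype=np.float64)
    for c in range(image.shape[2]):
        # Factorise the channel matrix: A = U @ diag(S) @ Vt
        U, S, Vt = np.linalg.svd(image[:, :, c].astype(np.float64),
                                 full_matrices=False)
        # Keep only the k largest singular values, then recompose
        out[:, :, c] = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
    return np.clip(out, 0, 255).astype(np.uint8)

# Stand-in frame; in practice each video frame would be processed in turn.
frame = np.random.randint(0, 256, (480, 854, 3), dtype=np.uint8)
abstract = svd_reconstruct(frame, k=1)  # k=1 gives the colour-field abstraction
closer = svd_reconstruct(frame, k=7)    # higher k approaches the original
```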

The input video processed with an SVD value of 2

The input video processed with an SVD value of 7

Revealing the SVD image using a frame-difference buffer that accumulates over time: the camera is static, so I can subtract the background to get a rough body shape, then combine this image over multiple frames to get a trail effect.
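A rough sketch of that buffer logic, assuming OpenCV for video I/O (the file name, threshold and decay factor are placeholders of mine, not the project's values):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")   # hypothetical input path
ok, background = cap.read()           # static camera: use the first frame as background
bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

trail = np.zeros(bg_gray.shape, dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Rough body mask: pixels that differ from the static background
    diff = cv2.absdiff(gray, bg_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Accumulate the mask over time with a slow decay to leave a trail
    trail = np.maximum(trail * 0.95, mask.astype(np.float32))
    cv2.imshow("trail", trail.astype(np.uint8))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
```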

Reconstructing an image from a series of singular values

Camera tracking and texture projection
In the two examples below, I am tracking the camera in three dimensions and 1) using texture projection to paint a heatmap of contact between the floor and the dancer, and 2) projecting the image of the dancer's body onto three-dimensional planes in sync with their movement.

To create the heatmap of contact with the floor, I tracked the camera movement and manually painted a large textured plane from the camera viewport using projection painting in Blender. The dancer is separated from the background using SwitchLight's extraction machine learning model; while not perfect, the output can be cleaned up manually in compositing software, and it is considerably faster than traditional methods.
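The painting itself was done by hand, but the accumulation behind a contact heatmap can be sketched in a few lines, assuming per-frame contact masks have already been reprojected into the floor plane's UV space (the mask source, decay and colormap here are my assumptions, not the Blender workflow described above):

```python
import numpy as np
from matplotlib import cm

def accumulate_heatmap(contact_masks, decay=1.0):
    """Sum per-frame floor-contact masks into one heatmap texture.

    contact_masks: iterable of 2D float arrays in floor UV space,
    1.0 where the dancer touches the floor, 0.0 elsewhere.
    """
    heat = None
    for mask in contact_masks:
        heat = mask.astype(np.float64) if heat is None else heat * decay + mask
    heat /= max(heat.max(), 1e-8)                      # normalise to [0, 1]
    return (cm.inferno(heat) * 255).astype(np.uint8)   # scalar field -> RGBA texture

# Example: ten synthetic frames of a small contact patch drifting across the floor
masks = [np.pad(np.ones((8, 8)), ((i, 56 - i), (i, 56 - i))) for i in range(10)]
texture = accumulate_heatmap(masks)
```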