MIT CSAIL Researchers Develop Video Processing Algorithms to Magnify Minute Movements and Changes in Color
Challenge
Analyze video to detect and amplify imperceptible movements and variations in color

Solution
Use MATLAB to develop and refine spatial decomposition and temporal filtering algorithms, Parallel Computing Toolbox to accelerate their execution, and MATLAB Compiler to package them as standalone software

Results
- Collaboration with other researchers improved
- Multiple experiments run in parallel
- Integration with other programming languages enabled
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed video magnification algorithms that make virtually imperceptible changes such as movements and color variations visible to the naked eye. The team initially developed the technology to measure heartbeats from a video stream by detecting the subtle changes in skin color caused by each pulse. They have subsequently used it for numerous other applications, including studying a video of a vibrating object to reconstruct ambient sound or estimate the object’s material properties.
The CSAIL team used MATLAB® to develop, refine, and deploy their video magnification algorithms.
“Like many research teams and universities, we value the ability to rapidly test ideas,” says Michael Rubinstein, CSAIL research affiliate. “With MATLAB I can quickly write a prototype algorithm and see if it works. I can then share it with students and collaborators to further build on and refine the prototype.”
Video cameras are not the best sensors for detecting minute color changes or vibrations, many of which appear in video recordings as extremely small changes in pixel intensity. To detect these subtle signals, the team needed to implement spatial filtering algorithms that aggregate measurements from neighboring pixels, as well as temporal filtering algorithms that analyze how those aggregates change over time.
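The two filtering steps can be sketched end to end. The following Python/NumPy sketch is illustrative only: the team worked in MATLAB, and the function name, the parameter choices, and the use of a single Gaussian blur in place of a full multiscale decomposition are all assumptions made here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, filtfilt

def magnify(video, fps, f_lo, f_hi, alpha, sigma=3):
    """Amplify subtle temporal variations in a grayscale video.

    video: array of shape (frames, height, width).
    Spatial step: a Gaussian blur aggregates neighboring pixels.
    Temporal step: a Butterworth band-pass isolates the frequency
    band of interest; the filtered signal is scaled by alpha and
    added back to the input.
    """
    # Spatial aggregation: blur within each frame only
    # (axis 0 is time, so sigma is zero along it).
    blurred = gaussian_filter(video.astype(float), sigma=(0, sigma, sigma))
    # Temporal band-pass applied along the time axis, zero-phase.
    b, a = butter(2, [f_lo, f_hi], btype="band", fs=fps)
    filtered = filtfilt(b, a, blurred, axis=0)
    return video.astype(float) + alpha * filtered

# Synthetic example: a faint 1 Hz intensity "pulse" over 3 seconds.
fps, n = 30, 90
t = np.arange(n) / fps
pulse = 0.2 * np.sin(2 * np.pi * 1.0 * t)
video = 128.0 + pulse[:, None, None] * np.ones((n, 16, 16))
out = magnify(video, fps, f_lo=0.5, f_hi=2.0, alpha=50)
```

Because 1 Hz sits inside the chosen passband, the amplitude of the pulse in `out` grows by roughly a factor of `alpha` relative to the input.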
After developing the initial algorithms, the team needed to accelerate their execution by using multiple computing cores to process multiple frames, or test multiple configurations of the algorithm, in parallel. In addition, the researchers wanted to share their code with other vision researchers and to enable anyone to process videos using their methods.
CSAIL researchers developed the video magnification algorithms in MATLAB, accelerated them with Parallel Computing Toolbox™, and deployed them with MATLAB Compiler™.
Working in MATLAB with Image Processing Toolbox™, the team implemented an initial spatial decomposition algorithm that analyzes the area around each pixel at several scales to generate an accurate measurement of color at that point in the frame. For improved accuracy, they later updated this algorithm to use changes in (spatial) phase of image subbands, computed from local wavelets applied to the frame, instead of using color directly.
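Multiscale analysis of this kind is commonly built on image pyramids. Below is a minimal Python sketch of a Gaussian pyramid and its band-pass Laplacian counterpart; the simple blur-and-downsample scheme shown is an assumption for illustration, not the team's actual decomposition.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(frame, levels=4):
    """Analyze the area around each pixel at several scales by
    repeatedly blurring and downsampling the frame."""
    pyramid = [frame.astype(float)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyramid[-1], sigma=1.0)
        pyramid.append(blurred[::2, ::2])  # downsample by 2
    return pyramid

def laplacian_pyramid(frame, levels=4):
    """Band-pass decomposition: each level holds the detail lost
    between one scale and the next coarser one."""
    gp = gaussian_pyramid(frame, levels)
    lp = []
    for fine, coarse in zip(gp[:-1], gp[1:]):
        # Upsample the coarser level and subtract it from the finer one.
        up = zoom(coarse, 2, order=1)[:fine.shape[0], :fine.shape[1]]
        lp.append(fine - up)
    lp.append(gp[-1])  # keep the coarsest level itself
    return lp
```

Because each detail level is defined as an exact difference, upsampling the coarsest level and adding the details back reconstructs the original frame.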
Part of the spatial decomposition algorithm was based on code written by a researcher at another university. The team incorporated this code, which comprised both MATLAB and MEX functions, into their MATLAB implementation.
For the temporal filtering algorithm, the team used MATLAB and DSP System Toolbox™ to apply Fourier transforms as well as Butterworth and other passband filters to the signals generated via spatial decomposition. This filtering enabled the algorithm to reduce noise by focusing on the specific frequency range of movement or color variation the researchers wanted to magnify or analyze.
During algorithm development, the team generated plots in MATLAB to visualize signals.
Using Parallel Computing Toolbox, the team sped up execution of the algorithm by processing multiple frames simultaneously on a 24-core computer. They also ran multiple experiments in parallel to rapidly test and tune algorithm parameters.
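This parallel pattern, mapping one function over independent frames, translates directly to other environments. A Python sketch using a thread pool follows; the frame count, the blur used as the per-frame work, and the worker count are all illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.ndimage import gaussian_filter

def process_frame(frame):
    """Per-frame work (here, a spatial blur); frames are independent,
    so they can be processed concurrently."""
    return gaussian_filter(frame, sigma=3)

video = np.random.default_rng(0).random((24, 64, 64))  # 24 frames

# Map the same function over every frame on a pool of workers,
# analogous to converting a serial loop into a parallel one.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_frame, video))

processed = np.stack(results)
```

Because each frame is processed independently, the work scales with the number of available cores, which is how the team exploited their 24-core machine.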
After sharing their results and MATLAB code with other researchers, the CSAIL researchers used MATLAB Compiler to create standalone versions of their algorithms for Windows®, Linux®, and Mac OS X operating systems. Anyone can use these versions, even if they do not have MATLAB installed.
- Collaboration with other researchers improved. “Many researchers in the computer vision community use MATLAB,” says Rubinstein. “MATLAB code is often easier to read than C++, so students or other researchers who are inspired by the project can download the code and understand it. It was simple for us to compile executables that anyone could use.”
- Multiple experiments run in parallel. “A big part of our research is trying and applying different ideas and algorithms to test which work better,” notes Rubinstein. “Parallel Computing Toolbox gave us a very easy, accessible way to run multiple experiments in parallel or process multiple frames in parallel—often just by changing a for loop to a parfor loop.”
- Integration with other programming languages enabled. “Many MATLAB functions we use provide sufficient performance for our needs,” says Rubinstein. “If we do need to speed up a particular part of an algorithm, MATLAB gives us the flexibility to write it in C++ and include it as a MEX function, which can be conveniently called from the MATLAB code.”
MIT is among the 1300 universities worldwide that provide campus-wide access to MATLAB and Simulink. With the Campus-Wide License, researchers, faculty, and students have access to a common configuration of products, at the latest release level, for use anywhere—in the classroom, at home, in the lab, or in the field.