New App Performs Motion Capture Using Just Your Smartphone — No Suits, Specialized Cameras or Equipment Needed
Motion capture, a staple of the film and video game industries, traditionally demands a hefty investment in specialized equipment and infrastructure, often exceeding $100,000. New research, however, introduces a compelling alternative: a smartphone app combined with an AI algorithm capable of achieving comparable results.
Enter “MobilePoser,” an app poised to replace the array of systems currently required for motion capture. This innovative solution leverages data from sensors embedded in consumer devices, such as smartphones, earbuds, and smartwatches, and combines that information with artificial intelligence (AI) to track full-body poses and movements.
Motion capture is crucial for translating actors’ movements into computer-generated imagery seen on screens. A renowned example of this technology’s prowess is Andy Serkis’ performance as Gollum in the “Lord of the Rings” trilogy. Traditionally, this process entails the use of designated rooms, expensive equipment, bulky cameras, and mocap suits.
Historically, such setups have carried a staggering price tag, often reaching upwards of $100,000. More affordable alternatives, like the discontinued Microsoft Kinect, relied on stationary cameras, which meant the action had to stay within the camera’s field of view, making them impractical for on-the-go use.
The scientists, in a novel study presented at the 2024 ACM Symposium on User Interface Software and Technology, advocate for replacing traditional tech with a single, versatile smartphone app. Their creation, MobilePoser, delivers impressive accuracy by merging machine learning with advanced physics-based optimization. “This opens doors to immersive experiences in gaming, fitness, and indoor navigation, minus the specialized gear,” explained study author Karan Ahuja, Professor of Computer Science at Northwestern University.
The team relied on inertial measurement units (IMUs), sensor packages already built into smartphones that combine accelerometers, gyroscopes, and magnetometers to estimate a device’s position, orientation, and motion. On their own, these consumer-grade sensors lack the fidelity needed for precise motion capture, so the researchers compensated for their limitations with a sophisticated machine learning algorithm.
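To make the idea concrete, the sketch below is a minimal illustration, not MobilePoser’s actual code, of how raw IMU readings from a phone and a watch might be packaged into a fixed-size input for a pose-estimation model. The device names, field layout, and window size are assumptions chosen for illustration.

```python
# Minimal sketch (illustrative, not MobilePoser's code) of turning raw IMU
# readings from consumer devices into a feature window for a pose model.
from dataclasses import dataclass
import numpy as np


@dataclass
class IMUSample:
    """One reading from a single device's inertial measurement unit."""
    accel: np.ndarray        # linear acceleration (x, y, z) in m/s^2
    gyro: np.ndarray         # angular velocity (x, y, z) in rad/s
    orientation: np.ndarray  # device orientation as a quaternion (w, x, y, z)


def build_feature_window(samples_per_device: dict[str, list[IMUSample]],
                         window_size: int = 30) -> np.ndarray:
    """Flatten the most recent `window_size` samples from each device
    (e.g. 'phone', 'watch') into one (window_size, 10 * n_devices) array
    that a neural network could consume."""
    device_streams = []
    for device in sorted(samples_per_device):            # fixed device order
        window = samples_per_device[device][-window_size:]
        rows = [np.concatenate([s.accel, s.gyro, s.orientation]) for s in window]
        device_streams.append(np.stack(rows))             # (window_size, 10)
    return np.concatenate(device_streams, axis=1)


# Example: two devices, 30 most recent samples each (random stand-in data).
rng = np.random.default_rng(0)
fake = lambda: IMUSample(rng.normal(size=3), rng.normal(size=3), rng.normal(size=4))
features = build_feature_window({"phone": [fake() for _ in range(30)],
                                 "watch": [fake() for _ in range(30)]})
print(features.shape)  # (30, 20)
```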
The AI was trained on a publicly available dataset of synthesized IMU measurements derived from high-quality motion-capture data, achieving a tracking error of just 3 to 4 inches (8 to 10 centimeters). A physics-based optimizer then refines the predicted movements so they reflect what a real body can do, ruling out impossible feats such as joints bending backward or heads rotating a full 360 degrees.
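The role of that physics-based refinement can be pictured with a deliberately simplified example, assumed for this article rather than taken from the paper: predicted joint angles are clamped to anatomically plausible ranges, so a knee cannot hyperextend backward and a head cannot spin a full circle.

```python
# Simplified illustration of a plausibility constraint on predicted joint
# angles (assumed for this article; not the paper's actual optimizer).
import numpy as np

# Hypothetical per-joint limits in degrees (single flexion/rotation axis).
JOINT_LIMITS = {
    "knee": (0.0, 150.0),       # no backward bending
    "elbow": (0.0, 145.0),
    "neck_yaw": (-80.0, 80.0),  # far less than a full rotation
}


def enforce_joint_limits(predicted_angles: dict[str, float]) -> dict[str, float]:
    """Clamp each predicted joint angle to its anatomically plausible range."""
    return {joint: float(np.clip(angle, *JOINT_LIMITS[joint]))
            for joint, angle in predicted_angles.items()}


print(enforce_joint_limits({"knee": -25.0, "elbow": 90.0, "neck_yaw": 270.0}))
# {'knee': 0.0, 'elbow': 90.0, 'neck_yaw': 80.0}
```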
As Karan Ahuja noted, “Accuracy improves when multiple devices, like a smartwatch and a smartphone, are in use. Yet, the system’s adaptability is crucial. Even with just a phone, it adapts to determine your full-body pose.” This adaptability potentially paves the way for entertainment advancements, offering more immersive gaming and fitness experiences.
Moreover, possible applications in the health and fitness sectors underscore the technology’s broader potential. To spur further development, the team has made the app’s AI models and accompanying data publicly available, inviting other researchers to build on their pioneering work.
MobilePoser represents a leap forward in simplifying and democratizing motion capture. By bringing the technology within reach of everyday consumer devices, it reduces reliance on costly, cumbersome gear and opens the door to accessible, immersive digital experiences for all.