Work smart and fast with RADiCAL's innovative, AI-based motion capture system.
RADiCAL is a first-of-its-kind motion capture solution that is entirely powered by AI. No suits, no hardware. All you need to use RADiCAL is a regular consumer-grade camera, including the one on your smartphone.
RADiCAL’s philosophy is simple: traditional motion capture is too expensive and requires bulky, specialized hardware, suits, trackers, controlled studio environments and loads of skilled experts to make it all work. RADiCAL replaces all of those traditional physical features with software that anyone can access.
You can record any regular video via the Motion mobile app and upload it into the RADiCAL cloud for processing. Within minutes, you’ll get results back that can be used in all common 3D content creation pipelines. To enable a fast feedback loop, RADiCAL also gives you a free visualization tool so you can preview your results in 3D and VR (Virtual Reality). That’s how RADiCAL allows anyone to create immersive content without spending a fortune.
Gavan Gravesen, RADiCAL’s CEO and co-founder, took some time to sit down with us to talk about their technology and the company’s vision.
This technology is a game changer. How did you develop AI-powered motion capture?
As co-founders, we’ve had an affinity for the audiovisual industries for a long time. Many of my family members work in the film and entertainment sector, and I’ve been fortunate to partner with great people to build creative projects and businesses, for example when I co-founded Slated.com. It was in part through those experiences that I realized just how awkward, slow, disjointed and expensive the traditional 3D content creation pipeline is.
Then, back in 2014–15, I started working with computer vision solutions focused on the human body. It was around that time that deep learning models started to dramatically outperform traditional, feature-engineered approaches in computer vision. A lot of that inspiring work was being done in academia across America, Europe and Asia. So we started reading every single paper in the field, and that became the groundwork for the science that RADiCAL came to develop. We had begun to believe that machine learning could replace hardware-based approaches to virtualizing human motion in 3D.
But the single most important catalyst for RADiCAL’s genesis was finding the right partner in my co-founder and CTO Anna-Chiara Bellini. Machine learning is such a hot area, and what we want to do is so challenging and unique, that the number of people in the world who could do this remains extremely small. Fortunately, Anna contributes that rare combination of scientific genius, intuitive foresight, professional tenacity and commercial curiosity that is needed to develop ideas into frontier technology, and from there into a viable product for users. Bringing those skills together made it possible for us to work across several disciplines, including deep learning, robotics, computer vision and biomechanics, to get the product off the ground.
What does the complete pipeline of your 3D motion capture look like from start to finish?
We combine convolutional neural networks, generative models, photogrammetry, and biomechanics in the pipeline. But none of that is relevant or apparent to the user. You simply download the RADiCAL app to your phone, record a few clips of the motion you wish to capture, upload them to the cloud, and examine your results through our 3D visualizer.
The motion capture file is yours to download and can be used in 3D animation software such as Autodesk’s tools and Blender, and in game engines like Unity and Unreal. We always aim to make our motion as smooth and plausible as possible, so that it can be used right out of the box.
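As a toy illustration of what smoothing a motion track involves (a generic technique sketch, not RADiCAL's actual algorithm), here is exponential smoothing applied to a noisy sequence of joint angles:

```python
# Toy example: exponential smoothing of a noisy joint-angle track.
# This illustrates the general idea of motion smoothing only; it is
# not RADiCAL's actual smoothing method.

def smooth(track, alpha=0.3):
    """Exponentially smooth a sequence of joint angles (degrees).
    Lower alpha = smoother but laggier output."""
    out = [track[0]]
    for v in track[1:]:
        # Blend the new sample with the previous smoothed value
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

noisy = [10.0, 30.0, 12.0, 28.0, 14.0]
print(smooth(noisy))
```

The trade-off is visible in the `alpha` parameter: values near 0 suppress jitter at the cost of responsiveness, values near 1 track the raw signal closely.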
Standard motion capture requires a heavy suit and a lot of sensors. How did you get rid of these?
We have trained our AI to detect and reconstruct the joints of the human body. Just like traditional motion capture systems, the 3D coordinates and biomechanical rotations of these joints are then computed to collectively represent the skeleton of the body in question.
In essence, therefore, an important part of our AI is that it replaces the hardware that tracks the human joints. We of course add a lot more smart data and algorithms besides that, but to answer your question, that’s how we replace the suits, sensors and specialized cameras.
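The idea of turning per-joint rotations into a skeleton can be sketched with a toy forward-kinematics example: given each joint's local rotation and bone length, accumulate them down the chain to recover world-space positions. This is a generic, planar illustration with made-up values, not RADiCAL's pipeline:

```python
import math

# Minimal planar forward kinematics: each joint stores a local rotation
# (radians) and a bone length; accumulating them down the chain yields
# world-space joint positions. Illustration only.

def forward_kinematics(joints):
    """joints: list of (local_angle, bone_length) tuples, root first.
    Returns world-space (x, y) positions for the end of each bone."""
    x, y, angle = 0.0, 0.0, 0.0
    positions = []
    for local_angle, length in joints:
        angle += local_angle           # accumulate rotation down the chain
        x += length * math.cos(angle)  # advance along the rotated bone
        y += length * math.sin(angle)
        positions.append((x, y))
    return positions

# Example: hip -> knee -> ankle with a 90-degree bend at the knee
leg = [(0.0, 0.45), (math.pi / 2, 0.40)]
print(forward_kinematics(leg))
```

Real skeletons work in 3D with full rotation matrices or quaternions per joint, but the accumulation principle is the same.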
What were your biggest challenges and how have you solved them?
To date, our biggest struggle has come from a lack of standards in the animation industry. With lots of closed-source and proprietary formats, it can be a hassle to find solutions that work across many differing workflows. Between rigs and forward and inverse kinematics solutions, there is a lot to handle. Essentially, however, those challenges have become part of our mission. We very much see it as our purpose to bring some degree of standardization and scale to the 3D industry, at least with respect to human motion.
How does RADiCAL integrate with AR & VR?
There are two ways RADiCAL integrates with AR & VR (Augmented Reality & Virtual Reality) solutions.
First, we of course want to be part of the immersive content that is produced in post. As with any other 3D content, we think that shortening the production pipeline will give more developers the chance to create content to power the AR & VR revolutions. That’s why we’re looking to use and strengthen standards that will accelerate XR content creation, including the ability to preview our results in VR, which you can already do.
Second, with a bit more time, we aim to deliver a real-time solution that can provide human telepresence integrations. A few things have to come together for that to happen, including with respect to the devices the hardware industry makes available. But we’d like to think we’re perfectly placed to play a dominant role in the real-time industry when everything comes together.
How do NVIDIA and Kickstart Accelerator support RADiCAL?
The principal benefit we experienced as part of the Kickstart program is access. Access to technical mentors, access to executives and access to academic resources. We wish we could spend more time in Zurich, with the companies down there, and especially with the folks at ETH.
NVIDIA has been instrumental in a different, but profound way. For a company their size, they’re surprisingly available and hands-on. Besides early access to state-of-the-art technology, they also provide technical advice critical to what we do around GPU optimization. But it was once we were ready to access the market that NVIDIA really stepped up. It felt like they made entire departments available to help with PR, industry relations, and just getting the word out. I cannot praise the team at NVIDIA enough — it’s been extraordinary.
How, in your opinion, will AI evolve in the future?
That’s a big question. And not one we can satisfactorily address here. However, I think you can reliably draw a few solid conclusions from what’s going on.
First, specialized, or narrow, AI is going to accelerate to the point where it will outperform even the wildest expectations we dismissed as fantasy just a few years ago. RADiCAL is narrowly focused on emulating the brain’s ability to reconstruct what human eyes can’t see, in terms of human motion. To do that, we’re most involved in, and excited about, generative models, including adversarial networks and reinforcement learning, especially models that can handle temporal correlation so as to meaningfully mimic human cognitive patterns.
Second, it’s safe to say that the rise of AI presents a tremendous growth opportunity that will have a positive impact on the way we live our lives. I can say that, because AI is not new at all. Technology that all of us rely on every day, and even love, arguably represents highly specialized AI in some shape or form.
Any recommended setup / rig / headset?
In terms of the rig, we’ll be releasing results for the HumanIK rig first, and then add more as we learn what most users want most of the time. We will make more announcements quite soon.
In terms of the headset, we’re not limited to any particular standard. Our visualizer displays motion capture results in 2D, 3D and VR using WebGL, which means our 3D previews are available through any regular browser, including on your phone. So you can try our VR view by slotting your phone into a $5 Google Cardboard.
Beyond that, for now, we’re focused on learning what our users want. We encourage users to reach out to us with suggestions for anything else we should support.
How many third-party plugins does RADiCAL support?
We’ve recently partnered with Sketchfab to allow users to visualize their motion results through their technology. That also means that users can publish their results to their Sketchfab account right away.
Unity and Unreal plugins will be coming soon. We’re also exploring Amazon Sumerian, MagicLeap, Facebook AR Studio and a few other platforms that are looking to streamline and unify 3D pipelines. We see ourselves as part of those efforts.
Any other output formats apart from .FBX?
We’re just a few weeks away from releasing our FBX download feature. We’ll likely add .OBJ and .BVH options pretty quickly after that.
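Of the formats mentioned, .BVH is a simple plain-text one: a HIERARCHY section describes the skeleton, and a MOTION section lists per-frame channel values. As a hypothetical illustration (joint names, offsets and values here are invented, and this is not RADiCAL's exporter), a minimal single-joint BVH document can be generated like this:

```python
# Illustrative sketch of the BVH text format: HIERARCHY (skeleton)
# followed by MOTION (per-frame channel values). All names and numbers
# are made up for demonstration purposes.

def make_bvh(root_name, offset, frames, frame_time=1 / 30):
    """Build a single-joint BVH document as a string.
    frames: list of (x, y, z, rx, ry, rz) channel tuples."""
    lines = [
        "HIERARCHY",
        f"ROOT {root_name}",
        "{",
        "  OFFSET {:.2f} {:.2f} {:.2f}".format(*offset),
        "  CHANNELS 6 Xposition Yposition Zposition "
        "Zrotation Xrotation Yrotation",
        "  End Site",
        "  {",
        "    OFFSET 0.00 1.00 0.00",
        "  }",
        "}",
        "MOTION",
        f"Frames: {len(frames)}",
        f"Frame Time: {frame_time:.6f}",
    ]
    for f in frames:
        # One line of channel values per frame, in CHANNELS order
        lines.append(" ".join(f"{v:.4f}" for v in f))
    return "\n".join(lines)

doc = make_bvh("Hips", (0.0, 0.9, 0.0), [(0, 0.9, 0, 0, 0, 0)])
print(doc)
```

Because the format is plain text, BVH files are easy to inspect and diff, which is part of why the format has persisted as a motion capture interchange option alongside binary formats like FBX.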
What’s next? What are your short- and long-term goals for RADiCAL?
We have several goals we’d like to accomplish.
First, we want to be the go-to motion capture system for independent 3D content creators everywhere in the world: from students, animation freelancers and artists to teams of 3D professionals and small to medium-sized agencies and studios. The importance of that emerging ecosystem is at the core of our belief system. We feel that, a few years down the line, independent creators will be largely credited for making the 3D content economy what it deserves to be. And we will be at the heart of it.
Second, for the AAAs and Majors, we want to establish ourselves as the predominant “previsualization” tool. Prototyping is an important part of the pipeline, and the ease of access and low cost of our platform are ideal for the industry.
Third, we want to bring about a genuine revolution in the way content creators animate their designs with motion. For that, we’re looking to introduce standardized rigs that are optimized to immediately absorb our motion through a cloud-based system that can be used by any 3D designer, whatever their skill level. That cloud motion engine would work best with strong partnerships, and I hope we’ll be able to announce much more about that soon.
Fourth, we have a few features to add to our product. Most importantly, we’ll shortly be releasing our Android app, which many of our users have requested. We will also add support for multiple cameras, so that we can capture the same shot from as many perspectives as possible. Lastly, we’re aiming to add custom upload profiles: with a few details about any camera, frequent users will be able to upload as much footage as they need, regardless of its origin.
RADiCAL is here to stay and grow by leaps and bounds. Best wishes for future endeavors to Gavan and his entire technical and management team.
Kudos to all of you for such innovative technology.