
This article was published on January 30, 2018

Programmers use TensorFlow AI to turn any webcam into Microsoft Kinect

A pair of AI developers turned a $10 webcam into a motion-tracking system. It’s like a DIY Kinect with a Google brain. And best of all, they named it Skeletron.

The project, a collaboration between Or Fleisher and Dror Ayalon, was developed using TensorFlow, an open-source AI platform created by Google, and Unity, a popular video game engine.
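The article doesn’t spell out how the pieces fit together, but the basic shape of a webcam-to-Unity pose pipeline is straightforward: grab frames from the camera, run a pose-estimation model, and stream the detected joints to the game engine. Here is a minimal, hypothetical sketch of that loop in Python. It stands in Google’s MoveNet model from TensorFlow Hub (a newer model; Skeletron predates it) for whatever network the project actually uses, and plain UDP JSON for the Unity bridge; both are assumptions rather than details from the project:

```python
# Hypothetical sketch, not Skeletron's actual code: webcam frames go
# through a TensorFlow pose model, and the detected joints are streamed
# to Unity as UDP JSON. MoveNet, the port number, and the JSON format
# are all illustrative assumptions.
import json
import socket

import cv2                    # pip install opencv-python
import tensorflow as tf
import tensorflow_hub as hub  # pip install tensorflow-hub

UNITY_ADDR = ("127.0.0.1", 5005)  # a Unity-side script would listen here

model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cap = cv2.VideoCapture(0)         # the $10 webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MoveNet Lightning expects a 192x192 RGB image as an int32 batch.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    inp = tf.cast(tf.image.resize_with_pad(tf.expand_dims(rgb, 0), 192, 192), tf.int32)
    # Output shape [1, 1, 17, 3]: 17 joints as (y, x, confidence), normalized to [0, 1].
    joints = movenet(inp)["output_0"][0, 0].numpy()
    sock.sendto(json.dumps(joints.tolist()).encode("utf-8"), UNITY_ADDR)
```

On the Unity side, a script listening on the same port would parse the JSON each frame and drive a rigged skeleton with the joint positions.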

Motion-tracking usually requires expensive cameras, high-end computers, and someone to wear a skin-tight bodysuit with those silly plastic balls Velcroed all over. With this project, though, all you need is that webcam you’ve had buried in your junk drawer for half a decade:

[Embedded video: Skeletron demo]

Using AI for motion-tracking isn’t exactly new; Microsoft’s Kinect was an early example of a consumer product that utilized machine learning. However, it required several sensors and cost $150 – not to mention it’s dead now.

Fleisher and Ayalon’s AI is experimental, but it’s still exciting to see real-time motion tracking happen at all with cheap hardware, especially considering it outputs to a video-game engine already popular with AR and VR developers.

A software solution for motion-tracking based on open-source AI and dirt-cheap webcams could revolutionize a myriad of industries, not just gaming. It could enable medical professionals to analyze movement and gait without highly specialized hardware, which could improve the study of neurological disorders and help with orthopedic rehabilitation.
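To make the gait example concrete: most such measurements reduce to simple geometry on the tracked joints. Below is a hypothetical sketch of computing a knee flexion angle from one frame of pose keypoints, assuming the common 17-point COCO skeleton layout (the article doesn’t say which layout Skeletron uses):

```python
# Hypothetical illustration: a knee angle from pose keypoints, the kind
# of measurement gait analysis relies on. Indices follow the 17-keypoint
# COCO layout, which is an assumption about the skeleton format.
import numpy as np

RIGHT_HIP, RIGHT_KNEE, RIGHT_ANKLE = 12, 14, 16  # COCO keypoint indices

def joint_angle(a, b, c):
    """Angle at b (degrees) formed by points a-b-c, each an (x, y) pair."""
    v1 = np.asarray(a) - np.asarray(b)
    v2 = np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# keypoints: (17, 2) array from a pose model for one video frame
keypoints = np.random.rand(17, 2)  # placeholder data for the example
knee_flexion = joint_angle(keypoints[RIGHT_HIP],
                           keypoints[RIGHT_KNEE],
                           keypoints[RIGHT_ANKLE])
print(f"Right knee angle: {knee_flexion:.1f} degrees")
```

Tracked over a walking sequence, an angle series like this is the raw material for the gait metrics clinicians care about.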

And, if you watched the video all the way to the end, you’ll see it’s also a cool way to visualize dance moves in real time. It’s always a party when you bring TensorFlow and a webcam.

Update 1/30 8:57 CST: We reached out to the developers to find out what was next for this project. Fleisher told TNW:

Skeletron was built as a project for the NYU ITP XStory (Experiments in Storytelling) research group. This project will be open-sourced by the group and will continue to be developed together with future fellows.

Since the technical aspects of Skeletron are working at this point, we are looking into experiments that actually tap into this tool’s abilities that weren’t possible in the age of the Kinect. For example, being able to look at someone’s movements and pull scenes from films where people have had similar skeletal attributes is an experiment we are currently working on at ITP. This would essentially create an experience where you can index and search films by body movements. Also, we plan to use Skeletron as a tool that will allow us to build physical interactions in the browser, combined with other tools, such as deeplearn.js, using a laptop webcam. Bringing physical interactive experiences to the browser will allow people to use them worldwide without the need for extra hardware.
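The film-search experiment Fleisher describes boils down to comparing skeletons. One simple, hypothetical way to do it: normalize each pose so position and body size don’t matter, then rank stored film frames by distance to the query pose. None of this is from the Skeletron code; it’s just a sketch of the idea:

```python
# Hypothetical sketch of "search films by body movement": normalize each
# skeleton, then rank an offline index of film frames by how closely
# their keypoints match a query pose. The normalization and distance
# metric are illustrative choices, not Skeletron's.
import numpy as np

def normalize_pose(kps):
    """Make a (17, 2) keypoint array translation- and scale-invariant."""
    kps = np.asarray(kps, dtype=float)
    centered = kps - kps.mean(axis=0)      # remove position in the frame
    scale = np.linalg.norm(centered) or 1.0
    return centered / scale                # remove body size / camera distance

def pose_distance(a, b):
    """Euclidean distance between normalized poses; smaller = more similar."""
    return np.linalg.norm(normalize_pose(a) - normalize_pose(b))

# film_index: list of (film, timestamp, keypoints) records built offline
def search(query_pose, film_index, top_k=5):
    ranked = sorted(film_index, key=lambda rec: pose_distance(query_pose, rec[2]))
    return ranked[:top_k]
```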
