
This article was published on September 17, 2013

Microsoft updates Kinect for Windows SDK with background removal, color capture, other new APIs and samples

Microsoft today announced the release of version 1.8 of the Kinect for Windows software development kit (SDK), the fourth update since the company first launched the SDK commercially about a year and a half ago. You can download the latest version now from the Microsoft Dev Center.

Here’s what’s new in Kinect for Windows SDK 1.8:

  • New background removal. An API removes the background behind the active user so that it can be replaced with an artificial background. This green-screening effect is especially useful for advertising, augmented reality gaming, training and simulation, and other immersive experiences that place the user in a different virtual environment. (A rough sketch of the compositing idea appears after this list.)
  • Realistic color capture with Kinect Fusion. A new Kinect Fusion API scans the color of the scene along with the depth information so that it can capture the color of the object along with its three-dimensional (3D) model. The API also produces a texture map for the mesh created from the scan. This feature provides a full fidelity 3D model of a scan, including color, which can be used for full color 3D printing or to create accurate 3D assets for games, CAD, and other applications.
  • Improved tracking robustness with Kinect Fusion. An improved algorithm makes it easier to scan a scene. With this update, Kinect Fusion is better able to maintain its lock on the scene as the camera position moves, yielding more reliable and consistent scanning.
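
To make the background removal feature more concrete, here is a minimal sketch of the green-screening idea: pixels that a depth-derived player mask marks as belonging to the user keep their camera color, while every other pixel is drawn from an artificial background. The buffer layout and the mask representation are assumptions for illustration only, not the SDK's actual background removal API.

```cpp
#include <cstdint>
#include <cstddef>

// Sketch of the green-screening idea behind background removal:
// keep camera color where the player mask says "user", otherwise
// substitute the artificial background. BGRA layout and the mask
// format are illustrative assumptions, not the SDK 1.8 API itself.
void CompositeUserOverBackground(const uint8_t* colorFrame,   // BGRA camera pixels
                                 const uint8_t* playerMask,   // 1 = user pixel, 0 = background
                                 const uint8_t* background,   // BGRA replacement scene
                                 uint8_t* output,             // BGRA result
                                 size_t width, size_t height)
{
    for (size_t i = 0; i < width * height; ++i)
    {
        const uint8_t* src = playerMask[i] ? &colorFrame[i * 4] : &background[i * 4];
        output[i * 4 + 0] = src[0];  // B
        output[i * 4 + 1] = src[1];  // G
        output[i * 4 + 2] = src[2];  // R
        output[i * 4 + 3] = 0xFF;    // fully opaque
    }
}
```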


Microsoft has also included three new samples in SDK 1.8, all of which help developers build experiences that Kinect simply couldn’t offer before.

The HTML interaction sample demonstrates how to implement Kinect-enabled buttons, simple user engagement, and the use of a background removal stream in HTML5. The multiple-sensor Kinect Fusion sample shows developers how to use two sensors simultaneously to scan a person or object from both sides, constructing a 3D model without having to move the sensor or the object. Lastly, the adaptive UI sample shows how to build an app that adapts its interface to the distance between the user and the screen, for example switching between touch input up close and gestures at a distance.
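
As a rough illustration of the adaptive UI idea, the sketch below picks an interaction model and a target size from the user's distance to the screen. The thresholds and the way distance is obtained are assumptions made for this example; the actual sample derives the user's position from Kinect's tracking data.

```cpp
// Sketch of the adaptive-UI idea: choose touch vs. gesture input and
// scale on-screen targets based on how far the user is from the screen.
// The thresholds below are illustrative assumptions, not values from
// the SDK sample.
enum class InteractionMode { Touch, Gesture };

struct UiProfile
{
    InteractionMode mode;
    int buttonHeightPixels;  // larger targets when the user stands farther away
};

UiProfile ProfileForDistance(double distanceMeters)
{
    if (distanceMeters < 1.0)
        return { InteractionMode::Touch, 48 };     // close enough to touch the screen
    if (distanceMeters < 2.5)
        return { InteractionMode::Gesture, 96 };   // mid-range: hand-cursor gestures
    return { InteractionMode::Gesture, 160 };      // far away: big targets, coarse gestures
}
```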


Top Image Credit: Microsoft
