
[KINECTSDK] HowTo: Use Kinect As A Green Screen (II)



There are multiple options for cross-calibrating devices. Microsoft provides a green screen code sample on GitHub that uses OpenCV for the calibration; the sample's Readme file provides details and instructions for calibrating the devices.
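The point of cross-calibration is to recover the extrinsic transform (a rotation and translation) between two sensors so that points seen by one camera can be expressed in the other's coordinate frame. As an illustrative sketch only, not the sample's actual code, applying such a transform to a 3D point looks like this (the rotation and translation values below are made up for demonstration):

```python
# Illustrative sketch: applying a depth-to-color extrinsic transform
# (rotation matrix R and translation vector t, as a calibration step
# such as OpenCV's stereoCalibrate would produce) to a single 3D point.
# R and t here are invented values, not real calibration output.

def transform_point(point, R, t):
    """Map a 3D point from depth-camera space to color-camera space:
    p' = R * p + t, written out without external libraries."""
    return [
        sum(R[i][j] * point[j] for j in range(3)) + t[i]
        for i in range(3)
    ]

# Identity rotation and a 25 mm horizontal offset between the sensors
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.025, 0.0, 0.0]

p_depth = [0.1, 0.2, 1.5]           # point in depth-camera space (meters)
p_color = transform_point(p_depth, R, t)
print(p_color)                       # [0.125, 0.2, 1.5]
```

In the real sample, OpenCV estimates R and t from matched calibration-target observations in both cameras; once known, the same transform is applied to every depth pixel.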

An exciting new feature called the Background Removal API has been added to the Kinect for Windows SDK 1.8, which was released last week. Background removal, or green screening, is an effect many people have been using the Kinect for with varying degrees of success. Prior to the Background Removal API, getting a decent green screen effect out of the Kinect required a lot of heavy lifting and creative thinking, because the depth data from the Kinect is too noisy to use for a smooth player mask.

A photo taken using the default Kinect depth data with no blur or special techniques.

The official Kinect SDK 1.8 Background Removal API is a great step forward for developers, as it allows obtaining a great green screen effect with minimal work. However, there are a few important restrictions in place which make it unusable right now for a multi-user photo experience:

The initial Background Removal API requires successful skeleton tracking to work.

The initial Background Removal API only allows the effect to be performed on one tracked player at a time.

Update 10/3/2013: Joshua Blake writes in from Twitter, "By the way, you can do multiple people background removal in 1.8. You just need to create an instance per tracked skeleton." Thanks, Joshua.

Those criticisms aside, it would be awesome if Microsoft were able to decouple the Background Removal API from skeleton data, or add a mode where you can specify a depth threshold away from the camera instead of detected skeletons.

A photo taken using the Kinect SDK 1.8 Background Removal API.

This summer, for the Kinect Green Screen Photo kiosk I had at Maker Faire Detroit, I invested about a month of time figuring out how to get a good mask out of the Kinect in real time. The approach I used was to ignore skeleton data and use only depth data, run that raw data through EMGU CV (.NET OpenCV) to do blob detection, take the detected blobs, and run them through a point-by-point averaging algorithm based on work done in openFrameworks.
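The skeleton-free approach described above boils down to two steps: threshold the raw depth frame into a binary mask, then use blob detection to discard the isolated specks that depth noise produces. A minimal sketch of that idea in plain Python (the original used EMGU CV; the frame values and thresholds here are made up):

```python
# Sketch of a skeleton-free player mask: threshold depth readings to a
# binary mask, then keep only 4-connected blobs above a minimum size so
# isolated depth-noise pixels are discarded. Not the kiosk's real code.
from collections import deque

def depth_mask(depth, near=500, far=2000):
    """1 where the depth reading (mm) falls in [near, far], else 0."""
    return [[1 if near <= d <= far else 0 for d in row] for row in depth]

def keep_large_blobs(mask, min_size=3):
    """Remove 4-connected blobs smaller than min_size pixels."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one blob with a BFS, then size-check it
                blob, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(blob) >= min_size:
                    for by, bx in blob:
                        out[by][bx] = 1
    return out

# Tiny fake depth frame (mm): a 2x2 "player" at ~1 m, one noisy pixel
frame = [
    [4000, 4000, 4000, 1000],
    [4000, 1000, 1010, 4000],
    [4000, 1005, 1000, 4000],
    [4000, 4000, 4000, 4000],
]
player = keep_large_blobs(depth_mask(frame))
# The lone 1000 mm pixel in the top-right corner is dropped; the 2x2
# blob survives as the player mask.
```

A real EMGU CV implementation would do the same thing with contour or connected-component routines operating on the full 640x480 depth frame per tick, which is why performance mattered so much.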
I also used a simple shader-based blur effect available in Windows Presentation Foundation, as it proved far faster than any other implementation I tested, including my own box blur and Gaussian blur. While the results I came up with are not as good as the official SDK implementation, they are pretty close and don't require detected skeletons. (The project is open source, so you can check out my implementation here: _Kinect_GreenScreen_PhotoKiosk) Not requiring skeleton detection is a HUGE factor when groups of users are posing for photos.

A photo taken using custom techniques and EMGU CV.

How I Think Microsoft's Background Removal API Works

The unfortunate part of Microsoft's Background Removal API is that it is closed source, so people like me who have been working on the same problem are interested to know exactly what is going on at a low level. I wish they would release a paper on the technique being used. Based on my own work, there are a few things I am going to guess Microsoft is doing.

I am pretty sure they are using some sort of frame averaging of the depth data. During my own work I found this concept presented by Karl Sanford in his early work smoothing depth data. In my testing, averaging the depth data was too slow a process in managed C# code, and the results were not very good for creating smooth masks that fit the contour of subjects, so I threw out this technique. The tell in the 1.8 SDK is that when you wave your hands around or move fast, you can see some lag in the mask as it follows you, which could come either from frame averaging or from an intentional slowdown of processing to improve end-user performance.

I am confident that they are using external computer vision libraries th
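The frame-averaging guess above can be sketched in a few lines: each output pixel is the mean of that pixel over the last N depth frames, which smooths noise but introduces exactly the kind of lag on fast motion described above. A plain-Python sketch over tiny made-up frames, not SDK code:

```python
# Minimal sketch of per-pixel depth-frame averaging (the smoothing
# idea attributed to Karl Sanford above). A ring buffer holds the
# last N frames; each output pixel is the mean over the buffer.
# Frame values are invented; real Kinect frames are 640x480.
from collections import deque

class DepthAverager:
    def __init__(self, window=3):
        self.frames = deque(maxlen=window)  # ring buffer of recent frames

    def push(self, frame):
        """Add a frame and return the per-pixel running average."""
        self.frames.append(frame)
        n = len(self.frames)
        return [
            [sum(f[y][x] for f in self.frames) / n
             for x in range(len(frame[0]))]
            for y in range(len(frame))
        ]

avg = DepthAverager(window=3)
avg.push([[1000, 2000]])
avg.push([[1006, 2000]])
smoothed = avg.push([[1003, 2000]])
print(smoothed)   # [[1003.0, 2000.0]] -- noisy pixel pulled toward the mean
```

The lag is inherent: a pixel that jumps to a new depth takes N frames to fully reach its new averaged value, which matches the mask trailing fast hand movement in the 1.8 SDK.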


