Using the Kinect to verify fall events

Microsoft’s Kinect is a powerful sensor for motion tracking and analysis, and many applications take advantage of its 3D motion capture capabilities. In the medical field, for example, the sensor offers great possibilities for the treatment and prevention of disease, illness or injury, as we discussed in this post.

The Kinect can be used in a fall detection system to detect when an individual is walking and suddenly falls. This is quite easy to implement using the skeleton tracking framework. However, we designed a system that detects fall events using a smartphone, and we want to use the Kinect for verification after a fall: checking whether the individual is lying on the floor. In this post we will discuss three different approaches to verifying the fall event and their associated problems.

Skeleton tracking with the Microsoft SDK

The fall verification could consist of detecting some joints (head and hands, for example) using the skeleton tracking framework included in the Microsoft SDK and calculating their distance from the floor. The fall is considered verified if the distance from the floor is almost zero for all joints.
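
A minimal C# sketch of this check, assuming SDK 1.x, where the floor plane (Ax + By + Cz + D = 0) comes from SkeletonFrame.FloorClipPlane; the 30 cm threshold is illustrative, not the value we tuned:

    using System;
    using Microsoft.Kinect;

    static class FallCheck
    {
        // Signed distance from a joint to the floor plane Ax + By + Cz + D = 0;
        // the coefficients reported by SkeletonFrame.FloorClipPlane are normalized.
        static float DistanceToFloor(Joint joint, Tuple<float, float, float, float> floor)
        {
            return floor.Item1 * joint.Position.X +
                   floor.Item2 * joint.Position.Y +
                   floor.Item3 * joint.Position.Z +
                   floor.Item4;
        }

        // The fall is verified when every tracked joint is within the threshold
        // (illustratively ~30 cm) of the floor.
        public static bool IsLyingOnFloor(Skeleton skeleton,
                                          Tuple<float, float, float, float> floor,
                                          float threshold = 0.3f)
        {
            foreach (Joint joint in skeleton.Joints)
            {
                if (joint.TrackingState == JointTrackingState.NotTracked)
                    continue;
                if (DistanceToFloor(joint, floor) > threshold)
                    return false;
            }
            return true;
        }
    }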

We performed several experiments with the official Microsoft SDK. The main challenge is detecting the joints when the Kinect turns on and the individual is already lying on the floor. The algorithm gives good results after small movements of the individual, but a person may remain unconscious and motionless after a fall, which makes this approach unreliable.

[Image: skeleton tracking with the Microsoft SDK]

Skeleton tracking with OpenNI

OpenNI is the open source SDK for the Kinect. As we discussed in this post, it has some advantages and disadvantages, but it is always an alternative for developing Kinect applications. Since the first approach had problems detecting the individual’s joints when the Kinect turns on and the individual is already lying on the floor, we decided to try OpenNI. With this SDK we obtained better detection accuracy, but still not enough for reliable verification of the fall.

[Image: skeleton tracking with OpenNI]

User selection using depth data with OpenNI

We also performed some experiments using OpenNI and open source libraries. Here the fall verification consists of segmenting the individual from the background using the depth data. Once the individual is selected, we check whether the individual’s bounding box is shorter than a threshold value and whether its highest point is below another threshold, which means that the user is lying on the floor. This approach has the same limitation as the previous ones: picking up the person who is already lying on the floor when the Kinect turns on.
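
A minimal sketch of the bounding box check, assuming the user’s pixels have already been segmented from the depth map and converted to world coordinates where Y is the height above the floor; the struct, helper name and thresholds are illustrative, not part of the OpenNI API:

    using System.Collections.Generic;

    static class LyingDownCheck
    {
        // Hypothetical stand-in for a segmented user pixel in world
        // coordinates (meters), with Y as height above the floor.
        public struct WorldPoint { public float X, Y, Z; }

        // The user is considered to be lying down when the bounding box is
        // shorter than maxHeight and its highest point is below maxTop
        // (illustratively 0.5 m and 0.6 m).
        public static bool IsLyingDown(IEnumerable<WorldPoint> userPixels,
                                       float maxHeight = 0.5f, float maxTop = 0.6f)
        {
            float minY = float.MaxValue, maxY = float.MinValue;
            foreach (var p in userPixels)
            {
                if (p.Y < minY) minY = p.Y;
                if (p.Y > maxY) maxY = p.Y;
            }
            return (maxY - minY) < maxHeight && maxY < maxTop;
        }
    }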

[Image: user selection using depth data with OpenNI]

After all the experiments with both SDKs and different methodologies, we realized the Kinect has an important limitation when tracking the joints of a motionless individual. Both SDKs achieve good accuracy once the individual makes small movements, but that is not enough for our system, which must verify that an individual is lying still on the floor. Any ideas or suggestions about how to implement this?

Kinect overview before starting programming

The Microsoft Kinect is a set of sensors developed as a peripheral device for the Xbox 360 gaming console. Since it was released, hackers immediately saw potential in the device far beyond gaming and created open source libraries to use it in other applications. Microsoft also released the official SDK and the Kinect for Windows, a more powerful device for use in research. Nowadays there is a big community of developers and researchers around the world, and several new applications are emerging.

Developing for the Kinect is really easy: there are lots of official and non-official tools, libraries, demos and tutorials. But the Kinect sensor has some limitations that you should know before you start developing for this device.

Kinect for Windows vs Kinect for Xbox 360

Kinect for Windows is specifically designed to be used with computers. It is licensed for commercial app distribution, so it is the best option for development. Kinect for Xbox 360 was designed and tested for the console but can also be used for development, with some limitations.

The features unique to the Windows version of Kinect include:

  • A shortened USB cable to ensure reliability
  • A small dongle to increase compatibility with other USB devices
  • Expanded skeletal tracking abilities, including “seated” skeletal tracking that focuses on 10 upper body joints
  • A firmware update called “Near Mode” that allows the depth sensor to accurately track objects and users seated as close as 40 cm from the device (enabling it in code is sketched after this list)
  • Expanded speech recognition including four new languages: French, Spanish, Italian, and Japanese
  • Language packs improving speech recognition for many other English dialects and accents
  • Improved synchronization between color and depth streams
  • Improved face tracking
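
Near Mode and seated tracking are enabled through the SDK. A minimal C# sketch, assuming SDK 1.x and an already started KinectSensor (these modes require Kinect for Windows hardware):

    using Microsoft.Kinect;

    static class WindowsOnlyFeatures
    {
        // Enables the Kinect for Windows-only modes on an already started sensor.
        // Setting DepthRange.Near on an Xbox 360 sensor throws an exception.
        public static void EnableNearAndSeated(KinectSensor sensor)
        {
            sensor.DepthStream.Range = DepthRange.Near;              // Near Mode: 0.4 m to 3 m
            sensor.SkeletonStream.EnableTrackingInNearRange = true;  // track skeletons in near range
            sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated; // 10 upper-body joints
        }
    }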

Official Kinect SDK vs Open Source alternatives

The official SDK, maintained by Microsoft, is better than the open source alternatives in some applications, such as skeleton tracking. However, the open source community is strong, with OpenNI (drivers, APIs, libraries, demos) and OpenKinect (drivers) creating SDKs, middleware libraries and apps. Both have pros and cons.


Official SDK

  • Programming languages supported: C++, C#, or Visual Basic, using Microsoft Visual Studio.
  • Operating system support: Windows 7 and 8.
  • Documentation and support: official website, development toolkit and support forum.
  • Calibration: not needed.

OpenNI and OpenKinect

  • Programming languages supported: Python, C, C++, C#, Java… Visual Studio is not required.
  • Operating system support: Linux, Mac OS X and Windows.
  • Documentation and support: website, support forum, Twitter…
  • Calibration: needed.

Features and limitations

The Kinect’s image, audio, and depth sensors make it possible to detect movements, identify faces and recognize players’ speech. However, they have some physical limitations, such as the sensing range. The Kinect for Windows SDK frameworks also have limitations, such as the number of simultaneously tracked skeletons.

  • RGB camera: angular field of view of 57° horizontally and 43° vertically (plus 27° up or down with the motorized tilt).
  • Depth sensor: viewing distance range from 0.8 m to 4 m. Practical limits are from 1.2 m to 3.5 m.
  • Depth near mode: viewing distance from 0.4 m to 3 m.
  • Audio beamforming: 100° angular field for identifying the current sound source, in 10° intervals.
  • Skeleton tracking: normal mode with 20 joints per player and seated mode with 10 joints. Both modes track up to six people simultaneously, including two active players (motion analysis and feature extraction of 20 joints per player); the application can also choose the two active players in code, as sketched after this list.
  • Interactions: library with basic gestures (e.g. targeting and selecting with a cursor) which also supports defining custom gestures.
  • Face tracking: angle limits to track face movements are ±45° (yaw), ±90° (roll) and ±25° (pitch).
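
When more than two people are in view, the application can pick which two skeletons are actively tracked. A minimal C# sketch, assuming SDK 1.x and known tracking IDs:

    using Microsoft.Kinect;

    static class ActivePlayerSelection
    {
        // By default the runtime picks the two actively tracked skeletons.
        // AppChoosesSkeletons lets the application lock tracking onto two
        // specific players identified by their tracking IDs.
        public static void LockOnto(KinectSensor sensor, int trackingId1, int trackingId2)
        {
            sensor.SkeletonStream.AppChoosesSkeletons = true;
            sensor.SkeletonStream.ChooseSkeletons(trackingId1, trackingId2);
        }
    }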

[Image: skeleton tracking]

Microsoft Kinect in healthcare

Microsoft Kinect has transformed the way people interact with technology. The combination of the camera, the depth sensor and the microphone enables new applications and functionalities such as 3D motion capture and speech recognition, and new applications keep emerging.

In the medical field, the Kinect sensor offers great possibilities for the treatment and prevention of disease, illness or injury. Most applications in the medical field, especially in eHealth (healthcare through the use of technology), use the Kinect to track patients’ movements for rehabilitation or monitoring.

Physical Therapy and Rehabilitation

Red Hill Studios, together with the University of California, is researching how the Kinect can help people with Parkinson’s disease. They have developed specialized games to improve the gait and balance of people with functional impairments and diseases.

The Home Training System for Rehab Patients (HTSRP) monitors and captures the physical exercises of the patient using a Kinect. It presents a 3D version of the person, analyzes the movements to detect whether the exercises are performed correctly, and gives visual feedback on the screen. Jintronix, like HTSRP and other companies, performs rehabilitation in a virtual environment.

Telemedicine

Medical care and diagnosis can be provided remotely to people who live a long way from the hospital. A few months ago Microsoft launched InAVate, a platform which allows group therapy sessions using avatars.

Collaboration and Annotation of Medical Images (CAMI), from the University of Arkansas, uses the Microsoft Kinect to capture the data, which can then be reviewed on a Windows device such as a tablet, phone or computer.

The University of South Florida developed a robotic telemedicine system that uses the Kinect sensor to map the environment and plan paths and trajectories. The telemedicine platform also provides doctor-patient video communication through the Kinect’s camera and microphone.

Remote patient monitoring

Remote patient monitoring and telehealth are used for chronic diseases like heart disease, diabetes or asthma. Cognitiont’s Global Technology is a remote health monitoring solution for multiple sclerosis physiotherapy at home.

Medical applications

The Microsoft Kinect is also used as a hands-free tool to control medical imaging equipment. Toronto’s Sunnybrook Hospital uses the Kinect to move and zoom X-ray and MRI images without touching the screen or leaving the sterile area, which is very useful for surgeons and interventional radiologists. Getstix has also developed a touchless gestural interface for surgeons and interventional radiologists.

Researchers from the University of Konstanz have made a step forward with the NAVI system. They use the Kinect camera and depth sensor with a laptop, a vibrating belt and a Bluetooth headset to help blind people avoid obstacles.

Getting started with Kinect SDK

Microsoft’s Kinect allows gamers to use their body to play without a controller. But it is actually more than a virtual remote: the Kinect is a step forward in controlling technology with natural gestures. It allows users to interact with a PC without touching a mouse, keyboard or touchscreen.

The Kinect has a color camera, a depth camera and a four-microphone array. The cameras are used for motion tracking and the microphone array is used for speech recognition.

The Kinect SDK enables developers to create applications that support gesture and voice recognition. There are lots of step-by-step tutorials (and video tutorials), so in this post I will try to summarize the main ones.

Installing the Kinect

  1. Kinect SDK: It includes the drivers to use on Windows.
  2. Kinect Developer Toolkit: Contains code samples and sources to simplify developing applications.
  3. Visual Studio Express 2010: IDE for developing on Windows.
  4. .NET Framework 4: developer platform to build apps for Windows.

For C++ SkeletalViewer samples:

Speech Recognition software:

Kinect SDK Sample Applications

The Developer Toolkit includes applications and samples that provide a starting point for working with the SDK: skeletal and face tracking, speech recognition, the Kinect Explorer, the Shape Game… To start using them, you just have to load the project in Visual Studio and start playing.

Creating a new Project

  1. Create a new Windows-based application project.
  2. Add a reference to the Kinect library by right-clicking the References entry in the Solution Explorer panel, as shown below.
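
In SDK 1.x the reference to add is the Microsoft.Kinect.dll assembly; after adding it, import the namespace at the top of the source file:

    // Import the Kinect SDK types after referencing Microsoft.Kinect.dll
    using Microsoft.Kinect;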


Starting the sensor

The Kinect must be initialized before it can begin producing data for the application. The initialization process consists of three steps, sketched in code after this list:

  1. Enumerate Kinect sensors
  2. Enable Data Streaming
  3. Start the Kinect
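
A minimal C# sketch of these three steps, assuming SDK 1.x:

    using System.Linq;
    using Microsoft.Kinect;

    static class SensorStartup
    {
        // Finds a connected Kinect, enables its data streams and starts it.
        public static KinectSensor StartFirstSensor()
        {
            // 1. Enumerate Kinect sensors and pick a connected one.
            KinectSensor sensor = KinectSensor.KinectSensors
                .FirstOrDefault(s => s.Status == KinectStatus.Connected);
            if (sensor == null) return null; // no Kinect plugged in

            // 2. Enable data streaming for color, depth and skeleton data.
            sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            sensor.SkeletonStream.Enable();

            // 3. Start the Kinect.
            sensor.Start();
            return sensor;
        }
    }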

Writing the source code

Once the Kinect is started and a stream is enabled, the application extracts frame data from the stream and processes it as long as a new frame is available. For each frame of data (see the sketch after this list):

  1. Register an event which fires when data is ready
  2. Implement an event handler
  3. Allocate storage for the data
  4. Get the data
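
A minimal C# sketch of these four steps for the skeleton stream, assuming SDK 1.x and an already started sensor:

    using Microsoft.Kinect;

    class SkeletonReader
    {
        // 3. Allocate storage for the data once, outside the handler.
        private readonly Skeleton[] skeletons;

        public SkeletonReader(KinectSensor sensor)
        {
            skeletons = new Skeleton[sensor.SkeletonStream.FrameSkeletonArrayLength];
            // 1. Register an event which fires when skeleton data is ready.
            sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        }

        // 2. Implement the event handler.
        private void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return; // the frame may have been skipped
                // 4. Get the data.
                frame.CopySkeletonDataTo(skeletons);
                // ...process the tracked skeletons here...
            }
        }
    }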

Running the application 

The Visual Studio IDE is also used to compile the project, build the application and run it in a new window.

Loading and recording data

Kinect Studio is a tool to read and write depth and color streams. It is used to create test scenarios and to analyze performance.

To start loading or recording data, you just have to run your application from Visual Studio and start reading or writing data in Kinect Studio.