Using the Kinect to verify fall events

Microsoft’s Kinect is a powerful sensor for motion tracking and analysis. Many applications take advantage of its 3D motion-capture capabilities. In the medical field, for example, the sensor offers great possibilities for the treatment and prevention of disease, illness or injury, as we discussed in this post.

The Kinect can be used in a fall detection system to detect when an individual is walking and suddenly falls. Implementing this is quite easy using the skeleton tracking framework. However, we designed a system to detect fall events using a smartphone, and we want to use the Kinect for verification after a fall. This verification consists of detecting whether the individual is lying on the floor. In this post we will discuss three different approaches to verifying the fall event and their associated problems.

Skeleton tracking with the Microsoft SDK

The fall verification could consist of detecting some joints (head and hands, for example) using the skeleton tracking framework (included in the Microsoft SDK) and calculating their distance from the floor. The fall is considered verified if the distance from the floor is almost zero for all joints.
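As a sketch of the idea: the Kinect SDK reports the floor as a clip plane ax + by + cz + d = 0, with joint positions in meters, so the check reduces to point-to-plane distances. The 0.15 m threshold below is an assumption for illustration, not the value we used.

```java
// Sketch: decide whether all tracked joints are close to the floor plane.
// The floor is the plane a*x + b*y + c*z + d = 0 (as reported by the SDK);
// joints are (x, y, z) positions in meters.
public class FallCheck {

    // Unsigned distance from a point to the plane a*x + b*y + c*z + d = 0.
    static double distanceToFloor(double a, double b, double c, double d,
                                  double x, double y, double z) {
        return Math.abs(a * x + b * y + c * z + d)
                / Math.sqrt(a * a + b * b + c * c);
    }

    // True if every joint is within `threshold` meters of the floor.
    static boolean allJointsOnFloor(double[][] joints, double[] plane,
                                    double threshold) {
        for (double[] j : joints) {
            double dist = distanceToFloor(plane[0], plane[1], plane[2], plane[3],
                                          j[0], j[1], j[2]);
            if (dist > threshold) {
                return false;
            }
        }
        return true;
    }
}
```

A real implementation would read the plane coefficients from the skeleton frame and the joint positions from the tracked skeleton.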

We performed several experiments implemented with the official Microsoft SDK. The main challenge is detecting the joints when the Kinect turns on and the individual is already lying on the floor. The algorithm gives good results after small movements of the individual, but sometimes the person remains unconscious after a fall, which makes this approach unsuitable.


Skeleton tracking with OpenNI

OpenNI is the open source SDK for the Kinect. As we discussed in this post, it has some advantages and disadvantages but is always an alternative for developing Kinect applications. Since the first approach had problems detecting the individual’s joints when the Kinect turns on and the individual is already lying on the floor, we decided to try this SDK. Using it we obtained better detection accuracy, but still not enough for a reliable verification of the fall.


User selection using depth data with OpenNI

We also performed some experiments using OpenNI and open source libraries. The fall verification consists of detecting the individual using the depth data to segment the user from the background. Once the individual is segmented, we check whether the individual’s bounding box is shorter than a threshold value and whether the highest point is lower than another threshold, which would mean the user is lying on the floor. This approach has the same limitation as the previous ones: picking up the person if they are already lying on the floor when the Kinect turns on.
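The check itself is simple; a minimal sketch in Java over the heights of the segmented user's points (the threshold values are illustrative assumptions):

```java
// Sketch: lying-down test on a segmented user's points.
// heights[] holds the height (meters above the floor) of every pixel
// labeled as the user; both thresholds are illustrative values.
public class LyingDownCheck {

    static boolean isLyingDown(double[] heights,
                               double maxBoxHeight,   // e.g. 0.50 m
                               double maxTopHeight) { // e.g. 0.60 m
        double min = Double.POSITIVE_INFINITY;
        double max = Double.NEGATIVE_INFINITY;
        for (double h : heights) {
            if (h < min) min = h;
            if (h > max) max = h;
        }
        double boxHeight = max - min;          // height of the bounding box
        return boxHeight < maxBoxHeight        // the box is "short"
                && max < maxTopHeight;         // highest point is near the floor
    }
}
```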


After all the experiments using both SDKs for the Kinect and different methodologies, we realized the Kinect has an important limitation when tracking joints on a motionless individual. All SDKs achieve good accuracy after small movements, but this is not useful in our system, where we want to detect whether an individual is lying still on the floor. Any ideas or suggestions about how to implement it?


Smartphones in healthcare

Today’s smartphones are devices with advanced computing capabilities and connectivity that also have a wide range of built-in sensors. These features make smartphones a good platform for telehealth: the delivery of health-related services and information via telecommunications.

Using smartphones and communication networks, practitioners and patients can collect aggregate health data remotely, provide care via mobile telemedicine, and monitor patients in real time. Different systems have been developed that take advantage of these new technologies and devices.

Autodiagnosis and telemedicine

  • Poket Doc is like having a doctor in your pocket. It allows you to search for and compare information about health services and pricing based on location, condition or a doctor’s speciality.
  • Scanadu (available in 2014) is an app connected to a scanner device packed with sensors to check temperature, heart rate, oximetry, ECG waveform, pulse, urine analysis and stress. The app keeps a report with all the information and contacts the doctor in case of emergency.
  • uCheck analyzes urine samples. Users have to purchase a kit containing urine test strips that can be visually analyzed with the iPhone’s camera.
  • Mango Health lets patients monitor their use of medications by setting up schedules for taking their medication. When it is time to take the medications, it reminds them through notifications.


Patient monitoring

  • Asthmapolis is connected to a sensor that patients can place on their inhaler. The app collects all the data and information, which doctors use to monitor asthma symptoms and create patient reports.
  • Dr. Diabetes provides diabetes awareness, monitoring and management to patients with chronic illness. It provides medical data (via the cloud) to physicians for accurate diagnosis.
  • AirStrip allows doctors to check in on patients and review their vitals, cardiac waveforms, medications, intakes and outputs, and allergies. The phone is connected to a bedside monitor and sends the collected data to doctors or caregivers over a cellular or Wi-Fi connection.
  • GI Monitor is an app that helps patients with Crohn’s and Ulcerative Colitis track their symptoms and provide accurate data to physicians for optimal treatment. Data is synchronized across all platforms in real-time and users can print out easy-to-read reports for their physicians.
  • RheumaTrack is a patient diary app which records pain on the VAS scale. Patients record their pain, use of medication and activities. All this information is useful for doctors.


Television signaling standards

Transmitters and receivers use the signaling data to share information about the network, the frequencies of the multiplex, services, the channel guide, and so on. Each digital television broadcast standard uses different signaling tables and descriptors to provide information to the receiver, though all the signaling data is based on the MPEG-2 standard.


The Program Specific Information (PSI) contains metadata about the programs (channels) and is part of the MPEG-2 transport stream (TS). The PSI data contains the following tables:

  • PAT (Program Association Table, PID=0x00): location of all the programs contained in the TS. It shows the association of PMT PID and Program Number.
  • PMT (Program Map Table, tableId=0x02): PID numbers of the elementary streams (ES) associated with the program and information about the type of each ES (audio, video or data).
  • CAT (Conditional Access Table, PID=0x01, tableId=0x01): used for conditional access; provides the association with the EMM stream.
  • NIT (Network Information Table): information about the multiplexes and transport streams on a given network.
  • TSDT (Transport Stream Descriptor Table): information about services and associated MPEG-2 descriptors in the TS. Descriptors are used for additional information.
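As a concrete example of PSI parsing, here is a minimal sketch of how a receiver could extract the Program Number → PMT PID associations from a PAT section. The byte layout follows the MPEG-2 Systems standard; CRC_32 verification is omitted for brevity.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: parse the program_number -> PMT PID pairs out of a PAT section.
// `section` starts at table_id; the trailing CRC_32 is not verified here.
public class PatParser {

    static Map<Integer, Integer> parsePat(byte[] section) {
        Map<Integer, Integer> programs = new LinkedHashMap<>();
        int tableId = section[0] & 0xFF;           // must be 0x00 for a PAT
        if (tableId != 0x00) {
            throw new IllegalArgumentException("not a PAT section");
        }
        // section_length: low 12 bits of bytes 1-2; counts bytes after byte 2
        int sectionLength = ((section[1] & 0x0F) << 8) | (section[2] & 0xFF);
        // 5 header bytes follow section_length; the last 4 bytes are CRC_32
        int loopEnd = 3 + sectionLength - 4;
        for (int i = 8; i + 3 < loopEnd; i += 4) {
            int programNumber = ((section[i] & 0xFF) << 8) | (section[i + 1] & 0xFF);
            int pid = ((section[i + 2] & 0x1F) << 8) | (section[i + 3] & 0xFF);
            // program_number 0 points to the NIT PID rather than a PMT
            programs.put(programNumber, pid);
        }
        return programs;
    }
}
```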


The Service Information (SI) consists of additional tables used in the DVB standard to identify the services and the associated events contained in the TS. The most relevant signaling tables are:

  • NIT (Network Information Table): information about the physical network, such as the network provider name, transmission parameters, etc.
  • BAT (Bouquet Association Table): describes the program structure of several physical channels.
  • SDT (Service Description Table): describes the program structure of one physical channel.
  • EIT (Event Information Table): contains the program guide (EPG). There are 4 EITs to cover a period of 12h.
  • TDT (Time and Date Table) and TOT (Time Offset Table): information about time and date.
  • RST (Running Status Table): allows rapid updating of the timing status of one or more events.
  • ST (Stuffing Table): used to stuff or invalidate sections.
  • Other tables:
    • DIT (Discontinuity Information Table) and SIT (Selection Information Table) are used in storage environments.
    • AIT (Application Information Table) is used in interactive applications.
    • INT (IP/MAC Notification Table) is used in IP datacasting.
    • UNT (Update Notification Table) is used for System Software Updates.


PSIP is the protocol used in the ATSC television standard in the United States, Canada and Mexico. It is based on MPEG-2 to encode the content but defines new signaling tables:

  • STT (System Time Table): current time.
  • MGT (Master Guide Table): data pointers to other PSIP tables.
  • VCT (Virtual Channel Table): defines each virtual channel and enables EITs to be associated with the channel.
  • RRT (Rating Region Table): content ratings for each country or region.
  • EIT (Event Information Table): titles and program guide data.
  • ETT (Extended Text Table): detailed descriptions of channels and aired events.
  • DCCT (Directed Channel Change Table): allows a broadcaster to direct the receiver to switch automatically to another virtual channel.
  • DCCSCT (Directed Channel Change Selection Code Table): updates the states, counties and program genres used in DCCT tables.


ISDB is the Japanese digital TV broadcast standard. It is based on MPEG-2 and it uses some PSI tables but also defines new ones:

  •  PSI: PMT, CAT and PAT with specific descriptors.
  •  Equivalents to the DVB SI tables with specific descriptors: NIT, SDT, BAT, EIT, TDT, RST, TOT and ST.
  •  ISDB-Tb:
    • PCAT (Partial Content Announcement Table): conveys partial content announcements for data broadcasting.
    • BIT (Broadcaster Information Table): used to submit broadcaster information on the network.
    • NBIT (Network Board Information Table): board information on the network, such as a guide.
    • LDT (Linked Description Table): used to link various descriptions from other tables.

The ISDB Japanese standard also uses extended SI to describe local events and their information: the LIT (Local Event Information Table), ERT (Event Relation Table) and ITT (Index Transmission Table). It also defines new descriptors to add new functionalities.

Signal tables in the DTMB standard

DTMB is the standard used in China, Hong Kong and some Middle East countries. It is similar to DVB in terms of service information, but there are some differences in the transmission parameters, as you can see in the previous post.

Digital television broadcast standards

Nowadays many countries are replacing analog broadcast television with digital television. Digital standards use narrower-bandwidth signal transmission, which allows more channels to fit in a given range of frequencies, and higher resolutions. Regions of the world are at different stages of adoption and are implementing different broadcasting standards. DVB is the suite of open standards used in Europe, while Japan (ISDB) and the USA (ATSC) use related but distinct standards. Other countries, like China (DTMB), developed their own.


DVB defines a suite of standards using different coding and modulation techniques to allow transmission of the signal in different environments and conditions. It was developed in Europe and has been adopted in many countries internationally. The suite of standards for digital television includes:

  • DVB-S: DVB for satellite television. It adds error protection to the MPEG-2 TS and adapts it to the channel characteristics. DVB-S uses a single carrier with QPSK modulation.
  • DVB-T: DVB for digital terrestrial television. It carries the MPEG-2 TS using COFDM modulation to reduce the ISI and the fading effects that appear in terrestrial communications. Several parameters can be chosen for a DVB-T transmission channel, such as the bandwidth (6, 7 or 8 MHz) and the operation modes (2K or 8K).
  • DVB-C: DVB over cable. It is similar to DVB-S, with 64QAM modulation and without the extra inner error protection, due to the channel characteristics.
  • DVB-H: DVB for handhelds. It uses a 4K COFDM mode to balance the low energy consumption of handhelds with robustness to mobility.
  • DVB-SH: DVB Satellite services to Handhelds. The satellite downlink guarantees rural coverage and the terrestrial downlink is used in urban environments.
  • DVB-S2: DVB-Satellite 2nd generation. It is the successor of the DVB-S system. It includes enhanced modulation schemes (QPSK, 8PSK, 16APSK and 32APSK) and higher bitrates.
  • DVB-T2: DVB-Terrestrial 2nd generation. It is the extension of the DVB-T standard with higher bitrate and better usage of spectrum. It uses OFDM with a large number of sub-carriers and several modes.
  • DVB-T2 Lite: a new profile of DVB-T2 for very low capacity applications such as mobile broadcasting. It is based on a limited subset of the modes of the T2 profile, avoiding the modes which require the most complexity and memory.


ISDB-T is the Japanese standard for digital TV. It is closely related to DVB-T: it uses MPEG-2 with similar coding and COFDM modulation. This standard enables hierarchical transmission, which allows partial reception for mobile TV.


ATSC is a set of standards for digital television used in the USA, Canada and Mexico. It also uses MPEG-2 coding like DVB, but it uses new modulation techniques. The stream can be modulated with 8VSB (terrestrial) or 16VSB (cable TV), which consists of modulating a sinusoidal carrier to one of eight or sixteen levels, allowing high spectral efficiency and impulse-noise immunity. ATSC signals use 6 MHz of bandwidth and achieve a throughput of 19.4 Mbps.
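The 19.4 Mbps figure can be roughly reproduced from the commonly quoted 8VSB parameters: about 10.76 Msymbols/s at 3 bits per symbol, reduced by the 2/3 trellis code, the RS(207,187) outer code and the field-sync overhead. A back-of-the-envelope sketch (the constants are the usual ATSC values, used here only for illustration):

```java
// Back-of-the-envelope 8VSB payload rate from commonly quoted ATSC values.
public class AtscRate {

    static double payloadBitsPerSecond() {
        double symbolRate = 10.762238e6;     // symbols per second
        double bitsPerSymbol = 3.0;          // 8 levels = 3 bits per symbol
        double trellis = 2.0 / 3.0;          // 2/3 trellis (inner) code
        double reedSolomon = 187.0 / 207.0;  // RS(207,187) outer code
        double syncOverhead = 312.0 / 313.0; // field sync segment overhead
        return symbolRate * bitsPerSymbol * trellis * reedSolomon * syncOverhead;
    }
}
```

Multiplying these out lands at roughly 19.39 Mbps, matching the quoted throughput.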


DTMB is the TV standard used in China, Hong Kong and parts of the Middle East. The system uses advanced technologies such as a pseudo-random noise code, low-density parity-check (LDPC) coding to protect against errors, and TDS-OFDM modulation. The system gives flexibility to the services offered: different modes and parameters can be chosen depending on the type of service and network.


Unlocking an Android phone

Android is an open source operating system released by Google. Carriers and manufacturers create their own versions based on the Android code, but they block some functionalities and add commercial applications. Though it is an open source OS, they design a specific Android OS version for the device and they don’t allow you to install other operating systems. Unlocking your phone allows you to install a custom operating system (also known as a custom ROM) with innovative features, and to root the phone to have complete access to the system.

Why root the phone?

Rooting the device allows you to modify the device’s software at the deepest level. The greatest advantage that rooting provides is the ability to install powerful applications that require more than the usual privileges on your device. When your phone is rooted you can install apps that access and edit the system memory, overclock or underclock the CPU for more performance or battery life, connect to WiFi networks that have proxy settings, block advertisements on websites and in apps, and more. Moreover, if your phone is rooted it is pretty easy to manage the operating system, back up and restore your data, and manage custom ROMs.

Why install a custom ROM?

A custom ROM is simply a version of Android that replaces the version the manufacturer provided on your device. Custom ROMs are created by developers and in most cases they remove all the bloatware that is usually impossible to uninstall, increase performance and/or improve battery life.

There are hundreds of custom ROMs, but CyanogenMod is probably the most popular and one of the most reputable. CyanogenMod supports a huge variety of phones and includes some cool features such as lock-screen gestures, an improved music player and other apps, Phone Goggles, setting up a VPN to tunnel all your IP and network data, installing apps to the SD card (this saves a lot of space!) and much more.

How to root the phone and install a custom ROM

The bootloader is the code that loads the system software on the device and determines which applications run in the startup (boot) process. Manufacturers lock the bootloader for security reasons, but also to prevent you from installing custom ROMs. Unlocking the bootloader is the first step to installing custom firmware on your Android phone.

Not all Android phones can be unlocked, although it is possible on most advanced ones. Some manufacturers such as HTC, Samsung or Google have launched official tools to unlock their bootloaders. In this post I will show the steps for HTC phones, although it is really easy to find the steps for other phones on the XDA Developers forum.

Be aware that unlocking your bootloader may void your warranty.

1. Backup

Before you start unlocking your phone it is highly recommended to back up all your data. You can use your Google account to back up your apps and automatically sync them back once the process is complete. HTC Sync, available from the HTC website, allows you to back up your contacts, messages, notes, call logs and more. The application Go Backup Pro serves the same purpose.

2. Unlocking the bootloader (HTC phones)

The official HTCDev website provides all the resources to unlock the bootloader of your HTC. If you follow all the steps of the HTCDev guide you will get the unlock code for your phone. You need to download the Android SDK; once it is downloaded, the process basically consists of starting the device in bootloader mode and running adb (which is in the SDK) to get the token, a unique code that identifies your phone. They will then provide you with an unlock token to copy onto the phone, letting you unlock the bootloader from the bootloader screen. All the steps are detailed on the website.

3. Rooting the phone

The bootloader is now unlocked, but the phone is still not rooted. There are different ways to root your phone, for example installing the application ClockworkMod from Google Play and using its Reboot into Recovery feature. You can also create and run a temp_root script (be sure your phone is plugged in) with the following code:

adb shell mv /data/local/tmp /data/local/tmp.backup   # keep a copy of the original directory
adb shell ln -s /data /data/local/tmp                 # symlink so we can write into /data
adb reboot
adb shell "echo ro.kernel.qemu=1 > /data/local.prop"  # emulator flag makes adbd run as root
adb reboot

If you want to remove the temp root you can create and run the remove_temp_root script with the following code:

adb shell rm /data/local.prop
adb shell rm /data/local/tmp
adb shell mv /data/local/tmp.backup /data/local/tmp
adb reboot

4. Installing a custom ROM

There are lots of custom ROMs you can download and install on your phone. One of the most popular is CyanogenMod, which offers features not found in the official Android-based firmware. You could choose another one though, and the process would be basically identical.

Be sure the custom ROM you chose is supported on your phone, and download it. Then copy the downloaded zip file to the phone’s SD card. If you didn’t download ClockworkMod before, you will need to do so at this point. This application allows you to root your phone and install custom ROMs, among other possibilities. Installing the downloaded custom ROM is as easy as tapping Install ROM from SD Card and waiting until the process finishes. I highly recommend making a backup of the current ROM before starting the installation.

At this point your phone is rooted and running a customized operating system. You have super-user privileges, you can control everything from your phone, you can do whatever you want!

Connecting an Android phone and the Kinect sensor

Microsoft’s Kinect sensor revolutionized the touch-free gaming experience two years ago. The open source drivers and frameworks opened a new world of possibilities for researchers and developers, and the official SDK made it easier to develop new speech, posture and gesture applications.

The possibilities for smartphone applications are nearly endless in number and design. The Android SDK and all the resources available on the Internet, combined with the latest smartphone models, make it easy to imagine and develop new applications and new features.



But what if we go one step further and combine an Android-based smartphone and Microsoft’s Kinect sensor? Although the SDKs, the IDEs, the libraries, the programming languages… everything is different, it is possible to integrate them. The first step consists of connecting them so they can share messages and data over the TCP protocol. In this post I will show how to program the TCP server, which runs in the Kinect application and is written in C#. The client runs on the Android device and is written in Java.

KinectServer (C#)

The TCP server is written in C# and uses the official Microsoft Kinect SDK. In this first implementation it accepts client connections and is able to send and receive data. The server accepts, in theory, an unlimited number of connections and it spawns a thread for each connected client.

The server class creates a new TcpListener to accept socket connections on port 3200 and a new Thread to listen for client connections.

  class Server
  {
      private TcpListener tcpListener;
      private Thread listenThread;

      public Server()
      {
          this.tcpListener = new TcpListener(IPAddress.Any, 3200);
          this.listenThread = new Thread(new ThreadStart(ListenForClients));
          this.listenThread.Start();
      }

The server blocks until a client connects; when one does, it creates a new thread to handle communication with that client.

private void ListenForClients()
{
    this.tcpListener.Start();

    while (true)
    {
        //blocks until a client has connected to the server
        TcpClient client = this.tcpListener.AcceptTcpClient();
        System.Console.WriteLine("Client connected");

        //create a thread to handle communications with the connected client
        Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientComm));
        clientThread.Start(client);
    }
}

The next function handles the communication between the client and the server. The server waits until the client has sent a message and then sends a reply back to the client. When the client disconnects, it closes the connection.

private void HandleClientComm(object client)
{
    TcpClient tcpClient = (TcpClient)client;
    NetworkStream clientStream = tcpClient.GetStream();

    byte[] message = new byte[4096];
    int bytesRead;

    while (true)
    {
        bytesRead = 0;
        try
        {
            //blocks until a client sends a message
            bytesRead = clientStream.Read(message, 0, 4096);
        }
        catch
        {
            //a socket error has occurred
            break;
        }

        if (bytesRead == 0)
        {
            //the client has disconnected from the server
            break;
        }

        //message has successfully been received
        ASCIIEncoding encoder = new ASCIIEncoding();
        String mes = encoder.GetString(message, 0, bytesRead);
        System.Console.WriteLine(mes);

        //server reply to the client
        byte[] buffer = encoder.GetBytes("Hello Android!");
        clientStream.Write(buffer, 0, buffer.Length);
        clientStream.Flush();
    }

    tcpClient.Close();
}

The server must run in the background while the Kinect application is running and collecting data such as the video stream or the depth stream. For testing, I recommend creating a new instance from the MainWindow.

   public partial class MainWindow : Window
   {
       Server TCPServer = new Server();
   }

Finally, to test whether the server is listening for new connections, open a new command window and run netstat. It will list the TCP connections on port 3200.

netstat -anp tcp | find ":3200"

Telnet also allows you to connect to the server and test the communication. After you connect with the following command and send any text, your command window will show ‘Hello Android!’, which is the reply the server sends to the client.

telnet localhost 3200   (telnet ipserver port)

AndroidClient (Java)

Any Android device running the TCP client should be able to connect and share information with the Kinect server. The client is programmed in Java using the Android SDK. In this first implementation it sends a message to the server, although it is not difficult to implement functionality to share data from the phone, such as a file or raw data collected from the sensors.

The client will run in the background, which means we have to create an AsyncTask to handle the communications with the server. The class holds the socket and the input and output streams.

public class InternetTask extends AsyncTask<String, Void, String> {
    DataOutputStream dataOutputStream = null;
    DataInputStream dataInputStream = null;
    Socket socket;
    String message;

The function doInBackground creates a new socket and exchanges messages with the server. The socket connects to the server using its IP address and the port where it is listening. The client then sends a message and waits for the reply, which will be shown on the screen.

protected String doInBackground(String... params) {
    try {
        socket = new Socket("", 3200); // connect to the server (the server IP goes here)
        // Send a message to the server
        dataOutputStream = new DataOutputStream(socket.getOutputStream());
        dataOutputStream.writeUTF("Hello Kinect!");

        // Receive the reply from the server
        dataInputStream = new DataInputStream(socket.getInputStream());
        message = inputStreamToString(dataInputStream);

    } catch (UnknownHostException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (socket != null) {
            try {
                socket.close(); // close the connection
            } catch (IOException e) { }
        }
        if (dataOutputStream != null) {
            try {
                dataOutputStream.close(); // close the output stream
            } catch (IOException e) { }
        }
        if (dataInputStream != null) {
            try {
                dataInputStream.close(); // close the input stream
            } catch (IOException e) { }
        }
    }
    return message;
}

protected void onPostExecute(String result) {
    // Write the received message on the screen
    // (R.id.textView is a placeholder for your layout's TextView)
    TextView tv = (TextView) findViewById(R.id.textView);
    tv.setText(result);
}

Depending on your application you will need to run the client when the app starts, when a button is clicked or when another application is running. In this example I created a button that starts the client to connect to the Kinect server.

public void btnConnectToServer(View view) {
    InternetTask task = new InternetTask();
    task.execute();
}

Tilt the Kinect from the Android phone

At this point the Kinect and the Android phone can exchange messages. From here, it is not difficult to implement an application that sends data from the phone, such as a file, a song or raw data collected with the sensors. For example, you can tilt the Kinect to point at your phone using the raw values collected with the orientation sensor. This requires some work, so I will explain how to do it in a following post, although you can already tilt the Kinect using Up and Down buttons.
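As a preview of the tilt logic: the phone would send its pitch angle and the Kinect application would clamp it to the tilt motor's supported elevation range, which is ±27 degrees on the Kinect for Xbox 360. A small Java sketch of just the clamping arithmetic (not the motor call itself):

```java
// Sketch: map a phone pitch angle (degrees) to the Kinect tilt motor's
// supported elevation range, assumed here to be -27..+27 degrees.
public class KinectTilt {

    static final int MIN_ELEVATION = -27;
    static final int MAX_ELEVATION = 27;

    static int clampToElevationRange(double pitchDegrees) {
        int angle = (int) Math.round(pitchDegrees);
        if (angle < MIN_ELEVATION) return MIN_ELEVATION;
        if (angle > MAX_ELEVATION) return MAX_ELEVATION;
        return angle;
    }
}
```

The clamped value would then be sent over the TCP connection described above and applied on the C# side.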

Tilt Kinect

Any ideas for future applications integrating the Kinect and the Android phone?

Google I/O at Brightcove

Google I/O is an annual developer-focused conference held by Google. The event takes place in San Francisco (CA), but there are extended events in many cities around the world. Brightcove, in Boston, showed off the Zencoder cloud transcoding service at its event. I had the opportunity to join developers, engineers and passionate students to watch and discuss some of the talks.

Google Play

Google Play is Google’s distribution platform, which allows you to download music, books, movies, videos, games and applications. Google redesigned the platform in both its mobile and web versions. They also introduced new services such as Google Play Games, and new features such as personalized suggestions based on the user’s preferences and search history.

Google Play Games is the new game service with real-time multiplayer action and multi-platform synchronization. The gaming experience becomes more social, allowing you, for example, to invite and play with your friends and share your scores. It can be synchronized across platforms such as tablets, computers and smartphones running both Android and iOS, which is an interesting feature for developers.

A monthly music subscription service has been introduced for Google Play Music. It allows you to play all songs available in the store and also to upload your own songs. Google Play Books now allows you to upload your files to the cloud, and it includes a Read Now section that features books you have recently uploaded, purchased or read.

Google also announced a new education program, Google Play for Education, that will help teachers manage and push out apps, books and other educational content to student tablets and computers.

Google +

Google’s social network has been redesigned with a responsive feed design, a tags feature to dig into more content, and a new photo manager and editor.

When you upload hundreds of pictures, Google+ Photos will choose the best ones based on brightness and contrast, the number of people in the picture, and famous landmarks and attractions, and it will use face-recognition algorithms to select the ones where you and your friends appear.

Google Hangouts is a new application that replaces Google Talk. The new multi-platform app allows you to chat and share videos and pictures with your friends. It is available on Android, iOS and Chrome.

Google Maps and localization

Google released new APIs to improve localization and battery life on Android devices. An interactive map will recommend locations, compare travel modes and integrate Street View and Google Earth into Google Maps. The new APIs include better outdoor localization, indoor localization and navigation, and human activity recognition.


Android Studio

The most common IDE for developing Android apps is Eclipse, which runs on Windows, Linux and Mac. But Google has developed its own IDE, Android Studio, which also runs on all operating systems.

New features in Android Studio include a navigation drawer, better integration with Google Analytics, and the ability to launch beta and alpha versions of applications.

Samsung Galaxy S4

The new Samsung Galaxy S4 from Google is a completely unlocked device. It comes with an unlocked bootloader that gives developers root access to the operating system. The phone will be released in June and costs $649.


Other features

Google Voice Search is already available for Android devices, but Google wants to expand its voice search capabilities and other features of Google Search to desktop devices, fundamentally changing the way people look things up on its search engine.

Google Wallet, Google’s online payment service, now enables sending money as a Gmail attachment, among other interesting functionalities. It will be integrated into the Chrome browser, which is getting faster and more secure and includes faster video streaming.