Software and tools


The file ChalearnLAPSample.py provides a class GestureSample that gives access to all the information in a sample. To open a sample file, call the constructor with the ZIP file you want to use:

>> from ChalearnLAPSample import GestureSample

>> gestureSample = GestureSample("SampleXXXX.zip")

With the returned object you can access the sample's general information, for instance the number of frames, the frame rate, or the maximum depth value:

>> numFrames=gestureSample.getNumFrames()

>> fps=gestureSample.getFPS()

>> maxDepth=gestureSample.getMaxDepth()

Additionally, we can access any information of any frame. For instance, to access the RGB, depth, and user-segmentation information for the 10th frame, we use:

>> rgb=gestureSample.getRGB(10)

>> depth=gestureSample.getDepth(10)

>> user=gestureSample.getUser(10)

Finally, we can access an object that encodes the skeleton information in the same way:

>> skeleton=gestureSample.getSkeleton(10)

Some functionality is provided to read the skeleton information. For each joint, the [Wx, Wy, Wz, Rx, Ry, Rz, Rw, Px, Py] description array is stored in a dictionary as three independent vectors. You can access each value for a given joint (e.g. the head) as follows:

>> [Wx, Wy, Wz]=skeleton.getAllData()['Head'][0]

>> [Rx, Ry, Rz, Rw]=skeleton.getAllData()['Head'][1]

>> [Px, Py]=skeleton.getAllData()['Head'][2]

The same information can be retrieved using the specific methods:

>> [Wx, Wy, Wz]=skeleton.getWorldCoordinates()['Head']

>> [Rx, Ry, Rz, Rw]=skeleton.getJoinOrientations()['Head']

>> [Px, Py]=skeleton.getPixelCoordinates()['Head']
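As a small illustration of how the world coordinates can be used, the sketch below computes the Euclidean distance between two joints. The `coords` dictionary stands in for the output of `skeleton.getWorldCoordinates()`; its values are made-up placeholders, not real sample data.

```python
import math

def joint_distance(world_coords, joint_a, joint_b):
    # Euclidean distance between two joints, given a
    # {name: [Wx, Wy, Wz]} dictionary such as the one
    # returned by skeleton.getWorldCoordinates()
    ax, ay, az = world_coords[joint_a]
    bx, by, bz = world_coords[joint_b]
    return math.sqrt((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2)

# Placeholder values, for illustration only
coords = {"Head": [0.0, 0.0, 0.0], "HandLeft": [0.3, 0.4, 0.0]}
print(joint_distance(coords, "Head", "HandLeft"))  # → 0.5
```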

Additionally, some visualization functionality is provided. You can get an image representation of the skeleton or a composition of all the information for a frame.

>> skelImg=gestureSample.getSkeletonImage(10)

>> frameData=gestureSample.getComposedFrame(10)

To visualize all the information of a sample, you can use this code:

import cv2
from ChalearnLAPSample import GestureSample

gestureSample = GestureSample("Samplexxxx.zip")
cv2.namedWindow("Samplexxxx",cv2.WINDOW_NORMAL) 
for x in range(1, gestureSample.getNumFrames()):
    img = gestureSample.getComposedFrame(x)
    cv2.imshow("Samplexxxx", img)
    cv2.waitKey(1)
del gestureSample
cv2.destroyAllWindows()

News


There are no news items registered for Multimodal Gesture Recognition: Montalbano V2 (ECCV '14).