Wednesday, June 24, 2009

WiiCane System - Technical Description


Apologies for the long post, but Steven has asked me to present the tech side of the WiiCane system as it stands now. Lots of new stuff here - I hope some of it is of interest!

The system tested at Western Michigan University consisted of 32 discretely controllable infrared emitters (940 nm wavelength) mounted in a wooden strip on 12" centers. This light strip, together with a controller device attached to the host computer, provided reference lights detectable by the Wii Remote's infrared sensor.

Because the Wii Remote's infrared sensor is sensitive to light in the 940 nm band, we can derive position and orientation information for the Remote relative to the light strip by analyzing the perceived locations of the lights. The sensor itself provides Cartesian coordinates and perceived brightness values for up to four lights in its field of view. When more than four lights are present, the sensor reports the four it perceives as brightest. Generally these are the four lights closest to the camera, but perceived brightness may be affected by other factors, such as occlusion of lights or off-axis orientation of the Remote relative to an emitter's emission angle. The top picture in this post shows the host application's view of four infrared emitters in a double strip.
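To make the "four brightest" reporting behavior concrete, here is a minimal Python sketch. The IRLight structure and the brightness units are illustrative assumptions, not the camera's actual internal representation (the real sensor reports pixel coordinates on a 1024×768 grid along with a blob size value):

```python
from dataclasses import dataclass

@dataclass
class IRLight:
    x: int             # horizontal pixel coordinate, 0-1023
    y: int             # vertical pixel coordinate, 0-767
    brightness: float  # perceived brightness (arbitrary units, assumed)

def visible_lights(candidates):
    """Return at most four lights, brightest first, mimicking the
    camera's behavior of reporting only the four brightest blobs."""
    return sorted(candidates, key=lambda l: l.brightness, reverse=True)[:4]
```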

The software used for the trial attempts to determine arc width and to measure the amount of veering. It does this by analyzing the raw accelerometer and infrared sensor data reported by the Remote. Accelerometer data is normalized and used to detect discrete "tap" events marking the beginning and end of an arc. During an arc, data from the infrared sensor is used to build a model of the traveler's movements, from which arc width and amount of veering are determined.
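As a rough illustration of the tap-detection step, the following Python sketch flags magnitude spikes in the accelerometer stream. The threshold and refractory values are made-up placeholders, not the values used in the trial software:

```python
import math

def detect_taps(samples, threshold=1.8, refractory=10):
    """Flag discrete "tap" events in a stream of (ax, ay, az)
    accelerometer samples (in g). A tap is a magnitude spike above
    `threshold`; after a detection, `refractory` samples are skipped
    to avoid double-counting one physical tap."""
    taps = []
    skip_until = -1
    for i, (ax, ay, az) in enumerate(samples):
        if i < skip_until:
            continue
        if math.sqrt(ax * ax + ay * ay + az * az) > threshold:
            taps.append(i)
            skip_until = i + refractory
    return taps
```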

To analyze movement, the reported infrared sensor data is plotted and examined from frame to frame. To calculate arc width, the positions of all visible lights are noted at the beginning of an arc (defined as the first "tap"), and the changing positions of these marker lights are then followed as the traveler completes the arc. The final positions of the lights at the end of the arc (the second "tap") are noted, and the total movement is calculated from the distance between the initial and final positions. A scaling factor (experimentally determined based on cane length) is then applied to the calculated distance to produce a reading in mm.
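The arc-width calculation described above might look roughly like this in Python; the function name and the mm-per-pixel scaling parameter are hypothetical stand-ins for the experimentally determined factor mentioned above:

```python
import math

def arc_width_mm(start_lights, end_lights, scale_mm_per_px):
    """Estimate arc width from matched light positions at the first and
    second tap. `start_lights` and `end_lights` are parallel lists of
    (x, y) pixel coordinates for the same physical emitters; the average
    displacement is multiplied by a scaling factor (here a hypothetical
    mm-per-pixel value derived from cane length)."""
    dists = [math.dist(a, b) for a, b in zip(start_lights, end_lights)]
    return (sum(dists) / len(dists)) * scale_mm_per_px
```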

Veering detection follows a similar method. Once an arc is completed, the arc time is first calculated from the interval between the first and second taps. The infrared sensor data collected for the arc is then scanned to find the time at which the Remote was most closely aligned with the light strip (determined from the locations of the lights that outline the strip). Since the light strip marks the desired course of travel, and since the arc should cross the light strip approximately halfway through when the traveler is not veering, the timing figures just calculated make it possible to determine how much veering has occurred and in which direction. The highlighted area in the bottom picture in this post shows typical results for the analysis of three cane arcs.
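The timing comparison can be sketched as follows; the normalization to a [-1, 1] range and the sign convention are my own illustrative choices, not necessarily what the trial software does:

```python
def veer_estimate(tap1_time, tap2_time, aligned_time):
    """Estimate veering from tap timing. `aligned_time` is when the
    Remote was most nearly aligned with the light strip; with no veering
    this should fall halfway between the taps. Returns a value in
    [-1, 1]: negative if alignment came early in the arc, positive if
    it came late."""
    arc_time = tap2_time - tap1_time
    midpoint = tap1_time + arc_time / 2.0
    return (aligned_time - midpoint) / (arc_time / 2.0)
```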

In addition to these analysis tasks, the host software also logs the raw accelerometer and infrared sensor data received from the Remote for later analysis. A simple display area is provided to aid visual analysis and debugging.

1 comment:

GB said...

The results are disappointing, but I think rethinking the design is a good idea. Here are some thoughts:

* one light strip seems to introduce limitations that we might not want in the device. If a person is to the left or right of the strip, which they surely will be, and their body rotates in the same direction, the remote will not be facing the centered strip at all. I would think this would happen often and sometimes quickly. I cannot understand how a single centered strip of lights can stay within the camera's range of “vision” when it depends on the body orientation and the arc of the cane.

* I'm not an engineering guy --- just a poor country mobility specialist as Dr. McCoy used to say to Captain Kirk on the Enterprise -- but I think it would be great to base the arc width feedback system on the acceleration data as much as possible. The tests in the office without the lights were fairly good --- this does not need to be an excruciatingly exact measurement, since good arc width is tolerant to variation for effective travel.

* instead of mounting Wii Remotes on the cane, can we body-mount them and have them look at two strips of light along the periphery of each side of the course?

* we need to be aware that the subjects in the testing will be using two-point touch, but real users may never tap their canes --- the end of the arc will be marked by a simple directional change during a constant-contact technique (or a variation of that technique)

* my other thought was to put the light source on the subject, since the widest points of the traveler define the cane arc width dimensions --- all this mounting, cutting, and manipulation seems tenuous. Lights (is there another source the Wii can detect?) on the person, with the Wii Remote pointing backwards? This has real advantages from a mobility teaching point of view, because not only would we detect an arc width that is too great or too shallow, but we could see whether the arc is wrong on only the left or only the right --- NOT JUST THE DISTANCE THE CANE MOVED. This would improve the value of the feedback and automatically address another aspect of effective cane manipulation.