Access on Main Street

Hooking up a usable world, one mainstream product at a time.

iPad dock will add gesture control

Posted by Jim Tobias 27 December 2010

A dock for the Apple iPad will allow users to sweep and swipe in mid-air, as far away as a foot from the iPad.  No word yet on what gestures will be included, but they will let you control regular apps.  We may also see special apps written for the dock; maybe someone will be smart/kind enough to write apps for people with dexterity limitations, cognitive disabilities, etc.  This is a perfect gadget for adding even more accessibility to the already-stellar iPad.  Not having to hold the iPad will make it easier for dexterity-impaired users, and with a camera-equipped iPad, it may facilitate sign language video.  (Not that the combo would recognize ASL; rather, having the iPad in a dock, controllable from a distance, would make it easier for someone to stand back and sign.)

CES 2011: iPad dock with Motion Sensing Controls to Debut – I4U News

Sony motion pickup

Posted by Jim Tobias 5 July 2009

Sony is preparing a camera-based gesture input system that works with anything you can hold.   Just move it around in front of the camera, and the system will store the views from all angles.  Then use your burrito to play tennis, your kitten to shoot bad guys, whatever.  This may wind up being great for people with limited grip and strength.

Gizmodo – Sony Patent Controls Games with That Crap on Your Coffee Table – Sony Motion Controller

Microsoft’s motion move

Posted by Jim Tobias 1 June 2009

Ever since its release, Nintendo’s Wii and its handheld gesture controllers have dominated the buzz in gaming interfaces.  Microsoft fired back at today’s E3 conference with Natal, a gesture and speech recognition system with no physical controller at all.  Using cameras and directional microphones, Natal will let players control the Xbox with kicks, twists, finger-pointing, and spoken commands.  The cameras will identify players and their motions, interpreting them for whatever game is being played.  These virtual gesture input systems can be adapted for use by people with dexterity impairments to play games, or, with suitable programming, to perform other functions.  You could turn it into a virtual keyboard to control a word processor, for example.
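
To make that concrete, here’s a rough sketch (ours, not Microsoft’s) of how tracked hand coordinates might drive a virtual keyboard: carve the screen into key zones and register a keypress when the hand dwells over one zone long enough.  The get_hand_position() function is just a stand-in for whatever the sensor’s tracking call actually reports.

    import time

    # Hypothetical on-screen keyboard: three rows of keys mapped onto
    # normalized hand coordinates (0.0 to 1.0 in both axes).
    ROWS = ["qwertyuiop", "asdfghjkl;", "zxcvbnm,./"]
    DWELL_SECONDS = 1.0  # hold the hand still this long to "press" a key

    def key_at(x, y):
        """Return the key under a normalized (x, y) hand position."""
        row = min(int(y * len(ROWS)), len(ROWS) - 1)
        col = min(int(x * len(ROWS[row])), len(ROWS[row]) - 1)
        return ROWS[row][col]

    def type_with_dwell(get_hand_position):
        """Emit a key whenever the hand dwells over the same zone long enough.

        get_hand_position is a placeholder for the sensor's tracking call;
        it should return a normalized (x, y) pair, or None when no hand is seen.
        """
        current, since = None, None
        while True:
            pos = get_hand_position()
            if pos is None:
                current, since = None, None
            else:
                key = key_at(*pos)
                if key != current:
                    current, since = key, time.time()
                elif time.time() - since >= DWELL_SECONDS:
                    print(key, end="", flush=True)
                    since = time.time()  # require another dwell to repeat
            time.sleep(0.05)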

If Natal enters the market at the estimated price of $99, it could revolutionize the field of alternative input devices.

Microsoft’s Project Natal: What does it mean for games industry? | Gaming and Culture – CNET News

Gaze in the millinery

Posted by Jim Tobias 22 March 2009

We’ve reached the point where a camera-based gesture interface, a pico-projector, and wireless computing have come together in SixthSense, a wearable prototype that lets the user grab information about anything in the vicinity.  Aim at a package on a supermarket shelf to see its environmental information; aim at a building to take its picture or see its layout in a map view; project a telephone keypad onto your hand and dial away.  Information that’s rich and relevant.

Right now, of course, you have to be able to see pretty well, be able to point your head at a target without shaking, and be able to move your hands and fingers accurately and consistently.  But we’re gonna fix that at today’s meeting, right, gang?  OK, everybody push on the big steel lab door!

There’s a video of this prototype in operation.  (Forget our negativity — this thing is bangin’.)

Make: Online : Wearable metadata

Scratch another gesture interface

Posted by Jim Tobias 17 November 2008

Now here’s another gesture interface prototype we’re itching to try.  Chris Harrison mounts a simple microphone on any surface (wall, desk, pants pocket), then scratches letters or patterns on that surface to control a computer or MP3 player.  Sound carries so well in solids that the audio pattern can be picked up far from the mic, and a two-stroke pattern like the letter ‘V’ is easily distinguishable from a circular one like the letter ‘O’.  Chris has in mind using your cell phone’s microphone as the pickup, so any horizontal surface becomes a gesture input device once you put your phone down on it.
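
We haven’t seen Harrison’s actual recognizer, but the core idea is easy to mock up: a scratch shows up as a burst of noise at the mic, so a two-stroke ‘V’ produces two bursts separated by a brief pause while a one-stroke ‘O’ produces a single continuous burst.  Here’s a toy sketch that just counts amplitude bursts; the threshold and gap values are illustrative guesses, not numbers from the project.

    # Toy scratch classifier: count bursts of loud samples in a mono signal.
    # A two-stroke gesture like 'V' should produce two bursts with a quiet
    # gap between them; a circular 'O' should produce one continuous burst.

    def count_strokes(samples, rate, threshold=0.05, min_gap=0.08):
        """Count separate loud bursts in a list of floats in the range -1..1."""
        strokes, in_stroke, quiet_run = 0, False, 0
        gap_samples = int(min_gap * rate)
        for s in samples:
            if abs(s) >= threshold:
                if not in_stroke:
                    strokes += 1
                    in_stroke = True
                quiet_run = 0
            else:
                quiet_run += 1
                if quiet_run > gap_samples:
                    in_stroke = False
        return strokes

    def classify_scratch(samples, rate):
        n = count_strokes(samples, rate)
        if n == 1:
            return "O-like (one continuous stroke)"
        if n == 2:
            return "V-like (two strokes)"
        return "unrecognized"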

Gesture interfaces seem to be falling into two categories: 2D, requiring contact, like this prototype or a touch-sensitive panel (like Microsoft’s Surface), and 3D, like the camera-based systems we’ve also featured.  Their main advantages over last-generation touchscreens are that they don’t need a specific target area (you can perform the gesture anywhere in their range) and that they can detect complex movements and multiple touches at once, like a chorded keyboard, increasing the encoding capability.  These two improvements make gesture interfaces work better for blind and low-vision users, but gesture complexity may foil some users with impaired dexterity.
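
On the “encoding capability” point: with n independent contact points, chorded input can distinguish 2^n - 1 non-empty combinations rather than just n single touches (31 chords from 5 fingers, for instance).  A one-liner makes the arithmetic concrete:

    # Distinct non-empty chords available from n simultaneous contact points.
    def chords(n):
        return 2 ** n - 1

    print([chords(n) for n in range(1, 6)])  # [1, 3, 7, 15, 31]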

Chris Harrison – Scratch Input

All you need is glove

Posted by Jane Berliss-Vincent 1 September 2008

The Experience Monitor is a prototype glove that contains both video and still cameras as well as microphones. Recording start/stop is controlled by gestures, or there’s an “auto” mode to capture everything. We can think of a variety of uses that this would have for people with high-level mobility/dexterity disabilities, from note taking to personal security to recreational photography.

Electricpig: Experience Recorder captures everything we say and do

User interfist

Posted by Jim Tobias 29 August 2008

Another take on gesture control: Toshiba’s got a camera system that sits atop your screen facing the viewer.  Just make a fist (the angry-at-TV-pundit jokes write themselves) and move it around to summon up all the functions you normally have on a remote.  Plus, when was the last time you lost your fist among the cushions?

We love the basic idea of gesture interfaces, because they’re generally less demanding of hand function.  But we’ve gotta ask the Toshibanistas — how tight a fist?  How large and smooth does the movement have to be?  It’s all about the flexibility and personalization, right?

They’ve also got a cue card function: hold up an image and the TV knows what to do.  Maybe too dexterity demanding, but probably great for viewers with cognitive disabilities (more jokes writing themselves, dammit).  Speech recognition, too.

Toshiba’s Cambridge Research Lab shows off gesture-controlled TVs, image recognition – Engadget
