PredictGaze: Using the Camera in Your Phone or Computer for Gesture Control, Eye Tracking, Face-Rec and More
Posted in: UX

When we first saw the Leap gesture control interface for the Mac, we were blown away. Earlier than that, gamers and hackers were taken by the Wii and the Kinect. Now a new group of creators is working on the latest in gesture-control interfaces, which ought to have an advantage over the current competition: It’s software-based and requires no separate hardware, instead relying on the cameras now built into virtually every computer, tablet and smartphone.
PredictGaze is the brainchild of Aakash Jain, Abhilekh Agarwal, and Saurav Kumar, three computer scientists and friends based in California. Using a series of algorithms, their software analyzes images captured from your device’s camera—even in low light and near darkness, conditions that have stymied their competition—to deliver useful results. Face recognition, gesture control and eye tracking are all things we’ve heard of before, but PredictGaze is wrapping them all into a single package, and making it scalable to whatever device it’s installed on.
Their system holds rich promise: Imagine being able to sit in front of your computer, or hold up your phone, and it knows it’s you through facial recognition, so it unlocks itself with no need for a password. Or watching your television, and when you get up to go to the bathroom, it pauses; it resumes play when you’ve sat back down. Or being able to silence the audio by bringing your finger to your lips. And the eye-tracking-controlled browsing, while still a bit clunky-looking in their demo, will be a godsend for paraplegics once it’s perfected.
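To give a sense of how the pause-on-leave behavior might work under the hood, here is a minimal sketch (not PredictGaze’s actual code; the function names and threshold are our own illustrative assumptions). A face detector classifies each camera frame as “viewer present” or “absent,” and a small state machine pauses playback only after several consecutive absent frames, so a single missed detection doesn’t stop the show.

```python
def playback_states(frames, leave_threshold=3):
    """Given a sequence of booleans (face detected in each camera frame),
    return the playback state after each frame.

    Playback pauses only once the face has been absent for
    `leave_threshold` consecutive frames, which smooths over momentary
    detection failures; it resumes as soon as the face reappears.
    """
    state = "playing"
    absent = 0
    states = []
    for face_present in frames:
        if face_present:
            absent = 0
            state = "playing"
        else:
            absent += 1
            if absent >= leave_threshold:
                state = "paused"
        states.append(state)
    return states

# Viewer leaves at frame 3 and returns at frame 6: playback pauses on the
# third consecutive absent frame and resumes on return.
print(playback_states([True, True, False, False, False, True]))
```

In a real system the boolean input would come from a per-frame face detector; the point of the debounce threshold is that camera-based presence detection is noisy, especially in the low-light conditions the PredictGaze team says they handle.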
Here are a few videos to give you an idea of what PredictGaze is currently capable of. In this first one, “Gaze Enabled Browser Demo,” you don’t need to watch more than 10 seconds of it to “get it.” (The remainder of the two-minute video has the test subject perform the same up-down scrolling while they gradually dim the lights.)