Remember how Google Glass was going to help us all text, surf and navigate into the future totally hands-free? Only it kind of turned out to be a weird, mostly annoying pair of goofy specs that creeped people out, maybe screwed up your vision and was just a bit safer to use in your car than a regular old smartphone?
Well, researchers at Stanford have launched something called the Autism Glass Project, which aims to use Glass to help autistic children overcome one of the biggest hurdles many of them face: recognizing and understanding emotions.
More than 1 million children in the U.S. have been diagnosed with Autism Spectrum Disorder, a condition that often impairs the ability to recognize basic facial emotions (happy, sad, confused), leading to awkward social interactions and difficulty making or maintaining friendships.
According to the Stanford team, though, they've developed a system "using machine learning and artificial intelligence to automate facial expression recognition that runs on wearable glasses and delivers real-time social cues." In other words, using Glass's outward-facing camera to read facial expressions, the program lets the user look at someone who is smiling and provides an on-screen prompt that reads "happy," helping the child "see" the emotion they might be having trouble deciphering.
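To make that loop concrete, here's a minimal sketch of the camera-to-prompt idea. This is not the Stanford team's actual code: the two-number "feature vectors" and the nearest-centroid classifier are stand-ins I've invented for illustration, where a real system would run a trained facial-expression model on camera frames.

```python
# Hypothetical stand-in for a trained expression model: each emotion is a
# centroid in a made-up 2-D feature space (e.g. mouth curvature, brow furrow).
EMOTION_CENTROIDS = {
    "happy": (0.9, 0.1),
    "sad": (0.1, 0.2),
    "confused": (0.3, 0.9),
}

def classify_expression(features):
    """Return the emotion whose centroid is nearest to the feature vector."""
    def sq_dist(centroid):
        return sum((f - c) ** 2 for f, c in zip(features, centroid))
    return min(EMOTION_CENTROIDS, key=lambda e: sq_dist(EMOTION_CENTROIDS[e]))

def heads_up_prompt(features):
    """The one-word cue shown in the wearer's field of view."""
    return classify_expression(features)

# Features roughly matching a smile land closest to the "happy" centroid.
print(heads_up_prompt((0.85, 0.15)))
```

The point of the sketch is just the shape of the pipeline: camera frame in, emotion label out, label rendered as a real-time cue.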
The researchers launched the second phase of the project this week: a 100-person at-home study that will follow 80 children with Autism Spectrum Disorder and 20 typically developing subjects between the ages of 6 and 16 to see what kind of lasting "behavioral progression" they can track over a four-month period. (Glass was taken off the market in January, but Google donated 35 pairs to Stanford to use in the project.)
TechCrunch spoke to one of the project's lead scientists, Nick Haber, who said the biggest issue the team faced in the first phase was "ensuring the device's use led to measurable learning when the children were no longer on the device." That's why the second phase will use a game developed by the MIT Media Lab called "Capture the Smile," which fellow researcher Catalin Voss said allows the children to more directly "interact with their surroundings."
In "Smile," the child wears Glass to look for subjects showing specific emotions, which it then compares to a database of photos, triggering a text prompt that appears in front of the child's eye. Common expressions are then memorized by Glass, as are recurring faces, and the amount and kind of eye contact are recorded. The data will eventually be collected and sent to an Android app, which parents and intervention specialists can review.
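The article doesn't specify how that data is structured, but as a rough sketch, each sighting during a session might be logged as an event and rolled up into the kind of summary a companion app could show parents. The field names and session format here are my assumptions, not the project's actual data model.

```python
from collections import Counter

def summarize_session(events):
    """Roll up logged recognition events (assumed keys: 'emotion',
    'eye_contact_secs') into a parent-facing summary."""
    return {
        "emotions_seen": dict(Counter(e["emotion"] for e in events)),
        "total_eye_contact_secs": sum(e["eye_contact_secs"] for e in events),
    }

# A hypothetical "Capture the Smile" session log.
session = [
    {"emotion": "happy", "eye_contact_secs": 2.5},
    {"emotion": "happy", "eye_contact_secs": 1.0},
    {"emotion": "sad", "eye_contact_secs": 0.5},
]
print(summarize_session(session))
```

A review app would presumably plot summaries like this over weeks of sessions, which is what lets the team look for the lasting "behavioral progression" mentioned above.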
The team will track the subjects' performance over time and combine that data with video analysis and questionnaires with the goal of showing how the device "helps improve emotion recognition over the long term."