Two visually impaired users test OpenGlass' Question-Answering and Memento (real-time annotation of scenes/objects). Before the footage shown here, they had an hour of initial testing and had never used Glass before.
(Question-Answer) The user takes a picture and asks a question. These are sent to crowd workers (Mechanical Turk) and Twitter users, who provide an answer. The answer received is read aloud to the user through Glass. Note: the voice you hear is what the user hears, though it is slightly muffled because it comes through a bone conduction speaker (best heard through headphones; captions at the bottom of the screen note important user feedback). For more detail about our annotation approach, see these videos: http://www.youtube.com/watch?v=4v8Mm5JotrY and https://www.youtube.com/watch?v=mHWejjTwOMY.
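For readers curious about the plumbing, below is a minimal sketch of the capture-ask-answer loop in Python. The endpoint URL, task fields, and polling scheme are hypothetical placeholders, not the actual OpenGlass/Picarus API; the real system relays questions through Picarus to Mechanical Turk and Twitter.

import time
import requests

CROWD_URL = "https://example.com/tasks"  # hypothetical crowd-annotation endpoint

def ask_crowd(image_path, question, poll_seconds=10, timeout=600):
    """Post an image and question, then poll until someone answers."""
    with open(image_path, "rb") as f:
        resp = requests.post(CROWD_URL,
                             files={"image": f},
                             data={"question": question})
    resp.raise_for_status()
    task_id = resp.json()["task_id"]  # hypothetical response field
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = requests.get(f"{CROWD_URL}/{task_id}").json()
        if status.get("answer"):
            # On Glass, this string would be spoken to the user via text-to-speech.
            return status["answer"]
        time.sleep(poll_seconds)
    return None  # no worker answered within the timeout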
(Memento) A native app streams real-time video frames to a cluster, which performs image matching against a dataset of images and annotations created by a sighted user. Annotations associated with matching images are sent back to Glass and read aloud to the user.
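As a rough sketch of the frame-matching step, the snippet below uses OpenCV ORB features with brute-force Hamming matching as a stand-in for whatever matcher the cluster actually runs; the dataset layout, distance threshold, and minimum match count are illustrative assumptions.

import cv2

def build_index(annotated):
    """annotated: list of (image_path, annotation_text) created by a sighted user."""
    orb = cv2.ORB_create()
    index = []
    for path, text in annotated:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        if desc is not None:
            index.append((desc, text))
    return index

def match_frame(frame_gray, index, min_matches=25):
    """Return the annotation of the best-matching reference image, or None."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    best_text, best_count = None, 0
    for ref_desc, text in index:
        good = [m for m in matcher.match(desc, ref_desc) if m.distance < 40]
        if len(good) > best_count:
            best_text, best_count = text, len(good)
    # The winning annotation would be sent back to Glass and read aloud.
    return best_text if best_count >= min_matches else None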
Best viewed in 1080p HD. This is part of the http://openglass.us project and uses http://picar.us for online annotation and rectification. Developed and demoed by Brandyn White (http://brandynwhite.com) and Andrew Miller. For this demonstration the answers were provided by us; annotation with Picarus is compatible with Mechanical Turk/Twitter.
We'd like to thank our users for giving us great feedback and helping us test OpenGlass, Shaun Kane (UMBC) for letting us disrupt his lab for a day, and Jeffrey Bigham (CMU) for arranging the meeting.