MIT researchers bring JavaScript to Google Glass


The open source WearScript project puts JavaScript on Google Glass, opening up many new, and some unexpected, input choices.

Earlier this week, Brandyn White, a PhD candidate at the University of Maryland, and Scott Greenwald, a PhD candidate at MIT, led a workshop at the MIT Media Lab to showcase an open source project called WearScript, a JavaScript environment that runs on Google Glass. The wearables category is still evolving: beyond activity trackers and smartwatches, the killer wearable app has yet to be discovered, because wearables lack the lean-back or lean-forward human-machine interface (HMI) of tablets and smartphones. WearScript lets developers experiment with new user interface (UI) concepts and input devices to push past those HMI limits.
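To give a flavor of what a WearScript program looks like, here is a minimal sketch modeled on the project's published hello-world example. The WS.* calls (WS.serverConnect, WS.say) and the {{WSUrl}} template placeholder are drawn from WearScript's documentation of the time; treat the exact names as illustrative, since the API may have changed.

```javascript
// Minimal WearScript sketch, modeled on the project's hello-world
// example; this script runs inside the HTML page WearScript loads
// on Glass.
function main() {
  // Connect back to the WearScript server, then greet the wearer
  // through Glass's text-to-speech once the connection is up.
  WS.serverConnect('{{WSUrl}}', function () {
    WS.say('Welcome to WearScript');
  });
}
window.onload = main;
```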

The overblown reports about Google Glass and privacy distract from the really important Glass discussion: how Glass micro apps can compress the time between user intent and action. Micro apps are smaller than apps and ephemeral; they are used in an instant and designed to disappear from the user's perception once their task is complete. Because of the Glass wearable form factor, micro apps deviate from the LCD-and-touchscreen/keyboard design of smartphone, tablet, and PC apps; they are intended to be hands-free and responsive in the moment. A well-designed Glass app uses the Glass UI to let the user do something they could not do with another device. Glass's notifications are a good example: to catch breaking news or preview an important email without interruption, tilt your head up slightly and capture it in a glance; to actually read the story or write a detailed reply, a smartphone, tablet, or PC remains the better tool. The best consumer-facing Google Glass experiences highlight how apps can leverage this micro app programmable wearable form factor.

Early in the MIT Media Lab workshop, White demonstrated how Glass's UI can extend beyond its touchpad, winks, and head movements by adding a homemade eye tracker to Glass as an input device. The camera and controller were salvaged from a $25 PC video camera and attached to the Glass frame with a 3D-printed mount. A few modifications were needed, such as replacing the obtrusively bright LEDs with infrared LEDs, plus a cable and a little soldering; the whole build takes about 15 minutes for someone with component-soldering skills. With this eye tracker and a few lines of WearScript, the researchers demonstrated a new interface by playing Super Mario on Google Glass using eye movements alone.
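The article does not show the researchers' code, but a hypothetical sketch suggests how such an input loop might be wired up. The 'eyetracker' channel name, the gaze-to-button mapping, and the press() helper below are all illustrative assumptions; only the general WS.subscribe/WS.publish pub-sub pattern comes from WearScript's examples.

```javascript
// Hypothetical sketch: map coarse gaze position onto game input.
// The channel names and thresholds are illustrative, not the
// researchers' actual code.
function main() {
  WS.serverConnect('{{WSUrl}}', function () {
    // Receive normalized gaze samples (x, y in [0, 1]) published
    // by the eye-tracker code and turn them into button presses.
    WS.subscribe('eyetracker', function (channel, x, y) {
      if (x < 0.3) press('left');
      else if (x > 0.7) press('right');
      if (y < 0.3) press('jump');
    });
  });
}

// Hypothetical helper: publish a button press on a channel that an
// emulator-side controller script listens to.
function press(button) {
  WS.publish('gamepad', button);
}
window.onload = main;
```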

To this audience of software engineers, wearable enthusiasts, students, and hardware hackers, repurposing an inexpensive device with some hacking and soldering is not unusual. But the impact of the demonstration set the tone for rethinking Glass apps with WearScript and unconventional Glass input devices.

For more information, follow the source link below.

Source: Network World
