<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Ergonomics Archives - Situated Research</title>
	<atom:link href="https://www.situatedresearch.com/tag/ergonomics/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.situatedresearch.com/tag/ergonomics/</link>
	<description>Usability Research and User Experience Testing</description>
	<lastBuildDate>Mon, 21 Apr 2014 18:34:31 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2021/03/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>Ergonomics Archives - Situated Research</title>
	<link>https://www.situatedresearch.com/tag/ergonomics/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">122538981</site>	<item>
		<title>Leap Motion Review: Is It Time to Replace the Mouse?</title>
		<link>https://www.situatedresearch.com/2013/07/leap-motion-review-time-to-replace-the-mouse/</link>
					<comments>https://www.situatedresearch.com/2013/07/leap-motion-review-time-to-replace-the-mouse/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Mon, 29 Jul 2013 16:15:05 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Serious Games]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Ergonomics]]></category>
		<category><![CDATA[Game Development]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Learning]]></category>
		<category><![CDATA[Usability Research]]></category>
		<category><![CDATA[Usability Testing]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5264</guid>

					<description><![CDATA[<p>The Leap Motion releases today, promising to change the way we interact with the personal computer. It delivers on that promise, but change can be for better or for worse. On which side of the spectrum does the Leap land?  Perhaps oddly in this day and age, the Leap Motion is a cheap, potentially revolutionary computer&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/07/leap-motion-review-time-to-replace-the-mouse/">Leap Motion Review: Is It Time to Replace the Mouse?</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The Leap Motion releases today, promising to change the way we interact with the personal computer. It delivers on that promise, but change can be for better or for worse. On which side of the spectrum does the Leap land? <span id="more-5264"></span></p>
<p>Perhaps oddly in this day and age, the Leap Motion is a cheap, potentially revolutionary computer and video game peripheral that <em>wasn’t</em> funded on Kickstarter. Essentially, it’s an inexpensive, easy-to-set-up Kinect for the PC. It’s a small — 0.5 inches tall, 1.2 inches wide, 3 inches deep (1.27×3.04×7.62 centimeters), and 0.1 pounds — unobtrusive device that’s similar in shape and size to one of the rectangular iPod Nanos. With a sleek black-and-silver aesthetic, it looks like a first-party Apple peripheral for the MacBook Pro — it won’t interrupt the vibe of your sweet battle station. It plugs into your computer through a standard USB 3.0 connection, and its placement is limited only by the length of the USB cable. Setup is simple: Plug in the Leap, press “Next” a few times on the installer, make an account for the Airspace Store (the Leap’s app store), then either download or load some apps. Easy peasy. What is not easy peasy, unfortunately, is using the Leap.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-5265" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/07/Leap_Mac.jpg?resize=640%2C484&#038;ssl=1" alt="Leap_Mac" width="640" height="484" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/07/Leap_Mac.jpg?w=640&amp;ssl=1 640w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/07/Leap_Mac.jpg?resize=300%2C226&amp;ssl=1 300w" sizes="auto, (max-width: 640px) 100vw, 640px" /></p>
<p>The goal of the Leap, of course, is to allow you to just sit back in your chair, relax, and effortlessly wave a hand or poke a finger in the air to interact with your computer. It has not yet achieved that goal, mainly because of that “effortlessly” qualifier. Using the Leap is a chore.</p>
<p>The Leap creates a three-dimensional motion and gesture recognition zone around itself that measures in at eight cubic feet. Eight cubic feet sounds like a pretty large area in which to wave your hands, and considering both you and the Leap are sitting comfortably in each other’s vicinity, you would think that eight cubic feet would allow you to sit back, relax, and control your computer with the ease of a technopath. Unfortunately, the Leap seems to have a sweet spot of recognition much smaller than that eight cubic feet, and even more unfortunately, the sweet spot doesn’t ever seem to be in the same spot from app to app. To make matters even more frustrating, <em>you</em> aren’t ever in the same spot. When you use your computer, you likely shift around in your chair, maybe lean your head in one hand while you use the mouse with another, sit up straight, slouch, and so on. Sneezing even puts your body in a slightly different place. While using the Leap, you constantly lose that sweet spot of being recognized, even if you’re completely aware that you have to remain still.</p>
<p>If you have only been paying attention to the Leap in an ancillary news capacity, you might think that it just plugs into your computer, and after installation, you can control your everyday computer tasks with the wave of a hand. This isn’t the case — the device requires Leap-specific apps, downloaded from the Airspace Store. You’ll find apps with which you’re familiar, such as <em>Cut the Rope</em>, while other apps such as Google Earth have Leap Motion support. However, the Leap isn’t designed to take over your computer’s input methods. There’s an app for the Leap, Touchless, that attempts to achieve this goal — it essentially turns your monitor into a virtual touchscreen — but it’s disappointingly frustrating to use. This seems to be more due to the Leap’s finicky recognition than to a poor implementation of the app.</p>
<p>There are quite a few mobile-style games, meaning nothing too in-depth, but some fun experiences to kill a few minutes with. Again, though, the Leap’s finicky recognition makes the majority of experiences less than ideal. However, the recognition seems to vary from app to app. One game, <em>Boom Ball</em> — a 3D iteration of <em>Breakout</em> — works well for the most part. In a 3D space, the paddle is closest to you on the screen, while the blocks are in the distance. You simply wave your hand around to control the paddle and bounce the ball back toward the blocks. For some reason, <em>Boom Ball </em>also has acceptable recognition within menus — you can easily hover over an option and choose it without any hassle — whereas navigating menus in just about every other app I tried is a frustrating experience.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-5267" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/07/Leap_Dropchord.jpg?resize=640%2C427&#038;ssl=1" alt="Leap_Dropchord" width="640" height="427" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/07/Leap_Dropchord.jpg?w=640&amp;ssl=1 640w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/07/Leap_Dropchord.jpg?resize=300%2C200&amp;ssl=1 300w" sizes="auto, (max-width: 640px) 100vw, 640px" /></p>
<p>While <em>Boom Ball</em> worked well enough to show what the Leap is going for, Double Fine’s music visualizer game, <em>Dropchord</em>, on the other hand, is a good example of what the Leap currently seems to be. The game itself is simple, but fun. Little circles appear on screen within a larger circle, which is the field of play. Using a finger on each of your hands, you point at the screen on either side of the field of play, and it creates a line between your two finger points. You touch the little circles with the line by moving your fingers around the field to maneuver the line, the little circles clear, and you get points. Sadly, the Leap just can’t seem to notice your fingers on a consistent basis, mainly because you’re moving them around and the Leap loses them. Furthermore, neither of the two desk-and-computer setups I tried the game at allowed me to rest my elbows on the chairs’ armrests. Regardless of how in shape your shoulders and forearms are, holding them up without rest and moving them around quickly becomes tiring, giving you gorilla arm.</p>
<p>Sadly, the finicky recognition isn’t just inconsistent in one direction, as the Leap also tends to notice appendages that it shouldn’t be noticing. A majority of the apps require you to point with one finger, but the Leap often notices the knuckles on the hand you’re pointing with, and classifies them as other fingers. You can’t exactly remove your knuckles, so you end up moving your finger around a bunch in order to find a sweet spot where the Leap notices your pointing finger, but not its companion knuckles. In the same vein, if you rest your head in the hand that isn’t in use, the Leap might recognize that hand’s fingers, as well as what appears to be your nose. Again, though, these issues seem to be more or less apparent from app to app.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignright wp-image-5268 size-full" style="margin-left: 5px;" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/07/Leap_Handsdemo-264x300.jpg?resize=264%2C300&#038;ssl=1" alt="" width="264" height="300" />You can calibrate the Leap in an attempt to rectify these issues, but even the calibration tool is finicky. A window appears on screen, at which you have to point the top part of the Leap. A score meter sits at the bottom of the window. You have to wave the top part of the Leap around the window, which moves a circular cursor and paints the window green. The more of the screen you paint, the more the score meter is moved. You have to reach a score of 80 for the Leap to be calibrated. At one computer, I was able to reach 80, but the Leap’s recognition didn’t improve when I loaded up some apps for a second time. At my more powerful gaming rig, after about five minutes of trying as patiently as possible, I could not crack the 65 mark and gave up.</p>
<p>Another odd issue is that you’re sitting right in front of your monitor, but have to wave your hands around in front of it. This partially obstructs your view. Your hands have to be situated above the Leap, and you can move the Leap anywhere the USB cable can reach. However, if you move it off to the side so you have a clear view of your monitor, you now have to awkwardly stretch your hands off to the side, <em>and</em> the gestures don’t exactly give you a precise feeling. When the Leap is off to the side, and you have to poke the screen to select an icon, for example, you’re now poking the Febreze bottle you keep next to your desk rather than that icon on the screen.</p>
<p>The majority of my time with the Leap was spent being frustrated. Either the Leap wouldn’t recognize my motions or appendages on a consistent basis, or it consistently recognized everything it shouldn’t, causing interference. I am honestly not entirely sure where the problems lie. Some apps, like <em>Boom Ball</em>, worked great — seemingly with the same exact motions and gestures with which other apps had trouble. The majority of the apps, though, were frustrating to use, and due to <em>Boom Ball</em>’s success, it’s difficult to tell whether the problem is with the Leap or with the apps’ understanding of the Leap. Either way, though the Leap is only $80, that money may be better spent on a week or two of groceries until the Leap can get itself sorted out. Until then, keep your fancy gaming mouse plugged in.</p>
<p>Written by: <a title="Posts by James Plafke" href="http://www.extremetech.com/author/jplafke">James Plafke</a>, <a href="http://www.extremetech.com/extreme/161813-leap-motion-review">ExtremeTech</a><br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/07/leap-motion-review-time-to-replace-the-mouse/">Leap Motion Review: Is It Time to Replace the Mouse?</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/07/leap-motion-review-time-to-replace-the-mouse/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5264</post-id>	</item>
		<item>
		<title>Beyond Google Glass: The Evolution of Augmented Reality</title>
		<link>https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/</link>
					<comments>https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Mon, 17 Jun 2013 16:38:34 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Communication]]></category>
		<category><![CDATA[Ergonomics]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5184</guid>

					<description><![CDATA[<p>The wearable revolution is heading beyond Google Glass, fitness tracking and health monitoring. The future is wearables that conjure up a digital layer in real space to “augment” reality. SANTA CLARA, Calif. — Reality isn’t what it used to be. With increasingly powerful technologies, the human universe is being reimagined way beyond Google Glass’ photo-tapping&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/">Beyond Google Glass: The Evolution of Augmented Reality</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><b>The wearable revolution is heading beyond Google Glass, fitness tracking and health monitoring. The future is wearables that conjure up a digital layer in real space to “augment” reality.</b></p>
<p>SANTA CLARA, Calif. — Reality isn’t what it used to be. With increasingly powerful technologies, the human universe is being reimagined way beyond Google Glass’ photo-tapping and info cards floating in space above your eye. The future is fashionable eyewear, contact lenses or even bionic eyes with immersive 3D displays, conjuring up a digital layer to “augment” reality, enabling entire new classes of applications and user experiences. <span id="more-5184"></span></p>
<p>Like most technologies that eventually reach a mass market, augmented reality, or AR, has been gestating in university labs, as well as small companies focused on gaming and vertical applications, for nearly half a century. Emerging products like <a href="http://reviews.cnet.com/google-glass/">Google Glass</a> and Oculus Rift’s 3D virtual reality headset for immersive gaming are drawing attention to what could now be termed the “wearable revolution,” but they barely scratch the surface of what’s to come.</p>
<figure id="attachment_5186" aria-describedby="caption-attachment-5186" style="width: 577px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5186" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_11.34.51_AM_1.png?resize=577%2C450&#038;ssl=1" alt="The Sword of Damocles head-mounted display. &quot;The ultimate display would, of course, be a room within which the computer can control the existence of matter,&quot; Sutherland wrote in his 1965 essay. (Credit: Ivan Sutherland &quot;The Ultimate Display&quot;)" width="577" height="450" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_11.34.51_AM_1.png?w=577&amp;ssl=1 577w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_11.34.51_AM_1.png?resize=300%2C233&amp;ssl=1 300w" sizes="auto, (max-width: 577px) 100vw, 577px" /><figcaption id="caption-attachment-5186" class="wp-caption-text"><em>The Sword of Damocles head-mounted display. &#8220;The ultimate display would, of course, be a room within which the computer can control the existence of matter,&#8221; Sutherland wrote in his 1965 essay. (Credit: Ivan Sutherland &#8220;The Ultimate Display&#8221;)</em></figcaption></figure>
<p>The wearable revolution can be traced back to <a href="http://en.wikipedia.org/wiki/Ivan_Sutherland">Ivan Sutherland</a>, a ground-breaking computer scientist at the University of Utah who in 1965 first described a head-mounted display with half-silvered mirrors that let the wearer see a virtual world superimposed on the real world. In 1968 he was able to demonstrate the concept, which was dubbed “<a href="http://www.computerhistory.org/revolution/input-output/14/356/1830">The Sword of Damocles</a>.”</p>
<figure id="attachment_5187" aria-describedby="caption-attachment-5187" style="width: 610px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5187 " src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/P1040832_610x458.jpg?resize=610%2C458&#038;ssl=1" alt="P1040832_610x458" width="610" height="458" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/P1040832_610x458.jpg?w=610&amp;ssl=1 610w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/P1040832_610x458.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 610px) 100vw, 610px" /><figcaption id="caption-attachment-5187" class="wp-caption-text"><em>Steven Feiner of Columbia University and Steve Mann of the University of Toronto at the Augmented World Expo in Santa Clara, Calif., June 4, 2013. Both are now involved in the augmented reality startup Meta. (Credit: Dan Farber)</em></figcaption></figure>
<p>His work was followed up and advanced decades later by researchers including the University of Toronto’s Steve Mann and Columbia University’s Steven Feiner. In the second decade of the 21st century, the technology is finally catching up with their concepts.</p>
<p>The necessary apparatus of cameras, computers, sensors and connectivity is coming down in cost and size, and increasing in speed, accuracy and resolution, to the point that wearable computers will be viewed as a cool accessory, mediating our interactions with the analog and digital worlds.</p>
<p><b>Augmented Reality past and future</b></p>
<p>“You need to have technology that is sufficiently comfortable and usable, and a set of potential adopters who would be comfortable wearing the technology,” said Feiner at the gathering of the fledgling AR industry at the <a href="http://augmentedworldexpo.com/">Augmented Reality Expo</a> here Wednesday. “It would be like moving from big headphones to earbuds. When they are very small and comfortable, you don’t feel weird, but cool.” He added that glasses with a “sexy lump of bump” with electronics and display could also be cool to the early adopters, especially the younger generation that has grown up digital. However, he didn’t have any prediction for when wearable computers would reach a mass market.</p>
<p>In the last decade, AR has been primarily focused on immersive gaming that teleports users to another world and on vertical applications, such as tethered, interactive 3D training simulations.</p>
<figure id="attachment_5188" aria-describedby="caption-attachment-5188" style="width: 610px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5188" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.54_PM_610x397.png?resize=610%2C397&#038;ssl=1" alt="Screen_Shot_2013-06-06_at_2.43.54_PM_610x397" width="610" height="397" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.54_PM_610x397.png?w=610&amp;ssl=1 610w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.54_PM_610x397.png?resize=300%2C195&amp;ssl=1 300w" sizes="auto, (max-width: 610px) 100vw, 610px" /><figcaption id="caption-attachment-5188" class="wp-caption-text"><em>Augmented reality can help in training, such as learning how to weld aided by a 3D environment that tracks user movements precisely. Seabery Augmented Training&#8217;s Soldamatic application, pictured here, could be used for medical training, bomb disposal and other industry verticals. (Credit: Dan Farber)</em></figcaption></figure>
<p>But now augmented reality is about to break out into free space. “AR will be the interface for the Internet of things,” said Greg Kipper, author of “Augmented Reality: An Emerging Technologies Guide to AR.”</p>
<p>“It is a transition time, like from the command line to graphical user interface,” he said. “Imagine trying to do Photoshop in a command-line interface. Augmented reality will bring to the world things beyond the graphical user interface. With sensors, computational power, storage and bandwidth, we’ll see the world in a new way and make it very personal.”</p>
<figure id="attachment_5189" aria-describedby="caption-attachment-5189" style="width: 270px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5189" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.34.00_PM_270x253.png?resize=270%2C253&#038;ssl=1" alt="Will Wright, the man behind The Sims, speaking at the Augmented Reality Expo on June 4, 2013. (Credit: Dan Farber)" width="270" height="253" /><figcaption id="caption-attachment-5189" class="wp-caption-text"><em>Will Wright, the man behind The Sims, speaking at the Augmented Reality Expo on June 4, 2013.</em><br /><em>(Credit: Dan Farber)</em></figcaption></figure>
<p>Will Wright, creator of the popular The Sims family games, likened AR to having super-sensory abilities, like flipping a switch to see what is underground, beneath your feet. “It’s not about bookmarks or restaurant reviews…it’s something that maps to my intuition.” He hoped that instead of augmenting reality, the technology could “decimate” reality, filtering out even more information than the brain already does to engage reality with less cacophony.</p>
<p>Steve Mann, who is rarely seen without one of his wearable computing rigs and is considered the father of AR, views the wearable revolution as a benefit to society. Quality of life can be improved with overlays of information, adding and subtracting it to facilitate improved “eyesight,” he said. “The first purpose is to help people see better,” he said during his keynote at the expo.</p>
<p>Just as the smartphone is compressing much of the functionality of antecedent computing devices into a single product, wearable computing will eventually make the handheld smartphone irrelevant.</p>
<p>“The value proposition of digital eyewear is having all devices in one, with a camera for each eye representing full body 3D, and the ability to interact with an infinite screen. We are architecting the future of interaction,” said Meron Gribetz of <a href="http://news.cnet.com/8301-11386_3-57584739-76/meta-glasses-bring-3d-and-your-hands-into-the-picture/">Meta, a Y Combinator startup</a> working on a new operating system and hardware interface for augmented reality computing.</p>
<p>“There is no other future of computing other than this technology, which can display information from the real world and control objects with your fingers at low latency and high dexterity. It’s the keyboard and mouse of the future,” he claimed.</p>
<figure id="attachment_5190" aria-describedby="caption-attachment-5190" style="width: 589px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5190 " src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-05-17_at_7.26.10_AM.png?resize=589%2C381&#038;ssl=1" alt="Screen_Shot_2013-05-17_at_7.26.10_AM" width="589" height="381" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-05-17_at_7.26.10_AM.png?w=589&amp;ssl=1 589w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-05-17_at_7.26.10_AM.png?resize=300%2C194&amp;ssl=1 300w" sizes="auto, (max-width: 589px) 100vw, 589px" /><figcaption id="caption-attachment-5190" class="wp-caption-text"><em>Meta can project a 3D image on a wall and users interact with their hands. (Credit: Meta)</em></figcaption></figure>
<p>Atheer, a Mountain View, Calif.-based AR startup, is <a href="http://news.cnet.com/8301-11386_3-57586750-76/atheer-bringing-3d-augmented-reality-and-gesture-control-to-android/">developing a platform that will work with existing mobile operating systems</a>, such as Google’s <a href="http://www.cnet.com/android-atlas/">Android</a>. “We are the first mobile 3D platform delivering the human interface. We are taking the touch experience on smart devices, getting the Internet out of these monitors and putting it everywhere in the physical world around you,” said CEO Sulieman Itani. “In 3D, you can paint in the physical world. For example, you could leave a note to a friend in the air at a restaurant, and when the friend walks into the restaurant, only they can see it.”</p>
<p>The company plans to seed its technology to developers this year and have its technology embedded in stylish, lightweight glasses with cameras next year.</p>
<p>The transition to touch and gesture interfaces doesn’t mean that the old modes of human-computer interaction go away. Just as TV didn’t replace radio, augmented reality won’t obliterate previous interfaces. The keyboard might still be the best interface for writing a book. Nor is waving your hands in front of your face all day a good interface.</p>
<p>“Holding hands out in front of self as primary interface is the ‘gorilla arm’ effect,” said Noah Zerkin, who is developing a full-body inertial motion-capture system for head-mounted displays. “You get tired. We need to have alternative interfaces. If not thought-based, it needs to be subtle gestures that don’t require you to wave your hands around in front of your face.”</p>
<p><a href="http://www.threegear.com/about.html">3Gear Systems</a> is working on technology that allows 3D cameras mounted above a keyboard, like a lamp, to detect smaller gestures just above the keyboard, such as pinching to rotate an object on a screen, and can use input from all 10 fingers with millimeter-level accuracy.</p>
<p>Some companies are taking less radical approaches, focusing on inserting a layer of digital information into scenes via smartphones. <a href="http://www.parworks.com/">Par Works</a>, for example, offers image recognition technology that makes it possible to overlay digital imagery on real-world scenes, such as photos and videos, with precision. A person looking for an apartment takes a picture of a building with a smartphone and the app overlays information on the image, and a shopper can see coupons or other information for various products on a shelf in a drug store.</p>
<p style="text-align: center;"><iframe loading="lazy" src="http://player.vimeo.com/video/53109174?badge=0&amp;api=1" width="600" height="450" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Brands are adopting AR technology to increase the performance of ads and sales. Several companies provide ways to turn a print ad into an interactive experience just by pointing the camera at the paper or an object with a marker. <a href="http://blippar.com/about">Blippar</a>, for instance, recognizes images by pointing a phone camera at ads or objects with its mark and inserts virtual layers of content.</p>
<h3>The future of augmented reality</h3>
<p>And where is all this heading over the next few years? It’s beginning to look like a real business, just as mobile did nearly a decade ago. Mobile analyst Tomi Ahonen expects AR to be adopted by a billion users by 2020. Intel is betting that AR will be big. The chip maker is <a href="http://news.cnet.com/8301-11386_3-57587699-76/intel-creates-$100-million-fund-for-perceptual-computing/">investing $100 million over the next 2 to 3 years</a> to fund companies developing “perceptual computing” software and apps, focusing on next-generation, natural user interfaces such as touch, gesture, voice, emotion sensing, biometrics, and image recognition.</p>
<p>Apple isn’t in the AR game yet, but the company has been <a href="http://news.cnet.com/8301-13579_3-57575121-37/apple-patents-augmented-reality-system/">awarded a U.S. patent</a>, “Synchronized, interactive augmented reality displays for multifunction devices,” for overlaying video on live video feeds.</p>
<figure id="attachment_5191" aria-describedby="caption-attachment-5191" style="width: 598px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5191" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.19_PM.png?resize=598%2C437&#038;ssl=1" alt="Screen_Shot_2013-06-06_at_2.43.19_PM" width="598" height="437" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.19_PM.png?w=598&amp;ssl=1 598w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.19_PM.png?resize=300%2C219&amp;ssl=1 300w" sizes="auto, (max-width: 598px) 100vw, 598px" /><figcaption id="caption-attachment-5191" class="wp-caption-text"><em>AR looks like it might be the 8th mass market to evolve, following print, recordings, cinema, radio, TV, the Internet and mobile, according to mobile industry analyst Tomi Ahonen. (Credit: Tomi Ahonen)</em></figcaption></figure>
<p>Eyewear will evolve over the next year with comfortable, stylish glasses carrying powerful embedded technology. They will range from Google Glass-style glance-at displays that also replace the phone to stereoscopic 3D-viewing wearables for everyday use.</p>
<p>“You’ll get 20/20, perfectly augmented vision by 2020, with movie-quality special effects blended seamlessly into the world around you,” said Dave Lorenzini, founder of AugmentedRealityCompany.com and former director at Keyhole.com, now known as Google Earth. “The effects will look so real, you’ll have to lift your display to see what’s really there. There’s more of the world than meets the eye, and that’s what’s coming.”</p>
<p>He cautioned that the growth of the AR industry could be slowed by a lack of standards to connect disparate players and their formats for bringing a 3D digital layer to life. “The AR industry has to get together to power the hallucination of what’s to come,” Lorenzini said. He added that a key turning point will be the availability of the WYSIWYG (What You See Is What You Get) real-world markup tools needed to bring this digital layer to life.</p>
<p>When the AR industry does take off, Lorenzini envisions a trillion dollar market for animated content, services and special effects layered into the real world. “Imagine people tagging friends with visual effects like a 3D halo and wings, or paying for a face recognition service to scan and add a floating name tag over the head of everyone in a room,” he said. “AR will grow from specific vertical uses to mass market appeal, driven by young, early adopters.</p>
<p>“Anyone reviewing devices like Google Glass needs to take it to their kids’ school before they pass judgement,” Lorenzini added. “This is not a device from our time, it’s from theirs. They love it, use it effortlessly, and are totally unfazed by ad targeting or privacy concerns. It will be a natural part of who they are, how they learn, connect and play.”</p>
<p>Eventually, wearable technology will become more integrated with the human body. With advances in miniaturization and nanotechnology, glasses will be replaced with contact lenses or even bionic eyes that record everything, make phone calls, and let you use parts of your body, or even your thoughts, to navigate the world.</p>
<p>“Contact lenses are difficult now but the bionic eye will become commonplace and AR will just be a feature,” Kipper said. “Some may choose to have eyes in back of their heads, and some won’t. Some will want to be cyborgs. We will always use tools as advanced as they can be to help ourselves.”</p>
<p>Brian Mullins, CEO of Daqri, an augmented reality developer of custom solutions, went even further in melding humans and technology. “Thinking is the future of AR,” he said. Mullins talked about measuring “thought intensity” with EEG machines and focusing the mind to manipulate objects during a panel discussion at the Augmented Reality Expo.</p>
<p>Of course, the technical challenges are accompanied by issues of social etiquette and privacy. Smartphones are now a well-accepted part of daily life in most countries, but issues around data ownership and access to the data abound. The subtlety and potentially always-on capacity of wearable technologies will create more privacy concerns and challenges to acceptance.</p>
<p>Feiner acknowledged that it’s “scary” in terms of the information available, especially when billions of people with cameras and microphones can capture anything in public. “There are no laws against it,” he noted.</p>
<p>He complimented Google for not overloading Glass with features. “It’s not suffering from doing too much too soon,” he said. Whether Google Glass is the tip of the spear for the mass adoption of far more powerful AR is uncertain, but it is doing a good job of surfacing the issues around the introduction of a disruptive, new way of computing.</p>
<p>Nicola Liberati, a Ph.D. student in philosophy at the University of Pisa studying the intersection of humans and technology, suggested another line of thinking about AR in his presentation at the expo. “We should not focus our attention only on what we can do with such technology, but also on what we become by using it.”</p>
<p>So, when you are strolling down the street wearing the latest digital eyewear from Google, Apple or some as yet unformed or now early-stage company, with your continuous partial attention on the 3D holographic screen feeding you all kinds of personalized information about the environment around you, zeroing in on the people and places in your field of view or piped in remotely from around the real and virtual worlds, and spaces in between, think about what we have become.</p>
<p>It all depends on your perspective.</p>
<p>Written by: <a href="http://www.cnet.com/profile/dfarber/">Dan Farber</a>, <a href="http://news.cnet.com/8301-11386_3-57588128-76/the-next-big-thing-in-tech-augmented-reality/">CNET</a> (via <a href="http://ispr.info/2013/06/12/beyond-google-glass-the-evolution-of-augmented-reality/">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/">Beyond Google Glass: The Evolution of Augmented Reality</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5184</post-id>	</item>
		<item>
		<title>BrainDriver: A Mind Controlled Car</title>
		<link>https://www.situatedresearch.com/2011/03/braindriver-a-mind-controlled-car/</link>
					<comments>https://www.situatedresearch.com/2011/03/braindriver-a-mind-controlled-car/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Wed, 23 Mar 2011 19:45:06 +0000</pubDate>
				<category><![CDATA[HCI]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Affect / Emotion]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Communication]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Ergonomics]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Learning]]></category>
		<category><![CDATA[Mental Models]]></category>
		<category><![CDATA[Usability Research]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/blog/?p=2047</guid>

					<description><![CDATA[<p>Imagine you could drive your car using only your thoughts. German researchers have just made that possible &#8211; and they have the video to prove it. Following his recent interview on the Robots Podcast about autonomous vehicles, Raul Rojas, an AI professor at the Freie Universitat Berlin, and his team have demonstrated how a driver&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2011/03/braindriver-a-mind-controlled-car/">BrainDriver: A Mind Controlled Car</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Imagine you could drive your car using only your thoughts. German researchers have just made that possible &#8211; and they have the video to prove it. Following his recent <a href="http://www.robotspodcast.com/podcast/2010/11/robots-autonomous-vehicles/">interview on the Robots Podcast</a> about autonomous vehicles, <a href="http://www.inf.fu-berlin.de/inst/ag-ki/rojas_home/pmwiki/pmwiki.php">Raul Rojas</a>, an AI professor at the <a href="http://autonomos.inf.fu-berlin.de/">Freie Universitat Berlin</a>, and his team have demonstrated how a driver can use a brain interface to steer a vehicle. <span id="more-2047"></span>Here&#8217;s what the researchers <a href="http://autonomos-labs.com/">say</a> about the project, which they call the BrainDriver:</p>
<blockquote><p>After testing iPhone, iPad and an eye-tracking device as possible user interfaces to maneuver our research car, named &#8220;MadeInGermany,&#8221; we now also use Brain Power. The &#8220;BrainDriver&#8221; application is of course a demonstration and not roadworthy yet, but in the long run human-machine interfaces like this could bear huge potential in combination with autonomous driving.</p></blockquote>
<p>To record brain activity, the researchers use an <a href="http://www.emotiv.com/">Emotiv &#8220;neuroheadset&#8221;</a> &#8211; an electroencephalography, or EEG, sensor made by the San Francisco-based company Emotiv, which designed it for gaming. After a few rounds of &#8220;mental training,&#8221; the driver learns to move virtual objects by thought alone. Each action corresponds to a different brain activity pattern, and the BrainDriver software associates each pattern with a specific command &#8211; turn left, turn right, accelerate, etc. The researchers then feed these commands to the drive-by-wire system of the vehicle, a modified Volkswagen Passat Variant 3c. Now the driver&#8217;s thoughts can control the engine, brakes, and steering.</p>
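<p>The pattern-to-command mapping described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the BrainDriver code or the Emotiv SDK: the function and label names (<code>classify_pattern</code>, <code>to_drive_command</code>, the "push"/"pull"/"left"/"right" labels) are assumptions standing in for the headset's trained classifier and the team's actual command set.</p>

```python
# Hypothetical sketch of mapping classified EEG "mental commands"
# to drive-by-wire actions, as described for the BrainDriver demo.
# All names here are illustrative, not the real SDK or project code.

# Each trained mental pattern maps to one vehicle command.
COMMAND_MAP = {
    "push": "accelerate",
    "pull": "brake",
    "left": "steer_left",
    "right": "steer_right",
}

def classify_pattern(eeg_sample):
    """Stand-in for the headset's trained classifier: pick the
    mental-pattern label with the strongest signal strength."""
    return max(eeg_sample, key=eeg_sample.get)

def to_drive_command(eeg_sample, threshold=0.6):
    """Translate one classified EEG sample into a drive-by-wire
    command; below-threshold signals yield no command at all,
    so the car holds its current steering and speed."""
    label = classify_pattern(eeg_sample)
    if eeg_sample[label] < threshold:
        return None  # signal too weak to act on
    return COMMAND_MAP[label]

# A strong "left" pattern produces a steering command.
sample = {"push": 0.2, "pull": 0.1, "left": 0.85, "right": 0.3}
print(to_drive_command(sample))  # → steer_left
```

<p>The threshold guard reflects a real design concern for any such interface: EEG classification is noisy, so weak or ambiguous signals should leave the vehicle's state unchanged rather than trigger a spurious maneuver.</p>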
<p>To road test their brain-controlled car, the Germans headed out to the former airport in Berlin Tempelhof. The video below shows a driver thought-controlling the car, Yoda-style. &#8220;Don&#8217;t try this at home,&#8221; the narration says, only half-jokingly.</p>
<p>The researchers caution that the BrainDriver application is still a demonstration and is not ready for the road. But they say that future human-machine interfaces like this have huge potential to improve driving, especially in combination with autonomous vehicles. As an example, they mention an autonomous cab ride, where the passenger could decide, only by thinking, which route to take when more than one possibility exists.</p>
<p>This type of non-invasive brain interface could also allow disabled and paralyzed people to gain more mobility in the future, similarly to what is already happening in applications such as <a href="https://mindwalker-project.eu/">robotic exoskeletons</a> and <a href="http://www.bioe.umd.edu/news/news_story.php?id=4765">advanced prosthetics</a>.</p>
<p>The research by Rojas&#8217;s group is part of the <a href="http://www.autonomos.inf.fu-berlin.de/technology/made-in-germany">MadeInGermany</a> project and follows previous work on autonomous cars, including his <a href="http://www.autonomos.inf.fu-berlin.de/">AutoNOMOS Project</a>. A 2:26 minute video is available <a href="http://www.youtube.com/watch?v=iDV_62QoHjY">here</a>.</p>
<p>Written by: Markus Waibel, IEEE Spectrum blog <a href="http://spectrum.ieee.org/automaton/robotics/robotics-software/braindriver-a-mind-controlled-car">Automaton</a> (via <a href="http://sct.temple.edu/blogs/ispr/2011/03/23/braindriver-a-mind-controlled-car/">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2011/03/braindriver-a-mind-controlled-car/">BrainDriver: A Mind Controlled Car</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2011/03/braindriver-a-mind-controlled-car/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2047</post-id>	</item>
	</channel>
</rss>
