<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>heads-up-display Archives - Situated Research</title>
	<atom:link href="https://www.situatedresearch.com/tag/heads-up-display/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.situatedresearch.com/tag/heads-up-display/</link>
	<description>Usability Research and User Experience Testing</description>
	<lastBuildDate>Tue, 19 Oct 2021 15:25:06 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2021/03/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>heads-up-display Archives - Situated Research</title>
	<link>https://www.situatedresearch.com/tag/heads-up-display/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">122538981</site>	<item>
		<title>Next Big Thing for Virtual Reality: Lasers in Your Eyes</title>
		<link>https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/</link>
					<comments>https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Tue, 03 May 2016 21:29:30 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=9341</guid>

					<description><![CDATA[<p>San Francisco – The next big leap for virtual and augmented reality headsets is likely to be eye-tracking, where headset-mounted laser beams aimed at eyeballs turn your peepers into a mouse.  A number of startups are working on this tech, with an aim to convince VR gear manufacturers such as Oculus Rift and HTC Vive&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/">Next Big Thing for Virtual Reality: Lasers in Your Eyes</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>San Francisco – The next big leap for virtual and augmented reality headsets is likely to be eye-tracking, where headset-mounted laser beams aimed at eyeballs turn your peepers into a mouse. <span id="more-9341"></span></p>
<p>A number of startups are working on this tech, with an aim to convince VR gear manufacturers such as Oculus Rift and HTC Vive to incorporate the feature in a next generation device. They include SMI, Percept, Eyematic, Fove and Eyefluence, which recently allowed USA Today to demo its eye-tracking tech.</p>
<p>“Eye-tracking is almost guaranteed to be in second-generation VR headsets,” says Will Mason, cofounder of virtual reality media company UploadVR. “It’s an incredibly important piece of the VR puzzle.”</p>
<p><iframe title="USATODAY-Embed Player" width="850" height="480" frameborder="0" scrolling="no" allowfullscreen="true" marginheight="0" marginwidth="0" src="https://uw-media.usatoday.com/embed/video/82420346?placement=snow-embed"></iframe></p>
<p>At present, making selections in VR or AR environments typically involves moving the head so that your gaze lands on a clickable icon, and then either pressing a handheld remote or, in the case of Microsoft&#8217;s HoloLens or Meta 2, reaching out with your hand to make a selection by interacting with a hologram.</p>
<p>As shown in Eyefluence’s demonstration, all of that is accomplished by simply casting your eyes on a given icon and then activating it with another glance.</p>
<p>“The idea here is that anything you do with your finger on a smartphone you can do with your eyes in VR or AR,” says Eyefluence CEO Jim Marggraff, who cofounded the Milpitas, Calif-based company in 2013 with another entrepreneur, David Stiehr.</p>
<p>“Computers made a big leap when they went from punchcards to a keyboard, and then another from a keyboard to a mouse,” says Marggraff, who invented the kid-focused LeapFrog LeapPad device. “We want to again change the way we interface with data.”</p>
<h2>Eye Tech Not Due for Years</h2>
<p>As exciting as this may sound, the mainstreaming of eye-tracking technology is still a ways off. Eyefluence execs say that although they are in discussions with a variety of headset makers, their tech isn’t likely to debut until 2017. Other companies remain largely in R&amp;D mode, and Fove has a waitlist for its headset’s Kickstarter campaign.</p>
<p>The challenges for eye-tracking are both technological and financial. Creating hardware that consistently locks onto an infinite variety of eyeballs presents one hurdle, while doing so with gear that is light and consumes little power is another.</p>
<p>And while a number of companies in the space have managed to land funding – Eyefluence has raised $21.6 million in two rounds led by Intel Capital and Motorola Solutions – some tech-centric VCs are sitting on the sidelines while they wait for the technology to mature and for headset makers to make their moves.</p>
<p>“What eye-tracking will do will be powerful, but I’m not sure how valuable it will be from an investment standpoint,” says Kobie Fuller of Accel Partners. “Is there a multi-billion-dollar eye-tracking company out there? I don’t know.”</p>
<p>Among the unknowns: whether the tech will be disseminated through a licensed model or if existing headset companies will develop it on their own.</p>
<p>Still, once deployed eye-tracking has the potential to revolutionize the VR and AR experience, Fuller expects.</p>
<p>Specifically, eye-tracking will “greatly enhance interpersonal connections” in VR, he says, by applying realistic eye movements to avatars.</p>
<p>Facebook founder Mark Zuckerberg, who presciently bought Oculus for $2 billion, is banking on VR taking social interactions to a new level.</p>
<p>“The most exciting thing about eye-tracking is getting rid of that ‘uncanny valley’ (where disbelief sets in) when it comes to interacting through avatars,” says Fuller.</p>
<h2>Less Computing Power</h2>
<p>There are a few other ways in which successful eye-tracking tech could revolutionize AR and VR beyond just making such worlds easy to navigate without joysticks, remotes or hand gestures.</p>
<p>First, by tracking the eyes, such tech can telegraph to the VR device’s graphics processing unit, or GPU, that it needs to render only the images where the eyes are looking at that moment.</p>
<p>That means less computing power would be needed. Currently, a $700 Oculus headset requires a powerful computer to render its images. Oculus’s developer kit with a suitable computer costs $2,000. “If you can save on rendering power, that could significantly lower the barrier to entry into this market for consumers,” says UploadVR’s Mason.</p>
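<p>Graphics engineers call this foveated rendering: spend full resolution only in a small window around the gaze point and progressively less toward the periphery. A rough Python sketch of that falloff decision follows; the radii and rates here are illustrative, not any vendor&#8217;s numbers:</p>
<pre><code>import math

# Illustrative falloff radii in normalized screen units, not vendor numbers.
FOVEA_RADIUS = 0.10   # render at full resolution inside this radius
BLEND_RADIUS = 0.30   # taper down to the peripheral rate out to here

def shading_rate(pixel, gaze):
    """Fraction of full resolution to spend on a pixel (1.0 = full)."""
    dist = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    if dist &lt;= FOVEA_RADIUS:
        return 1.0                    # sharp where the eye is looking
    if dist &gt;= BLEND_RADIUS:
        return 0.25                   # cheap in the periphery
    t = (dist - FOVEA_RADIUS) / (BLEND_RADIUS - FOVEA_RADIUS)
    return 1.0 - 0.75 * t             # linear blend between the two zones
</code></pre>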
<p>And second, by not just tracking the eyeball but also potentially analyzing a person&#8217;s mood and logging details about their gaze, AR/VR headsets are in a position to deliver targeted content as well as give third-party observers insights into the wearer&#8217;s state of mind and situational awareness.</p>
<h2>Police Use</h2>
<p>The former use case would appeal to in-VR advertisers, while the latter would come in handy for first responders.</p>
<p>“Police and paramedics are looking for an eyes-up, hands-free paradigm, and eye-tracking can bring that,” says Paul Steinberg, chief technology officer at Motorola Solutions, an investor in Eyefluence.</p>
<p>Steinberg sketches out a scene from what could be the near future.</p>
<p>A police officer on patrol has suddenly unholstered his gun. Via his augmented reality glasses with eye-tracking, colleagues at headquarters are instantly fed information about his stress level through pupil dilation information.</p>
<p>They can then both advise the officer through a radio as well as activate body cameras and other tech that he might have neglected to turn on in his stressed state. What’s more, another officer on the scene can instantly scan through a variety of command center video and data feeds through an AR headset, flipping through the options by simply looking at each one.</p>
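<p>The stress feed in that scenario reduces to one signal: pupil diameter climbing well above the officer&#8217;s calm-state baseline. A naive Python illustration follows; real pupillometry would also have to control for light level and other confounds:</p>
<pre><code>from statistics import mean, stdev

def stress_flag(baseline_mm, current_mm, z_threshold=2.0):
    """Naive illustration: compare the current pupil diameter against
    a calm-state baseline, in standard-deviation (z-score) units."""
    mu, sigma = mean(baseline_mm), stdev(baseline_mm)
    z = (current_mm - mu) / sigma
    return z &gt; z_threshold            # True = alert headquarters

# e.g. stress_flag([3.1, 3.0, 3.2, 3.1], current_mm=4.6) -&gt; True
</code></pre>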
<p>“We would have to work with our (first responder) customers to train them how to use this sort of tech of course, but the potential is there,” says Steinberg. “But we’re not months away, we’re more than that.”</p>
<h2>Demo Shows Off Ease of Use</h2>
<p>An Eyefluence demo indicates that eye-tracking technology isn&#8217;t a half-baked dream.</p>
<p>Navigating between a dozen tiles inside a first-generation Oculus headset proves as easy as shifting your gaze between them. Making selections – the equivalent of clicking on a mouse – is equally intuitive. At no time does the head need to move, and hands remain at your side.</p>
<p><iframe loading="lazy" src="https://www.youtube.com/embed/iQsY3uLvYQ4" width="720" height="384" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>After about 10 minutes in the demo, it feels antiquated to pop on a VR headset and grab a remote to click through choices selected with head movements.</p>
<p>Marggraff says Eyefluence’s technical challenges included making technology that could respond in low and bright light, accounting for different size pupils and ensuring that power consumption is minimal.</p>
<p>But, he adds, his team remains convinced of the inevitability of its product: “Just like when we started tapping and swiping on our phones, we’re going to eventually need a better interface for AR and VR.”</p>
<p>Written by: <a href="http://www.usatoday.com/staff/1005/marco-della-cava/" target="_blank" rel="noopener">Marco della Cava</a>, <a href="http://www.usatoday.com/story/tech/news/2016/05/02/new-mouse-vr-could-your-eyes/83716986/" target="_blank" rel="noopener">USA Today</a> (via <a href="http://ispr.info/2016/05/03/next-big-thing-for-virtual-reality-eye-tracking-lasers-in-your-eyes/" target="_blank" rel="noopener">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/">Next Big Thing for Virtual Reality: Lasers in Your Eyes</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">9341</post-id>	</item>
		<item>
		<title>Ohio State Doctor Shows Promise of Google Glass in Live Surgery</title>
		<link>https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/</link>
					<comments>https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Thu, 12 Sep 2013 17:55:48 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Serious Games]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Health Care]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Interface]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5320</guid>

					<description><![CDATA[<p>COLUMBUS, Ohio – A surgeon at The Ohio State University Wexner Medical Center is the first in the United States to consult with a distant colleague using live, point-of-view video from the operating room via Google Glass, a head-mounted computer and camera device.  “It’s a privilege to be a part of this project as we explore how&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/">Ohio State Doctor Shows Promise of Google Glass in Live Surgery</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>COLUMBUS, Ohio – A surgeon at <a href="http://www.medicalcenter.osu.edu/Pages/index.aspx">The Ohio State University Wexner Medical Center</a> is the first in the United States to consult with a distant colleague using live, point-of-view video from the operating room via Google Glass, a head-mounted computer and camera device. <span id="more-5320"></span></p>
<p>“It’s a privilege to be a part of this project as we explore how this exciting new technology might be incorporated into the everyday care of our patients,” said Dr.<a href="http://ortho.osu.edu/directories/faculty/christopherkaeding/">Christopher Kaeding, </a>the physician who performed the surgery and director of sports medicine at Ohio State.  “To be honest, once we got into the surgery, I often forgot the device was there. It just seemed very intuitive and fit seamlessly.”</p>
<p>Google Glass has a frame similar to traditional glasses, but instead of lenses, there is a small glass block that sits above the right eye.  On that glass is a computer screen that, with a simple voice command, allows users to pull up information as they would on any other computer.  Attached to the front of the device is a camera that offers a point-of-view image and the ability to take both photos and videos while the device is worn.</p>
<p>During this procedure at the medical center’s University East facility, Kaeding wore the device as he performed ACL surgery on Paula Kobalka, 47, from Westerville, Ohio, who hurt her knee playing softball.  As he performed her operation at a facility on the east side of Columbus, Google Glass showed his vantage point via the internet to audiences miles away.</p>
<p>Across town, one of Kaeding&#8217;s Ohio State colleagues, Dr. Robert Magnussen, watched the surgery from his office, while on the main campus, several students at <a href="http://medicine.osu.edu/Pages/default.aspx">The Ohio State University College of Medicine</a> watched on their laptops.</p>
<p>“To have the opportunity to be a medical student and share in this technology is really exciting,” said Ryan Blackwell, a second-year medical student who watched the surgery remotely.   “This could have huge implications, not only from the medical education perspective, but because a doctor can use this technology remotely, it could spread patient care all over the world in places that we don’t have it already.”</p>
<p>“As an academic medical center, we’re very excited about the opportunities this device could provide for education,” said Dr. <a href="http://p4mi.org/clay-marsh-md">Clay Marsh,</a> chief innovation officer at The Ohio State University Wexner Medical Center. “But beyond, that, it could be a game-changer for the doctor during the surgery itself.”</p>
<p>Experts have theorized that during surgery doctors could use voice commands to instantly call up x-ray or MRI images of their patient, pathology reports or reference materials.  They could collaborate live and face-to-face with colleagues via the internet, anywhere in the world.</p>
<p>“It puts you right there, real time,” said Marsh, who is also the executive director of the Center for Personalized Health Care at Ohio State. “Not only might you be able to call up any kind of information you need or to get the help you need, but it’s the ability to do it immediately that’s so exciting,” he said.  “Now, we just have to start using it. Like many technologies, it needs to be evaluated in different situations to find out where the greatest value is and how it can impact the lives of our patients in a positive way.”</p>
<p>Only 1,000 people in the United States have been chosen to test Google Glass as part of Google&#8217;s Explorer Program. Dr. Ismail Nabeel, an assistant professor of general internal medicine at Ohio State, applied and was chosen. He then partnered with Kaeding to perform this groundbreaking surgery and to help test technology that could change the way we see medicine in the future.</p>
<hr />
<p>Broadcast quality video and high-definition photos available for download: <a href="http://bit.ly/16jXc6c">http://bit.ly/16jXc6c</a></p>
<p>Written by: The <a href="http://www.medicalcenter.osu.edu/mediaroom/releases/Pages/Ohio-State-Doctor-Shows-Promise-of-Google-Glass-in-Live-Surgery.aspx">Ohio State University</a> (via <a href="http://ispr.info/2013/09/03/ohio-state-doctor-shows-promise-of-google-glass-in-live-surgery/">Presence</a>); for details about the first international Google Glass surgery, in June 2013, see <a href="http://www.clinicacemtro.com/index.php/en/sala-de-prensa-3/noticias/679-clinica-cemtro-first-ggogle-glass-surgery">Clinica Cemtro</a>; for a report about early reactions from those testing Glass see <a href="http://www.npr.org/templates/story/story.php?storyId=216094970">NPR</a></p>
<p>Image: Dr. Christopher Kaeding, an orthopedic surgeon at The Ohio State University Wexner Medical Center, is shown wearing Google Glass</p>
<p>Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/">Ohio State Doctor Shows Promise of Google Glass in Live Surgery</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5320</post-id>	</item>
		<item>
		<title>Beyond Google Glass: The Evolution of Augmented Reality</title>
		<link>https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/</link>
					<comments>https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Mon, 17 Jun 2013 16:38:34 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Communication]]></category>
		<category><![CDATA[Ergonomics]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5184</guid>

					<description><![CDATA[<p>The wearable revolution is heading beyond Google Glass, fitness tracking and health monitoring. The future is wearables that conjure up a digital layer in real space to &#8220;augment&#8221; reality. SANTA CLARA, Calif. &#8212; Reality isn&#8217;t what it used to be. With increasingly powerful technologies, the human universe is being reimagined way beyond Google Glass&#8217; photo-tapping&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/">Beyond Google Glass: The Evolution of Augmented Reality</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><b>The wearable revolution is heading beyond Google Glass, fitness tracking and health monitoring. The future is wearables that conjure up a digital layer in real space to “augment” reality.</b></p>
<p>SANTA CLARA, Calif. &#8212; Reality isn&#8217;t what it used to be. With increasingly powerful technologies, the human universe is being reimagined way beyond Google Glass&#8217; photo-tapping and info cards floating in space above your eye. The future is fashionable eyewear, contact lenses or even bionic eyes with immersive 3D displays, conjuring up a digital layer to &#8220;augment&#8221; reality, enabling entire new classes of applications and user experiences. <span id="more-5184"></span></p>
<p>Like most technologies that eventually reach a mass market, augmented reality, or AR, has been gestating in university labs, as well as small companies focused on gaming and vertical applications, for nearly half a century. Emerging products like <a href="http://reviews.cnet.com/google-glass/">Google Glass</a> and Oculus Rift&#8217;s 3D virtual reality headset for immersive gaming are drawing attention to what could now be termed the &#8220;wearable revolution,&#8221; but they barely scratch the surface of what&#8217;s to come.</p>
<figure id="attachment_5186" aria-describedby="caption-attachment-5186" style="width: 577px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5186" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_11.34.51_AM_1.png?resize=577%2C450&#038;ssl=1" alt="The Sword of Damocles head-mounted display. &quot;The ultimate display would, of course, be a room within which the computer can control the existence of matter,&quot; Sutherland wrote in his 1965 essay. (Credit: Ivan Sutherland &quot;The Ultimate Display&quot;)" width="577" height="450" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_11.34.51_AM_1.png?w=577&amp;ssl=1 577w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_11.34.51_AM_1.png?resize=300%2C233&amp;ssl=1 300w" sizes="auto, (max-width: 577px) 100vw, 577px" /><figcaption id="caption-attachment-5186" class="wp-caption-text"><em>The Sword of Damocles head-mounted display. &#8220;The ultimate display would, of course, be a room within which the computer can control the existence of matter,&#8221; Sutherland wrote in his 1965 essay. (Credit: Ivan Sutherland &#8220;The Ultimate Display&#8221;)</em></figcaption></figure>
<p>The wearable revolution can be traced back to <a href="http://en.wikipedia.org/wiki/Ivan_Sutherland">Ivan Sutherland</a>, a ground-breaking computer scientist at the University of Utah who in 1965 first described a head-mounted display with half-silvered mirrors that let the wearer see a virtual world superimposed on the real world. In 1968 he was able to demonstrate the concept, which was dubbed “<a href="http://www.computerhistory.org/revolution/input-output/14/356/1830">The Sword of Damocles</a>.”</p>
<figure id="attachment_5187" aria-describedby="caption-attachment-5187" style="width: 610px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5187 " src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/P1040832_610x458.jpg?resize=610%2C458&#038;ssl=1" alt="P1040832_610x458" width="610" height="458" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/P1040832_610x458.jpg?w=610&amp;ssl=1 610w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/P1040832_610x458.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 610px) 100vw, 610px" /><figcaption id="caption-attachment-5187" class="wp-caption-text"><em>Steven Feiner of Columbia University and Steve Mann of the University of Toronto at the Augmented World Expo in Santa Clara, Calif., June 4, 2013. Both are now involved in the augmented reality startup Meta. (Credit: Dan Farber)</em></figcaption></figure>
<p>His work was followed up and advanced decades later by researchers including the University of Toronto’s Steve Mann and Columbia University’s Steven Feiner. In the second decade of the 21st century, the technology is finally catching up with their concepts.</p>
<p>The necessary apparatus of cameras, computers, sensors and connectivity is coming down in cost and size and increasing in speed, accuracy and resolution to the point that wearable computers will be viewed as a cool accessory, mediating our interactions with the analog and digital worlds.</p>
<p><b>Augmented Reality past and future</b></p>
<p>“You need to have technology that is sufficiently comfortable and usable, and a set of potential adopters who would be comfortable wearing the technology,” said Feiner at the gathering of the fledgling AR industry at the <a href="http://augmentedworldexpo.com/">Augmented Reality Expo</a> here Wednesday. “It would be like moving from big headphones to earbuds. When they are very small and comfortable, you don’t feel weird, but cool.” He added that glasses with a “sexy lump of bump” with electronics and display could also be cool to the early adopters, especially the younger generation that has grown up digital. However, he didn’t have any prediction for when wearable computer would reach a mass market.</p>
<p>In the last decade, AR has been primarily focused on immersive gaming that teleports users to another world and on vertical applications, such as tethered, interactive 3D training simulations.</p>
<figure id="attachment_5188" aria-describedby="caption-attachment-5188" style="width: 610px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5188" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.54_PM_610x397.png?resize=610%2C397&#038;ssl=1" alt="Screen_Shot_2013-06-06_at_2.43.54_PM_610x397" width="610" height="397" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.54_PM_610x397.png?w=610&amp;ssl=1 610w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.54_PM_610x397.png?resize=300%2C195&amp;ssl=1 300w" sizes="auto, (max-width: 610px) 100vw, 610px" /><figcaption id="caption-attachment-5188" class="wp-caption-text"><em>Augmented reality can help in training, such as learning how to weld aided by a 3D environment that tracks user movements precisely. Seabery Augmented Training&#8217;s Soldamatic application, pictured here, could be used for medical training, bomb disposal and other industry verticals. (Credit: Dan Farber)</em></figcaption></figure>
<p>But now augmented reality is about to break out into free space. “AR will be the interface for the Internet of things,” said Greg Kipper, author of “Augmented Reality: An Emerging Technologies Guide to AR.”</p>
<p>“It is a transition time, like from the command line to graphical user interface,” he said. “Imagine trying to do PhotoShop in a command-line interface. Augmented reality will bring to the world things beyond the graphical user interface. With sensors, computational power, storage and bandwidth, we’ll see the world in a new way and make it very personal.”</p>
<figure id="attachment_5189" aria-describedby="caption-attachment-5189" style="width: 270px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5189" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.34.00_PM_270x253.png?resize=270%2C253&#038;ssl=1" alt="Will Wright, the man behind The Sims, speaking at the Augmented Reality Expo on June 4, 2013. (Credit: Dan Farber)" width="270" height="253" /><figcaption id="caption-attachment-5189" class="wp-caption-text"><em>Will Wright, the man behind The Sims, speaking at the Augmented Reality Expo on June 4, 2013.</em><br /><em>(Credit: Dan Farber)</em></figcaption></figure>
<p>Will Wright, creator of the popular The Sims family games, likened AR to having super-sensory abilities, like flipping a switch to see what is underground, beneath your feet. “It’s not about bookmarks or restaurant reviews…it’s something that maps to my intuition.” He hoped that instead of augmenting reality, the technology could “decimate” reality, filtering out even more information than the brain already does to engage reality with less cacophony.</p>
<p>Steve Mann, who is rarely seen without one of his wearable computing rigs and is considered the father of AR, views the wearable revolution as a benefit to society. Quality of life can be improved with overlays of information, adding and subtracting it to facilitate improved “eyesight,” he said. “The first purpose is to help people see better,” he said during his keynote at the expo.</p>
<p>Just as the smartphone is compressing a lot of the function from antecedent computing devices into a single product, wearable computing will eventually make the handheld smartphone irrelevant.</p>
<p>“The value proposition of digital eyewear is having all devices in one, with a camera for each eye representing full body 3D, and the ability to interact with an infinite screen. We are architecting the future of interaction,” said Meron Gribetz of <a href="http://news.cnet.com/8301-11386_3-57584739-76/meta-glasses-bring-3d-and-your-hands-into-the-picture/">Meta, a Ycombinator startup</a> working on a new operating system and hardware interface for augmented reality computing.</p>
<p>“There is no other future of computing other than this technology, which can display information from the real world and control objects with your fingers at low latency and high dexterity. It’s the keyboard and mouse of the future,” he claimed.</p>
<figure id="attachment_5190" aria-describedby="caption-attachment-5190" style="width: 589px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5190 " src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-05-17_at_7.26.10_AM.png?resize=589%2C381&#038;ssl=1" alt="Screen_Shot_2013-05-17_at_7.26.10_AM" width="589" height="381" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-05-17_at_7.26.10_AM.png?w=589&amp;ssl=1 589w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-05-17_at_7.26.10_AM.png?resize=300%2C194&amp;ssl=1 300w" sizes="auto, (max-width: 589px) 100vw, 589px" /><figcaption id="caption-attachment-5190" class="wp-caption-text"><em>Meta can project a 3D image on a wall and users interact with their hands. (Credit: Meta)</em></figcaption></figure>
<p>Atheer, a Mountain View, Calif.-based AR startup, is <a href="http://news.cnet.com/8301-11386_3-57586750-76/atheer-bringing-3d-augmented-reality-and-gesture-control-to-android/">developing a platform that will work with existing mobile operating systems</a>, such as Google&#8217;s <a href="http://www.cnet.com/android-atlas/">Android</a>. &#8220;We are the first mobile 3D platform delivering the human interface. We are taking the touch experience on smart devices, getting the Internet out of these monitors and putting it everywhere in the physical world around you,&#8221; said CEO Sulieman Itani. &#8220;In 3D, you can paint in the physical world. For example, you could leave a note to a friend in the air at a restaurant, and when the friend walks into the restaurant, only they can see it.&#8221;</p>
<p>The company plans to seed its technology to developers this year and have its technology embedded in stylish, lightweight glasses with cameras next year.</p>
<p>The transition to touch and gesture interfaces doesn’t mean that the old modes of human-computer interaction go away. Just as TV didn’t replace radio, augmented reality won’t obliterate previous interfaces. The keyboard might still be the best interface for writing a book. Nor is waving your hands in front of your face all day a good interface.</p>
<p>“Holding hands out in front of self as primary interface is the ‘gorilla arm’ effect,” said Noah Zerkin, who is developing a full-body inertial motion-capture system for head-mounted displays. “You get tired. We need to have alternative interfaces. If not thought-based, it needs to be subtle gestures that don’t require that you to wave hands around in front of your face.”</p>
<p><a href="http://www.threegear.com/about.html">3Gear Systems</a> is working on technology that allows 3D cameras mounted above a keyboard, like a lamp, to detect smaller gestures just above the keyboard, such as pinching to rotate an object on a screen, and can use input from all 10 fingers with millimeter-level accuracy.</p>
<p>Some companies are taking less radical approaches, focusing on inserting a layer of digital information into scenes via smartphones. <a href="http://www.parworks.com/">Par Works</a>, for example, offers image recognition technology that makes it possible to overlay digital imagery on real-world data, such as photos and videos, with precision. A person looking for an apartment takes a picture of a building with a smartphone and the app overlays information on the image, and a shopper might see coupons or other information for various products on a shelf in a drug store.</p>
<p style="text-align: center;"><iframe loading="lazy" src="http://player.vimeo.com/video/53109174?badge=0&amp;api=1" width="600" height="450" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Brands are adopting AR technology to increase performance of ads and sales. Several companies provide ways to turn a print ad into an interactive experience just by pointing the camera at the paper or an object with a marker. <a href="http://blippar.com/about">Blippar</a>, for instance, recognizes images by pointing a phone camera at ads or object with its mark and inserts virtual layers of content.</p>
<h3>The future of augmented reality</h3>
<p>And where is all this heading over the next few years? It’s beginning to look like a real business, just as mobile did nearly a decade ago. Mobile analyst Tomi Ahonen expects AR to be adopted by a billion users by 2020. Intel is betting that AR will be big. The chip maker is <a href="http://news.cnet.com/8301-11386_3-57587699-76/intel-creates-$100-million-fund-for-perceptual-computing/">investing $100 million over the next 2 to 3 years</a> to fund companies developing “perceptual computing” software and apps, focusing on next-generation, natural user interfaces such as touch, gesture, voice, emotion sensing, biometrics, and image recognition.</p>
<p>Apple isn’t in the AR game yet, but the company has been <a href="http://news.cnet.com/8301-13579_3-57575121-37/apple-patents-augmented-reality-system/">awarded a U.S. patent</a>, “Synchronized, interactive augmented reality displays for multifunction devices,” for overlaying video on live video feeds.</p>
<figure id="attachment_5191" aria-describedby="caption-attachment-5191" style="width: 598px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5191" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.19_PM.png?resize=598%2C437&#038;ssl=1" alt="Screen_Shot_2013-06-06_at_2.43.19_PM" width="598" height="437" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.19_PM.png?w=598&amp;ssl=1 598w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.19_PM.png?resize=300%2C219&amp;ssl=1 300w" sizes="auto, (max-width: 598px) 100vw, 598px" /><figcaption id="caption-attachment-5191" class="wp-caption-text"><em>AR is looking is might be the 8th mass market to evolve, following print, recordings, cinema, radio, TV, the Internet and mobile, according mobile industry analyst Tomi Ahonen. (Credit: Tomi Ahonen)</em></figcaption></figure>
<p>Eyewear will evolve over the next year, with comfortable, stylish glasses carrying powerful embedded technology. They will range from Google Glass-style glance-at displays that also replace the phone to stereoscopic 3D-viewing wearables for everyday use.</p>
<p>“You’ll get 20/20, perfectly augmented vision by 2020, with movie-quality special effects blended seamlessly into the world around you,” said Dave Lorenzini, founder of AugmentedRealityCompany.com and former director at Keyhole.com, now known as Google Earth. “The effects will look so real, you’ll have to lift your display to see what’s really there. There’s more of the world than meets the eye, and that’s what’s coming.”</p>
<p>He cautioned that the growth of the AR industry could be slowed by a lack of standards to connect disparate players and their formats for bringing a 3D digital layer to life. &#8220;The AR industry has to get together to power the hallucination of what&#8217;s to come,&#8221; Lorenzini said. He added that a key turning point will be the availability of the WYSIWYG (What You See Is What You Get) real-world markup tools needed to bring this digital layer to life.</p>
<p>When the AR industry does take off, Lorenzini envisions a trillion dollar market for animated content, services and special effects layered into the real world. “Imagine people tagging friends with visual effects like a 3D halo and wings, or paying for a face recognition service to scan and add a floating name tag over the head of everyone in a room,” he said. “AR will grow from specific vertical uses to mass market appeal, driven by young, early adopters.</p>
<p>“Anyone reviewing devices like Google Glass needs to take it to their kids’ school before they pass judgement,” Lorenzini added. “This is not a device from our time, it’s from theirs. They love it, use it effortlessly, and are totally unfazed by ad targeting or privacy concerns. It will be be a natural part of who they are, how they learn, connect and play.”</p>
<p>Eventually, wearable technology will become more integrated with the human body. With advances in miniaturization and nanotechnology, eyewear will be replaced with contact lenses or even bionic eyes that record everything, make phone calls and allow you to use parts of your body, or even your thoughts, to navigate the world.</p>
<p>“Contact lenses are difficult now but the bionic eye will become commonplace and AR will just be a feature,” Kipper said. “Some may choose to have eyes in back of their heads, and some won’t. Some will want to be cyborgs. We will always use tools as advanced as they can be to help ourselves.”</p>
<p>Brian Mullins, CEO of Daqri, an augmented reality developer of custom solutions, went even further in melding humans and technology. “Thinking is the future of AR,” he said. Mullins talked about measuring “thought intensity” with EEG machines and focusing the mind to manipulate objects during a panel discussion at the Augmented Reality Expo.</p>
<p>Of course, the technical challenges are accompanied by issues of social etiquette and privacy. Smartphones are now a well-accepted part of daily life in most countries, but issues around data ownership and access to the data abound. The subtlety and potentially always-on capacity of wearable technologies will create more privacy concerns and challenges to acceptance.</p>
<p>Feiner acknowledged that it’s “scary” in terms of the information available, especially when billions of people with cameras and microphones can capture anything in public. “There are no laws against it,” he noted.</p>
<p>He gave Google some compliments for not overloading Glass with features. &#8220;It&#8217;s not suffering from doing too much too soon,&#8221; he said. Whether Google Glass is the tip of the spear for the mass adoption of far more powerful AR is uncertain, but it is doing a good job of surfacing the issues around the introduction of a disruptive, new way of computing.</p>
<p>Nicola Liberati, a Ph.D. student in philosophy at the University of Pisa studying the intersection of humans and technology, suggested another line of thinking about AR in his presentation at the expo. &#8220;We should not focus our attention only on what we can do with such technology, but also on what we become by using it.&#8221;</p>
<p>So, when you are strolling down the street wearing the latest digital eyewear from Google, Apple or some as yet unformed or now early-stage company, with your continuous partial attention on the 3D holographic screen feeding you all kinds of personalized information about the environment around you, zeroing in on the people and places in your field of view or piped in remotely from around the real and virtual worlds, and spaces in between, think about what we have become.</p>
<p>It all depends on your perspective.</p>
<p>Written by: <a href="http://www.cnet.com/profile/dfarber/">Dan Farber</a>, <a href="http://news.cnet.com/8301-11386_3-57588128-76/the-next-big-thing-in-tech-augmented-reality/">CNET</a> (via <a href="http://ispr.info/2013/06/12/beyond-google-glass-the-evolution-of-augmented-reality/">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/">Beyond Google Glass: The Evolution of Augmented Reality</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5184</post-id>	</item>
		<item>
		<title>New Media Capture and Delivery System Gives Users Immersive “Experiences”</title>
		<link>https://www.situatedresearch.com/2013/04/new-media-capture-and-delivery-system-gives-users-immersive-experiences/</link>
					<comments>https://www.situatedresearch.com/2013/04/new-media-capture-and-delivery-system-gives-users-immersive-experiences/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Fri, 12 Apr 2013 16:02:17 +0000</pubDate>
				<category><![CDATA[Education]]></category>
		<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Serious Games]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Game Development]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Learning]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5082</guid>

					<description><![CDATA[<p>Experience Media Studios today announced the worldwide launch of its patent-pending 3DPOV&#174; system, a pioneering new solution for capturing, delivering, and experiencing immersive media. Experience Media Studios&#8217; 3DPOV&#174; system enables the capture of a three-dimensional visual and auditory experience from the first-person perspective. 3DPOV&#174; media delivers a higher level of sensory engagement than virtual reality, replicating a true-to-life binocular&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/04/new-media-capture-and-delivery-system-gives-users-immersive-experiences/">New Media Capture and Delivery System Gives Users Immersive “Experiences”</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><a href="http://www.experiencemediastudios.com/">Experience Media Studios</a> today announced the worldwide launch of its patent-pending <a href="http://www.3dpov.com/">3DPOV</a><sup>®</sup> system, a pioneering new solution for capturing, delivering, and experiencing immersive media.</p>
<p>Experience Media Studios’ 3DPOV<sup>®</sup> system enables the capture of a three-dimensional visual and auditory experience from the first-person perspective. 3DPOV<sup>®</sup> media delivers a higher level of sensory engagement than virtual reality that replicates a true-to-life binocular and peripheral visual field and a stereophonic auditory experience. <span id="more-5082"></span>The system also captures GPS coordinates and altitude information to further augment reality.</p>
<p><iframe loading="lazy" src="http://www.youtube.com/embed/VkWFjDOkU4M" height="360" width="640" allowfullscreen="" frameborder="0"></iframe></p>
<p>“Modern audiences demand more of their media experiences,” said <a href="http://en.wikipedia.org/wiki/Michael-Ryan_Fletchall">Michael-Ryan Fletchall</a>, CEO of Experience Media Studios. “With more control over how and when they consume media, audiences want new and individualized experiences offering deeper levels of engagement. 3DPOV delivers an experience that goes far beyond just watching.”</p>
<p>Immersive media quickly absorbs the viewer into the experience, with implications for critical skills training, simulations, and experiential learning environments. Experience Media Studios formally launched 3DPOV<sup>&#174;</sup> in conjunction with the Military and Government Summit at the <a href="http://www.nabshow.com/">National Association of Broadcasters (NAB) Show</a>. Today&#8217;s announcement underscores the value of 3DPOV<sup>&#174;</sup> in these key segments, where small details not available in virtual reality are integrated to assess and teach armed forces critical life-saving, decision-under-pressure skills through rapid processing and reaction according to policies and protocols.</p>
<p>“In developing this media for military and government blended learning simulations, we immediately recognized the opportunity to apply the technology to our wheelhouse of entertainment and advertising,” said Fletchall.</p>
<p>Experience Media Studios is currently in pre-production with <a href="http://www.3dpov.com/possessedsoul"><i>Possessed Soul</i></a>, its upcoming feature-length horror &#8220;experience&#8221; shot entirely using 3DPOV<sup>&#174;</sup> technology. The project is partially financed through pre-sales to the horror-genre fan community using the <a href="http://www.igg.me/at/possessedsoul">Indiegogo</a> crowdfunding platform. In 2012, Experience Media Studios released the Josh Hutcherson drama, <a href="http://www.facebook.com/TheForgerMovie"><i>The Forger</i></a>.</p>
<p>The 3DPOV<sup>®</sup> system also features a cloud-based digital delivery platform, connecting affiliated media production companies with 3DPOV<sup>®</sup> technology to build a high quality digital asset inventory for worldwide distribution to private and public end users via 2D and 3D televisions, personal computers, and mobile devices.</p>
<p>“Our goal was to build a complete front-to-backend solution for creating and directly distributing unique 3DPOV<sup>®</sup> content,” said Fletchall. “We have an exclusive content platform for creating, cataloging, managing and distributing experience-driven 3DPOV<sup>®</sup> assets through an industry-leading pipeline with a user-friendly interface.”</p>
<p>Experience Media Studios will roll out the consumer subscription service component of <a href="http://www.3dpov.com/">3DPOV.com</a> with limited content later in 2013.</p>
<p>Written by: <a href="http://www.prnewswire.com/news-releases-test/new-media-capture-and-delivery-system-gives-users-immersive-experiences-202152391.html">PR Newswire</a> (via <a href="http://ispr.info/2013/04/10/new-media-capturedelivery-system-3dpov-gives-users-immersive-experiences/">Presence</a>); more images available at the <a href="http://experiencemediastudios.com/3dpov/">Experience Media Studies website</a><br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/04/new-media-capture-and-delivery-system-gives-users-immersive-experiences/">New Media Capture and Delivery System Gives Users Immersive “Experiences”</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/04/new-media-capture-and-delivery-system-gives-users-immersive-experiences/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5082</post-id>	</item>
		<item>
		<title>Your Eyes Can Control Augmented Reality Glasses</title>
		<link>https://www.situatedresearch.com/2012/11/your-eyes-can-control-augmented-reality-glasses/</link>
					<comments>https://www.situatedresearch.com/2012/11/your-eyes-can-control-augmented-reality-glasses/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Sun, 11 Nov 2012 17:33:03 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Communication]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=4353</guid>

					<description><![CDATA[<p>The simple act of turning a page has begun to look outdated with iPads replacing books and manuals for many working professionals. But an augmented reality display similar to Google Glasses frees up wearers’ hands by allowing them to turn virtual pages using their eyes alone. Such a display comes in the form of futuristic&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2012/11/your-eyes-can-control-augmented-reality-glasses/">Your Eyes Can Control Augmented Reality Glasses</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The simple act of turning a page has begun to look outdated with iPads replacing books and manuals for many working professionals. But an augmented reality display similar to Google Glasses frees up wearers’ hands by allowing them to turn virtual pages using their eyes alone. <span id="more-4353"></span></p>
<p>Such a display comes in the form of <a href="http://www.technewsdaily.com/5804-google-glasses-touchpad-control.html">futuristic glasses</a> that allow wearers to see virtual maps, drawings or other images — up to 3 feet (1 meter) in size — projected in front of their eyes. A chip smaller than half the size of a postage stamp can detect the wearer’s eye movements so that they just need to glance at an arrow key to turn a page in a virtual instruction manual or book.</p>
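<p>In code terms the interaction is tiny: treat the two arrow glyphs as gaze targets and page the document when the tracker reports a fixation on one of them. A hypothetical event handler in Python, with made-up <code>event</code>, <code>manual</code>, and <code>arrows</code> objects standing in for a real SDK:</p>
<pre><code># Hypothetical event-driven sketch: the tracker pushes fixation events,
# and fixations on the arrow glyphs page through the virtual manual.
def on_fixation(event, manual, arrows):
    if arrows["next"].contains(event.point):
        manual.page += 1
    elif arrows["prev"].contains(event.point) and manual.page &gt; 0:
        manual.page -= 1
</code></pre>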
<p>“The data glasses allow us to see the real world in the normal way, while at the same time registering our eye movements with the camera,” said Rigo Herold, project manager at the Fraunhofer Center for Organics, Materials and Electronic Devices Dresden in Germany.</p>
<p>Such eye movement control frees up the hands of the glasses wearers entirely so that they can focus on their real-world work — whether they’re U.S. military mechanics trying to fix armored vehicles or hospital surgeons doing a marathon operation.</p>
<p>The hands-free technology for the augmented reality glasses may also represent the future direction for Google Glasses. The head-worn display made by the Internet search giant was unveiled as something that requires hand control, but <a href="http://www.techradar.com/news/portable-devices/google-patents-eye-tracking-for-google-glass-1091428">TechRadar noticed</a> that Google has since patented eye-tracking technology for its glasses.</p>
<p>“Despite the fact that Google’s data glasses, for instance, might be a little more stylish in appearance, navigating through the menu still requires using joysticks, whereas our glasses do not,” Herold said.</p>
<p>Researchers plan to exhibit the technology at the Electronica 2012 trade fair in Munich from Nov. 13-16. Buyers will have the choice of ordering a computer with the system or installing the device&#8217;s software on their own computer. The system can run on Windows or Linux.</p>
<p>The system’s hardware and software came from German researchers at the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation IOSB, while the company TRIVISIO produced the actual eyewear.</p>
<p>Written by: <a href="http://www.technewsdaily.com/15342-eyes-augmented-reality-glasses.html" target="_blank">TechNewsDaily</a> (via <a href="http://ispr.info/2012/11/07/your-eyes-can-control-augmented-reality-glasses/">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2012/11/your-eyes-can-control-augmented-reality-glasses/">Your Eyes Can Control Augmented Reality Glasses</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2012/11/your-eyes-can-control-augmented-reality-glasses/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4353</post-id>	</item>
		<item>
		<title>Buttons Were An Inspired UI Hack, But Now We’ve Got Better Options</title>
		<link>https://www.situatedresearch.com/2012/03/buttons-were-an-inspired-ui-hack-but-now-weve-got-better-options/</link>
					<comments>https://www.situatedresearch.com/2012/03/buttons-were-an-inspired-ui-hack-but-now-weve-got-better-options/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Tue, 13 Mar 2012 17:06:33 +0000</pubDate>
				<category><![CDATA[Education]]></category>
		<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Communication]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[Usability Research]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/blog/?p=2548</guid>

					<description><![CDATA[<p>Josh Clark on the future of touch and other types of UI. If you&#8217;ve ever seen a child interact with an iPad, you&#8217;ve seen the power of the touch interface in action. Is this a sign of what&#8217;s to come &#8212; will we be touching and swiping screens rather than tapping buttons? I reached out to&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2012/03/buttons-were-an-inspired-ui-hack-but-now-weve-got-better-options/">Buttons Were An Inspired UI Hack, But Now We’ve Got Better Options</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h3>Josh Clark on the future of touch and other types of UI.</h3>
<p>If you’ve ever seen a child interact with an iPad, you’ve seen the power of the touch interface in action. Is this a sign of what’s to come — will we be touching and swiping screens rather tapping buttons? I reached out to Josh Clark (<a href="https://twitter.com/#%21/globalmoxie">@globalmoxie</a>), founder of <a href="http://globalmoxie.com/about/index.shtml">Global Moxie</a> and author of <a href="http://shop.oreilly.com/product/0636920001133.do">“Tapworthy,”</a> to get his thoughts on the future of touch and computer interaction, and whether or not buttons face extinction.</p>
<p>Clark says a touch-based UI is more intuitive to the way we think and act in the world. He also says touch is just the beginning — speech, facial expression, and physical gestures are on the way, and we need to start thinking about content in these contexts. <span id="more-2548"></span></p>
<p style="text-align: center;"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter wp-image-2549 size-full" title="apple-trackpad-instructions" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/03/apple-trackpad-instructions.jpg?resize=600%2C396&#038;ssl=1" alt="" width="600" height="396" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/03/apple-trackpad-instructions.jpg?w=600&amp;ssl=1 600w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/03/apple-trackpad-instructions.jpg?resize=300%2C198&amp;ssl=1 300w" sizes="auto, (max-width: 600px) 100vw, 600px" /><em>[Image: Screenshot from Apple&#8217;s trackpad tutorial.]</em></p>
<p>Clark will expand on these ideas at <a href="http://oreilly.com/minitoc-austin.html?cmp=il-radar-tc12-josh-clark-toc-austin-interview">Mini TOC Austin</a> on March 9 in Austin, Texas.</p>
<p>Our interview follows.</p>
<h3>Are we close to seeing the end of buttons?</h3>
<p><strong>Josh Clark:</strong> I frequently say that buttons are a hack, and people sometimes take that the wrong way. I don’t mean it in a particularly negative way. I think buttons are an inspired hack, a workaround that we’ve needed just to get stuff done. That’s true in the real world as well as the virtual: A light switch over here to turn on a light over there isn’t especially intuitive; it has to be learned, and re-learned for every room we walk into. That light switch introduces a middleman, a layer of separation between the action and the thing you really want to work on, which is the light. The switch is a hack, but a brilliant one, because it’s just not practical to climb up a ladder in a dark room to screw in the light bulb.</p>
<p>Buttons in interfaces are a similar kind of hack — an abstraction we’ve needed to make the desktop interface work for 30 years. The cursor, the mouse, buttons, tabs, menus … these are all prosthetics we’ve been using to wrangle content and information.</p>
<p>With touchscreen interfaces, though, designers can create the illusion of acting on information and content directly, manipulating it like a physical object that you can touch and stretch and drag and nudge. Those interactions tickle our brains in different ways from how traditional interfaces work because we don’t have to process that middle layer of UI conventions. We can just touch the content directly in many cases. It’s a great way to help cut through complexity.</p>
<p>The result is so much more intuitive, so much more natural to the way we think and act in the world. The proof is how quickly people with no computing experience — toddlers and seniors — take to the iPad. They’re actually better with these interfaces than the rest of us because they aren’t poisoned by 30 years of desktop interface conventions. Follow the toddlers; they’re better at it than we are.</p>
<p>So, yes, in some contexts, buttons and other administrative debris of the traditional interface have run their course. But buttons remain useful in some contexts, especially for more abstract tasks that aren’t easily represented physically. The keyboard is a great example, as are actions like “send to Twitter,” which don’t have readily obvious physical components. And just as important, buttons are labeled with clear calls to action. As we turn the corner into popularizing touch interactions, buttons will still have a place.</p>
<h3>What kinds of issues do touch- and gesture-oriented interfaces present?</h3>
<p><strong>Josh Clark:</strong> There are issues for both designers and users. In general, if a touchscreen element looks or behaves like a physical object, people will try to interact with it like one. If your interface looks like a book, people will try to turn its pages. For centuries, designers have dressed up their designs to look like physical objects, but that’s always just been eye candy in the past. With touch, users approach those designs very differently; they’re promises about how the interface works. So designers have to be careful to deliver on those promises. Don’t make your interface look like a book, for example, if it really works through desktop-like buttons. (I’m looking at you, <a href="http://www.apple.com/ipad/built-in-apps/contacts.html">Contacts app</a> for iPad.)</p>
<p>So, you can create really intuitive interfaces by making them look or behave like physical objects. That doesn’t mean that everything has to look just like a real-world object. Windows Phone and the forthcoming <a href="http://radar.oreilly.com/2011/09/windows8-metro-digital-book-design-ideas.html">Windows 8 interface</a>, for example, use a very flat tile-like metaphor. It doesn’t look like a 3-D gadget or artifact, but it does behave with real-world physics. It’s easy to figure out how to slide and work the content on the screen. People figure that stuff out really quickly.</p>
<p>The next hurdle — and the big opportunity for touch interfaces — is moving to more abstract gestures: two- and three-finger swipes, a full-hand pinch, and so on. In those cases, gestures become the keyboard shortcuts of touch and begin to let you create applications that you play more than you use, almost like an instrument. But wait, here I am talking about abstract gestures; didn’t I just say that abstractions — like buttons — are less than ideal? Well, yeah, the trouble is you don’t want to have the overhead of processing an interface, of thinking through how it works. The thing about physical abstractions (like gestures) versus visual abstractions (like buttons) is that physical actions can be absorbed into muscle memory. That kind of subconscious knowledge is actually much faster than visual processing — it’s why touch typists are so much faster than people who visually peck at the keys. So, once you learn and absorb those physical actions — a two-finger swipe always does this or that — then you can actually move really quickly through an interface in the same way a pianist or a typist moves through a keyboard. Intent fluidly translated to action.</p>
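<p><em>As a concrete illustration (an editorial sketch, not Clark’s code): a two-finger horizontal swipe treated as a “keyboard shortcut of touch,” written in TypeScript against the standard DOM TouchEvent API. The element id, the 80-pixel threshold, and the logged action are placeholder assumptions.</em></p>
<pre><code>// A two-finger horizontal swipe used as a discrete "keyboard shortcut of touch".
// Only the standard DOM TouchEvent API is used; the element id ('content'),
// the 80px threshold, and the action fired at the end are illustrative.
const SWIPE_THRESHOLD = 80; // px of horizontal travel before we call it a swipe

let startX: number | null = null;
const surface = document.getElementById('content')!;

surface.addEventListener('touchstart', (e: TouchEvent) => {
  // Track only gestures that begin with exactly two fingers down.
  startX = e.touches.length === 2
    ? (e.touches[0].clientX + e.touches[1].clientX) / 2
    : null;
});

surface.addEventListener('touchmove', (e: TouchEvent) => {
  if (startX === null || e.touches.length !== 2) return;
  const midX = (e.touches[0].clientX + e.touches[1].clientX) / 2;
  if (Math.abs(midX - startX) > SWIPE_THRESHOLD) {
    const direction = midX > startX ? 'right' : 'left';
    startX = null; // fire once per gesture, like a discrete shortcut
    onTwoFingerSwipe(direction);
  }
});

function onTwoFingerSwipe(direction: string): void {
  console.log('two-finger swipe ' + direction); // bind the real app action here
}</code></pre>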
<p>But how do you teach that stuff? Swiping a card, pinching a map, or tapping a photo are all based on actions we know from the physical world. But a two-finger swipe has no prior meaning. It’s not something we’ll guess. Gestures are invisible with no labels, so that means they have to be taught.</p>
<h3>In what ways can UI design alleviate these learning issues?</h3>
<p><strong>Josh Clark:</strong> Designers should approach this by thinking through how we learn any physical action in the real world: observation of visual cues, demonstration, and practice. Too often, designers fall back on instruction manuals (iPad apps that open with a big screen of gesture diagrams) or screencasts. Neither is very effective.</p>
<p>Instead, designers have to do a better job of coaching people in context, showing our audiences how and when to use a gesture in the moment. More of us need to study video game design because games are great at this. In so many video games, you’re dropped into a world where you don’t even know what your goal is, let alone what you’re capable of or what obstacles you might encounter. The game rides along with you, tracking your progress, taking note of what you’ve encountered and what you haven’t, and giving in-context instruction, tips, and demonstrations as you go. That’s what more apps and websites should do. Don’t wait for people to somehow find a hidden gesture shortcut; tell people about it when they need it. Show an animation of the gesture and wait for them to copy it. Demonstration and practice — that’s how we learn all physical actions, from playing an instrument to learning a tennis serve.</p>
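<p><em>In the same spirit (again an editorial sketch, not from the interview), the coaching pattern reduces to: show the demonstration only when the gesture first becomes relevant, and dismiss it only once the user has copied it. The storage key, the hint element, and the whenPerformed hook are placeholder assumptions.</em></p>
<pre><code>// In-context coaching: demonstrate a gesture when it first becomes relevant,
// then wait for the user to copy it before marking it learned. The storage key,
// the hint element (looping a demo animation), and whenPerformed are illustrative.
const LEARNED_KEY = 'learned.twoFingerSwipe';

function coachGesture(
  hint: HTMLElement,                         // element that animates the gesture
  whenPerformed: (done: () => void) => void  // calls done() when the user does it
): void {
  if (localStorage.getItem(LEARNED_KEY) !== null) return; // already learned

  hint.hidden = false;      // demonstrate: play the gesture animation in context
  whenPerformed(() => {     // practice: wait until the user copies the gesture
    localStorage.setItem(LEARNED_KEY, 'yes');
    hint.hidden = true;     // absorbed into muscle memory; never show it again
  });
}</code></pre>
<p><em>Wired to the recognizer sketched above, whenPerformed would simply register a one-shot callback inside onTwoFingerSwipe.</em></p>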
<h3>How do you see computer interaction evolving?</h3>
<p><strong>Josh Clark:</strong> It’s a really exciting time for interaction design because so many new technologies are becoming mature and affordable. Touch got there a few years ago. Speech is just now arriving. Computer vision, with face recognition and gesture recognition like Kinect, is coming along. So, we have all these areas where computers are learning to understand our particularly human forms of communication.</p>
<p>In the past, we had to learn to act and think like the machine. At the command line, we had to write in the computer’s language, not our own. The desktop graphical user interface was a big step forward in making things more humane through visuals, but it was still oriented around how computers saw the world, not humans. When you consider the additions of touch, speech, facial expression, and physical gesture, you have nearly the whole range of human (and humane) communication tools. As computers learn the subtleties of those expressions, our interfaces can become more human and more intuitive, too.</p>
<p>Touchscreens are leading this charge for now, but touch isn’t appropriate in every context. Speech is obviously great for the car, for walking, for any context where you need your eyes elsewhere. We’re going to see interfaces that use these different modes of communication in context-appropriate combinations. But that means we have to start thinking hard about how our content works in all these different contexts. So many of us are struggling just to figure out how to make the content adapt to a smaller screen. How about how your content sounds when spoken? How about when it can be touched, or how it should respond to physical gestures or facial expressions? There’s lots of work ahead.</p>
<h3>Are Google’s <a href="http://9to5google.com/2012/02/06/hud-google-glasses-are-real-and-they-are-coming-soon/">rumored heads-up-display glasses</a> a sign of things to come?</h3>
<p><strong>Josh Clark:</strong> I’m sure that all kinds of new displays will have a role in the digital future. I’m not especially clever about figuring out which technology will be a huge hit. If someone had told me five years ago that the immediate future would be all about a glass phone with no buttons, I’d have said they were nuts. I think software, context, and, above all, human empathy make the difference in how and when a hardware technology becomes truly useful. The stuff I’ve seen of the heads-up-display glasses seems a bit awkward and unnatural. The twitchy way you have to move your head to navigate the screen asks you to behave a little like a robot. I think trends and expectations are moving in the opposite direction — technology that adapts to human means of expression, not humans adapting to technology.</p>
<p><em>This interview was edited and condensed.</em></p>
<p>Written by: <a href="http://radar.oreilly.com/jennw/index.html">Jenn Webb</a>, <a href="http://radar.oreilly.com/2012/03/touch-interface-user-experience-toc.html" target="_blank">O’Reilly Radar</a> (via <a href="http://ispr.info/2012/03/09/josh-clark-on-the-future-of-touch-and-other-types-of-ui/">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2012/03/buttons-were-an-inspired-ui-hack-but-now-weve-got-better-options/">Buttons Were An Inspired UI Hack, But Now We’ve Got Better Options</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2012/03/buttons-were-an-inspired-ui-hack-but-now-weve-got-better-options/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2548</post-id>	</item>
		<item>
		<title>Sony Predicts Return of Virtual Reality</title>
		<link>https://www.situatedresearch.com/2011/07/sony-predicts-return-of-virtual-reality/</link>
					<comments>https://www.situatedresearch.com/2011/07/sony-predicts-return-of-virtual-reality/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Tue, 05 Jul 2011 15:59:15 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Usability Research]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/blog/?p=2234</guid>

					<description><![CDATA[<p>Not content with attempting to usher in the advent of 3D console gaming, it seems Sony now has its sights set on the next quantum leap – virtual reality. Speaking in a video interview to promote next month’s b.tween 3D event in London, SCE Studios exec Mick Hocking explained that Sony’s recently announced HMD device&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2011/07/sony-predicts-return-of-virtual-reality/">Sony Predicts Return of Virtual Reality</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Not content with attempting to usher in the advent of 3D console gaming, it seems Sony now has its sights set on the next quantum leap – virtual reality.</p>
<p>Speaking in a <a href="http://btween3d.co.uk/people/mick-hocking/">video interview</a> to promote next month’s b.tween 3D event in London, SCE Studios exec Mick Hocking explained that Sony’s <a href="http://www.eurogamer.net/articles/2011-01-06-sony-shows-off-3d-headset">recently announced</a> HMD device could represent the future of 3D gaming by allowing users access to a full ‘virtual’ world. <span id="more-2234"></span></p>
<p>“Another thing that’s coming back which I’m very excited about is the notion of virtual reality,” he explained.</p>
<p>“So we have our new HMD – or head-mounted display – which was announced at CES earlier this year, and you can see that we can now get back to where we really wanted to get with virtual reality in the ’80s.</p>
<p>“We’ve now got the power to do it, we’ve got the screen resolution to do it, we’ve got the processing power to update fast enough so we can have very immersive experiences on head-mounted displays in gaming in the not too distant future.</p>
<p>“Being in a virtual world where I can see my virtual hands or a virtual gun with all the things we can do in the gaming world is going to be absolutely amazing,” he added.</p>
<p>There’s no release date set yet for Sony’s HMD or even a firm set of tech specs. Have a look at the <a href="http://www.eurogamer.net/articles/2011-06-28-sony-predicts-return-of-virtual-reality">clip</a> from Sony’s CES show to see what the gizmo looks like.</p>
<p>Staring even deeper into his crystal ball, Hocking also explained that he’d love to see AR-enabled contact lenses that would superimpose information about anything in your line of sight, Terminator-style.</p>
<p>“Most of the really exciting stuff is out there in the R&amp;D area at the moment – like having contact lenses with cameras and sensors built-in so that everything you see can be augmented with useful data,” he said.</p>
<p>Written by: <a href="http://www.eurogamer.net/archive.php?author=650">Fred Dutton</a>, <a href="http://www.eurogamer.net/articles/2011-06-28-sony-predicts-return-of-virtual-reality">Eurogamer</a> (via <a href="http://sct.temple.edu/blogs/ispr/2011/07/05/sony-predicts-return-of-virtual-reality/">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2011/07/sony-predicts-return-of-virtual-reality/">Sony Predicts Return of Virtual Reality</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2011/07/sony-predicts-return-of-virtual-reality/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2234</post-id>	</item>
	</channel>
</rss>
