<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Augmented Reality Archives - Situated Research</title>
	<atom:link href="https://www.situatedresearch.com/tag/augmented-reality/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.situatedresearch.com/tag/augmented-reality/</link>
	<description>Usability Research and User Experience Testing</description>
	<lastBuildDate>Mon, 22 Nov 2021 17:33:24 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2021/03/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>Augmented Reality Archives - Situated Research</title>
	<link>https://www.situatedresearch.com/tag/augmented-reality/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">122538981</site>	<item>
		<title>Nintendo’s newest Mario Kart is the best video game you never knew you wanted to play</title>
		<link>https://www.situatedresearch.com/2020/09/nintendos-newest-mario-kart-is-the-best-video-game-you-never-knew-you-wanted-to-play/</link>
					<comments>https://www.situatedresearch.com/2020/09/nintendos-newest-mario-kart-is-the-best-video-game-you-never-knew-you-wanted-to-play/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Wed, 09 Sep 2020 14:22:15 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Game Development]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Experience]]></category>
		<guid isPermaLink="false">https://www.situatedresearch.com/?p=10126</guid>

					<description><![CDATA[<p>By now, Nintendo has made exactly 87,493,029 versions of Mario Kart since the game was first introduced in 1992 for the Super Nintendo. (Okay, the company has really made 13—which is still a lot!) But a new sequel coming this fall to the Nintendo Switch changes the formula in an enticing way, thanks to super&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2020/09/nintendos-newest-mario-kart-is-the-best-video-game-you-never-knew-you-wanted-to-play/">Nintendo’s newest Mario Kart is the best video game you never knew you wanted to play</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div style="width: 980px;" class="wp-video"><video class="wp-video-shortcode" id="video-10126-1" width="980" height="550" loop autoplay preload="metadata" controls="controls"><source type="video/mp4" src="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-2-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4?_=1" /><source type="video/webm" src="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-2-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.webm?_=1" /><a href="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-2-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4">https://www.situatedresearch.com/wp-content/uploads/2020/09/p-2-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4</a></video></div>
<p>By now, Nintendo has made exactly 87,493,029 versions of Mario Kart since the game was first introduced in 1992 for the Super Nintendo. (Okay, the company has really made 13—which is still a lot!) But a new sequel coming this fall to the Nintendo Switch changes the formula in an enticing way, thanks to super experimental UX. <span id="more-10126"></span></p>
<p><em>Mario Kart Live: Home Circuit</em> transforms the Nintendo Switch into a controller for an actual toy race kart. The kart is fitted with a camera, giving the player a first-person view of its perspective as it whizzes around your living room, bedroom, or wherever you have some open floor space to play.</p>
<figure class="video-wrapper"><iframe title="Mario Kart Live: Home Circuit - Announcement Trailer - Nintendo Switch" src="https://www.youtube.com/embed/f2mCqUSDCJE?feature=oembed" width="720" height="480" frameborder="0" allowfullscreen="allowfullscreen"><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start">﻿</span></iframe></figure>
<p>How does the game build your course? You place a few gates that are bundled with the game on the floor. From there, how the exact setup and customization works is unclear (perhaps vision AI is involved?), but Nintendo—alongside its partner developer <a href="https://www.velanstudios.com/" target="_blank" rel="noopener noreferrer">Velan Studios</a>—demonstrates that one of several tracks, from a simple oval to complicated curves, can be set up to avoid existing couches, coffee tables, and perhaps even sleeping cats.</p>
<figure class="wp-caption alignnone image-wrapper" aria-describedby="caption-attachment-90547236"><figcaption id="caption-attachment-90547236" class="wp-caption-text"><div style="width: 596px;" class="wp-video"><video class="wp-video-shortcode" id="video-10126-2" width="596" height="334" loop autoplay preload="metadata" controls="controls"><source type="video/mp4" src="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4?_=2" /><source type="video/webm" src="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.webm?_=2" /><a href="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4">https://www.situatedresearch.com/wp-content/uploads/2020/09/p-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4</a></video></div>
[Image: Nintendo]
</figcaption></figure>
<p>As you race your kart around the course, all sorts of augmented reality (AR) effects, ranging from glowing boundaries to power-ups to your racing competitors, will appear on the screen, as if they exist in your actual home. If you run over a virtual item, like a nitro-boosting mushroom, the kart will actually accelerate. If you hit a troublesome banana peel, the kart will actually lose some control. Oh, and assuming you have friends with their own games, up to four players can race their karts together in the same space.</p>
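<p>Conceptually, the trick is a thin mapping layer: the AR engine detects a collision between the kart and a rendered item, then sends a motor command over the wireless link. Here is a minimal Python sketch of that idea; every name in it is a hypothetical stand-in, since Nintendo hasn&#8217;t published how <em>Home Circuit</em> works internally:</p>
<pre><code class="language-python"># Sketch: map collisions with virtual AR items to commands for the
# physical kart. All names here are hypothetical stand-ins; Nintendo
# has not published how Home Circuit actually does this.
class Kart:
    """Stand-in for the wireless link to the physical kart."""
    def set_speed_multiplier(self, factor, duration_s):
        print(f"kart speed x{factor} for {duration_s}s")

    def apply_steering_wobble(self, magnitude, duration_s):
        print(f"kart wobbles at {magnitude} for {duration_s}s")

# Virtual item kind -> physical effect on the kart.
EFFECTS = {
    "mushroom": lambda k: k.set_speed_multiplier(1.5, duration_s=2.0),
    "banana":   lambda k: k.apply_steering_wobble(0.8, duration_s=1.0),
}

def on_virtual_collision(kart, item_kind):
    """Called by the AR engine when the kart overlaps a rendered item."""
    effect = EFFECTS.get(item_kind)
    if effect is not None:
        effect(kart)

on_virtual_collision(Kart(), "mushroom")  # prints: kart speed x1.5 for 2.0s
</code></pre>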
<figure class="wp-caption image-wrapper alignnone" aria-describedby="caption-attachment-90547239"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-10130" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2020/09/i-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.jpg?resize=596%2C335&#038;ssl=1" alt="" width="596" height="335" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2020/09/i-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.jpg?w=596&amp;ssl=1 596w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2020/09/i-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.jpg?resize=300%2C169&amp;ssl=1 300w" sizes="auto, (max-width: 596px) 100vw, 596px" /></figure>
<figure class="wp-caption image-wrapper alignnone" aria-describedby="caption-attachment-90547239"><figcaption id="caption-attachment-90547239" class="wp-caption-text">[Image: Nintendo]</figcaption></figure>
<p>With few exceptions, augmented reality has been little more than a gimmick. Snapchat’s zany face filters are still the most successful commercialization of this technology that, not so long ago, the tech world heralded as the next big thing.</p>
<p>Microsoft’s Hololens AR headset is technically impressive, but it’s being marketed as an enterprise tool to businesses (which demonstrates pretty clearly that it’s not ready for the mainstream just yet). The hyped company Magic Leap, with billions in venture capital from investors like Google, has done little more than release a developer version of its headset to mediocre reviews while it hangs on for life. The hardware is simply too expensive, too bulky, but, most of all, too useless to really be worth buying for a vast majority of people. Plus, it’s antisocial by nature to be experiencing a different version of reality than the people around you.</p>
<figure class="wp-caption alignnone image-wrapper" aria-describedby="caption-attachment-90547241"><figcaption id="caption-attachment-90547241" class="wp-caption-text"><div style="width: 596px;" class="wp-video"><video class="wp-video-shortcode" id="video-10126-3" width="596" height="334" loop autoplay preload="metadata" controls="controls"><source type="video/mp4" src="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/i-4-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4?_=3" /><source type="video/webm" src="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/i-4-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.webm?_=3" /><a href="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/i-4-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4">https://www.situatedresearch.com/wp-content/uploads/2020/09/i-4-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4</a></video></div>
[Image: Nintendo]
</figcaption></figure>
<p>But Nintendo is doing what it does best. It’s figuring out how to transform a gimmick into shared fun—and make it halfway affordable, too. A lot of that comes down to Nintendo just understanding the ergonomics around technology and play. For years, AR demos asked you to hold up your phone like a little window to peek through, to do something like transform <a href="https://www.youtube.com/watch?v=r5ziOSjXdo4" target="_blank" rel="noopener noreferrer">a magazine cover into an animation</a>. These novelties wore thin quickly because they were more physically awkward than visually amazing.</p>
<p>Nintendo is taking an approach similar to those predecessors’. But instead of using the camera in your phone, it has built one into the kart. That lets you play the game like you always do (sitting on your couch) while still experiencing the enticing, additive effects of AR. No, Nintendo isn’t aiming as high as Magic Leap, teasing an entire world of digital objects that you can reach out and touch. But Nintendo is competent enough at game design that it has figured out how to work with what it has to create an AR experience that’s both new and destined to be massively successful.</p>
<p><em>Mario Kart Live: Home Circuit</em> will be out for $100 on October 16. The last version of Mario Kart has sold <a href="https://www.gamereactor.eu/25-million-mario-kart-8-deluxe-copies-sold/" target="_blank" rel="noopener noreferrer">more than 25 million copies</a> to date. And if <em>Home Circuit</em> is even a fraction as successful, it will still be one of the most profitable demonstrations of AR ever built.</p>
<p>Written by: <a href="https://www.fastcompany.com/user/mark-wilson" target="_blank" rel="noopener noreferrer">Mark Wilson</a>, <a href="https://www.fastcompany.com/90546982/nintendos-newest-mario-kart-is-the-best-video-game-you-never-knew-you-wanted-to-play" target="_blank" rel="noopener noreferrer">Fast Company</a><br />
Posted by: <a href="https://www.situatedresearch.com/">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2020/09/nintendos-newest-mario-kart-is-the-best-video-game-you-never-knew-you-wanted-to-play/">Nintendo’s newest Mario Kart is the best video game you never knew you wanted to play</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2020/09/nintendos-newest-mario-kart-is-the-best-video-game-you-never-knew-you-wanted-to-play/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-2-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.webm" length="897423" type="video/webm" />
<enclosure url="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.webm" length="884575" type="video/webm" />
<enclosure url="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/i-4-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.webm" length="374989" type="video/webm" />
<enclosure url="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-2-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4" length="1616122" type="video/mp4" />
<enclosure url="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4" length="1116213" type="video/mp4" />
<enclosure url="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/i-4-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4" length="474880" type="video/mp4" />

		<post-id xmlns="com-wordpress:feed-additions:1">10126</post-id>	</item>
		<item>
		<title>Next Big Thing for Virtual Reality: Lasers in Your Eyes</title>
		<link>https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/</link>
					<comments>https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Tue, 03 May 2016 21:29:30 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=9341</guid>

					<description><![CDATA[<p>San Francisco – The next big leap for virtual and augmented reality headsets is likely to be eye-tracking, where headset-mounted laser beams aimed at eyeballs turn your peepers into a mouse.  A number of startups are working on this tech, with an aim to convince VR gear manufacturers such as Oculus Rift and HTC Vive&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/">Next Big Thing for Virtual Reality: Lasers in Your Eyes</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>San Francisco – The next big leap for virtual and augmented reality headsets is likely to be eye-tracking, where headset-mounted laser beams aimed at eyeballs turn your peepers into a mouse. <span id="more-9341"></span></p>
<p>A number of startups are working on this tech, with an aim to convince VR gear manufacturers such as Oculus Rift and HTC Vive to incorporate the feature in a next generation device. They include SMI, Percept, Eyematic, Fove and Eyefluence, which recently allowed USA Today to demo its eye-tracking tech.</p>
<p>“Eye-tracking is almost guaranteed to be in second-generation VR headsets,” says Will Mason, cofounder of virtual reality media company UploadVR. “It’s an incredibly important piece of the VR puzzle.”</p>
<p><iframe loading="lazy" title="USATODAY-Embed Player" width="850" height="480" frameborder="0" scrolling="no" allowfullscreen="true" marginheight="0" marginwidth="0" src="https://uw-media.usatoday.com/embed/video/82420346?placement=snow-embed"></iframe></p>
<p>At present, making selections in VR or AR environments typically involves moving the head so that your gaze lands on a clickable icon, and then either pressing a handheld remote or, in the case of Microsoft’s HoloLens or Meta 2, reaching out with your hand to make a selection by interacting with a hologram.</p>
<p>As shown in Eyefluence’s demonstration, all of that is accomplished by simply casting your eyes on a given icon and then activating it with another glance.</p>
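<p>The interaction pattern is easy to describe in code: dwell on an icon long enough to select it, then a deliberate second glance acts as the click. A rough Python sketch follows, with the eye-tracker feed and UI objects as invented stand-ins (Eyefluence’s real API isn’t public):</p>
<pre><code class="language-python"># Sketch: "select with a glance, activate with another" using dwell time.
# The gaze source and UI objects are illustrative stand-ins; Eyefluence
# has not published its actual interaction API.
DWELL_SECONDS = 0.4

def process_gaze(samples, icons, confirm_target):
    """samples: iterable of (timestamp_s, (x, y)) from an eye tracker."""
    selected = None
    dwell_on, dwell_start = None, 0.0
    for t, xy in samples:
        icon = next((i for i in icons if i.contains(xy)), None)
        if icon is not None:
            if icon is not dwell_on:
                dwell_on, dwell_start = icon, t      # gaze moved to a new icon
            elif t - dwell_start >= DWELL_SECONDS and selected is not icon:
                selected = icon
                icon.highlight()                     # first glance: select
        elif selected is not None and confirm_target.contains(xy):
            return selected.activate()               # second glance: "click"
</code></pre>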
<p>“The idea here is that anything you do with your finger on a smartphone you can do with your eyes in VR or AR,” says Eyefluence CEO Jim Marggraff, who cofounded the Milpitas, Calif-based company in 2013 with another entrepreneur, David Stiehr.</p>
<p>“Computers made a big leap when they went from punchcards to a keyboard, and then another from a keyboard to a mouse,” says Marggraff, who invented the kid-focused LeapFrog LeapPad device. “We want to again change the way we interface with data.”</p>
<h2>Eye Tech Not Due for Years</h2>
<p>As exciting as this may sound, the mainstreaming of eye-tracking technology is still a ways off. Eyefluence execs say that although they are in discussions with a variety of headset makers, their tech isn’t likely to debut until 2017. Other companies remain largely in R&amp;D mode, and Fove has a waitlist for its headset’s Kickstarter campaign.</p>
<p>The challenges for eye-tracking are both technological and financial. Creating hardware that consistently locks onto an infinite variety of eyeballs presents one hurdle, while doing so with gear that is light and consumes little power is another.</p>
<p>And while a number of companies in the space have managed to land funding – Eyefluence has raised $21.6 million in two rounds led by Intel Capital and Motorola Solutions – some tech-centric VCs are sitting on the sidelines while they wait for the technology to mature and for headset makers to make their moves.</p>
<p>“What eye-tracking will do will be powerful, but I’m not sure how valuable it will be from an investment standpoint,” says Kobie Fuller of Accel Partners. “Is there a multi-billion-dollar eye-tracking company out there? I don’t know.”</p>
<p>Among the unknowns: whether the tech will be disseminated through a licensing model or whether existing headset companies will develop it on their own.</p>
<p>Still, once deployed, eye-tracking has the potential to revolutionize the VR and AR experience, Fuller expects.</p>
<p>Specifically, eye-tracking will “greatly enhance interpersonal connections” in VR, he says, by applying realistic eye movements to avatars.</p>
<p>Facebook founder Mark Zuckerberg, who presciently bought Oculus for $2 billion, is banking on VR taking social interactions to a new level.</p>
<p>“The most exciting thing about eye-tracking is getting rid of that ‘uncanny valley’ (where disbelief sets in) when it comes to interacting through avatars,” says Fuller.</p>
<h2>Less Computing Power</h2>
<p>There are a few other ways in which successful eye-tracking tech could revolutionize AR and VR beyond just making such worlds easy to navigate without joysticks, remotes or hand gestures.</p>
<p>First, by tracking the eyes, such tech can telegraph to the VR device’s graphics processing unit, or GPU, that it needs to render only the images where the eyes are looking at that moment.</p>
<p>That means less computing power would be needed. Currently, a $700 Oculus headset requires a powerful computer to render its images. Oculus’s developer kit with a suitable computer costs $2,000. “If you can save on rendering power, that could significantly lower the barrier to entry into this market for consumers,” says UploadVR’s Mason.</p>
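<p>The technique Mason is describing is generally known as foveated rendering: shade at full resolution only near the gaze point, and progressively coarser toward the periphery. A toy Python version of the per-tile decision, with made-up thresholds:</p>
<pre><code class="language-python"># Sketch of foveated rendering's core decision: choose a shading rate for
# each screen tile from its angular distance to the tracked gaze point.
# The thresholds below are illustrative, not from any shipping headset.
import math

def shading_rate(tile_center, gaze_point, degrees_per_pixel=0.02):
    dx = tile_center[0] - gaze_point[0]
    dy = tile_center[1] - gaze_point[1]
    eccentricity = math.hypot(dx, dy) * degrees_per_pixel  # deg from fovea
    if eccentricity > 18.0:
        return 0.25   # far periphery: quarter resolution
    if eccentricity > 6.0:
        return 0.5    # mid periphery: half resolution
    return 1.0        # foveal region: full resolution

# A tile 1000 px from the gaze point (~20 degrees) renders at quarter rate.
print(shading_rate((1960, 540), (960, 540)))  # 0.25
</code></pre>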
<p>And second, by not just tracking the eyeball but also potentially analyzing a person’s mood and logging details about their gaze, AR/VR headsets are in a position to deliver targeted content as well as give third-party observers insights into the wearer’s state of mind and situational awareness.</p>
<h2>Police Use</h2>
<p>The former use case would appeal to in-VR advertisers, while the latter would come in handy for first responders.</p>
<p>“Police and paramedics are looking for an eyes-up, hands-free paradigm, and eye-tracking can bring that,” says Paul Steinberg, chief technology officer at Motorola Solutions, an investor in Eyefluence.</p>
<p>Steinberg sketches out a scene from what could be the near future.</p>
<p>A police officer on patrol has suddenly unholstered his gun. Via his augmented reality glasses with eye-tracking, colleagues at headquarters are instantly fed information about his stress level through pupil dilation information.</p>
<p>They can then advise the officer over the radio as well as activate body cameras and other tech that he might have neglected to turn on in his stressed state. What’s more, another officer on the scene can instantly scan through a variety of command-center video and data feeds through an AR headset, flipping through the options by simply looking at each one.</p>
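<p>In software terms, the scenario boils down to a baseline-and-threshold check on the headset’s pupil measurements. A hypothetical sketch (no real Motorola Solutions or Eyefluence API is implied, and a real system would also have to control for ambient light, which changes pupil size on its own):</p>
<pre><code class="language-python"># Sketch: flag elevated stress from pupil dilation relative to a rolling
# baseline, then notify headquarters. The 1.3x threshold and every API
# here are invented for illustration.
from collections import deque

class StressMonitor:
    def __init__(self, dispatch, window=600, ratio_threshold=1.3):
        self.baseline = deque(maxlen=window)  # recent pupil diameters (mm)
        self.dispatch = dispatch              # hypothetical uplink to HQ
        self.ratio_threshold = ratio_threshold

    def on_pupil_sample(self, diameter_mm):
        if len(self.baseline) == self.baseline.maxlen:
            avg = sum(self.baseline) / len(self.baseline)
            if diameter_mm / avg > self.ratio_threshold:
                self.dispatch.send_alert("elevated stress detected")
                self.dispatch.activate_body_camera()
        self.baseline.append(diameter_mm)
</code></pre>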
<p>“We would have to work with our (first responder) customers to train them how to use this sort of tech of course, but the potential is there,” says Steinberg. “But we’re not months away, we’re more than that.”</p>
<h2>Demo Shows Off Ease of Use</h2>
<p>An Eyefluence demo indicates that eye-tracking technology isn’t a half-baked dream.</p>
<p>Navigating between a dozen tiles inside a first-generation Oculus headset proves as easy as shifting your gaze between them. Making selections – the equivalent of clicking a mouse – is equally intuitive. At no time does the head need to move, and your hands remain at your sides.</p>
<p><iframe loading="lazy" src="https://www.youtube.com/embed/iQsY3uLvYQ4" width="720" height="384" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>After about 10 minutes in the demo, it feels antiquated to pop on a VR headset and grab a remote to click through choices selected with head movements.</p>
<p>Marggraff says Eyefluence’s technical challenges included making technology that could respond in low and bright light, accounting for different size pupils and ensuring that power consumption is minimal.</p>
<p>But, he adds, his team remains convinced of the inevitability of its product: “Just like when we started tapping and swiping on our phones, we’re going to eventually need a better interface for AR and VR.”</p>
<p>Written by: <a href="http://www.usatoday.com/staff/1005/marco-della-cava/" target="_blank" rel="noopener">Marco della Cava</a>, <a href="http://www.usatoday.com/story/tech/news/2016/05/02/new-mouse-vr-could-your-eyes/83716986/" target="_blank" rel="noopener">USA Today</a> (via <a href="http://ispr.info/2016/05/03/next-big-thing-for-virtual-reality-eye-tracking-lasers-in-your-eyes/" target="_blank" rel="noopener">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/">Next Big Thing for Virtual Reality: Lasers in Your Eyes</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">9341</post-id>	</item>
		<item>
		<title>The Future of Consumer Tech Is About Making You Forget It’s There</title>
		<link>https://www.situatedresearch.com/2015/03/the-future-of-consumer-tech-is-about-making-you-forget-its-there/</link>
					<comments>https://www.situatedresearch.com/2015/03/the-future-of-consumer-tech-is-about-making-you-forget-its-there/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Sat, 07 Mar 2015 20:46:04 +0000</pubDate>
				<category><![CDATA[HCI]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[Personalization]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=8836</guid>

					<description><![CDATA[<p>Microsoft, Samsung, GoPro, and others take their best guesses at the next five years of consumer electronics. When Apple introduced the iPad 2 in 2011, it laid out a noble goal for the future of technology. “Technology alone is not enough,” an Apple ad proclaimed. “Faster, thinner, lighter, those are all good things, but when&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2015/03/the-future-of-consumer-tech-is-about-making-you-forget-its-there/">The Future of Consumer Tech Is About Making You Forget It’s There</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><strong>Microsoft, Samsung, GoPro, and others take their best guesses at the next five years of consumer electronics.</strong></p>
<p>When Apple introduced the iPad 2 in 2011, it laid out a noble goal for the future of technology.</p>
<p>“Technology alone is not enough,” an <a href="https://www.youtube.com/watch?v=b2LLSrlKr3c" target="_blank">Apple ad proclaimed</a>. “Faster, thinner, lighter, those are all good things, but when technology gets out of the way, everything becomes more delightful, even magical. That’s when you leap forward.” <span id="more-8836"></span></p>
<p>With the iPad, the notion of technology getting out of the way meant designing a computer so easy to use that the apps took center stage. But the result was in some sense counterproductive; we’ve become so sucked into our phones and tablets that technology is actually getting in the way of the real world.</p>
<p>It’s not going to be like that forever. In talking to leaders from some of the most innovative companies in consumer electronics, it’s clear that the next five years will represent an attempt to bring us back to reality. This may seem paradoxical, but a proliferation of wearable devices, smart-home gizmos, smart cameras, and augmented-reality systems will exist largely to save us from our screens.</p>
<p><strong>Wearables return to the real world</strong></p>
<p>The cynical way to view wearable technology is as yet another intrusion—another set of screens to keep us separated from the physical world. But Yusuf Mehdi, Microsoft’s corporate vice president of devices and studios, doesn’t see it that way. He believes these devices, more than ever, will help technology fade into the background.</p>
<p>Mehdi gives a basic example that applies today: Instead of sounding an alarm, many fitness trackers can wake you with a gentle vibration to avoid disturbing your spouse. It’s a seemingly minor feature, but one that takes the focus off the device itself and onto the people around you. “That’s an interesting thing where people are taking a personal device and saying, ‘Well, the win is for my spouse, not for me,&#8217;” Mehdi says.</p>
<p>Moving forward, Mehdi sees devices like Microsoft’s recently announced HoloLens as a way to stay present in the physical world without completely shutting out technology. The still-experimental headset works by projecting 3-D images into a head-mounted visor so they appear to be part of your natural surroundings.</p>
<p>Imagine a scenario in which two coworkers collaborate on a 3-D model projected into their headsets, or someone walking down the street who can see information about surrounding shops and restaurants. Mehdi points out that the original codename for HoloLens was “analog,” for the way it blends with the physical world.</p>
<figure id="attachment_8838" aria-describedby="caption-attachment-8838" style="width: 640px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="wp-image-8838 size-full" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-hololens.png?resize=640%2C360&#038;ssl=1" alt="Microsoft's HoloLens augmented-reality visor" width="640" height="360" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-hololens.png?w=640&amp;ssl=1 640w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-hololens.png?resize=300%2C169&amp;ssl=1 300w" sizes="auto, (max-width: 640px) 100vw, 640px" /><figcaption id="caption-attachment-8838" class="wp-caption-text">Microsoft&#8217;s HoloLens augmented-reality visor</figcaption></figure>
<p>“If you have a device that you’re wearing, and information is being overlaid on top of that, now you’re back in the real world, and you’re interacting, and you’re not missing what’s going on around you, because your head’s there,” Mehdi says.</p>
<p>On a more practical level, these “mixed reality” devices—as Mehdi calls them—will pave the way for more natural input methods like gesture control and eye tracking, which never quite made sense on tablets and laptops. “A lot of things become more human, and the technology kind of goes back out of the way, and we think that’s a big opportunity,” Mehdi says.</p>
<p><strong>The Disappearing Smart Home</strong></p>
<p>Wearable tech will also play a starring role in smart homes—at least if we expect them to offer the kind of breezy convenience that tech companies have been promising.</p>
<figure id="attachment_8839" aria-describedby="caption-attachment-8839" style="width: 260px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="wp-image-8839 size-full" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-dennismiloseski.jpg?resize=260%2C260&#038;ssl=1" alt="3042948-inline-dennismiloseski" width="260" height="260" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-dennismiloseski.jpg?w=260&amp;ssl=1 260w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-dennismiloseski.jpg?resize=150%2C150&amp;ssl=1 150w" sizes="auto, (max-width: 260px) 100vw, 260px" /><figcaption id="caption-attachment-8839" class="wp-caption-text">Samsung&#8217;s Dennis Miloseski</figcaption></figure>
<p>Dennis Miloseski, Samsung’s U.S. head of design, describes the dream scenario: You pull into your garage and your wearable connects to your Wi-Fi network, which in turn triggers your hallway lights and queues up some music on the living room stereo. “I like to call it the automatic future,” he says.</p>
<p>But he also notes how easily things can go wrong. Maybe your spouse is sleeping on the couch and doesn’t want the lights to come on. That’s why it’ll be so important to have intelligence that figures out what you want, along with some sort of way to confirm your intentions on a wearable device.</p>
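<p>Expressed as code, the “automatic future” is a trigger-condition-action rule with a confirmation hook on the wearable. A vendor-neutral sketch; every device and function name below is invented rather than taken from SmartThings or any real platform:</p>
<pre><code class="language-python"># Sketch: an arrival-triggered automation that checks context and asks the
# wearable for a one-tap confirmation before acting, so it can't wake a
# spouse asleep on the couch. All device names here are hypothetical.
def on_wifi_join(device_id, home):
    if device_id != home.owner_wearable_id:
        return
    # Context checks can veto the automation outright...
    if home.living_room.occupied and home.in_quiet_hours():
        return
    # ...otherwise confirm intent on the watch before doing anything.
    if home.wearable.confirm("Run the 'arriving home' scene?", timeout_s=10):
        home.hallway_lights.turn_on()
        home.stereo.play_queued_music()
</code></pre>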
<p>“We’re sort of in this archaic age right now, where we’re in this raw form of data readout, meaning, ‘this is how many steps you’ve taken,’ or ‘this is your heart rate,’ or the light is on or off,&#8217;” Miloseski says. “I think the next magical innovation is how do we take that data and actually create a form of valuable experience of that data.”</p>
<figure id="attachment_8840" aria-describedby="caption-attachment-8840" style="width: 640px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8840" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-smartthings.jpg?resize=640%2C360&#038;ssl=1" alt="Gadgets from SmartThings, now part of Samsung" width="640" height="360" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-smartthings.jpg?w=640&amp;ssl=1 640w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-smartthings.jpg?resize=300%2C169&amp;ssl=1 300w" sizes="auto, (max-width: 640px) 100vw, 640px" /><figcaption id="caption-attachment-8840" class="wp-caption-text">Gadgets from SmartThings, now part of Samsung</figcaption></figure>
<p>Again, it all comes back to getting the technology out of the way so the user doesn’t have to think about logistics or shuffle through a bunch of apps just to have a fully functional smart home. Miloseski likens it to starting a car or turning on a light switch, in that the complexity is completely hidden from the user.</p>
<p>“I think that we will hit a point in time where, when we think of technology and devices and gadgets and all these things, when they actually impact the social fabric and they become an essential part of how we live our lives, they will become invisible,” Miloseski says.</p>
<p><strong>New Smarts for Dumb Cameras</strong></p>
<p>Photography might be the one area where Apple’s vision of getting technology out of the way seems fully realized. Smartphone cameras are no longer just a quick and dirty image capture tool; they’re the best way to take photos that you can immediately touch up and share with the world.</p>
<p>But as wearable cameras like the GoPro and drone-based ones like DJI’s Phantom enable new kinds of photography, they’ve yet to receive phone-like smarts. Expect that to change in the coming years as capturing and sharing footage from these devices starts to feel as effortless as using the camera in your pocket.</p>
<figure id="attachment_8841" aria-describedby="caption-attachment-8841" style="width: 260px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8841" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-cjprobergopro.jpg?resize=260%2C260&#038;ssl=1" alt="GoPro's CJ Prober" width="260" height="260" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-cjprobergopro.jpg?w=260&amp;ssl=1 260w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-cjprobergopro.jpg?resize=150%2C150&amp;ssl=1 150w" sizes="auto, (max-width: 260px) 100vw, 260px" /><figcaption id="caption-attachment-8841" class="wp-caption-text">GoPro&#8217;s CJ Prober</figcaption></figure>
<p>“If I think specifically about us, and the things that we get super-jazzed about, that’s a big piece of it, it’s the whole solving of pain points from when you first capture content to seamlessly sharing it,” says CJ Prober, GoPro’s senior vice president of software and services.</p>
<p>Today, when you capture footage on a GoPro, you’ve got to load it into your computer—itself a time-consuming process—and spend hours looking for highlights and turning them into a YouTube-worthy video. But in the future, a wearable camera might tap into gyroscopes and accelerometers to flag exciting moments, or use machine learning algorithms to sniff out quality footage. It could even tie into other wearable sensors to measure things like jump height or speed, and bring those details straight into the video.</p>
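<p>The sensor-flagging idea is simple in principle: look for windows where motion spikes well above baseline and mark them as candidate highlights. A toy Python version over accelerometer magnitudes, with invented thresholds:</p>
<pre><code class="language-python"># Sketch: flag candidate highlight moments in action-cam footage wherever
# accelerometer magnitude spikes. The 2.5 g threshold and 2 s merge window
# are made-up values for illustration.
def find_highlights(samples, rate_hz=200, threshold_g=2.5, merge_s=2.0):
    """samples: acceleration magnitudes in g, one per sensor tick."""
    highlights = []  # list of [start_s, end_s] windows
    for i, accel in enumerate(samples):
        if accel > threshold_g:
            t = i / rate_hz
            if not highlights or t - highlights[-1][1] > merge_s:
                highlights.append([t, t])   # start a new highlight window
            else:
                highlights[-1][1] = t       # extend the current window
    return highlights
</code></pre>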
<p>“It’s really important to not think of video and photo capture as an independent thing to do on the device,” Prober says. “It’s really, ‘What do you do with the content when it’s captured?&#8217;”</p>
<figure id="attachment_8842" aria-describedby="caption-attachment-8842" style="width: 260px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8842" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-goprohero4.jpg?resize=260%2C146&#038;ssl=1" alt="GoPro's Hero 4 camera" width="260" height="146" /><figcaption id="caption-attachment-8842" class="wp-caption-text">GoPro&#8217;s Hero 4 camera</figcaption></figure>
<p>That question will become even more important as new tools like 360-degree cameras become available. Suddenly, you have a lot more footage to work with, which means cameras will need to get smarter at helping you tell the best story.</p>
<p>Drone camera makers like DJI face a slightly different challenge, but with similar overall goals. In the near term, they’ll need to make the actual flight mechanisms smarter so that drones can safely navigate on their own. But once that happens, and the drones themselves become cheaper and more commoditized, it’ll open up all kinds of new smart applications.</p>
<figure id="attachment_8843" aria-describedby="caption-attachment-8843" style="width: 640px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8843" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-ericcheng.jpg?resize=640%2C480&#038;ssl=1" alt="GoPro's Hero 4 camera" width="640" height="480" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-ericcheng.jpg?w=640&amp;ssl=1 640w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-ericcheng.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 640px) 100vw, 640px" /><figcaption id="caption-attachment-8843" class="wp-caption-text">GoPro&#8217;s Hero 4 camera</figcaption></figure>
<p>“There could be a really rich app economy that’s task-driven instead of product-driven,” says Eric Cheng, DJI’s general manager in San Francisco.</p>
<p>A basic example, he says, would be some kind of live-blogging application that steers a drone as it follows you down the street. Or maybe you’d have an application that can automatically capture and reconstruct a scene in 3-D using cameras. “You can imagine a whole lot of functionality moving into the domain-specific and being a lot smarter,” Cheng says.</p>
<p><strong>Room For The Familiar</strong></p>
<p>None of this is to suggest that the tools we use today are going to vanish, or that you’ll never have occasion to get sucked into your phone, tablet, or computer for a while.</p>
<figure id="attachment_8844" aria-describedby="caption-attachment-8844" style="width: 260px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8844" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-rickosterloh.jpg?resize=260%2C239&#038;ssl=1" alt="Motorola's Rick Osterloh" width="260" height="239" /><figcaption id="caption-attachment-8844" class="wp-caption-text">Motorola&#8217;s Rick Osterloh</figcaption></figure>
<p>Rick Osterloh, president of Motorola Mobility, now part of China’s Lenovo, says that if anything, the smartphone will remain at the center of all these new smart devices. “It’s resonated so well because it’s actually well-designed, for both utility and a critical feature, which is carryability and pocketability,” he says.</p>
<p>While Osterloh imagines we will see some new technological twists for the smartphone in the form of folding screens and superfast charging, the biggest advances will come from all the different types of data a phone can gather and interpret. Think of it kind of like the <a href="https://play.google.com/store/apps/details?id=com.motorola.contextual.smartrules2&amp;hl=en" target="_blank">Assist</a> feature in Motorola’s current phones, but with more automation and intelligence.</p>
<p>“That is a pretty interesting area writ large, we believe, for the future, where the combination of context and probably sensors will give you a user experience that just helps your phone adapt to what you want,” Osterloh says, “like the magical ‘do what I want’ machine that people in computer science have been trying to develop for decades.”</p>
<figure id="attachment_8845" aria-describedby="caption-attachment-8845" style="width: 260px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8845" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-motorolax.jpg?resize=260%2C253&#038;ssl=1" alt="Motorola's Moto X smartphone" width="260" height="253" /><figcaption id="caption-attachment-8845" class="wp-caption-text">Motorola&#8217;s Moto X smartphone</figcaption></figure>
<p>Likewise, Microsoft’s Mehdi doesn’t see mouse-and-keyboard devices going away, since there’s nothing better for tasks like writing or data entry. “I don’t think this is like tapes and CDs that go away,” he says. “I think it’s more like TV and radio, that don’t actually go away. They just become another part of the media that you consume, and over time they kind of get tuned for the use case.”</p>
<p>The question, then, is how we’re actually going to make room for this expanding roster of wearables, drones, headsets, and smart-home devices. At some point, it might be too much to wrangle, but as Mehdi points out, it wasn’t long ago that owning just a cell phone and a computer was hard to fathom. People make room for more devices when there’s sufficient value.</p>
<p>In other words, making all that technology disappear may only work if we own a whole lot more of it.</p>
<p>Written by: <a href="http://www.fastcompany.com/user/jared-newman" target="_blank">Jared Newman</a>, <a href="http://www.fastcompany.com/3042948/sector-forecasting/the-future-of-consumer-tech-is-about-making-you-forget-its-there">Fast Company</a> (via <a href="http://ispr.info/2015/03/06/the-future-of-consumer-tech-is-about-making-you-forget-its-there/">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2015/03/the-future-of-consumer-tech-is-about-making-you-forget-its-there/">The Future of Consumer Tech Is About Making You Forget It’s There</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2015/03/the-future-of-consumer-tech-is-about-making-you-forget-its-there/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">8836</post-id>	</item>
		<item>
		<title>Hands-on with Mattel’s new AR, VR View-Master</title>
		<link>https://www.situatedresearch.com/2015/02/hands-on-with-mattels-new-ar-vr-view-master/</link>
					<comments>https://www.situatedresearch.com/2015/02/hands-on-with-mattels-new-ar-vr-view-master/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Fri, 20 Feb 2015 15:54:37 +0000</pubDate>
				<category><![CDATA[Education]]></category>
		<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Game Development]]></category>
		<category><![CDATA[Games for Learning]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=8819</guid>

					<description><![CDATA[<p>A View-Master for virtual reality: Hands-on with Mattel&#8217;s new AR, VR phone toy Mattel is relaunching View-Master, but as a virtual reality and augmented-reality phone toy. And I got to play around with it for a bit…or at least, some of the tech behind it.  Announced at an event in New York City, the new&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2015/02/hands-on-with-mattels-new-ar-vr-view-master/">Hands-on with Mattel’s new AR, VR View-Master</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><strong>A View-Master for virtual reality: Hands-on with Mattel&#8217;s new AR, VR phone toy</strong></p>
<p><span style="line-height: 1.5;">Mattel is relaunching View-Master, but as a virtual reality and augmented-reality phone toy. And I got to play around with it for a bit…or at least, some of the tech behind it. </span><span id="more-8819"></span></p>
<p>Announced at an event in New York City, <a href="http://www.cnet.com/news/google-mattel-announce-a-virtual-reality-view-master/" target="_blank">the new View-Master</a> is a collaboration between Mattel and Google, whose virtual reality Cardboard app has enabled cheap do-it-yourself accessories that turn any Android phone into a mini-VR viewer. Mattel’s toy, which will debut in October, is like a more durable, plastic version of <a href="http://www.cnet.com/news/googles-cardboard-vr-headset-is-no-joke-its-great-for-the-oculus-rift/" target="_blank">Google Cardboard</a>, designed entirely for kids…or, maybe, also for grown-up kids like me. And the most brilliant part is it’ll only cost $30.</p>
<p><iframe loading="lazy" src="http://www.cnet.com/videos/share/id/tUlXVC5TlPLbcmd7Lo7cfkU6k0P1Edow/" width="960" height="540" frameborder="0" seamless="seamless" scrolling="no" allowfullscreen="allowfullscreen"></iframe></p>
<p>I used a View-Master back when I was little — who didn’t? It’s a classic 3D stereoscopic picture viewer. Many people had even said Google Cardboard looked a bit like a View-Master. So it isn’t a huge surprise that Mattel has suddenly announced a new View-Master with Google Cardboard VR capabilities added. I’ve always felt that virtual reality reminded me of early stereoscopic toys. And Mattel has keyed onto the same idea.</p>
<figure id="attachment_8821" aria-describedby="caption-attachment-8821" style="width: 770px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8821" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster1.jpg?resize=770%2C577&#038;ssl=1" alt="The View-Master will fit most phones, according to Mattel: iPhone and Android alike." width="770" height="577" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster1.jpg?w=770&amp;ssl=1 770w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster1.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 770px) 100vw, 770px" /><figcaption id="caption-attachment-8821" class="wp-caption-text">The View-Master will fit most phones, according to Mattel: iPhone and Android alike.</figcaption></figure>
<p>The toy was only viewable in a mock-up prototype form at Mattel’s event, but the design’s pretty cool: it looks half old-school View-Master, half Oculus Rift. The inner plastic housing extends to hold many types of phones: Mattel says it’s designed to fit the largest existing phones, and will even work with the <a href="http://www.cnet.com/products/apple-iphone-6-plus/" target="_blank">iPhone 6 Plus</a> and <a href="http://www.cnet.com/products/google-nexus-6/" target="_blank">Nexus 6</a>. A capacitive-touch side lever is used to “click” through scenes or into virtual environments, like the magnetized side switch on Google’s Cardboard viewers.</p>
<p>Mattel’s headset is designed with Google and Android in mind, but at launch is intended to work on “nearly all platforms,” which includes iOS. That would mean a dedicated Mattel app which interfaces with the View-Master, but Google’s Cardboard and Cardboard-ready apps — many of which already exist on iOS, like VRSE — will work too.</p>
<p>Mattel is planning to use View-Master not just for VR, but also for AR. The little plastic reels that look like the old cardboard ones are really just flat coasters this time around, with images on top that the View-Master reads and turns into pop-up augmented-reality models on your table, desk, or wherever else you place them. And unlike the old pop-in reels, a single reel placed on a table could serve multiple View-Masters at once. This type of augmented-reality tech has existed for years in many apps and on some children’s toys, like the Nintendo 3DS (with its AR cards) and PlayStation Vita, but mixing it into a VR headset is a novel idea.</p>
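<p>Under the hood this is classic marker-based AR: recognize a known reel image in the camera frame, estimate its pose, and anchor the matching 3D model to it. A schematic Python loop, with the detector and renderer as hypothetical stand-ins rather than Mattel’s or Google’s code:</p>
<pre><code class="language-python"># Sketch of the marker-based AR loop the View-Master reels rely on:
# recognize a known reel image in the camera frame, then anchor the
# matching 3D model to its estimated pose. detect_marker() and the
# renderer are hypothetical stand-ins, not Mattel's or Google's code.
REEL_MODELS = {
    "dinosaur_reel": "models/t_rex.obj",
    "space_reel": "models/moon_earth.obj",
    "san_francisco_reel": "models/golden_gate.obj",
}

def ar_frame_loop(camera, detect_marker, renderer):
    while True:
        frame = camera.read()
        marker = detect_marker(frame)       # returns (id, pose) or None
        if marker is not None:
            marker_id, pose = marker
            model = REEL_MODELS.get(marker_id)
            if model:
                renderer.draw(model, pose)  # pop-up model sits on the reel
        renderer.present(frame)
</code></pre>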
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-8822" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster3.jpg?resize=770%2C577&#038;ssl=1" alt="viewmaster3" width="770" height="577" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster3.jpg?w=770&amp;ssl=1 770w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster3.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 770px) 100vw, 770px" /></p>
<p>I didn’t get to use the actual Mattel prototype, but we tried View-Master’s augmented-reality tech on phones and Google Cardboard viewers. There were three reels to try: a dinosaur one made a little dinosaur pop up on the disc on the table in front of me. When I aimed a dot and clicked on it, I was suddenly surrounded by a prehistoric 360-degree panorama with 3D dinosaurs. Clicking on them brought up facts, too.</p>
<p>Looking at the space disc with Cardboard on brought up a pop-up moon and Earth; clicking on it took me to a panorama of the moon, with pop-up clickable photos of NASA missions. A third reel, San Francisco-themed, had little mini-models of Alcatraz and the Golden Gate Bridge that turned into VR photo panoramas. To exit any of the virtual panoramas, you look down and click on the side…or remove the View-Master from your face. The View-Master comes with one reel in its $30 package, and extra reels will cost around $15 each. No, older View-Master reels don’t work in here, but it sounds like Mattel is exploring re-releasing content from some of the back catalog of 10,000 older View-Master reels.</p>
<figure id="attachment_8823" aria-describedby="caption-attachment-8823" style="width: 770px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8823" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster4.jpg?resize=770%2C577&#038;ssl=1" alt="The &quot;reels&quot; don't actually go in the View-Master, they simply sit on your table." width="770" height="577" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster4.jpg?w=770&amp;ssl=1 770w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster4.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 770px) 100vw, 770px" /><figcaption id="caption-attachment-8823" class="wp-caption-text">The &#8220;reels&#8221; don&#8217;t actually go in the View-Master, they simply sit on your table.</figcaption></figure>
<p>There’s no strap to keep the View-Master on: this is a hold-to-your-face toy, much like older View-Masters and Google Cardboard. Mattel has promised that the tech has already been vetted by pediatric ophthalmologists, and it’s meant to be used by children ages 7 and up in short, bite-sized sessions.</p>
<p>The View-Master may work with other toys, too, like other app-ified toys in the past, but for now it’s really a fancier plastic Google Cardboard viewer, with additional Mattel support. That’s not a bad thing at all: at $30, this is a pretty awesome little stocking-stuffer idea, and a fun phone toy. Just keep in mind that if you give this to your kid, it won’t work without a phone popped into it.</p>
<p>By the time fall rolls around, Mattel may have other toys ready to work with it. Or, there might be many other companies ready to make cheap phone-enabled VR headsets, too.</p>
<p>Written by: <a href="http://www.cnet.com/profiles/scottstein8/" target="_blank">Scott Stein</a>, <a href="http://www.cnet.com/products/new-view-master/" target="_blank">CNET</a> (via <a href="http://ispr.info/2015/02/20/hands-on-with-mattels-new-ar-vr-view-master/" target="_blank">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2015/02/hands-on-with-mattels-new-ar-vr-view-master/">Hands-on with Mattel’s new AR, VR View-Master</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2015/02/hands-on-with-mattels-new-ar-vr-view-master/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">8819</post-id>	</item>
		<item>
		<title>Welcome to the Age of Holographs</title>
		<link>https://www.situatedresearch.com/2015/01/welcome-age-holographs/</link>
					<comments>https://www.situatedresearch.com/2015/01/welcome-age-holographs/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Thu, 22 Jan 2015 22:18:54 +0000</pubDate>
				<category><![CDATA[Education]]></category>
		<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Serious Games]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Game Development]]></category>
		<category><![CDATA[Games for Learning]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mental Models]]></category>
		<category><![CDATA[Personalization]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=8792</guid>

					<description><![CDATA[<p>Up close with the HoloLens, Microsoft’s most intriguing product in years We just finished a heavily scripted, carefully managed, and completely amazing demonstration of Microsoft’s HoloLens technology. Four demos, actually, each designed to show off a different use case for a headset that projects holograms into real space. We played Minecraft on a coffee table.&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2015/01/welcome-age-holographs/">Welcome to the Age of Holographs</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><strong>Up close with the HoloLens, Microsoft’s most intriguing product in years</strong></p>
<p>We just finished a heavily scripted, carefully managed, and completely amazing demonstration of Microsoft’s HoloLens technology. Four demos, actually, each designed to show off a different use case for a headset that projects holograms into real space. We played <em>Minecraft</em> on a coffee table. We had somebody chart out how to fix a light switch right on top of the very thing we were fixing. <span id="more-8792"></span></p>
<p>We walked on Mars.</p>
<p>You’ll notice there aren’t photos here, and that’s because before we were even allowed into the labs where the HoloLens team tests out its user experiences, we had to deposit our cameras and phones into a locker. No recording equipment of any kind was allowed, not even audio. We entered the basement below Microsoft’s visitor center laughing at the absurdity of it all — many reporters needed to get notepads from the company and weren’t carrying pens, either.</p>
<p>But it was all worth it, because HoloLens is probably the most intriguing (and, in many ways, most infuriating) technology we’ve experienced since the Oculus Rift. And there are many parallels with the Rift to be had: both are immersive, but in different ways; both require you to strap a weird thing on your head; both leave you grinning like an absolute idiot at a scene only you can see. And, crucially, both need more work when it comes to thinking through exactly how to control and interact with virtual things.</p>
<p><script height="575px" width="1023px" src="https://player.ooyala.com/iframe.js#ec=lsOGp3cjqUFwNW0FqImWpiKsqIdSTEX-&#038;pbid=dcc84e41db014454b08662a766057e2b"></script></p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-8793" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/01/d76d1d6c-a4e6-41d2-bc43-c9b49041a219.0.png?resize=864%2C392&#038;ssl=1" alt="d76d1d6c-a4e6-41d2-bc43-c9b49041a219.0" width="864" height="392" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/01/d76d1d6c-a4e6-41d2-bc43-c9b49041a219.0.png?w=864&amp;ssl=1 864w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/01/d76d1d6c-a4e6-41d2-bc43-c9b49041a219.0.png?resize=300%2C136&amp;ssl=1 300w" sizes="auto, (max-width: 864px) 100vw, 864px" /></p>
<p><strong><em>Minecraft</em> IRL<br />
</strong>by Dieter Bohn</p>
<p>By far, <a href="https://www.theverge.com/2015/1/21/7868363/minecraft-hololens-microsoft-freecell" target="_blank" rel="noopener">the most impressive demo for my money was the <em>Minecraft</em> demo</a> — though Microsoft called it something like “Building Blocks” or some such, presumably so as not to fully commit to releasing a full holograph version of <em>Minecraft</em>. But before we could enter this virtual world — actually, the virtual entered <em>our</em> world — we had to strap on the development unit for the HoloLens.</p>
<p>It’s a contraption, to be sure. There’s a small, heavy block you hang around your neck which contains all the computing power. The headset itself is composed of lenses and tiny projectors and motion sensors and speakers (or <em>something</em> that makes sound, anyway), and god knows what else. And then there’s a screen right there in your field of view.</p>
<p>A “screen in your field of view” is the right way to think about HoloLens, too. It’s immersive, but not nearly as immersive as proper virtual reality is. You still see the real world in between the virtual objects; you can see where the magic holograph world ends and your peripheral vision begins.</p>
<p>But before you can apply your jaded “I’ve done VR before” attitude to this situation, you look down at the coffee table and there’s a <strong>castle sitting right on the damn thing.</strong> It’s not shimmery, but it’s not quite real, either. It’s just sitting there, perfectly flat on the table, reacting in space to your head movements. It’s nearly as lifelike as the actual table, and there’s no lag at all. The castle is there. It’s simply magic.</p>
<p>You definitely have a big stupid grin on your face even though the contraption that’s strapped to it is pressing your eyeglasses into the bridge of your nose in a painful way.</p>
<p>Then it’s demo time. You can’t touch anything, but you can point a little circular reticle at objects on the table by moving your head around. You learn how a “glance” is just you looking at things and pointing your reticle at them, and an “AirTap” is the equivalent of clicking your mouse. The demo involves digging <em>Minecraft</em> holes and blowing up <em>Minecraft</em> zombies with <em>Minecraft</em> TNT. It’s basically incredible to see these digital things in real space.</p>
<p>You blow up a hole in the table and then you look <em>through</em> it to more digital objects on the floor. You blow up a hole in the wall and tiny bats fly out and you see that behind your very normal wall is a virtual hellscape of lava and rock. You peer into the hole, around the corner, and see that dark realm extend far into space.</p>
<p>And then the demo’s over.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-large wp-image-8794" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/01/a67d3d33-e1e5-4cf7-bf3d-dbe1befc8d8c.0.jpg?resize=980%2C655&#038;ssl=1" alt="a67d3d33-e1e5-4cf7-bf3d-dbe1befc8d8c.0" width="980" height="655" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/01/a67d3d33-e1e5-4cf7-bf3d-dbe1befc8d8c.0.jpg?resize=1024%2C684&amp;ssl=1 1024w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/01/a67d3d33-e1e5-4cf7-bf3d-dbe1befc8d8c.0.jpg?resize=300%2C200&amp;ssl=1 300w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/01/a67d3d33-e1e5-4cf7-bf3d-dbe1befc8d8c.0.jpg?w=1200&amp;ssl=1 1200w" sizes="auto, (max-width: 980px) 100vw, 980px" /></p>
<p><strong>Skype<br />
</strong>by Tom Warren</p>
<p>Microsoft’s Skype demo was just as impressive to me as playing around with <em>Minecraft</em> blocks in a living room. After a two-hour keynote, Microsoft wanted me to fix a light switch. It all started by sitting down and facing some tools and a socket with exposed wiring. A little dazed and confused, I looked up and scanned across the Skype interface that suddenly appeared in front of me, and picked a face to call. The video call popped into a little window, and my journey to fix a light switch began.</p>
<p>On the other end of the call was a Microsoft engineer. I could see and hear her, but she could only hear me and see exactly what I was seeing in front of me. My eyes, or rather the headset I was wearing, relayed everything over Skype. It was a support call of sorts — here she was to help me fix a light switch. We started by pinning her little window on top of a lamp. I could then look around the room and return to the lamp to see her face. She guided me where to go. It felt strangely natural, and I didn’t need to configure anything or learn gestures other than the same “AirTap” you use to simulate a mouse click.</p>
<p>While I was being talked through which real-world tools we needed for the job, the Microsoft engineer called my attention to the wall with wiring and then started drawing where to position the light switch right on the wall. Thinking about it now, it sounds totally surreal, but during the demo I didn’t even think about it — it just felt like I was being guided around by annotations and a helpful friend. We connected the wiring, tested it for an electrical current, and then turned the power back on and switched the light on. It was all fixed, and all by using a crazy combination of a headset, augmented reality, and Skype. It might sound gimmicky, but the applications here are truly impressive. I use YouTube guides to figure out home improvements or to service my car, but this is on another level. Imagine a surgeon performing complex surgery and writing notes in real time and guiding a colleague through it all. Imagine support calls to resolve a problem with your PC. If this works as well as Microsoft’s controlled demo, then this really has the ability to change how we communicate and learn.</p>
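<p>Mechanically, the clever part of this demo is that the remote caller draws in 2D over the wearer’s video feed, while the headset re-projects those strokes onto the room’s scanned geometry so they stay pinned to the wall. A minimal sketch of that re-projection, assuming hypothetical <code>camera.unproject()</code> and <code>room_mesh.intersect()</code> helpers (nothing here is Microsoft’s actual API):</p>
<pre><code>from dataclasses import dataclass, field

@dataclass
class Stroke:
    # World-space points on real geometry; they stay put as the head moves.
    points: list = field(default_factory=list)

def place_remote_stroke(screen_points, camera, room_mesh):
    """Turn the expert's 2D drawing into a wall-anchored stroke.

    camera.unproject(pt)     -- ray from the headset through the 2D point
    room_mesh.intersect(ray) -- where that ray hits the scanned room
    (both helpers are assumptions made for this sketch)
    """
    stroke = Stroke()
    for pt in screen_points:
        hit = room_mesh.intersect(camera.unproject(pt))
        if hit is not None:
            stroke.points.append(hit)  # a 3D point fixed to the wall
    return stroke
</code></pre>
<p>Once the stroke lives in world coordinates rather than screen coordinates, the wearer can walk around and the annotation behaves like paint on the wall, which is presumably why the guidance felt so natural.</p>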
<p>Microsoft’s next demo didn’t have us using the HoloLens prototypes directly. Instead, we watched as “Alex” (nobody in Microsoft’s blue-tinted demonstration basement has last names. I asked.) manipulated objects in digital space so he could build a koala bear or a pickup truck. It was actually quite impressive, as cameras filmed him and screens showed both Alex and the virtual objects he was manipulating in the same space in real time.</p>
<p>The idea was to convince us that HoloLens would unleash a wave of creators who would be able to dream up 3D objects with little to no training. It’s much easier to understand what a thing is in your living room than it is in AutoCAD.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-8795" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/01/hololens.0.gif?resize=663%2C373&#038;ssl=1" alt="hololens.0" width="663" height="373" /></p>
<p>But sitting there after our whirlwind of actually <em>experiencing</em> HoloLens, my mind was elsewhere. For example, there are only a few ways to interact with this system so far (reduced to a rough code sketch after the list):</p>
<ul>
<li>Glance: you point your head at something.</li>
<li>AirTap: you make a “Number 1” sign with your hand, then move your finger down like you’re depressing a lever.</li>
<li>Voice: you can issue commands, usually to switch what “tool” you’re using.</li>
<li>Mouse: neatest of all, the objects you already use to interact with computers can also be used to interact with holograms.</li>
</ul>
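<p>Reduced to code, that input model is strikingly close to an ordinary pointer-and-click event loop: the head ray is the cursor, the AirTap is the button. A minimal sketch under that reading (the names are ours, not Microsoft’s):</p>
<pre><code>from dataclasses import dataclass

@dataclass
class Ray:
    origin: tuple     # head position in world space
    direction: tuple  # unit vector the head is pointing along

class GazeAndCommit:
    """Hypothetical input layer mapping HoloLens-style gestures to pointer events."""

    def __init__(self, scene):
        self.scene = scene   # assumed: scene.raycast(ray) returns a hologram or None
        self.target = None   # hologram currently under the reticle
        self.tool = None     # active "tool", switched by voice

    def on_head_pose(self, ray):
        # "Glance": whatever the head ray hits becomes the hover target.
        self.target = self.scene.raycast(ray)

    def on_air_tap(self):
        # "AirTap": the equivalent of clicking the mouse on the hover target.
        if self.target is not None:
            self.target.on_click(self.tool)

    def on_voice(self, phrase):
        # Voice mostly switches which tool the next tap applies.
        self.tool = phrase
</code></pre>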
<p>That seems like enough, but it’s not nearly enough. It’s wildly impressive that these objects really do feel like they’re out there in your living room, but it’s equally depressing to know that you can’t treat them like real objects.</p>
<p>At one point in the demo, Alex needed to put a tire on his pickup. He had to twist his body and head around to get his pointer in just the right spot and get the tire arranged just right to fit on the axle. Then, AirTap! The tire is connected. But how much easier would it be if you could grab the tire in your actual hands?</p>
<p>Our hands are simply more dexterous than our necks. You have finer control over small motions, and you can move your hands along so many different vectors, with pressure and nuance and delicacy. Your neck and head, well, not so much.</p>
<p>But then Microsoft gave us 3D-printed koalas with a USB drive inside them, which was nice. And if this HoloLens thing takes off, you will be able to design your own, and it will be way easier than learning current 3D design software. But not as easy as you’d imagine building with holograms would be.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-8796" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/01/microsoft-windows-10-live-verge-_1662.0.jpg?resize=980%2C654&#038;ssl=1" alt="microsoft-windows-10-live-verge-_1662.0" width="980" height="654" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/01/microsoft-windows-10-live-verge-_1662.0.jpg?w=1000&amp;ssl=1 1000w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/01/microsoft-windows-10-live-verge-_1662.0.jpg?resize=300%2C200&amp;ssl=1 300w" sizes="auto, (max-width: 980px) 100vw, 980px" /></p>
<p><strong>Walking on Mars<br />
</strong>by Tom Warren</p>
<p>Microsoft has teamed up with NASA to let scientists explore what Curiosity sees on Mars. Instead of panoramic imagery on a computer screen, Microsoft’s demo lit up a room and turned it into Mars. I walked around the rocky terrain, bumped into the Curiosity rover, and generally just checked out a planet I will never visit in my lifetime. It’s a totally new perspective: I felt immersed in a tour of Mars, but not necessarily present on it. The field of view felt a little too limited to truly immerse myself and trick my brain into thinking I was really on another planet, but what impressed me most is what Microsoft has built into this experience.</p>
<p>I held a call with a NASA engineer and he talked me through the terrain. I squatted to look more closely at rocks, took snapshots of various rock formations, and even planted flags for points of interest. My jaw dropped when I ventured over to a PC in the room and started to experiment with the mouse. I pulled the mouse pointer off the screen and suddenly it was on the floor next to me, allowing me to set markers in the virtual environment. It’s everything I’ve seen in demonstrations from Microsoft Research before, but here it was on my head and working.</p>
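<p>That mouse moment is worth dwelling on: the cursor appears to slide off the monitor and continue into the room. One plausible way to implement the hand-off, sketched with invented helpers (<code>screen.contains()</code>, <code>screen.edge_ray()</code> and <code>terrain.raycast()</code> are our assumptions, not anything Microsoft or NASA has published):</p>
<pre><code>def update_pointer(mouse_xy, clicked, screen, terrain, markers):
    if screen.contains(*mouse_xy):
        screen.draw_cursor(*mouse_xy)  # ordinary 2D cursor on the PC
        return
    # Off-screen: continue the cursor along a ray leaving the display plane.
    ray = screen.edge_ray(*mouse_xy)
    hit = terrain.raycast(ray)         # where the ray meets the Mars mesh
    if hit is not None:
        terrain.draw_cursor(hit)       # the cursor now lives on the floor
        if clicked:
            markers.append(hit)        # drop a flag / point of interest
</code></pre>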
<p>The collaboration part was the key here, allowing me to interact with this data in a unique way, but also alongside the NASA engineer who could drop flags on the Mars terrain and guide me to look at certain sections. While this isn’t traditional productivity with a mouse and keyboard, it’s certainly something new and intriguing. I could see this type of scenario working for big teams that need to communicate across time zones and on big sets of complex data.</p>
<p>Overall, HoloLens is Microsoft at its most ambitious. It’s a big bet on the future of computing, the future of Windows, and ultimately the future of Microsoft itself. While the company is struggling in mobile, it wants to catch the next wave of computing and lead. Is HoloLens the next wave? Developers and consumers will be the ultimate judges of that, but if anything, HoloLens is an incredibly brave and impressive project from Microsoft. It’s true innovation, which is something Microsoft has lacked during its obsession with protecting Windows. It’s also another example of <a href="https://www.theverge.com/2014/11/6/7164623/microsoft-3d-sound-headset-guide-dogs" target="_blank" rel="noopener">an experience that takes the complex technology out of the way</a>, leaving you to experience what really matters.</p>
<p>Written by: <a href="https://www.theverge.com/users/Dieter%20Bohn" target="_blank" rel="noopener">Dieter Bohn</a> and <a href="https://www.theverge.com/users/tomwarren" target="_blank" rel="noopener">Tom Warren</a>, <a href="https://www.theverge.com/2015/1/21/7868251/microsoft-hololens-hologram-hands-on-experience" target="_blank" rel="noopener">The Verge</a> (via <a href="https://ispr.info/2015/01/22/up-close-with-the-hololens-microsofts-intriguing-mixed-reality-product/" target="_blank" rel="noopener">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2015/01/welcome-age-holographs/">Welcome to the Age of Holographs</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2015/01/welcome-age-holographs/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">8792</post-id>	</item>
		<item>
		<title>London Firm Creates Mind-Controlled Commands for Google Glass</title>
		<link>https://www.situatedresearch.com/2014/07/london-firm-creates-mind-controlled-commands-google-glass/</link>
					<comments>https://www.situatedresearch.com/2014/07/london-firm-creates-mind-controlled-commands-google-glass/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Tue, 15 Jul 2014 20:28:01 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Serious Games]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Affect / Emotion]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Communication]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mental Models]]></category>
		<category><![CDATA[Personalization]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=8608</guid>

					<description><![CDATA[<p>Forget voice commands and touch gestures: A London firm has developed a way for Google Glass users to control their devices just by thinking. This Place, an agency that specializes in creating user interfaces and experiences for programs used in the medical industry, developed a software called MindRDR that allows Google Glass to connect with&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2014/07/london-firm-creates-mind-controlled-commands-google-glass/">London Firm Creates Mind-Controlled Commands for Google Glass</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Forget voice commands and touch gestures: A London firm has developed a way for Google Glass users to control their devices just by thinking.</p>
<p>This Place, an agency that specializes in creating user interfaces and experiences for programs used in the medical industry, developed software called MindRDR that allows Google Glass to connect with the NeuroSky MindWave Mobile EEG biosensor, a head-mounted device that can detect a person’s brain waves. <span id="more-8608"></span></p>
<p>EEG stands for electroencephalography, which is the measurement and recording of electrical activity in the brain. EEG biosensors have been around for decades, but until recently they were very expensive. NeuroSky is a Silicon Valley company that sells EEG biosensors, some for as little as $79.99 from Amazon.com.</p>
<p>The system works by pairing the EEG biosensor with Google’s $1,500 Glass device using Bluetooth. Once the connection has been made, the user fires up MindRDR, which takes what the EEG biosensor detects and converts it into commands that Glass can process.</p>
<p>After turning on the app, users will see a camera interface on the screen of their Google Glass. They can then pick a subject, aim their head in its direction, and concentrate on it while Glass displays a meter showing the level of their brain waves. The more intently a user focuses, the higher the meter climbs until it reaches the top, triggering Glass’ camera. By repeating the process, users can direct MindRDR to upload the photo to one of their social networks.</p>
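<p>In other words, the control scheme is a simple threshold trigger over the headset’s rolling 0–100 “attention” estimate. A sketch of that loop, with <code>read_attention()</code> standing in for the real SDK call (the thresholds are our guesses, not MindRDR’s):</p>
<pre><code>import time

TRIGGER = 80  # the meter must climb this high to fire an action
RESET = 30    # ...and fall back here before the next action can arm

def draw_meter(level):
    bars = level // 5
    print("[" + "#" * bars + "-" * (20 - bars) + "]", level)

def run(read_attention, actions):
    """actions, e.g. [take_photo, share_photo]: one fires per
    focus-relax-focus cycle, as the article describes."""
    step, armed = 0, True
    while step < len(actions):
        level = read_attention()  # 0-100 concentration estimate
        draw_meter(level)         # the rising bar the wearer watches
        if armed and level >= TRIGGER:
            actions[step]()       # meter hit the top: trigger
            step, armed = step + 1, False
        elif not armed and level <= RESET:
            armed = True          # wearer relaxed; re-arm for next step
        time.sleep(1.0)           # attention values update about once a second
</code></pre>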
<p>For now MindRDR can only be used to snap pictures, but This Place Chief Executive Dusan Hamlin said he hoped the agency would continue developing the software so that it could eventually help users overcome mobility limitations. Specifically, Hamlin said he would like MindRDR to help people who suffer from locked-in syndrome, in which a patient has lost motor control but remains aware and alert, as well as quadriplegia.</p>
<p>“The ability to be able to use their mind to make outputs to a device could be a huge thing for them,” Hamlin told the Los Angeles Times in a Skype interview.</p>
<p>But the possibilities for MindRDR extend beyond the medical field. Hamlin said he sees MindRDR as the launching point for a world where people can interact with their digital devices by simply thinking about what they want. To that end, This Place has uploaded the code for its software onto <a href="https://github.com/ThisPlace/MindRDR" target="_blank">GitHub</a>, a popular website used by developers to share code they create with others for free.</p>
<p>“What we’ve done is just scratch the surface, and we hope that we’ve inspired people to build on what we’ve started,” Hamlin said.</p>
<p>Written by: <a href="http://www.latimes.com/la-bio-salvador-rodriguez-staff.html" target="_blank">Salvador Rodriguez</a>, the <a href="http://www.latimes.com/business/technology/la-fi-tn-google-glass-mindrdr-20140711-story.html" target="_blank">Los Angeles Times</a> (via <a href="http://ispr.info/2014/07/14/mindrdr-lets-users-control-google-glass-with-their-thoughts/" target="_blank">Presence</a>); more information is available from <a href="http://mindrdr.thisplace.com/" target="_blank">This Place</a> and an article in <a href="http://www.bbc.com/news/technology-28237582" target="_blank">BBC News</a><br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2014/07/london-firm-creates-mind-controlled-commands-google-glass/">London Firm Creates Mind-Controlled Commands for Google Glass</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2014/07/london-firm-creates-mind-controlled-commands-google-glass/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">8608</post-id>	</item>
		<item>
		<title>IBM Forecasts Major Advances in Cognitive Computing</title>
		<link>https://www.situatedresearch.com/2013/12/ibm-forecasts-major-advances-cognitive-computing/</link>
					<comments>https://www.situatedresearch.com/2013/12/ibm-forecasts-major-advances-cognitive-computing/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Fri, 27 Dec 2013 16:59:58 +0000</pubDate>
				<category><![CDATA[Collaboration]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Serious Games]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Games for Learning]]></category>
		<category><![CDATA[Usability Research]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5532</guid>

					<description><![CDATA[<p>IBM on Tuesday released its annual &#8220;5 in 5&#8221; list of predictions about technological innovations that will change the way we live in the next five years, with the theme this year being cognitive advances in computing that help machines &#8220;learn&#8221; how to better serve us.  Last year&#8217;s 5 in 5 list also focused on&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/12/ibm-forecasts-major-advances-cognitive-computing/">IBM Forecasts Major Advances in Cognitive Computing</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>IBM on Tuesday released its annual &#8220;5 in 5&#8221; list of predictions about technological innovations that will change the way we live in the next five years, with the theme this year being cognitive advances in computing that help machines &#8220;learn&#8221; how to better serve us. <span id="more-5532"></span></p>
<p>Last year&#8217;s 5 in 5 list also focused on the <a href="http://www.pcmag.com/article2/0,2817,2413300,00.asp" data-ls-seen="1">rise of cognition in computing</a> and how the five senses humans use to gain information about and manipulate the physical world are being emulated by computing systems like IBM&#8217;s own Watson artificial intelligence framework.</p>
<p>For this year&#8217;s edition, IBM got a little more specific about the ways that such advances in machine learning will affect us, touching more on data analytics and offering up the following predictions:</p>
<p><b>The classroom will learn you:</b> Kerrie Holley of IBM described this as a concept &#8220;built on a lot of the technologies you see with how the Khan Academy works, cloud-based computing, and the like.&#8221; In the years to come, new learning technologies will use advanced analytics of &#8220;longitudinal student records&#8221; to help teachers better assess what individual students need, which ones are at risk, and how to help them in their education, he said.</p>
<p><iframe loading="lazy" frameborder="0" allowfullscreen src="//www.youtube.com/embed/hTA5GyWamR0" width="650" height="390"></iframe></p>
<p><b>Buying local will beat online.</b> Less about a specific tech advance, this prediction is based on the idea that the &#8220;tables will turn&#8221; in terms of access to the kind of technology, cloud services, and analytics that can help &#8220;mom and pop&#8221; businesses compete more readily with big national and global retailers, Holley said. &#8220;Technology costs are dropping and as they do, proximity will allow local retailers to create experiences the big retailers are not able to do online.&#8221;</p>
<p><iframe loading="lazy" frameborder="0" allowfullscreen src="//www.youtube.com/embed/yKNSOwLcrkE" width="650" height="390"></iframe></p>
<p><b>Doctors will use your DNA to keep you well.</b> IBM presented this prediction as one involving more advanced computational work than some of the others in its 5-in-5 list. &#8220;Cognitive-based systems like Watson, along with breakthroughs in genomic research, will enable doctors to be better able to diagnose cancer and offer better treatments,&#8221; Holley said.</p>
<p><iframe loading="lazy" frameborder="0" allowfullscreen src="//www.youtube.com/embed/0M1DMdc1mQ0" width="650" height="390"></iframe></p>
<p><b>The city will help you live in it.</b> In just a few decades, as many as seven out of 10 people around the world will live in cities, according to some projections. We&#8217;re already seeing more computational resources being dedicated to helping those city dwellers manage their urban lives and that will only accelerate, according to IBM.</p>
<p><iframe loading="lazy" frameborder="0" allowfullscreen src="//www.youtube.com/embed/tVGviMIMjN0" width="650" height="390"></iframe></p>
<p><b>A digital guardian will protect you online.</b> Holley explained this prediction as an expansion on financial fraud protection services offered by banks and credit card companies, only much more personally tailored to individuals to safeguard their entire digital lives.</p>
<p><iframe loading="lazy" frameborder="0" allowfullscreen src="//www.youtube.com/embed/al8ng82nRss" width="650" height="390"></iframe></p>
<p>&#8220;This year&#8217;s IBM 5 in 5 explores the idea that everything will learn—driven by a new era of cognitive systems where machines will learn, reason and engage with us in a more natural and personalized way. These innovations are beginning to emerge enabled by cloud computing, big data analytics, and learning technologies all coming together,&#8221; the research team behind the company&#8217;s annual list of predictions said in a statement.</p>
<p>&#8220;Over time these computers will get smarter and more customized through interactions with data, devices, and people, helping us take on what may have been seen as unsolvable problems by using all the information that surrounds us and bringing the right insight or suggestion to our fingertips right when it&#8217;s most needed. A new era in computing will lead to breakthroughs that will amplify human abilities, assist us in making good choices, look out for us, and help us navigate our world in powerful new ways.&#8221;</p>
<p>Written by: <a href="http://www.pcmag.com/author-bio/damon-poeter">Damon Poeter</a>, <a href="http://www.pcmag.com/article2/0,2817,2428432,00.asp">PC Mag</a><br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/12/ibm-forecasts-major-advances-cognitive-computing/">IBM Forecasts Major Advances in Cognitive Computing</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/12/ibm-forecasts-major-advances-cognitive-computing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5532</post-id>	</item>
		<item>
		<title>Ohio State Doctor Shows Promise of Google Glass in Live Surgery</title>
		<link>https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/</link>
					<comments>https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Thu, 12 Sep 2013 17:55:48 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Serious Games]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Health Care]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Interface]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5320</guid>

					<description><![CDATA[<p>COLUMBUS, Ohio – A surgeon at The Ohio State University Wexner Medical Center is the first in the United States to consult with a distant colleague using live, point-of-view video from the operating room via Google Glass, a head-mounted computer and camera device.  “It’s a privilege to be a part of this project as we explore how&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/">Ohio State Doctor Shows Promise of Google Glass in Live Surgery</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>COLUMBUS, Ohio – A surgeon at <a href="http://www.medicalcenter.osu.edu/Pages/index.aspx">The Ohio State University Wexner Medical Center</a> is the first in the United States to consult with a distant colleague using live, point-of-view video from the operating room via Google Glass, a head-mounted computer and camera device. <span id="more-5320"></span></p>
<p>“It’s a privilege to be a part of this project as we explore how this exciting new technology might be incorporated into the everyday care of our patients,” said Dr. <a href="http://ortho.osu.edu/directories/faculty/christopherkaeding/">Christopher Kaeding</a>, the physician who performed the surgery and director of sports medicine at Ohio State. “To be honest, once we got into the surgery, I often forgot the device was there. It just seemed very intuitive and fit seamlessly.”</p>
<p>Google Glass has a frame similar to traditional glasses, but instead of lenses, there is a small glass block that sits above the right eye.  On that glass is a computer screen that, with a simple voice command, allows users to pull up information as they would on any other computer.  Attached to the front of the device is a camera that offers a point-of-view image and the ability to take both photos and videos while the device is worn.</p>
<p>During this procedure at the medical center’s University East facility, Kaeding wore the device as he performed ACL surgery on Paula Kobalka, 47, from Westerville, Ohio, who hurt her knee playing softball.  As he performed her operation at a facility on the east side of Columbus, Google Glass showed his vantage point via the internet to audiences miles away.</p>
<p>Across town, one of Kaeding’s Ohio State colleagues, Dr. Robert Magnussen, watched the surgery from his office, while on the main campus, several students at <a href="http://medicine.osu.edu/Pages/default.aspx">The Ohio State University College of Medicine</a> watched on their laptops.</p>
<p>“To have the opportunity to be a medical student and share in this technology is really exciting,” said Ryan Blackwell, a second-year medical student who watched the surgery remotely.   “This could have huge implications, not only from the medical education perspective, but because a doctor can use this technology remotely, it could spread patient care all over the world in places that we don’t have it already.”</p>
<p>“As an academic medical center, we’re very excited about the opportunities this device could provide for education,” said Dr. <a href="http://p4mi.org/clay-marsh-md">Clay Marsh</a>, chief innovation officer at The Ohio State University Wexner Medical Center. “But beyond that, it could be a game-changer for the doctor during the surgery itself.”</p>
<p>Experts have theorized that during surgery doctors could use voice commands to instantly call up x-ray or MRI images of their patient, pathology reports or reference materials.  They could collaborate live and face-to-face with colleagues via the internet, anywhere in the world.</p>
<p>“It puts you right there, real time,” said Marsh, who is also the executive director of the Center for Personalized Health Care at Ohio State. “Not only might you be able to call up any kind of information you need or to get the help you need, but it’s the ability to do it immediately that’s so exciting,” he said.  “Now, we just have to start using it. Like many technologies, it needs to be evaluated in different situations to find out where the greatest value is and how it can impact the lives of our patients in a positive way.”</p>
<p>Only 1,000 people in the United States have been chosen to test Google Glass as part of Google’s Explorer Program. Dr. Ismail Nabeel, an assistant professor of general internal medicine at Ohio State, applied and was chosen. He then partnered with Kaeding to perform this groundbreaking surgery and to help test technology that could change the way we see medicine in the future.</p>
<hr />
<p>Broadcast quality video and high-definition photos available for download: <a href="http://bit.ly/16jXc6c">http://bit.ly/16jXc6c</a></p>
<p>Written by: The <a href="http://www.medicalcenter.osu.edu/mediaroom/releases/Pages/Ohio-State-Doctor-Shows-Promise-of-Google-Glass-in-Live-Surgery.aspx">Ohio State University</a> (via <a href="http://ispr.info/2013/09/03/ohio-state-doctor-shows-promise-of-google-glass-in-live-surgery/">Presence</a>); for details about the first international Google Glass surgery, in June 2013, see <a href="http://www.clinicacemtro.com/index.php/en/sala-de-prensa-3/noticias/679-clinica-cemtro-first-ggogle-glass-surgery">Clinica Cemtro</a>; for a report about early reactions from those testing Glass see <a href="http://www.npr.org/templates/story/story.php?storyId=216094970">NPR</a></p>
<p>Image: Dr. Christopher Kaeding, an orthopedic surgeon at The Ohio State University Wexner Medical Center, is shown wearing Google Glass</p>
<p>Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/">Ohio State Doctor Shows Promise of Google Glass in Live Surgery</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5320</post-id>	</item>
		<item>
		<title>Beyond Google Glass: The Evolution of Augmented Reality</title>
		<link>https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/</link>
					<comments>https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Mon, 17 Jun 2013 16:38:34 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Communication]]></category>
		<category><![CDATA[Ergonomics]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5184</guid>

					<description><![CDATA[<p>The wearable revolution is heading beyond Google Glass, fitness tracking and health monitoring. The future is wearables that conjure up a digital layer in real space to “augment” reality. SANTA CLARA, Calif. — Reality isn’t what is used to be. With increasingly powerful technologies, the human universe is being reimagined way beyond Google Glass’ photo-tapping&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/">Beyond Google Glass: The Evolution of Augmented Reality</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><b>The wearable revolution is heading beyond Google Glass, fitness tracking and health monitoring. The future is wearables that conjure up a digital layer in real space to “augment” reality.</b></p>
<p>SANTA CLARA, Calif. — Reality isn’t what it used to be. With increasingly powerful technologies, the human universe is being reimagined way beyond Google Glass’ photo-tapping and info cards floating in space above your eye. The future is fashionable eyewear, contact lenses or even bionic eyes with immersive 3D displays, conjuring up a digital layer to “augment” reality, enabling entire new classes of applications and user experiences. <span id="more-5184"></span></p>
<p>Like most technologies that eventually reach a mass market, augmented reality, or AR, has been gestating in university labs, as well as small companies focused on gaming and vertical applications, for nearly half a century. Emerging products like <a href="http://reviews.cnet.com/google-glass/">Google Glass</a> and Oculus Rift’s 3D virtual reality headset for immersive gaming are drawing attention to what could now be termed the “wearable revolution,” but they barely scratch the surface of what’s to come.</p>
<figure id="attachment_5186" aria-describedby="caption-attachment-5186" style="width: 577px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5186" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_11.34.51_AM_1.png?resize=577%2C450&#038;ssl=1" alt="The Sword of Damocles head-mounted display. &quot;The ultimate display would, of course, be a room within which the computer can control the existence of matter,&quot; Sutherland wrote in his 1965 essay. (Credit: Ivan Sutherland &quot;The Ultimate Display&quot;)" width="577" height="450" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_11.34.51_AM_1.png?w=577&amp;ssl=1 577w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_11.34.51_AM_1.png?resize=300%2C233&amp;ssl=1 300w" sizes="auto, (max-width: 577px) 100vw, 577px" /><figcaption id="caption-attachment-5186" class="wp-caption-text"><em>The Sword of Damocles head-mounted display. &#8220;The ultimate display would, of course, be a room within which the computer can control the existence of matter,&#8221; Sutherland wrote in his 1965 essay. (Credit: Ivan Sutherland &#8220;The Ultimate Display&#8221;)</em></figcaption></figure>
<p>The wearable revolution can be traced back to <a href="http://en.wikipedia.org/wiki/Ivan_Sutherland">Ivan Sutherland</a>, a ground-breaking computer scientist at the University of Utah who in 1965 first described a head-mounted display with half-silvered mirrors that let the wearer see a virtual world superimposed on the real world. In 1968 he was able to demonstrate the concept, which was dubbed “<a href="http://www.computerhistory.org/revolution/input-output/14/356/1830">The Sword of Damocles</a>.”</p>
<figure id="attachment_5187" aria-describedby="caption-attachment-5187" style="width: 610px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5187 " src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/P1040832_610x458.jpg?resize=610%2C458&#038;ssl=1" alt="P1040832_610x458" width="610" height="458" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/P1040832_610x458.jpg?w=610&amp;ssl=1 610w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/P1040832_610x458.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 610px) 100vw, 610px" /><figcaption id="caption-attachment-5187" class="wp-caption-text"><em>Steven Feiner of Columbia University and Steve Mann of the University of Toronto at the Augmented World Expo in Santa Clara, Calif., June 4, 2013. Both are now involved in the augmented reality startup Meta. (Credit: Dan Farber)</em></figcaption></figure>
<p>His work was followed up and advanced decades later by researchers including the University of Toronto’s Steve Mann and Columbia University’s Steven Feiner. In the second decade of the 21st century, the technology is finally catching up with their concepts.</p>
<p>The necessary apparatus of cameras, computers, sensors and connectivity is coming down in cost and size and increasing in speed, accuracy and resolution to the point that wearable computers will be viewed as a cool accessory, mediating our interactions with the analog and digital worlds.</p>
<p><b>Augmented Reality past and future</b></p>
<p>“You need to have technology that is sufficiently comfortable and usable, and a set of potential adopters who would be comfortable wearing the technology,” said Feiner at the gathering of the fledgling AR industry at the <a href="http://augmentedworldexpo.com/">Augmented Reality Expo</a> here Wednesday. “It would be like moving from big headphones to earbuds. When they are very small and comfortable, you don’t feel weird, but cool.” He added that glasses with a “sexy lump of bump” housing electronics and a display could also be cool to the early adopters, especially the younger generation that has grown up digital. However, he didn’t have any prediction for when wearable computers would reach a mass market.</p>
<p>In the last decade, AR has been primarily focused on immersive gaming that teleports users to another world and on vertical applications, such as tethered, interactive 3D training simulations.</p>
<figure id="attachment_5188" aria-describedby="caption-attachment-5188" style="width: 610px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5188" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.54_PM_610x397.png?resize=610%2C397&#038;ssl=1" alt="Screen_Shot_2013-06-06_at_2.43.54_PM_610x397" width="610" height="397" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.54_PM_610x397.png?w=610&amp;ssl=1 610w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.54_PM_610x397.png?resize=300%2C195&amp;ssl=1 300w" sizes="auto, (max-width: 610px) 100vw, 610px" /><figcaption id="caption-attachment-5188" class="wp-caption-text"><em>Augmented reality can help in training, such as learning how to weld aided by a 3D environment that tracks user movements precisely. Seabery Augmented Training&#8217;s Soldamatic application, pictured here, could be used for medical training, bomb disposal and other industry verticals. (Credit: Dan Farber)</em></figcaption></figure>
<p>But now augmented reality is about to break out into free space. “AR will be the interface for the Internet of things,” said Greg Kipper, author of “Augmented Reality: An Emerging Technologies Guide to AR.”</p>
<p>“It is a transition time, like from the command line to graphical user interface,” he said. “Imagine trying to do Photoshop in a command-line interface. Augmented reality will bring to the world things beyond the graphical user interface. With sensors, computational power, storage and bandwidth, we’ll see the world in a new way and make it very personal.”</p>
<figure id="attachment_5189" aria-describedby="caption-attachment-5189" style="width: 270px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5189" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.34.00_PM_270x253.png?resize=270%2C253&#038;ssl=1" alt="Will Wright, the man behind The Sims, speaking at the Augmented Reality Expo on June 4, 2013. (Credit: Dan Farber)" width="270" height="253" /><figcaption id="caption-attachment-5189" class="wp-caption-text"><em>Will Wright, the man behind The Sims, speaking at the Augmented Reality Expo on June 4, 2013.</em><br /><em>(Credit: Dan Farber)</em></figcaption></figure>
<p>Will Wright, creator of the popular Sims family of games, likened AR to having super-sensory abilities, like flipping a switch to see what is underground, beneath your feet. “It’s not about bookmarks or restaurant reviews…it’s something that maps to my intuition.” He hoped that instead of augmenting reality, the technology could “decimate” reality, filtering out even more information than the brain already does to engage reality with less cacophony.</p>
<p>Steve Mann, who is rarely seen without one of his wearable computing rigs and is considered the father of AR, views the wearable revolution as a benefit to society. Quality of life can be improved with overlays of information, adding and subtracting it to facilitate improved “eyesight,” he said. “The first purpose is to help people see better,” he said during his keynote at the expo.</p>
<p>Just as the smartphone is compressing a lot of the function from antecedent computing devices into a single product, wearable computing will eventually make the handheld smartphone irrelevant.</p>
<p>“The value proposition of digital eyewear is having all devices in one, with a camera for each eye representing full body 3D, and the ability to interact with an infinite screen. We are architecting the future of interaction,” said Meron Gribetz of <a href="http://news.cnet.com/8301-11386_3-57584739-76/meta-glasses-bring-3d-and-your-hands-into-the-picture/">Meta, a Y Combinator startup</a> working on a new operating system and hardware interface for augmented reality computing.</p>
<p>“There is no other future of computing other than this technology, which can display information from the real world and control objects with your fingers at low latency and high dexterity. It’s the keyboard and mouse of the future,” he claimed.</p>
<figure id="attachment_5190" aria-describedby="caption-attachment-5190" style="width: 589px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5190 " src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-05-17_at_7.26.10_AM.png?resize=589%2C381&#038;ssl=1" alt="Screen_Shot_2013-05-17_at_7.26.10_AM" width="589" height="381" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-05-17_at_7.26.10_AM.png?w=589&amp;ssl=1 589w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-05-17_at_7.26.10_AM.png?resize=300%2C194&amp;ssl=1 300w" sizes="auto, (max-width: 589px) 100vw, 589px" /><figcaption id="caption-attachment-5190" class="wp-caption-text"><em>Meta can project a 3D image on a wall and users interact with their hands. (Credit: Meta)</em></figcaption></figure>
<p>Atheer, a Mountain View, Calif.-based AR startup, is <a href="http://news.cnet.com/8301-11386_3-57586750-76/atheer-bringing-3d-augmented-reality-and-gesture-control-to-android/">developing a platform that will work with existing mobile operating systems</a>, such as Google’s <a href="http://www.cnet.com/android-atlas/">Android</a>. “We are the first mobile 3D platform delivering the human interface. We are taking the touch experience on smart devices, getting the Internet out of these monitors and putting it everywhere in the physical world around you,” said CEO Sulieman Itani. “In 3D, you can paint in the physical world. For example, you could leave a note for a friend in the air at a restaurant, and when the friend walks into the restaurant, only they can see it.”</p>
<p>The company plans to seed its technology to developers this year and have its technology embedded in stylish, lightweight glasses with cameras next year.</p>
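<p>Itani’s restaurant example implies two pieces of machinery: a note anchored to a physical location, and a visibility rule scoped to particular people. As a data-structure sketch (our invention for illustration, not Atheer’s platform):</p>
<pre><code>from dataclasses import dataclass
import math

@dataclass
class ARNote:
    text: str
    lat: float
    lon: float
    visible_to: set  # user ids allowed to render this note

def visible_notes(notes, user_id, user_lat, user_lon, radius_m=50.0):
    """Notes this user may see: nearby AND addressed to them."""
    out = []
    for n in notes:
        if user_id not in n.visible_to:
            continue
        # Rough small-distance approximation: ~111 km per degree of latitude.
        dy = (n.lat - user_lat) * 111_000
        dx = (n.lon - user_lon) * 111_000 * math.cos(math.radians(user_lat))
        if math.hypot(dx, dy) <= radius_m:
            out.append(n)
    return out
</code></pre>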
<p>The transition to touch and gesture interfaces doesn’t mean that the old modes of human-computer interaction go away. Just as TV didn’t replace radio, augmented reality won’t obliterate previous interfaces. The keyboard might still be the best interface for writing a book. Nor is waving your hands in front of your face all day a good interface.</p>
<p>“Holding hands out in front of yourself as the primary interface produces the ‘gorilla arm’ effect,” said Noah Zerkin, who is developing a full-body inertial motion-capture system for head-mounted displays. “You get tired. We need to have alternative interfaces. If not thought-based, it needs to be subtle gestures that don’t require you to wave your hands around in front of your face.”</p>
<p><a href="http://www.threegear.com/about.html">3Gear Systems</a> is working on technology that allows 3D cameras mounted above a keyboard, like a lamp, to detect smaller gestures just above the keyboard, such as pinching to rotate an object on a screen, and can use input from all 10 fingers with millimeter-level accuracy.</p>
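<p>That kind of small-gesture detection is easy to picture in code: with fingertip positions streaming from the depth cameras, a pinch is just the thumb and index tips closing within a few millimeters. A toy sketch (field names and thresholds are illustrative, not 3Gear’s SDK):</p>
<pre><code>import math

PINCH_MM = 8.0  # fingertips closer than this count as "pinched"

def midpoint(a, b):
    return tuple((p + q) / 2 for p, q in zip(a, b))

def is_pinched(hand):
    """hand: dict mapping fingertip name to (x, y, z) in millimeters."""
    return math.dist(hand["thumb_tip"], hand["index_tip"]) < PINCH_MM

def pinch_rotate(obj, prev_hand, hand, degrees_per_mm=0.5):
    # While the pinch is held, lateral motion of the pinch point rotates
    # the on-screen object -- the example given in the article.
    if is_pinched(prev_hand) and is_pinched(hand):
        prev_mid = midpoint(prev_hand["thumb_tip"], prev_hand["index_tip"])
        mid = midpoint(hand["thumb_tip"], hand["index_tip"])
        obj.rotate_y((mid[0] - prev_mid[0]) * degrees_per_mm)
</code></pre>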
<p>Some companies are taking less radical approaches, focusing on inserting a layer of digital information into scenes via smartphones. <a href="http://www.parworks.com/">Par Works</a>, for example, offers image recognition technology that makes it possible to overlay digital imagery on real-world scenes, such as photos and videos, with precision. A person looking for an apartment takes a picture of a building with a smartphone and the app overlays information on the image, or a shopper will see coupons or other information for various products on a shelf in a drug store.</p>
<p style="text-align: center;"><iframe loading="lazy" src="http://player.vimeo.com/video/53109174?badge=0&amp;api=1" width="600" height="450" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Brands are adopting AR technology to increase the performance of ads and sales. Several companies provide ways to turn a print ad into an interactive experience just by pointing the camera at the paper or an object with a marker. <a href="http://blippar.com/about">Blippar</a>, for instance, recognizes images when a phone camera is pointed at ads or objects bearing its marker, and inserts virtual layers of content.</p>
<h3>The future of augmented reality</h3>
<p>And where is all this heading over the next few years? It’s beginning to look like a real business, just as mobile did nearly a decade ago. Mobile analyst Tomi Ahonen expects AR to be adopted by a billion users by 2020. Intel is betting that AR will be big. The chip maker is <a href="http://news.cnet.com/8301-11386_3-57587699-76/intel-creates-$100-million-fund-for-perceptual-computing/">investing $100 million over the next 2 to 3 years</a> to fund companies developing “perceptual computing” software and apps, focusing on next-generation, natural user interfaces such as touch, gesture, voice, emotion sensing, biometrics, and image recognition.</p>
<p>Apple isn’t in the AR game yet, but the company has been <a href="http://news.cnet.com/8301-13579_3-57575121-37/apple-patents-augmented-reality-system/">awarded a U.S. patent</a>, “Synchronized, interactive augmented reality displays for multifunction devices,” for overlaying video on live video feeds.</p>
<figure id="attachment_5191" aria-describedby="caption-attachment-5191" style="width: 598px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5191" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.19_PM.png?resize=598%2C437&#038;ssl=1" alt="Screen_Shot_2013-06-06_at_2.43.19_PM" width="598" height="437" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.19_PM.png?w=598&amp;ssl=1 598w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.19_PM.png?resize=300%2C219&amp;ssl=1 300w" sizes="auto, (max-width: 598px) 100vw, 598px" /><figcaption id="caption-attachment-5191" class="wp-caption-text"><em>AR is looking is might be the 8th mass market to evolve, following print, recordings, cinema, radio, TV, the Internet and mobile, according mobile industry analyst Tomi Ahonen. (Credit: Tomi Ahonen)</em></figcaption></figure>
<p>Eyewear will evolve over the next year into comfortable, stylish glasses with powerful embedded technology. They will range from Google Glass-style glance-at displays that also replace the phone to stereoscopic 3D-viewing wearables for everyday use.</p>
<p>“You’ll get 20/20, perfectly augmented vision by 2020, with movie-quality special effects blended seamlessly into the world around you,” said Dave Lorenzini, founder of AugmentedRealityCompany.com and former director at Keyhole.com, now known as Google Earth. “The effects will look so real, you’ll have to lift your display to see what’s really there. There’s more of the world than meets the eye, and that’s what’s coming.”</p>
<p>He cautioned that the growth of the AR industry could be slowed by a lack of standards to connect disparate players and their formats for bringing a 3D digital layer to life. “The AR industry has to get together to power the hallucination of what’s to come,” Lorenzini said. He added that a key turning point will be the availability of the WYSIWYG (What You See Is What You Get) real-world markup tools needed to bring this digital layer to life.</p>
<p>When the AR industry does take off, Lorenzini envisions a trillion-dollar market for animated content, services and special effects layered into the real world. “Imagine people tagging friends with visual effects like a 3D halo and wings, or paying for a face recognition service to scan and add a floating name tag over the head of everyone in a room,” he said. “AR will grow from specific vertical uses to mass market appeal, driven by young, early adopters.</p>
<p>“Anyone reviewing devices like Google Glass needs to take it to their kids’ school before they pass judgement,” Lorenzini added. “This is not a device from our time, it’s from theirs. They love it, use it effortlessly, and are totally unfazed by ad targeting or privacy concerns. It will be a natural part of who they are, how they learn, connect and play.”</p>
<p>Eventually, wearable technology will become more integrated with the human body. With advances in miniaturization and nanotechnology, glasses will be replaced with contact lenses or even bionic eyes that record everything, make phone calls and allow you to use parts of your body, or even your thoughts, to navigate the world.</p>
<p>“Contact lenses are difficult now but the bionic eye will become commonplace and AR will just be a feature,” Kipper said. “Some may choose to have eyes in the back of their heads, and some won’t. Some will want to be cyborgs. We will always use tools as advanced as they can be to help ourselves.”</p>
<p>Brian Mullins, CEO of Daqri, an augmented reality developer of custom solutions, went even further in melding humans and technology. “Thinking is the future of AR,” he said. Mullins talked about measuring “thought intensity” with EEG machines and focusing the mind to manipulate objects during a panel discussion at the Augmented Reality Expo.</p>
<p>Of course, the technical challenges are accompanied by issues of social etiquette and privacy. Smartphones are now a well-accepted part of daily life in most countries, but issues around data ownership and access to the data abound. The subtlety and potentially always-on capacity of wearable technologies will create more privacy concerns and challenges to acceptance.</p>
<p>Feiner acknowledged that it’s “scary” in terms of the information available, especially when billions of people with cameras and microphones can capture anything in public. “There are no laws against it,” he noted.</p>
<p>He gave Google some compliments for not overloading Glass with features. “It’s not suffering from doing too much too soon,” he said. Whether Google Glass is the tip of the spear for the mass adoption of far more powerful AR is uncertain, but it is doing a good job of surfacing the issues around the introduction of a disruptive, new way of computing.</p>
<p>Nicola Liberati, a Ph.D. student in philosophy at the University of Pisa studying the intersection of humans and technology, suggested another line of thinking about AR in his presentation at the expo. “We should not focus our attention only on what we can do with such technology, but also on what we become by using it.”</p>
<p>So, when you are strolling down the street wearing the latest digital eyewear from Google, Apple or some as yet unformed or now early-stage company, with your continuous partial attention on the 3D holographic screen feeding you all kinds of personalized information about the environment around you, zeroing in on the people and places in your field of view or piped in remotely from around the real and virtual worlds, and spaces in between, think about what we have become.</p>
<p>It all depends on your perspective.</p>
<p>Written by: <a href="http://www.cnet.com/profile/dfarber/">Dan Farber</a>, <a href="http://news.cnet.com/8301-11386_3-57588128-76/the-next-big-thing-in-tech-augmented-reality/">CNET</a> (via <a href="http://ispr.info/2013/06/12/beyond-google-glass-the-evolution-of-augmented-reality/">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/">Beyond Google Glass: The Evolution of Augmented Reality</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5184</post-id>	</item>
		<item>
		<title>How Google is Melding Our Real and Virtual Worlds with Games, Apps … and Glass</title>
		<link>https://www.situatedresearch.com/2013/05/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/</link>
					<comments>https://www.situatedresearch.com/2013/05/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Mon, 20 May 2013 19:25:11 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Communication]]></category>
		<category><![CDATA[Communities]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Games for Learning]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5126</guid>

					<description><![CDATA[<p>“The world around you is not what it seems,” says Ingress, the virtual game that uses the real world as its gamespace. And, perhaps, when Google’s semi-independent division Niantic Labs is finished with its mission, we humans won’t be, either. Google’s mission is to organize the world’s information and make it universally accessible and usable. Note&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/05/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/">How Google is Melding Our Real and Virtual Worlds with Games, Apps … and Glass</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>“The world around you is not what it seems,” says <a href="http://www.ingress.com/">Ingress</a>, the virtual game that uses the real world as its gamespace. And, perhaps, when Google’s semi-independent division Niantic Labs is finished with its mission, we humans won’t be, either.</p>
<p>Google’s mission is to organize the world’s information and make it universally accessible and usable. Note carefully that Google says nothing about the Internet in that statement. <span id="more-5126"></span></p>
<p style="text-align: center;"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5128 aligncenter" alt="Ingress" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/Ingress1.jpg?resize=558%2C353&#038;ssl=1" width="558" height="353" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/Ingress1.jpg?w=558&amp;ssl=1 558w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/Ingress1.jpg?resize=300%2C189&amp;ssl=1 300w" sizes="auto, (max-width: 558px) 100vw, 558px" /></p>
<p>In the last few eye-blinks of human history, we’ve created virtual worlds: cyberspace, virtual reality, the World Wide Web … places that exist in our devices, on our computers, in our servers, on the internet, and in our heads. But there’s also a space in which we live and walk and eat and breathe. Realspace. Meatspace. IRL. The real world, so we say, that we can touch and taste and smell.</p>
<p>Google’s trying to bring those worlds together, partly through the work of Niantic Labs.</p>
<p>Augmented reality is nothing new, of course, with marketing-focused companies like Layar building connections between physical and virtual reality and Ikea’s <a href="http://venturebeat.com/2012/12/23/augmented-reality/">most-downloaded branded app of 2012</a> doing similar things. Other startups have explored AR capabilities as well, such as Caterina Fake’s <a href="https://findery.com/">Findery</a>, which invites people to leave geo-tied notes that others can discover and read.</p>
<p style="text-align: center;"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter  wp-image-5129" alt="screen-shot-2013-04-29-at-6-49-41-am" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-6-49-41-am.png?resize=590%2C346&#038;ssl=1" width="590" height="346" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-6-49-41-am.png?w=737&amp;ssl=1 737w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-6-49-41-am.png?resize=300%2C176&amp;ssl=1 300w" sizes="auto, (max-width: 590px) 100vw, 590px" /></p>
<p>But when a company with the resources of a Google tackles the problem, armed with a tool in Google Glass that seems destined for significant developer (and probably user) penetration and that can create interconnections between the real and the virtual more efficiently than any previous product, you’ve got something interesting. And potentially huge.</p>
<p>So a couple of weeks ago, I chatted with the man who’s leading that effort.</p>
<h3>John Hanke: the missionary of mapping</h3>
<p>John Hanke is vice president of product for Niantic Labs, the year-old Google-but-not-Google division of just a few dozen engineers that brought us <a href="http://venturebeat.com/2012/09/27/googles-new-field-trip-virtually-augmenting-the-awesomeness-of-reality/">Field Trip, the app to explore the world around us with a virtual docent</a>. And, of course, the virtual/real game Ingress.</p>
<figure id="attachment_5130" aria-describedby="caption-attachment-5130" style="width: 359px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5130" alt="John Hanke" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/12926c4.jpg?resize=359%2C359&#038;ssl=1" width="359" height="359" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/12926c4.jpg?w=359&amp;ssl=1 359w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/12926c4.jpg?resize=150%2C150&amp;ssl=1 150w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/12926c4.jpg?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/12926c4.jpg?resize=90%2C90&amp;ssl=1 90w" sizes="auto, (max-width: 359px) 100vw, 359px" /><figcaption id="caption-attachment-5130" class="wp-caption-text"><em>John Hanke</em></figcaption></figure>
<p>Before Niantic, Hanke ran Google Maps, Google Earth, and other geo areas, and before Google, he was the cofounder and CEO of Keyhole, the innovative geo-mapping and visualization company. Google bought Keyhole in 2004, which brought Hanke into the search engine’s fold to lead its maps, earth, street view, and local divisions.</p>
<p>Now, he told me, rather than let him leave to scratch his entrepreneurial itch yet again and do another startup, Google gave him a semi-autonomous group to, as his LinkedIn profile suggests, experiment at the “intersection of mobility, real world, and the Internet.”</p>
<p>“We set up Niantic as a group that could explore new types of mobile apps with ubiquitous always-on features,” Hanke said. “And we’re set up to act like a start-up.”</p>
<h3>Virtual + physical = Field Trip</h3>
<p>Field Trip was one of Niantic’s first creations, and while on the surface it’s an app that helps you find cool stuff, ultimately it’s a tool to merge metadata and data and then present them together. While you’re in the physical world, Field Trip pulls data about that experience from digital sources, feeding you that information, and changing — deepening, enriching — your experience of place. Layering with history, perhaps, or science, or culture.</p>
<p>Because, after all, one rock is very much like another rock, but if this is the precise rock where Geronimo attacked Mexican soldiers armed with only a knife and his courage, that changes our experience of this particular place. And the merging/melding/layering of virtual and physical makes it more real, in a sense — hyperreal.</p>
<figure id="attachment_5131" aria-describedby="caption-attachment-5131" style="width: 245px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5131" alt="Google’s Field Trip app helps you explore “reality.”" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/ft-screenshot-5.png?resize=245%2C435&#038;ssl=1" width="245" height="435" /><figcaption id="caption-attachment-5131" class="wp-caption-text"><em>Google’s Field Trip app helps you explore “reality.”</em></figcaption></figure>
<p>Enabling that, of course, requires extensive virtual enhancement of the what-you-see-is-what-you-get world.</p>
<p>“One of the things that we’re trying to evangelize is the concept of geo-tagging everything,” Hanke told me. “I would have expected eight years ago that it would be ubiquitous now, but it’s still not. But I think we’ll get there.”</p>
<p>Geotagging everything digital is a key intersection point between virtual and real. If this blog post is written <i>here</i>, and not <i>there</i>, that adds flavor and nuance to the information. And if a particular historical fact is geotagged to a specific mapped location, that adds depth and dimension to our experience of that place.</p>
<p>“We’re applying some of the same techniques we currently use in standard web search, and the same kind of discipline, to pull really interesting, really good places up from everything else,” Hanke says. “The model is that you’re walking through an unfamiliar neighborhood, but with a friend who is telling you the best things around you. You enjoy it just like before, but you’re a little more informed.”</p>
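<p>To make that “knowledgeable friend” model concrete, here is a minimal sketch of the kind of lookup such an app might perform: filter geotagged entries by great-circle distance from the walker’s position and surface the highest-ranked ones. The place names, scores, and function names below are hypothetical illustrations, not Niantic’s actual code or data.</p>
<pre><code>
# Hypothetical sketch of a "knowledgeable friend" lookup: rank geotagged
# places near the user's position. Illustrative only; not Niantic's code.
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Place:
    name: str
    lat: float
    lon: float
    score: float  # stand-in for a search-style "interestingness" rank

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby_highlights(places, lat, lon, radius_km=0.5, limit=3):
    """Return the top-ranked places within walking distance."""
    in_range = [p for p in places
                if radius_km >= haversine_km(lat, lon, p.lat, p.lon)]
    return sorted(in_range, key=lambda p: p.score, reverse=True)[:limit]

places = [
    Place("Historic mural", 37.7750, -122.4190, 0.9),
    Place("Chain coffee shop", 37.7751, -122.4185, 0.2),
]
for p in nearby_highlights(places, 37.7749, -122.4194):
    print(p.name)  # the "best things around you," most interesting first
</code></pre>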
<h3>AR + MMO + IRL</h3>
<p>Depth and dimension are definitely core components of Ingress, another Niantic Labs app/experiment/game. Ingress is a — take a deep breath — augmented reality massively multiplayer online video game.</p>
<p>The real world is real, but it’s fought over virtually by two shadowy groups: the Enlightened and the Resistance. Niantic has filled the Earth with virtual portals, usually coincident with actual physical landmarks or monuments, that players need to capture in order to gain territory. Capture territory with large numbers of people (aka “mind units”) and your faction gets more powerful.</p>
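<p>As a rough illustration of those mechanics, a faction’s standing can be modeled as the running sum of the “mind units” inside the fields it controls. The factions are real, but the field data and code below are a hypothetical sketch for illustration, not Ingress’s actual scoring logic.</p>
<pre><code>
# Hypothetical sketch of Ingress-style territory scoring, for illustration only:
# a faction's score is the sum of "mind units" in the fields it controls.
from collections import defaultdict

# Each controlled field: (owning faction, estimated population in mind units)
fields = [
    ("Enlightened", 120_000),
    ("Resistance", 75_000),
    ("Enlightened", 40_000),
]

scores = defaultdict(int)
for faction, mind_units in fields:
    scores[faction] += mind_units

# Print standings, strongest faction first
for faction, total in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{faction}: {total:,} mind units")
</code></pre>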
<p>Clearly, this requires a massive integration of Google’s mapping technology with a sophisticated gaming engine. And the result is another intersection between the real and the virtual.</p>
<p>“Ingress is a massively multiplayer online game designed for mobile, with real location-based connections,” Hanke told me.</p>
<p>You play with everyone in your faction, and you might meet up with other players in real life, or you may just know them virtually as team members in another area. Along the way, Google learns an awful lot about how you use your mobile devices, about mapping physical locations, and about overlaying cyberspace on meatspace.</p>
<figure id="attachment_5132" aria-describedby="caption-attachment-5132" style="width: 633px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class=" wp-image-5132 " alt="Ingress’ field of play is the world, layered with virtual data." src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-7-05-27-am.png?resize=633%2C383&#038;ssl=1" width="633" height="383" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-7-05-27-am.png?w=703&amp;ssl=1 703w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-7-05-27-am.png?resize=300%2C181&amp;ssl=1 300w" sizes="auto, (max-width: 633px) 100vw, 633px" /><figcaption id="caption-attachment-5132" class="wp-caption-text"><em>Ingress’ field of play is the world, layered with virtual data.</em></figcaption></figure>
<p>All of that knowledge is going to come in very handy with Google Glass.</p>
<h3>Endgame: Google Glass?</h3>
<p>Hanke is cautious when speaking about Google Glass, as is the PR handler who is copiloting our conversation. Even already public information is a question mark as we chat: Google is definitely being Apple-like in the control and distribution of Glass and its future.</p>
<p>But some tantalizing tidbits do come out.</p>
<p>“We definitely kinda had Google Glass in mind when we started work on apps at Niantic,” Hanke says. “We need mobile devices that are less intrusive than the phone is.”</p>
<figure id="attachment_5133" aria-describedby="caption-attachment-5133" style="width: 300px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5133" alt="A model demonstrates Google’s new Project Glass technology." src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/glass.jpg?resize=300%2C238&#038;ssl=1" width="300" height="238" /><figcaption id="caption-attachment-5133" class="wp-caption-text"><em>A model demonstrates Google’s new Project Glass technology.</em></figcaption></figure>
<p>And we need devices with different input/output modalities, he says. After all, it’s not easy to play Ingress running around holding an expensive and fragile device in front of you like a window ripped from its frame. And yet you need that portal from the physical to the virtual. For instance, while Field Trip is great to open the doors on human context for the world around us, it threatens to detract from our experience of the world by redirecting our eyes from the ultimate big screen of reality to the small screen of our mobile device.</p>
<p>Google Glass, on the other hand, sits unobtrusively on our foreheads, leaving our hands free and providing data as an overlay on top of the physical world rather than an alternative to the physical world. That model of layering, mixing, and intersecting is top-of-mind for Hanke.</p>
<p>“It just can’t be the case that people are walking around heads down tapping on a screen,” he says. “That just can’t be the future of the human race.”</p>
<h3>Cyborg me now</h3>
<p>Which, of course, is exactly what’s at issue: the future of the human race. Or, at least how we ingest, consume, and reconstitute digital data. And analog data. And meld the two into one harmonious whole of knowing.</p>
<p>That’s perhaps a little metaphysical for a small division of Google that focuses on maps and games and apps.</p>
<p>But the web has <a href="http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/">rewired our brains</a> in a decade or so of virtually ubiquitous Internet access, and the smartphone has rewired our behavior in five years, taking us from creatures who look up to see others to beings that look down at any opportunity to see small bits of plastic and glass and metal in our hands.</p>
<p>So is it really too much to expect a transformation that brings us from clear divisions between what is real and what is virtual to an elegant blend of the two?</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-5134" alt="screen-shot-2013-04-29-at-7-08-50-am" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-7-08-50-am.png?resize=558%2C293&#038;ssl=1" width="558" height="293" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-7-08-50-am.png?w=558&amp;ssl=1 558w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-7-08-50-am.png?resize=300%2C157&amp;ssl=1 300w" sizes="auto, (max-width: 558px) 100vw, 558px" /></p>
<p>“This is not psychosis or some cognitive break, but an actual takeover of the mind,” Google’s introductory video for the Ingress game says ominously.</p>
<p>Art imitates life, I suppose, and life, in turn, imitates art.</p>
<p>Written by: <a href="http://venturebeat.com/author/johnkoetsier/">John Koetsier</a>, <a href="http://venturebeat.com/2013/05/01/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/">VentureBeat</a> (via <a href="http://ispr.info/2013/05/14/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/05/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/">How Google is Melding Our Real and Virtual Worlds with Games, Apps … and Glass</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/05/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5126</post-id>	</item>
	</channel>
</rss>
