<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Mobile Devices Archives - Situated Research</title>
	<atom:link href="https://www.situatedresearch.com/tag/mobile-devices/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.situatedresearch.com/tag/mobile-devices/</link>
	<description>Usability Research and User Experience Testing</description>
	<lastBuildDate>Tue, 19 Oct 2021 15:25:06 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2021/03/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>Mobile Devices Archives - Situated Research</title>
	<link>https://www.situatedresearch.com/tag/mobile-devices/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">122538981</site>
	<item>
		<title>Nintendo’s newest Mario Kart is the best video game you never knew you wanted to play</title>
		<link>https://www.situatedresearch.com/2020/09/nintendos-newest-mario-kart-is-the-best-video-game-you-never-knew-you-wanted-to-play/</link>
					<comments>https://www.situatedresearch.com/2020/09/nintendos-newest-mario-kart-is-the-best-video-game-you-never-knew-you-wanted-to-play/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Wed, 09 Sep 2020 14:22:15 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Game Development]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Experience]]></category>
		<guid isPermaLink="false">https://www.situatedresearch.com/?p=10126</guid>

					<description><![CDATA[<p>By now, Nintendo has made exactly 87,493,029 versions of Mario Kart since the game was first introduced in 1992 for the Super Nintendo. (Okay, the company has really made 13—which is still a lot!) But a new sequel coming this fall to the Nintendo Switch changes the formula in an enticing way, thanks to super&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2020/09/nintendos-newest-mario-kart-is-the-best-video-game-you-never-knew-you-wanted-to-play/">Nintendo’s newest Mario Kart is the best video game you never knew you wanted to play</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div style="width: 980px;" class="wp-video"><video class="wp-video-shortcode" id="video-10126-1" width="980" height="550" loop autoplay preload="metadata" controls="controls"><source type="video/mp4" src="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-2-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4?_=1" /><source type="video/webm" src="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-2-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.webm?_=1" /><a href="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-2-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4">https://www.situatedresearch.com/wp-content/uploads/2020/09/p-2-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4</a></video></div>
<p>By now, Nintendo has made exactly 87,493,029 versions of Mario Kart since the game was first introduced in 1992 for the Super Nintendo. (Okay, the company has really made 13—which is still a lot!) But a new sequel coming this fall to the Nintendo Switch changes the formula in an enticing way, thanks to super experimental UX. <span id="more-10126"></span></p>
<p><em>Mario Kart Live: Home Circuit</em> transforms the Nintendo Switch into a controller for an actual toy race kart. The kart is fitted with a camera, giving the player a first-person view as it whizzes around your living room, bedroom, or wherever you have some open floor space to play.</p>
<figure class="video-wrapper"><iframe title="Mario Kart Live: Home Circuit - Announcement Trailer - Nintendo Switch" src="https://www.youtube.com/embed/f2mCqUSDCJE?feature=oembed" width="720" height="480" frameborder="0" allowfullscreen="allowfullscreen"><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start">﻿</span></iframe></figure>
<p>How does the game build your course? You place a few gates that are bundled with the game on the floor. From there, how the exact setup and customization works is unclear (perhaps vision AI is involved?), but Nintendo—alongside its partner developer <a href="https://www.velanstudios.com/" target="_blank" rel="noopener noreferrer">Velan Studios</a>—demonstrates that one of several tracks, from a simple oval to complicated curves, can be set up to avoid existing couches, coffee tables, and perhaps even sleeping cats.</p>
<figure class="wp-caption alignnone image-wrapper" aria-describedby="caption-attachment-90547236"><figcaption id="caption-attachment-90547236" class="wp-caption-text"><div style="width: 596px;" class="wp-video"><video class="wp-video-shortcode" id="video-10126-2" width="596" height="334" loop autoplay preload="metadata" controls="controls"><source type="video/mp4" src="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4?_=2" /><source type="video/webm" src="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.webm?_=2" /><a href="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4">https://www.situatedresearch.com/wp-content/uploads/2020/09/p-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4</a></video></div>
[Image: Nintendo]
</figcaption></figure>
<p>As you race your kart around the course, all sorts of augmented reality (AR) effects, ranging from glowing boundaries to power-ups to your racing competitors, will appear on the screen as if they exist in your actual home. If you run over a virtual item, like a nitro-boosting mushroom, the kart will actually accelerate. If you hit a troublesome banana peel, the car will actually lose some control. Oh, and assuming you have friends with their own games, up to four players can race their karts together in the same space.</p>
<figure class="wp-caption image-wrapper alignnone" aria-describedby="caption-attachment-90547239"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-10130" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2020/09/i-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.jpg?resize=596%2C335&#038;ssl=1" alt="" width="596" height="335" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2020/09/i-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.jpg?w=596&amp;ssl=1 596w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2020/09/i-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.jpg?resize=300%2C169&amp;ssl=1 300w" sizes="auto, (max-width: 596px) 100vw, 596px" /><figcaption id="caption-attachment-90547239" class="wp-caption-text">[Image: Nintendo]</figcaption></figure>
<p>With few exceptions, augmented reality has been little more than a gimmick. Snapchat’s zany face filters are still the most successful commercialization of this technology that, not so long ago, the tech world heralded as the next big thing.</p>
<p>Microsoft&#8217;s Hololens AR headset is technically impressive, but it&#8217;s being marketed as an enterprise tool to businesses (which demonstrates pretty clearly that it&#8217;s not ready for the mainstream just yet). The hyped company Magic Leap, with billions in venture capital from investors like Google, has done little more than release a developer version of its headset to mediocre reviews while it hangs on for life. The hardware is simply too expensive, too bulky, and, most of all, too useless to be worth buying for the vast majority of people. Plus, it&#8217;s antisocial by nature to be experiencing a different version of reality than the people around you.</p>
<figure class="wp-caption alignnone image-wrapper" aria-describedby="caption-attachment-90547241"><figcaption id="caption-attachment-90547241" class="wp-caption-text"><div style="width: 596px;" class="wp-video"><video class="wp-video-shortcode" id="video-10126-3" width="596" height="334" loop autoplay preload="metadata" controls="controls"><source type="video/mp4" src="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/i-4-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4?_=3" /><source type="video/webm" src="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/i-4-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.webm?_=3" /><a href="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/i-4-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4">https://www.situatedresearch.com/wp-content/uploads/2020/09/i-4-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4</a></video></div>
[Image: Nintendo]
</figcaption></figure>
<p>But Nintendo is doing what it does best. It&#8217;s figuring out how to transform a gimmick into shared fun&#8212;and make it halfway affordable, too. A lot of that comes down to Nintendo just understanding the ergonomics around technology and play. For years, AR demos asked you to hold up your phone like a little window to peek through, to do something like transform <a href="https://www.youtube.com/watch?v=r5ziOSjXdo4" target="_blank" rel="noopener noreferrer">a magazine cover into an animation</a>. These novelties wore thin quickly because they&#8217;re more physically awkward than visually amazing.</p>
<p>Nintendo is taking a similar approach here to its predecessors. But instead of relying on the camera in your phone, it has built one into the kart. That allows you to play a game like you always do (sitting on your couch) while still experiencing all of these enticing, additive AR effects. No, Nintendo isn&#8217;t aiming as high as Magic Leap, teasing an entire world of digital objects that you can reach out and touch. But Nintendo is competent enough at game design that it&#8217;s figured out how to work with what it has to create an AR experience that&#8217;s both new and destined to be massively successful.</p>
<p><em>Mario Kart Live: Home Circuit</em> will be out for $100 on October 16. The last version of Mario Kart sold <a href="https://www.gamereactor.eu/25-million-mario-kart-8-deluxe-copies-sold/" target="_blank" rel="noopener noreferrer">more than 25 million copies</a> to date. And if <em>Home Circuit</em> is only a fraction as successful, it will still be one of the most profitable demonstrations of AR ever built.</p>
<p>Written by: <a href="https://www.fastcompany.com/user/mark-wilson" target="_blank" rel="noopener noreferrer">Mark Wilson</a>, <a href="https://www.fastcompany.com/90546982/nintendos-newest-mario-kart-is-the-best-video-game-you-never-knew-you-wanted-to-play" target="_blank" rel="noopener noreferrer">Fast Company</a><br />
Posted by: <a href="https://www.situatedresearch.com/">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2020/09/nintendos-newest-mario-kart-is-the-best-video-game-you-never-knew-you-wanted-to-play/">Nintendo’s newest Mario Kart is the best video game you never knew you wanted to play</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2020/09/nintendos-newest-mario-kart-is-the-best-video-game-you-never-knew-you-wanted-to-play/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-2-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.webm" length="897423" type="video/webm" />
<enclosure url="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.webm" length="884575" type="video/webm" />
<enclosure url="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/i-4-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.webm" length="374989" type="video/webm" />
<enclosure url="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-2-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4" length="1616122" type="video/mp4" />
<enclosure url="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/p-1-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4" length="1116213" type="video/mp4" />
<enclosure url="https://cdn.situatedresearch.com/wp-content/uploads/2020/09/i-4-90546982-the-new-mario-kart-proves-nintendoand8217s-low-key-design-genius.mp4" length="474880" type="video/mp4" />

		<post-id xmlns="com-wordpress:feed-additions:1">10126</post-id>	</item>
		<item>
		<title>6 Things That Take Your UX from Above Average to World Class</title>
		<link>https://www.situatedresearch.com/2016/10/6-things-take-ux-average-world-class/</link>
					<comments>https://www.situatedresearch.com/2016/10/6-things-take-ux-average-world-class/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Fri, 28 Oct 2016 13:56:07 +0000</pubDate>
				<category><![CDATA[HCI]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Web Design]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[Personalization]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=9505</guid>

					<description><![CDATA[<p>Many parts of applications are rarely experienced, yet we have to consider how the presence or absence of these states affects a user&#8217;s experience. It&#8217;s the UX designer&#8217;s job to go beyond visual design and make the best experience possible&#8212;including all the parts of the experience that nobody thinks to design.  Onboarding The first thing&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2016/10/6-things-take-ux-average-world-class/">6 Things That Take Your UX from Above Average to World Class</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Many parts of applications are rarely experienced, yet we have to consider how the presence or absence of these states affects a user&#8217;s experience. It&#8217;s the UX designer&#8217;s job to go beyond visual design and make the best experience possible&#8212;including all the parts of the experience that nobody thinks to design. <span id="more-9505"></span></p>
<h2>Onboarding</h2>
<p>The first thing your user experiences is onboarding. The knee-jerk response to onboarding is to darken everything besides the button the user should click on, write detailed instructions explaining what that button does, and repeat for each button.</p>
<p>Standard. Easy. Done. Right? Wrong.</p>
<blockquote><p><a class="inv-tweet-sa no-redirect" href="https://twitter.com/intent/tweet?text=%22%27I+wish+this+app+had+a+splash+page%2C%27+said+no+one+ever.%22+http%3A%2F%2Fblog.invisionapp.com%2Fworld-class-ux%2F+via+%40InVisionApp" target="_blank">&#8220;&#8216;I wish this app had a splash page,&#8217; said no one ever.&#8221;</a></p></blockquote>
<p>First, nobody remembers all those tips along the way&#8212;they just click right through because they want to get to the experience. Second, you should have created an experience that&#8217;s so easy, the user&#8217;s only choice is to be right. If your design is so complex that users need a step-by-step walkthrough, you need to reconsider what you&#8217;ve created.</p>
<p>The best way to onboard a user is to <a href="https://uxplanet.org/mobile-onboarding-interact-don-t-tell-f0c35da2b2b4#.9kh288dut" rel="nofollow" data-href="https://uxplanet.org/mobile-onboarding-interact-don-t-tell-f0c35da2b2b4#.9kh288dut"><strong>not onboard them at all</strong></a>. Let the user dive right into the experience and play around, free of anxiety or fear that they might get lost and not know what to do.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-9508" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/thenticate_n5.jpg?resize=800%2C531&#038;ssl=1" alt="thenticate_n5" width="800" height="531" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/thenticate_n5.jpg?w=800&amp;ssl=1 800w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/thenticate_n5.jpg?resize=300%2C199&amp;ssl=1 300w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/thenticate_n5.jpg?resize=768%2C510&amp;ssl=1 768w" sizes="auto, (max-width: 800px) 100vw, 800px" /></p>
<p class="wp-caption-text" style="text-align: center;">Design by <a href="http://www.dribbble.com/geesignz" rel="nofollow" data-href="http://www.dribbble.com/geesignz">Georg Bednorz</a>.</p>
<h2>Tips for a better onboarding experience</h2>
<ol>
<li>Let users interact with your experience—don’t hide it behind some pay wall, info grab, or layer of instructions.</li>
<li><span class="copy">Don’t interrupt the user’s experience just because you want their data.</span> Users know you make your money off their data, and they’re not as excited to give it out as you are to take it. If you’re going to ask for info, make sure the user feels like they’ve gotten something in return.</li>
</ol>
<blockquote><p><a class="inv-tweet-sa no-redirect" href="https://twitter.com/intent/tweet?text=%22The+best+way+to+onboard+a+user%3A+don%27t+onboard+them+at+all.%22+http%3A%2F%2Fblog.invisionapp.com%2Fworld-class-ux%2F+via+%40InVisionApp" target="_blank">&#8220;The best way to onboard a user: don&#8217;t onboard them at all.&#8221;</a></p></blockquote>
<h2>Form fields</h2>
<p>Form fields are often the hardest part to get right, from both a design and a code perspective. From a design perspective, they have to be straightforward enough to be understandable; from a code perspective, they remain among the least customizable of all HTML elements.</p>
<p>However, form fields are often one of the most crucial pieces of your site or application&#8212;they&#8217;re where you get info from your users. If your form fields create a poor experience for your user, you need to reconsider what you&#8217;re doing.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-9509" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/RegisterFlow_V1R1.gif?resize=800%2C600&#038;ssl=1" alt="registerflow_v1r1" width="800" height="600" /></p>
<p style="text-align: center;">Animation by <a href="https://dribbble.com/XavierCoulombeM" rel="nofollow" data-href="https://dribbble.com/XavierCoulombeM">Xavier Coulombe-Murray</a>.</p>
<h2>Tips for better form fields</h2>
<ol>
<li><span class="copy">Make sure your user always knows what form they’re interacting with.</span> Getting up for something then coming back and forgetting what you were doing is totally a thing.</li>
<li><span class="copy">Use autofill whenever possible—especially on mobile.</span> Not only does it speed up the process, but on mobile it also means less typing on that tiny mobile keyboard!</li>
<li>Give users real-time feedback about whether the info they gave you is valid, and what to do if they messed up.</li>
<li>Let the user know how far along they are in the process. Sometimes just knowing <em>there is</em> an end is comforting.</li>
<li>Design the experience to feel secure, especially if you’re asking for sensitive or private information.</li>
</ol>
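<p>The real-time feedback in tip 3 can be as simple as running a pure validation function on every keystroke, so the UI can show a hint the moment input becomes invalid. A minimal sketch, assuming a hypothetical <code>showHint</code> helper in your UI layer&#8212;the function and message names here are illustrative, not from any particular library:</p>

```javascript
// Pure validators: return null when valid, or a short, actionable message.
// Keeping them free of DOM code makes them easy to unit test.
function validateEmail(value) {
  if (!value.trim()) return "Please enter your email.";
  // Deliberately loose pattern: real-time checks should guide, not block.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value)) {
    return "That doesn't look like an email address yet.";
  }
  return null;
}

function validatePassword(value) {
  if (value.length < 8) return "Use at least 8 characters.";
  return null;
}

// Wire a validator to an input, e.g. on the "input" event:
//   field.addEventListener("input", () =>
//     showHint(field, validateEmail(field.value)));
```

<p>Because the validators are pure functions, the same logic can run on the server to re-check submissions&#8212;the client-side version exists only for the user&#8217;s benefit.</p>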
<blockquote><p><a class="inv-tweet-sa no-redirect" href="https://twitter.com/intent/tweet?text=%22Design+the+experience+to+feel+secure.%22+http%3A%2F%2Fblog.invisionapp.com%2Fworld-class-ux%2F+via+%40InVisionApp" target="_blank">&#8220;Design the experience to feel secure.&#8221;</a></p></blockquote>
<h2>Copy</h2>
<p>Right off the bat, I already know what you’re thinking—“I’m a UX designer. I wasn’t hired to be a copywriter!” And you’re absolutely right—you are a UX designer. But here’s the deal: <a href="https://uxplanet.org/microcopy-tiny-words-with-a-huge-ux-impact-90140acc6e42" rel="nofollow" data-href="https://uxplanet.org/microcopy-tiny-words-with-a-huge-ux-impact-90140acc6e42">The copy users read and interact with is part of the experience you’re designing</a>.</p>
<p class="wp-caption-text"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-9510" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/loliful_landing_800.png?resize=800%2C600&#038;ssl=1" alt="loliful_landing_800" width="800" height="600" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/loliful_landing_800.png?w=800&amp;ssl=1 800w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/loliful_landing_800.png?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/loliful_landing_800.png?resize=768%2C576&amp;ssl=1 768w" sizes="auto, (max-width: 800px) 100vw, 800px" /></p>
<p class="wp-caption-text" style="text-align: center;">Design by <a href="https://dribbble.com/izhik" rel="nofollow" data-href="https://dribbble.com/izhik">Igor Izhik</a>.</p>
<h2>Tips for better copy</h2>
<ol>
<li>Be creative, but make sure it also makes sense</li>
<li>Know your audience. A younger audience might be fine with your fun button copy, but a less tech-savvy audience probably just wants to know what happens when they click.</li>
<li>A/B test your copy—either online or with users in real life</li>
</ol>
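<p>The online A/B test in tip 3 needs nothing fancier than a stable bucketing function: hash each user&#8217;s ID so the same person always sees the same copy variant. A hypothetical sketch&#8212;the variant strings and the <code>userId</code> format are placeholders:</p>

```javascript
// Deterministic bucketing: the same userId always maps to the same variant,
// so a returning user never sees the button copy flip between visits.
function bucket(userId, variants) {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return variants[hash % variants.length];
}

const copyVariants = ["Get started", "Try it free"]; // hypothetical button copy
// bucket("user-42", copyVariants) returns the same string on every call.
```

<p>Log the variant alongside the click event, and the comparison is just click-through rate per variant.</p>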
<h2>404 errors</h2>
<p>Outside of empty states, the 404 page is one of the most forgotten experiences. We hope users will never have to see a 404 error if the experience is well built, but we can&#8217;t just forget about it. The 404 page is one of the best places to turn a frustrated user into a delighted, returning user.</p>
<p>One of my favorite examples is in the Chrome browser: the offline dinosaur page (technically a connection error rather than a true 404). Not because it&#8217;s the most incredible visual design, but because it&#8217;s something you wouldn&#8217;t expect, it&#8217;s entertaining, and it distracts you from the fact that you don&#8217;t have internet&#8212;and, if anything, it makes you feel like you do.</p>
<p>However, I realize it isn’t the sexiest 404 error out there. For 404 error inspiration, check out this <a href="https://medium.com/muzli-design-inspiration/404-page-inspiration-de4ec8618693" data-href="https://medium.com/muzli-design-inspiration/404-page-inspiration-de4ec8618693"><strong>404 inspiration post</strong></a> by <a href="https://medium.com/u/c6fbb86f1069" data-href="https://medium.com/u/c6fbb86f1069" data-anchor-type="2" data-user-id="c6fbb86f1069" data-action="show-user-card" data-action-type="hover">Muzli</a>.</p>
<p class="wp-caption-text"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-9511" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/viktor_kern_404_error_msg.png?resize=800%2C500&#038;ssl=1" alt="viktor_kern_404_error_msg" width="800" height="500" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/viktor_kern_404_error_msg.png?w=800&amp;ssl=1 800w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/viktor_kern_404_error_msg.png?resize=300%2C188&amp;ssl=1 300w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/viktor_kern_404_error_msg.png?resize=768%2C480&amp;ssl=1 768w" sizes="auto, (max-width: 800px) 100vw, 800px" /></p>
<p class="wp-caption-text" style="text-align: center;">Design by <a href="https://dribbble.com/viktorkern" rel="nofollow" data-href="https://dribbble.com/viktorkern">Viktor Kern</a>.</p>
<h2>Tips for better 404 errors</h2>
<ol>
<li>Make the user laugh or distract them otherwise.</li>
<li>Admit your failure.</li>
<li>Give your users directions on how to fix the situation or when you will have it fixed for them.</li>
</ol>
<h2>Empty states</h2>
<p>Empty states are places users don&#8217;t expect to be engaged, but they are delighted when they are. Only true <a href="http://scotthurff.com/posts/why-your-user-interface-is-awkward-youre-ignoring-the-ui-stack?mc_cid=41d8a26eaf&amp;mc_eid=fa2114c999" rel="nofollow" data-href="http://scotthurff.com/posts/why-your-user-interface-is-awkward-youre-ignoring-the-ui-stack?mc_cid=41d8a26eaf&amp;mc_eid=fa2114c999">UX pros take advantage of empty states</a>.</p>
<p>An empty state can include the view a user experiences when they first arrive on a screen, the view they see when they zero out their inbox, the loading screen, and many more. You can consider an empty state pretty much any state that has no data to interact with.</p>
<p class="wp-caption-text"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-9512" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/trip-emptystate.png?resize=800%2C600&#038;ssl=1" alt="trip-emptystate" width="800" height="600" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/trip-emptystate.png?w=800&amp;ssl=1 800w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/trip-emptystate.png?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/trip-emptystate.png?resize=768%2C576&amp;ssl=1 768w" sizes="auto, (max-width: 800px) 100vw, 800px" /></p>
<p class="wp-caption-text" style="text-align: center;">Design by <a href="https://dribbble.com/kimberlyksull" rel="nofollow" data-href="https://dribbble.com/kimberlyksull">Kim Sullivan</a>.</p>
<p>For empty state inspiration, check out a site dedicated solely to the best of them: <a href="http://emptystat.es/" rel="nofollow" data-href="http://emptystat.es/">emptystat.es</a>.</p>
<h2>Tips for better empty states</h2>
<ol>
<li>Give directions on how to change the empty state to an active one</li>
<li>Make calls to action very clear, with well-written copy</li>
<li>Distract the user from excessive load times with dummy content or entertaining animations</li>
</ol>
<h2>Notifications</h2>
<p>Lastly, when creating the best experience possible, we can&#8217;t forget about notifications. Why does our user need to be notified, when do they need to be notified, and how?</p>
<p>So much data is involved in notifications, yet they’re often the most overlooked part of our application because they aren’t states our users experience within the app. Notifications represent the experience beyond the interface, and only the best UX designers consider how they fit in the experience they’re designing.</p>
<blockquote><p><a class="inv-tweet-sa no-redirect" href="https://twitter.com/intent/tweet?text=%22We+need+to+consider+how+notifications+fit+into+the+experience+we%27re+designing.%22+http%3A%2F%2Fblog.invisionapp.com%2Fworld-class-ux%2F+via+%40InVisionApp" target="_blank">&#8220;We need to consider how notifications fit into the experience we&#8217;re designing.&#8221;</a></p></blockquote>
<p>Being <a href="https://medium.com/@intercom/designing-smart-notifications-36336b9c58fb#.d9vkztg9d" data-href="https://medium.com/@intercom/designing-smart-notifications-36336b9c58fb#.d9vkztg9d">intelligent about how we design our notification experience</a> can be a make-or-break feature of our application. One of my favorites is the ESPN app.</p>
<p>My favorite basketball team is the Golden State Warriors, but I&#8217;m a busy man and I don&#8217;t always notice when things are happening outside my immediate attention. ESPN allows me to customize my notifications so that I receive alerts about what is happening with my favorite team(s). But they don&#8217;t just send me every notification in their system; they allow me to choose which ones I want.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter wp-image-9513 size-large" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/espn-notification.png?resize=576%2C1024&#038;ssl=1" alt="espn-notification" width="576" height="1024" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/espn-notification.png?resize=576%2C1024&amp;ssl=1 576w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/espn-notification.png?resize=169%2C300&amp;ssl=1 169w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/espn-notification.png?w=750&amp;ssl=1 750w" sizes="auto, (max-width: 576px) 100vw, 576px" /></p>
<p>Beyond team notifications, I can set notifications per game as well.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-large wp-image-9514" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/espn-notification-2.png?resize=576%2C1024&#038;ssl=1" alt="espn-notification-2" width="576" height="1024" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/espn-notification-2.png?resize=576%2C1024&amp;ssl=1 576w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/espn-notification-2.png?resize=169%2C300&amp;ssl=1 169w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2016/10/espn-notification-2.png?w=750&amp;ssl=1 750w" sizes="auto, (max-width: 576px) 100vw, 576px" /></p>
<p>The control ESPN gives me when it comes to managing my sports news and information is incredibly well thought through. It makes me feel important and in control of my experience, rather than like the people who created the app are in control.</p>
<h2>Tips for better notifications</h2>
<ol>
<li>Ask users what kind of content they want to be notified about</li>
<li>Ask your users whether there&#8217;s a specific time of day they want to be notified or if they want notifications in real time</li>
<li>Monitor your analytics to find out if your notifications are increasing the KPIs you hoped they would. If they’re not, figure out how they should change based on that data.</li>
</ol>
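<p>Tip 3 mostly amounts to comparing open rates per notification category. A sketch of that arithmetic&#8212;the field names and sample numbers are made up for illustration:</p>

```javascript
// Open rate per category: opened / sent, so you can see which
// notification types actually earn their interruption.
function openRates(events) {
  const rates = {};
  for (const { category, sent, opened } of events) {
    rates[category] = sent > 0 ? opened / sent : 0;
  }
  return rates;
}

const sample = [
  { category: "game-start", sent: 200, opened: 90 },
  { category: "final-score", sent: 200, opened: 30 },
];
// openRates(sample)["game-start"] → 0.45
```

<p>A category whose open rate keeps falling is a signal to let users mute it, or to stop sending it entirely.</p>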
<h2>Design is in the details</h2>
<p>Steve Jobs’ dad used to make him <a href="https://www.ted.com/talks/paul_bennett_finds_design_in_the_details?language=en" rel="nofollow" data-href="https://www.ted.com/talks/paul_bennett_finds_design_in_the_details?language=en">build the inside of their fence just as nice as the outside</a>—not because the inside mattered as much, but because “For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.”</p>
<p>With the amount of data we&#8217;re sending, receiving, and tracking daily, with the availability of users and test users, and with all the inspiration out there on the web, there&#8217;s really no reason we shouldn&#8217;t be doing these 6 things well. The details are what separate the good UX designers from the great.</p>
<p>It’s easy to forget about the pieces that we feel no one is ever going to see. Remember, though, <a href="https://medium.com/@cwodtke/the-myths-of-ux-design-product-design-whatever-they-call-it-this-week-ef37a39cac6b#.tol9pzbpt" data-href="https://medium.com/@cwodtke/the-myths-of-ux-design-product-design-whatever-they-call-it-this-week-ef37a39cac6b#.tol9pzbpt">every piece of the experience is an opportunity</a> for engaging design.</p>
<p>To be a <a href="http://blog.invisionapp.com/how-to-become-a-great-ux-designer-without-a-degree/">great UX designer</a>, we have to push ourselves to create the best experience possible!</p>
<p>Written by: <a href="https://twitter.com/realjoet">Joe Toscano</a>, via <a href="http://blog.invisionapp.com/world-class-ux/">InVision Blog</a><br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2016/10/6-things-take-ux-average-world-class/">6 Things That Take Your UX from Above Average to World Class</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2016/10/6-things-take-ux-average-world-class/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">9505</post-id>	</item>
		<item>
		<title>Next Big Thing for Virtual Reality: Lasers in Your Eyes</title>
		<link>https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/</link>
					<comments>https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Tue, 03 May 2016 21:29:30 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=9341</guid>

					<description><![CDATA[<p>San Francisco – The next big leap for virtual and augmented reality headsets is likely to be eye-tracking, where headset-mounted laser beams aimed at eyeballs turn your peepers into a mouse.  A number of startups are working on this tech, with an aim to convince VR gear manufacturers such as Oculus Rift and HTC Vive&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/">Next Big Thing for Virtual Reality: Lasers in Your Eyes</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>San Francisco – The next big leap for virtual and augmented reality headsets is likely to be eye-tracking, where headset-mounted laser beams aimed at eyeballs turn your peepers into a mouse. <span id="more-9341"></span></p>
<p>A number of startups are working on this tech, with an aim to convince VR gear manufacturers such as Oculus Rift and HTC Vive to incorporate the feature in a next generation device. They include SMI, Percept, Eyematic, Fove and Eyefluence, which recently allowed USA Today to demo its eye-tracking tech.</p>
<p>“Eye-tracking is almost guaranteed to be in second-generation VR headsets,” says Will Mason, cofounder of virtual reality media company UploadVR. “It’s an incredibly important piece of the VR puzzle.”</p>
<p><iframe loading="lazy" title="USATODAY-Embed Player" width="850" height="480" frameborder="0" scrolling="no" allowfullscreen="true" marginheight="0" marginwidth="0" src="https://uw-media.usatoday.com/embed/video/82420346?placement=snow-embed"></iframe></p>
<p>At present, making selections in VR or AR environments typically involves moving the head so that your gaze lands on a clickable icon, and then either pressing a handheld remote or, in the case of Microsoft&#8217;s HoloLens or Meta 2, reaching out with your hand to make a selection by interacting with a hologram.</p>
<p>As shown in Eyefluence’s demonstration, all of that is accomplished by simply casting your eyes on a given icon and then activating it with another glance.</p>
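<p>For comparison, the most common remote-free gaze interaction is dwell-based selection, where holding your gaze on an icon long enough &#8220;clicks&#8221; it. (Eyefluence&#8217;s scheme, as described here, uses a second glance instead.) A hypothetical dwell selector might look like this; the class and thresholds below are invented for illustration:</p>

```python
class DwellSelector:
    """Toy gaze-dwell selector: an icon fires once the gaze has rested
    on it for dwell_ms. Not Eyefluence's actual activation scheme."""
    def __init__(self, dwell_ms=450):
        self.dwell_ms = dwell_ms
        self.target = None
        self.elapsed = 0

    def update(self, gazed_icon, dt_ms):
        """Feed one gaze sample; return the icon id on activation, else None."""
        if gazed_icon != self.target:        # gaze moved: restart the timer
            self.target, self.elapsed = gazed_icon, 0
            return None
        self.elapsed += dt_ms
        if self.target is not None and self.elapsed >= self.dwell_ms:
            self.elapsed = 0                 # fire once, then re-arm
            return self.target
        return None
```

<p>Fed with gaze samples every frame (e.g. every 16&#160;ms), the selector fires only after sustained attention, which is what keeps ordinary looking-around from triggering accidental clicks.</p>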
<p>&#8220;The idea here is that anything you do with your finger on a smartphone you can do with your eyes in VR or AR,&#8221; says Eyefluence CEO Jim Marggraff, who cofounded the Milpitas, Calif.-based company in 2013 with another entrepreneur, David Stiehr.</p>
<p>“Computers made a big leap when they went from punchcards to a keyboard, and then another from a keyboard to a mouse,” says Marggraff, who invented the kid-focused LeapFrog LeapPad device. “We want to again change the way we interface with data.”</p>
<h2>Eye Tech Not Due for Years</h2>
<p>As exciting as this may sound, the mainstreaming of eye-tracking technology is still a ways off. Eyefluence execs say that although they are in discussions with a variety of headset makers, their tech isn’t likely to debut until 2017. Other companies remain largely in R&amp;D mode, and Fove has a waitlist for its headset’s Kickstarter campaign.</p>
<p>The challenges for eye-tracking are both technological and financial. Creating hardware that consistently locks onto an infinite variety of eyeballs presents one hurdle, while doing so with gear that is light and consumes little power is another.</p>
<p>And while a number of companies in the space have managed to land funding – Eyefluence has raised $21.6 million in two rounds led by Intel Capital and Motorola Solutions – some tech-centric VCs are sitting on the sidelines while they wait for the technology to mature and for headset makers to make their moves.</p>
<p>“What eye-tracking will do will be powerful, but I’m not sure how valuable it will be from an investment standpoint,” says Kobie Fuller of Accel Partners. “Is there a multi-billion-dollar eye-tracking company out there? I don’t know.”</p>
<p>Among the unknowns: whether the tech will be disseminated through a licensed model or if existing headset companies will develop it on their own.</p>
<p>Still, once deployed, eye-tracking has the potential to revolutionize the VR and AR experience, Fuller expects.</p>
<p>Specifically, eye-tracking will “greatly enhance interpersonal connections” in VR, he says, by applying realistic eye movements to avatars.</p>
<p>Facebook founder Mark Zuckerberg, who presciently bought Oculus for $2 billion, is banking on VR taking social interactions to a new level.</p>
<p>“The most exciting thing about eye-tracking is getting rid of that ‘uncanny valley’ (where disbelief sets in) when it comes to interacting through avatars,” says Fuller.</p>
<h2>Less Computing Power</h2>
<p>There are a few other ways in which successful eye-tracking tech could revolutionize AR and VR beyond just making such worlds easy to navigate without joysticks, remotes or hand gestures.</p>
<p>First, by tracking the eyes, such tech can telegraph to the VR device’s graphics processing unit, or GPU, that it needs to render only the images where the eyes are looking at that moment.</p>
<p>That means less computing power would be needed. Currently, a $700 Oculus headset requires a powerful computer to render its images. Oculus’s developer kit with a suitable computer costs $2,000. “If you can save on rendering power, that could significantly lower the barrier to entry into this market for consumers,” says UploadVR’s Mason.</p>
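<p>The rendering saving Mason describes is the idea behind what is usually called foveated rendering: spend full quality only near the gaze point and degrade it toward the periphery. A toy sketch of such a quality schedule, using made-up pixel-space radii rather than any headset&#8217;s actual parameters:</p>

```python
import math

def render_scale(pixel, gaze, full_res_radius=200.0, falloff=600.0):
    """Toy foveated-rendering schedule: full resolution near the gaze
    point, falling off linearly with distance to a peripheral floor."""
    dist = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    if dist <= full_res_radius:
        return 1.0                      # fovea: render at full quality
    scale = 1.0 - (dist - full_res_radius) / falloff
    return max(scale, 0.25)             # never drop below quarter quality

print(render_scale((960, 540), (960, 540)))  # looking right at it -> 1.0
print(render_scale((0, 0), (960, 540)))      # far periphery -> 0.25
```

<p>Real systems work in visual angle rather than pixels and tie the falloff to the eye&#8217;s acuity curve, but the shape of the saving is the same: most of the frame can be rendered cheaply.</p>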
<p>And second, by not just tracking the eyeball but also potentially analyzing a person&#8217;s mood and logging details about their gaze, AR/VR headsets are in a position to deliver targeted content as well as give third-party observers insights into the wearer&#8217;s state of mind and situational awareness.</p>
<h2>Police Use</h2>
<p>The former use case would appeal to in-VR advertisers, while the latter would come in handy for first responders.</p>
<p>“Police and paramedics are looking for an eyes-up, hands-free paradigm, and eye-tracking can bring that,” says Paul Steinberg, chief technology officer at Motorola Solutions, an investor in Eyefluence.</p>
<p>Steinberg sketches out a scene from what could be the near future.</p>
<p>A police officer on patrol has suddenly unholstered his gun. Via his augmented reality glasses with eye-tracking, colleagues at headquarters are instantly fed information about his stress level through pupil dilation information.</p>
<p>They can then both advise the officer through a radio as well as activate body cameras and other tech that he might have neglected to turn on in his stressed state. What’s more, another officer on the scene can instantly scan through a variety of command center video and data feeds through an AR headset, flipping through the options by simply looking at each one.</p>
<p>“We would have to work with our (first responder) customers to train them how to use this sort of tech of course, but the potential is there,” says Steinberg. “But we’re not months away, we’re more than that.”</p>
<h2>Demo Shows Off Ease of Use</h2>
<p>An Eyefluence demo indicates that eye-tracking technology isn&#8217;t a half-baked dream.</p>
<p>Navigating between a dozen tiles inside a first-generation Oculus headset proves as easy as shifting your gaze between them. Making selections &#8211; the equivalent of clicking a mouse &#8211; is equally intuitive. At no time does the head need to move, and your hands remain at your sides.</p>
<p><iframe loading="lazy" src="https://www.youtube.com/embed/iQsY3uLvYQ4" width="720" height="384" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>After about 10 minutes in the demo, it feels antiquated to pop on a VR headset and grab a remote to click through choices selected with head movements.</p>
<p>Marggraff says Eyefluence’s technical challenges included making technology that could respond in low and bright light, accounting for different size pupils and ensuring that power consumption is minimal.</p>
<p>But, he adds, his team remains convinced of the inevitability of its product: “Just like when we started tapping and swiping on our phones, we’re going to eventually need a better interface for AR and VR.”</p>
<p>Written by: <a href="http://www.usatoday.com/staff/1005/marco-della-cava/" target="_blank" rel="noopener">Marco della Cava</a>, <a href="http://www.usatoday.com/story/tech/news/2016/05/02/new-mouse-vr-could-your-eyes/83716986/" target="_blank" rel="noopener">USA Today</a> (via <a href="http://ispr.info/2016/05/03/next-big-thing-for-virtual-reality-eye-tracking-lasers-in-your-eyes/" target="_blank" rel="noopener">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/">Next Big Thing for Virtual Reality: Lasers in Your Eyes</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2016/05/next-big-thing-virtual-reality-lasers-eyes/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">9341</post-id>	</item>
		<item>
		<title>The Future of Consumer Tech Is About Making You Forget It’s There</title>
		<link>https://www.situatedresearch.com/2015/03/the-future-of-consumer-tech-is-about-making-you-forget-its-there/</link>
					<comments>https://www.situatedresearch.com/2015/03/the-future-of-consumer-tech-is-about-making-you-forget-its-there/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Sat, 07 Mar 2015 20:46:04 +0000</pubDate>
				<category><![CDATA[HCI]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[Personalization]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=8836</guid>

					<description><![CDATA[<p>Microsoft, Samsung, GoPro, and others take their best guesses at the next five years of consumer electronics. When Apple introduced the iPad 2 in 2011, it laid out a noble goal for the future of technology. “Technology alone is not enough,” an Apple ad proclaimed. “Faster, thinner, lighter, those are all good things, but when&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2015/03/the-future-of-consumer-tech-is-about-making-you-forget-its-there/">The Future of Consumer Tech Is About Making You Forget It’s There</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><strong>Microsoft, Samsung, GoPro, and others take their best guesses at the next five years of consumer electronics.</strong></p>
<p>When Apple introduced the iPad 2 in 2011, it laid out a noble goal for the future of technology.</p>
<p>“Technology alone is not enough,” an <a href="https://www.youtube.com/watch?v=b2LLSrlKr3c" target="_blank">Apple ad proclaimed</a>. “Faster, thinner, lighter, those are all good things, but when technology gets out of the way, everything becomes more delightful, even magical. That’s when you leap forward.” <span id="more-8836"></span></p>
<p>With the iPad, the notion of technology getting out of the way meant designing a computer so easy to use that the apps took center stage. But the result was in some sense counterproductive; we’ve become so sucked into our phones and tablets that technology is actually getting in the way of the real world.</p>
<p>It&#8217;s not going to be like that forever. In talking to leaders from some of the most innovative companies in consumer electronics, it&#8217;s clear that the next five years will represent an attempt to bring us back to reality. This may seem paradoxical, but a proliferation of wearable devices, smart-home gizmos, smart cameras, and augmented-reality systems will exist largely to save us from our screens.</p>
<p><strong>Wearables return to the real world</strong></p>
<p>The cynical way to view wearable technology is as yet another intrusion—another set of screens to keep us separated from the physical world. But Yusuf Mehdi, Microsoft’s corporate vice president of devices and studios, doesn’t see it that way. He believes these devices, more than ever, will help technology fade into the background.</p>
<p>Mehdi gives a basic example that applies today: Instead of sounding an alarm, many fitness trackers can wake you with a gentle vibration to avoid disturbing your spouse. It’s a seemingly minor feature, but one that takes the focus off the device itself and onto the people around you. “That’s an interesting thing where people are taking a personal device and saying, ‘Well, the win is for my spouse, not for me,&#8217;” Mehdi says.</p>
<p>Moving forward, Mehdi sees devices like Microsoft’s recently announced HoloLens as a way to stay present in the physical world without completely shutting out technology. The still-experimental headset works by projecting 3-D images into a head-mounted visor so they appear to be part of your natural surroundings.</p>
<p>Imagine a scenario in which two coworkers collaborate on a 3-D model projected into their headsets, or someone walking down the street who can see information about surrounding shops and restaurants. Mehdi points out that the original codename for HoloLens was “analog,” for the way it blends with the physical world.</p>
<figure id="attachment_8838" aria-describedby="caption-attachment-8838" style="width: 640px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="wp-image-8838 size-full" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-hololens.png?resize=640%2C360&#038;ssl=1" alt="Microsoft's HoloLens augmented-reality visor" width="640" height="360" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-hololens.png?w=640&amp;ssl=1 640w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-hololens.png?resize=300%2C169&amp;ssl=1 300w" sizes="auto, (max-width: 640px) 100vw, 640px" /><figcaption id="caption-attachment-8838" class="wp-caption-text">Microsoft&#8217;s HoloLens augmented-reality visor</figcaption></figure>
<p>“If you have a device that you’re wearing, and information is being overlaid on top of that, now you’re back in the real world, and you’re interacting, and you’re not missing what’s going on around you, because your head’s there,” Mehdi says.</p>
<p>On a more practical level, these “mixed reality” devices—as Mehdi calls them—will pave the way for more natural input methods like gesture control and eye tracking, which never quite made sense on tablets and laptops. “A lot of things become more human, and the technology kind of goes back out of the way, and we think that’s a big opportunity,” Mehdi says.</p>
<p><strong>The Disappearing Smart Home</strong></p>
<p>Wearable tech will also play a starring role in smart homes—at least if we expect them to offer the kind of breezy convenience that tech companies have been promising.</p>
<figure id="attachment_8839" aria-describedby="caption-attachment-8839" style="width: 260px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="wp-image-8839 size-full" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-dennismiloseski.jpg?resize=260%2C260&#038;ssl=1" alt="3042948-inline-dennismiloseski" width="260" height="260" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-dennismiloseski.jpg?w=260&amp;ssl=1 260w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-dennismiloseski.jpg?resize=150%2C150&amp;ssl=1 150w" sizes="auto, (max-width: 260px) 100vw, 260px" /><figcaption id="caption-attachment-8839" class="wp-caption-text">Samsung&#8217;s Dennis Miloseski</figcaption></figure>
<p>Dennis Miloseski, Samsung’s U.S. head of design, describes the dream scenario: You pull into your garage and your wearable connects to your Wi-Fi network, which in turn triggers your hallway lights and queues up some music on the living room stereo. “I like to call it the automatic future,” he says.</p>
<p>But he also notes how easily things can go wrong. Maybe your spouse is sleeping on the couch and doesn’t want the lights to come on. That’s why it’ll be so important to have intelligence that figures out what you want, along with some sort of way to confirm your intentions on a wearable device.</p>
<p>&#8220;We&#8217;re sort of in this archaic age right now, where we&#8217;re in this raw form of data readout, meaning, &#8216;this is how many steps you&#8217;ve taken,&#8217; or &#8216;this is your heart rate,&#8217; or &#8216;the light is on or off,&#8217;&#8221; Miloseski says. &#8220;I think the next magical innovation is how do we take that data and actually create a form of valuable experience of that data.&#8221;</p>
<figure id="attachment_8840" aria-describedby="caption-attachment-8840" style="width: 640px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8840" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-smartthings.jpg?resize=640%2C360&#038;ssl=1" alt="Gadgets from SmartThings, now part of Samsung" width="640" height="360" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-smartthings.jpg?w=640&amp;ssl=1 640w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-smartthings.jpg?resize=300%2C169&amp;ssl=1 300w" sizes="auto, (max-width: 640px) 100vw, 640px" /><figcaption id="caption-attachment-8840" class="wp-caption-text">Gadgets from SmartThings, now part of Samsung</figcaption></figure>
<p>Again, it all comes back to getting the technology out of the way so the user doesn’t have to think about logistics or shuffle through a bunch of apps just to have a fully functional smart home. Miloseski likens it to starting a car or turning on a light switch, in that the complexity is completely hidden from the user.</p>
<p>“I think that we will hit a point in time where, when we think of technology and devices and gadgets and all these things, when they actually impact the social fabric and they become an essential part of how we live our lives, they will become invisible,” Miloseski says.</p>
<p><strong>New Smarts for Dumb Cameras</strong></p>
<p>Photography might be the one area where Apple’s vision of getting technology out of the way seems fully realized. Smartphone cameras are no longer just a quick and dirty image capture tool; they’re the best way to take photos that you can immediately touch up and share with the world.</p>
<p>But as wearable cameras like the GoPro and drone-based ones like DJI’s Phantom enable new kinds of photography, they’ve yet to receive phone-like smarts. Expect that to change in the coming years as capturing and sharing footage from these devices starts to feel as effortless as using the camera in your pocket.</p>
<figure id="attachment_8841" aria-describedby="caption-attachment-8841" style="width: 260px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8841" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-cjprobergopro.jpg?resize=260%2C260&#038;ssl=1" alt="GoPro's CJ Prober" width="260" height="260" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-cjprobergopro.jpg?w=260&amp;ssl=1 260w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-cjprobergopro.jpg?resize=150%2C150&amp;ssl=1 150w" sizes="auto, (max-width: 260px) 100vw, 260px" /><figcaption id="caption-attachment-8841" class="wp-caption-text">GoPro&#8217;s CJ Prober</figcaption></figure>
<p>“If I think specifically about us, and the things that we get super-jazzed about, that’s a big piece of it, it’s the whole solving of pain points from when you first capture content to seamlessly sharing it,” says CJ Prober, GoPro’s senior vice president of software and services.</p>
<p>Today, when you capture footage on a GoPro, you’ve got to load it into your computer—itself a time-consuming process—and spend hours looking for highlights and turning them into a YouTube-worthy video. But in the future, a wearable camera might tap into gyroscopes and accelerometers to flag exciting moments, or use machine learning algorithms to sniff out quality footage. It could even tie into other wearable sensors to measure things like jump height or speed, and bring those details straight into the video.</p>
<p>“It’s really important to not think of video and photo capture as an independent thing to do on the device,” Prober says. “It’s really, ‘What do you do with the content when it’s captured?&#8217;”</p>
<figure id="attachment_8842" aria-describedby="caption-attachment-8842" style="width: 260px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8842" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-goprohero4.jpg?resize=260%2C146&#038;ssl=1" alt="GoPro's Hero 4 camera" width="260" height="146" /><figcaption id="caption-attachment-8842" class="wp-caption-text">GoPro&#8217;s Hero 4 camera</figcaption></figure>
<p>That question will become even more important as new tools like 360-degree cameras become available. Suddenly, you have a lot more footage to work with, which means cameras will need to get smarter at helping you tell the best story.</p>
<p>Drone camera makers like DJI face a slightly different challenge, but with similar overall goals. In the near term, they&#8217;ll need to make the actual flight mechanisms smarter so that drones can safely navigate on their own. But once that happens, and the drones themselves become cheaper and more commoditized, all kinds of new smart applications will open up.</p>
<figure id="attachment_8843" aria-describedby="caption-attachment-8843" style="width: 640px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8843" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-ericcheng.jpg?resize=640%2C480&#038;ssl=1" alt="DJI's Eric Cheng" width="640" height="480" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-ericcheng.jpg?w=640&amp;ssl=1 640w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-ericcheng.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 640px) 100vw, 640px" /><figcaption id="caption-attachment-8843" class="wp-caption-text">DJI&#8217;s Eric Cheng</figcaption></figure>
<p>“There could be a really rich app economy that’s task-driven instead of product-driven,” says Eric Cheng, DJI’s ‎general manager in San Francisco.</p>
<p>A basic example, he says, would be some kind of live-blogging application that steers a drone as it follows you down the street. Or maybe you&#8217;d have an application that can automatically capture and reconstruct a scene in 3-D using cameras. &#8220;You can imagine a whole lot of functionality moving into the domain-specific and being a lot smarter,&#8221; Cheng says.</p>
<p><strong>Room For The Familiar</strong></p>
<p>None of this is to suggest that the tools we use today are going to vanish, or that you’ll never have occasion to get sucked into your phone, tablet, or computer for a while.</p>
<figure id="attachment_8844" aria-describedby="caption-attachment-8844" style="width: 260px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8844" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-rickosterloh.jpg?resize=260%2C239&#038;ssl=1" alt="Motorola's Rick Osterloh" width="260" height="239" /><figcaption id="caption-attachment-8844" class="wp-caption-text">Motorola&#8217;s Rick Osterloh</figcaption></figure>
<p>Rick Osterloh, president of Motorola Mobility, now part of China’s Lenovo, says that if anything, the smartphone will remain at the center of all these new smart devices. “It’s resonated so well because it’s actually well-designed, for both utility and a critical feature, which is carryability and pocketability,” he says.</p>
<p>While Osterloh imagines we will see some new technological twists for the smartphone in the form of folding screens and superfast charging, the biggest advances will come from all the different types of data a phone can gather and interpret. Think of it kind of like the <a href="https://play.google.com/store/apps/details?id=com.motorola.contextual.smartrules2&amp;hl=en" target="_blank">Assist</a> feature in Motorola’s current phones, but with more automation and intelligence.</p>
<p>“That is a pretty interesting area writ large, we believe, for the future, where the combination of context and probably sensors will give you a user experience that just helps your phone adapt to what you want,” Osterloh says, “like the magical ‘do what I want’ machine that people in computer science have been trying to develop for decades.”</p>
<figure id="attachment_8845" aria-describedby="caption-attachment-8845" style="width: 260px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8845" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/03/3042948-inline-motorolax.jpg?resize=260%2C253&#038;ssl=1" alt="Motorola's Moto X smartphone" width="260" height="253" /><figcaption id="caption-attachment-8845" class="wp-caption-text">Motorola&#8217;s Moto X smartphone</figcaption></figure>
<p>Likewise, Microsoft’s Mehdi doesn’t see mouse-and-keyboard devices going away, since there’s nothing better for tasks like writing or data entry. “I don’t think this is like tapes and CDs that go away,” he says. “I think it’s more like TV and radio, that don’t actually go away. They just become another part of the media that you consume, and over time they kind of get tuned for the use case.”</p>
<p>The question, then, is how we&#8217;re actually going to make room for this expanding roster of wearables, drones, headsets, and smart-home devices. At some point, it might be too much to wrangle, but as Mehdi points out, it wasn&#8217;t long ago that owning just a cell phone and a computer was hard to fathom. People make room for more devices when there&#8217;s sufficient value.</p>
<p>In other words, making all that technology disappear may only work if we own a whole lot more of it.</p>
<p>Written by: <a href="http://www.fastcompany.com/user/jared-newman" target="_blank">Jared Newman</a>, <a href="http://www.fastcompany.com/3042948/sector-forecasting/the-future-of-consumer-tech-is-about-making-you-forget-its-there">Fast Company</a> (via <a href="http://ispr.info/2015/03/06/the-future-of-consumer-tech-is-about-making-you-forget-its-there/">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2015/03/the-future-of-consumer-tech-is-about-making-you-forget-its-there/">The Future of Consumer Tech Is About Making You Forget It’s There</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2015/03/the-future-of-consumer-tech-is-about-making-you-forget-its-there/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">8836</post-id>	</item>
		<item>
		<title>Hands-on with Mattel’s new AR, VR View-Master</title>
		<link>https://www.situatedresearch.com/2015/02/hands-on-with-mattels-new-ar-vr-view-master/</link>
					<comments>https://www.situatedresearch.com/2015/02/hands-on-with-mattels-new-ar-vr-view-master/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Fri, 20 Feb 2015 15:54:37 +0000</pubDate>
				<category><![CDATA[Education]]></category>
		<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Game Development]]></category>
		<category><![CDATA[Games for Learning]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=8819</guid>

					<description><![CDATA[<p>A View-Master for virtual reality: Hands-on with Mattel&#8217;s new AR, VR phone toy Mattel is relaunching View-Master, but as a virtual reality and augmented-reality phone toy. And I got to play around with it for a bit…or at least, some of the tech behind it.  Announced at an event in New York City, the new&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2015/02/hands-on-with-mattels-new-ar-vr-view-master/">Hands-on with Mattel’s new AR, VR View-Master</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><strong>A View-Master for virtual reality: Hands-on with Mattel&#8217;s new AR, VR phone toy</strong></p>
<p><span style="line-height: 1.5;">Mattel is relaunching View-Master, but as a virtual reality and augmented-reality phone toy. And I got to play around with it for a bit…or at least, some of the tech behind it. </span><span id="more-8819"></span></p>
<p>Announced at an event in New York City, <a href="http://www.cnet.com/news/google-mattel-announce-a-virtual-reality-view-master/" target="_blank">the new View-Master</a> is a collaboration between Mattel and Google, whose virtual reality Cardboard app has enabled cheap do-it-yourself accessories to turn any Android phone into a mini-VR viewer. Mattel’s plastic toy, which will debut in October, is like a more durable, plastic version of <a href="http://www.cnet.com/news/googles-cardboard-vr-headset-is-no-joke-its-great-for-the-oculus-rift/" target="_blank">Google Cardboard</a>, designed entirely for kids…or, maybe, also for grown-up kids like me. And the most brilliant part is it’ll only cost $30.</p>
<p><iframe loading="lazy" src="http://www.cnet.com/videos/share/id/tUlXVC5TlPLbcmd7Lo7cfkU6k0P1Edow/" width="960" height="540" frameborder="0" seamless="seamless" scrolling="no" allowfullscreen="allowfullscreen"></iframe></p>
<p>I used View-Master back when I was little — who didn’t? It’s a classic 3D stereoscopic picture viewer. Many people even said Google Cardboard looked a bit like a View-Master. So it isn’t a huge surprise that Mattel has suddenly announced a new View-Master with Google Cardboard VR capabilities added. I’ve always felt that virtual reality reminded me of early stereoscopic toys. And Mattel has keyed onto the same idea.</p>
<figure id="attachment_8821" aria-describedby="caption-attachment-8821" style="width: 770px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8821" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster1.jpg?resize=770%2C577&#038;ssl=1" alt="The View-Master will fit most phones, according to Mattel: iPhone and Android alike." width="770" height="577" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster1.jpg?w=770&amp;ssl=1 770w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster1.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 770px) 100vw, 770px" /><figcaption id="caption-attachment-8821" class="wp-caption-text">The View-Master will fit most phones, according to Mattel: iPhone and Android alike.</figcaption></figure>
<p>The toy was only viewable in a mock-up prototype form at Mattel’s event, but the design’s pretty cool: it looks half old-school View-Master, half Oculus Rift. The inner plastic housing extends to hold many types of phones: Mattel says it’s designed to fit the largest existing phones, and will even work with the <a href="http://www.cnet.com/products/apple-iphone-6-plus/" target="_blank">iPhone 6 Plus</a> and <a href="http://www.cnet.com/products/google-nexus-6/" target="_blank">Nexus 6</a>. A capacitive-touch side lever is used to “click” through scenes or into virtual environments, like the magnetized side switch on Google’s Cardboard viewers.</p>
<p>Mattel’s headset is designed with Google and Android in mind, but at launch is intended to work on “nearly all platforms,” which includes iOS. That would mean a dedicated Mattel app which interfaces with the View-Master, but Google’s Cardboard and Cardboard-ready apps — many of which already exist on iOS, like VRSE — will work too.</p>
<p>Mattel is planning to use View-Master not just for VR, but also for AR; little plastic reels that look like the old cardboard ones are really just flat coasters this time around, with images on top that the View-Master reads and turns into pop-up augmented-reality models on your table, desktop or wherever else you place them. Unlike the old pop-in reels, a single reel set down on a table could serve content to multiple View-Masters at once. This type of augmented-reality tech has already existed for years in many apps and on some children’s toys like the Nintendo 3DS (with its AR cards) and PlayStation Vita, but mixing it into a VR headset is a novel idea.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-8822" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster3.jpg?resize=770%2C577&#038;ssl=1" alt="viewmaster3" width="770" height="577" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster3.jpg?w=770&amp;ssl=1 770w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster3.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 770px) 100vw, 770px" /></p>
<p>I didn’t get to use the actual Mattel prototype, but we tried View-Master’s augmented-reality tech on phones and Google Cardboard viewers. There were three reels to try: a dinosaur one made a little dinosaur pop up on the disc on the table in front of me. When I aimed a dot and clicked on it, I was suddenly surrounded by a prehistoric 360-degree panorama with 3D dinosaurs. Clicking on them brought up facts, too.</p>
<p>Looking at the space disc with Cardboard on brought up a pop-up moon and Earth; clicking on it took me to a panorama of the moon, with pop-up clickable photos of NASA missions. A third, San Francisco-themed disc had little mini-models of Alcatraz and the Golden Gate Bridge that turned into VR photo panoramas. To exit any of the virtual panoramas, you look down and click on the side…or remove the View-Master from your face. The View-Master comes with one reel in its $30 package, and extra reels will cost around $15 each. No, older View-Master reels don’t work in here, but it sounds like Mattel is exploring re-releasing content from some of the back catalog of 10,000 older View-Master reels.</p>
<figure id="attachment_8823" aria-describedby="caption-attachment-8823" style="width: 770px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-8823" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster4.jpg?resize=770%2C577&#038;ssl=1" alt="The &quot;reels&quot; don't actually go in the View-Master, they simply sit on your table." width="770" height="577" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster4.jpg?w=770&amp;ssl=1 770w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2015/02/viewmaster4.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 770px) 100vw, 770px" /><figcaption id="caption-attachment-8823" class="wp-caption-text">The &#8220;reels&#8221; don&#8217;t actually go in the View-Master, they simply sit on your table.</figcaption></figure>
<p>There’s no strap to keep the View-Master on: this is a hold-to-your-face toy, much like older View-Masters and Google Cardboard. Mattel has promised that the tech has already been vetted by pediatric ophthalmologists, and is meant to be used by children ages 7 and up in short, bite-sized sessions.</p>
<p>The View-Master may work with other toys, too, like other app-ified toys in the past, but for now it’s really a fancier plastic Google Cardboard viewer, with additional Mattel support. That’s not a bad thing at all: at $30, this is a pretty awesome little stocking-stuffer idea, and a fun phone toy. Just keep in mind that if you give this to your kid, it won’t work without a phone popped into it.</p>
<p>By the time fall rolls around, Mattel may have other toys ready to work with it. Or, there might be many other companies ready to make cheap phone-enabled VR headsets, too.</p>
<p>Written by: <a href="http://www.cnet.com/profiles/scottstein8/" target="_blank">Scott Stein</a>, <a href="http://www.cnet.com/products/new-view-master/" target="_blank">CNET</a> (via <a href="http://ispr.info/2015/02/20/hands-on-with-mattels-new-ar-vr-view-master/" target="_blank">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2015/02/hands-on-with-mattels-new-ar-vr-view-master/">Hands-on with Mattel’s new AR, VR View-Master</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2015/02/hands-on-with-mattels-new-ar-vr-view-master/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">8819</post-id>	</item>
		<item>
		<title>Ohio State Doctor Shows Promise of Google Glass in Live Surgery</title>
		<link>https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/</link>
					<comments>https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Thu, 12 Sep 2013 17:55:48 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Serious Games]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Health Care]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Interface]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5320</guid>

					<description><![CDATA[<p>COLUMBUS, Ohio – A surgeon at The Ohio State University Wexner Medical Center is the first in the United States to consult with a distant colleague using live, point-of-view video from the operating room via Google Glass, a head-mounted computer and camera device.  “It’s a privilege to be a part of this project as we explore how&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/">Ohio State Doctor Shows Promise of Google Glass in Live Surgery</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>COLUMBUS, Ohio – A surgeon at <a href="http://www.medicalcenter.osu.edu/Pages/index.aspx">The Ohio State University Wexner Medical Center</a> is the first in the United States to consult with a distant colleague using live, point-of-view video from the operating room via Google Glass, a head-mounted computer and camera device. <span id="more-5320"></span></p>
<p>“It’s a privilege to be a part of this project as we explore how this exciting new technology might be incorporated into the everyday care of our patients,” said Dr. <a href="http://ortho.osu.edu/directories/faculty/christopherkaeding/">Christopher Kaeding</a>, the physician who performed the surgery and director of sports medicine at Ohio State. “To be honest, once we got into the surgery, I often forgot the device was there. It just seemed very intuitive and fit seamlessly.”</p>
<p>Google Glass has a frame similar to traditional glasses, but instead of lenses, there is a small glass block that sits above the right eye.  On that glass is a computer screen that, with a simple voice command, allows users to pull up information as they would on any other computer.  Attached to the front of the device is a camera that offers a point-of-view image and the ability to take both photos and videos while the device is worn.</p>
<p>During this procedure at the medical center’s University East facility, Kaeding wore the device as he performed ACL surgery on Paula Kobalka, 47, from Westerville, Ohio, who hurt her knee playing softball.  As he performed her operation at a facility on the east side of Columbus, Google Glass showed his vantage point via the internet to audiences miles away.</p>
<p>Across town, one of Kaeding’s Ohio State colleagues, Dr. Robert Magnussen, watched the surgery from his office, while on the main campus, several students at <a href="http://medicine.osu.edu/Pages/default.aspx">The Ohio State University College of Medicine</a> watched on their laptops.</p>
<p>“To have the opportunity to be a medical student and share in this technology is really exciting,” said Ryan Blackwell, a second-year medical student who watched the surgery remotely.   “This could have huge implications, not only from the medical education perspective, but because a doctor can use this technology remotely, it could spread patient care all over the world in places that we don’t have it already.”</p>
<p>“As an academic medical center, we’re very excited about the opportunities this device could provide for education,” said Dr. <a href="http://p4mi.org/clay-marsh-md">Clay Marsh</a>, chief innovation officer at The Ohio State University Wexner Medical Center. “But beyond that, it could be a game-changer for the doctor during the surgery itself.”</p>
<p>Experts have theorized that during surgery doctors could use voice commands to instantly call up x-ray or MRI images of their patient, pathology reports or reference materials.  They could collaborate live and face-to-face with colleagues via the internet, anywhere in the world.</p>
<p>“It puts you right there, real time,” said Marsh, who is also the executive director of the Center for Personalized Health Care at Ohio State. “Not only might you be able to call up any kind of information you need or to get the help you need, but it’s the ability to do it immediately that’s so exciting,” he said.  “Now, we just have to start using it. Like many technologies, it needs to be evaluated in different situations to find out where the greatest value is and how it can impact the lives of our patients in a positive way.”</p>
<p>Only 1,000 people in the United States have been chosen to test Google Glass as part of Google’s Explorer Program. Dr. Ismail Nabeel, an assistant professor of general internal medicine at Ohio State, applied and was chosen. He then partnered with Kaeding to perform this groundbreaking surgery and to help test technology that could change the way we see medicine in the future.</p>
<hr />
<p>Broadcast quality video and high-definition photos available for download: <a href="http://bit.ly/16jXc6c">http://bit.ly/16jXc6c</a></p>
<p>Written by: The <a href="http://www.medicalcenter.osu.edu/mediaroom/releases/Pages/Ohio-State-Doctor-Shows-Promise-of-Google-Glass-in-Live-Surgery.aspx">Ohio State University</a> (via <a href="http://ispr.info/2013/09/03/ohio-state-doctor-shows-promise-of-google-glass-in-live-surgery/">Presence</a>); for details about the first international Google Glass surgery, in June 2013, see <a href="http://www.clinicacemtro.com/index.php/en/sala-de-prensa-3/noticias/679-clinica-cemtro-first-ggogle-glass-surgery">Clinica Cemtro</a>; for a report about early reactions from those testing Glass see <a href="http://www.npr.org/templates/story/story.php?storyId=216094970">NPR</a></p>
<p>Image: Dr. Christopher Kaeding, an orthopedic surgeon at The Ohio State University Wexner Medical Center, is shown wearing Google Glass</p>
<p>Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/">Ohio State Doctor Shows Promise of Google Glass in Live Surgery</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/09/ohio-state-doctor-shows-promise-google-glass-live-surgery/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5320</post-id>	</item>
		<item>
		<title>Xbox One: The Wobbly Third Leg of Microsoft’s Non-Desktop Trifecta</title>
		<link>https://www.situatedresearch.com/2013/06/xbox-one-the-wobbly-third-leg-of-microsofts-non-desktop-trifecta/</link>
					<comments>https://www.situatedresearch.com/2013/06/xbox-one-the-wobbly-third-leg-of-microsofts-non-desktop-trifecta/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Thu, 27 Jun 2013 17:58:53 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Game Development]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5236</guid>

					<description><![CDATA[<p>With the complete hardware, services, and pricing unveiled for the Xbox One at E3, we now have the totality of Microsoft’s “next-generation” consumer-oriented lineup: Windows 8 on the desktop, laptop, and tablet, Windows Phone 8 on the smartphone, and Xbox One in the living room. On paper, this trifecta, seamlessly connected via Microsoft Account, SkyDrive, and Xbox Live,&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/06/xbox-one-the-wobbly-third-leg-of-microsofts-non-desktop-trifecta/">Xbox One: The Wobbly Third Leg of Microsoft’s Non-Desktop Trifecta</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>With the complete hardware, services, and pricing unveiled for the Xbox One at E3, we now have the totality of Microsoft’s “next-generation” consumer-oriented lineup: Windows 8 on the desktop, laptop, and tablet, Windows Phone 8 on the smartphone, and Xbox One in the living room.</p>
<p>On paper, this trifecta, seamlessly connected via Microsoft Account, SkyDrive, and Xbox Live, is almost perfect. In reality, though, this couldn’t be further from the truth. Where did it all go wrong for Microsoft? <span id="more-5236"></span></p>
<p>From an objective standpoint, all of Microsoft’s new-for-2013 offerings — Windows 8, Windows Phone 8, and the Xbox One — are perfect. Windows 8 capitalizes on the slow death of the desktop and the rush towards mobile; Xbox One is a powerful and feature-rich games console that could dominate the living room; and Windows Phone 8 is a sharp and savvy smartphone OS that ties everything together, while on the move or as a second screen. As a tech writer and a self-confessed life-long Microsoft fan, I have never been more excited about Microsoft’s future than over the last two years of covering Windows 8, Windows Phone 8, and the &#8220;Xbox 720&#8221;.</p>
<p>From a subjective standpoint, though, each of Microsoft’s new offerings is intrinsically flawed and bogged down by crippling policy decisions no doubt handed down from Microsoft’s besuited higher echelons. Windows 8 and 8.1, despite “responding to customer feedback,” still force users to use the Metro interface, even when a touchscreen isn’t present. Windows Phone 8 is one of Microsoft’s most polished products, but a smartphone OS is only as strong as its app ecosystem, and due to its minuscule market share WP8 still lacks the ecosystem to pull consumers away from iOS and Android — an unfortunate Catch-22 if I ever saw one. The Xbox One, depending on your point of view, is either an awesome all-in-one living room box that plays games, or an awful DRM-restricted games machine that acts as an HDMI passthrough for your cable box — the very same thing that the tried-and-failed Google TV attempted to do.</p>
<p>How did Microsoft manage to take three exciting, technologically advanced products and turn them into mediocre, humdrum devices that have had all of the fun and adventure sucked out of them?</p>
<p style="text-align: center;"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-5238" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/windows-3.0-workspace.jpg?resize=640%2C480&#038;ssl=1" alt="windows-3.0-workspace" width="640" height="480" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/windows-3.0-workspace.jpg?w=640&amp;ssl=1 640w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/windows-3.0-workspace.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 640px) 100vw, 640px" /><em>Windows 3.0</em></p>
<h3>A history lesson</h3>
<p>For the past 20 years, Microsoft hasn’t done much more than double, triple, and quadruple down on the desktop ecosystem. Since the launch and success of Windows 3, almost all of Microsoft’s decisions have revolved around maximizing Windows-derived profits. The success of Office, one of Microsoft’s most profitable divisions, is entirely underpinned by Windows’ 95%+ desktop penetration — ditto the Server division. It’s even possible to draw a link from Windows’ dominance in the desktop market, to DirectX and PC gaming, to the Xbox.</p>
<p>To be honest, at the time, this all made perfect sense. Windows, Office, and Server were, and are, monumentally large profit drivers. But if technology has taught us anything it’s that nothing lasts forever — especially business models predicated on a large, bulky form factor that is virtually guaranteed to go the way of the dodo as technology advances. If anything, Microsoft has done incredibly well to maintain the desktop-dominated status quo for as long as it has.</p>
<p>Now, though, with tablets and smartphones exploding <a href="http://www.extremetech.com/computing/154859-the-tablet-market-is-now-almost-pc-size-but-windows-8-and-rt-continue-to-flounder">much faster than anyone could’ve anticipated</a>, Microsoft is forced to adapt. Windows 8 and Windows Phone 8 are <em>so</em> different from their predecessors. The Xbox One isn’t that different, but it’s about as far from the Xbox 360 as Microsoft could get without completely redefining the games console paradigm. On paper, these massive changes all make sense, and if they were executed properly they really could give Microsoft the beachhead in the mobile market that it so desperately needs.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter wp-image-5237 size-full" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/windows-8-metro-vs-640x353-e1398095509551.jpg?resize=640%2C352&#038;ssl=1" alt="windows-8-metro-vs-640x353" width="640" height="352" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/windows-8-metro-vs-640x353-e1398095509551.jpg?w=640&amp;ssl=1 640w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/windows-8-metro-vs-640x353-e1398095509551.jpg?resize=300%2C165&amp;ssl=1 300w" sizes="auto, (max-width: 640px) 100vw, 640px" /></p>
<h3>Gutless equivocation</h3>
<p>Unfortunately, such changes simply can’t be easily made by a large multinational bureaucracy that generally works with three-year product cycles, rather than 12 months. From a strictly fiscal and pleasing-the-stock-holders point of view, too, Microsoft can’t just kill off the desktop. As it stands, Microsoft is massively profitable and will be for years to come. But at the same time, Microsoft knows that it must change now or face being squeezed out of the market by iOS, Android, and other upstarts. Faced with such a dilemma, Microsoft hedged its bets and created Windows 8, a Frankensteinian operating system that is <a href="http://www.extremetech.com/computing/138701-windows-8-the-disastrous-result-of-microsofts-gutless-equivocation">the jack of all trades but the master of none</a>.</p>
<p>Where does Microsoft go from here? It’s not too late for Windows 8, especially with Windows 8.1 coming up. If Windows 8 is a success, then there might be a knock-on effect that finally gets Windows Phone 8 off the ground. Finally, if the Xbox One works, it could be the perfect centerpiece of a new, non-desktop-oriented Windows ecosystem. That’s a lot of ifs, though, and given how wobbly the Xbox One looks in comparison to the PS4, and WP8’s consistently ailing market share, I think Microsoft has a rough few years ahead.</p>
<p>Written by: <a title="Posts by Sebastian Anthony" href="http://www.extremetech.com/author/santhony" rel="author">Sebastian Anthony</a>, <a href="http://www.extremetech.com/gaming/158496-xbox-one-the-wobbly-third-leg-of-microsofts-non-desktop-trifecta">ExtremeTech</a><br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/06/xbox-one-the-wobbly-third-leg-of-microsofts-non-desktop-trifecta/">Xbox One: The Wobbly Third Leg of Microsoft’s Non-Desktop Trifecta</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/06/xbox-one-the-wobbly-third-leg-of-microsofts-non-desktop-trifecta/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5236</post-id>	</item>
		<item>
		<title>Beyond Google Glass: The Evolution of Augmented Reality</title>
		<link>https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/</link>
					<comments>https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Mon, 17 Jun 2013 16:38:34 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Communication]]></category>
		<category><![CDATA[Ergonomics]]></category>
		<category><![CDATA[heads-up-display]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5184</guid>

					<description><![CDATA[<p>The wearable revolution is heading beyond Google Glass, fitness tracking and health monitoring. The future is wearables that conjure up a digital layer in real space to “augment” reality. SANTA CLARA, Calif. — Reality isn’t what is used to be. With increasingly powerful technologies, the human universe is being reimagined way beyond Google Glass’ photo-tapping&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/">Beyond Google Glass: The Evolution of Augmented Reality</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><b>The wearable revolution is heading beyond Google Glass, fitness tracking and health monitoring. The future is wearables that conjure up a digital layer in real space to “augment” reality.</b></p>
<p>SANTA CLARA, Calif. — Reality isn’t what it used to be. With increasingly powerful technologies, the human universe is being reimagined way beyond Google Glass’ photo-tapping and info cards floating in space above your eye. The future is fashionable eyewear, contact lenses or even bionic eyes with immersive 3D displays, conjuring up a digital layer to “augment” reality, enabling entire new classes of applications and user experiences. <span id="more-5184"></span></p>
<p>Like most technologies that eventually reach a mass market, augmented reality, or AR, has been gestating in university labs, as well as small companies focused on gaming and vertical applications, for nearly half a century. Emerging products like <a href="http://reviews.cnet.com/google-glass/">Google Glass</a> and Oculus Rift’s 3D virtual reality headset for immersive gaming are drawing attention to what could now be termed the “wearable revolution,” but they barely scratch the surface of what’s to come.</p>
<figure id="attachment_5186" aria-describedby="caption-attachment-5186" style="width: 577px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5186" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_11.34.51_AM_1.png?resize=577%2C450&#038;ssl=1" alt="The Sword of Damocles head-mounted display. &quot;The ultimate display would, of course, be a room within which the computer can control the existence of matter,&quot; Sutherland wrote in his 1965 essay. (Credit: Ivan Sutherland &quot;The Ultimate Display&quot;)" width="577" height="450" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_11.34.51_AM_1.png?w=577&amp;ssl=1 577w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_11.34.51_AM_1.png?resize=300%2C233&amp;ssl=1 300w" sizes="auto, (max-width: 577px) 100vw, 577px" /><figcaption id="caption-attachment-5186" class="wp-caption-text"><em>The Sword of Damocles head-mounted display. &#8220;The ultimate display would, of course, be a room within which the computer can control the existence of matter,&#8221; Sutherland wrote in his 1965 essay. (Credit: Ivan Sutherland &#8220;The Ultimate Display&#8221;)</em></figcaption></figure>
<p>The wearable revolution can be traced back to <a href="http://en.wikipedia.org/wiki/Ivan_Sutherland">Ivan Sutherland</a>, a ground-breaking computer scientist at the University of Utah who in 1965 first described a head-mounted display with half-silvered mirrors that let the wearer see a virtual world superimposed on the real world. In 1968 he was able to demonstrate the concept, which was dubbed “<a href="http://www.computerhistory.org/revolution/input-output/14/356/1830">The Sword of Damocles</a>.”</p>
<figure id="attachment_5187" aria-describedby="caption-attachment-5187" style="width: 610px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5187 " src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/P1040832_610x458.jpg?resize=610%2C458&#038;ssl=1" alt="P1040832_610x458" width="610" height="458" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/P1040832_610x458.jpg?w=610&amp;ssl=1 610w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/P1040832_610x458.jpg?resize=300%2C225&amp;ssl=1 300w" sizes="auto, (max-width: 610px) 100vw, 610px" /><figcaption id="caption-attachment-5187" class="wp-caption-text"><em>Steven Feiner of Columbia University and Steve Mann of the University of Toronto at the Augmented World Expo in Santa Clara, Calif., June 4, 2013. Both are now involved in the augmented reality startup Meta. (Credit: Dan Farber)</em></figcaption></figure>
<p>His work was followed up and advanced decades later by researchers including the University of Toronto’s Steve Mann and Columbia University’s Steven Feiner. In the second decade of the 21st century, the technology is finally catching up with their concepts.</p>
<p>The necessary apparatus of cameras, computers, sensors and connectivity is coming down in cost and size and increasing in speed, accuracy and resolution to the point that wearable computers will be viewed as a cool accessory, mediating our interactions with the analog and digital worlds.</p>
<p><b>Augmented Reality past and future</b></p>
<p>“You need to have technology that is sufficiently comfortable and usable, and a set of potential adopters who would be comfortable wearing the technology,” said Feiner at the gathering of the fledgling AR industry at the <a href="http://augmentedworldexpo.com/">Augmented Reality Expo</a> here Wednesday. “It would be like moving from big headphones to earbuds. When they are very small and comfortable, you don’t feel weird, but cool.” He added that glasses with a “sexy lump of bump” with electronics and display could also be cool to the early adopters, especially the younger generation that has grown up digital. However, he didn’t have any prediction for when wearable computers would reach a mass market.</p>
<p>In the last decade, AR has been primarily focused on immersive gaming that teleports users to another world and on vertical applications, such as tethered, interactive 3D training simulations.</p>
<figure id="attachment_5188" aria-describedby="caption-attachment-5188" style="width: 610px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5188" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.54_PM_610x397.png?resize=610%2C397&#038;ssl=1" alt="Screen_Shot_2013-06-06_at_2.43.54_PM_610x397" width="610" height="397" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.54_PM_610x397.png?w=610&amp;ssl=1 610w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.54_PM_610x397.png?resize=300%2C195&amp;ssl=1 300w" sizes="auto, (max-width: 610px) 100vw, 610px" /><figcaption id="caption-attachment-5188" class="wp-caption-text"><em>Augmented reality can help in training, such as learning how to weld aided by a 3D environment that tracks user movements precisely. Seabery Augmented Training&#8217;s Soldamatic application, pictured here, could be used for medical training, bomb disposal and other industry verticals. (Credit: Dan Farber)</em></figcaption></figure>
<p>But now augmented reality is about to break out into free space. “AR will be the interface for the Internet of things,” said Greg Kipper, author of “Augmented Reality: An Emerging Technologies Guide to AR.”</p>
<p>“It is a transition time, like from the command line to graphical user interface,” he said. “Imagine trying to do Photoshop in a command-line interface. Augmented reality will bring to the world things beyond the graphical user interface. With sensors, computational power, storage and bandwidth, we’ll see the world in a new way and make it very personal.”</p>
<figure id="attachment_5189" aria-describedby="caption-attachment-5189" style="width: 270px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5189" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.34.00_PM_270x253.png?resize=270%2C253&#038;ssl=1" alt="Will Wright, the man behind The Sims, speaking at the Augmented Reality Expo on June 4, 2013. (Credit: Dan Farber)" width="270" height="253" /><figcaption id="caption-attachment-5189" class="wp-caption-text"><em>Will Wright, the man behind The Sims, speaking at the Augmented Reality Expo on June 4, 2013.</em><br /><em>(Credit: Dan Farber)</em></figcaption></figure>
<p>Will Wright, creator of the popular The Sims family games, likened AR to having super-sensory abilities, like flipping a switch to see what is underground, beneath your feet. “It’s not about bookmarks or restaurant reviews…it’s something that maps to my intuition.” He hoped that instead of augmenting reality, the technology could “decimate” reality, filtering out even more information than the brain already does to engage reality with less cacophony.</p>
<p>Steve Mann, who is rarely seen without one of his wearable computing rigs and is considered the father of AR, views the wearable revolution as a benefit to society. Quality of life can be improved with overlays of information, adding and subtracting it to facilitate improved “eyesight,” he said. “The first purpose is to help people see better,” he said during his keynote at the expo.</p>
<p>Just as the smartphone has compressed much of the functionality of antecedent computing devices into a single product, wearable computing will eventually make the handheld smartphone irrelevant.</p>
<p>“The value proposition of digital eyewear is having all devices in one, with a camera for each eye representing full body 3D, and the ability to interact with an infinite screen. We are architecting the future of interaction,” said Meron Gribetz of <a href="http://news.cnet.com/8301-11386_3-57584739-76/meta-glasses-bring-3d-and-your-hands-into-the-picture/">Meta, a Y Combinator startup</a> working on a new operating system and hardware interface for augmented reality computing.</p>
<p>“There is no other future of computing other than this technology, which can display information from the real world and control objects with your fingers at low latency and high dexterity. It’s the keyboard and mouse of the future,” he claimed.</p>
<figure id="attachment_5190" aria-describedby="caption-attachment-5190" style="width: 589px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5190 " src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-05-17_at_7.26.10_AM.png?resize=589%2C381&#038;ssl=1" alt="Screen_Shot_2013-05-17_at_7.26.10_AM" width="589" height="381" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-05-17_at_7.26.10_AM.png?w=589&amp;ssl=1 589w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-05-17_at_7.26.10_AM.png?resize=300%2C194&amp;ssl=1 300w" sizes="auto, (max-width: 589px) 100vw, 589px" /><figcaption id="caption-attachment-5190" class="wp-caption-text"><em>Meta can project a 3D image on a wall and users interact with their hands. (Credit: Meta)</em></figcaption></figure>
<p>Atheer, a Mountain View, Calif.-based AR startup, is <a href="http://news.cnet.com/8301-11386_3-57586750-76/atheer-bringing-3d-augmented-reality-and-gesture-control-to-android/">developing a platform that will work with existing mobile operating systems</a>, such as Google’s <a href="http://www.cnet.com/android-atlas/">Android</a>. “We are the first mobile 3D platform delivering the human interface. We are taking the touch experience on smart devices, getting the Internet out of these monitors and putting it everywhere in the physical world around you,” said CEO Sulieman Itani. “In 3D, you can paint in the physical world. For example, you could leave a note to a friend in the air at a restaurant, and when the friend walks into the restaurant, only they can see it.”</p>
<p>The company plans to seed its technology to developers this year and have its technology embedded in stylish, lightweight glasses with cameras next year.</p>
<p>The transition to touch and gesture interfaces doesn’t mean that the old modes of human-computer interaction go away. Just as TV didn’t replace radio, augmented reality won’t obliterate previous interfaces. The keyboard might still be the best interface for writing a book. Nor is waving your hands in front of your face all day a good interface.</p>
<p>“Holding your hands out in front of yourself as a primary interface causes the ‘gorilla arm’ effect,” said Noah Zerkin, who is developing a full-body inertial motion-capture system for head-mounted displays. “You get tired. We need to have alternative interfaces. If not thought-based, it needs to be subtle gestures that don’t require you to wave your hands around in front of your face.”</p>
<p><a href="http://www.threegear.com/about.html">3Gear Systems</a> is working on technology that allows 3D cameras mounted above a keyboard, like a lamp, to detect smaller gestures just above the keyboard, such as pinching to rotate an object on a screen, and can use input from all 10 fingers with millimeter-level accuracy.</p>
<p>Some companies are taking less radical approaches, focusing on inserting a layer of digital information into scenes via smartphones. <a href="http://www.parworks.com/">Par Works</a>, for example, offers image recognition technology that makes it possible to overlay digital imagery on real-world scenes, such as photos and videos, with precision. A person looking for an apartment takes a picture of a building with a smartphone and the app overlays information on the image, or a shopper sees coupons or other information for various products on a shelf in a drug store.</p>
<p style="text-align: center;"><iframe loading="lazy" src="http://player.vimeo.com/video/53109174?badge=0&amp;api=1" width="600" height="450" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Brands are adopting AR technology to increase the performance of ads and sales. Several companies provide ways to turn a print ad into an interactive experience just by pointing the camera at the paper or an object with a marker. <a href="http://blippar.com/about">Blippar</a>, for instance, recognizes images when a phone camera is pointed at ads or objects bearing its mark and inserts virtual layers of content.</p>
<h3>The future of augmented reality</h3>
<p>And where is all this heading over the next few years? It’s beginning to look like a real business, just as mobile did nearly a decade ago. Mobile analyst Tomi Ahonen expects AR to be adopted by a billion users by 2020. Intel is betting that AR will be big. The chip maker is <a href="http://news.cnet.com/8301-11386_3-57587699-76/intel-creates-$100-million-fund-for-perceptual-computing/">investing $100 million over the next 2 to 3 years</a> to fund companies developing “perceptual computing” software and apps, focusing on next-generation, natural user interfaces such as touch, gesture, voice, emotion sensing, biometrics, and image recognition.</p>
<p>Apple isn’t in the AR game yet, but the company has been <a href="http://news.cnet.com/8301-13579_3-57575121-37/apple-patents-augmented-reality-system/">awarded a U.S. patent</a>, “Synchronized, interactive augmented reality displays for multifunction devices,” for overlaying video on live video feeds.</p>
<figure id="attachment_5191" aria-describedby="caption-attachment-5191" style="width: 598px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5191" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.19_PM.png?resize=598%2C437&#038;ssl=1" alt="Screen_Shot_2013-06-06_at_2.43.19_PM" width="598" height="437" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.19_PM.png?w=598&amp;ssl=1 598w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/06/Screen_Shot_2013-06-06_at_2.43.19_PM.png?resize=300%2C219&amp;ssl=1 300w" sizes="auto, (max-width: 598px) 100vw, 598px" /><figcaption id="caption-attachment-5191" class="wp-caption-text"><em>AR is looking like it might be the 8th mass market to evolve, following print, recordings, cinema, radio, TV, the Internet and mobile, according to mobile industry analyst Tomi Ahonen. (Credit: Tomi Ahonen)</em></figcaption></figure>
<p>Eyewear will evolve over the next year into comfortable, stylish glasses with powerful embedded technology. They will range from Google Glass-style glance-at displays that also replace the phone to stereoscopic 3D-viewing wearables for everyday use.</p>
<p>“You’ll get 20/20, perfectly augmented vision by 2020, with movie-quality special effects blended seamlessly into the world around you,” said Dave Lorenzini, founder of AugmentedRealityCompany.com and former director at Keyhole.com, now known as Google Earth. “The effects will look so real, you’ll have to lift your display to see what’s really there. There’s more of the world than meets the eye, and that’s what’s coming.”</p>
<p>He cautioned that the growth of the AR industry could be slowed by a lack of standards to connect disparate players and their formats for bringing a 3D digital layer to life. “The AR industry has to get together to power the hallucination of what’s to come,” Lorenzini said. He added that a key turning point will be the availability of the WYSIWYG (What You See Is What You Get) real-world markup tools needed to bring this digital layer to life.</p>
<p>When the AR industry does take off, Lorenzini envisions a trillion dollar market for animated content, services and special effects layered into the real world. “Imagine people tagging friends with visual effects like a 3D halo and wings, or paying for a face recognition service to scan and add a floating name tag over the head of everyone in a room,” he said. “AR will grow from specific vertical uses to mass market appeal, driven by young, early adopters.</p>
<p>“Anyone reviewing devices like Google Glass needs to take it to their kids’ school before they pass judgement,” Lorenzini added. “This is not a device from our time, it’s from theirs. They love it, use it effortlessly, and are totally unfazed by ad targeting or privacy concerns. It will be a natural part of who they are, how they learn, connect and play.”</p>
<p>Eventually, wearable technology will become more integrated with the human body. With advances in miniaturization and nanotechnology, glasses will be replaced with contact lenses or even bionic eyes that record everything, make phone calls and let you use parts of your body, or even your thoughts, to navigate the world.</p>
<p>“Contact lenses are difficult now, but the bionic eye will become commonplace and AR will just be a feature,” Kipper said. “Some may choose to have eyes in the back of their heads, and some won’t. Some will want to be cyborgs. We will always use tools as advanced as they can be to help ourselves.”</p>
<p>Brian Mullins, CEO of Daqri, an augmented reality developer of custom solutions, went even further in melding humans and technology. “Thinking is the future of AR,” he said. Mullins talked about measuring “thought intensity” with EEG machines and focusing the mind to manipulate objects during a panel discussion at the Augmented Reality Expo.</p>
<p>Of course, the technical challenges are accompanied by issues of social etiquette and privacy. Smartphones are now a well-accepted part of daily life in most countries, but issues around data ownership and access to the data abound. The subtlety and potentially always-on capacity of wearable technologies will create more privacy concerns and challenges to acceptance.</p>
<p>Feiner acknowledged that it’s “scary” in terms of the information available, especially when billions of people with cameras and microphones can capture anything in public. “There are no laws against it,” he noted.</p>
<p>He gave Google some compliments for not overloading Glass with features. “It’s not suffering from doing too much too soon,” he said. Whether Google Glass is the tip of the spear for the mass adoption of far more powerful AR is uncertain, but it is doing a good job of surfacing the issues around the introduction of a disruptive, new way of computing.</p>
<p>Nicola Liberati, a Ph.D. student in philosophy at the University of Pisa studying the intersection of humans and technology, suggested another line of thinking about AR in his presentation at the expo. “We should not focus our attention only on what we can do with such technology, but also on what we become by using it.”</p>
<p>So, when you are strolling down the street wearing the latest digital eyewear from Google, Apple or some as yet unformed or now early-stage company, with your continuous partial attention on the 3D holographic screen feeding you all kinds of personalized information about the environment around you, zeroing in on the people and places in your field of view or piped in remotely from around the real and virtual worlds, and spaces in between, think about what we have become.</p>
<p>It all depends on your perspective.</p>
<p>Written by: <a href="http://www.cnet.com/profile/dfarber/">Dan Farber</a>, <a href="http://news.cnet.com/8301-11386_3-57588128-76/the-next-big-thing-in-tech-augmented-reality/">CNET</a> (via <a href="http://ispr.info/2013/06/12/beyond-google-glass-the-evolution-of-augmented-reality/">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/">Beyond Google Glass: The Evolution of Augmented Reality</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/06/beyond-google-glass-the-evolution-of-augmented-reality/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5184</post-id>	</item>
		<item>
		<title>How Google is Melding Our Real and Virtual Worlds with Games, Apps … and Glass</title>
		<link>https://www.situatedresearch.com/2013/05/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/</link>
					<comments>https://www.situatedresearch.com/2013/05/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Mon, 20 May 2013 19:25:11 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[HCI]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Simulations]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Communication]]></category>
		<category><![CDATA[Communities]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Games for Learning]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=5126</guid>

					<description><![CDATA[<p>“The world around you is not what it seems,” says Ingress, the virtual game that uses the real world as its gamespace. And, perhaps, when Google’s semi-independent division Niantic Labs is finished with its mission, we humans won’t be, either. Google’s mission is to organize the world’s information and make it universally accessible and usable. Note&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2013/05/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/">How Google is Melding Our Real and Virtual Worlds with Games, Apps … and Glass</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>“The world around you is not what it seems,” says <a href="http://www.ingress.com/">Ingress</a>, the virtual game that uses the real world as its gamespace. And, perhaps, when Google’s semi-independent division Niantic Labs is finished with its mission, we humans won’t be, either.</p>
<p>Google’s mission is to organize the world’s information and make it universally accessible and usable. Note carefully that Google says nothing about the Internet in that statement. <span id="more-5126"></span></p>
<p style="text-align: center;"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5128 aligncenter" alt="Ingress" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/Ingress1.jpg?resize=558%2C353&#038;ssl=1" width="558" height="353" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/Ingress1.jpg?w=558&amp;ssl=1 558w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/Ingress1.jpg?resize=300%2C189&amp;ssl=1 300w" sizes="auto, (max-width: 558px) 100vw, 558px" /></p>
<p>In the last few eye-blinks of human history, we’ve created virtual worlds: cyberspace, virtual reality, the World Wide Web … places that exist in our devices, on our computers, in our servers, on the internet, and in our heads. But there’s also a space in which we live and walk and eat and breathe. Realspace. Meatspace. IRL. The real world, so we say, that we can touch and taste and smell.</p>
<p>Google’s trying to bring those worlds together, partly through the work of Niantic Labs.</p>
<p>Augmented reality is nothing new, of course, with marketing-focused companies like Layar building connections between physical and virtual reality and Ikea’s <a href="http://venturebeat.com/2012/12/23/augmented-reality/">most-downloaded branded app of 2012</a> doing similar things. Other startups have explored AR capabilities as well, such as Caterina Fake’s <a href="https://findery.com/">Findery</a>, which invites people to leave geo-tied notes that others can discover and read.</p>
<p style="text-align: center;"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter  wp-image-5129" alt="screen-shot-2013-04-29-at-6-49-41-am" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-6-49-41-am.png?resize=590%2C346&#038;ssl=1" width="590" height="346" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-6-49-41-am.png?w=737&amp;ssl=1 737w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-6-49-41-am.png?resize=300%2C176&amp;ssl=1 300w" sizes="auto, (max-width: 590px) 100vw, 590px" /></p>
<p>But when a company with the resources of a Google tackles the problem, and has in Google Glass a tool that seems destined for significant developer (and probably user) penetration, one that can create interconnections between the real and the virtual perhaps more efficiently than any previous product, you’ve got something interesting. And potentially huge.</p>
<p>So a couple of weeks ago, I chatted with the man who’s leading that effort.</p>
<h3>John Hanke: the missionary of mapping</h3>
<p>John Hanke is vice president of product for Niantic Labs, the year-old Google-but-not-Google division of just a few dozen engineers that brought us <a href="http://venturebeat.com/2012/09/27/googles-new-field-trip-virtually-augmenting-the-awesomeness-of-reality/">Field Trip, the app to explore the world around us with a virtual docent</a>. And, of course, the virtual/real game Ingress.</p>
<figure id="attachment_5130" aria-describedby="caption-attachment-5130" style="width: 359px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5130" alt="John Hanke" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/12926c4.jpg?resize=359%2C359&#038;ssl=1" width="359" height="359" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/12926c4.jpg?w=359&amp;ssl=1 359w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/12926c4.jpg?resize=150%2C150&amp;ssl=1 150w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/12926c4.jpg?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/12926c4.jpg?resize=90%2C90&amp;ssl=1 90w" sizes="auto, (max-width: 359px) 100vw, 359px" /><figcaption id="caption-attachment-5130" class="wp-caption-text"><em>John Hanke</em></figcaption></figure>
<p>Before Niantic, Hanke ran Google Maps, Google Earth, and other geo areas, and before Google, he was the cofounder and CEO of Keyhole, the innovative geo-mapping and visualization company. Google bought Keyhole in 2004, bringing Hanke into the search engine’s fold to lead its maps, earth, street view, and local divisions.</p>
<p>Now, he told me, rather than let him leave to scratch his entrepreneurial itch yet again and do another startup, Google gave him a semi-autonomous group to, as his LinkedIn profile suggests, experiment at the “intersection of mobility, real world, and the Internet.”</p>
<p>“We set up Niantic as a group that could explore new types of mobile apps with ubiquitous always-on features,” Hanke said. “And we’re set up to act like a start-up.”</p>
<h3>Virtual + physical = field trip</h3>
<p>Field Trip was one of Niantic’s first creations, and while on the surface it’s an app that helps you find cool stuff, ultimately it’s a tool to merge metadata and data and then present them together. While you’re in the physical world, Field Trip pulls data about that experience from digital sources, feeding you that information, and changing — deepening, enriching — your experience of place. Layering with history, perhaps, or science, or culture.</p>
<p>Because, after all, one rock is very much like another rock, but if this is the precise rock where Geronimo attacked Mexican soldiers armed with only a knife and his courage, that changes our experience of this particular place. And the merging/melding/layering of virtual and physical makes it more real, in a sense — hyperreal.</p>
<figure id="attachment_5131" aria-describedby="caption-attachment-5131" style="width: 245px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5131" alt="Google’s Field Trip app helps you explore “reality.”" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/ft-screenshot-5.png?resize=245%2C435&#038;ssl=1" width="245" height="435" /><figcaption id="caption-attachment-5131" class="wp-caption-text"><em>Google’s Field Trip app helps you explore “reality.”</em></figcaption></figure>
<p>Enabling that, of course, requires extensive virtual enhancement of the what-you-see-is-what-you-get world.</p>
<p>“One of the things that we’re trying to evangelize is the concept of geo-tagging everything,” Hanke told me. “I would have expected eight years ago that it would be ubiquitous now, but it’s still not. But I think we’ll get there.”</p>
<p>Geotagging everything digital is a key intersection point between virtual and real. If this blog post is written <i>here</i>, and not <i>there</i>, that adds flavor and nuance to the information. And if a particular historical fact is geotagged to a specific mapped location, that adds depth and dimension to our experience of that place.</p>
<p>“We’re applying some of the same techniques we currently use in standard web search, and the same kind of discipline, to pull really interesting, really good places up from everything else,” Hanke says. “The model is that you’re walking through an unfamiliar neighborhood, but with a friend who is telling you the best things around you. You enjoy it just like before, but you’re a little more informed.”</p>
<h3>AR + MMO + IRL</h3>
<p>Depth and dimension are definitely core components of Ingress, another Niantic Labs app/experiment/game. Ingress is a — take a deep breath — augmented reality massively multiplayer online video game.</p>
<p>The real world is real, but it’s fought over virtually by two shadowy groups: the Enlightened and the Resistance. Niantic has filled the Earth with virtual portals, usually coincident with actual physical landmarks or monuments, that players need to capture in order to gain territory. Capture territory with large numbers of people (aka “mind units”) and your faction gets more powerful.</p>
<p>Clearly, the massive integration of Google mapping technology with a sophisticated gaming engine is required. And the result is another intersection between the real and the virtual.</p>
<p>“Ingress is a massively multiplayer online game designed for mobile, with real location-based connections,” Hanke told me.</p>
<p>You play with everyone in your faction, and you might meet up with other players in real life, or you may just know them virtually as team members in another area. Along the way, Google learns an awful lot about how you use your mobile devices, about mapping physical locations, and about overlaying cyberspace on meatspace.</p>
<figure id="attachment_5132" aria-describedby="caption-attachment-5132" style="width: 633px" class="wp-caption aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" class=" wp-image-5132 " alt="Ingress’ field of play is the world, layered with virtual data." src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-7-05-27-am.png?resize=633%2C383&#038;ssl=1" width="633" height="383" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-7-05-27-am.png?w=703&amp;ssl=1 703w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-7-05-27-am.png?resize=300%2C181&amp;ssl=1 300w" sizes="auto, (max-width: 633px) 100vw, 633px" /><figcaption id="caption-attachment-5132" class="wp-caption-text"><em>Ingress’ field of play is the world, layered with virtual data.</em></figcaption></figure>
<p>All of that knowledge is going to come in very handy with Google Glass.</p>
<h3>Endgame: Google Glass?</h3>
<p>Hanke is cautious when speaking about Google Glass, as is the PR handler who is copiloting our conversation. Even already public information is a question mark as we chat: Google is definitely being Apple-like in the control and distribution of Glass and its future.</p>
<p>But some tantalizing tidbits do come out.</p>
<p>“We definitely kinda had Google Glass in mind when we started work on apps at Niantic,” Hanke says. “We need mobile devices that are less intrusive than the phone is.”</p>
<figure id="attachment_5133" aria-describedby="caption-attachment-5133" style="width: 300px" class="wp-caption alignright"><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-5133" alt="A model demonstrates Google’s new Project Glass technology." src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/glass.jpg?resize=300%2C238&#038;ssl=1" width="300" height="238" /><figcaption id="caption-attachment-5133" class="wp-caption-text"><em>A model demonstrates Google’s new Project Glass technology.</em></figcaption></figure>
<p>And we need devices with different input/output modalities, he says. After all, it’s not easy to play Ingress running around holding an expensive and fragile device in front of you like a window ripped from its frame. And yet you need that portal from the physical to the virtual. For instance, while Field Trip is great to open the doors on human context for the world around us, it threatens to detract from our experience of the world by redirecting our eyes from the ultimate big screen of reality to the small screen of our mobile device.</p>
<p>Google Glass, on the other hand, sits unobtrusively on our foreheads, leaving our hands free and providing data as an overlay on top of the physical world rather than an alternative to the physical world. That model of layering, mixing, and intersecting is top-of-mind for Hanke.</p>
<p>“It just can’t be the case that people are walking around heads down tapping on a screen,” he says. “That just can’t be the future of the human race.”</p>
<h3>Cyborg me now</h3>
<p>Which, of course, is exactly what’s at issue: the future of the human race. Or, at least how we ingest, consume, and reconstitute digital data. And analog data. And meld the two into one harmonious whole of knowing.</p>
<p>That’s perhaps a little metaphysical for a small division of Google that focuses on maps and games and apps.</p>
<p>But the web has <a href="http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/">rewired our brains</a> in a decade or so of virtually ubiquitous Internet access, and the smartphone has rewired our behavior in five years, taking us from creatures who look up to see others to beings that look down at any opportunity to see small bits of plastic and glass and metal in our hands.</p>
<p>So is it really too much to expect a transformation that brings us from clear divisions between what is real and what is virtual to an elegant blend of the two?</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-5134" alt="screen-shot-2013-04-29-at-7-08-50-am" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-7-08-50-am.png?resize=558%2C293&#038;ssl=1" width="558" height="293" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-7-08-50-am.png?w=558&amp;ssl=1 558w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2013/05/screen-shot-2013-04-29-at-7-08-50-am.png?resize=300%2C157&amp;ssl=1 300w" sizes="auto, (max-width: 558px) 100vw, 558px" /></p>
<p>“This is not psychosis or some cognitive break, but an actual takeover of the mind,” Google’s introductory video for the Ingress game says ominously.</p>
<p>Art imitates life, I suppose, and life, in turn, imitates art.</p>
<p>Written by: <a href="http://venturebeat.com/author/johnkoetsier/">John Koetsier</a>, <a href="http://venturebeat.com/2013/05/01/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/">VentureBeat</a> (via <a href="http://ispr.info/2013/05/14/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/">Presence</a>)<br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2013/05/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/">How Google is Melding Our Real and Virtual Worlds with Games, Apps … and Glass</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2013/05/how-google-is-melding-our-real-and-virtual-worlds-with-games-apps-and-glass/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5126</post-id>	</item>
		<item>
		<title>Windows 8 — Disappointing Usability for Both Novice and Power Users</title>
		<link>https://www.situatedresearch.com/2012/11/windows-8-disappointing-usability-for-both-novice-and-power-users/</link>
					<comments>https://www.situatedresearch.com/2012/11/windows-8-disappointing-usability-for-both-novice-and-power-users/#_comments</comments>
		
		<dc:creator><![CDATA[Matthew Sharritt, Ph.D.]]></dc:creator>
		<pubDate>Mon, 26 Nov 2012 17:26:23 +0000</pubDate>
				<category><![CDATA[HCI]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Usability]]></category>
		<category><![CDATA[Aesthetics]]></category>
		<category><![CDATA[Cognition]]></category>
		<category><![CDATA[Communication]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Human Factors]]></category>
		<category><![CDATA[Mobile Devices]]></category>
		<category><![CDATA[Personalization]]></category>
		<category><![CDATA[Usability Research]]></category>
		<category><![CDATA[Usability Testing]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[User Interface]]></category>
		<category><![CDATA[User-Centered Design]]></category>
		<guid isPermaLink="false">http://www.situatedresearch.com/?p=4632</guid>

					<description><![CDATA[<p>Summary: Hidden features, reduced discoverability, cognitive overhead from dual environments, and reduced power from a single-window UI and low information density. Too bad. With the recent launch of Windows 8 and the Surface tablets, Microsoft has reversed its user interface strategy. From a traditional Gates-driven GUI style that emphasized powerful commands to the point of&#8230;</p>
<p>The post <a href="https://www.situatedresearch.com/2012/11/windows-8-disappointing-usability-for-both-novice-and-power-users/">Windows 8 — Disappointing Usability for Both Novice and Power Users</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<blockquote><p><strong>Summary:</strong> Hidden features, reduced discoverability, cognitive overhead from dual environments, and reduced power from a single-window UI and low information density. Too bad.</p></blockquote>
<p>With the recent launch of Windows 8 and the Surface tablets, Microsoft has reversed its user interface strategy. From a traditional Gates-driven GUI style that emphasized powerful commands to the point of featuritis, Microsoft has gone soft and now smothers usability with big colorful tiles while hiding needed features. <span id="more-4632"></span></p>
<p>The new design is obviously optimized for touchscreen use (where big targets <em>are</em> helpful), but Microsoft is also imposing this style on its traditional PC users because all of Windows 8 is permeated by the tablet sensibility.</p>
<p>How well does this work for real users performing real tasks? To find out, we invited 12 experienced PC users to test Windows 8 on both regular computers and Microsoft&#8217;s new <strong>Surface RT</strong> tablets.</p>
<h2>Double Desktop = Cognitive Overhead and Added Memory Load</h2>
<p>The Roman god Janus; Dr. Jekyll and Mr. Hyde; even Batman&#8217;s arch-foe Two-Face — human culture is fascinated by duality. We can now add Windows 8 to this list. The product shows two faces to the user: a tablet-oriented Start screen and a PC-oriented desktop screen.</p>
<p>Unfortunately, having <strong>two environments on a single device</strong> is a prescription for usability problems for several reasons:</p>
<ul>
<li>Users have to learn and <strong>remember where to go</strong> for which features.</li>
<li>When running web browsers in both device areas, users will only <strong>see (and be reminded of) a subset</strong> of their open web pages at any given time.</li>
<li><strong>Switching</strong> between environments increases the <strong>interaction cost</strong> of using multiple features.</li>
<li>The two environments work differently, making for an <strong>inconsistent</strong> user experience.</li>
</ul>
<h2>Lack of Multiple Windows = Memory Overload for Complex Tasks</h2>
<p>One of the worst aspects of Windows 8 for power users is that the product&#8217;s very name has become a misnomer. &#8220;Windows&#8221; <strong>no longer supports multiple windows</strong> on the screen. Win8 does have an option to temporarily show a second area in a small part of the screen, but none of our test users were able to make this work. Also, the main UI restricts users to a single window, so the product ought to be renamed &#8220;<strong>Microsoft Window</strong>.&#8221;</p>
<p>The single-window strategy works well on tablets and is required on a small phone screen. But with a big monitor and dozens of applications and websites running simultaneously, a high-end PC user definitely benefits from the ability to see multiple windows at the same time. Indeed, the most important web use cases involve collecting, comparing, and choosing among several web pages, and such tasks are much easier with several windows when you have the screen space to see many things at once.</p>
<p>When users can&#8217;t view several windows simultaneously, they must keep information from one window in short-term memory while they activate another window. This is problematic for two reasons. First, human short-term memory is notoriously weak, and second, the very task of having to manipulate a window—instead of simply glancing at one that&#8217;s already open—further taxes the user&#8217;s cognitive resources.</p>
<h2>Flat Style Reduces Discoverability</h2>
<p>The Windows 8 UI <strong>is completely flat </strong>in what used to be called the &#8220;Metro&#8221; style and is now called the &#8220;Modern UI.&#8221; There&#8217;s no pseudo-3D or lighting model to cast subtle shadows that indicate what&#8217;s clickable (because it looks raised above the rest) or where you can type (because it looks indented below the page surface).</p>
<p>I do think Metro/Modern has more elegant typography than past UI styles and that the brightly colored tiles feel fresh.</p>
<p>But the new look sacrifices usability on the altar of looking different than traditional GUIs. There&#8217;s a reason GUI designers used to make objects look more detailed and actionable than they do in the Metro design. As an example, look at this settings menu:</p>
<p style="text-align: center;"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-4633" title="win8-settings-menu" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/11/win8-settings-menu.png?resize=344%2C281&#038;ssl=1" alt="" width="344" height="281" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/11/win8-settings-menu.png?w=344&amp;ssl=1 344w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/11/win8-settings-menu.png?resize=300%2C245&amp;ssl=1 300w" sizes="auto, (max-width: 344px) 100vw, 344px" /><em>The bottom of the Windows 8 settings menu on Surface RT.</em></p>
<p>Where can you click? Everything looks flat, and in fact &#8220;Change PC settings&#8221; looks more like the label for the icon group than a clickable command. As a result, many users in our testing didn&#8217;t click this command when they were trying to access one of the features it hides.</p>
<p>(In that task, we asked users to change the start screen background color. As a further problem, the very command label had misleading information scent for some users; they thought of the Surface as a tablet, not a &#8220;PC.&#8221;)</p>
<p>We also saw problems with users overlooking or misinterpreting tabbed GUI components because of the low distinctiveness of the tab selection and the poor perceived affordance of the very concept of clickable tabs.</p>
<p>Icons are flat, monochromatic, and coarsely simplified. This is no doubt a retort to Apple&#8217;s overly tangible, colorful, and extremely detailed &#8220;skeuomorphic&#8221; design style in iOS. For once, I think a compromise would be better than either extreme. In this case, we often saw users either not relating to the icons or simply not understanding them.</p>
<p>Icons are supposed to (a) help users interpret the system, and (b) attract clicks. Not the Win8 icons.</p>
<h2>Low Information Density</h2>
<p>The available advice on designing for the &#8220;modern UI style&#8221; seems to guide designers to create applications with extraordinarily low information density. See, for example, the following screenshots:</p>
<p style="text-align: center;"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-4634" title="win8-low-info-density-apps" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/11/win8-low-info-density-apps.jpg?resize=615%2C701&#038;ssl=1" alt="" width="615" height="701" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/11/win8-low-info-density-apps.jpg?w=615&amp;ssl=1 615w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/11/win8-low-info-density-apps.jpg?resize=263%2C300&amp;ssl=1 263w" sizes="auto, (max-width: 615px) 100vw, 615px" /><em>Start screens from the </em>Bing Finance<em> (top) and </em>Los Angeles Times<em> (bottom) apps for the Surface tablet.</em></p>
<p>Despite running on a huge 10.6-inch tablet, <cite>Bing Finance</cite> shows only a single story (plus 3 stock market quotes) on the initial screen. The <cite>Los Angeles Times</cite> is not much better: this newspaper app&#8217;s initial screen is limited to 3 headlines and an advertisement. In fact, they don&#8217;t even show the lead story&#8217;s full headline and the summary has room for only 7 words. Come on, this tiny amount of news is all you can fit into 1366 × 768 pixels?</p>
<p style="text-align: center;"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-4635" title="win8-latimes-browser" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/11/win8-latimes-browser.jpg?resize=615%2C346&#038;ssl=1" alt="" width="615" height="346" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/11/win8-latimes-browser.jpg?w=615&amp;ssl=1 615w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/11/win8-latimes-browser.jpg?resize=300%2C168&amp;ssl=1 300w" sizes="auto, (max-width: 615px) 100vw, 615px" /><em>www.latimes.com in the tablet-mode browser.</em></p>
<p>Visiting the newspaper&#8217;s website in Internet Explorer gives you much more information, though it&#8217;s unfortunate that the site doesn&#8217;t exploit the real estate offered by the widescreen aspect ratio on the Surface (and many full-sized computers). The website shows <strong>9 stories (and 3 ads) in the same space as the 3 stories</strong> offered by the Metro app. Plus we get <em>full</em> summaries of the top articles.</p>
<p>Yes, big photos are nice. Yes, spacious layouts are nice. But you don&#8217;t have to be a fanatic follower of Edward Tufte to want a bit more &#8220;data ink&#8221; on the screen.</p>
<p>As a result of the Surface&#8217;s incredibly low information density, users are relegated to incessant scrolling to get even a modest overview of the available information.</p>
<p>As it turns out, users didn&#8217;t mind horizontal scrolling on the Surface, which is interesting given that horizontal scrolling is a usability disaster for websites on desktop computers. Still, there&#8217;s such a thing as too much scrolling, and users won&#8217;t spend the time to move through large masses of low-density information.</p>
<h2>Overly Live Tiles Backfire</h2>
<p>Live tiles are one of the UI advances in Windows 8. Instead of always representing an app with the same static icon, a <strong>live tile summarizes current information</strong> from within the app. This works well when used judiciously. Good examples include:</p>
<ul>
<li>Weather app showing current (or predicted) temperature and precipitation</li>
<li>Email app showing the subject line of the latest incoming message</li>
<li>Calendar app showing your next appointment</li>
<li>Stock market app showing the current market level</li>
</ul>
<p>Unfortunately, application designers immediately went overboard, turning live tiles into hyper-energized ones. To illustrate &#8230;</p>
<p>Quick, without reading the caption, which apps do the following 4 tiles represent?</p>
<p style="text-align: center;"><img data-recalc-dims="1" loading="lazy" decoding="async" class="aligncenter size-full wp-image-4636" title="win8-live-tiles" src="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/11/win8-live-tiles.png?resize=646%2C328&#038;ssl=1" alt="" width="646" height="328" srcset="https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/11/win8-live-tiles.png?w=646&amp;ssl=1 646w, https://i0.wp.com/www.situatedresearch.com/wp-content/uploads/2012/11/win8-live-tiles.png?resize=300%2C152&amp;ssl=1 300w" sizes="auto, (max-width: 646px) 100vw, 646px" /><em>Live tiles for (clockwise from upper left): Urbanspoon, <cite>Los Angeles Times</cite>, Newegg, and Epicurious.</em></p>
<p>Newegg is the only app that includes its full name in the tile. When we asked participants to use the other apps, they couldn&#8217;t find them. This on a new tablet with only a few applications installed. We know from our user testing of other tablets and mobile devices that users quickly accumulate numerous applications, most of which they rarely use and can barely recognize—even with static icons that never change.</p>
<p>The theory, no doubt, is to attract users by constantly previewing new photos and other interesting content within the tiles. But the result makes the Surface start screen into an incessantly blinking, unruly environment that feels like dozens of carnival barkers yelling at you simultaneously.</p>
<h2>Charms Are Hidden Generic Commands</h2>
<p>One of the most promising design ideas in Windows 8 is the enhanced use of generic commands in the form of the so-called &#8220;charms.&#8221; The charms are a panel of icons that slide in from the screen&#8217;s right side after a flicking gesture from its right edge (on a tablet) or after pointing the mouse to the screen&#8217;s upper-right corner (on a computer).</p>
<p>The charms panel includes features like <em>Search</em>, <em>Share</em> (including email), and <em>Settings</em> that apply to whatever content the user is currently viewing. In principle, it&#8217;s great to have these commands universally available in a single, uniform design that&#8217;s always accessed the same way.</p>
<p>In practice, the charms work poorly — at least for new users. The old saying, <strong>out of sight, out of mind</strong>, turned out to be accurate. Because the charms are hidden, our users often forgot to summon them, even when they needed them. In applications such as Epicurious, which included a visible reminder of the search feature, users turned to search much more frequently.</p>
<p>Hiding commands and other GUI chrome makes sense on small mobile phones. It makes less sense on bigger tablet screens. And it makes no sense at all on huge PC screens.</p>
<p>Furthermore, the charms don&#8217;t actually work universally because they&#8217;re not true generic commands. In our test, users often clicked <em>Search</em> only to be told, &#8220;This application cannot be searched.&#8221; Enough disappointments and users will stop trying a feature. (Also, of course, it violates basic usability guidelines; that is, you shouldn&#8217;t tease users by offering a feature that isn&#8217;t actually available.)</p>
<p>Finally, not all users understood that the commands are context dependent and do different things on different pages.</p>
<p>Many other features are initially hidden and are revealed only when users perform specific and often convoluted gestures. For example, all of our users had great difficulty with an extraordinarily basic task: changing the city in the weather app. Obvious gestures, such as clicking the name of the current city to change locations, didn&#8217;t work. Users&#8217; difficulties were exacerbated by the fact that the &#8220;Modern&#8221; GUI style doesn&#8217;t indicate which words and fields are active and/or can be changed.</p>
<p>What&#8217;s the long-term usability of the hidden features in Windows 8? We might expect users to grow accustomed to the need to reveal the charms and other non-visible commands, even though this imposes additional cognitive overhead on using the system. That is, people must <em>think</em> to do something, rather than being <em>reminded</em> to do something, and thus users will sometimes neglect useful Win8 features.</p>
<p>Also, the familiarity bred by long-term use might be counteracted by the fact that well-designed <strong>websites have trained users to expect important features to be shown</strong> directly in the context in which they&#8217;re needed. You simply can&#8217;t design a website with hidden features and expect it to be used: website features are usually ephemeral, meaning that they must be explicitly represented if they&#8217;re to gather any use.</p>
<p>Thus, people&#8217;s experience with the web exerts a powerful pull in the direction of expecting visible features. It remains to be seen whether the Surface tablet&#8217;s physical presence creates enough of an opposing pull to remind people to look for hidden features when they&#8217;re using Surface apps.</p>
<h2>Error-Prone Gestures</h2>
<p>The tablet version of Windows 8 introduces a bunch of complicated gestures that are easy to get wrong and thus dramatically reduce the UI&#8217;s learnability. If something doesn&#8217;t work, users don&#8217;t know whether they did the gesture wrong, the gesture doesn&#8217;t work in the current context, or they need to do a different gesture entirely. This makes it hard to learn and remember the gestures. And it makes actual use highly error-prone and more time-consuming than necessary.</p>
<p>The worst gesture might be the one to reveal the list of currently running applications: you need to first swipe from the screen&#8217;s left edge, and then immediately reverse direction and do a small swipe the other way, and finally make a 90-degree turn to move your finger to a thumbnail of the desired application. The slightest mistake in any of these steps gives you a different result.</p>
<p>The UI is littered with <strong>swipe ambiguity</strong>, where similar (or identical) gestures have different outcomes depending on subtle details in how they&#8217;re activated or executed. For example, start swiping from the right to the left and you will either scroll the screen horizontally or reveal the charm bar, depending on exactly where your finger first touched the screen. This was very confusing to the users in our study.</p>
<h2>Windows 8 UX: Weak on Tablets, Terrible for PCs</h2>
<p>As mentioned in the introduction, Windows 8 encompasses two UI styles within one product. Windows 8 on mobile devices and tablets is akin to Dr. Jekyll: a tortured soul hoping for redemption. <strong>On a regular PC, Windows 8 is Mr. Hyde</strong>: a monster that terrorizes poor office workers and strangles their productivity.</p>
<p>Although Win8 has usability issues on tablets, there&#8217;s nothing that a modest redesign can&#8217;t fix. In fact, usability could be substantially improved by revising the application guidelines to emphasize restrained use of active tiles, higher information density, better visibility of key features, and many other usability guidelines we&#8217;ve already discovered in testing other tablets.</p>
<p>(I was stunned to see the <cite>Architectural Digest</cite> app for Surface replicate a host of well-documented usability bloopers, such as not making the cover headlines clickable. Swipe ambiguity ran rampant, and users were often lost in this app&#8217;s confusing combination of vertical and horizontal scrolling. All of this could have been avoided by reading reports we have published <em>for free</em>. I can just barely understand companies that ruin their user experience because they don&#8217;t want to pay $298 to find out what the usability research says. But to create a bad app while saving no money at all is simply baffling.)</p>
<p>I have great hopes for Windows 9 on mobile and tablets. Just as Windows 7 was &#8220;Vista Done Right,&#8221; it&#8217;s quite likely that the touchscreen version of Windows 9 will be &#8220;Metro Done Right.&#8221;</p>
<p>The situation is much worse on regular PCs, particularly for knowledge workers doing productivity tasks in the office. This used to be Microsoft&#8217;s core audience, and it has now <strong>thrown the old customer base under the bus</strong> by designing an operating system that removes a powerful PC&#8217;s benefits in order to work better on smaller devices.</p>
<p>The underlying problem is the idea of recycling a single software UI for two very different classes of hardware devices. It would have been much better to have two different designs: one for mobile and tablets, and one for the PC.</p>
<p>I understand why Microsoft likes the marketing message of &#8220;One Windows, Everywhere.&#8221; But this strategy is wrong for users.</p>
<h2>I Don&#8217;t Hate Microsoft</h2>
<p>Because this column is very critical of Microsoft&#8217;s main product, some people will no doubt accuse me of being an Apple fanboy or a Microsoft hater. I&#8217;m neither. I switched from Macintosh to Windows many years ago and have been very pleased with Windows 7.</p>
<p>I am a great fan of the dramatic &#8220;ribbon&#8221; redesign of Office (we later gave several awards to other applications that adapted this UI innovation), and I proclaimed the Kinect an &#8220;exciting advance in UI technology.&#8221; I have many friends who work at Microsoft and know that it has many very talented usability researchers and UI designers on staff.</p>
<p>I have nothing against Microsoft. I happen to think that Windows 7 is a good product and that Windows 8 is a misguided one. I derived these conclusions from first principles of human–computer interaction theory and from watching users in our new research. One doesn&#8217;t have to hate or love a company in order to analyze its UI designs.</p>
<p>I&#8217;ll stay with Win7 the next few years and hope for better times with Windows 9. One great thing about Microsoft is that they do have a history of correcting their mistakes.</p>
<p>Written by: <a href="http://www.useit.com/alertbox/windows-8.html">Jakob Nielsen, Alertbox</a><br />
Posted by: <a href="https://www.situatedresearch.com">Situated Research</a></p>
<p>The post <a href="https://www.situatedresearch.com/2012/11/windows-8-disappointing-usability-for-both-novice-and-power-users/">Windows 8 — Disappointing Usability for Both Novice and Power Users</a> appeared first on <a href="https://www.situatedresearch.com">Situated Research</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.situatedresearch.com/2012/11/windows-8-disappointing-usability-for-both-novice-and-power-users/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4632</post-id>	</item>
	</channel>
</rss>
