Archive for the ‘Usability’ Category

Why I Don’t Like Infinite Scrolling

May 31, 2014

Infinite scrolling, a somewhat recent trend in Web design, is a technique in which long lists of items, rather than being broken into separate pages, are loaded a few at a time via AJAX and appended to the current page. If you’re not familiar with it, you can find more information in a Smashing Magazine article by Yogev Ahuvia: “Infinite Scrolling: Let’s Get To The Bottom Of This.” Ahuvia tries to present a balanced look at the strengths and weaknesses of the technique, but it seems that there are more cons than pros. The comments are overwhelmingly negative.
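If you’ve never looked at how it works under the hood, a minimal sketch in JavaScript goes something like this (the /api/items endpoint and the #item-list container are invented for illustration; real implementations vary):

    // Rough sketch of infinite scrolling. The endpoint (/api/items) and the
    // #item-list container are hypothetical stand-ins.
    let nextPage = 2;       // assume page 1 was rendered by the server
    let loading = false;

    window.addEventListener('scroll', async () => {
      // When the viewport nears the bottom of the document, fetch more items
      // and append them, without the user ever asking for them.
      const nearBottom = window.innerHeight + window.scrollY
                         >= document.body.offsetHeight - 200;
      if (!nearBottom || loading) return;

      loading = true;
      const response = await fetch(`/api/items?page=${nextPage}`);
      const html = await response.text();
      document.querySelector('#item-list').insertAdjacentHTML('beforeend', html);
      nextPage += 1;
      loading = false;
    });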

In addition to Ahuvia’s piece, Hoa Loranger’s “Infinite Scrolling is Not for Every Website” says that infinite scrolling “plays a nasty trick” because it “breaks the scrollbar,” and concludes that the technique is “not the answer for most websites.” Dan Nguyen and Dmitry Fadeyev both write about how infinite scrolling didn’t work when Etsy tried using it for search results. There’s even an xkcd cartoon.

I’ll admit to being a bit biased in my selections, but I haven’t seen nearly as much praise for the technique as I have criticism of it. This doesn’t surprise me. Personally, I don’t like infinite scrolling at all. It doesn’t seem to be solving a real problem, at least as far as I can tell, but it certainly causes problems.

The problem that most often affects me personally is the jerking effect that occurs when I try to scroll by clicking-and-dragging the scrollbar. When there’s not a lot of content loaded, the sliding portion (called the “thumb” if the Wikipedia article is to be believed) is fairly tall. As more content loads, not only does the thumb shrink, but the point on the scrollbar representing where I was also moves out from under the pointer. As soon as I move the mouse again, the thumb jumps toward the pointer and the viewport winds up somewhere I didn’t expect. It’s very disorienting.

I’ve noticed that this isn’t exactly the behavior I’ve been encountering lately. Instead, on occasion, I find that my mouse pointer ends up below the scrollbar’s thumb, yet the thumb still moves with the mouse, much the way it keeps moving when the pointer slides off it to the left or right. Unfortunately, in my experience, this doesn’t stop the page from jumping around a bit when the new content first loads, so I still lose my place even if the viewport ends up in more or less the same spot. I’m not sure whether there’s a script that fixes it or whether browser vendors have made efforts to accommodate infinite scrolling; Benjamin Milde mentions in a comment on Ahuvia’s article that he sees the above behavior in Firefox but not Chrome, so maybe that’s it.

One especially annoying situation occurs when infinite scrolling is implemented on a page that has a footer. There is something at the bottom of the page, but the user can’t actually read it, because as soon as it’s scrolled into view, it gets pushed back off-screen by the newly-loaded content. Making sure there’s nothing under the infinitely-scrollable column might seem obvious enough, but it does get overlooked every now and then. MorgueFile, for instance, has this problem.

In fact, according to Ahuvia, even Facebook did this (at least at the time that article was written). As I look at Facebook now, there’s a quasi-footer at the bottom of the right-hand column, but it has far fewer links than the footer in Ahuvia’s screen shot. As far as I can tell, the full footer is gone entirely; after several minutes, I gave up trying to reach the point at which Facebook refuses to load any more news feed posts, so I can’t say for sure.

Another issue is that infinite scrolling automatically loads content in response to an action, namely scrolling, that doesn’t normally trigger downloads. It’s bad enough that the page is taking action without the user’s permission, but downloading additional content this way can be a problem for people with slow connections or data caps. How serious a problem depends on what’s being loaded: another handful of DuckDuckGo search results won’t hurt much, but another couple dozen Google Image Search results might. In any case, I think users would like to decide for themselves how much whittling away at their data allowances is acceptable.

Finally, infinite scrolling tends to create a continuous stream of content with no end in sight. This problem is not unique to infinite scrolling: Some pages on deviantArt (but not others) have back/next buttons but no way to jump to specific pages and no indication of how many pages there are in total or which page is the current one. Neither is it impossible for an infinitely scrolling page to avoid this problem: Discourse, an open-source forum project that uses infinite scrolling, solves it with a floating box indicating the post currently being viewed and the total number of posts in the thread.
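An indicator like Discourse’s is simple enough to sketch, assuming the server sends the total post count along with the thread and that posts load in order from the top; the selectors and the count below are invented:

    // Sketch of a Discourse-style position indicator: a floating box showing
    // which post is in view and how many there are in total.
    const totalPosts = 120;   // in practice, supplied by the server
    const indicator = document.querySelector('#position-indicator');

    window.addEventListener('scroll', () => {
      const posts = document.querySelectorAll('.post');
      let current = 0;
      posts.forEach((post, i) => {
        // The last post whose top edge has passed the top of the viewport
        // is the one currently being read.
        if (post.getBoundingClientRect().top <= 0) current = i;
      });
      indicator.textContent = `${current + 1} / ${totalPosts}`;
    });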

It’s worth noting that infinite scrolling (without an indicator like Discourse’s) is often used for things like social network posts and search results for which people frequently don’t care about being able to keep their place; indeed, keeping “a place” in such contexts is often meaningless, because what’s on “page 5 of 10,123” today might be on “page 120 of 11,050” tomorrow as new content is posted and sort algorithms are adjusted. On the other hand, even if the association of a certain page number to certain results is ephemeral, it can still be useful for users returning to the result list using the Back button. Besides, I prefer to be able to decide for myself whether I need pagination.

One thing that would solve most of my complaints would be the solution that deviantArt uses (in addition to optionally switching to back/next buttons): Instead of loading more content as soon as the bottom of the page is scrolled into view, the page displays a “Show more” button. This adds a bit of friction to the process of loading more content, but it also puts control back in the user’s hands. It still has the potential to break the things that AJAX in general breaks, such as the back button and the ability to bookmark or share URLs (especially when sharing with non-JavaScript users), but so does infinite scrolling, and in either case these problems already have solutions in widespread use.
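The loading code can be exactly the same as in the earlier sketch; the only difference is what triggers it (the endpoint and selectors are still placeholders):

    // Sketch of the "Show more" approach: identical loading logic, but it
    // runs only when the user asks for it.
    const button = document.querySelector('#show-more');
    let nextPage = 2;

    button.addEventListener('click', async () => {
      button.disabled = true;   // avoid duplicate requests while loading
      const response = await fetch(`/api/items?page=${nextPage}`);
      const html = await response.text();
      document.querySelector('#item-list').insertAdjacentHTML('beforeend', html);
      nextPage += 1;
      button.disabled = false;
    });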

For that matter, simply using AJAX to implement pagination would solve the problems as well, not add much more friction than the “Show more” button, and not lack much of anything that infinite scrolling offers except the ability to return to previous pages just by scrolling up. A hybrid design could potentially address even that issue, if the feature turns out to be really necessary to some application.
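Here’s a rough sketch of that idea. history.pushState is one of those fixes already in widespread use; it keeps the back button and bookmarkable URLs working even though the content is fetched with AJAX (the endpoint is, as before, made up):

    // Sketch of AJAX pagination. Replacing the list instead of appending to
    // it keeps the scrollbar honest; pushState keeps URLs shareable.
    async function render(page) {
      const response = await fetch(`/api/items?page=${page}`);
      document.querySelector('#item-list').innerHTML = await response.text();
    }

    // Page-number links would call this instead of navigating.
    function goToPage(page) {
      history.pushState({ page }, '', `?page=${page}`);
      render(page);
    }

    // Let the back/forward buttons restore the page the user was viewing.
    window.addEventListener('popstate', (event) => {
      if (event.state) render(event.state.page);
    });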

To be honest, I just don’t see an advantage to infinite scrolling. There may be a few minor benefits, but there are other ways to get them, and they don’t justify the high cost in usability. As far as I’m concerned, infinite scrolling is a bad idea, and it should probably be avoided.

Functionality hidden behind badly named settings

December 28, 2013

One of the things that annoys me the most about software is stupid usability issues.

For instance, when you see a checkbox on an options page with the label, “check my spelling as I type,” what do you think it does? I think it’s a reasonable assumption that it refers to the automatic spell-check that puts squiggly red lines under any words that the spell-checker doesn’t understand. After all, that is the spell-checking that occurs “as I type.”

Screen shot of Windows Live Mail options, including "Check my spelling As I type" checkbox

“Check my spelling as I type” in Windows Live Mail options

Screen shot of the new message window with a misspelled word underlined in red

Automatic spell-check underlining a misspelled word

In reality, however, that checkbox controls more than just the automatic highlighting of misspellings. Unchecking it also disables the on-demand spell check that would normally be available from the menu bar. The button that normally activates the spell check is grayed out.

Screen shot of the new message window with the Editing menu open and the spell-check icon enabled

Editing menu with the spell-check icon enabled when “Check my spelling as I type” is selected

Screen shot of Editing menu with spell-check icon disabled

Editing menu with the spell-check icon disabled when “Check my spelling as I type” is cleared.

The problem is not simply that the description is unclear. As I said before, it seemed extremely clear that the checkbox enables and disables the spell-checking that occurs as the user types. The problem is, that’s not what the setting actually does.

A simple solution would be to rename the setting to “Enable spell-checking” or something along those lines. More usefully, the setting could be modified to do what it says, and a separate setting could be provided to turn the on-demand spell checker on and off. I honestly don’t know why they share the setting in the first place.

Hat tip to the user poor1 at ComputerAct!ve forums for posting the solution to the grayed-out spell-check problem.


The Self-Fulfilling Prophecy of Computer-Illiteracy

Working in tech support, I often hear people tell me how computer-illiterate they are. Sometimes they even tell me that they think they’re stupid. I don’t think that computer-illiteracy is the result of stupidity so much as a self-fulfilling prophecy, fed by an industry that makes money off the myth that products can just work with no understanding necessary. I believe that everyone who uses a computer would benefit from a little basic knowledge about how they work, so I think this cycle of ignorance needs to be broken.

Part of the problem is that thinking you’re bad at anything can be a self-fulfilling prophecy: People who have a tough time with computers convince themselves that they’re not good with computers, so they lose interest or get frustrated and spend less time trying to learn. This means they end up not knowing much about computers, which causes them to have a tough time with computers. You could replace computers with sports, music, or just about anything, and the principle would still apply.

But the industry seems to feed into this problem by convincing people that their devices should just work without their having to know very much about them. So when, inevitably, something doesn’t just work, these people think that they must be extra stupid. They don’t understand the basics because they think everything should be obvious. Because they think they’re missing something obvious, and because manufacturers try so hard to hide how the computer actually works, these less tech-savvy users get fed up and discouraged from even trying to learn the basics.

Consequently, many users are missing fundamental information about the computer systems they use. They don’t know the difference between a Web page and a native smart phone app—or even a desktop app in some cases. They don’t know what a Web browser is, the difference between it and a search engine, or that there’s a boundary between the browser and the page it’s displaying. Some don’t even know that the desktop or the Windows 8 Start screen isn’t the Internet. Never mind the distinction between the local wireless network and the Internet.

I’m sure that, to many, this all sounds like technobabble that they can safely ignore. It’s not. It’s basic stuff. It’s like the difference between your car and the highway and gasoline.

If we’re going to break this cycle, we’ll have to convince people of two things: First, that they are smart enough to understand basic information about computers, and second, that this information is worth learning.

The second point is easy enough to demonstrate if we go back to that bit I mentioned earlier about things not just working like they’re supposed to. When you expect things to just work and you don’t know or care how, you’re lost when they inevitably fail. But if you know a little bit about how the pieces fit together, you’ll usually be able to take the first steps toward figuring out what went wrong and why, which in turn is the first step toward fixing it. Does that mean you’ll actually be able to fix it yourself? Not always, but it will give you a better idea of whom you need to call for help and what to tell them. That can save you time, and maybe even money. Sound good?

The first point might be a little bit harder to demonstrate, since it contradicts the deeply-entrenched idea that computers are just too hard for normal people, who should therefore let someone else hold their hands and do the thinking for them. I like to hope that we could fight this by putting the information out there in a really simple and easily-understood form, and letting people decide for themselves whether it’s too hard. Maybe I’m wrong about that, but I’d be willing to try it.


Google and Responsibility

July 24, 2012

Last month, I wrote about Google’s URL rewriting. I made the (admittedly a bit hyperbolic) claim that Google’s URLs broke UI across the Web, because people copy-paste links from Google search pages, and end up pasting links that point not to the intended Web page, but to Google’s redirection service. This makes it difficult to see the target URL by checking the status bar, and it causes the link not to be displayed in the visited-link color even if the user has visited the target page.

The question: Is that really Google’s fault?

My first instinct is to say yes. For one thing, Google holds a lot of power. It is basically the start page for the Internet. There are people who can’t even find Facebook without doing a Google search for it. (Perhaps I shouldn’t criticize them, though, as I once stupidly did a Google search for Bing.) Whatever Google does is bound to have an effect on everyone else on the Internet.

Besides, common wisdom dictates that Web site owners use the rel="nofollow" attribute, which Google introduced to cut down on blog spam and keep spammers from affecting search results, and more generally that site owners optimize their sites to be readable by Google’s crawler. This applies to all search engines, really, but because (again) Google is so popular, it seems like it really is all about Google. I’m given the impression that Google thinks it’s everyone else’s responsibility to ensure that Googlebot can crawl the Web. (I do not claim that it is a correct impression, but it’s worth noting that I’m not the only one who got it.)

On the other hand, site owners do have certain responsibilities to their users. For instance, they should choose meaningful text and title attributes for links. They should use the alt and title attributes for images, especially where those images have important semantic meaning. Basic usability and accessibility measures like these help Googlebot just as much as they help users running Lynx or a screen reader. Googlebot can be seen as just another user, albeit a user on whom multitudes of other users depend. Consequently, being a good citizen of the Internet means playing nice with Google. The same goes for search engines in general.

Getting back to the redirects, I think the argument could be made that making sure you paste the right URLs to spare your users as much confusion as possible is one of those responsibilities that site owners have. I’m not saying everyone who posts a link to a forum or a blog’s comments should be held to this standard, but site owners ought to know that links to sites other than Google shouldn’t start with http://www.google.com.

Really, there should have been two layers of protection against the broken UI caused by Google’s redirects: Google and site owners. Both of those layers failed. All site owners, including Google, ought to make usability a top priority. Google added an unnecessary layer of difficulty for other site owners, who in turn should have paid closer attention to what they were pasting.

The bottom line is, if you’re on the Web, you have to behave responsibly, whether you’re Google or not.


Google’s Redirect URLs are a Pain

I’m a big believer in keeping the Web as simple as possible. Things that complicate the experience but don’t add any real value tend to annoy me. Not only are they (by definition) unnecessary, but they tend to have unwanted side effects.

One such annoyance is Google’s habit of displaying links on its search results page that lead, not to the actual result, but to a redirect script that forwards you to the result.

Among the (presumably) unintended side effects are various privacy and security issues, but I think Google’s privacy problems are well-covered enough that I don’t need to dwell on them here. If you’re interested, you might start with Lorelle VanFossen’s post (on Google+, incidentally), “Google URL Redirect Issues in Google Search Results, Privacy, Security, and Ewe.”

The problems I want to highlight have more to do with usability.

To illustrate, let’s try a little experiment. First, look up something on Wikipedia. For example, antelope. Next, try searching for it on Google. In the antelope example, the Wikipedia article should be the first result. (There’s no guarantee, of course.) Finally, try the same search on Bing. Again, for antelope, the Wikipedia article is first. (If you don’t like antelope, just use both search engines to look for any page you know you’ve already visited.)

Screenshot of a Google Search for "antelope"

Screenshot of a Bing search for "antelope" with the first result (the Wikipedia article) displayed in a different color

Notice a difference?

The first thing I want to point out, and the reason I posted these screenshots, is that the link in the Google results isn’t purple. When a link points to a URL the user has already visited, the browser usually displays it in a different color. Users depend on link color to navigate the Web. If a link ultimately leads to the same place the user has been but is directly pointing to a URL the user hasn’t visited yet, then it won’t be colored.

Another problem, which was brought up in one of the posts linked by VanFossen, is that the correct URL is no longer displayed in the status bar when the user hovers over or focuses on the link. Looking at the status bar to see where a link goes is a deeply-ingrained habit among users–so much so that browsers still display URLs in that area despite not even having status bars anymore. Google’s redirects break this not only by providing a different URL, but by making it so long and full of gobbledygook that the browser cuts it off. Thanks to that, even advanced users who know to look for the encoded original URL can’t see it. (Admittedly, Google displays the URL in plain text next to each result, but unfortunately, long URLs have the middles cut out of them.)

I should point out that these first two complaints aren’t quite true–at least not usually. Google uses some JavaScript that changes the link’s href attribute to point to the correct location when the page loads, but the onmousedown event changes it back to the redirect URL as soon as you try to click on it. This means that the browser renders the link in the right color and puts the right address in the status bar. Users with JavaScript disabled are out of luck, though.
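In other words, the behavior amounts to something like the following sketch. The URLs are illustrative, and Google’s actual script is no doubt more involved:

    // Sketch of the trick described above: the link carries the real URL for
    // status-bar display and visited-link coloring, and a mousedown handler
    // swaps in the redirect just before the click navigates.
    const target = 'http://en.wikipedia.org/wiki/Antelope';
    const link = document.querySelector('a.result');
    link.href = target;   // shown in the status bar; colored if visited

    link.addEventListener('mousedown', () => {
      // By the time the click completes, the href points at the redirect.
      link.href = '/url?q=' + encodeURIComponent(target);
    });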

Moreover, the JavaScript may help users viewing a Google results page, but it doesn’t stop the redirects from breaking copy-paste. It’s not uncommon for users to right-click a result in order to copy the URL. There is no onmouseup script that restores the correct URL, and even if there were, it wouldn’t work for users who navigate context menus by right-click-and-dragging to the desired option. This means that the redirect URL, and not the URL of the actual result, gets copied and pasted into blog posts, E-Mails, and forum posts.

It would seem to me that, in addition to breaking UI all across the Web, this would also work against what I assume is the whole point of the redirects in the first place: If Google is trying to track which search results people click, wouldn’t having them click the same links from completely different Web sites result in a bunch of false positives? They must be using referer headers or some other means to strip those out, since they wouldn’t be redirecting if it weren’t benefiting them. Maybe they’re gathering data on who’s copy-pasting from Google results.

Finally, the redirect script is another request that the user’s browser has to make. On modern, high-speed connections, this isn’t a big delay, but it is still noticeable. There are still dialup users out there, however, and things like this can be a major inconvenience for them.

The most annoying thing, I think, is that Google could solve many of these problems by using JavaScript to track outbound links instead of making users go through the redirect. Admittedly, sending the information back to the server is still another request, but it would fix the UI problems. I suppose Google just didn’t want to give up the data from users with JavaScript disabled. They get more information, and we get a less-usable Web. It just doesn’t sound like a fair trade-off to me.
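A sketch of that alternative might look like this, with a made-up /log endpoint. (navigator.sendBeacon is newer than this post; an asynchronous XMLHttpRequest or an image request would have done the same job at the time.)

    // Sketch of JavaScript-based outbound-link tracking: links point straight
    // at their targets, and clicks are reported separately, with no redirect.
    document.querySelectorAll('a.result').forEach((link) => {
      link.addEventListener('click', () => {
        // sendBeacon queues the report without delaying navigation.
        navigator.sendBeacon('/log', JSON.stringify({ clicked: link.href }));
      });
    });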


Just Google It

August 2, 2011

Something I see frequently is people trying to help others find something online by telling them to search for it. This typically takes a form something like: “It’s really easy to find. Just Google ‘[name of person]’ and ‘[topic].’” Sometimes they’ll even tell you how many links down from the top of the page it is. I see three main problems with this:

  1. Search engine results won’t be the same for everyone. Google, for instance, takes your location into account, and of course the company has been toying with using its social networking services to enhance search. (Take Google +1, for instance.) It’s also been my experience that the same keywords in a different order will yield slightly different results.
    • Ads are a bonus problem for people who tell you to click, let’s say, the third result. Not everyone knows to skip the ads at the top, and there’s no way of knowing whether the person posting the number knew it, either. Worse, the number of ads fluctuates.
  2. Things change. Even if everyone in the world using the same keywords would get the same results right now, there’s no guarantee that the same results would appear tomorrow, or in a week, or in five years. In fact, since the Web constantly changes, you can very nearly guarantee that won’t be the case.
  3. If it’s so easy to find, it seems to me that there’s no legitimate reason that the person sharing the information can’t do the search himself, post the link instead of the search terms, and save everyone else a step. It comes off looking like laziness.

The bottom line is this: If you want others to find a particular page on the Web, you should provide a link to that page. That’s the only way to ensure they see the page you had in mind, and it makes their lives easier.