…by which I mean CES, of course.
Granted, half of the things presented this week will never see the light of day, and the other half will ship three to six months later than announced. But still, today alone was just staggering. Motorola gave an Apple-grade presentation (if you can look past its comically bad audio). Olympus finally announced a Micro Four Thirds camera I’m willing to take the plunge on. Even Microsoft may be at risk of becoming relevant again.
I think what can be said about this batch of announcements is that this is the year everything is good enough. What I mean is that nearly all of the devices I’ve seen in the last couple of days are capable of convincing someone to give up the PC as their primary computing device. These aren’t just rehashed netbooks (relatively few, in fact, even have an Atom CPU) but devices everywhere from 3 inches on up that have enough juice to browse the web, handle email, play games, watch movies, find yourself on a map, and generally do what 90% of the market does with their PCs.
I’ve seen a lot of CES presentations in my time, but this is the first year that I’ve seen the writing on the wall for PCs as we know them. Now, okay, if you’re reading this, you’re one of two kinds of people: the ones who will be using your phone or tablet as your primary computing device by the end of the year, if you’re not already; or the ones who will still be lugging a 5-lb. clamshell device with a keyboard to your neighborhood Starbucks. Either way, in my opinion, you’re an outlier. You may need to type so frequently that a keyboard is always in your plans. Or you may be editing 4k video, or compiling an operating system. And that’s fine. PCs will still exist for those cases. But you’re still going to be affected by the trends in the industry.
What I want you to think about as you contemplate the death of the PC (or, say, Wintel, or the WIMP model, or what have you) is someone you know who’s not at all a geek. Maybe your mom, or the partner who stares glassy-eyed at you when you come home complaining about the latency of your DNS at work. Now, think: what do these people do with their computers all day? They browse the web. And by “the web”, I mean web-based email, Facebook, YouTube, Twitter, Netflix, their bank accounts, their stocks. Name me one thing 90% of these users need a Core i7 CPU for. Games? Only if they’re hardcore. Editing images or videos? Probably not worth the investment.
In the overall cost-to-benefit calculation, there’s going to be a lot more value given to size and battery life than to raw horsepower. And raw horsepower per dollar is really the only remaining benefit of the PC. They’re complicated, bulky, virus-prone, and they get slower over time. I looked at my in-laws’ mid-tower Windows machine like it was a record player: big, loud, a power hog… and, most importantly, asleep most of the time I was there, since they’d gotten my hand-me-down netbook as a present.
Meanwhile, you can walk into any mobile phone store in the US today and pick up a 1GHz computer with a half-decent browser for anywhere from $200 to nothing. Then you can shove it in your pocket. That’s powerful. And what we’re seeing this week shows us that the gap between the desktop and the pocket is not only narrowing, but it’s morphing in all kinds of ways. If Motorola is to be believed, the tablet battle will be joined by the Android Honeycomb-powered Xoom this spring; there will be at least one 960×540 phone in the near future; and Windows 8 is aiming for low-power CPUs as well. Consumer electronics companies aren’t tailoring their offerings for power users: they’re aiming squarely at the non-geek in the house. (Don’t feel threatened. It’s for the best.)
This week, we’re seeing what the non-Apple players in the market believe is the future of computing. And it looks to be the first time Apple will have to take the competition seriously.
Twitter unveiled a new redesign today. Very little of it is really noticeably different, until you look at the underlying code. Which, last I recall, used to be pretty good: they even used the fancy-pants XHTML 1.0 Strict doctype (though still using tables for layout, which the spec says they shouldn’t do).
But one thing about the latest version makes me wonder just what the hell they’re doing these days.
The <center> element. In September 2008. From the “it” Web 2.0 company. Seriously.
I know this will make me sound like the annoying standardista, but how can anyone who still uses <center> be doing web design professionally in, of all places, San Francisco? This is an element that has been deprecated for eleven years. Do we really have people who haven’t changed their coding practices since before 1997?
Sadly, yes. And the worse news is, they’re writing books. I just saw a book whose first edition was published in July 2008, which teaches users to use <center>, to lay out pages with tables, to use CSS primarily just for font selections, and loads of other outdated guidance. This is material from the bad old days of web design, and it simply gets regurgitated over and over again. To quote the late, great George Carlin, “it’s all bullshit, and it’s bad for ya.”
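For what it’s worth, the standards-friendly replacement has been trivial since CSS1. A minimal sketch of what those books should be teaching instead:

```html
<!-- Instead of <center>…</center>, center a block with CSS: -->
<div style="width: 40em; margin: 0 auto;">
  Centered column of content
</div>
<!-- …and use text-align: center on a parent element
     when it’s inline content that needs centering. -->
```

(In real work the styles belong in a stylesheet, not inline; the inline form here is just to keep the example in one place.)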
I don’t know what it is going to take to finally cull the proverbial herd of these kinds of authors and designers. But each time I see this, it makes me wonder when we can expect some kind of professionalism out of the average content producer. Many of us have been talking about this stuff for years. It’s de rigueur at many web conferences, to the point that people now roll their eyes at it. And yet, it continues. I also don’t know whether Twitter is doing this in-house or if they hired an external designer. But certainly, somebody there dropped the ball.
And I know that one <center> is not a big thing. It’s just a symptom of a larger disease: that of lazy, ignorant and/or incurious designers. When someone sticks to one way to do something without ever updating their own skill set, their designs get more and more inflexible. Which makes redesigns more and more difficult, and more expensive, all with less to show for it. Which brings us to the boxy gridlock we experienced in the 90s. Which is why standardistas get so angry about this stuff. We know that customers deserve better than this. We know that when customers find out how their designer painted them into a corner, it casts a shadow over all of us in those customers’ eyes.
The question that remains from all this is, how can the professionals in this field separate themselves from the amateurs? Really. I want suggestions. What concrete steps can we take to ensure that the good designers and developers, the ones who are always learning, who have a full and balanced skill set, don’t get lumped in (or worse, beaten out by) the ones who are locked in 1995? Who’s got an idea?
It’s funny how sometimes things get wrapped up in a little bow.
Last April, I was in San Francisco, giving my “Accessibility 2.0” talk at the first O’Reilly Web 2.0 Expo. Out of that conference came the seed for the project that I’ve been working on, and now, I’m happy to unveil it. This Wednesday, I’m flying off to speak at Web 2.0 Expo New York, to give a talk called “Universal Design for Web Applications” with my longtime colleague Wendy Chisholm.
The intervening 17 months have gone into our work on a book of the same name.
Universal Design for Web Applications just reached final manuscript status last Thursday. It’s scheduled to be published by O’Reilly in November.
We’re really excited about how the book turned out. We chose universal design as our standard to bear because we’re moving beyond accessibility, and applying the principles we’ve learned from accessible design to a whole new world of mobile devices like the iPhone, and lifestyle devices like the Asus Eee PC. The point here is that the days of knowing what your users’ screens look like are over. Even if accessibility weren’t a consideration, universal design is going to inform most of the big decisions web content producers are going to face in the near future. We in accessibility have been where those decision-makers will be, and we have a lot of advice to impart.
We have a lot of information on new topics like the Web Content Accessibility Guidelines 2.0 and the WAI-ARIA specification. We talk about video and script like they’re first-class citizens. And we do the same for Flash, Flex and Silverlight. The fact is that all of these technologies are going to be with us for a long time, and the faster we embrace them, and learn how to make them work for people, the better we will all be for it.
Here’s a shot of the cover:
I just had a horrible experience with eBay, which I think is summed up in this message I just sent to them.
I have a serious complaint with the way eBay sends automated messages. I was forwarded a message sent from eBay to the seller, which reads:
“I would like to have the item shipped to the address below:”
…followed by the shipping address in my eBay account.
However, I had never asked eBay to give that address as the shipping address, and in fact, the payment info I sent on PayPal had the correct address. As a result, the seller shipped the package, then went out of her way to return to the post office to change the shipping address from the _right_ one to the _wrong_ one–all because eBay says I told her to. And now the package will be delayed by a full week, and arrive at my home, where no one will be present to pick it up.
I hold eBay and its email notification system responsible for this.
eBay should never, under any circumstances, misrepresent its users by making statements beginning with the word “I”. You do not speak authoritatively on behalf of your users. I’m a designer and I understand the urge to sound human, but in this case, and I’m sure many others, you are doing more harm than good. I am extremely displeased that my package will be late, but even more upset that eBay sees fit to substitute its words for mine.
The same goes for any other e-commerce outfit out there. I know it sounds all down-home folksy to say things like “I would like to have the item shipped…” instead of “The buyer’s address is…”, but you don’t know what I want. You only know what your user tells you. And you should never communicate more than that.
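The fix, for anyone writing these templates, is easy to state: generate messages in the system’s voice, from facts the system actually has. A hypothetical sketch (none of this is eBay’s real code, obviously):

```javascript
// Hypothetical notification template: state only what the system
// actually knows, rather than putting words in the buyer's mouth.
function shippingNotice(buyer, addressOnFile) {
  // Bad:    "I would like to have the item shipped to the address below:"
  // Better: report the fact, and leave the speaking to the buyer.
  return "The shipping address on file for " + buyer + "’s account is:\n" +
         addressOnFile + "\n" +
         "Please confirm the address with the buyer before shipping.";
}

console.log(shippingNotice("bob", "123 Main St, Springfield"));
```

Had the message read like that, the seller would have checked with me instead of “correcting” a correct address.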
You know what I think should cause everyone to give at least some thought to accessibility?
I remember one day when I was 30. I threw off the sheets, shot my legs out to launch myself from the bed, took two steps forward, saw a blinding light… and found myself lying on the floor, unable to move for several minutes. It was my first back spasm, and it knocked me out of commission for a couple of weeks.
Suddenly, things I took for granted, like getting up and looking in the fridge, were things I had to consider. I didn’t want to go anywhere, because it hurt to breathe, much less move. But in that time, I had to fly cross-country to tend to my grandmother in her final days, and that meant managing my pain while my back was screaming in an airplane seat, and then being wrenched as I carried all my luggage. It was the first time that my mobility was reduced, the first time I preferred elevators to taking the stairs two at a time, and the first time I had to depend on other people to help me do what I considered to be basic tasks.
It seems that since then, every six months I get a reminder that my body is not necessarily my friend. Most recently, I strained a ligament in my foot while exercising. Let me tell you, foot injuries suck. When your foot hurts, you keep hoping it doesn’t get worse. And when it doesn’t, you’re scared to do anything that might aggravate it again. So I had a very strange weekend that involved walking with a cane to keep weight off my foot.
It’s easy to look at people with a visible disability and say, “I’m glad that’s not me.” But you know what? Sooner or later, it will be.
Your vision will likely be the first thing to go. You may strain to read small type, at first. Then, maybe you’ll try bifocals. After that, as the effects of presbyopia set in, you’ll come to rely on your glasses to read. Your vision may start to yellow a bit, as well.
But wait! There’s more!
Hearing loss is a common side effect of the aging process. You may also encounter problems with arthritis (by the way: you’re not resting your wrists on the wrist rest while you’re typing, are you? Are you?), or any of a host of other fine or gross motor dysfunctions that will advance over time. And you may find that your cognitive abilities aren’t as sharp as they once were. (Hopefully before those around you start talking about it.)
I started doing web development when I was 20. At the time, it was barely conceivable to us that people of a certain age would be using the web. We didn’t even know if the web itself was going to last. But here it is, still chugging after a dozen and a half years, and not looking a day over 10.
And nowadays, I look around at the people I’ve worked with, and some of them are really old. Like, in their fifties! Some have even retired — the kind of retired where they’re collecting Social Security and posting pictures of their grandkids to Flickr. Get it? They’re using Flickr. And YouTube. And Gmail, and Twitter, and especially eBay. They also tend to have money to spend, and companies tend to like people like that.
And yet, I still hear people dismissing accessibility for older people on the web. That’s not going to fly any longer. Younger people are coming up on the web, that’s true. But those of us already there are only getting older, and we’re not going to stop liking the web anytime soon.
Keep this in mind when you’re about to downplay whether older users will want to use your site. The right thing to ask is:
Will you want to use this site when you’re older?
Or maybe, do you want some 20-year-old smartass deciding you won’t?
Welcome, dear UIE Web Applications Summit participants, to my presentation links:
- Gez Lemon: Making Ajax work with screen readers
- James Edwards: Ajax and screenreaders: When can it work?
- Dushan Hanuska: Hijax
- Bruce Lawson: Ajax, Hijax and accessibility
- Jeremy Keith: Progressive Enhancement with Hijax
- Juicy Studio: Improving Ajax applications for JAWS users
- W3C: Roadmap for Accessible Rich Internet Applications (WAI-ARIA)
- Firefox 2 Voluntary Product Accessibility Template (VPAT)
- Mozilla accessibility slides (December 2006)
- Accessibility features in Firefox
- Mozilla accessibility projects
- Dojo Accessibility Strategy
“I think there’s almost a belligerence – people are frustrated with their manufactured environment. We tend to assume the problem is with us, and not with the products we’re trying to use.”
Jonathan Ive, chief designer at Apple, in an article in Time magazine, 9 Jan 2007
Nikesh Arora, who is Google’s VP for European operations, was quoted last week musing about the future of the iPod, saying, “In 12 years, why not an iPod that can carry any video ever produced?”
It’s a question I’ve been asking for a while now. The first time was in 2003, when I started my crusade for the personal server. I repeated myself shortly thereafter as I called out the analytical no-op that is John C. Dvorak. The situation has changed a little since that time with the video iPod, though the storage available on iPods has only doubled, from 40GB to 80GB. But the potential is there, and sooner or later, it will be realized. In fact, it needs to be realized in order to satisfy the growth needs of both the consumer electronics and entertainment industries.
On its face, the idea of an iPod that contains all recorded music is actually pretty feasible in the next decade or so. Figuring about 20,000 major releases a year, at 60 minutes a release, at 192kbps encoding, that’s only about 1.8 terabytes. I can say “only” to a number like that because I remember when an array of that size was a half million dollars, and then $50,000, and now I can buy a 2TB array at Fry’s for about $1000. A futurist can safely assume that anything available today for a grand will eventually be embedded in someone’s cerebral cortex at birth, so we’ll go with it.
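That figure checks out. A quick sanity check of the arithmetic, using only the assumptions stated above (decimal units throughout):

```javascript
// Back-of-the-envelope check of the storage estimate: 20,000 major
// releases a year, 60 minutes each, encoded at 192 kbps.
const releasesPerYear = 20000;
const secondsPerRelease = 60 * 60;
const bitsPerSecond = 192 * 1000;

const bytesPerRelease = secondsPerRelease * bitsPerSecond / 8; // ~86.4 MB
const tbPerYear = releasesPerYear * bytesPerRelease / 1e12;

// Roughly 60 years of back catalog at this rate lands on the order
// of the 100TB figure used below.
console.log(tbPerYear.toFixed(2) + " TB per year"); // ~1.73 TB
```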
Anyway, let’s say that’s all there is to it for now. In fact, let’s say that we have 100TB of disk to play with, and that’ll hold all of the major releases ever recorded. Is that what we really need? Well, probably not, for a lot of reasons. Firstly, unless it’s truly convenient to slap everything ever recorded onto a single storage device (where “convenient” means “more cost-effective than filtering it all at a central source”), you can achieve most of what you want in, say, five years. Assuming a collaborative filtering approach à la Last.fm, we’re already past the tipping point: storage is growing at a rate faster than users can fill it, and that empty disk is an opportunity to sell people what they might like.
The next piece of the puzzle, then, is the payment strategy. How do you grant access to any or all of this music, when it is in the wild? Once the bits themselves are decentralized, the commerce end of things needs to be decentralized as well, or the entire system provides no value to users. If I am in Darkest Africa and I want to listen to the Fugees, and know that The Score is on my device, but I have no way to unlock it, that’s a problem. The only strategy that makes any sense to me is a subscription model, but even that has implicit hooks into a central certifying authority to prevent freeloading.
A bigger question is this: what role does the network play? Let’s say Apple offers a 10TB iPod that contains all of the media an average human would want. What happens next Tuesday, when it’s out of date? How do they sync up with the latest releases? We’ve been working on syncing technology for years, and we’re still not that smart about it. We’re going to need something to keep everyone with one of these devices up to date (and paid up), and the network we have isn’t really the best strategy for that.
Maybe true broadcast technology would help — say, investing in a one-way radio data infrastructure that keeps everyone informed. SPOT, on steroids. Or maybe most of the work can be done virally, by peers syncing with one another ad hoc, and without interaction. Or have media pushed to clients as they shop or dine or play. Ad-hoc sharing is a really powerful idea, as the Zune people know, though the mechanisms currently in place to restrict that sharing have reduced its value overall.
There’s a lot more to think about here. Enough to make a career of it, in fact. Where do indie labels and artists fit in? How do they add new releases to the system, and how can they hope to be compensated? What is the role of YouTube, et al., in systems like this? What if you’re an American in France, or a Frenchman in America, or just culturally calibrated enough to want both? When do the walls between nations come down, so that we can experience all the media the world has to offer?
The answer is that we’re not prepared to build a framework to support pervasive media concepts like this until a few more things shake loose. The 80% case here can be achieved in the next couple of years, if not for legal affairs and the conservatism of the rightsholders in this area. But there are big technical problems to solve, and it’s going to take a lot of coordinated thinking to analyze the infrastructural, social, legal, financial, psychological and design factors necessary to build a viable ecosystem.
We’ll get there. But it’s going to take a lot more than big portable storage devices to do it.
I’ve had a problem with the way my browser memorizes passwords for a while now, and I’m certainly not alone. Since the holy grail of identity management appears to be a long way off, I think it’s time to address it.
When I enter a username and password, Firefox helpfully offers to remember it. This is good, and helpful, if you know your username and password. My problem is that I have hundreds of accounts scattered all over the place, and I can never be sure that the username and/or password I am entering is correct. If it’s not, and I tell Firefox to remember it, then I am guaranteed to be starting with the wrong credentials on subsequent visits. Only I won’t know it until I try signing in. That’s a less than stellar user experience.
The problem compounds itself on sites where the form sends you to another page in the site’s hierarchy. Then, you may store the correct username and password combination on that other page, and Firefox will remember them both. And as a result, you’ll go to log in on a site’s front page, then fail, but then be sent to that second page, which will let you in. I had to do that for years with Vonage, having forgotten that the original username and password I provided were useless.
A better way to store new usernames and passwords would be to ask after the transaction has completed. Once you submit the form and know whether your credentials were accepted, you can tell the browser to save them; if not, you can go back and try again. Either way, you won’t be saving bad credentials that come back to bite you every time you forget whether you used this password or that, or whether you’re bob or bobsmith or bobs or firstname.lastname@example.org or b0b_l0l_360 at any given site you’ve visited.
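The flow I have in mind is simple enough to sketch. A minimal illustration of the idea (all names here are hypothetical; this models no real browser API):

```javascript
// Sketch of deferred credential saving: offer to remember a
// username/password pair only after the site accepts it, never at
// submit time. All names are hypothetical, for illustration only.
const savedCredentials = {}; // site -> { username, password }

function submitLogin(site, username, password, authenticate) {
  // Hold the credentials aside until we see the server's verdict.
  const pending = { username: username, password: password };
  if (authenticate(site, username, password)) {
    // Success: now it is safe to remember them.
    savedCredentials[site] = pending;
    return true;
  }
  // Failure: discard, so bad credentials are never stored.
  return false;
}
```

With a flow like this, a failed guess simply never makes it into the password store, instead of haunting every future visit.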
Am I alone in this, or does this seem like a useful feature request?
“Caveats: I have two iPods, a Mini and a Shuffle. I have no current relationship with Apple, except I was once VP of Apple’s Advanced Technology Group.”
Don Norman, in a response to a post on the 10 most-hated iPod flaws, 30 November 2005