bestkungfu weblog

20 years

Filed in: General, Mon, Apr 30 2012 09:07 PT

20 years ago, I was working in a newsroom in Arizona. The first sign we were in for a long evening was the AP feed chirping to us with the message:

“BULLETIN – RODNEY KING – LAPD OFFICERS ACQUITTED”.

For the next hour and a half, the bulletins kept chirping every few minutes, but the “RODNEY KING” slug soon gave way to a new one: “L-A RIOT”.

Meanwhile, we all watched the NBC News Channel feed, which went from background video and scripted packages on the verdict to live KNBC helicopter footage of people in LA taking to the streets – first to vent about the verdict, then to walk out of abandoned storefronts with whatever they could haul. It wasn’t until early evening that we saw Reginald Denny, the truck driver who had been pulled from his truck and brutally beaten under the helicopter’s watchful eye.

I was 17, doing an internship for my senior year of high school. There were few events in my life up to that point that rated a where-were-you-when. I can only remember the attempted assassination of Ronald Reagan, which happened on a sick day, and the Challenger explosion, which happened on a half-day. The previous year, we’d seen the launch of the Gulf War, the Scud Stud, and the streams of tracer fire and clouds of smoke rising up from Baghdad. Unlike these other events, however, it wasn’t a single event, or set up with choreographed team coverage. It was a chain of events that spun out of control as we watched. And from the newsroom, we scrambled to make sense of it.

I’ve been a spectator to a lot of news since then, all from outside the newsroom. The internet-connected individual has faster and higher-quality access to news today than we did back then. What we lack as a society is the ability to synthesize all of those voices. And there are very smart, connected people who are using that fact to distract and divide us. It’s not enough to have access to al-Jazeera or the BBC or Russia Today; we need to develop a level of news literacy that few of us have, in order to understand the forces at work, what they want us to believe, and why.

The narratives of the LA Riots are generally known: a black man was pulled from his car and beaten by four white LAPD officers, who were recorded in the act by a bystander but were acquitted of police brutality charges, sparking five days of rioting in Los Angeles during which 53 people died. There’s so much more to the case than that, though, and if it had happened last night, we’d hear another story in parallel: one of a convicted felon (King served time for robbery) who led officers on a high-speed chase through Los Angeles while drunk and disobeyed the officers who stopped him, leading to the well-documented efforts to subdue him. Numerous polls would be taken, and the public would get to vote on which side was right.

What’s wrong with today’s media is that they pander so thoroughly to the audience that viewers feel their opinion is valid, however poorly informed. That’s anathema to a journalist. While we struggle to process all the information available to us, we need to realize that cases like Rodney King’s, or today, Trayvon Martin’s, are not episodes of American Idol. There are no pure good guys or bad guys. The stories can’t be boiled down to a 30-second synopsis. The information we consume today is stripped of the conflict and nuance that journalists were meant to help us process, and instead, we’ve seen the rise of stripped-down storylines that lead the viewer to an emotional appeal – opinion as news.

Twenty years later, the societal problems brought to light (however briefly) by the Rodney King beating are still there. We will never even begin to approach them if, each time a case like this comes around, we talk ourselves into a deadlock over conflicting narratives about who did what to whom. It reaches far deeper than most of us ever get to hear, even when we have access to more stories than ever. Access to instantaneous news should not lead to the production of instantaneous opinion. And that’s why we need to seek out that information, even from sources we distrust, and start working it out. Together.

On April 29, 1992, that was my job. It still is. Only now, it’s yours, too.

Opticians – a metaphor

Filed in: consumerism, marketing, vent, Web, Sat, Sep 24 2011 23:45 PT

Today, I needed an optician, and none would help me.

I have a prescription, but it’s for contact lenses, so it’s missing one measurement I need so that the lenses’ optical centers will sit in the right place in a new pair of glasses I’m buying.

I need an optician to measure my pupillary distance. You know that thing they do where they hold up a little plastic ruler to each eye for about 5 seconds, and write a couple numbers down? That. The people I talked to wouldn’t do that for me, even for money. I would have paid $20 for that one skill that they have, that takes them 5 seconds. But they wouldn’t do it.

The reason they refuse is as plain as the nose on my face (especially in its current condition, that is, unencumbered by glasses). Opticians make their money selling eyewear. Specifically, they take your eye doctor’s prescription and turn it into the eyewear you need. An optician is to an eye doctor what a pharmacist is to a physician. Except, of course, your pharmacist hasn’t figured out how to sell you a high-index UV-coated Prada pill case.

In this day and age, however, there are many new places popping up that offer comparable products for much less money. All that is needed are the numbers on the prescription, plus a couple more that the optician is qualified to provide, and your glasses come in the mail. This destroys the potential for opticians to profit on your frames, lenses, coatings and other upsell opportunities they may have. So it appears they have chosen instead not to offer services to people who don’t want to buy their other products, even when they would happily pay for an optician’s skills à la carte.

And that’s fine, really, especially if you’re a libertarian. I’d have some respect left for the opticians today who gave me the cold shoulder if they had told me money was the reason, but each was at pains to state otherwise. I was told that it was policy, that they couldn’t guarantee the work, even that it was harmful to their integrity as opticians to spend 5 seconds measuring my pupils.

Recently, it seems even information about my own eyes, information which I had commissioned and paid for, is being treated as proprietary. I have asked for a copy of my prescription from my own eye doctor, and been refused. This was unthinkable even a couple years ago. Imagine your physician telling you that you can’t find out what he’s telling the pharmacist to dispense to you. Once you have left the exam room, the experience is no longer about helping you see well: it is about ensuring the greatest amount of money stays in the office. They will cling to your data and the skills you lack to keep you coming back to them, rather than finding a more efficient solution.

Today, I needed an optician, and none would help me. Not because they couldn’t, but because doing it their way makes them more money. And now I’m generally distrustful of opticians, if not actively looking forward to their obsolescence. From this point forward, I will only get my vision checked by an eye doctor who will hand me a printed copy of all my information at the end of the visit.

Web professionals, you might want to bookmark this. It may become relevant to you at some point in the future.

It’s panel picker time

Filed in: vent, Mon, Jun 13 2011 01:58 PT

A couple months ago, my friend Andy Budd put his finger on the core problem with the modern SXSW. It’s unfathomably large. I mean, really, who’s going to attend much of a conference when it’s split across seemingly every meeting room in downtown Austin? Can anyone ensure quality for the conferencing dollar with 1800 speakers in tow?

Of course they can’t. And the secret we don’t want to talk about is that it doesn’t really matter. Twenty-five thousand conferencegoers cannot be wrong. Everyone who has attended more than one SXSW knows that each year, it grows outside its container. And each year, it’s outside that container where most of the action is anyway.

Still, when the sun is out, we need a place to go, if only to check our phone to see whether that taxi driver who tried to give us Ron Paul campaign materials is going to return the laptop case we left behind, or keep it as payment for puking in his cab. At SXSW, these waiting areas are called “sessions”. And these sessions are driven, at least in part, by feedback from the SXSW panel picker, which will kick off the battle for our hearts, minds and Twitter streams on June 20th.

For those would-be panelists who are willing to listen, I’d suggest that if you think someone else is going to propose your panel, and you think it’s going to be roughly as good as yours, either try to join forces, or get out of the way. There were over 2500 proposed talks last year, which was plainly too many for the crowd to reach any reasonable consensus on. Having seen how this system works, I find myself strongly in favor of returning to a curated conference with no public voting system, but since I sense I’m in the minority here, I’ll settle for ways to trim down the noise.

Speaking of which: most of what I will ever see of your panel picker proposal will be the title. I may end up reading a hundred actual pitches, so by all means put some thought and discussion into yours in advance, but that doesn’t mean you need to post the verbal equivalent of Speakers’ Corner. I’ve noticed a few patterns that I would like to mark as cliché, and I’m offering examples of accepted panels from SXSWi 2010, so we may avoid the mistakes of the past.

Cut the Shit

The easiest way to draw someone’s attention when all you have is one line is to curse or use graphic imagery. But any writer worth a good goddamn knows profanity is just a crutch. Witness this batch of winners:

  • Avoiding the 11th Hour Shitstorm
  • Bend Over? Surprise! Agencies Are Screwing You
  • Bordering Incest: Turning Your Company into a Family
  • Career Transparency: Why Personal Branding Is Bullshit
  • F***ing Old Spice Guy: Race, Sex, Micro-celebrity
  • How Social Media Fu@k’d up my Marriage
  • Recruiters Are Full of Shit, I Am One
  • Social Media and Comedy: Fuck Yeah!
  • Social Media or Sado-Masochism? Cyberbullying and Celeb 2.0
  • Tired of @#%ing Social Media Experts? (Led by two SM experts. Predictably.)

If you weren’t keeping track, in that list alone were seven FCC words, and one reference each to sadomasochism, incest and anal rape. And these are the proposals that got in. Stay classy, panelists!

I like a good F-bomb. Probably more than I wish I did. (Stupid 2 Live Crew.) But seriously, people. Using profanity in your title to get attention is nothing more than verbal prostitution. Don’t degrade yourself, or SXSW, by making this a race to the bottom.

Another thing about these aggressive titles: looking back at the most out-there sessions of SXSWi 2009 (which I attended) and 2010 (which I didn’t), disappointment reigns. Unless your name is Mike Monteiro, you cannot pull off a session built around an epithet. Actually, let me double down on this: watch Mike’s “Fuck You, Pay Me” talk. He makes it work not by beating his audience over the head with the epithet, but by getting the audience in on the joke, which I think is brilliant. The point is, Mike has mastered profanity, and you have not. More importantly, he’s also talented enough that he could propose “Mike Monteiro Explains Client Billing through Experimental Jazz”, and I’d at least consider it, if it’s a late-afternoon slot. He’s proven his value. What have you done?

SM Addicts

Let’s revisit the last title in that bunch:

  • Tired of @#%ing Social Media Experts?

OHDEARGOD YES I’M SICK TO DEATH

…ahem. There were 2563 proposals for SXSWi last year. Of them, 173 had the words “social media” in the title. Another 165 had “social” in some other form. All told, more than one out of every eight panel proposals was somehow social.

This is my message to all social media folks: yes, we get it. You want to be on a panel at SXSW. But you also all want to talk about what looks to the rest of the world like boring, contrived bullshit that serves only to prop you up individually, and it’s gotten old. What makes the situation worse is that the panel picker rewards people with large networks, whose followers vote up their proposals whether or not they ever plan to attend. A distributed acceptance of service, if you will.

I like you. Many of you. Okay, some. And if you want to talk about the social aspect of things, great. But please, offer value to someone other than you and your navel-gazing friends.

For SXSW 2012, I’m going to vote down every panel in the panel picker with the word social in it, and I encourage you all to do the same. It’s 2011. We can take for granted that all but the most bare-metal tech talks have some aspect of humanity to them. Try harder.

You’ll be the death of me

Which brings me to my next category of useless titles: the ones that try too hard. A sampling:

  • Are Internet Consumers Killing Online Creativity?
  • Banks: Innovate or Die!
  • Death of the Demo; Rise of Branded Tutorial
  • Death of the Relational Database
  • Death of the Textbook, Emergence of Games
  • Iterate or Die: How Media Businesses Must Adapt
  • Kill Your Call Center! Bring Your Support Home

Yeesh. Always with the dying. How much Cure is in your vinyl collection?

Death isn’t sexy. Java’s been dead for 12 years, if not more. So why did Oracle buy a $7.4 billion corpse? The fact is that all of these cases are more about things transitioning from the New Hotness to an established technology, or from an established technology to a legacy technology. The Death card symbolizes change. Try talking to me about how things are changing instead of putting another few rounds into this particular equine.

Think about the future

  • The Future Enernet: a Conversation with Bob Metcalfe
  • Future Fitness: The Power of Personal Data
  • Future of Collective Intelligence: Location! Location! Location!
  • The Future of Content is Personal
  • The Future of Innovation in Banking
  • The Future of Microformats
  • Future of Mobile Gaming/Entertainment
  • The Future of Nonprofits: Thrive and Innovate in the Digital Age
  • The Future of Philanthropy: Social Giving Takes Off
  • The Future of Storytelling: DEXTER Fans Play Killer
  • The Future of Touch User Interface Design

So there are the dozen “Future” sessions. Add to that the 42 “How to…” sessions, and you have a pretty good idea of what words are just filler. Yes, we want to know the future of what you are interested in. Yes, we want to know how to do what you do. That is why we are at SXSW. (Well, that and Tacodeli. But mostly, knowledge!)

Buy my panel… suckers!

Finally, there are the sessions that tell us, the voters and badgeholders, that we’re idiots.
  • Freelancers Are Slutty, But So Are You
  • Grow Some Balls: Build Business Relationships from Nothing
  • How to Not Be a Douchebag at SXSW (sic)
  • Your Meetings Suck and It’s Your Fault
  • Your Web Developer Thinks You’re an Idiot

It’s not like you have to flatter people in your proposal, but most of this stuff reads to me like Fiona Apple’s VMA acceptance speech. This whole world is bullshit! Yeah. Useful. Thanks. Sessions that cater to those with low self-esteem do seem to be crowd-pleasers at SXSWi, strangely enough, but it’s another reason to dig in and ask for a meaningful description of the situation and where you, the panelists, find yourselves. Don’t tell me what I’m doing wrong. You don’t know me, or 99% of the others in the room, and the involuntary Twitter backchannels that are spawned will apprise you of this fact with extreme prejudice. Better to build up than to tear down.

So, we’ve covered what not to say in the title. What’s left? I can answer that with yet another session title from SXSWi 2010:

  • Who Are You and Why Should I Care?

The voters of the panel picker need to represent the values of the conference they want to attend. I think that’s been lacking, and the result is that fewer, not more, voices are being heard. When people set foot in Austin, they vote with their feet, and what they look for has nothing to do with punchy titles and everything to do with reputation. In stock market terms, that’s called a flight to quality. We want our Zeldmans and Veens and Bowmans and Santa Marias. Maybe the best thing we can do to have the SXSW of the future represent us is to return to the days when the choices were fewer, but they were generally sure to provoke, if not to enlighten. I have an idea for how to make those qualitative choices a little more reliable, but that’s for another late night.

Facebook needs accessibility help

Filed in: General, Wed, Jan 26 2011 20:17 PT

That title looks a little confrontational, I know. But no, seriously. They’re actually looking for someone with the skills necessary to help make the most popular site on the web a little more accessible to its users.

Okay, there’s the good news. Here’s the bad: while they’re looking for that person, they’re also busy reinventing my greatest nemesis: the CAPTCHA. Now, at first blush, it might look like it’s not going to be that big of a deal. Reportedly, Facebook is rolling out a service that will display three images of one of a given user’s friends, and prompt them to select that friend’s name from a list. This will appear as a prompt to users who are exhibiting suspicious activity, in an attempt to keep hacked accounts from spamming others, for example.

I will admit that asking people to identify themselves by verifying knowledge of their social network has some significant upside potential. However, by pinning that knowledge on being able to see and recognize an image, a couple groups of people get screwed in this deal. First and foremost are blind and low-vision users, who would fail this test just as readily as they’d fail to recognize mangled patterns of numbers and letters. Given that most of these users have no chance of passing this test, they would presumably be locked out of their own accounts with no real recourse to regaining access. My fear is that the tenor of any subsequent screening would be similar to what one might expect of someone from Nigeria who just failed the same test: that is to say, not pleasant. (Another group I’m less concerned about is people like Robert Scoble, who famously hit up against Facebook’s 5000-friend limit, and who I’d bet dollars to donuts would fail a photo ID on 4/5 of those friends. No offense, Robert. I’m just playing the odds.)

What really worries me about this isn’t that it’ll be a failure, but that it’ll probably be a success. Facebook has all the information it needs to generate random challenges that tie users together. If that happens, it won’t be long before we start getting these kinds of challenges on our Ticketmaster orders in place of CAPTCHAs. Companies would love that kind of certainty because it would reduce fraudulent transactions dramatically. But what makes that a win for Facebook and its partners would also make an already-huge problem for blind and low-vision users even bigger. From what I see here, there isn’t any way to fall back to another kind of test, and this is precisely the role an accessibility specialist needs to play inside Facebook. Somebody has to jump in at this stage and say, “hey, guys, I think you forgot something.”

This one’s a freebie. (By the way, does this count toward my community service? Man, I’ll never punch that mascot again.) What I would say to them is that there is quite likely some other set of information that could be used, one that doesn’t require vision but still provides some kind of safeguard beyond the information that’s presently available. Provided the user’s email account hasn’t also been captured, something like a simple password reset could do the job. Failing that, Facebook needs at a minimum to provide a way for blind and low-vision users to contact a human being and prove their identity. That could be as simple as locking down access to a user’s private info once the account has been flagged as suspicious, and asking questions about that info. Sure, hackers could jump in and capture that information first, but once the more sophisticated hackers get wise to this new prompt, they’ll start capturing every friend’s photo as well. There’s no perfect security solution here, but we have to create some way to allow blind and low-vision users to protect themselves more or less equally.
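To make the shape of that fallback concrete, here’s a rough sketch of the kind of decision I’m describing, in Python. Every name in it is hypothetical, and it is obviously my own illustration, not how Facebook does (or would do) any of this:

    from dataclasses import dataclass, field

    @dataclass
    class FlaggedAccount:
        # Hypothetical stand-in for an account showing suspicious activity.
        email_still_trusted: bool                            # was the email account captured too?
        private_facts: dict = field(default_factory=dict)    # info frozen the moment the flag went up

    def choose_recovery_challenge(account: FlaggedAccount) -> str:
        # Pick a challenge that doesn't depend on recognizing friends' photos.
        if account.email_still_trusted:
            # Simplest case: the email wasn't compromised, so a reset link does the job.
            return "send_password_reset_link"
        if account.private_facts:
            # Quiz the user on private info that was locked down at flag time,
            # so a hijacker can't simply scrape it after the fact.
            return "ask_questions_about_private_info"
        # Last resort: a human being who can verify identity some other way.
        return "escalate_to_support_agent"

The photo quiz can stay as one branch of a flow like this; it just can’t be the only branch.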

Mind you, any blind user who’s created an account on Facebook has already had to defeat at least one CAPTCHA, which is another problem to solve, but remember that the user has no skin in the game on account creation. When your account may have been hacked, it’s critical, especially on a social networking site, to regain control of it as soon as possible to minimize the damage. (I wish I didn’t know as much about that feeling as I do. Thanks, Gawker.) So it’s important to the overall security of Facebook to ensure that legitimate users can retrieve their accounts quickly, irrespective of their sensory capabilities.

If you have the skills called for in Facebook’s job posting, let me know you’re interested and I can put you in touch (in confidence, if necessary) with someone there. I can’t think of an open position in accessibility that has a greater potential for one person to do a whole lot of good for a whole lot of people. And you will not want for design challenges.

Does the iPad 2 display connect to your brainstem?

Filed in: Apple, tech, Sun, Jan 16 2011 20:16 PT

I’m sure we could ask Apple, but they don’t comment on rumors.

Still, the mill continues to churn, and this week we saw double-sized glyphs embedded in a beta of iOS. From that, the conclusion being drawn is that the iPad 2 will have a pixel-doubled, 2048×1536 display. The question is, is that insanely great, or just insane? Let’s take a look at where we are today:

Device       Resolution   Size    PPI   Shipping
iPad         1024×768     9.7″    132   Yes
Galaxy Tab   1024×600     7″      171   Yes
Xoom         1280×800     10.1″   149   No
Atrix 4G     960×540      4″      275   No
iPhone 4     960×640      3.5″    326   Yes

What we can see here is that Apple remains far ahead in pixel density among phones, but they’re behind even already-shipping devices in the tablet world. Once they fell behind other phones, they fired back with a display that the crop of devices announced at this year’s CES still haven’t caught up to.
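For the record, the PPI column is nothing magical: it’s the diagonal resolution divided by the diagonal screen size. Here’s a quick sanity check in Python, using the nominal sizes above; it’s my own scratch math, which is why a couple of results land a few ppi off the table’s rounded figures:

    from math import hypot

    def ppi(width_px, height_px, diagonal_inches):
        # Pixels per inch: diagonal pixel count over diagonal size.
        return hypot(width_px, height_px) / diagonal_inches

    devices = [("iPad", 1024, 768, 9.7), ("Galaxy Tab", 1024, 600, 7.0),
               ("Xoom", 1280, 800, 10.1), ("Atrix 4G", 960, 540, 4.0),
               ("iPhone 4", 960, 640, 3.5)]

    for name, w, h, d in devices:
        print(f"{name:<11} {ppi(w, h, d):4.0f} ppi")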

I think it’s a foregone conclusion that the next iPad will follow the same pattern. Let’s be honest: the iPad’s display is good. Not great. It needs a boost.

But does it need to be quadrupled? Eh. I’m not so sure. First and foremost, there’s only one application that really benefits from that kind of pixel density, and that’s text. The iPhone 4 was a tremendous advance for readability, but bear in mind that a tablet is traditionally held farther from the eye than a phone. It doesn’t need to be as tack-sharp as an iPhone 4 to be unbelievable.

Second, video is another big use case, and a 2k 4:3 display doesn’t make sense for any common video size. If we take as a given that the iPad 2 will support 1080p video, a 1:1 representation of that video leaves a black box all the way around. And 720p, the format iTunes actually delivers, doesn’t map cleanly either: doubled, 1280×720 becomes 2560×1440, which doesn’t even fit on the panel. It’s obvious that higher pixel density means better video, but if Apple likes controlling every pixel, then they’ll want a resolution that matches the video content they already distribute.

Third, a 2048×1536 display means over three million pixels, all of which need to be driven by a GPU. Just to keep up, the GPU would need to be four times as fast as the first generation’s, which is a tall order for one generation. I know that Hitachi has shown off a 302ppi, 6.6″ display, which suggests a tablet-sized retina display could be made, but just because something can be built doesn’t mean it’ll be shipped. The screen itself is only part of the balance between performance, cost and battery usage.

Finally, according to iSuppli, the iPad display/touchscreen unit accounts for more than a third of the overall bill of materials for the 16GB version ($80 out of $229.35). How likely is it that they’ll quadruple the number of pixels in that package without ultimately affecting the price? Don’t get me wrong: I’m sure Apple will squeeze more pixels out of that form factor. But if it’s actually 2048×1536, I will be extremely impressed.

So if a pixel-doubled iPad isn’t in the cards, let’s look at plan B. I’m just spitballing here, but I think the optimal display resolution for a next-generation iPad is 1440×1080. It’s a higher resolution than any announced tablet, and at 186ppi it would be the sharpest tablet display on the market. It also displays 1080p video at full screen when zoomed in, and 720p at a nice, clean 1.5x multiplier, the kind of scaling consumer electronics companies have handled cleanly for years. All without breaking the bank either in cost or GPU: at 1.56Mpix, it nearly doubles (1.978x) the current display, while still using less than half as many pixels (49.4%) as a 2k display.
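If you want to check my math, here’s the back of the envelope as a quick Python sketch (the 1440×1080 panel is my own hypothetical, not anything Apple has announced):

    from math import hypot

    current = (1024, 768)    # iPad 1
    plan_b  = (1440, 1080)   # my hypothetical next-generation panel
    doubled = (2048, 1536)   # the rumored pixel-doubled display

    def pixels(res):
        # Total pixel count for a width x height resolution.
        return res[0] * res[1]

    print(f"Plan B is {pixels(plan_b) / 1e6:.2f} Mpix")                      # 1.56
    print(f"{pixels(plan_b) / pixels(current):.3f}x the current iPad")       # 1.978
    print(f"{pixels(plan_b) / pixels(doubled):.1%} of a 2048x1536 panel")    # 49.4%
    print(f"{hypot(*plan_b) / 9.7:.0f} ppi on a 9.7-inch diagonal")          # 186
    print(f"720p scales cleanly: 720 x 1.5 = {int(720 * 1.5)} lines")        # 1080

Nothing fancy, but it’s why 1440×1080 keeps looking like the sweet spot to me.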

Apple has become known for its hardware advances. They scored a huge coup with the display technology on their flagship device. But while I can’t completely rule it out either technically or economically, I just don’t think lightning will strike twice. If it does, I’ll be first in line. Again.

It’s the most wonderful time of the year

Filed in: consumerism, culture, design, tech, Wed, Jan 5 2011 21:05 PT

…by which I mean CES, of course.

Granted, half of the things presented this week will never see the light of day, and the other half will be three to six months later than they announced. But still, today alone was just staggering. Motorola gave an Apple-grade presentation (if you can look past its comically bad audio). Olympus finally announced a Micro Four Thirds camera I’m willing to take the plunge on. Even Microsoft may be at risk of becoming relevant again.

I think what can be said about this batch of announcements is that this is the year everything is good enough. What I mean is that nearly all of the devices I’ve seen in the last couple of days are capable of convincing someone to give up the PC as their primary computing device. These aren’t just rehashed netbooks–relatively few, in fact, even have an Atom CPU–but devices everywhere from 3 inches on up that have enough juice to browse the web, handle email, play games, watch movies, find yourself on a map, and generally do what 90% of the market does with their PCs.

I’ve seen a lot of CES presentations in my time, but this is the first year that I’ve seen the writing on the wall for PCs as we know them. Now, okay, if you’re reading this, you’re one of two kinds of people: the ones who will be using your phone or tablet as your primary computing device by the end of the year, if you’re not already; or the ones who will still be lugging a 5-lb. clamshell device with a keyboard to your neighborhood Starbucks. Either way, in my opinion, you’re an outlier. You may need to type so frequently that a keyboard is always in your plans. Or you may be editing 4k video, or compiling an operating system. And that’s fine. PCs will still exist for those cases. But you’re still going to be affected by the trends in the industry.

What I want you to think about as you contemplate the death of the PC (or, say, Wintel, or the WIMP model, or what have you) is someone you know who’s not at all a geek. Maybe your mom, or the partner who stares glassy-eyed at you when you come home complaining about the latency of your DNS at work. Now, think: what do these people do with their computers all day? They browse the web. And by “the web”, I mean web-based email, Facebook, YouTube, Twitter, Netflix, their bank accounts, their stocks. Name me one thing 90% of these users need a Core i7 CPU for. Games? Only if they’re hardcore. Editing images or videos? Probably not worth the investment.

In the overall cost-to-benefit calculation, there’s going to be a lot more value given to size and battery life than to raw horsepower. And raw horsepower per dollar is really the only remaining benefit of the PC. They’re complicated, bulky, virus-prone, and get slower over time. I looked at my in-laws’ mid-tower Windows machine like it was a record player: it’s big, loud, sucks down a lot of juice… and most importantly, it was asleep most of the time I was there, since they got my hand-me-down netbook for a present.

Meanwhile, you can walk into any mobile phone store in the US today and pick up a 1GHz computer with a half-decent browser for anywhere from $200 to nothing. Then you can shove it in your pocket. That’s powerful. And what we’re seeing this week shows us that the gap between the desktop and the pocket is not only narrowing, but it’s morphing in all kinds of ways. If Motorola is to be believed, the tablet battle will be joined by the Android Honeycomb-powered Xoom this spring; there will be at least one 960×540 phone in the near future; and Windows 8 is aiming for low-power CPUs as well. Consumer electronics companies aren’t tailoring their offerings for power users: they’re aiming squarely at the non-geek in the house. (Don’t feel threatened. It’s for the best.)

This week, we’re seeing what the non-Apple players in the market are seeing as the future of computing. This looks to be the first time Apple has to look at the competition seriously.

Five Myths about Accessibility Myths

Filed in: accessibility, Tue, Jan 4 2011 05:06 PT

I’m an accessibility evangelist by trade. (No, seriously. It’s on my business card. My boss even knows.) Needless to say, such a job requires me to think seriously about what accessibility is all about, and how to communicate that to an audience that, frankly, has a lot of things on their mind, like keeping their jobs, turning a profit, or just not shooting up the place by the end of the day.

I would like to say that the material we have to offer people who want to learn about accessibility is plentiful (it is), and that it’s compelling enough to the constituencies we need to reach (it’s not).

The chief antipattern I can cite is the “accessibility myths” article. Search “accessibility myths” on Google and you’ll find a few hundred examples of this phenomenon. Ordinarily, these are blog posts citing a handful of elementary observations on the state of accessibility among designers and developers at the ground level. I know many, many people who have written them, nearly all of whose skills I respect. I want to assure each of you that I’m not targeting any one of their lists in particular, except as symptoms of a greater problem. (I’m hoping this disclaimer will protect me from pages of comments defending previous attempts at this kind of writing.) The problem is that when you set out such a line of demarcation–your information is false, mine is true–you may not be reaching the right people with the right news. Or if you are, you may not have made them feel motivated toward your cause.

In an effort to focus our energies a little better, I’ve put together the problems I find with the “accessibility myths” genre. And since it is better, pedagogically speaking, to present material in a form that the audience finds comfortable, here they are: five myths about accessibility myths.

1. Addressing knowledge gaps as “myths” is productive.

So let’s say I just walked into your RSS feed one morning and said, here are five things you think you know about Mac OS X that are wrong. That is to say: dear reader, I know you don’t know me, and obviously I don’t know you, but I’m going to start talking about how wrong you are about what may or may not be a common issue.

Wow. No wonder we’re so popular.

It’s clear that accessibility advocates frequently encounter the same kinds of opposition, and often fight dated or inaccurate information while trying to improve access to web content. But to stand on a soapbox and decry it all as mythology can be especially alienating.

So let’s try not to belittle people who we’re trying to rally to our side. Make sense? Good. I just don’t understand why everybody says you’re so dumb.

2. “Myth” articles are compelling and convincing.

Now that we’ve irritated the people we most need to reach, it’s time to move on to the banalities that set each specific accessibility expert off. Most myth posts begin with a no-brainer like “accessibility is about creating a text-only site” or “accessibility is all about blind people and screen readers.” These are generalizations at their most basic and uncontroversial. If you actually encounter someone in the wild who believes something like this, they’re not misinformed: they’re uninformed. It’s not a myth to someone who hasn’t ever heard or believed it.

The second type of misstep is to claim something is a myth when it’s actually something reasonable people can debate. One that I can think of that keeps recurring is the statement that accessibility work is “time-consuming, expensive and very technical”. That came from RNIB’s myths article from 2009. Here’s the thing: quite often, accessibility work is time-consuming, expensive and very technical. Especially to someone who doesn’t know all they need to know about it, or someone who went too far down the wrong path before accessibility was called to his or her attention. That is to say, your most critical audience.

It’s not the best strategy to say to someone who’s suffering through accessibility remediation that they’re not experiencing what they’re experiencing. Or worse, to tell them that they’re only in the situation they’re in because they didn’t come to us earlier. It causes accessibility advocates to seem out of touch with the reality of the lifecycle of web design and development. It’s good for people to be made aware of the problems that can arise. But those staring those problems in the face are looking for solutions. What they’re getting is: “I told you this would happen.”

3. “Myths” are actually the reason accessibility isn’t happening.

One of the most alluring parts of mythologizing accessibility problems is that we can take our anecdotal evidence and construct an entire worldview around it. It’s a great way to vent our frustrations when we aren’t enabled to change the outcome of a given site’s design, for example.

But the way I see it, maybe the myth in most myths articles is that people are really legitimately thinking any of these things. Might some people still believe that a text-only site is an acceptable way to check off that accessibility box on their launch checklist? Maybe. Was it a problem in 2003? You bet. Now? Less so, and there are much better arguments for the alternative, like a single integrated approach that covers scenarios for mobile users as well.

One more thing here: the text-only thing isn’t a myth. It’s just what many people were taught, starting in 1998. (Remember that text-only sites are enshrined in paragraph 1194.22(k) of Section 508, which itself is a product of an assembly of accessibility experts, and which remains the primary reference for workers in all levels of government in the US.)

The goal of advocacy is to frame the debate regarding what you stand for, and why. When you begin the debate by arguing against what you’re not about, you are setting yourself up for failure. (Side note: remember this in 2012, Democrats.) It would be much better to say to people that times have changed, that we have better ways to do things, and that an integrated site is more equitable and less work-intensive overall than if we tell our readers they’re fooling themselves simply because they listened to the experts a dozen years ago.

4. “Myth” articles contain useful, actionable information.

Anybody who writes stuff like this knows that it’s not going to be any good until you come up with five red herrings to rail against. (Three, if you’re really an SEO person in disguise.) Where a lot of these articles go awry is in failing to provide information that people can integrate in their websites today. Go read a few dozen myths posts. Can you find more than a handful of real accessibility techniques there? Are there pointers to other resources so that people can learn more proactively? (Note to Roger Johansson: you, sir, are officially off the hook.)

When I put myself in the shoes of a web developer who’s investing five minutes into learning about accessibility, I tend to think I’d rather hear “you can use red and green if you’re also using other means to differentiate those elements” than “Myth: Red and green cannot be used.” This is a learning opportunity, not Opposite Day.

And while I’m piling on, if you can be confident about what people should and shouldn’t be doing for all content on the web, it should be easy enough to come up with some examples of what you’re talking about. Right?

5. “Myth” articles reach the audience they’re intended for.

Wanna know how I find out about new accessibility myths articles? My friends in the accessibility community retweet them. Incessantly. They’re inescapable. (At least, they used to be. Thanks, TweetDeck filters!) They touch a nerve with us, and we share them far and wide, like designers share that McSweeney’s Comic Sans piece. (Guilty.) We’re sharing the causes of our suffering. And there’s a place for that. But what is being practiced isn’t advocacy, it’s self-soothing.

Myths are “inside baseball” articles–the kind of thing you’d commiserate about over beers at your favorite accessibility conference. Are these kinds of posts really meant to recruit new accessibility advocates, or are they really only going to resonate with those of us who are already in the thick of things?

Think hard about who it is that you want to motivate with this kind of writing. If you put something like this together and find when you’re proofreading that it sounds like you’re venting about your day, it’s entirely likely that you’re preaching to the converted. And there are a lot of unconverted out there to choose from. Tailor your message to the needs of that audience, and you may have a better chance to make your case.

Dear future Android tablet users…

Filed in: General, Sun, May 16 2010 19:54 PT

So, I own an iPad. (And here I am working for Adobe. You may point and laugh… now.) I’ve had it out in public—including in Europe, where I might as well have worn an “ask me about the iPad” t-shirt. I’ve got the pattern down now: someone does a double-take, and I think, “Oh, shit. Here we go again. ‘Yes, I couldn’t resist. I bought it because (reasons), and I’m (mood) with it.’”

I enjoy using my iPad (nicknamed “killer”, by the way), mostly. iBooks and the Kindle app have been perfectly stable, which is good, because the moment my ebook reader crashes, it’s not a serviceable ebook reader. It’s the kind of thing you need to have nailed down.

I’ve got about four pages of downloaded applications, only one of which consists of go-to apps: Twitter client, media streamer, remote keyboard and mouse. The rest are kind of a blur and a distraction. Games, magazines, utilities. I could do without them. That one page covers 95% of what I want to do.

What I can’t do is listen to NPR while I browse my email, and while I’ve been promised that will change this fall, I’m not a patient man. Nor am I the kind of fanboi who’s going to say the promise of multitasking tomorrow is as good as multitasking today.

There are about four things I expect out of a tablet: I want a good-looking screen, HD-quality video, the ability to run the occasional app in the background, and I want it to run all day on a single charge. The iPad meets two and a half of those requirements, if we take off a half-point for only playing 720p video.

Here’s the thing, though. Of my four main requirements, precisely zero of them are unique to Apple. Between now and Christmas, there are going to be dozens of Android-based tablets flooding the market. They’ll all be at or below the price point of the iPad. And you’ll be able to pick the winners fairly easily: they’re the ones that meet all four of my criteria.

So let’s say for the sake of argument that things shake out this way, and by the time you’re in an L-tryptophan coma, your Black Friday ads are loaded with non-iPad options. And let’s say the following morning you sneak into a Best Buy at 5am, scratch and claw your way to one of these devices, and start browsing the Android Market.

You are now invested in the success of Android.

When someone does a double-take in a coffee shop, you’re going to think, “Oh, shit. Here we go again. ‘Yes, I looked at the iPad, but I went with this because (reasons), and I’m (mood) with my purchase.’” The reasons are largely fixed in the hardware, so once you’ve rolled your Gooblet off the lot, you already have your answer to that. How you feel about your purchase, however, is going to depend largely on what you can do with it, and that has a lot to do with what software is available to you, and whether or how well it works.

There’s a tendency to be more forgiving of open-source platforms. Your Free Software Foundation adherents will insist that it’s better to have products that are clearly inferior because free-as-in-speech is better. But try telling that to your mom when she’s trying to make head or tail of your tablet while she maps out the nearest Apple Store.

For Android tablets to succeed, users of the platform need to fight the instinct to apologize for its shortcomings and those of its software. You need to be vocal, even brutal, about the problems you find. If an open-source product doesn’t cut it, call it out. If a payware application crashes left and right, let ‘em have it.

The tablet market is not the same as the Linux hobbyist market. The vast majority won’t be compiling their own apps, much less rewriting them. They shouldn’t be expected to. As a result, developers of tablet apps need to be very responsive to their users. If you want the Android platform to take off on tablets, the applications can’t be just great for Android apps. They need to be great apps, period. No editing text files, no byzantine dependencies, no open-source shovelware.

You, the user, should use all available avenues to establish the bar for your expected user experience, and hold developers to it. On the iPhone/iPad platform, ostensibly, the App Store serves as a quality filter. (In reality, there are mountains of shitty little applications, but there, the App Store at least does the service of letting us one-star them.) In the Android Market, users need to be just as brutal, demanding and vocal as they are in the App Store. Android tablets are not going to succeed by being almost as good as iPads, but, like, less evil. The best way to make them successful is to be very clear about where the system, the hardware and the applications fall short. The best way to make them fail is to pretend they’re good enough, when they’re not.

Realistic sentimentality

Filed in: General, Fri, Jun 12 2009 18:18 PT

“Whenever people say, ‘We mustn’t be sentimental,’ you can take it they are about to do something cruel. And if they add, ‘We must be realistic,’ they mean they are going to make money out of it.”

Brigid Brophy, author

Twitter, ye markup be non-standarrrrrd.

Filed in: design, vent, Web, Fri, Sep 19 2008 11:10 PT

Twitter unveiled a new redesign today. Very little of it is really noticeably different, until you look at the underlying code. Which, last I recall, used to be pretty good–they even used the fancy-pants XHTML 1.0 Strict doctype (though still using tables for layout, which the spec says you shouldn’t do).

But one thing about the latest version makes me wonder just what the hell they’re doing these days.

<center>.

The <center> element. In September 2008. From the “it” Web 2.0 company. Seriously.

I know this will make me sound like the annoying standardista, but how could anyone who still uses <center> still be doing web design professionally in, of all places, San Francisco? This is an element which has been deprecated for eleven years. Do we really have people who haven’t changed their coding practices since before 1997?

Sadly, yes. And the worse news is, they’re writing books. I just saw a book whose first edition was published in July 2008, which teaches users to use <center>, to do layout tables, to use CSS primarily just for font selections, and loads of other outdated guidance. This is material from the bad old days of web design, and it simply gets regurgitated over and over again. To quote the late, great George Carlin, “it’s all bullshit, and it’s bad for ya.”

I don’t know what it is going to take to finally cull the proverbial herd of these kinds of authors and designers. But each time I see this, it makes me wonder when we can expect some kind of professionalism out of the average content producer. Many of us have been talking about this stuff for years. It’s de rigueur at many web conferences, to the point that people now roll their eyes at it. And yet, it continues. I also don’t know whether Twitter is doing this in-house or if they hired an external designer. But certainly, somebody there dropped the ball.

And I know that one <center> is not a big thing. It’s just a symptom of a larger disease: that of lazy, ignorant and/or incurious designers. When someone sticks to one way to do something without ever updating their own skill set, their designs get more and more inflexible. Which makes redesigns more and more difficult, and more expensive, all with less to show for it. Which brings us to the boxy gridlock we experienced in the 90s. Which is why standardistas get so angry about this stuff. We know that customers deserve better than this. We know that when customers find out how their designer painted them into a corner, it casts a shadow over all of us in those customers’ eyes.

The question that remains from all this is, how can the professionals in this field separate themselves from the amateurs? Really. I want suggestions. What concrete steps can we take to ensure that the good designers and developers, the ones who are always learning, who have a full and balanced skill set, don’t get lumped in (or worse, beaten out by) the ones who are locked in 1995? Who’s got an idea?
