I’m sure we could ask Apple, but they don’t comment on rumors.
Still, the mill continues to churn, and this week we saw double-sized glyphs embedded in a beta of iOS. From that, the conclusion being drawn is that the iPad 2 will have a pixel-doubled, 2048×1536 display. The question is, is that insanely great, or just insane? Let’s take a look at where we are today:
What we can see here is that Apple remains far ahead in pixel density among phones, but they’re behind even already-shipping devices in the tablet world. Once they fell behind other phones, they fired back with a display that the crop of devices announced at this year’s CES still haven’t caught up to.
I think it’s a foregone conclusion that the next iPad will follow the same pattern. Let’s be honest: the iPad’s display is good. Not great. It needs a boost.
But does it need to be quadrupled? Eh. I’m not so sure. First and foremost, there’s only one application that really benefits from that kind of pixel density, and that’s text. The iPhone 4 was a tremendous advance for readability, but bear in mind that a tablet is traditionally held farther from the eye than a phone. It doesn’t need to be as tack-sharp as an iPhone 4 to be unbelievable.
Second, video is another big use case, and a 2k 4:3 display doesn’t make sense for any common video size. If we take as a given that the iPad 2 will support 1080p video, that means a 1:1 representation of that video leaves a black box all the way around. In 720p, the format delivered by iTunes content, pixel-doubling is no big improvement. It’s obvious that higher pixel density means better video, but if Apple likes controlling every pixel, then they’ll want a resolution that matches the video content they already distribute.
Third, a 2048×1536 display means over three million pixels, all of which need to be driven by a GPU. Just to keep up, the GPU would need to be four times as fast as the first generation’s, which is a tall order for one generation. I know that Hitachi has shown off a 302ppi, 6.6″ display, which suggests a retina display could be made, but just because something can be built doesn’t mean it’ll ship. The screen itself is only part of the balance between performance, cost and battery usage.
Finally, according to iSuppli, the iPad display/touchscreen unit accounts for more than a third of the overall bill of materials for the 16GB version ($80 out of $229.35). How likely is it that they’ll quadruple the number of pixels in that package without ultimately affecting the price? Don’t get me wrong: I’m sure Apple will squeeze more pixels out of that form factor. But if it’s actually 2048×1536, I will be extremely impressed.
So if a pixel-doubled iPad isn’t in the cards, let’s look at plan B. I’m just spitballing here, but I think the optimal display resolution for a next-generation iPad is 1440×1080. It’s a higher resolution than any announced tablet. At 186ppi, it’s the best tablet display on the market. It also displays 1080p video at full screen when zoomed in, along with 720p at a nice, clean 1.5x multiplier, a scaling job consumer electronics companies have handled well for several years. All without breaking the bank either in cost or GPU: at 1.56Mpix, it nearly doubles (1.978x) the current display, while still requiring less than half as many pixels (49.4%) as a 2k display.
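For what it’s worth, the arithmetic above is easy to check. A quick sketch (assuming the iPad keeps its published 9.7-inch diagonal):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal pixel count divided by diagonal inches."""
    return math.hypot(width_px, height_px) / diagonal_in

IPAD_DIAGONAL = 9.7  # inches, per Apple's spec sheet

current = 1024 * 768    # first-generation iPad
doubled = 2048 * 1536   # the rumored pixel-doubled panel
proposed = 1440 * 1080  # the resolution argued for above

print(f"{ppi(1440, 1080, IPAD_DIAGONAL):.0f} ppi")       # 186 ppi
print(f"{proposed / current:.3f}x current pixels")       # 1.978x
print(f"{proposed / doubled:.1%} of a 2048x1536 panel")  # 49.4%
```

Same numbers as above: nearly double the pixels of the shipping iPad, at less than half the GPU burden of a pixel-doubled one.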
Apple has become known for its hardware advances. They scored a huge coup with the display technology on their flagship device. But while I can’t completely rule it out either technically or economically, I just don’t think lightning will strike twice. If it does, I’ll be first in line. Again.
…by which I mean CES, of course.
Granted, half of the things presented this week will never see the light of day, and the other half will be three to six months later than they announced. But still, today alone was just staggering. Motorola gave an Apple-grade presentation (if you can look past its comically bad audio). Olympus finally announced a Micro Four Thirds camera I’m willing to take the plunge on. Even Microsoft may be at risk of becoming relevant again.
I think what can be said about this batch of announcements is that this is the year everything is good enough. What I mean to say is that, of all the devices I’ve seen in the last couple of days, nearly all of them are capable of convincing someone to give up the PC as their primary computing device. These aren’t just rehashed netbooks (relatively few, in fact, even have an Atom CPU), but devices everywhere from 3 inches on up that have enough juice to browse the web, handle email, play games, watch movies, find yourself on a map, and generally do what 90% of the market does with their PCs.
I’ve seen a lot of CES presentations in my time, but this is the first year that I’ve seen the writing on the wall for PCs as we know them. Now, okay, if you’re reading this, you’re one of two kinds of people: the ones who will be using your phone or tablet as your primary computing device by the end of the year, if you’re not already; or the ones who will still be lugging a 5-lb. clamshell device with a keyboard to your neighborhood Starbucks. Either way, in my opinion, you’re an outlier. You may need to type so frequently that a keyboard is always in your plans. Or you may be editing 4k video, or compiling an operating system. And that’s fine. PCs will still exist for those cases. But you’re still going to be affected by the trends in the industry.
What I want you to think about as you contemplate the death of the PC (or, say, Wintel, or the WIMP model, or what have you) is someone you know who’s not at all a geek. Maybe your mom, or the partner who stares glassy-eyed at you when you come home complaining about the latency of your DNS at work. Now, think: what do these people do with their computers all day? They browse the web. And by “the web”, I mean web-based email, Facebook, YouTube, Twitter, Netflix, their bank accounts, their stocks. Name me one thing 90% of these users need a Core i7 CPU for. Games? Only if they’re hardcore. Editing images or videos? Probably not worth the investment.
In the overall cost-to-benefit calculation, there’s going to be a lot more value given to size and battery life than to raw horsepower. And raw horsepower per dollar is really the only remaining benefit of the PC. They’re complicated, bulky, virus-prone, and get slower over time. I looked at my in-laws’ mid-tower Windows machine like it was a record player: it’s big, loud, sucks down a lot of juice… and most importantly, it was asleep most of the time I was there, since they got my hand-me-down netbook for a present.
Meanwhile, you can walk into any mobile phone store in the US today and pick up a 1GHz computer with a half-decent browser for anywhere from $200 to nothing. Then you can shove it in your pocket. That’s powerful. And what we’re seeing this week shows us that the gap between the desktop and the pocket is not only narrowing, but it’s morphing in all kinds of ways. If Motorola is to be believed, the tablet battle will be joined by the Android Honeycomb-powered Xoom this spring; there will be at least one 960×540 phone in the near future; and Windows 8 is aiming for low-power CPUs as well. Consumer electronics companies aren’t tailoring their offerings for power users: they’re aiming squarely at the non-geek in the house. (Don’t feel threatened. It’s for the best.)
This week, we’re seeing what the non-Apple players in the market believe is the future of computing. It may be the first time Apple has had to take its competition seriously.
Well, it only took me several years to do it, but I finally have a MythTV setup I’m proud of. I have my Comcast set-top box connected via FireWire, as well as my NTSC capture card, and I can connect from any machine on my network, including my MacBook Pro over Wi-Fi, and watch either stream, or both using picture-in-picture.
I even managed to shoehorn a copy of Xubuntu’s latest daily build onto a 2GB CF card, married to a CF-IDE adapter, so my Vaio with the 700MHz Pentium III and 192MB of RAM can join the party — very, very quietly. It was more of an ordeal than I think most people would be comfortable with, and I still have to get the WPA supplicant running, but I was happy to find that there was still enough computing power to run the MythTV front end. And now I don’t have to buy an alarm clock, or a TV for the bedroom. Instead, I have a nice, small, silent machine that can access all of the media in the house. Woot!
At some point, I may go into detail on how this all came together, but it took both the regular and alternate Xubuntu install discs, KnoppMyth, MythFrontend for OS X, MythTV Player for Windows, Parallels virtualization software, GRUB, dd, obscure FireWire drivers, two 3.5″ to 2.5″ IDE adapters, and the better part of two weekends to finally come together. What it didn’t require was any hardware I didn’t already have in my possession, which is a wonderful vindication of my geek-packrat cred.
Since the National Federation of the Blind sued Target Corp. for the inaccessibility of its Web site, many people have taken sides, vilifying Target and/or lionizing NFB in turn. I think it’s too early for that, if it’s necessary at all. In terms of US law, this was a suit that needed to happen, because the case law on Web accessibility is so far pretty thin. The most important thing to take away from this news is that the same case could be brought against dozens of comparable e-commerce sites, and all over problems that stop many users dead in their tracks, and yet could be fixed without affecting their visual design or functionality.
I’m hesitant to paint Target as the solitary enemy of users with disabilities. Let’s be clear: The accessibility of Target’s site is terrible. But in a short review I did of big-box store sites this morning, they’re not the worst around. In fact, they’re pretty much the middle of the range.
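To make the “fixable without affecting visual design” point concrete: one of the defects at the heart of the Target complaint was images with no alternative text, and it is trivial to audit for. A minimal sketch (the sample markup is hypothetical, and a real audit tool would check far more than this):

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collects <img> tags that lack an alt attribute entirely.
    (An empty alt="" is valid for purely decorative images,
    so only a missing attribute is flagged.)"""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(dict(attrs).get("src", "(no src)"))

# Hypothetical fragment of an e-commerce page
page = ('<img src="logo.gif">'
        '<img src="spacer.gif" alt="">'
        '<img src="cart.gif" alt="View cart">')
audit = AltTextAudit()
audit.feed(page)
print(audit.missing)  # ['logo.gif']
```

Adding the missing alt text changes nothing a sighted user would ever see, which is exactly why these problems are so frustrating to find in the wild.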
Costco is only one example of many I found. But I’m picking on them in particular because their brick-and-mortar operation is refreshingly progressive. The company prides itself on a “workplace focused on ethics and obeying the law”, and has enormous signs at their front door stating that they strive to accommodate the needs of their customers in accordance with the Americans with Disabilities Act.
So, to what do we attribute the utter inaccessibility of many e-commerce sites: ignorance, miscommunication, or malice? I’ve seen all three in practice. Often, it doesn’t take the threat of a lawsuit to get site owners to come around; they merely need to understand the problems, and what they can do to solve them, in order of impact on the user.
But I’ve also seen cases where it’s a legal game of chicken: some companies refuse to comply with a legal mandate that they feel doesn’t clearly apply to them. They’re gambling that the cost of being found guilty of non-compliance is lower than that of conforming to a standard that may not apply to them. This strategy falls apart like a house of cards as soon as one of them is found liable. And it’s a tactic I find particularly odious when they’re consciously acting to keep users with disabilities out.
The fact is that the Web has afforded many people with disabilities new-found potential to buy and sell things, work, manage finances, find community, gather news, and access government services — all things able-bodied people take for granted. When people with disabilities received legal protection, it wasn’t given out of pity. It was given to protect their right to participate equally in society. Web designers and developers can enable that equal participation with every site they design, using modern coding principles. Or they can hide in a castle or a cave, clutching their legacy code, certain that those evil, litigious disabled people are out to get them.
So, which is it?
At long last, the update to the paper I wrote (with the help and support of the WAI Protocols and Formats Working Group) has been published as a W3C Working Group Note. It is now titled “Inaccessibility of CAPTCHA: Alternatives to Visual Turing Tests on the Web”, putting a finer point on the issue at hand than its predecessor, “Inaccessibility of Visually-Oriented Anti-Robot Tests: Problems and Alternatives”.
Weighing in at a hefty 3,000 words, it’s pretty long, even for me. It’s more than most people ever need to know about visual verification schemes. But hidden in there is a call to think about the problem you’re solving before relying on CAPTCHAs as a panacea. In some cases, outside of accessibility factors, its use is overkill. And in others, it may provide a dangerous false sense of security.
The new paper also gets into details the older version didn’t, and offers actual guidance at the end for solving the problem. The short version is as follows:
- If you are a major site that doesn’t have a choice
- …then it may make sense to use CAPTCHA, but you must allow other ways for real humans to access your service in a timely fashion.
- If you are a low-volume site such as a blog
- …don’t use CAPTCHA. Especially if it’s just to protect against posting spam comments. It’s inefficient, it’s a usability barrier for everyone, and it locks out more people than you think. Bayesian filtering is a Good Thing. I get dozens to hundreds of comment spams daily, and Spam Karma 2 for WordPress catches them all silently.
- If you are a financial services site
- …don’t use CAPTCHA-like tools for access control. New authentication systems are in use that randomize letter codes to correspond to a numeric keypad displayed on-screen. At best, the design falls into the dubious category of security through obscurity, which means it will be exploited when someone feels like it’s worthwhile. In the meantime, you’re blocking vision- and mobility-impaired users from basic tasks that would allow them to live unassisted. Until you figure it out, don’t take away those users’ autonomy for a short-term security benefit.
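Since I keep pointing to Bayesian filtering as the alternative for blogs, here’s the idea in miniature. This is a toy sketch of naive Bayes word scoring, not what Spam Karma actually does internally:

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Toy Bayesian comment filter: score words by how much more
    often they appear in spam than in legitimate comments."""
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, label, text):
        words = text.lower().split()
        self.counts[label].update(words)
        self.totals[label] += len(words)

    def score(self, text):
        """Log-odds that the text is spam; > 0 means likely spam."""
        log_odds = 0.0
        for w in text.lower().split():
            # Add-one smoothing so unseen words don't zero out the score
            p_spam = (self.counts["spam"][w] + 1) / (self.totals["spam"] + 2)
            p_ham = (self.counts["ham"][w] + 1) / (self.totals["ham"] + 2)
            log_odds += math.log(p_spam / p_ham)
        return log_odds

f = NaiveBayesFilter()
f.train("spam", "buy cheap pills online now")
f.train("ham", "great post thanks for sharing")
print(f.score("cheap pills now") > 0)      # True
print(f.score("thanks for the post") > 0)  # False
```

The crucial property: it never asks the commenter to do anything, so it locks nobody out. The filter’s accuracy is the site owner’s problem, not the user’s.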
Have a look if you are using or considering CAPTCHAs for your site. And thanks to the Working Group, Al Gilman, Jon Gunderson, Janina Sajka, Marc-Antoine Garrigue, Dina Katabi, Kentarou Fukuda, Casey Chesnut, Sam Hocevar, Peter Krantz, Jason White and Viking At Large Charles McCathieNevile for their work on this paper and/or making access control more accessible.
I ordered an iPod nano 4GB from my credit card’s reward program at the end of October. It was scheduled to ship on November 7th, then November 20th. So today, I called customer service.
The representative apologized for the delay, and said that the dates slid because Apple had recalled the inventory they had been promising customers, due to the scratching issues. She went on to say that they should be receiving new product from Apple today and tomorrow.
While this is just one person’s story, and they could just be blowing smoke to keep a customer happy, it seems entirely possible — even likely — that if Apple wanted to fix the nano display’s propensity toward scratching, they’d do it before the Christmas season. I will try to compare the nano I should receive in early December with my business partner’s device to see if I can detect any difference, but I suspect that if they have in fact recalled and upgraded the plastic on the nano devices, they’ll announce it before I ever get to that experiment.
As if you needed more evidence from me that CAPTCHA is a bad idea, here’s some more: Amazon has just made automated Turing tests obsolete.
Witness Mechanical Turk, which creates an open market for humans to solve tasks which are “extraordinarily difficult for computers, but simple for humans to answer.” Sound familiar? It was already a known fact that spammers had used cash (not to mention porn) as an incentive to get people to solve CAPTCHAs. Mechanical Turk now disintermediates the spammer-to-solver equation.
I would say that this is a decent way for blind users to get someone to solve a CAPTCHA that is in their way. But I know how things are going to go: spammers will use Mechanical Turk in droves, flooding it with high-value Turing tests, which will be particularly easy for them to do since it has hooks into Amazon’s Web Services API. They will often masquerade as blind users to attract sympathetic solvers. And they’ll offer the vast majority of the tasks on the site, at low prices, which will threaten the community of solvers unless Amazon gets involved in a serious way to weed them out pre-emptively. In essence, Amazon will have to be able to disqualify CAPTCHA-collectors worldwide, and make it stick, in order to keep solvers coming back, and to keep major Web companies from suing Amazon for contributing to their access-control problems.
In other words, this whole thing, cool as it seems, is doomed from the start. But it’s going to take visual Turing tests along with it. No matter how hard the tests are to solve, Mechanical Turk is a magic bullet for anyone who wants to pay to get past it. It’s not as threatening for bloggers (who shouldn’t be using CAPTCHA anyway, since Bayesian filtering is as effective and less obtrusive) as it is for the Hotmails, Googles and Yahoos of the world, whose resources are worth much more than a ten-cent investment in solving a Turing test. It’s just a much easier method for attacking a weak authentication scheme.
Those of you who saw me at the Podcast Hotel yesterday may have noticed something different. Hopefully it wasn’t the thinning hair, but rather the little piece of technology I was scribbling furiously into during the session.
Yes, I’m Matt May, and for the last six days I’ve been a Tablet PC user.
I needed to have a PC in the house to do development in .NET and testing on IE for Windows. Since my old Vaio is just broken enough not to be able to update the BIOS so it can run XP, that wasn’t an option. (It’s now running Fedora Core 4.) I could connect to my main server from my PowerBook using Remote Desktop Connection, but that’s no good for heavy work outside of the house, so I started looking for other options.
Once I decided I needed a new laptop, it was easy to make the jump to the tablets. After all, I already have a brand-new PowerBook, so if I’m getting a new machine, it had better be a multitasker. (I learned that from Alton Brown.) I also wanted a raft of features that weren’t available together in any entry-level laptops. And I wanted a display that didn’t make me want to tear my eyes out, as most PC laptops do. The tablets were among the only machines out there with good resolution, small size, and lots of what I wanted, and plus, I can scribble on the screen, which is way more fun than it should be.
For a smoking $1119 at eCost, I bought a Toshiba M200, which has a 1.7GHz Centrino, 12.1″ 1400×1050 display, 512MB RAM, 60GB of disk, 802.11g, Bluetooth, and an SD slot (which is just one more excuse to upgrade my camera and go 100% SD in the near future). With no internal optical drive, I’m finding that the only thing I’m really missing is FireWire, because I already have a FireWire CD-RW drive, along with other accessories. A $30 or $40 PCMCIA card should fix that. These go for $1500 and up on eBay, and are about $2000 new, so I’m really happy I found it.
This has been the first business trip I’ve taken in seven years without a Mac in tow. It’s sad to say, but I almost didn’t miss it. Firefox, for one, is much more responsive in Windows than on OS X. I’ve recently become frustrated with Thunderbird and moved back to Mail.app on the PowerBook, so I’m dealing with Thunderbird again on the PC. I miss Adium, but Trillian is cool enough. I also have yet to discover an RSS aggregator or podcatcher that holds a candle to NetNewsWire, so I’m getting by with poor facsimiles thereof. And I’ll need to install Cygwin and/or CoLinux to make up for the utter lack of usefulness that is the XP command line.
Here was my oh-my-god moment with the tablet: I had to send a form off to a client the other day, so I took the fax I received in PDF format from my fax-to-email gateway, opened it up in PDF Annotator, wrote my information directly onto the form, saved it, and sent it back. Without the tablet, that process goes print, scribble, scan, email, or print, scribble, walk to Kinko’s. That was so sweet. I really can’t wait to do some design work directly to the screen. It’s that much fun.
When this contract is over, I don’t know what I’ll do. I’ll go back to the Mac for most of my work after this project, but I think I’ll still be using the tablet more or less every day. Most likely, I’ll do all my work (except drawing) on the PowerBook, and all my personal browsing and other playing around on the tablet. I’m working on my work-life separation these days, anyway. (Maybe I’ll even — gasp! — play games.) It’s also the portable media device for the house, especially since it can be flipped and tilted so it looks almost like a TV. In fact, I have a picture of the tablet running live TV via VLC, and I’ll go into some more detail on how I have that hooked up another time.
(Sidebar: the M200 is excessively stickerized. When I took it out of the box, it had stickers for: where to buy accessories; where to buy support; Windows XP; Centrino; nVidia GeForce FX; Wacom; EnergyStar; the XP license key; wireless MAC address; Ethernet MAC address; a Chinese QA approval; two for standards compliance; a battery-recycling warning; an excessive heat warning; two to denote that it’s reconditioned; one that shows the specs; and one warning me not to turn the display in the wrong direction. Too much noise. I’m stripping them one by one.)
A Wired article on the Future of Hearing exhibit at the Victoria and Albert Museum in London has gotten the attention of my newly-married friend, Keith Robinson. My good friend Shana has been to the museum and reported back with similar ideas: why not use these advanced technologies, originally designed for deaf and hard of hearing people, to benefit people with normal hearing in noisy situations?
This is, of course, what the field of accessibility does: pushing the leading edge of technology to the point that its benefits are felt by everyone. It’s been this way for years. Today, we have people with near-perfect vision getting eagle-eye vision with surgery. ATMs and coin-sorting machines talk to us. Optical character recognition is passé by now, and discrete voice recognition is a common feature on phone systems. Women with toddlers in strollers use curb cuts and handicap toilets with impunity.
So it will be with hearing. The Walkman, and later the iPod and Bluetooth headsets, made it socially acceptable to have something stuck in your ears all day long. From an interpersonal perspective, that’s the hard part. If all that is now cool, then all manner of hearing enhancements are possible. It offers hearing that changes by situation. I’ve used my noise-canceling headphones to have a pleasant conversation with a stranger on a plane before. Why not put on an earphone that will let me converse with someone more clearly at a loud party? Or use multiple technologies originally designed for accessibility purposes to take that one step further, and have that device translate a foreign-language speaker’s words in real time?
The next logical step with these new auditory technologies is to reduce the overall cost to those who most need it, so that a hard of hearing person doesn’t have to choose between the primitive $500 hearing aids that are covered by insurance and amplify everything in the spectrum equally, or the $3000 ones that are small, use digital signal processing to bring out the right sounds for the situation, and are, you know, useful.
This new-found social acceptance also helps to destigmatize people with a need for hearing aids. The Wired article states that fewer than one in four Britons who need a hearing aid actually wear one. By making hearing technology fashionable, these people can not only demand something more aesthetically pleasing, but can demand more from their vendors at a lower cost as an advantage of economies of scale. One can already buy hearing aids that double as Bluetooth headsets for their mobile phone.
But millions of people have discovered the true joy of the iPod: being able to isolate oneself in stores, parks and on the subway. Changing headphone colors from black to white changed everyone’s perception of the average music listener in public. They’ve gone from passive, interruptible nobodies to active listeners who have put the world on pause. This is the inverse of situational hearing: it’s situational deafness. Whatever the social implications of that deafness may be, it is clearly something that makes my life easier when I’m out in public, amidst the noise and haste.
I have reached the limits of my patience with my Nokia 3650 and T-Mobile. In addition to its frequent failure to ring when I’m called (which I understand is actually common on the T-Mobile network), it’s got a nasty habit of shutting itself off for no damned reason whatsoever. So it’s history.
So I’m asking for help with a replacement. My requirements:
- Unlimited data plan
- The fastest, cheapest plan available. T-Mobile offers 56k for $20/month. Verizon offers ten times that data rate for $45/month. I hear that Cingular is offering its EDGE service under the MEdianet brand for $20/month.
- That includes dialup networking functionality for the data plan above. It appears the biggest struggles with data plans on phones center around operators disabling Bluetooth and dialup networking, so I want to make sure that this is part of the deal.
- Flip phone
- Since it will go in my pocket, I’d like a flip or a slide. I’m tired of ass-dialing.
So, that’s it. My research has been less than fruitful so far, but it is only day one. Anyone with information to contribute is encouraged to add it below, with my thanks.