bestkungfu weblog

Google is a deaf user

Filed in: accessibility, tech, Web, Tue, Nov 23 2004 08:37 PT

In the podcasting session at BloggerCon 3, Steve Gillmor said, “We’re never going to do any transcripts on the Gillmor Gang.” (And got applause.) He went on to say that it’s about the metadata and people’s natural voices and this and that.

By now, regular readers should see where I’m going with this. It seems that just as much of the Web has endeavored to include blind and low-vision users, it’s started building new barriers to deaf and hard of hearing users.

So, since we’re about to reach the same impasse, it seems the same plea is in order. To paraphrase Karsten Self (emphasis mine):

Google is, for all intents, a deaf user. A billionaire deaf user with tens of millions of friends, all of whom hang on his every word. I suspect Google will have a stronger impact than [laws] in building accessible podcasts.

The podcasting phenomenon has burst onto the scene very recently, and whether it fizzles out or goes supernova remains to be seen. But what it will leave in its wake, aside from a whole lot of new microphones cluttering a whole lot of computer desks, is a rush of new audio and video content coming from users — the same kind of users we now call “bloggers”. Most of these new podcasts, like audioblogs, will be either of low enough quality (the cat blogs of the audio world) or fun but of such transient value (Dawn and Drew, Evil Genius Chronicles) that transcription is, perhaps understandably, not a top priority.

But the Gillmor Gang isn’t one of those shows. Gillmor went on to say that he tried to keep the shows’ running time long, essentially to prevent them from being digestible in short form. However, this group of people isn’t just sitting there filibustering to flood the pipe. There’s real value in the topics they’re discussing, and the whole of their discourse is greater than the sum of their individual weblogs. If anybody should be seriously considering regular transcripts, it’s the Gillmor Gang.

What if a deaf user sees a topic that interests him or her, and wants to know what these subject-matter experts have to say about it? Should he or she go without simply because the moderator thinks it would disrupt the natural feel found in the panel’s voices?

Or, closer to home, what of the people who are searching for this kind of content? Should they take it on faith that a given show will talk about what they’re interested in? Or would it be better to offer them a good idea of what to expect, thereby opening up more users to the very idea of podcasting that Gillmor is so protective of? Transcripts are Google’s friend, and they’re richer in information than any metadata — and stronger than Google juice.

Why publish free transcripts when those big radio shows don’t bother, or charge through the nose for them? Because you can, and they should. There’s no better time to one-up the pros than when a need is going unmet. They’d rather control how their information flows than reach a broader audience. Few podcasters have that interest.

Certainly, Steve’s right that there are bits of the discussion that are inexpressible in text. Readers of transcripts know that. I would imagine that relatively few people would be interested in a transcript, were there not other considerations in play, to wit: hearing impairment, no sound card, low bandwidth, or a hostile environment. (When podcast fights pod farm, pod farm wins.) More options for consuming this kind of content are better than fewer. The way to cast a wider net for podcasting is not to close down alternatives; it is to open them up.

27 Responses to “Google is a deaf user”

  1. […] ck

    Google is Deaf

    Matt points out this article, “Google is deaf.” Its point, while more subtle, can be reduced to: […]

  2. […] day, December 3rd, 2004 Google is Deaf, for Now Via Matt Mullenweg, I find that Google is a deaf user. Elliot Back, commenting on that article, suggests r […]

  3. […] u can learn more about W3C at http://www.w3.org/ Matt made very good point in his recent blog regarding “no more transcripts” which do not offer any tr […]

  5. Ian Bicking says:

    Transcripts also allow for reliable and simple quoting, which is something very important in the blogging world. Of course, on the Gillmor Gang they’ve talked about a desire for better tools to do audio quoting (which is entirely possible right now, but the sound editing is a real pain to do). But that still leaves a situation where there are two worlds — audio and text — likely without much back and forth. Do people blog about specific content in a podcast much (as opposed to blogging about podcasting in general)? I haven’t seen it, but I don’t really know.

    I think Gillmor thinks of himself as a pioneer in podcasting, and that personal investment means he doesn’t want to encourage a fluid exchange, as that would dilute the medium. It might still be as successful, but it would no longer be a “phenomenon”, just a natural progression, and there’d be nothing for him to pioneer. At least, that’s what I’d think if I were cynical ;)

  6. Joshua Allen says:

    I’ve thought the same thing about podcasting; it would be nice if there were a way to associate transcripts with the media.

    I think that podcasting will be more dominated by organized media outlets than blogging is, because publishing takes significantly more work (it can’t be done with just a web browser at an internet kiosk); and conversely, the extra effort and time commitment of listening to the casts mean that consumers will be more selective. So I believe that many of the professional casters will have the means and the motive to provide transcripts.

  7. agree wholeheartedly. it surprises – and worries – me that even “experts” who should know better still haven’t understood the basic premises of accessibility and usability…and, hell, common sense.

    it also worries me that common greed (if not financial, at least the “greed for fame and high search rankings”) will more than likely be the contributing factor to any accessibility improvements; rather than doing it for *people*, they’ll eventually be doing it for google (“because, you know, it’s deaf, blind, colour blind, doesn’t have javascript enabled, can’t run flash, can’t use a mouse, etc…”)

    *shakes head*

  8. I have to admit I was not thinking about accessibility in relation to the subject of transcripts. Of course it makes sense in that context, and I appreciate your perception that the Gang material is worthy of that additional effort.

    But I also have been learning as I (we) go about the nature of this new medium, and was surprised to discover my instincts about transcripts (they should be done) were counter to the emergence of the podcasting momentum. As the network grows and technologies for auto-transcriptions become affordable without the cost of training that holds back current technology, the accessibility problem will be overcome.

    I don’t think of myself as a pioneer here, but rather as part of an old school (the Firesign Theatre in my case) that values the magic of the spoken word, not as a replacement for the written word, but as a special place where interesting things can happen. As such, I enjoy the process, creation, editing, and consumption of this podcasting village, and make no bones about wanting to nurture it and help it grow into a powerful force that can help bring an end to, or at least a compromise with, the DRM cartel. It is in that context that I described my ongoing exploration of the podcasting phenomenon when Adam Curry asked.

  9. Elliott Back says:

    Google is Deaf
    Matt points out this article, “Google is deaf.” Its point, while more subtle, can be reduced to:

    Podcasting, that is radio web shows, is the new craze. And we’re never going to transcribe them. However, Google is, for all intents, a deaf user….

  10. Joe Clark says:

    As the network grows and technologies for auto-transcriptions become affordable without the cost of training that holds back current technology, the accessibility problem will be overcome.

    There are no such technologies, nor will there be at any time in coming years. “The network” (i.e., somebody else’s machine) will not help you; you are responsible for the accessibility of your own content.

    The rest of Steve’s comment is, of course, off-topic.

  11. You suggest that podcasters should provide transcriptions, but don’t offer any suggestions on how that can possibly be done economically. Producing accurate transcriptions is not cheap in either personal time or money (if you outsource it to a professional transcriptionist).

    Frankly, I have no interest in podcasts precisely because of the lack of transcripts, *but* I don’t pretend that it would be reasonable for podcasters to produce such transcripts.

  12. Firas says:

    “The network” (i.e., somebody else’s machine) will not help you; you are responsible for the accessibility of your own content.

    Actually, LiveJournal phone posts can be transcribed by their listeners… I daresay that a show with the audience of the Gillmor gang can easily throw up a wiki page and ask users to transcribe 30secs or so if they would and have a complete transcript by the end of the day… which they can then put on an official ‘transcript’ page.
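
    As a minimal sketch of how that chunking might work — assuming ffmpeg is installed, and a hypothetical show.mp3 and chunk-naming pattern — split the file into 30-second pieces and list each one on the wiki for a volunteer to claim:

        # Sketch only: split one show into 30-second pieces for volunteer
        # transcribers. Assumes ffmpeg is on the PATH; "show.mp3" and the
        # chunk naming pattern are hypothetical.
        import subprocess

        subprocess.run(
            ["ffmpeg", "-i", "show.mp3",
             "-f", "segment",           # ffmpeg's segment muxer
             "-segment_time", "30",     # 30-second chunks
             "-c", "copy",              # copy the stream, no re-encoding
             "chunk_%03d.mp3"],
            check=True,
        )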

  13. Joe Clark says:

    Actually, LiveJournal phone posts can be transcribed by their listeners…

    Great. How many do? And what are the accuracy and markup like?

    I daresay that a show with the audience of the Gillmor gang can easily throw up a wiki page and ask users to transcribe 30secs or so if they would and have a complete transcript by the end of the day… which they can then put on an official ‘transcript’ page.

    Great again. So we post unlimited inaccessible audio files and wait for somebody else, on a voluntary honour system, to make inconsequential snippets of them accessible, with no quality oversight. Wonderful system.

    You are responsible for the accessibility of your own content. You can’t wait around for somebody else to fix the problem for you.

    Cf. Case study: Full-text transcripts.

  14. Matt May says:

    …I have no interest in podcasts precisely because of the lack of transcripts, *but* I don’t pretend that it would be reasonable for podcasters to produce such transcripts.

    It’s really not that much to ask of certain types of broadcasts. I can transcribe my own stuff in somewhere between real-time and half-speed. Perhaps, with some practice and an MP3 player that can slow my show down to something that I can consistently transcribe, I can do my voice bits in about 15 minutes, or 1.5 times my speaking time. Whether it’s economical (or “reasonable”) for the podcaster depends on several variables, including audience, frequency of new shows, duration, type of content, whether it was scripted, and how much of it needs to be transcribed. (A rough back-of-the-envelope version of this arithmetic is sketched at the end of this comment.)

    However, it must be noted that none of these variables changes the fact that spoken-word audio without a text transcript is inaccessible to deaf and hard of hearing users.

    Auto-transcription technology — or, more accurately, auto-captioning technology (e.g., Automatic Sync) — is in its infancy, if that. It follows that it’s not a reliable solution. (According to the presentation they gave at the University of Washington, Automatic Sync does support MP3 files, and they’ll charge $30-70 an hour to transcribe it.)

    More importantly, the Gillmor Gang will probably be the last to benefit from such a system because of the number of speakers and the amount of crosstalk involved in a talk show format. No speech-to-text system out there is going to be able to divine which speaker said what, much less what two or more people talking over each other are saying. That’s hard enough for humans to accomplish. Sometimes, it’s hard enough for moderators to track.

    (And yes, I realize I say this as I’m behind on my own transcribing work, which I totally promise to make good on. Bad Matt. No D’Angelo cheeseburger sub until Staccato 2 is transcribed.)
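
    Here is a rough back-of-the-envelope version of that arithmetic, treating the figures above (1.5 times speaking time for do-it-yourself work, $30-70 per audio hour for a service) as illustrative assumptions rather than quoted rates:

        # Rough transcription estimate. All figures are illustrative
        # assumptions, not quoted rates.

        def diy_hours(audio_minutes, speed_factor=1.5):
            """Hours to transcribe it yourself at ~1.5x the speaking time."""
            return audio_minutes * speed_factor / 60

        def outsourced_cost(audio_minutes, rate_low=30, rate_high=70):
            """Dollar range at an assumed $30-70 per audio hour."""
            hours = audio_minutes / 60
            return hours * rate_low, hours * rate_high

        minutes = 60  # a typical hour-long talk show
        low, high = outsourced_cost(minutes)
        print(f"DIY: about {diy_hours(minutes):.1f} hours of typing")
        print(f"Outsourced: roughly ${low:.0f}-${high:.0f}")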

  15. Adrian Bacon says:

    I’ve always been a huge proponent of providing transcripts of my podcasts, and even though at the moment I don’t have any transcripts, I’m actively looking for a way of doing it in an automated fashion using software during my post-production. I want not only to provide a transcript on Linux Log (the site hosting the podcast), but also to embed that transcript into the lyrics field of the MP3 (or any other format I choose to provide my podcasts in), so that what the transcript says is also physically tied to the actual audio data (a rough sketch of one way to do that appears at the end of this comment). That way, if any of the search engine crawlers ever learn to read the metadata of the audio files, assuming it hasn’t been changed, they will know what the audio is and where it’s from.

    Leaving accessibility aside, and focusing on identifying audio data you find without having to listen to it, this may not have an immediate benefit right now, but I think it will in the future.

    And yet, it makes it infinitely more accessible.
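
    A minimal sketch of what that embedding might look like, assuming the Python mutagen library and hypothetical episode.mp3 and episode.txt files; the unsynchronised-lyrics (USLT) frame is the ID3 slot that normally carries lyric text:

        # Sketch only: write a plain-text transcript into an MP3's ID3
        # lyrics frame so the text travels with the audio file itself.
        # Assumes the mutagen library; both filenames are hypothetical.
        from mutagen.id3 import ID3, ID3NoHeaderError, USLT

        with open("episode.txt", encoding="utf-8") as f:
            transcript = f.read()

        try:
            tags = ID3("episode.mp3")      # load the existing ID3 tag
        except ID3NoHeaderError:
            tags = ID3()                   # or start a fresh one

        tags.add(USLT(encoding=3,          # 3 = UTF-8
                      lang="eng",
                      desc="transcript",
                      text=transcript))
        tags.save("episode.mp3")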

  16. Meryl says:

    Oh, cool! A Web site is finally deaf like me! :) OK, that is my smart aleck comment quote for the day.

  17. Ponderance says:

    The PodCasting Revolution (for some…)
    That point, that you can’t skim and summarise audio (as yet) is an important one. Sure, the podfather Adam Curry is trying to get around this with OPML and HTML show notes, as are others, but that still doesn’t quite go beyond, say, being comparable …

  18. Google Is Deaf
    They’re just now noticing Google is a deaf user? Actually, much of the Internet is, as podcasting, videos, and audio files grow by (cliché) leaps and bounds. Between the entry and the comments that follow including a couple from Joe…

  19. Andy Carvin says:

    Thanks for the insightful post. I discovered it after Glenn Laird posted a note about it on the Digital Divide Network. It made me go out and post my own podcast on the subject. And yes, http://www.andycarvin.com

  20. I was born with a nerve deafness that remains incurable to this day.

  21. Beth Case says:

    I have a podcast (http://disability411.jinkle.com) and provide transcripts at the same time the audio is released, so deaf and hard of hearing individuals can access the information at the same time everyone else can. I have received numerous compliments and feedback from non-disabled listeners who appreciate the information in text form. I use a very inexpensive service called CastingWords (http://www.castingwords.com). They charge something like 42 cents per minute of audio, generally get the transcript back to me within 24 hours, and have fantastic customer service.

  22. Wanda says:

    Google is great for all Deaf Users whose deafness remains incurable to this day.

    Cochlear Implanted people can hear only if they are hard of hearing first.

    Cochlear Implanted People who were born Deaf still cannot use telephones like Hearing and Hard of Hearing People can.

    Cochlear Implants are not a cure for deafness.

    Google and text are freedom of information for all Deaf Users.

  24. Gerry says:

    Why are you insulting the Deaf community?

  25. Matt says:

    I’m not. At all.

