Computer transcription misleads even as it impresses

With speech-to-text transcription, what are you really saving?

[Patrick Emond contributed to this post]

Last week, IBM trumpeted their latest achievement in automated speech-to-text: a record-low error rate of 5.5 percent. But as always, especially when it comes to saving money on transcription, you have to read the fine print.

Continue reading “Computer transcription misleads even as it impresses”

Commas can make or break transcription (or the case of the $10 million comma)

Oakie, Oakhurst’s loveable mascot, seen here seemingly succumbing to exhaustion with a world-weary smile and an absent gaze. Unknown whether overtime was a factor.

In which we ponder how an antiquated Maine labor law, a class-action lawsuit, and a controversial bit of punctuation can make the national news.

Recently, my wife forwarded me a New York Times article about a lawsuit in my home state of Maine. That isn’t a common occurrence; how often does anyone give much thought to labor disputes in their hometown? But this one had a special flavor that speaks to the risk inherent in subpar transcription.

Continue reading “Commas can make or break transcription (or the case of the $10 million comma)”

Reality Check: Transcription Vs. Speech Recognition Software – The Showdown

If you’re a fan of the game show Jeopardy!, you already know that this week, IBM supercomputer Watson took on legendary past Jeopardy! champions (and human beings) Ken Jennings and Brad Rutter in a Human vs. Human vs. Machine grudge match, and we now know Machine has won!
Congratulations to Watson.
We don’t have a super-computer, or a fancy game-show soundstage, but we are bringing you the results of our Human vs. Machine faceoff. Can human transcriptionists from the Audio Transcription Center (ATC) slay the Dragon? Read on and find out!
(Full disclosure: we’re a transcription company that has been in business since 1966. Successful speech recognition software could put us out of business. Just so you know.)
Championships have been won in Boston: the Red Sox have won World Series, the Celtics NBA Championships, and the Bruins Stanley Cups, all just five minutes from our very offices. So it is fitting that our office be the site of this titanic Human vs. Machine bout!
First of all, I will introduce the Machine… wearing a green cardboard box, from Nuance: Dragon NaturallySpeaking 10, Home Edition, or as we prefer to call it, “Team Dragon”. (Version 11 has been released since we began testing; we will put it to the test at a later date.)
And in the other corner, wearing headphones, torn jeans and flexing their fingers… the human transcriptionists of the Audio Transcription Center (ATC), specifically four randomly-selected competitors from our staff of dozens of versatile, multi-talented transcriptionists. All four, collectively known as “Team ATC”, were eager to take on the challenge.
“But wait,” you exclaim! “Dragon only works with one voice at a time; this is an unfair fight!” Correct. But rather than automatically claim victory, we decided to level the playing field by having both competitors work with only one voice, speaking on a variety of subjects.
Dragon Naturally Speaking (or “Team Dragon”), as well as our team of terrific transcriptionists (or “Team ATC”), would be transcribing the voice of… me. Your humble blogger, formerly heard on college radio and occasionally behind a karaoke machine, would be the voice that would take both competitors to their limits!
Let’s begin the match, shall we?
First of all: speed of delivery
Team Dragon: walk to the store, purchase the software, come back to the office.
Team ATC: walk to the subway, purchase subway ticket, come to the office.
Advantage: We’ll call this one a tie.
Speed of installation
Team Dragon: 32 minutes for “complete installation”. The DVD-ROM was a very bright shade of orange.
Team ATC: less than 10 minutes for installation, and that includes pouring themselves a cup of coffee while the computer boots up. Occasionally wears bright colors as well.
Advantage: Team ATC.
Speed of training for first-time use
Team Dragon: 39 minutes, from first launch until the program was ready for prime-time, including entering the serial number at least 4 times.
Team ATC: About two hours, including filling out at least 4 pieces of paperwork. We’re thorough that way.
Advantage: Team Dragon.
So far, before we’ve introduced actual transcription into the contest, we’re tied at 1-1. It’s a close match in the early going…
Now, let’s bring in some actual audio. Specifically, about 1,135 words, spoken over about 7 minutes, on a variety of subjects, by yours truly.
“But wait,” you exclaim. Again. “‘Team Dragon’ has to be trained to recognize your voice! It’s designed to improve as you use it more!” Correct. Whereas “Team ATC”, none of whom have ever heard my voice on a recording, can hit the ground running immediately. Advantage: Team ATC.
Back to the audio: our four transcriptionists each took one pass at it, transcribing it verbatim (with ums and ahs). Once done, the audio was given a real-time review, and time needed to perform corrections was noted.
Transcription time for “Team ATC”, for seven minutes of audio spoken in a quiet room, clearly and methodically, averaged out to 20 minutes.
But how did it look, you ask? There was an average of two errors in the 7-minute file. Out of 1,135 words, that’s over 99.8% accuracy before review. Review time averaged out to eight minutes, for a total score of 28 minutes.
Now, for the first round with “Team Dragon”. For the first round, I once again spoke slow-ly and meth-od-ic-al-ly. I also spoke punctuation and carriage returns in their appropriate places, as per instructions.
Dictation time for “Team Dragon”, first round? 16 minutes. Which sounds fast, until you realize that reading the audio into a recorder at ‘normal’ pace took less than half that time.
But how did it look, you ask? Not so good. Review time took 18 minutes, with over 60 errors (versus two!), for a total score of 34 minutes and around 94% accuracy, or roughly 15 errors per page. Which sounds good, until you remember that this is one voice, speaking slow-ly and meth-od-ic-al-ly. Which most of us don’t do in our daily lives.
Advantage for round one: “Team ATC”.
Before the competition, and in between rounds, while “Team ATC” was eating lunch or going for walks, “Team Dragon” was in training, as I read and corrected material from various sources into the software. Song lyrics, blurbs from dust jackets, chocolate bar wrappers… “Team Dragon” was being further trained to recognize my dulcet tones.
For round two with “Team Dragon”, I changed a setting to speed up the process; Dragon has a setting which inserts commas and periods in logical places. That indeed shaved a few minutes from the dictation time: dictation now took 11 minutes.
But how did it look, you ask? Still not so good. There were over 40 errors; review time took 13 minutes (which was, again, longer than the dictation itself). That’s over 96% accuracy, or roughly 10 errors per page. Which, again, sounds impressive, until you compare it to Team ATC’s 99.8%.
Total time for round 2, including review time: 24 minutes. Which means…
Advantage for round two: “Team Dragon”.
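For the math-inclined: the accuracy figures above are simply one minus the error count divided by the word count. Here is a quick back-of-the-envelope sketch in Python, using our numbers from the match (the helper function is our own illustration, not anything we used on fight night):

```python
# Accuracy as scored in our match: 1 - (errors / total words), as a percentage.
def accuracy(errors, total_words=1135):
    return 100 * (1 - errors / total_words)

for team, errors in [("Team ATC", 2),
                     ("Team Dragon, round one", 60),
                     ("Team Dragon, round two", 40)]:
    print(f"{team}: {accuracy(errors):.1f}% accurate")
# Team ATC: 99.8% accurate
# Team Dragon, round one: 94.7% accurate
# Team Dragon, round two: 96.5% accurate
```

Note that even a few extra points of error translate to dozens of corrections on a single page of transcript.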
So what have we learned? That speech recognition software can, with repeated training, be accurate enough that your dictation time, plus your review time, can be faster than a human transcriptionist.
So “Team Dragon” wins? The robots are taking over?
Uh, no.
If your audio input consists of one voice, and only one voice, and you have enough access to that one voice to allow Dragon to become further accustomed to that one voice, then by all means, stop reading now, and become a proud supporter of “Team Dragon”.
For everyone else, “Team ATC” is still miles ahead. “Team ATC” can transcribe your all-hands meeting, with its 27 participants from the CEO to the intern. “Team Dragon” can’t.
“Team ATC” can transcribe your interview with your Nana where she talks about the old country; and because the Audio Transcription Center (ATC) can match your interview subject matter up with the right member of “Team ATC”, you can get a transcript with 99% accuracy or higher, even though we’ve never heard your voice.
 
“Team Dragon” can transcribe you or your Nana, at lower than 99% accuracy, and only knows what it’s been programmed about the old country.
And most importantly, the human beings at the Audio Transcription Center (ATC) can consult with you before your project even begins, and work with you to help you get the most out of your limited transcription budget.
When and if “Team Dragon” catches up to us, and is able to transcribe the material our talented, smart human beings are able to transcribe, quickly and accurately, we will be the first to jump on the bandwagon. Until “Team Dragon” puts us out of business.
But for now, if you call the Audio Transcription Center (ATC), there are no machines to train, no dragons to slay, just friendly, helpful customer service, a second-to-none transcription staff and a 100% satisfaction guarantee.
Next in line for us is a white paper that will help you find your best transcription solution, even if it is (gasp) not us!
by Patrick Emond

Reality Check: Transcription vs. Speech Recognition Software

Here at ATC, we occasionally get the tough questions. One in particular briefly stops us in our tracks: “Why can’t I just use speech recognition software?”

Nobody likes being replaced by a computer, or a robot, and we are no exception. Our short answer to that question is this: “We are more accurate and more versatile than the software available today.”

Still don’t believe us? Well, we’re going to introduce you to our competition.

Speech recognition has been around since 1952: that early device could recognize single spoken digits. (We, on the other hand, have been around since 1966, and were able to recognize whole spoken sentences immediately.)

The next large leap forward came in 1982: Dragon Systems, which still releases speech recognition software today, released software for industrial use. By 1985, that software had a vocabulary of 1,000 words, spoken one at a time. (That is comparable to a four-year-old child. We don’t recommend having a four-year-old, even a precocious one, transcribe your audio.)

Dragon itself even admits this today: “Most of us develop the ability to recognize speech when we’re very young. We’re already experts at speech recognition by the age of three or so.” Our college-educated transcriptionists had vocabularies in the 17,000-word (and up) range. Even in 1985. And they still do.

By 1993, a computer could recognize over 20,000 spoken words, which put its vocabulary on a par with human beings’. Except for the accuracy, which was only 10% in 1993. By 1995, the error rate had dropped to 50%; in other words, accuracy had jumped from 10% to 50% in just two years, which is quite a leap in a short time. (Our transcriptionists test at 98% accuracy.)

In 1997, Dragon released “Naturally Speaking”, its first consumer speech-recognition product. By 1997, we already had a 31-year head start on transcription for consumers at large.

We know, we know…

“That was back then. How about now?”

We’re glad you asked. 

Since 1985, the National Institute of Standards and Technology (NIST) has been benchmarking speech recognition software. The graph below highlights several of their relevant benchmark tests.
 
(source: National Institute of Standards and Technology, http://www.itl.nist.gov/iad/mig/publications/ASRhistory/index.html)

There are a lot of data points up there, so let me highlight the important features:

    • Take a look at the error rates (WER means Word Error Rate) for Conversational Speech (in red) and Meeting Speech (in pink). They aren’t even close to what human beings can deliver.
    • That 2% to 4% range is human error. As in, the accuracy rate you would get from our human beings. And we aim for even lower than that.
    • The only tests that match up with human accuracy are air travel planning kiosk tests (bright green). Also known as “People Who Speak Very Deliberately and Slowly in Airports.”
    • Very few people speak deliberately and slowly in real life.
    • The error rate for broadcast news readers (blue), i.e., people who are very well paid to speak clearly, is around 10%.
    • Software has to be trained to recognize your voice, and re-trained to recognize anyone else’s. Our transcriptionists can handle a meeting full of speakers and accurately differentiate them.
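Incidentally, “word error rate” has a precise definition: the number of substitutions, insertions, and deletions needed to turn the machine’s output into the reference transcript, divided by the number of words in that reference. A minimal Python sketch of the idea (ours, for illustration; NIST uses its own scoring tools):

```python
def wer(reference, hypothesis):
    """Word Error Rate: (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, computed over words instead of characters.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting every reference word
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting every hypothesis word
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution (or match)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("call the audio transcription center",
          "call the audio transcription center"))   # 0.0, a perfect transcript
print(wer("call the audio transcription center",
          "call the audio transcription centre"))   # 0.2, one error in five words
```

One wrong word in five is a 20% error rate, which is why those red and pink conversational-speech lines on the graph matter so much.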

A 98% accuracy rate means you will spend much less time reviewing your audio, correcting errors and inaccuracies, and much more time growing your business.

The bottom line is this: computers are getting smaller and more powerful all the time. They can do many things better than human beings can.

But not, as you can see, transcription. And looking at the graph, they won’t catch up anytime soon.

Your audio wasn’t recorded in a lab, it was recorded in the real world, where we live. We transcribe conversations and meetings every day, from all over the world. Not to mention webcasts, dictation, presentations, and conferences.

Again, Dragon says it themselves: “People can filter out noise fairly easily, which lets us talk to each other almost anywhere. We have conversations in busy train stations, across the dance floor, and in crowded restaurants. It would be very dull if we had to sit in a quiet room every time we wanted to talk to each other! Unlike people, computers need help separating speech sounds from other sounds.”

Our transcriptionists and production staff are highly educated, well-trained, and are constantly learning, whether that means going to graduate school, reading magazines, or watching the newest viral videos.

We like computers, and we think we can co-exist. So, by all means, speak your destination into your cell phone’s GPS, or say “tech support” to speak to technical support. Those are two versions of speech-recognition software that many of us use almost every day.

But if your audio is any more complicated than that, call us. We’re versatile, we’re accurate, and if you pour us enough coffee, we won’t crash.

We have run full tests on the entire Dragon experience, from opening the box all the way to the proof of the pudding, which is in the crust… er, the transcript. We will publish those results on or before February 17, so keep an eye on your inbox and this blog for the results!