Audio transcription has evolved from manual writing to digital recording and software-based transcription. While computer automation is faster, it lacks the human transcriber's ability to correct speech for readability, and it struggles with the wide range of human speech patterns, leading to errors. Learned speech, in which software adapts to an individual speaker, can help overcome the latter flaw.
Audio transcription is the process of turning spoken words into written text. In the past, a person would sit and write down words as they were spoken. Today there are audio recordings of various types and several transcription methods. Analog and digital recordings allow a person who was not present during a conversation to transcribe it later. Many software packages can also read audio files and quickly convert them to text without a human ever having to play them back.
For many years, audio transcription was a specialized and laborious profession. People transcribing speech had to be present when it was given, which often meant companies had to hire workers trained in specialized techniques such as shorthand. This also limited transcription services to those who had access to a qualified transcriber.
With the invention of audio recording, this field changed dramatically. With a recording, the transcriber could work from anywhere the recording could be delivered. Shorthand was also no longer necessary, since the recording could be rewound and listened to over and over again. A single transcriber could even work for many clients at once, since they no longer needed to be present when the speech was given.
As computer use and internet speeds increased, the field of audio transcription initially remained much the same: audio files sent by e-mail simply replaced tapes sent by regular mail. The speed of the process increased, but the methods did not.
This changed in the late 1990s with the growing use of speech recognition and dictation software. The task of transcription moved increasingly toward computer assistance and then full automation. Software packages emerged that can read the information within an audio file and use the speaker's wave patterns to create a text version of a speech. This takes seconds rather than the minutes or hours a human transcriber would need.
Computer-automated audio transcription has some flaws that are difficult to overcome, the biggest of which is a relative lack of corrective judgment. When a human transcriber listens to speech, they can correct minor errors to make the text more readable. While some transcripts are verbatim, meaning they record exactly what the person said, most are not. Without that corrective step, a human will often have to check a machine-generated transcript for errors before it is used.
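To make the idea of corrective editing concrete, here is a toy sketch of the kind of cleanup a human transcriber performs automatically. The filler list and the whole approach are illustrative simplifications invented for this example, not any real product's rules:

```python
# Illustrative filler list; a real transcription tool would use far richer rules.
FILLERS = {"um", "uh", "er", "ah"}

def clean_transcript(raw: str) -> str:
    """Drop verbal fillers and collapse stutters in a verbatim transcript."""
    cleaned = []
    for word in raw.split():
        bare = word.strip(",.").lower()
        if bare in FILLERS:
            continue  # drop fillers a human editor would silently omit
        if cleaned and bare == cleaned[-1].strip(",.").lower():
            continue  # collapse repeated words such as "the the"
        cleaned.append(word)
    return " ".join(cleaned)

print(clean_transcript("So, um, the the report is, uh, nearly done"))
# So, the report is, nearly done
```

Even this trivial cleanup changes the text from what was literally said, which is exactly the judgment call that fully automated systems struggle to make.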
The other common flaw of computer-based audio transcription lies in human speech itself. Since people speak with a wide range of tones and patterns, creating a computer program that can accurately read and translate the full range is extremely difficult. This means some amount of error is common in almost all transcription software. The most common way around this flaw is learned speech, where the program is trained on a single speaker long enough to adapt to that individual's patterns.
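The learned-speech idea can be sketched in miniature: the program keeps a per-speaker record of corrections and applies it to future output. Real engines adapt their acoustic and language models rather than using a lookup table, and the `SpeakerProfile` class and its methods below are hypothetical names for this illustration only:

```python
# Toy sketch of "learned speech": a per-speaker table of corrections is built
# up during training and applied to later recognizer output. Real engines
# adapt statistical models; this dictionary is only an illustration.
class SpeakerProfile:
    def __init__(self, name: str):
        self.name = name
        self.corrections: dict[str, str] = {}

    def learn(self, heard: str, intended: str) -> None:
        """Record that this speaker's mis-heard word maps to the intended one."""
        self.corrections[heard.lower()] = intended

    def transcribe(self, raw_words: list[str]) -> str:
        """Apply everything learned about this speaker to raw output."""
        return " ".join(self.corrections.get(w.lower(), w) for w in raw_words)

profile = SpeakerProfile("Alice")
profile.learn("grammer", "grammar")  # the speaker confirms a correction once
print(profile.transcribe(["check", "the", "grammer"]))
# check the grammar
```

The more such sessions the speaker completes, the more of their individual pattern the program captures, which is why learned speech narrows the error rate for that one person without solving the general problem.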