There are a few basic steps to follow when creating a video or audio resource (e.g. a screencast, presentation, lecture, short tutorial, or podcast) to ensure your recording is accessible. Before we look at tools and approaches that help produce accessible recordings, let’s clarify the following important terms:
🔲 Closed captions provide a text version of speech as well as other audio content such as background noise, music or sound effects. Closed captions can be turned on or off while watching a video with the click of a button. Captioning offers several benefits, as long as the captions are accurate. Captions are necessary to make video content accessible to those who are deaf or hard of hearing. On top of that, they assist English as a second language (ESL) speakers, and help those with learning disabilities or attention deficits maintain their concentration. Captions also make content more flexible, as people can read rather than listen (e.g. in public places, or in libraries where sound cannot be used).
🔲 Transcription is the process in which speech or audio is converted into a written, plain text document. Transcripts are the output of transcription; because they are plain text, no timing information is attached to them. There are two main transcription practices: transcripts can be produced (i) by automatic speech recognition (ASR), where you use the automated captions generated for a recorded video (e.g. using MS Teams + Stream) to create a transcript that you can edit and share alongside your video, or (ii) by human transcriptionists (at a cost per minute).
Accurate closed captions are legally required to make videos accessible. Transcripts are required to make audio-only content accessible. Captions and transcripts include the same text, so one can be used to develop the other.
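To illustrate how one can be used to develop the other, here is a minimal sketch that derives a plain-text transcript from a WebVTT caption file (the standard caption format for HTML5 video) by stripping the header, cue numbers, and timestamp lines. The sample captions are made up for illustration; real caption files may also contain cue settings and styling that a production script would need to handle.

```python
import re

# Hypothetical sample captions (WebVTT format, as used by
# HTML5 <track> elements). Content is illustrative only.
VTT = """WEBVTT

1
00:00:01.000 --> 00:00:04.000
Welcome to this short tutorial.

2
00:00:04.500 --> 00:00:08.000
Today we will look at accessible recordings.
"""

# A WebVTT timing line starts with "hh:mm:ss.mmm --> ".
TIMESTAMP = re.compile(r"\d{2}:\d{2}:\d{2}\.\d{3} --> ")

def vtt_to_transcript(vtt: str) -> str:
    """Keep only the spoken text: drop the WEBVTT header,
    blank lines, numeric cue identifiers, and timing lines."""
    kept = []
    for line in vtt.splitlines():
        line = line.strip()
        if (not line or line == "WEBVTT"
                or line.isdigit() or TIMESTAMP.match(line)):
            continue
        kept.append(line)
    return " ".join(kept)

print(vtt_to_transcript(VTT))
# → Welcome to this short tutorial. Today we will look at accessible recordings.
```

Going the other direction (transcript to captions) requires re-adding timing, which is why editing auto-generated captions first, then exporting the transcript, is usually the easier workflow.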