Last week, DigiKnow looked at video and accessibility. Audio is an important feature of video, but it can also be a standalone resource, e.g., a podcast or audiobook. While other digital items may be visual, audio is not, and that is our focus this week.

First, we look at what audio is, then at how audio as speech can be made accessible, the difference between captions and subtitles, and when transcripts should be provided.

Lastly, we offer a few pieces of advice to improve accessibility for deaf or hard-of-hearing audience members, and look at the wider benefits of captions, subtitles and transcripts.

Just in case…

Over the last few weeks, DigiKnow has been addressing a range of accessibility issues.

Let’s get started!

What is audio?

In today's terms, audio is sound that can be recorded, transmitted, broadcast or reproduced, e.g., as sound files. Audio can be further defined as what can be heard within the spectrum of human hearing, whether captured by analogue or digital means. It's a huge topic.

Sound can be speech, background noise, singing or music. This is audio communication. The next question: if we create audio, how can we make it accessible?

Captions vs subtitles, what’s the difference?

The terms captions and subtitles are often used interchangeably, but they are two different things. Read on to find out how.

Captions are intended to describe the audio within the video (e.g., a car screeching to a halt or loud music in the background). Captions are descriptions that add to the viewing experience.

Interestingly, Ofcom published findings in January 2020 on who uses captions within the UK. They found that 7.5 million people used captions, but only 1.5 million of that group were deaf or hard of hearing.

The other six million people use captions for:

  • a better user / viewing experience
  • watching video in noisy or sound-sensitive environments
  • seeing the words on screen, which helps non-English speakers
  • following actors who speak quickly, mumble, or use jargon / brand names, etc.

These were just a few of the reasons non-deaf viewers used captions.

Subtitles are optional for hearing audiences who may not understand the language within the video. This comes back to the speed of speech, accents, slang, etc. Subtitles may be available in several languages for one video. They cover the dialogue only and appear verbatim; subtitles do not include descriptive background audio or scene details.

What are transcripts?

Also known as audio transcription, this is the process of converting speech into written text. A transcript should be provided for any recording featuring audio where it is not possible to provide captions or subtitles, for example, podcasts, where hearing would otherwise be required.

Other benefits include:

  • Transcripts are an easy way to circulate information; whilst audio files can be shared, not everyone can benefit from them
  • They allow quick knowledge transfer and make a useful study aid
  • Students get the full text of a lecture
  • Transcripts are easy to use
  • They increase accessibility

One disadvantage of transcription is the time and cost required to produce it. When using transcription services, the content still needs to be checked and amended before use.

Advice to help improve accessibility

Captions, subtitles and transcripts need to follow proper grammatical rules. Captions also need to include non-speech elements and should use mixed (upper and lower) case rather than all capitals. Remember that font size and colour contrast (whether on video or paper) should also be considered.

Transcripts and captions need to be published before you share videos and podcasts, so that every member of the audience can enjoy the content at the same time. Subtitles are displayed on the video itself. All of these features are optional for audience members to use.

The benefits of accessible audio
  • Captions help foreign-language learners with accents and with complicated, difficult-to-understand terminology. 85% of Facebook videos are viewed on mute, so captioning has really taken off. This is something to consider for other audiences.
  • Captions can be provided in real time, e.g., for live news, online meetings, conferences, etc. There are 466 million deaf / hard-of-hearing people worldwide, so real-time captioning could attract larger audiences to conferences.
  • The quality of captions is critical. Whilst there are many automatic captioning technologies, they are only 80-90% accurate, and less accurate still with regional accents. If speakers talk over each other or mumble, this garbles the captions and degrades the user experience.
  • On a technical level, poor captions impact Search Engine Optimisation (SEO), and bad captions deter viewers from accessing your content. Auto-captions don't pick up punctuation or check grammar, so the wording may give a different message than intended. Grammar and punctuation are critical for any reading experience, much as hearing audience members benefit from speech intonation.
  • Besides serving non-hearing audience members, transcripts benefit everyone, as anyone can read the content. They make a great revision aid.

For live events, if you advertise that real-time captions are available, you may attract a larger audience.

Tech tip

Did you know that YouTube offers free captioning? If you upload a recording, you can download an SRT caption file and use this on other video platforms which support captioning.
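An SRT file is just plain text: each cue is a sequence number, a timestamp line, and the caption text. That structure also makes an SRT file a handy starting point for a transcript. As a rough illustration (the function name and parsing rules below are our own sketch, not part of YouTube's tooling), a few lines of Python can strip the cue numbers and timestamps, leaving only the spoken text:

```python
import re

def srt_to_transcript(srt_text: str) -> str:
    """Convert SRT caption text into a plain transcript.

    Drops cue sequence numbers and timestamp lines, keeping only
    the caption text itself.
    """
    transcript_lines = []
    for line in srt_text.splitlines():
        line = line.strip()
        if not line:
            continue  # blank separator between cues
        if line.isdigit():
            continue  # cue sequence number
        if re.match(r"^\d{2}:\d{2}:\d{2},\d{3} --> ", line):
            continue  # timestamp line, e.g. 00:00:01,000 --> 00:00:04,000
        transcript_lines.append(line)
    return " ".join(transcript_lines)

example_srt = """1
00:00:01,000 --> 00:00:04,000
Welcome to this week's episode.

2
00:00:04,500 --> 00:00:07,000
Today we discuss audio accessibility.
"""

print(srt_to_transcript(example_srt))
# → Welcome to this week's episode. Today we discuss audio accessibility.
```

A quick pass like this won't replace a properly edited transcript — the text still needs checking for punctuation and accuracy — but it saves re-typing the dialogue from scratch.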

Within Queen’s University

You may not know which of your audience members rely upon captions, transcripts or subtitles, and audience members may not tell you which accessibility features are crucial to their learning. These elements should therefore be provided regardless of stated need.

Within Queen's, we use MS Stream and Mediasite to host video content, which may be embedded into Canvas for student learning. MS Stream offers auto-subtitling, but do check the subtitles for errors, as the service is not 100% accurate.

Mediasite has introduced a captioning service which QUB staff can sign up to. To find out more, click here. Incidentally, did you know you can create podcasts on the Mediasite platform?

Podcasting service

There is a podcast service built into Mediasite, through which the Mediasite team can broadcast podcast content to Spotify, Google, iTunes, etc. This has been available for a while.

Some staff might be interested in creating a series of podcasts on particular topics or for particular courses. To do this, you need to Register your podcast.

Along with this, Queen’s has a list of podcasts already available: QUB podcasts


This week, we’ve looked at how to make audio more accessible to audience members and given a number of reasons why this benefits everyone.

Next time

Our blog post next Monday will look at hacking accessibility in the form of colour.

Remember, the DigiKnow blog posts are now released at noon on a Monday.

Please do join us then to learn more and don’t forget to follow us on Twitter: @MDBSelearn.

