Following on from two blog posts, Disability Services (May 10th) and Assistive Technologies (May 17th) at Queen’s University, and last week’s post on Getting Access: Blind Users, we want to continue considering audience members with specific needs and content design.
This week, we start by considering audience members who are D/deaf or hard of hearing. If you are a content creator, this post will guide you towards considering different audience members’ needs and designing content to be inclusive.
D/deaf and hard of hearing
The word ‘deaf’ describes or identifies people who have severe hearing loss. When you see ‘Deaf’ (capital D), this refers to people who have been deaf since birth and/or before they learned to speak (also known as culturally Deaf). They may also have a family member who is Deaf. Sign language will usually be a Deaf person’s first language.
Others who are deaf (small d) may have hearing loss but may have learned to speak. People who are deaf may use speech as their first language and/or use sign language, but this isn’t the same for everyone who is deaf.
A third category is the hard of hearing. This group may not use sign language to communicate (although some might). The term hard of hearing may be used to describe hearing loss more generally. There are many other hearing conditions to consider, far too many to mention here.
Around 11 million people in the UK are D/deaf or hard of hearing. Two million people are hearing aid users, and 1 in 6 adults is affected by hearing loss.
What we want to explore is how digital content needs to be designed for D/deaf and hard of hearing audience members, what Assistive Technologies (ATs) are available, and some third-party apps which can be downloaded to assist.
Accessible design for the D/deaf and hard of hearing
Accessibility matters: the digital content we create should be usable by everyone. This will require adapting materials, and that adaptation should be embedded into our practices.
Let us ask: if you were D/deaf or hard of hearing, what teaching materials could you not use? Video with audio only would be a problem. If the video shows a person talking directly to camera, there is the possibility of lip reading, but informational videos won’t provide that opportunity; if the audio isn’t of use, neither is the video.
Podcasts can’t be used by this group of audience members without adaptation. All other teaching materials could be used (i.e., text, slides, images, etc.), but audio/video materials need attention.
Taking this into consideration, let’s look at practices to avoid and include as content creators.
Practices to avoid
Please avoid the following practices when designing content for audience members with hearing impairment:
- Video with missing subtitles or transcript
- Podcast with missing transcript
- Audio with extraneous background noise
- Overusing slang and wordplay
Video with missing subtitles or transcript
Video is a great teaching tool. It’s great for showing processes, condensing information and moving away from the text/reading format. Admittedly, it’s more engaging, and often a picture is worth a thousand words.
However, if videos use audio as the only means of verbal communication, this excludes D/deaf audience members from the content and disadvantages those who are hard of hearing. Such content needs subtitles or captions generated for D/deaf and hard of hearing audience members.
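If you produce caption files yourself, the WebVTT format used by most web video players is plain text and easy to generate. Below is a minimal sketch in Python; the cue timings and caption text are illustrative only, not taken from any real video.

```python
# Minimal sketch: writing a WebVTT caption file by hand.
# The cue timings and text below are illustrative only.

def format_timestamp(seconds: float) -> str:
    """Convert seconds to the HH:MM:SS.mmm form WebVTT expects."""
    hours, remainder = divmod(seconds, 3600)
    minutes, secs = divmod(remainder, 60)
    return f"{int(hours):02d}:{int(minutes):02d}:{secs:06.3f}"

def write_webvtt(cues, path):
    """cues: list of (start_seconds, end_seconds, text) tuples."""
    with open(path, "w", encoding="utf-8") as f:
        f.write("WEBVTT\n\n")
        for start, end, text in cues:
            f.write(f"{format_timestamp(start)} --> {format_timestamp(end)}\n")
            f.write(text + "\n\n")

cues = [
    (0.0, 3.5, "Welcome to this week's lecture."),
    (3.5, 7.0, "[car screeches to a halt]"),  # captions describe sounds too
]
write_webvtt(cues, "lecture.vtt")
```

Note the second cue: captions (unlike plain subtitles) also describe meaningful non-speech sounds.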
Podcast with missing transcript
Podcasts are purely audio. The audio can be speech, music or other sound. Again, if only the audio is present, this excludes and disadvantages audience members who are D/deaf or hard of hearing.
Transcripts should be provided with podcasts to give your audience choice of how they want to digest the content. Remember, subtitles, captions and transcripts benefit all audience members as they can be read in noisy environments and the scripts used for annotation and/or revision purposes.
Audio with extraneous background noise
Audience members who are hard of hearing will really struggle to separate speech from environmental background noise within podcasts and videos. If creating video or recorded content, please ensure background noise is reduced in favour of the foreground audio (speech).
Overusing slang and wordplay
People within the Deaf community may have little or no knowledge of the phonetic sounds which are familiar to hearing people.
It is very difficult for audience members who are D/deaf or hard of hearing to understand phonetic abbreviations and wordplay. Heavy use of synonyms can also be very challenging, as sign language uses very few.
Sign language relies heavily on facial expressions and/or body language to communicate essential meaning and emphasis. Very often, there is no direct written translation between verbal and signed communications for these meanings and emphasis.
This is particularly true for Deaf audiences, for whom text (subtitles and captions) is considered a second language (British Sign Language is a Deaf person’s first language). D/deaf audience members may have varying levels of text fluency and may not be as proficient with text.
Now let’s have a look at practices to include when creating content.
Practices to include
The following practices should be incorporated into your workflow when designing content for audience members with hearing impairment:
- Closed captions/subtitles
- Highlighting key concepts using alternatives to colour
- Provide speaker notes (MS PowerPoint)
- Interpreters and sign language
- Playback controls
- Reducing extraneous background noise
- Lip Reading
Closed captions/subtitles
What’s the difference between Closed Captions (CC) and subtitles? We’re glad you asked. Typically, captions and subtitles are used interchangeably, but there is a difference between the two.
Captions are intended to describe the audio within the video (e.g., a car screeching to a halt or loud music in the background). Captions are descriptions that add to the viewing experience, primarily for D/deaf and hard of hearing audience members but useful to many others.
Interestingly, Ofcom published findings (January 2020) on who uses captions within the UK. They found that 7.5 million people used captions, but only 1.5 million of that group were D/deaf or hard of hearing.
That leaves six million hearing viewers using captions, and these were just a few of their reasons:
- better user/viewing experiences
- watching video in noisy or sound-sensitive environments
- non-English speakers benefit from having the words on screen
- subtitles are easier to translate to other languages
- actors speak quickly and mumble or use jargon / brand names, etc.
- technical language can be difficult to understand verbally
- improves language literacy
Subtitles are optional for hearing and non-hearing audiences who may not understand the language within the video. This comes back to the speed of speech, accents, slang, etc. Subtitles may be available in other languages within a video; these cover the dialogue only and do not include descriptive background audio.
The availability of captions and subtitles benefits all audience members.
When planning a podcast or video, there is typically a script for presenters to read from on autocue. This should be made into a transcript and checked against the audio to ensure it is verbatim.
Transcripts should be made available with the podcast from an accessibility viewpoint. They don’t just benefit audience members who are D/deaf or hard of hearing. Every audience member can benefit from transcripts in the same way as subtitles/captions, and for revision purposes and student annotations too.
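If you already have a caption file, a plain-text transcript can be derived from it rather than written twice. A minimal sketch (the sample captions are illustrative):

```python
def vtt_to_transcript(vtt_text: str) -> str:
    """Strip WebVTT headers, cue numbers and timing lines, keeping only the text."""
    kept = []
    for line in vtt_text.splitlines():
        line = line.strip()
        if (not line
                or line == "WEBVTT"
                or "-->" in line        # timing line, e.g. 00:00:00.000 --> 00:00:03.500
                or line.isdigit()):     # optional cue number
            continue
        kept.append(line)
    return " ".join(kept)

sample = """WEBVTT

1
00:00:00.000 --> 00:00:03.500
Welcome to this week's lecture.

2
00:00:03.500 --> 00:00:07.000
Today we cover caption formats.
"""
print(vtt_to_transcript(sample))
```

The result is a readable transcript students can annotate or search, produced from the same source as the captions so the two never drift apart.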
We’ve covered why text accessibility is important in previous posts and include a link here to Hacking Accessibility: Text for you to refer to.
To consolidate text accessibility: text size should be a minimum of 12 pt (MS Word) and 24 pt (MS PowerPoint body content). Text colour should have good contrast against the background. Line spacing should be set to 1.5 and text left aligned. However, do read the previous post for more in-depth information.
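“Good contrast” can be checked numerically: the WCAG guidelines define a contrast ratio computed from the relative luminance of the text and background colours (AA conformance asks for at least 4.5:1 for normal-size text). A sketch of that calculation:

```python
def channel(c: float) -> float:
    """Linearise one sRGB channel (0-255) per the WCAG relative-luminance formula."""
    c = c / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (L1 + 0.05) / (L2 + 0.05), lighter colour first."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background gives the maximum ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Online checkers do the same sum, but having the formula to hand makes it easy to audit a whole colour palette at once.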
Highlighting key concepts using alternatives to colour
Avoid using colour alone to highlight important messages within text and think about how you format text to draw attention to its importance.
Content designers frequently use bold, italic and underline to make key information stand out. Some overuse this formatting, and it loses its intended impact.
If everything is highlighted, what is the key information?
Provide speaker notes (MS PowerPoint)
In MS PowerPoint, if speaker notes are used and the slides are shared with students, the notes are available to them as well. This adds to the richness of the content and is an advantage for all audience members.
Interpreters and sign language
Qualified Deaf Interpreters may be deaf themselves and often have a deeper understanding of how sign language works. Deaf Interpreters use sign language, visuals and other forms of communication to convey information to the D/deaf and hard of hearing. Their main role is to communicate.
It’s not always easy to have access to interpreters for teaching and learning but at Queen’s via Disability Services, there is one-to-one support provided if students are in receipt of funded support such as: Disabled Students Allowance (DSA). This includes one-to-one interpreters and signers.
Like English, sign language has numerous dialects depending on where you are in the world. This includes:
- American Sign Language (ASL)
- British Sign Language (BSL)
- Irish Sign Language (ISL)
- Plus many more, too numerous to list (see the Wikipedia page)
For many Deaf people, sign language is their first language. In Northern Ireland, British Sign Language (BSL) is used. Let’s focus on that a moment.
British Sign Language (BSL) was developed by the D/deaf and hard of hearing communities of the UK. It was first recognised by the British government in 2003, and there are approximately 200,000 D/deaf and hard of hearing users throughout the UK.
The English and BSL languages are very different! BSL has its own grammar and sentence structures which differ from the English language. This makes BSL unique as a language. Sign language is a visual language which requires use of your hands, face and body to make signs to communicate.
For many people, learning sign language is of benefit as:
- it introduces us to new cultures and communities
- we meet new people and make friends
- it improves our peripheral vision and reaction time
- you may find work as a BSL Interpreter
- it is a satisfying job supporting others
- it increases deaf awareness
- you become better at spelling
- it helps you to become a better listener
- it improves your body language skills, and
- it includes the D/deaf and hard of hearing!
Playback controls
It is important for media players to have playback controls. This allows audience members to stop, start, fast forward and rewind the media, not only with a mouse but for keyboard users as well. Using Tab, Space and other keyboard controls, users should be able to navigate media playback without relying on hand/eye co-ordination.
Whilst this is more important for audience members with sight loss, other audience members may prefer to use keyboard shortcuts or they may be necessary due to technology limitations.
Playback should also offer audio level control, subtitles or captions, different playback speeds and full-screen viewing. Good practice: audio and/or video on the web (including the Canvas VLE) should only be initiated by the user (don’t autoplay).
For additional inclusivity, it might be beneficial for videos to have an option for interpreter/sign language on/off (like the subtitles/captions feature).
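One way to catch autoplaying or control-less media before publishing is to scan the page markup. Below is a minimal sketch using Python’s standard-library HTML parser; the page snippet and the `MediaAudit` class name are illustrative, not part of any real tool.

```python
from html.parser import HTMLParser

class MediaAudit(HTMLParser):
    """Flag <audio>/<video> tags that autoplay or lack player controls."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag in ("audio", "video"):
            names = {name for name, _ in attrs}
            if "autoplay" in names:
                self.issues.append(f"<{tag}> autoplays")
            if "controls" not in names:
                self.issues.append(f"<{tag}> has no playback controls")

# Illustrative page fragment: the video autoplays and hides its controls.
page = '<video src="lecture.mp4" autoplay></video><audio src="pod.mp3" controls></audio>'
audit = MediaAudit()
audit.feed(page)
print(audit.issues)
```

A check like this could run over exported course pages as a quick pre-publication audit, alongside manual testing with a keyboard only.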
Reducing extraneous background noise
Audience members who are hard of hearing will struggle with separating speech from environmental background noise within podcasts and videos. To help make the content more accessible as content creators, we should consider these options:
- check foreground audio (speech) is clearly distinguishable from background audio (noise)
- ensure background noise is 20 decibels lower than foreground speech
- if possible, provide an option for background noise to be turned off, and/or exclude background noise
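The “20 decibels lower” guideline can be checked from the RMS amplitudes of a recording’s speech and noise segments, since a decibel difference is 20·log₁₀ of the amplitude ratio. A minimal sketch; the RMS values are illustrative, and in practice you would measure them from the audio samples in an editor or script:

```python
import math

def db_difference(speech_rms: float, noise_rms: float) -> float:
    """Level difference in decibels between speech and background-noise RMS amplitudes."""
    return 20 * math.log10(speech_rms / noise_rms)

# Illustrative RMS amplitudes, e.g. measured over separate segments of a recording.
speech, noise = 2000.0, 150.0
gap = db_difference(speech, noise)
print(f"{gap:.1f} dB")   # how far the speech sits above the noise floor
print(gap >= 20)         # does it meet the 20 dB guideline?
```

Most audio editors (e.g. Audacity) will report these levels directly, but the arithmetic above is all the guideline amounts to.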
Lip reading
If you are on camera and/or MS Teams and addressing your remote audience, take time to consider those who might lip read. Speak at your normal pace and face the camera. Whilst live captions are available in MS Teams, they’re not 100% accurate.
It would also be prudent to provide teaching content via Canvas prior to teaching as this allows audience members time to prepare and set up the resources to suit themselves during a live teaching session. The audience can decide what’s important for them to display on screen and MS Teams does allow for the speaker/slides to be displayed as required by the user (student).
Following on from this, let’s have a look at Assistive Technologies (ATs) to help audience members with hearing impairment.
Assistive technologies available
What are Assistive Technologies (AT)? These are technologies specifically designed to assist people living with disability. AT covers a vast number of technologies for all sorts of assistance; we will list a few we have available at Queen’s specifically for audience members with hearing impairment. These include:
- Speech to Text
Whilst most computing devices currently have a range of accessibility aids built in, we’ve provided a few from both Microsoft & Apple which can be used by audience members with hearing impairment.
Microsoft has added many accessibility features to its software applications. These include:
- Microsoft accessibility (overview)
- Captions (MS Teams)
- Speech to Text / Dictate
Apple has a number of accessibility features built in:
- Apple accessibility (overview)
Other downloadable apps include:
- InterpreterNow (sign language)
- SignVideo (sign language)
- Live Transcribe and Sound Notifications (captioning)
- Live Captions (captioning)
- RogerVoice (captioning)
- TextHear (captioning)
InterpreterNow provides a free, real-time interpretation service for the Deaf community in healthcare settings.
This remote interpreting service for British Sign Language (BSL) and Irish Sign Language (ISL) users in Northern Ireland was introduced to give the Deaf community access to NHS 111 and all non-emergency health and social care services during the COVID-19 pandemic.
SignVideo is a subscription-based, real-time Video Relay Interpreting Service (VRS) providing fully qualified British Sign Language Interpreters on demand (SignVideo webpage).
Live Transcribe and Sound Notifications
Google makes this app, and it is available on Android smartphones.
RogerVoice
Hearing difficulties should not prevent you from enjoying phone calls. Simply read the live transcriptions and you’ll never miss out again (RogerVoice webpage).
To give some assistance with improving digital accessibility when designing content, we’d like to re-share some earlier blog posts:
- Visual Information
- Colour Alternatives
- Using MS PowerPoint
- Using MS Word
Whilst there are many services, assistive technologies and Apps available to assist people living with D/deafness and hearing impairment, we cannot list everything here. There are some great resources which you can access at AT Hive which are broken down into specific categories.
We hope you have enjoyed this blog post and that it helps you become more aware of the needs of audience members when creating content, and of the technologies available to assist.
In our next blog post, we will be looking at audience members living with colour blindness.
Remember, the DigiKnow blog posts are released at noon on a Monday.
Please do join us then to learn more and don’t forget to follow us on Twitter: @MDBSelearn.