
Happy new year everyone. Recently while scrolling Facebook I came across a post by Deaf Geeky. I love her posts, she provides really clear information about technology and how it can benefit people who are Deaf and hard of hearing. You can see the video below. It is basically an example of Artificial Intelligence (AI) and how it can reproduce sign language. It is getting increasingly accurate. Deaf Geeky urges caution and safeguarding in regard to AI, which you will see later. Nevertheless, what it can do is impressive. I will discuss this more later in the article. Note, the video is in ASL, but Deaf Geeky has a way of communicating that is brilliantly clear and entertaining.
Very recently I had a discussion with a professional captioner. We were discussing the pros and cons of AI automatic captioning. As I do, I gave my two cents worth. I basically stated that I was a fan of AI automatic captioning. For me it is the immediacy of the access. I want access here and now! I don’t want to worry about bookings or availability, I just want instant access! It is a bit selfish, but wanting what Ablebods get every day, without fear or favour, is a pretty natural thing.
Several themes came up through the discussion with the captioner. Job losses for professionals were one, and this is sadly already happening. The quality and accuracy of captioning was another. Also raised was how Deaf and hard of hearing people are expected to put up with sub-quality AI captioning so that big corporations like Fox can save money. All valid arguments. Still, I want immediate access. HERE AND NOW! AI captioning offers that to me.
Over the years times have changed. It started with the National Relay Service. Then email came. The godsend of SMS via mobiles came. It was great. Until recently, despite these advances, I was still heavily reliant on interpreting at work, for meetings and for day-to-day things like medical appointments. Captioning wasn't even available. But these early advances in access allowed me to progress in my career and become relatively independent. Even so, until more recent times the need for a third person to facilitate communication was very prominent.
I look back and wonder how I survived back then. I look at all the options that I have now. Video relay interpreting from anywhere in Australia. Live remote captioning from anywhere in Australia. I have automatic captioning by AI. I have TEAMS or Zoom or Google Meet – all with inbuilt captioning. Let’s face it, I am spoilt for choice. If I so choose, I can also access Convo.
So with all these choices what do I prefer? Purely and selfishly, I prefer automatic captioning. There are two reasons for this:
- It is immediate and I have heaps of cost effective programs to choose from.
- It gives me access in my first language, which is English.
I am at the point now where I rarely book interpreters or captioning unless I have a really big meeting that requires fast-paced interaction. I have the privilege of reasonable speech that makes this possible. I am fully aware that not every Deaf and hard of hearing person is in the same boat. Auslan interpreters are still very much needed and demand for them is still through the roof. But as I see it, me using interpreters less makes Auslan interpreters more available to those who need them.
In my day-to-day life I use AI automatic captioning to make phone calls. I can do this on Android or iPhone. Have a look at the video below that shows the accuracy of it:
With phone captioning I access Telehealth. I arranged my recent home purchase and dealt with the bank using it. I converse with my mother. I make work calls. The independence it has given me is staggering. I set up the call by telling the person at the other end that there will be a slight lag in my response due to a small captioning delay. I would encourage people to do this because the lag sometimes throws the people at the other end. Those with better hearing than me just use the captioning as a prompt for parts of the conversation that they didn't hear well. It's a far cry from the days when I called the NRS and hoped to god that the line didn't drop out and that there was an available Relay Officer.
I use AI captioning in social situations. For example, at Christmas my mother-in-law visited. I turned on my trusty Live Transcribe app and learnt about her involvement in the local dance group: how she was responsible for arranging the music for the sessions, about her walking for fitness and the newly built hothouse in the garden. In years gone by I would have missed all of this.
The Live Transcribe that I use is on iPhone with the elephant logo. Many people have their own preference, and I encourage people to find the app that works for them. Even with my deaf accent this one is incredibly accurate. Have a look …
But what about sign language? As you saw from the video at the start of this article, there is still a long way to go. That said, I recall that we pooh-poohed AI captioning many years ago, especially when it first came out. We said it would not cope with accents. We said it would not differentiate homophones like sale and sail. When YouTube introduced it, some of the errors were hilarious. Look at it now! Will AI sign language develop in the same way?
Have a look at this ASL translation service, Kara.
Here is another from British company Signapse.
I know I am going to upset some people here. However, as I see it there are endless possibilities with this, and it is only going to get better with time and as the AI memory grows. Currently, you can type in a phrase and within a few seconds it comes up with a translation. With Signapse the maximum allowed on the trial page was twenty words. It did a fair translation of “I wonder if sign language translation through an app on our phones will instantly translate like AI captioning.” in 41 seconds.
Obviously, this is too slow to be used in the same way as I use automatic captioning. But who is to say that it will not get faster and become an app on our phones where we can type or speak and the translation is instant, in the same way that automatic captions are now?
What about the other way around? Will you be able to sign into your device and have an accurate and fast voice translation? I wonder how far we are from that? I really would put nothing past this technology. Perhaps we will see the day where it is so good that you can walk into your GP appointment, set up your device, sign to it and it will voice for you. Your Doctor then speaks and it provides you with a sign language translation. Far fetched? Who knows!
Imagine switching on your TV and having the option of AI Auslan translation rather than captioning. I know I am dreaming but, as I said, AI is developing in such a way that almost nothing really surprises me now.
A word of caution though. Like any solution, it will not suit everyone. There will always be people for whom the solution creates more barriers than it removes. Indeed, we have already seen this with the use of AI technology in employment recruitment and how it excludes a whole host of people, including Deaf people and many people with autism, from the process.
As these AI solutions evolve we need to remain vigilant to ensure people are not disadvantaged and excluded. Indeed, there are many inherent dangers with AI that go way beyond the fact that I demand instant access. We certainly need safeguards and strong regulations to protect us all. But where it is really beneficial, do we need to be more accepting of AI? Do we need to embrace it? Is resistance futile?
I'll end this with Deaf Geeky's words of caution. As you can see, AI can be, and is, scary!!