Lipreading is hard when the sound doesn’t sync with lip movements. Another problem is that some videos look pixelated or blurry. These factors together create a frustrating experience.

For me, listening is more important than speaking. Listening to video calls requires me to work harder to hear than the average person. My brain has to convert lip movement and sound into sentences. You’ve probably seen the studies on multitasking: most of us become less efficient at each task. That’s what happens to me on a video call.

The Turning Point

A client sent an invitation to a friendly lunch video conference. At the very least, I could see their faces. Just before the call, Ann Marie Beebout tells me she’s going to caption the call. She researched it and found a way to do it with Google Slides.

Knowing when to speak up in a group video call is hard. One-on-one video calls are best, of course. The more attendees there are, the harder it is to follow, even with live subtitles from an automatic captioning tool.

Automatic Captioning Guidelines for Video Calls

None of the free automatic captioning tools is perfect. Still, a couple tend to work better than others. Live transcription while on a video call is a wholly different experience than simply transcribing a call after the fact. The guidelines for automatically captioning a video call are also different from the captioning guidelines for videos. Based on my experience as someone who depends on captions to hear, I’ve created these automatic captioning guidelines for video calls. These determine the effectiveness of the tool.

Readability

This refers to the ability to read the captions, not the actual content of the captions. Readability has three components: size, format (color), and scrolling. This factor quickly knocked some apps out of contention. Nonetheless, I tried the apps because I wanted to give them a fair chance. Where possible, I placed the captions below the video, which brings them closer to the lips. I constantly go back and forth between the two. (Covered in the next item: Caption Placement.)

You know how most captions contain a black-ish background with white-ish text? Yup, this combination also works well for live video calls. I’ve done many experiments using feedback from people including those with color blindness, dyslexia, and ADHD.

2. Background: #242424 (slightly off-black)

Another factor is the scrolling of the text. Does it scroll on its own, or do I have to play with it to keep it in place? Does it keep disappearing or jumping?

Caption Placement

I asked people if they prefer captions on the top or bottom of videos. At 98 percent, almost everyone picked the bottom. One of the biggest reasons is that it puts the captions closest to the lips. This is especially true in video conferencing. To follow a conversation, I depend on reading lips, my cochlear implant, and the captions. Remember that part at the beginning about my brain multitasking to listen (in my own way)? That’s why the placement of the captions matters. See the section on eye contact for more information.

Perhaps a compromise is to give users a choice on caption placement. The key is that the captions need to be part of the app. In other words, when I select the video app, the captions will be right there. I won’t have to work to bring both the captions and the video app back into view. Some people suggested using a phone for the captions. This doesn’t work well because of caption placement. As such, I miss too much information trying to read the captions on the phone during the call. Remember that automatic caption accuracy is far from perfect.
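The white-ish-text-on-black-ish-background combination discussed above works largely because of its contrast. As a rough illustration (not part of the original article), here is a short Python sketch that computes the WCAG 2.x contrast ratio against the #242424 background; the #FFFFFF text color is an assumed stand-in, since only the background value survives in the text.

```python
# WCAG 2.x contrast-ratio check for caption colors.
# Assumption: pure white (#FFFFFF) caption text; the article only
# specifies the #242424 background.

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB color, per WCAG 2.x."""
    hex_color = hex_color.lstrip("#")
    channels = []
    for i in (0, 2, 4):
        c = int(hex_color[i:i + 2], 16) / 255
        # Linearize the sRGB channel value.
        channels.append(c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4)
    r, g, b = channels
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio between two colors: (lighter + 0.05) / (darker + 0.05)."""
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio("#FFFFFF", "#242424")
print(f"{ratio:.1f}:1")  # roughly 15.5:1, well above the 4.5:1 WCAG AA minimum
```

The slightly off-black background keeps the ratio comfortably past even the 7:1 AAA threshold while being a little gentler on the eyes than pure black, which is one plausible reason this pairing tested well.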