Do It Myself Blog – Glenda Watson Hyatt

Motivational Speaker

From Speech Impairment to Motivational Speaker: How I Create My Talking PowerPoint Presentations

Filed under: Motivation — by Glenda at 3:16 pm on Wednesday, December 4, 2013

Glenda presenting at the Cerebral Palsy Association's AGM

People are often puzzled by how I can be a motivational speaker when I have such a pronounced speech impairment. A fair puzzlement, indeed.

My career choice is largely thanks to technology. Because of technology, I am able to convert text into synthesized speech, which I then embed in my PowerPoint presentation, along with scrolling captions and images.

However, the process is not for the faint of heart or the technophobe. For the technophile who likes an ingenious mashup, here is a behind-the-scenes look at how I created my most recent PowerPoint presentation “Go Beyond: Stare Your Fear in the Face and Go for It!”

Writing and Editing

The process begins with writing my presentation in Microsoft Word. Typing with only my left thumb is slow; using the WordQ software for word prediction and completion saves me keystrokes.

However, when I am in my writing groove, I either keep typing and lose the benefit of having word prediction or I constantly look up at the word prediction box on my computer screen and lose the flow of words.

Here’s where using my original iPad with the free (yet no longer available) DisplayLink app as a second computer screen comes in handy. I drag the word prediction box over to the second screen and place the iPad on my lap, within the same view as my keyboard, which makes writing a little more comfortable.

Word prediction box on iPad on my lap

Time: 20.75 hours

Chunking Text

The scrolling captions, for the benefit of audience members who are hearing impaired, are actually text boxes stacked above each PowerPoint slide. Motion paths (the green and red arrows in the image below) move the captions down to a spot along the top of the slide when I hit the Space Bar while presenting. Each slide has 15 captions; an arbitrary number that can easily be decreased on a slide, but not easily increased.

A PowerPoint slide with caption boxes stacked above

Each caption holds approximately a line and a half of text from Microsoft Word.

Once I have written my presentation, I break the text into slides and captions. Captions are identified by the format Slide X-Y – where X is the slide number and Y is the caption number – which becomes important in later steps.

My written script divided into slides and captions

Some slides end up having fewer than 15 captions and some captions are short, depending on natural breaks in the content and where I want slightly longer pauses. This is one of the few ways I can control the speed of delivery.

Time: 2.5 hours

Copying the Captions

At this point, my ever-patient husband Darrell copies the captions from Word and pastes them into the corresponding caption boxes in PowerPoint. He also saves each caption as a separate text file, using the structure Slide X-Y as the filename.
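For fellow technophiles: this copy-and-save step could, in principle, be scripted. Here is a minimal, hypothetical Python sketch (not what we actually use; the function names are my own invention), assuming the Word script has been exported to plain text with each caption preceded by its Slide X-Y label on its own line:

```python
import re
from pathlib import Path

def split_captions(script_text: str) -> dict[str, str]:
    """Split a script into captions keyed by their 'Slide X-Y' label.

    Assumes each caption is introduced by a line such as 'Slide 3-7'
    (slide 3, caption 7), matching the naming scheme used for the
    text files and, later, the WAV files.
    """
    captions: dict[str, str] = {}
    label = None
    body: list[str] = []
    for line in script_text.splitlines():
        if re.fullmatch(r"Slide \d+-\d+", line.strip()):
            if label:
                captions[label] = "\n".join(body).strip()
            label = line.strip()
            body = []
        elif label:
            body.append(line)
    if label:
        captions[label] = "\n".join(body).strip()
    return captions

def write_caption_files(script_text: str, out_dir: str) -> list[Path]:
    """Write each caption to its own text file named after its label."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for label, text in split_captions(script_text).items():
        path = out / f"{label}.txt"
        path.write_text(text, encoding="utf-8")
        paths.append(path)
    return paths
```

Each resulting file (Slide 1-1.txt, Slide 1-2.txt, and so on) would then be ready for the text-to-speech step that follows.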

Time: 4.33 hours

Kate-izing the Text

Next comes converting the text into speech with the software TextAloud and the synthesized voice known as Kate. One by one, I open each text file and listen to how Kate reads it. Sometimes some tweaking of the pronunciation is necessary; for example, is “read” meant to be spoken as “reed” or “red” in that instance?

Screenshot of TextAloud software

Once each one is saved as a WAV file (the only format compatible with PowerPoint), I link the audio file to the appropriate caption via the Animation Pane in PowerPoint. Here’s where the filename structure Slide X-Y comes in handy.

Animation Effects dialog box in PowerPoint

Time: 5.25 hours

Creating, Adding and Layering Images

For the most part, I use my own images rather than stock ones in my presentations. Finding them, and then cropping them and adding arrows and such (as needed), takes time – albeit fun time.

The tricky part is the layering of the images. The slide below has four images layered upon one another, plus text boxes and arrows to highlight details. All of these are inserted between the appearance of the captions via the Animation Pane on the right.

Slide with captions and open Animation Pane

Getting the order and the timing right for all of these moving bits is when I reach for the chocolate; the darker, the better.

Time: 17.25 hours

Testing, Tweaking and Practicing

Now that the presentation is built, I can see how it looks and sounds as a whole. I make revisions, adjustments and corrections as needed. Changing one word means redoing the audio file, editing the caption and re-linking the audio file to the caption. It all takes time, but it is worth it in the end.

With this one presentation, I ran out of time before I was 100% happy with the end product. No one knew except me.

Time: 3 hours

After 53.08 hours, 16 slides, 163 audio files, 163 captions, 163 motion paths, 38 images and numerous arrows, text boxes and accessories, I have a 30-minute presentation. Whatever it takes to get the job done!
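For the number-crunchers: the per-section times quoted above really do add up to that total. A quick Python check:

```python
# Per-section times (in hours) quoted in the post.
section_hours = {
    "Writing and Editing": 20.75,
    "Chunking Text": 2.5,
    "Copying the Captions": 4.33,
    "Kate-izing the Text": 5.25,
    "Creating, Adding and Layering Images": 17.25,
    "Testing, Tweaking and Practicing": 3.0,
}

total = round(sum(section_hours.values()), 2)
print(total)  # 53.08
```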

Here is a brief clip from “Go Beyond: Stare Your Fear in the Face and Go for It!”:

(Transcript is available here.)

To have me share the entire presentation with your group, your organization or at your event, please contact me.

If you enjoyed this post, consider buying me a chai tea latte. Thanks kindly.


I Can Communicate, But Is My Voice Being Fully Heard?

Filed under: Living with a disability — by Glenda at 7:00 pm on Thursday, March 21, 2013

Glenda using her iPad

Reading Robert Hummel-Hudson’s blog post Finding Her Own Voice has me thinking about the difference between “voice” and “communicate”. (I wonder how many people have ever sat down to really consider the difference between these two terms that might appear synonymous upon first ponder.)

Text-to-speech devices enable individuals to communicate, but are our voices fully heard? How can we reflect panic, softness or passion with these devices?

In my pondering, I recalled a moment at last summer’s ISAAC conference (the international conference on augmentative and alternative communication). One afternoon I attended a Town Hall, which had a somewhat futuristic-sounding vibe because only people using AAC were allowed to speak. The talkies needed permission to speak.

At one point, I needed to swallow a giggle after an abrupt “No” came from a robotic-sounding voice somewhere in the dimly lit auditorium, in response to what the moderator had said from the stage. A few moments later came a response from a somewhat similar-sounding voice elsewhere in the room. The slow-paced conversation continued between the similar-sounding robotic voices.

With spoken voices, the individual speaking can be identified and much information can be garnered from the sound of the voice: the speaker’s rough age, usually the gender and ethnicity, as well as the speaker’s emotional state and such.

However, with these synthesized voices, most of this information cannot be determined from the sound alone. These voices sound so alike.

This is one reason why, a few years ago, I was immediately drawn to NeoSpeech’s Kate, who I use in my presentations and videos. Kate’s voice is different, distinct; dare I say, even sexy. It was love at first sound byte!

Yet, Kate does have her limitations. When I am creating a presentation, part of the process is what I call “kate-izing”: tweaking her pronunciation to be as correct as possible, e.g., is “read” to be spoken as “reed” or “red”? Oftentimes the tweaks are fairly straightforward, but there are hilarious moments while I, with a significant speech impairment, attempt to correct the pronunciation of a synthesized voice. It feels like high tech speech therapy!

The tweaking of her pronunciation is relatively easy; the conveying of emotion is what I have yet to make her communicate. The excitement. The passion. The rant.

I acknowledge that this is one of my challenges as I move forward with my motivational speaking. I will need to rely even more heavily on the right choice of words rather than on tone and inflection to fully communicate the message I am aiming to get across. Yes,  I can also use my body language and facial expressions, but, with my cerebral palsy, that is not always under my full control either. It will be a learning process with much experimenting to find an effective way to use my voice fully.

An interesting ponderment, isn’t it?


To keep up with my adventures, musings and insights, be sure to subscribe to DoItMyselfBlog.com.


Life with a Speech Impairment: A Toolbox of Communication Methods Required

Filed under: Living with a disability — by Glenda at 7:33 pm on Thursday, March 14, 2013

So…how do I communicate when I have a significant speech impairment?

It really depends upon the situation and degree of familiarity the other individual has with my Glenda-ish.

Allow me to explain.

Phone calls with individuals without any experience in Glenda-ish

Text chat on Skype

In the last two weeks, the need arose for two phone calls with people not indoctrinated into my unique dialect. It is difficult for people to understand that, yes, I am a motivational speaker, yet chatting on the phone is not possible – until they master Glenda-ish.

Thank goodness for Skype!

I text chat while the other individual talks. Or, we both text, which results in a complete record of our conversation. There is no need to take notes. Yes!

Meeting with friends still learning Glenda-ish

Glenda and Avril next to a colourful dragon lantern

When my friend Avril and I spent a wonderful afternoon at the Vancouver Art Gallery and then wandered around the Chinese New Year festivities, I spoke a few words, which she was fairly good at deciphering.

Once we had ordered our award-winning gelato – my choices indicated by saying “two” or “four” (from the top on the posted menu) – and were sitting at a table, I whipped out my iPad to use the keyboard with word prediction in Proloquo2Go. That allowed for a deeper and more equal conversation.

Glenda Watson Hyatt and Karen Putz

A few weeks later, when my friend Karen from Chicago came into town for an all-day workshop the following day, I had the pleasure of greeting her at the airport and then going for lunch at Steamworks right downtown.

With Karen being Deaf, another layer of communication is added to the mix. Because using my iPad on the SkyTrain is not overly wise, I pulled out a communication skill I learned many, many moons ago in Brownies: finger spelling! It did the trick quite nicely.

Likewise, a couple of years ago when I met my friend Jennison, his blindness required yet another layer of communication since he couldn’t see what I was typing on my iPad. Thankfully the Proloquo2Go app has a Speak button. Jennison listened to what I had typed. We proceeded with an easy flowing conversation.

Meeting with the Master

After seeing Karen to her hotel, I zipped next door to the Metrotown Mall to find an accessible washroom. As it was only mid-afternoon, I had the urge to ask Darrell if he would like to meet for coffee at our Tim Horton’s.

But I don’t have a cell phone. Not a problem. I whipped into Chapters Bookstore and parked close enough to the Starbucks area to borrow their wifi. Using the Skype app on my iPad, I texted my husband and arranged to meet him in half an hour.

Sitting at Tim’s with our cafe mochas in hand, we talked for an hour or so, which isn’t unusual for us, without any hiccups in communication, except for the “men are from Mars, women are from Venus” moments. I cherish the conversations we still have, after nearly fifteen years of marriage.

Glenda Watson Hyatt and Darrell Hyatt

For me, having a significant speech impairment means having a toolbox of various communication methods that I can mash together and switch out in a fluid manner, depending upon the situation and the needs of the moment. It truly is that simple.



From a Speech Impairment to a Motivational Speaker: How Did I Get Here?

Filed under: Motivation — by Glenda at 2:53 pm on Tuesday, February 12, 2013

Glenda Watson Hyatt speaking at Open Web Camp IV
(Photo credit: Dirk Ginader)

While sitting at the airport gate last July, waiting to board the plane to Portland and then on to San Jose where I was scheduled to deliver two presentations on web accessibility, I wondered, "How did I, an individual with a significant speech impairment and a physical disability, get here?"

I pulled out my iPad and made some notes, which I found a few days ago.

I asked myself again: how DID I get there – sitting at the gate, waiting to board a plane to the States to give two presentations?

I flashed back to one brief session that Mom and I spent with Fred, the guidance counsellor, during my last year at high school. Thumbing through the various university calendars and brochures, the Certified General Accountant program sounded somewhat appealing. I was good at math and I could take the courses via correspondence, which would be perfect because my family was moving to isolated rural living once I graduated. I could then establish a business and work from home as a CGA. That was the extent of my career planning. Seriously.

I did one year of the two-year program, but I slowly realized that I wanted more in life; something more than sitting alone in my bedroom, working on boring accounting assignments. (This was long before the Internet and life as I know it today.)

One thing led to another and I found myself living on my own in a one-bedroom apartment in residence at Simon Fraser University atop Burnaby Mountain. After taking a course or two per semester for seven years straight, I graduated with my Bachelor of Arts (BA) with a major in psychology and a minor in Communications.

A minor in Communications. That is somewhat related to giving presentations; kind of. But that still didn’t fully explain how I was about to board a plane on my way to two speaking gigs.

Following a few twists and turns after graduating with my BA, I found myself giving the occasional presentation. However this was long before the text-to-speech software that I use today. Presentations were participatory: audience members took turns reading aloud text on the PowerPoint slides.

When presenting at one local conference, the laptop refused to communicate with the LCD projector. As a thinking-on-my-feet solution, I had attendees come up to the front, one at a time, to read aloud what was on the screen. Now that is a highly participatory session! For my next presentation I prepared acetate sheets for the overhead projector, as a backup plan. But I digress.

Life continued meandering until another twist came in April, 2005. I share this excerpt from my autobiography I’ll Do It Myself:

I was asked to speak at the Social Planning and Research Council of British Columbia’s (SPARC BC) “Beyond the Obvious: Exploring the Accessible Community Dialogue”. My initial thought was But I don’t give speeches. I can’t. Since I was raised without the word “can’t” in my vocabulary, that was a fleeting thought. I quickly turned my thought to How can I do this?

I had been using the free computer software ReadPlease for a couple of years to proofread my writing. ReadPlease reads aloud text that is copied into the program. I thought, Maybe I could put ReadPlease onto my laptop and have it read aloud my speech for me. I hesitantly agreed to speak. Unsure if the technology would work, I took a printed copy of the speech with me, in case I needed someone else to read it on my behalf.

Finally, it was my turn to take the stage. Being on stage alone for the first time in my life, with two hundred eyes staring at me, I wanted to run. But, I didn’t. I gave my speech. When I was done, I left the stage, trembling. I had given my first ever speech! And the technology worked!

Glenda delivering her first speech

Afterwards something amazing happened. For the rest of the day people actually came up to me and spoke with me. I was heard for the first time. I was no longer invisible, no longer silent. It was an amazing, unexplainable feeling that I would like to experience again. I would like to give more speeches. I would like to be heard again.

Since that moment, I have delivered several more presentations. Each time I was heard again; an experience that has yet to get old for me.

How did I get here?

By taking the less traveled road. For an individual with a significant speech impairment, being a motivational speaker is not the most obvious career choice.

By taking a deep breath, believing in myself and  saying “Yes, I can!” to something least expected from someone who does not speak clearly.

By figuring out the technology, with much assistance and support from my husband Darrell, to make it possible for me to travel this path.

By surrounding myself with people who will not let me fail; people who see beyond my disability and push me to become all that I can be.

In a snapshot, that is how I ended up waiting for a flight to San Jose. And, to be honest, that is how I hope to get to visit more places and to deliver many more presentations.

For this reason I am beyond excited to announce my new speaker site at GlendaWatsonHyatt.com.

By following along this path less traveled to be a motivational speaker, my intention is to encourage, to entice, to motivate you to move forward, to go for it, to strive for your potential and to live life more fully.

Please visit my speaker site for more information about this adventure. And, thank you for joining me in this amazing journey.



An Early Valentine’s Day Love Story

Filed under: Motivation — by Glenda at 10:49 pm on Thursday, February 7, 2013

Two weeks ago the Marketing Manager at Reality Controls approached me about their application control:mapper, "which is helping to improve the quality of life of people with disabilities through motion and voice control technology. Desktop computers can be controlled by voice, arm, head, torso and feet motions." I was intrigued and, because they are in Vancouver, I was interested in seeing it in action. However, my Faith kitty was unwell and I didn’t dare leave her alone.

Today, Faith was well enough to leave. Darrell came with me. I had asked earlier if he could, because I thought he would be interested in trying the application too since he is such a geek.

The control:mapper definitely is intriguing and will make gaming (and other applications) more accessible to people with motor disabilities.

I appreciated having Darrell with me because he was able to translate Glenda-ish. Even though I have my iPad for communication, having my husband translate is still easier and more efficient. He also offered valuable ideas and insights to the development team.

Darrell and I ended up having lunch at McDonald’s for the sole reason that I was in DESPERATE NEED of an accessible washroom. (His pit stop earlier that morning was not an option for me. Enough said.)

He was dreading going back to the SkyTrain Station because of the seemingly steep hill en route; that is what happens with a lack of depth perception and spatial orientation. Working around his dread, we wheeled an extra 12-15 blocks to another station (with a familiar hill). It was a quasi spring day, so why not enjoy an unplanned road trip together?

Along the way, we happened across Purdy’s Chocolates Headquarters and Factory. We experienced the store, of course.

Give and take, and working as a team: that is how love works.

Happy Valentine’s, a little early!

Glenda sitting outside of Purdy's Chocolates

“When life takes you past a chocolate factory, ENTER!”
