Speaking of Literacy

Episode 1: An Interview With Dr. Kristi Hendrickson

Iowa Reading Research Center Season 1 Episode 1

This episode features Dr. Kristi Hendrickson, assistant professor in the Department of Communication Sciences and Disorders at the University of Iowa and director of the UI Psycholinguistics Lab. Her research focuses on how listeners and readers recognize spoken and written words, how they connect words to meaning, and how they use words earlier in a sentence to predict upcoming words. She and her research staff examine this process in a range of populations (children and adults, individuals with hearing loss, and dual-language learners). Eye-tracking and EEG (electroencephalogram) are the tools of these investigations.

Learn More

  • UI Psycholinguistics Lab: The primary area of research in Hendrickson’s lab is spoken and written language processing: how listeners and readers recognize the words they hear and read, how the meanings of words are accessed, and how individuals predict upcoming words during incremental sentence processing. Learn more about Hendrickson’s current projects and recent publications on the lab’s website.
  • Dr. Hendrickson’s research: Access Dr. Hendrickson’s published works on her Google Scholar profile. 
  • Wendell Johnson Clinic: Learn more about the University of Iowa’s speech-language pathology resources and department at the Wendell Johnson Speech and Hearing Center.  
  • Dr. McMurray’s research:  Access Dr. McMurray’s published works on his Google Scholar profile.
  • Dr. Si On Yoon’s research: Access Dr. Yoon’s published works on her Google Scholar profile.
  • A Novel Idea: The History of the Science of Reading: For more information about the science of reading, listen to the IRRC’s podcast, A Novel Idea: The History of the Science of Reading. 

Episode Transcript and Sources
https://irrc.education.uiowa.edu/transcript-and-sources-speaking-literacy-episode-1   

Speaking of Literacy Website
https://irrc.education.uiowa.edu/speaking-literacy


Discover more from the Iowa Reading Research Center!
Website: irrc.education.uiowa.edu
Facebook: @iowareadingresearchcenter
X: @IAReading
Instagram: @iowareadingresearchcenter

Grace: You’re listening to Speaking of Literacy, the Iowa Reading Research Center’s podcast on the field of speech-language pathology and its impact on literacy development.

Natalie: This is a podcast intended for educators, students, and anyone who works with kids and wants to better understand the role of professionals in the speech, language, and hearing sciences and how their work supports the teaching of literacy and reading skills.

Grace: We are your hosts, Grace Cacini…

Natalie: …and Natalie Schloss.

Grace: We are undergraduate students at the University of Iowa studying communication sciences and disorders and are the assistive technology coordinators at the Iowa Reading Research Center. We have come together to educate others about this profession and present the current research and how it relates to reading and writing. 

Natalie: We welcome everyone to join us on this journey as we uncover the facts and outline the roles and responsibilities speech-language pathologists have in the reading development of the individuals they work with. Throughout this podcast, we will often refer to speech-language pathologists as SLPs. You can find definitions for terms like this, as well as links to things we mention in the episode in our listening companion. 

Grace: Welcome, listeners. We are so excited to start this podcast talking about the role of a speech-language pathologist in reading and writing. Natalie and I are both here for this first episode, where we plan to hear about the clinical and research roles of an SLP from a member of the University of Iowa’s very own communication sciences and disorders department. Without further ado, we're pleased to welcome the very first guest on this podcast, Dr. Kristi Hendrickson.

Kristi: Thank you very much for having me.

Natalie: Could you introduce yourself and your role for us?

Kristi: Yeah, so I am an assistant professor in the Department of Communication Sciences and Disorders and the director of the Psycholinguistics Lab, and we do research on spoken and written language processing. I am also a certified speech-language pathologist. So, I have my clinical education and practice, as well as my PhD, and I do research in the department. In addition to doing research, I do some teaching of our undergraduate and our graduate students—both the master's level students and the PhD students.

Grace: Well, that's awesome. And what a nice segue—looking back at your past, starting with your bachelor's degree in psychology and political science, can you walk us through a little bit of your education and professional journey to where you are today in the field? 

Kristi: Yeah, so, I'm a first-generation college student. I went to the University of California, Davis. I'm originally from Berkeley, California. And I started my undergrad, and I was shocked and had no idea what I was doing. I was not really involved in anything besides playing volleyball and hanging out with friends. And then I graduated, and I didn't know what I wanted to do. And I took some classes on child development as part of my psychology degree. And so I went to that professor, and I said, “I was sort of interested in the stuff you were teaching.” And she happened to have just gotten a grant and needed some support. And she sort of offered me a job in her lab. And it was out of nowhere. I wasn't prepared. But that really did sort of change my life at that point. I worked as a paid research assistant for two years in her lab, working on a study looking at children with Fragile X syndrome, where we were looking at visual processing deficits—very low-level visual processing deficits. And through that experience, I met a lot of families. I met a lot of amazing children and learned a lot about visual processing. But I got really interested in the language piece of Fragile X syndrome as well. So Fragile X is the number one known genetic cause of autism. And kids with Fragile X syndrome have challenges with language as well. And so I kind of shifted gears from visual science to understanding more about language. And that's when I started working at UCLA as a research associate, doing some work on how children acquire phonemes—both children who are learning one language, but also children who are learning more than one language. And so we did some of those traditional experiments where—when do children hear the difference between different sounds? And then through that experience, I realized I really wanted to be in language. And I applied to a lot of PhD programs and only one in communication sciences and disorders—it was all in psychology. 
And I ended up going to the one in CSD; UC San Diego and San Diego State have this joint doctoral program. And I basically got certified as a speech-language pathologist as well as did my PhD sort of at the same time there. And then I came out to the University of Iowa and did my clinical fellowship year. So, a year practicing as a speech-language pathologist here in the Wendell Johnson Speech and Hearing Clinic. And then I went back across the river to psychological and brain sciences and did a postdoc with Bob McMurray. And that was sort of my first introduction to reading, writing, and literacy research. Before that, it had always been spoken language. My postdoc was my first introduction to doing research on reading. And then when I started my lab here in 2018, the first study that I ever did was a study on how much you are activating the sounds of words when you read them. And from then on, we've been slowly doing more and more research on literacy—on reading—both at the word level and the sentence level.

Grace: Wow, what an impressive journey. Now we're going to jump into the more language and literacy part of this podcast.

Natalie: Yeah, I'd be interested—could you explain a bit of how language acquisition, understanding vocabulary, and all of those language skills impact literacy? I think a lot of people who know about CSD and the work speech pathologists do understand that, but it's not as well known outside of that field.

Kristi: Yeah, sure. So, a speech-language pathologist—their scope of practice is in oral language and spoken language, but also reading. So, in the schools, we work with the reading specialists and the special educators because reading is actually built on a foundation of spoken language, right? So spoken language is acquired earlier, and written language and reading have to be explicitly taught, right? You don't just kind of acquire reading by someone reading to you. You have to learn the grapheme-phoneme correspondence: letters go with sounds and these sorts of things. So, speech-language pathologists really are the spoken language experts, and reading is built on this bedrock of spoken language. In particular, you're mentioning two aspects that are really, really important to reading development, and that's phonology and, in particular, phonological awareness. So, this is what we call a metalinguistic skill. Metacognition means thinking about your own thinking. If you've ever been like, “oh, why did I have that thought?” that was a metacognitive moment. Metalinguistics is thinking about your own language. And so phonological awareness is a form of that metalinguistic skill where you're thinking about the sounds of your language. So some examples of phonological awareness skills are: if I give you a word like “cat,” I want you to say it one sound at a time, /k/ /ă/ /t/; or vice versa, where I give you the sounds one at a time, and I want you to blend them into a word; or I want you to say the word “sunflower” but without the “sun” part. So, manipulating the sounds and syllables of words is really, really predictive of later reading ability. So, we see those kids who have higher phonological awareness skills when they're younger have better reading skills later, and that's because sounds are so important when we're learning to read, right? Vocabulary is also really, really important. 
So, the more words you know or have concepts for, the better you are with reading comprehension. We know that individuals with larger vocabularies tend to have higher abilities in reading comprehension skill. So those are kind of the bedrock of the speech-language pathologist scope of practice—is really understanding phonological awareness, really understanding how vocabulary contributes to reading. And so, oftentimes, when we're working in school settings, those are the skills that we're targeting for kids who have difficulties with reading.

Grace: So now, with that understanding of the important role phonological awareness and vocabulary development have on the ability to read, what recommendations should our parent audience be aware of in the ways they can help their child outside of school?

Kristi: Yeah. So, I have recently very much changed my view on this. If you were to ask me ten years ago, I would say, “Reading to your child is so important—you need to read books.” I've changed for two reasons. First, I've become more culturally responsive, and I have more cultural humility than I had before. Shared reading with your child is not a part of every culture. It's a very Western thing to do, right? We call it dialogic reading, where you're sharing reading, asking questions about reading. So, I would never tell a parent, “You have to read to your child” anymore because that's me putting my culture on another individual. We know that shared reading for certain individuals in certain cultures where the research has been done can contribute to vocabulary development and such, right? But there's also other things that we can do. So, the other reason that I've changed my perspective is because now I have an eleven-month-old. Before, I would advise—I’d teach students how to be clinicians and say, “You can send this home program home with the parent.” It's really difficult to be a parent, right? You don't have a lot of time. The stress that is involved in having to add something to your plate—it’s so monumental. So now I think about it more as, “What do you do at home? What are things that you do at home?” “Oh, well, when we're at dinner we tell stories.” And maybe they're part of a culture that's really narrative rich. Great. Telling stories—we know that, like, fostering storytelling can help reading. So, when you're at dinner, continue to tell those stories—you’re doing it. That's great. So, it's more of affirming the stuff that they're doing in their daily life that's not going to add burden to them that we know underlies reading and writing development.

Grace: Yeah, I love that consideration of the role culture can play in reading development, and I'm sure the parents listening feel affirmed by the perspective you shared of commending what parents are already doing rather than adding more to their to-do lists.

Natalie: So how have the clinical and even personal experiences you touched on impacted your research and your teaching of future speech pathologists?

Kristi: Yeah, I'll focus on teaching, I guess, since we have talked about research a little bit. My background as a first-generation college student has really informed my teaching in particular. I remember being an undergrad in a class and just feeling like I didn't know anything, and I didn't know what I didn't know, and things seemed so mysterious. So, I really try to have a transparent teaching style where I make it very clear what the expectations are. When I was a student doing an assignment, I found myself spending so much time trying to interpret what the professor wanted me to do that I didn't have any time left to actually do well on the assignment. So, I break down assignments into pieces. So, instead of having a big term paper at the end of the semester—I've had this in the past—maybe a big term paper on the development of reading: sure, it's at the end, but every week we workshop little pieces of that assignment. And if students just show up and try, they get points—they're not graded for it. And then we work together over time, step by step, and then at the end, they write the paper, but by that point, I've seen their writing a million times, and it's a pretty quick grade with a strong rubric. So, I think my background as someone who felt lost a lot of my college years—like I didn't know what I didn't know—has led me to teach in a transparent style, which has been shown to be more equitable. Those students who need the most help get the most benefit—more than students who don't need as much help. And so, I've tried to incorporate that into my teaching. I just had a meeting with my teaching assistants for that class I do the writing in, and one of them had taken the class in the past. I'm like, “Please tell me all the things that I did poorly here,” because, from a student perspective, things are so very different.

Grace: Thank you for that input. That’s really valuable information, especially for educators listening. Now we’re going to look more at your research. We know one of your methodologies in your research is eye tracking. We'd love to hear a little bit more on how this works and what kinds of research questions you’re looking at when you are conducting these eye-tracking studies. 

Kristi: Yeah. So, the eye-tracking method, we primarily use that for asking questions about how individuals recognize words. And normally, this paradigm that I'm going to talk to you about has been used for spoken word recognition. So, you hear me say a word like “candle.” How do you know what those phonemes, those sounds are? How do you recognize the word form of “candle”? In my postdoc, I worked with Bob McMurray and others in the lab to develop a written word version of this so we can look at—okay, when you are reading a word, what are the processes, and then compare them across the modalities. And the paradigm is this: Participants see four images on a screen. And I'll give you a word example—the word “candle,” right? So, I should step back and say, this is what happens, and then we can kind of talk about how that's instantiated in the method. So, as you hear me say the word “candle,” it's unfolding over time, right? So, there are moments as the word unfolds over time where there's uncertainty about what the word will be. So, as it unfolds—let's slow down my speech dramatically—and I say, “ca—,” it could be “cat,” “candy,” “camera,” right? And listeners—you don't wait to the end of the word and go, “candle.” The moment you hear speech information, your brain activates words that are congruent with that information. And then over time, as the word unfolds—this is happening in milliseconds—you are sort of inhibiting those words that are not “candle.” So that's happening, like, five times a second. You're constantly doing that. As I'm talking, you're activating things, inhibiting, activating, inhibiting. And we know a lot about that process in spoken word recognition using eye tracking in a task called the visual world paradigm. So, we put four images on a screen, like a picture of a candle, a picture of candy, maybe a picture of a handle, and then some unrelated thing. 
And then, participants click a little dot in the middle of the screen, and then they hear the word “candle,” and we are tracking where they're looking on the screen, millisecond by millisecond. And what we see is that, as the word unfolds and they hear “ca—,” their looks are being directed both at the candle and the candy, and they're kind of competing. And then, as the word unfolds, they're recognizing that “handle” and “candle” match at the end—they're rhymes, and so they start to look there. So, we're using this paradigm. We have found that, as you're hearing a word, there's competition. A lot of words are activated, and they compete. And this has been shown for reading too, but with different methods and only indirectly. So, we developed a written word version. So, instead of hearing it, you click on the dot in the middle, and then you read the word. And even though the word is not unfolding over time—you can see the whole word at once, there's no uncertainty—this competition process is still happening. Now, it doesn't happen as much as for spoken words, but it still does happen, which is incredible because there's no ambiguity, there's no uncertainty. You look at the word “cat,” but you activate words that look similar to “cat” as well.

Grace: Thanks for that explanation. It’s amazing how, with eye tracking, we can see how people process spoken and written language over time.

Natalie: So, some research in your lab has focused on the role of phonology and orthography in word recognition—orthography being the written system of a language. Could you talk about some of your findings and the importance of phonological awareness for recognizing both novel and familiar words?

Kristi: Right. Yeah. So, in terms of… there's a debate. I'll go kind of all the way back. There's a debate about how to teach children how to read. There are phonics-based approaches—that's what I had growing up: “hooked on phonics worked for me.” We were taught phonics, which is understanding the grapheme-phoneme correspondence—that letters make these sounds—and really explicitly teaching the sounds. Now, we can compare that against whole-language-based approaches, which essentially try to teach reading—or at least decoding (decoding meaning word recognition)—the same ways we teach language. Just sort of give them the words, and they'll implicitly learn. There's no need to really focus on the sounds that much. And there's sort of an implicit assumption here. Major theories of reading single words suggest that there are these two routes that you were bringing up: familiar and unfamiliar words. When you have a familiar word, you just look at it, and you activate what word it is, and you don't need the phonology, you don't need the sounds. And then there's this indirect route, which, if I give you a word like “tbudaischultz,” you look at it, and you have to sound it out. So, you activate phonology. So, this is a famous model—the “dual-route model,” it’s called—where you have a direct and an indirect route to word recognition. And sort of implicit in these whole-language-based approaches is that, once the word is familiar, you don't activate phonology. It's not that important. So why do we have to keep teaching it? Research in our lab—and this was spearheaded by my PhD student, Jina Kim—found that, actually, when you're reading highly familiar words—and I'm talking about words like “cat,” “dog,” “candle,” “chair”—you are activating phonology. So, this is really strong evidence that teaching the sounds that correspond to letters is really, really important even when a word becomes familiar. 
And we did that with really interesting stimuli—we did it with anadromes. These are pairs of words where one spells the other backward, like “God” and “dog,” for instance. And we had these three conditions. We had a condition where both phonology and orthography were the same forward and back, like “God” and “dog.” And then we had a condition where the orthography is the same forward and back, but the phonology is not, like “leg” and “gel.” (The word “leg,” and if I turn it backwards, it’s “gel.”) And then we had another condition where the orthography—the words look nothing alike forward and back, but the phonology is the same forward and back. And that's something like “badge” and “jab,” right—those two. And what we find—if you remember that, when you read a word, you activate words that sound similar, and this is the case for anadromes—what we found was, in that condition like “God” and "dog,” where both orthography and phonology are telling you they're the same, you activate those pairs strongly. Interestingly, when you read a word like “jab,” your brain is activating “badge,” even though they don't look anything alike. And these are highly familiar words. And in the case of “leg” and “gel,” where they look identical forward and back, but the sounds are different, you don't activate them. So, this suggests that phonology means a lot. When you read a highly familiar word—even a word like “cat,” where you should be able to just go straight past phonology—adults don't, and kids [don’t]. So, we have a study of adults and middle schoolers, and both of them are activating phonology to a large extent.

Natalie: That is so interesting. And it makes me think: if we know phonological awareness and phonology are so important, and that they develop pretty young, along with learning to speak—and again, there's the connection between written and spoken language—can we look at really early intervention for students before they would even start learning to read—like at three, or two—who are having issues, maybe with phonology and understanding those phonemes, as really an intervention for future language development? I guess, how early can you start intervention for reading?

Kristi: Yeah, I mean, these pre-reading skills—like phonological awareness, vocabulary, anything with spoken language—there has been work to show that interventions for children before they start formal reading instruction actually have impacts on reading. So, for younger children, if you intervene and teach them skills in phonological awareness, it does transfer to reading. But as you get older, like in second grade, this relationship is not as strong. So, if you teach someone phonological awareness skills in second grade, it doesn't help as much. And that's because top-down skills like vocabulary knowledge matter more. So, you're right—early interventions for reading may not involve reading, or the visual side of reading, at all, but rather the underlying sound system. 

Grace: So, yeah, we know that getting children started on those pre-reading skills as young as possible is just so crucial. I would love to lead this into a little tricky question—I'm putting you on the spot—but for our parents out there looking for advice from someone who has that clinical and research background like you do, what would you feel comfortable sharing with them?

Kristi: What I would say is if parents are wanting to do more at home, although we've already talked about that—they're probably busy enough. Making that connection between sounds and letters is really very important, and it's something that you can't get around, and it has to be explicitly taught.

Grace: Awesome. And maybe one other thing off of that: in your experience with other SLPs and the school environment, for a parent that has their child working with an SLP in the schools, what do you wish they knew? I think a lot of families that send their child to speech still don't really understand what goes on during these sessions and classes, so it would help for them to have a better understanding of that background too.

Kristi: Yeah, so, when I did practice, I practiced mostly in California, and things can be a little bit different. But I would say an ideal situation in a school is that your child is being kept with their general education peers in their general education classroom. So back in the day, speech-language pathology looked much different. Well, first of all, language wasn't even in the title—it was the speech teacher, right, or the speech pathologist; now it's speech and language. And it used to look like someone coming to the class and taking a student out and going to the speech teaching room, and just the speech-language pathologist doing treatment that is kind of drill based with a kid. Now, what's happening—and this is based on the knowledge that it's best if children stay in their general education classrooms—is more and more speech-language pathologists are pushing into the classroom and co-teaching, right? So, in my practice, I rarely pulled children out and went into the speech room. I mostly pushed into the classroom. And so, we would do activities where we would have a rich focus in, let's say, phonological awareness. But all the kids were getting that. Now we push into the general education classroom, and we are doing stuff where we're having students work in small groups or work together to tell stories or have conversations. Or if it's something with phonological awareness, we're doing a whole-classroom push-in approach. And this has several advantages. One of them is it keeps the student in their general education classroom, and all kids could use more instruction on these types of topics. It's not just the kid who I'm serving—all of the kids could benefit. So, I think there's this sort of idea that the speech-language pathologist comes and pulls kids out. But actually, modern, state-of-the-art speech-language pathology looks more like a teacher teaching in the classroom. 
And this is also because a lot of the students who we work with have multiple goals. Maybe it's a reading goal, maybe it's a goal about interacting with their same-age peers. And those goals are really hard to target when you pull children away from their same-age peers, right? So, we try, to the extent that we can, to keep children in what's called their “least restrictive environment,” which is with their peers in the classroom.

Natalie: Right. And I think that was a great point, that every student can probably benefit from having more phonological awareness because we know just how important it is for reading and language just across the board.

Kristi: Right. Yeah. And teachers usually are excited to work with a speech-language pathologist because they're kind of the communication expert. And communication at the conversational level or in written language and reading is so important because we know that, by fourth grade or even before, children are really expected to transition from this learning-to-read phase to where they're, like, reading to learn science, they're reading to learn math. And so, it becomes this bottleneck for some students, where it can restrict their ability to learn other subjects as well. So, you're right. I mean, a lot of students can benefit from the goals that we might work on with students who are on our caseload.

Grace: For sure. We had one more point, if you'd like to speak to it: the work you've done with bilinguals—your research on vocabulary development and word recognition in bilingual individuals—and the perspective you'd share with anybody listening who works in a school.

Kristi: Yeah. So, we have a grant to look at children who are learning two languages. How are they negotiating those two languages as they hear words and as they read words? So as part of this project, we go to local dual language immersion programs around Iowa. The one that we've collaborated most with so far is in Pella, Iowa, where the curriculum is in both English and Spanish. This is a collaboration with Stephanie De Anda at the University of Oregon, who's also doing this. And we take our eye tracker into the school after school and on the weekends—a shout out to Mi Trinh, Héctor Sánchez Meléndez, Katlyn Bay, and Saloni Upadhyay, the students who have gone to Pella and done most of the testing for this—where we bring the eye tracker and we do a range of language assessments, both in English and Spanish. But what we're really interested in is, when a bilingual is reading or listening in one language, to what extent is that other language active? So, that competition process that I talked about—you hear the word “pencil,” and you might activate words that start the same in English, but you also might activate “perro,” which starts the same in Spanish. How much does that competition move across languages, essentially? And we've done this in undergrads at the University of Iowa. We've done this with middle schoolers who are learning English and Spanish, and we're doing a project on older adults, too—older adult bilinguals. This is a collaboration with Si On Yoon, who is at NYU. And what we're finding is, when you're listening in your non-dominant language, you're activating your dominant language quite a bit. And we found this in adults, and we found this in children. 
And interestingly, we're seeing less of that cross-language activation happening when reading. And we have some hypotheses as to why this is the case. Going in, I thought that, when you're listening in a language—like, if I'm listening in English, and I'm a bilingual English-Spanish speaker—there are a lot of cues in the signal that are telling me that I'm hearing English. So, for instance, even though we have the phoneme “b” or /b/ in English and Spanish, our vocal folds start to vibrate earlier for the Spanish “b.” There are these cross-linguistic differences, even though it's a “b” in both, whereas the letter “b” is the same in Spanish and English. So, our prediction was that, when you're reading in English, for instance, there's more ambiguity about the language because the letters look the same while the sounds don't. And so, you might get more of that cross-language activation. And that's not what we found. What we found is that you get more of the cross-language activation when you're listening. So, when you're communicating via spoken language, the two languages are interacting more than when you're reading. Our main idea of why this is the case is that spoken language unfolds over time, and there's more ambiguity, and that creates way more competition. With reading, there's less ambiguity, so there's less competition within language and across language. But we're trying to sort of tease that apart with future studies.

Natalie: I think it adds a whole other layer to understanding the link between speech and reading, especially for, say, a child who mostly speaks Spanish at home. Then once they get to school, they're given all these texts to read in English, and they're being taught how to read in English. How does that competition work when transferring phonological awareness from mostly Spanish, with some English, to reading solely in English?

Kristi: Yeah, you're bringing up a really important point: someone's dominance or proficiency in a language might change between listening and reading. So, what we do is give our participants language exposure questionnaires where they can give us a lot of information about their perceived proficiency, their use of a language, and when they were exposed to the language. Through that, we can determine which language they're dominant in. And it turns out that depends on modality. A lot of our participants, at least in the undergraduate study, said that they felt dominant in English when reading but very balanced when speaking, and that changes these competition processes. If you're pretty balanced, you may or may not show more competition, because there is no dominant language, right? We can see that because, in addition to looking at competition, we look at the speed with which they're processing words: how fast they're recognizing English and Spanish spoken words, and how fast they're recognizing English and Spanish written words. What we see is that their reported dominance changes how fast they're processing things. The participants reported that they were English dominant for reading, and it turns out they were faster at reading English words; they reported that they were balanced when listening, and they showed equal speed in recognizing English and Spanish spoken words. So that really does suggest that thinking about language proficiency, dominance, history, and modality (spoken or written) is really important in this type of research.

Natalie: I would be interested to know how this differs across languages. I know most of the research we have here in the US focuses on Spanish and English, because that's our most prominent bilingual population. But, for example, French and Mandarin are two very different languages. Does it matter what the languages are, or are the findings similar?

Kristi: It matters a lot. When I worked as a speech-language pathologist in the schools, I worked in a school that had a large proportion of English-Vietnamese bilingual children, and you see evidence of language transfer there, right? Vietnamese is a language with very few consonants at the ends of words; English has a lot of consonants at the ends of words. So, children who are learning both Vietnamese and English (dual-language learners) will show a lot of final consonant deletion in their English, and that's influence from the other language. It's so important as speech-language pathologists to understand how much two languages interact, because if a monolingual English child was deleting final sounds when they spoke, that could be indicative of challenges with speech-sound production, right? But not if it's influence from a second language. That's just evidence of being a natural bilingual and having your two languages interact. Another point you bring up is orthographic systems. We've toyed with the idea of doing these sorts of studies with character-based writing systems to see how much you activate phonology when you're reading. My PhD student, Jina Kim, is Korean, and we've thought about doing these same studies to see how much the orthographic system contributes to this activation of phonology.

Natalie: That's so fascinating. And it brings up, again, the importance of cultural competency for speech-language pathologists, teachers, really everybody, especially those working with children.

Kristi: It's the most important thing. It's the most important thing for a speech-language pathologist to understand different cultures and different language systems. You rarely work with a child who doesn't have individual differences in language experience that need to be taken into consideration. That's why it's so important to have cultural humility and to understand language systems outside of English, to have a pretty good understanding of their sound systems and their linguistic structure, because then you're able to have a more nuanced understanding of a child's language, reading, and writing abilities. Then you don't overdiagnose them with disorders, or perhaps underdiagnose them because you're just assuming that a difference is due to language transfer.

Grace: Yes, I think that's actually a great note to leave everyone with today because, sadly, we're out of time. 

Natalie: Dr. Hendrickson, we would like to thank you very much again for taking the time to chat and for sharing so much valuable information with us and our listeners today. Be sure to check out more about Dr. Hendrickson and her research at the Psycholinguistics Lab website, psycholinguistics.lab.uiowa.edu. 

Speaking of Literacy is a podcast from The Iowa Reading Research Center at the University of Iowa. It’s produced, edited, and mixed by Grace Cacini and Natalie Schloss, with support from Bailey Christensen. Expert review by Nina-Lorimor Easley, Lindsay Seydel, and Stephanie Edgren with additional review provided by Kate Will, Olivia Tonelli, and Sydney Smithgall. 

For further credits, including audio and music attribution, please see the link in the show notes. For definitions of key terms and links to research mentioned in the episode, check out our listening guide.

Visit us online at www.irrc.education.uiowa.edu for more information and additional literacy resources for educators and families. You can also follow us on X at @IAReading or on Instagram at @iowareadingresearchcenter.

If you want to help spread the word about Speaking of Literacy, subscribe, rate, and leave us a review wherever you get your podcasts. If you are interested in being a guest on the show, complete the survey in the show notes, or send us an email at irrc@uiowa.edu.
