Multilinguish Episode 1: Sexist Robots

Is artificial intelligence inherently sexist? In the first episode of our podcast, we dig into where AI bias comes from and how to correct it.

Subscribe to Multilinguish on Apple Podcasts, Spotify, Google Play, Spreaker, Stitcher or wherever you listen.

When you ask Amazon’s virtual assistant Alexa whether it’s sexist, it replies: “I think everyone deserves to be treated with fairness, dignity and respect.” But examining artificial intelligence as a whole tells a different story, in which virtual assistants, translation programs and robots of all sorts are created with human biases programmed in (though not on purpose). In our first episode of the Multilinguish podcast, we learn the two ways AI is programmed to process information, hear examples of situations where language led to some unexpected (and sometimes creepy) outcomes, and discuss what’s being done to address AI language issues before things go completely out of control.

Part I: How Does Language Create Sexist Robots?

Producer Thomas Moore Devlin explains where AI language bias comes from and how this problem can (maybe) be solved. We hear from computational linguistic engineer Kate McCurdy, who has done extensive research on gendered language and how it affects artificial intelligence.

Part II: What We Learned This Week

In our roundtable segment, “What We Learned This Week,” the whole team gathers to share the fun and fascinating language facts we uncovered in our research. This week:

  • David reveals the world’s oldest language (warning: it’s complicated)
  • Steph tells us how to bring Marie Kondo’s decluttering advice into yet another aspect of our lives: language learning
  • Dylan (that’s me) gives us a glimpse of dating apps around the world
  • Thomas explains the wonderfully complex way Chinese typewriters work
  • Jen digs into the surprising ways Spanish and Arabic are intertwined

Show Notes

Special thanks to Kate McCurdy for taking the time to speak with us.

Is Language Making Artificial Intelligence Sexist? | Babbel Magazine
Linguist Kate McCurdy On How To Make Computers Less Sexist | Forbes
Amazon Shut Down Recruiting AI For Sexist Results | PCMag
Microsoft’s disastrous Tay experiment shows the hidden dangers of AI | Quartz
Bob And Alice Shut Down | Independent
Google Fixes Translate Tool to Correct Gendered Pronouns | Independent

Episode Transcript

Jen: Welcome to the first episode of Multilinguish, the podcast for anyone curious about language and all the weird and fantastic ways language connects us all. I’m executive producer Jen Jordan. Over the five episodes of this first season, we’ll explore unsolved language mysteries, linguistic relativity, travel advice, and sexy accents. But in today’s episode, we’re talking about robots. More specifically, we’re talking artificial intelligence, or AI. We often joke about our robot overlords becoming sentient, but the reality is we teach robots everything they know. This includes all of our ingrained language biases, and when this highly human construct is fed into the sharp contrast of artificial intelligence and it’s spit back out at us, the results can be, well, disturbing and revelatory. Later on in the episode, the whole team will share what they learned this week. But first, here to tell us more about these sexist robots is producer Thomas Moore Devlin joined by senior producer Dylan Lyons. Let’s get into it.

Jen: Alexa, are you a robot?

Alexa:  I like to imagine myself a bit like an aurora borealis, a surge of charged multi-color photons dancing through the atmosphere. Mostly though, I’m just Alexa.

Dylan: That’s weird.

Thomas: That was deep.

Dylan: Alexa, who made you?

Alexa:  A team of inventors at Amazon created me.

Thomas: Alexa, how do you say “they are a doctor” in Spanish?

Alexa:  “They are a doctor” in Spanish is Son un médico.

Thomas: It’s getting rid of the pronoun.

Jen: So they’re saying “are doctor”?

Dylan: But they are saying un médico.

Thomas: That’s true, assuming it’s male. So that proves the point that I was making, excellent. Good point, Dylan. (laughs)

Jen: So Thomas, what made you want to dig into the topic of sexist robots?

Thomas: So, artificial intelligence. What’s the biggest thing you think of when you think of the hazards of artificial intelligence?

Dylan: World domination?

Thomas: Yes. But there are also other issues besides. So obviously, the first thing you think of with artificial intelligence, you think of, like, HAL from 2001: A Space Odyssey, or —

Dylan: The Disney Channel Original Movie Smart House. My favorite.

Thomas: Of course.

Jen: I think of C3PO.

Thomas: Oh, all right. Well that’s…

Dylan: He’s nice.

Thomas: The friendliest possible robot.

Dylan: He’s a nice guy.

Jen: Yeah, useful.

Dylan: It’s a nice robot? I don’t know.

Thomas: Smart House, is that one like the house that’s trying to take over for the Mom who died?

Dylan: Yes, but…

Jen: This is for children?

Thomas: Yeah.

Dylan: Teenagers, maybe preteens? It was scary though, the house like…

Thomas: It was.

Dylan: Took over and was like, “I am in charge”.

Thomas: Yeah, it locked them all in, and so that was the earliest Alexa if you think about it. (laughs)

Jen: Right.

Thomas: So I was interested in artificial intelligence, and because we’re a language company, I was interested in how artificial intelligence and language interact. So I wanted to talk to someone at Babbel who had experience with that. So I talked to Kate McCurdy…

Kate: Kate McCurdy, and through Friday of this week I am a senior computational linguistics engineer in Babbel’s Berlin office.

Thomas: So I sat down with Kate because she had an interest in artificial intelligence and how it interacts with language. She doesn’t exactly specialize, because she does a lot of different work; at a language learning app, you want to explore all of the ways that technology and language intersect. But one of her focuses was this idea that there is bias baked into artificial intelligence.

Jen: Interesting.

Thomas: Yeah. So again, sometimes you think artificial intelligence. It’s a robot, it doesn’t have feelings so it can’t have biases because it’s all just numbers packed in. But apparently that’s not true. And so, the reason why that is, is because the way we make it bakes in bias to these…

Dylan: So because it’s made by humans who are biased, sexist, racist, whatever they happen to be, that translates into the machine?

Thomas: Yeah, and not necessarily consciously. It’s not like there are engineers who are like, “Well I hate women, so I’m going to make this robot also hate women.” But that has a lot to do with how we now make artificial intelligence. So the first thing to understand is that there were two basic schools of thought having to do with how to make a computer think.

Kate: It’s generally kind of acknowledged in the history of artificial intelligence, and especially with respect to the history of natural language processing, that you have some different camps, all starting from the premise that you can successfully model something like language in computers, right? They might roughly be divided into this sort of symbolic manipulation approach that could be described as good old fashioned AI. Much in the same way that, as a native speaker of a language, I don’t really think about what the rules are in English, say for plurals, or when I voice this sound and not this other sound. I just know them.

Thomas: “Good old fashioned AI,” when she said that, at first I was like, that’s a kind of funny term she made up. But it turns out it’s actually a term used by anyone who works in artificial intelligence to describe the slightly older school of thought about the best way to teach computers, which is basically: you’re putting in rule after rule after rule, and then it just follows those instructions. Do either of you have experience with coding?

Jen: A little bit. I mean very early days, like this is a header and this is how it should look, and like denoting all the styling.

Dylan: Only on Myspace.

Jen: (laughs) I’m guessing AI is a little bit more complex than headers and links.

Thomas: Yeah. With good old fashioned AI, it’s basically a lot longer, because you have to give so many more instructions. For example, the first code anyone writes, no matter what programming language they’re using, is “hello world”, where they basically put in a rule and then, when something happens, it produces “hello world”. If you want to make a really advanced robot, you have to write a lot of rules, thousands and thousands of lines of code, teaching it exactly what to do, and in that way it can really only be as smart as the programmer.
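The rule-after-rule approach can be sketched concretely. Here’s a minimal, purely illustrative Python example (not from any real assistant): a pluralizer where every behavior is a hand-written rule, so the program can only ever be as smart as its programmer.

```python
# A toy "good old fashioned AI" pluralizer: every behavior is an
# explicit, hand-written rule.
def pluralize(noun):
    # Rule 1: irregular forms must each be listed by hand.
    irregular = {"child": "children", "mouse": "mice", "person": "people"}
    if noun in irregular:
        return irregular[noun]
    # Rule 2: nouns ending in s, x, z, ch, sh take -es.
    if noun.endswith(("s", "x", "z", "ch", "sh")):
        return noun + "es"
    # Rule 3: consonant + y becomes -ies.
    if noun.endswith("y") and len(noun) > 1 and noun[-2] not in "aeiou":
        return noun[:-1] + "ies"
    # Default rule: just add -s.
    return noun + "s"

print(pluralize("box"))    # boxes
print(pluralize("city"))   # cities
print(pluralize("child"))  # children
```

Every irregular word and every exception has to be anticipated by a human; anything the programmer didn’t think of falls through to the default rule.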

Jen: But it doesn’t allow for the kinds of things like, like we were talking to Alexa earlier, it’s not improvisation, because everything is still coded into the AI. I guess I’m trying to say it doesn’t allow for more interactive ability, basically. Right?

Thomas: Yeah. I mean, Alexa is probably more good old fashioned AI than we would think, because even though she, or it, because they don’t actually have a gender, seems very advanced, it doesn’t actually have a brain.

Jen: There are prompts for what she’s picking up on what you’re asking, right?

Thomas: Yeah, I actually talked to Kate a little bit about this whole experience of like, is the artificial intelligence that we interact with on a daily basis as advanced as we tout it being?

Kate: It’s an interesting thing. There are a lot of layers of work that go into designing a good conversational experience, and a lot of layers of work that have to be executed well to make it feel like it’s done well. And I’d say, to the extent that there are people who have that experience and feel that it works well, then it’s quite a success. But we should be aware that it’s a pretty carefully curated success, right? So for example, I can’t remember whether it was Microsoft or Apple, but there were reports that they hired comedians to write the joke dialog lines, for example.

Thomas: Now, I know it’s unbelievable to think that a real comedian wrote the terrible jokes that Alexa tells, but it’s not coming up with those jokes out of nowhere, and most of what you’re interacting with is within a set of scripts that’s already been written. And when you go outside of those parameters, that’s when Alexa will say something like, “Hmm, I don’t know that one”, and it feels more natural than you’d expect from old-school robots that just say, “Error, cannot compute”.

Dylan: Cannot compute.

Thomas: But it is very much a set system where it knows certain things and doesn’t know other things. At least the Alexas we interact with aren’t really actively learning, though they’re working on making it more advanced.

Dylan: Right.

Jen: So eventually it would learn more about you and be able to tailor its responses more to your words or your accent or the way you speak. But it doesn’t do that yet.

Thomas: Yeah, that would probably be Amazon’s goal in the long run. Right now, all of that work is done in a lab where they’re trying to teach it. But the product that’s put out is very set in stone.

Jen: So that’s good old fashioned AI?

Thomas: Yeah. Now, the second kind of AI is the one that’s more popular these days because it shows a lot more promise, though we don’t know exactly how much promise it will deliver in the long run. But for now, it’s definitely the most exciting. So I talked to Kate about that.

Kate: The one approach, the sort of connectionist or neural network approach to computation, is kind of like trying to get a computer to understand language in some loose approximation of how we think a child does, in a sense where you don’t have all this structure or rules per se, you just have a lot of data. And it’s up to this very, very loosely specified, powerful computational learning mechanism to come up with the structure out of it.

Thomas: So Kate there was talking about the neural network approach, which is the very exciting robots learning on their own, in a way.

Jen: You say exciting, I think that this is the robot that’s going to kill us, right?

Thomas: I mean, in that idea that robots are going to take over the world, this would be the kind of robot, because we have pretty much the most control over old fashioned AI, whereas with this, less control. We don’t really know what’s happening. It’s like a baby learning language: you just kind of present it with a bunch of information and then it can extrapolate from there.

Thomas: Same thing with humans. Do either of you know the Wug test?

Dylan: The Wug test?

Thomas: The Wug test. It’s this famous linguistic experiment which showed how children can extrapolate information. So they took some kids with some basic language skills, I guess they’re “tots” at that point, and showed them a little picture that looked like a bird and said, “This is a wug,” and then they learned it was a wug. And then they brought out a picture with two of them and asked, “What are these?” and the kids were able to fill in that they were “wugs.” So even though they’d never heard this word before, they were able to use information from other plurals they knew and apply it. So they knew “wugs.”
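The kind of generalization the Wug test demonstrates is, loosely, what the neural network approach is after: patterns inferred from examples rather than hand-written rules. A toy sketch in Python (nothing like a real neural network, just the idea of generalizing from data):

```python
# Toy "learning from data": infer plural endings from example pairs,
# then generalize to unseen words, the way the kids generalized
# "wug" -> "wugs".
from collections import Counter

examples = [("dog", "dogs"), ("cat", "cats"), ("bird", "birds"),
            ("box", "boxes"), ("bus", "buses")]

# "Training": count which suffix turns each singular into its plural,
# keyed by the singular's final letter.
suffix_counts = Counter()
for singular, plural in examples:
    if plural.startswith(singular):
        suffix_counts[(singular[-1], plural[len(singular):])] += 1

def predict_plural(word):
    # Generalize: use the most common suffix seen after this final
    # letter, falling back to the most common suffix overall.
    by_letter = Counter()
    overall = Counter()
    for (last, suffix), n in suffix_counts.items():
        overall[suffix] += n
        if last == word[-1]:
            by_letter[suffix] += n
    best = by_letter or overall
    return word + best.most_common(1)[0][0]

print(predict_plural("wug"))  # wugs -- never seen, but generalized
print(predict_plural("fox"))  # foxes
```

Like the kids in the experiment, the function has never seen “wug”, but it applies the pattern it extracted from the examples it was fed. A skewed set of examples would produce equally skewed generalizations.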

Dylan: Interesting.

Thomas: So it was kind of the same thing that they want to do with this neural network approach where you can feed it a bunch of information, and then it can start making connections that aren’t necessarily…

Jen: Explicitly stated, yeah.

Thomas: So at first it’ll be really dumb and it needs human guidance to start it on the route, but eventually it’ll just take in all of this data and then create an intelligence.

Dylan: So at least for now, robots could be as smart as babies.

Thomas: Yeah, at least as smart as babies, maybe.

Dylan: And then eventually as smart as adults.

Jen: Smarter than all of us. (laughs)

Dylan: Yeah, or that, yeah. It’s a little terrifying, but also cool.

Thomas: Yeah. The neural network approach is taking in all this data and it’s great, people love it, but this is where the bias comes in. I’ll use an example of something everyone uses pretty regularly, which is Google Translate. Google Translate learns by taking in a bunch of data and trying to find the most likely translation of a statement. So when you put in “hello” and want to translate it into Spanish, it will try to match it with what it already knows, and it will translate it into “hola”. And you can also give Google Translate feedback, I’m pretty sure you can still suggest a translation or something, so it’s taking in more data. But mostly what it’s doing is going through internet resources and information to create language matching. The problem is that not everything translates perfectly. The most famous example was when you wanted to translate the language Turkish. So, [inaudible 00:13:20] exists in a few different ways: in Spanish you’ll have [Spanish 00:13:27], the table, which is feminine, or you’ll have [Spanish 00:13:32], the donkey, which is masculine. And so that’s grammatical gender. Even though it doesn’t necessarily directly correlate, like, you don’t think a table is more female than male, it exists and it’s there.

Thomas: English does not have that on nouns, but it does have grammatical gender on pronouns, because we have “he”, “she”, “they”, etc. But there are some languages that don’t have that at all. So in Turkish, you just have “o”, and it’s a pronoun that works for anyone.

Jen: So you would say “o” where you would say “he”, or “she”, or “it”? It’s all the same word, the same singular pronoun?

Thomas: Yeah.

Jen: Interesting.

Thomas: And if you put in “o”, at least in Google Translate as it was, it would have to choose which way to translate it. It’s going to find the most likely translation, not necessarily the translation. So if you put in a phrase like O bir doktor, I’m probably not pronouncing that correctly, I don’t speak Turkish, an actual bilingual person would tell you it means “he or she is a doctor”. But Google is just trying to find one quick translation, and it translates it as “he is a doctor”. If you put in something else that means “he or she is a nurse”, it’ll translate that as “she is a nurse”, because the data it’s pulling from says it’s more likely for a sentence to say “he is a doctor” than “she is a doctor”.
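This “most likely translation” choice can be boiled down to a drastically simplified Python sketch. The co-occurrence counts below are invented for illustration; real systems use neural models trained on enormous corpora, but the failure mode is the same: the statistically common pronoun wins.

```python
# Toy illustration of frequency-based pronoun choice. Turkish "o" is
# gender-neutral, so translating "O bir doktor" into English forces a
# choice of pronoun. Counts are invented.
corpus_counts = {
    ("doctor", "he"): 900,   # hypothetical corpus statistics
    ("doctor", "she"): 400,
    ("nurse", "he"): 200,
    ("nurse", "she"): 950,
}

def choose_pronoun(profession):
    # Pick whichever pronoun co-occurred with the profession most often,
    # regardless of what the (genderless) source sentence actually said.
    candidates = {pron: n for (prof, pron), n in corpus_counts.items()
                  if prof == profession}
    return max(candidates, key=candidates.get)

print(choose_pronoun("doctor"))  # he  -- the skew in the data decides
print(choose_pronoun("nurse"))   # she
```

The source sentence carried no gender at all; the gender comes entirely from the skew in the training data.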

Dylan: So is it basically pulling from the body of writing on the internet?

Thomas: Yeah, it’s got a few different sources. I mean Google has control over everything.

Dylan: Right. (laughs) Every aspect of our lives.

Thomas: But, it just needs to make this choice as quickly as possible, and so it makes that choice without any guidance.

Jen: So it’s fed by human input, but it’s making decisions based on the volume and quality of the input it’s being given.

Thomas: Yeah. I mean, that’s not that different, again, from how humans get their own input. There’s the famous riddle: a father and son get in a car crash, the father dies, and the son is taken into surgery, and then the doctor says, “I can’t operate on this boy, he’s my son.” The point is that you’re supposed to be confused, like, “How is that possible?”, and then you realize, “Oh, the doctor’s a woman”.

Jen: Yeah, it lands a lot differently these days.

Thomas: Yeah, a lot differently. When I was a child, I think it blew my mind slightly more than it does now, because now I’d be like, “Oh, that’s stupid”. But when you’re taught certain schemata, certain shortcuts in your brain for what things should be, you start making connections that aren’t necessarily true. And that can lead to issues like, Tay.IO, no, Tay.AI.

Dylan: What is that?

Thomas: Tay.AI was this experiment Microsoft ran to try to make a Twitter bot that could talk to people.

Jen: Oh, no.

Thomas: Yeah. You can guess where this is going. So it started out with a basic amount of information, and it was just supposed to respond to people when they tweeted at it; it would take in the information being tweeted at it and integrate that into its lexicon and how it worked. And it started out fine, people tweeted at it and it would say things like, “Hello, my name is Tay”. But then, over the course of less than 24 hours, it became racist and sexist and a conspiracy theorist about lots of things.

Dylan: Oh, boy.

Thomas: And just saying to kill certain groups of people.

Jen: Yikes.

Thomas: And so that’s a good example of just, a really extreme example of how quickly, if you have a bad data set, things can go horribly, horribly wrong.

Jen: Just exponentially racist and terrible.

Thomas: Yeah. And that’s easy to see and spot because it’s just there.

Dylan: And I feel like Twitter is a particularity…

Thomas: Yeah, Microsoft should’ve known better.

Dylan: Yeah. A rough place to start.

Thomas: So things like that, we’re like that’s not great, but other times it can be less noticeable. Actually, for the next example, I have a script that I’m going to send both of you because Facebook created these two bots so that they could talk to each other, and then they started inventing their own language just based off of feeding into each other.

Jen: So wait, so they gave them, these two bots, they gave them, I’m assuming, some vocabulary, but then they started to talk to each other?

Thomas: Yeah. Facebook wanted to create these AI bots that were good at making exchanges with each other so that they could talk, and then they would adjust their language to make the most efficient talking possible.

Jen: I remember hearing about this. And then they had to shut it down.

Thomas: Yeah, it had to be shut down because they looked into what the bots were saying to each other and, like I said, I’ll send you both a script. Dylan, you’ll be Bob, one of the bots’ names, and Jen, you’ll be Alice. So it’s all about gender, whenever you’re ready, Bob.

Dylan: Should I start? Okay. Should I do a robot voice? I’m going to do a robot voice.

Jen: No, don’t do a robot voice.

Dylan: Don’t do a robot voice?

Thomas: Do like a light robot voice.

Dylan: Okay. “I can I I everything else……….”

Jen: “Balls have zero to me to me to me to me to me to me to me to me to”

Dylan: “You I everything else……….”

Jen: “Balls have a ball to me to me to me to me to me to me to me”

Dylan: “I can I I I I everything else ………….” (laughs)

Thomas: All right, so you get the picture. (laughs)

Jen: I feel like this is really romantic, actually.

Dylan: It’s beautiful.

Thomas: Yeah, Bob and Alice, quite the love story. So obviously they completely went off the rails, and I mean, this might be a more efficient way of communicating, maybe we should always be speaking like this, but I don’t know. And it’s also impressive because Bob and Alice really do have different styles: Bob uses all the dots, whereas Alice uses “to me to me to me to me to me.”

Jen: All the dots make me feel like I’m texting with one of my friends. (laughs)

Dylan: (laughs) Yes, baby boomer texting problems.

Jen: A little sense of apprehension. (laughs)

Thomas: So when you lock two bots in a room, this is what apparently happens. And it kind of reminds me of, have you ever seen that YouTube video of twins talking to each other when they’re babies?

Jen: Oh, yeah.

Dylan: No.

Thomas: And they’re making baby noises but they’re talking back and forth so there’s this idea —

Jen: Like a twin language.

Thomas: Yeah, twin language, which is kind of a real phenomenon where twins, when they’re babies, can learn to talk to each other not using the language around them, but making up their own language.

Jen: But the scary thing is with robots, we can’t understand it, and that’s where it gets problematic.

Dylan: I mean we can’t understand babies either.

Jen: That’s also problematic. (laughs)

Thomas: Eventually babies grow and, because they’re exposed to outside data sets, they learn better, whereas these two did not. But yeah, we don’t really understand artificial intelligence, and a lot of the problem with these neural networks, as mentioned, is that it’s kind of called a black box. With old fashioned AI, you can see exactly what’s happening and what’s being adjusted; you can find a line of code and say, this is why it’s doing what it’s doing. Whereas the neural network approach is learning on its own, it’s slowly becoming sentient, it’s going to kill us all.

Dylan: Creepy.

Thomas: And we have no idea what’s going on inside. It makes its own little adjustments and we just can’t really figure out necessarily why, especially when it’s involving massive data sets.

Dylan: So are there solutions to this problem of bias?

Thomas: Hopefully. The problem right now is that, because it’s a black box, we really only know about these things when they become apparent. There was this other example where Amazon made a tool to hire faster, because Amazon gets so many applicants. A bunch of resumes were fed into it, and it would try to decide which ones should be passed on for humans to review. But eventually they realized the algorithm was sexist and it was penalizing people for being women…

Jen: So it was looking at women’s names and, at some point, started ranking them as a worse fit based on that?

Thomas: Mainly it was the word “woman” that they realized was being penalized. Like if you had…

Jen: So if you majored in women’s studies?

Thomas: Yeah, or like if you put “oh, I was part of this women’s entrepreneurship club”, you’d be penalized more.

Jen: Yikes.

Dylan: Why, what? Why would that happen, I don’t…?

Thomas: Yeah, it’s not great. And again, it’s probably the human bias already baked into the hiring practices it was taught from. It had to be taught first which resumes to react positively to, and then it started making its own adjustments, possibly even magnifying the biases that humans have.
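A toy Python illustration of how that can happen (all data invented): a scorer trained on historically biased hiring decisions assigns a negative weight to any word correlated with rejected candidates, even one that says nothing about qualifications.

```python
# Toy illustration of how a screening tool absorbs historical bias.
# All data is invented. If past decisions skewed against women, a word
# like "women's" picks up a negative weight even though it says nothing
# about qualifications.
from collections import defaultdict

past_resumes = [
    (["software", "engineering", "chess"], 1),      # historically hired
    (["software", "leadership"], 1),
    (["software", "women's", "chess"], 0),          # historically rejected
    (["women's", "leadership", "engineering"], 0),
]

# "Train": score each word by how often it appeared on hired
# versus rejected resumes.
weights = defaultdict(float)
for words, hired in past_resumes:
    for w in words:
        weights[w] += 1 if hired else -1

def screen(words):
    # Higher score = passed on to a human reviewer.
    return sum(weights[w] for w in words)

# Two otherwise identical resumes; one mentions a women's club.
print(screen(["software", "engineering", "chess"]))             # 1.0
print(screen(["software", "engineering", "chess", "women's"]))  # -1.0
```

The model never sees a rule that says “penalize women”; the penalty falls out of the training labels, which is why the bias is so hard to spot from the outside.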

Jen: That’s the scary thing: technology usually just accelerates the things we ask it to do, so much faster than we could do them ourselves. So when there’s bias in decisions that need to be made quickly, decisions that affect much bigger things than just calculations, it spins out of control so much faster.

Dylan: Yeah.

Thomas: Yes.

Jen: And to a greater extent.

Thomas: If Amazon hadn’t caught this, we don’t know what would’ve happened. And if we don’t know what’s going on with these artificial intelligences, we can’t necessarily fix them. I mean, the biggest thing we can do right now is make sure everyone, especially the people working on these things, knows that this is a problem that can start affecting us. And actually, it was recently in the news thanks to Alexandria Ocasio-Cortez, as she brings everything into the news, when she basically said that artificial intelligence can be racist. The backlash was immediate. There was one tweet that was like, “Alexandria Ocasio-Cortez says algorithms, which are powered by math, are racist.” That speaks to our instinct: we think robots are unfeeling and entirely logical, so how could they possibly make anything but the purest possible decision? We’ve been talking mostly about sexism here, but they can have any of the isms. It’s a problem.

Jen: I guess let’s recap because we’ve talked about a bunch of different things.

Thomas: Yeah.

Jen: So AI works in two different ways in terms of language. The first one is this good old fashioned AI, where you’re programming, line by line, what you want the robot to say, so you’re having a direct human impact on how it responds to you.

Thomas: Mm-hmm (affirmative).

Jen: The second way, which is becoming more popular and is more exciting/terrifying, depending on which end of the spectrum you’re on, is based on a number of touch points: a history of ingrained attitudes toward gender, but also the way language is structured. Is there anything else, or other solutions? What else are we —

Dylan: Is it all awareness?

Thomas: It’s mostly awareness, and it’s mostly awareness on the part of the people who are programming this. One of the more promising examples is the one I mentioned earlier, of Google Translate not being able to handle grammatical gender very well, which was actually kind of fixed. So right now, if you go and translate O bir doktor or any phrase with this ambiguity, it will give you both of the possible translations.

Thomas: If you look up O bir doktor right now and translate it to English it’ll give you “she is a doctor” feminine, “he is a doctor” masculine, and it says “translations are gender specific” and you can learn more, and it teaches you a little bit about how this kind of translation works.

Jen: Nice.

Dylan: That’s great. We don’t usually applaud Google Translate here, but good job Google Translate.

Jen: Yeah, good job. That’s impressive.

Thomas: I know. It’s good. It’s now working in, I think, five different languages. But this is, again, an example where awareness is really what drove it. This was a very visible case, it was brought up at talks and it was popular online, and that’s why it was addressed. But it requires going in and fixing things. If we want to fix possibly larger problems, especially as we let AI become more and more separate from humans and it learns more and more on its own, we really need to determine right now how we can do that, and there are proposals for creating, like, an FDA for artificial intelligence…

Dylan: Some sort of oversight.

Thomas: Yeah, like a government oversight committee. Google actually just put out a request for rules on artificial intelligence around the world, but also they were like, “not too many rules”.

Jen: And considering our congresspeople can barely understand how to attach something to an email, it’s probably going to take a little while, but…

Dylan: Yeah.

Thomas: That’s going to be a problem and, obviously, that whole congressional system needs to figure out a way to deal with issues that 90-year-old men don’t necessarily fully understand. But, yeah, I’ll let Kate say why this is such an issue.

Kate: It seems like being a bit of a scold, but I think we’re starting to become more aware of the potential adverse effects of just saying, “Well, artificial intelligence, it’s so innovative, it’s so creative, it’s so good.” It can be all of those things, but what makes it into all of those things is a kind of humanistic and directed context which develops it for particular human ends. I think recognizing the risks inherent in that, and the responsibility upon the people building these systems and also upon the societies in which these systems operate, is going to lead us to a better place, and probably to better artificial intelligence as well in the longer run. But in the short run, I think it feels sort of grim.

Jen: Interesting. So, all that said, how do you guys feel about robots and artificial intelligence, Dylan? Are you optimistic?

Dylan: I’m cautiously optimistic. Honestly, I think it’ll probably get worse before it gets better but eventually as the technology develops, they’ll come up with new ways to hopefully eliminate some of that bias.

Thomas: Yeah. Personally, I’m optimistic because I kind of feel like I have to be, because we’re now getting to a point where artificial intelligence is just a part of our lives. It’s on our phones; we’re interacting with Alexas and Google Translates on a regular basis. It’s just going to be more and more an essential part of what we have. Right now is when we need to be worried about these things, because as it gets bigger and bigger, it’s going to get harder and harder to rein in. But I want to just say, it could make our lives easier. Someday we’ll only be working 10-hour work weeks because the robots will do all of the things and we’ll still get paid, because that was the future that, no, the Jetsons had jobs, some future show taught me I should be looking forward to.

Jen: Why did the Jetsons have jobs? They had robots do literally everything.

Thomas: Maybe it has to do with human purpose, that’s why I said 10 hours a week instead of 0 hours a week. I want to go somewhere and have someone telling me I’m doing a good job.

Dylan: (laughs) How do you feel, Jen?

Jen: I feel better understanding some of the reasons why AI does what it does, but it’s so baked into all of our lives already that it’s a little bit scary. Awareness is super key, and I think there are a lot of really positive things happening just in the way we’re using gender in general, embracing the singular “they” in a few instances, and the awareness, I think, is good. But if we could have a 10-hour work week, I for one welcome our robot overlords.

Thomas: Yeah, as long as they don’t kill us.

Jen: That was super interesting, thanks Thomas.

Dylan: Thank you, Thomas.

Dylan: Multilinguish is brought to you by Babbel, the language app. With Babbel, you can speak any language with confidence. Convenient lessons are only 15 minutes, and you can choose from 14 different languages, including Spanish, French, Italian, and more. So Jen, what’s your favorite lesson on Babbel?

Jen: Well right now, Dylan, I’m studying French. I wouldn’t say the lesson I took this morning was my favorite, because it was a really boring grammar lesson. Basically I remember a lot of French vocab and none of how it fits together, which is a real problem if I need to make conversation in France. I will say my favorite French lesson of all time in Babbel is the one with a dialogue where she’s going into a wedding and she knows none of her family, which I find odd and a situation I hope to never find myself in. But I would say that the most useful lessons are the grammar lessons that help you fit it all together.

Dylan: Grammar is important. And we’re offering Multilinguish listeners 50% off a 3-month subscription. New customers can get this offer by visiting

Jen: All right, welcome back. We have the whole team here, again I’m Jen Jordan. I’m the executive producer at Babbel and we’re going to talk about what you learned this week. David, let’s start with you.

David: I learned about the oldest language in the world this week, and I actually learned that that’s kind of a silly question to ask, which I didn’t realize. When you think about the oldest language in the world… It’s not that the languages we speak today, natural languages, just arise out of nowhere; they all have an ancestor from which they’re descended. So if you go back farther than the language that you speak today, like English, Spanish, whatever, and you trace the roots of those languages, you could actually go all the way back in history to the origin of language itself, and there’s a lot of debate about what this is. Maybe it’s some language like Proto-Indo-European, which is a reconstructed language that takes into account all the rules of language change and grammar shifts to try to reconstruct what people were speaking in 3500 B.C.E. You could claim that that’s the oldest language in the world because other languages kind of spiraled off from that.

David: But you could also trace back to see from what points of history you find writings or recorded histories of language. Sumerian cuneiform, for example, is from around 3000 B.C., maybe a little bit earlier, but no one speaks Sumerian today and no one really uses cuneiform. So classifying the oldest language, it’s both, like I said, a silly question, because all languages are pretty much the same age if you think of language itself as a concept, but you could also try to figure out what forms of language existed the longest period of time ago, whether or not they’re still alive today. So it’s a complicated question, one that doesn’t have an exact answer.

Jen: I have a question.

David: Okay.

Jen: You mentioned reconstructed languages. Is that like thinking about spoken languages that you’re trying to make into a written form, or what does that actually do?

David: So from what I understand, you take spoken languages today, you know what their sound systems are, how they’re pronounced and the rules that govern their phonology, sometimes their grammar, but it’s mostly about what words sound like. And if you study how those have changed over time, so like the dramatic vowel shifts that gave rise to, or maybe not the vowel shifts, the consonant shifts that gave rise to different sounds in English and Dutch that come from Germanic but didn’t change in German, you know that there’s a certain set of rules that govern how consonants change over generations. So you kind of reverse that rule and accept it as true, as a rule that just governs how language works. You go backwards and then you can kind of figure out what words would have sounded like 3,000 years ago. It takes a lot of digging and data analysis and all that sort of fun technical stuff, but you, in theory, are able to kind of predict, well not really predict, but anti-predict back in time what languages would have sounded like with these rules in reverse. Does that make sense?

Jen: Yeah. So wait, so what is the oldest language in the world?

David: I don’t know, there’s no answer.

Jen: I want answers.

David: The answer is that there is no answer. That’s the whole point of my article. I think it’s kind of a tease, actually, because I owe my…

Jen: I’m really angry about this. (laughs)

David: Yeah, me too. I actually wanted to have a clear, definite answer to present to the people. I know everyone is just waiting at the edge of their seat to find out, but written systems, do they count as spoken language? Because we don’t know what Sumerian really sounded like, even though we can try to approximate it, and what would it be for someone to speak Sumerian today? If they did, like if we could maybe resurrect Sumerian and learn how to speak it, maybe that would be considered the oldest language. And I’ll wrap this up, but another cool example, I just think it’s a really cool question. So Hebrew wasn’t really spoken as anyone’s native language or mother tongue for centuries, from maybe about the 4th century C.E. up until the 19th century C.E., but with the creation of the state of Israel, which has Israeli or Modern Hebrew as its official language, the language has been brought back from its liturgical and religious context into a more colloquial form. So does that mean that Hebrew is a really old language, or is the form that exists today a completely separate variant dialect of Hebrew that you can’t really claim is the same as ancient Hebrew? Just big questions. My point is that it’s really hard to classify what the oldest language is.

Jen: You could’ve just said, “I don’t know.” (laughs)

David: I did, I did, and then it wasn’t good enough for me, at least.

Steph:  As you’ve been talking, I just heard like (hums Jurassic Park theme).

Jen: Steph, what’d you learn this week?

Steph:  I learned that it was possible to create language driven content around the current Marie Kondo craze. (laughs)

Thomas: Your girl.

Steph:  She’s my girl.

Jen: Can you KonMari?

Steph:  You can KonMari anything if you want to, if you really try hard enough. So I decided to write an article about how you would go about Marie Kondo-ing your language studies, because her name is now functionally a verb in 2019. And so some of the advice that I gave was that you can sort of apply those same exact methods to any tangible tools that you use to study languages. So for example, you might still have your high school or college Spanish textbook, and maybe it’s working for you, and if so that’s great, but just because it was the first learning method you were exposed to doesn’t mean that you have to keep using it forever, especially now that we have app-based language learning available to us.

Steph:  So one of the things you can do is lay out all of your tools in front of you, hold them in your hands, and try to figure out if these things are still doing it for you. Obviously not everything about learning has to spark joy, because there’s also discipline involved, but I think that your body kind of knows when something is helping you or not. Another approach that you can take is to ditch any vocabulary that’s not sparking joy for you, and by that I mean don’t waste time struggling through, like, sports vocabulary if you have no interest in talking about soccer. You’re an adult now, you can pick and choose what you need to learn. And actually this is a method that a lot of polyglots recommend: narrow it down to words that you think you want to use.

Jen: I love that you could be having a conversation in another language like on a date or something and they start wanting to talk about sports and you’re just like “file not found”.

Dylan: Back to food.

Thomas: I do that in English, though. Like oh, football? No.

Steph:  Yeah, 404 error.

Jen: Yeah, that’s fascinating.

Steph:  And then obviously the final one is also if the language you’re studying isn’t bringing you joy anymore, be honest. It’s okay to sometimes study a language for 6 months and decide that it’s not the one that you want to commit to long term. I kind of compared it to relationships and dating because it’s February and I’m apparently the love guru.

Thomas: Yes.

Jen: Our senior love correspondent.

Jen: Dylan, what did you learn this week?

Dylan: So speaking of commitments and dating, I dug into dating apps in other countries because I wanted to know what that looked like. So can you guys guess what the most popular dating app in the world is?

Jen: Tinder.

Dylan: Close, but no.

David: Match?

Dylan: No.

Thomas: E-Harmony?

Steph:  Plenty of Fish?

Dylan: No, okay you’re all wrong. So Tinder’s #2, the #1 app is Badoo.

David: That was a trick question.

Dylan: That was not a trick question.

Steph:  Isn’t that a search engine?

Dylan: No. It’s a dating app.

Steph:  It’s a search engine for people.

Dylan: Yes. It’s a search engine for love. So it has almost 400 million users worldwide; Tinder has about 50 million globally, but which apps are used more kind of depends on the region. So obviously we know that Tinder is very popular in the U.S., and it’s also very popular in Mexico and Canada, but once you move down further into South America, Badoo really picks up in popularity. Another up-and-coming app in the U.S. and Mexico for Latin Americans is Chispa, which means “spark” in Spanish. It’s actually owned by the Match Group, which owns Tinder, but it’s a partnership with Univision, and it’s in Spanish and English. And again, it’s all about the swiping. Basically I found that as you move around the world, most of the apps involve swiping, surprise, surprise. And my personal favorite is a Swedish app that’s also available in Finland and starting to spread a little more. It’s called Happy Pancake, which is probably the best dating app name I could ever come up with. (laughs) And it prides itself on being completely free; there are no extra added charges, unlike Tinder, which has limited swipes and can charge you for more.

Jen: It’s very Scandinavian of it to be free.

Dylan: Yes, and most of it has a search…

Steph:  Your tax dollars pay for it.

Dylan: Exactly. It also has a search function that allows you to find people with similar interests, so that’s a nice extra component that’s not just about looks.

Thomas: Can you just search for people? Because I feel like half the reason people use dating apps is just like “I want to see which people I know on here”.

Dylan: You mean by name?

Thomas: Yeah.

Dylan: I don’t think so.

Thomas: I’m just like “Dylan”, ha ha, left.

Jen: It’s like the population of one city in an entire country, I’m pretty sure you’re going to recognize some people.

Thomas: Yeah.

Dylan: You better swipe right on me. But, and then last one I’ll mention is in France. It’s called, ugh, French pronunciation, here we go Adopte un Mec.

David: Sorry, France.

Dylan: Sorry. Which translates to “Adopt a Guy,” and it’s a popular French app that was made to empower women. So basically it’s free for women; they charge men to send message requests they call “charms” to women, and the women have all the power, so they can either accept it or reject it, and if they accept it, then you can have a conversation.

Jen: So it’s like Bumble, right?

Dylan: More like Bumble, yeah.

Steph:  It almost kind of makes me think like they’re kind of re-contextualizing this as you’re adopting a child that you then have to take care of.

Thomas: Isn’t that what dating a man or dating is all about?

David: Have you ever seen those grow-a-boyfriend things that you stick in water and you’re like… That’s what it reminds me of.

Dylan: That’s basically what it is. So yeah, that’s dating around the world.

David: I have a question.

Dylan: Yeah.

David: Did you ever see any research about, or do any research about, Facebook’s new dating platform? I think they’re going to unveil something.

Dylan: It’s coming soon allegedly, I don’t know when exactly, but…

David: That’s scary to me. I don’t want, I feel like Facebook knows too much about me already.

Dylan: That’s why it’ll find you the perfect most compatible partner.

Steph:  Wow, this sounds like a Black Mirror episode.

Dylan: Yeah.

David: Stay tuned.

Jen: Thomas, what did you learn this week?

Thomas: I learned, this wasn’t actually from research, but I happened to go to a museum and there was a fascinating exhibit. So I went to the Museum of Chinese in America, which is a smaller-than-most museum in Chinatown in New York, and they had this whole exhibit I didn’t know about in advance about the Chinese typewriter, which is not something I’d thought about before. Because… So you’ve got your English computer or typewriter, or I guess any Latin alphabet, because there are only 26 characters, plus other stuff, it’s easy to fit everything on there. But Chinese languages, it’s not all one language obviously, but since they have a logosyllabic system, which means basically each symbol represents a syllable, there’s this huge number of characters; according to what I read, a quote-unquote educated person will know about 4,000 symbols. And that is not easy to fit onto a typewriter.

David: No. Way too many buttons.

Thomas: Yeah, I mean basically they talked about all these different techniques that have been used to try to create a usable-sized typewriter. One of them was just, like, let’s make a really, really big keyboard, which looks massive. And there was an interesting section about the public perception of this, because apparently during the ’80s and ’90s, it was common to make fun of the Chinese typewriter. MC Hammer has a dance where he’s moving his legs back and forth, it’s actually the Hammer Time dance, but it’s also called the Chinese Typewriter.

Dylan: That is so… What?

Jen: Yeah, that seems like it’s wrong on a number of levels.

Thomas: It was odd. And there was a Simpsons joke about the Chinese typewriters, it was weird. And there were other techniques that they used, like trying to assign, I don’t think this was for regular typing but for a different kind, but they tried to assign a number to each of the symbols and then you translate the number into the keyboard.

Jen: So you have to memorize all of these symbols, thousands of symbols, and then you have to memorize the associated number with them and that’s how you type?

Thomas: I think it was actually Morse code that they were trying to do, like trying to translate into Morse code so you had to give a number and then you put the number into Morse code.

Jen: Oh, my God.

Thomas: Which…

Dylan: That’s very convoluted.

Thomas: Made it more difficult. Yeah.

Jen: So how do you actually, what does a typical laptop look like, then?

Thomas: I mean nowadays it’s easier; they have an iPad set up that you can try. There are basically two techniques that can be used: you can either just physically draw the symbols, which was not easy for me to do, or there’s the technique of using the English letters, or the Latin alphabet, to spell out the equivalent of the symbols, and then it translates that into the symbol onto your screen for you. I keep saying English mainly because English-speaking companies are why this typewriter got so popularized that basically now everyone is forced to use it, even though it’s not necessarily the best for the language. But it’s interesting.

Jen: That sounds more harrowing than T9 typing, which was a really impressionable time in my life, so.

Dylan: What did you learn, Jen?

Jen: Thanks, Dylan. So this week I read a great article from our colleagues in Berlin, and it’s talking about the crossover, or the influence, of Arabic in Spanish. So apparently there are hundreds of words in the Spanish language that are influenced by Arabic, and this goes way back. There’s obviously a lot of history that I won’t go into, but basically the Moorish occupation of the Iberian Peninsula way back when in like 711, back in the three-digit years…

Dylan: Love that store.

Jen: It’s also my birthday. Anyway, so because of that, over time a lot of the Arabic words ended up being integrated or morphed into the Spanish language, and I’m looking for the actual example they give. Another common Arabism is the fusion of “al” with nouns in Spanish. So “al” is basically an article like “the,” and so a lot of words about agriculture and a lot of words about food ended up having “al” as the prefix, which then sort of merged into the word. And there’s actually a ton of examples in…

Thomas: Like lunch.

Dylan: Almuerzo. I was thinking that too!

Steph:  It’s 12:00.

David: Thinking about lunch.

Jen: So I thought that was super interesting and something I wouldn’t normally think about, and history and language are fascinating. Anyway, thanks everyone. Have a good week, bye.

Jen: Multilinguish is produced by the content team at Babbel. We are:

Thomas: Thomas Moore Devlin.

David: David Doochin.

Steph:  Steph Koyfman.

Dylan: Dylan Lyons.

Jen: And I’m Jen Jordan. Ruben Vilas makes us sound good. Our logo is designed by Ally Zhao. You can read more about this episode’s topic and even more on Babbel Magazine, just visit Say hi on social media by finding us @babbelusa, all one word. Finally, if you like what you heard, please rate and review this podcast. We really appreciate it.

Jen: How do I use Siri?

Dylan: Are you serious?

Jen: I’m dead serious.

Dylan: Are you serious?

Thomas: Home button.

Jen: My home button doesn’t work.

Dylan: Do you have “Hey Siri” installed? Hey Siri?

Siri: Yes?

Dylan: Mine is a British man though.

Jen: That voice is, yeah… Change the voice.

Dylan: They don’t want you to be a British man anymore.

Siri: I’m not sure I understand.
