When most people think of big data they think of numbers, but it turns out that a lot of big data — a lot of the output of our work and activity as humans in fact — is in the form of words. So what can we learn when we apply machine learning and natural language processing techniques to text?

The findings may surprise you. For example, did you know that you can predict whether a Kickstarter project will be funded or not based on textual elements alone … before it’s even published? Other findings are not so surprising; e.g., hopefully we all know by now that a word like “synergy” can sink a job description! But what words DO appeal in tech job descriptions when you’re trying to draw the most qualified, diverse candidates? And speaking of diversity: What’s up with those findings about differences in how men and women describe themselves on their resumes — or are described by others in their performance reviews?

On this episode of the a16z Podcast, Textio co-founder and CEO Kieran Snyder (who has a Ph.D. in linguistics and formerly led product and design roles at Microsoft and Amazon) shares her findings, answers some of these questions, and offers other insights based on several studies Textio has conducted on language, technology, and document bias.

Show Notes

  • How analysis of language can predict success on Kickstarter, affect job listings, and more [0:00]
  • Specific words and phrases to use and avoid [9:59]
  • Discussion of how the analysis works [16:11], and how language can affect gender bias [23:59]


Sonal: Hi, everyone. Welcome to the “a16z Podcast.” I’m Sonal, and I’m here today with Michael, and we are talking to Kieran Snyder, who is the CEO and co-founder of Textio, a company that analyzes job listings to predict how well they’re going to perform, and can help optimize them to get more qualified, diverse candidates. And interestingly, they’ve been able to figure out, besides what doesn’t work very well in job descriptions — words like synergize — they’ve been able to figure out what does work well.

Broad effects of language

Kieran: Language, like — in tech, people love to talk about hard problems and tough challenges.

Sonal: But it’s a lot bigger than just about jobs. The ability to understand the words we use and how we use them is pretty important, because even though we’re completely immersed in a world of tech, where a lot of the conversation is around big data as numbers, a lot of the data that we produce — or, the output of our work — is actually taking place in the form of words, and those words matter.

Kieran: Sometimes how you say things is more influential than what you’re actually saying, right, and it’s counterintuitive to any of us who’ve built products before, because you like to think you’re leading with a strong vision.

Sonal: Clearly, words matter. And another place that that plays out is with hidden biases that are often revealed in words. For example, Kieran examined a number of resumes to see the differences between how women and men describe themselves, as well as in performance reviews, to see the ways that women and men were described differently.

Kieran: The word abrasive, which has been talked about since then, ended up, you know, being used in 17 out of a couple hundred women’s reviews, and 0 times in men’s reviews, right. The, sort of, stereotypical, like, “aggressive” was used in a man’s review with an exhortation to be more of it, and in women’s reviews, it’s a term of some judgment.

Sonal: Okay. Let’s get started. Kieran, welcome. So, the reason we actually invited you to the “a16z Podcast” today is because you’ve been writing a lot of interesting work based on the outcomes of your product, where you’ve been analyzing people’s use of language in certain contexts as a way to surface insights. And I think that’s really fascinating, because I think we have a tendency, in our world, to focus on big data as if it’s just numbers — and not other forms of data, because you’re really describing — I mean, what you describe your work as doing is applying machine learning to text and natural language. So, how did you kind of — how does that work, and then we can talk a little bit more about how you got there?

Kieran: Yeah. So, how does it work? Language is just an encoding of concepts, right, and anything that can be encoded can be measured. And so, I was sharing this story the other day — we were actually — originally started out looking at Kickstarter projects, right. So, we started out with this question — could we just look at the text of a Kickstarter project, and some of its, you know, metadata around the text and predict, you know, before it was ever published, whether it was going to raise money. And we didn’t look at the quality of the idea. We didn’t look at whether a celebrity endorsed it. It turns out we got over 90% predictive on minute zero of a project, as to whether it was going to hit its fundraising goal, based solely on things like how long is the text, and what kind of fonts are you using, and how many headings do you have.

Sonal: So, wait a minute, just to unpack that a little bit. So, before the project even went live on Kickstarter, just looking at those features of the text, you’re able to predict whether it [will] be successful or not.

Kieran: Exactly.
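As a rough sketch of the kind of “minute zero” prediction described here — with an invented feature set and toy data, since Textio’s actual model isn’t specified — one could train a simple classifier on surface features of the project page:

```python
import math

def extract_features(project):
    """Surface features only -- nothing about the quality of the idea.
    This feature set is an invented stand-in for the signals mentioned
    (text length, heading count, and so on)."""
    return [
        len(project["text"].split()) / 100.0,  # text length, in hundreds of words
        float(project["num_headings"]),        # lots of headings helped
    ]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(examples, labels, lr=0.1, epochs=2000):
    """Plain gradient-descent logistic regression over the features."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_funded(project, w, b):
    """Probability the project hits its goal, judged before it goes live."""
    x = extract_features(project)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy training corpus: the long, heavily-headed projects were funded.
projects = [
    {"text": "word " * 500, "num_headings": 8},
    {"text": "word " * 620, "num_headings": 10},
    {"text": "word " * 450, "num_headings": 7},
    {"text": "word " * 90,  "num_headings": 1},
    {"text": "word " * 80,  "num_headings": 2},
    {"text": "word " * 150, "num_headings": 1},
]
funded = [1, 1, 1, 0, 0, 0]
w, b = train_logistic([extract_features(p) for p in projects], funded)
```

On separable toy data like this, even a bare-bones logistic regression picks up that longer, heading-rich projects were the funded ones; the real system presumably uses far richer features and much more data.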

Sonal: What were some of the high-level takeaways from that?

Kieran: Yeah. So, longer is better where Kickstarter is concerned.

Sonal: Interesting.

Kieran: Kind of counterintuitive. One thing that broke our hearts, because my cofounder, Jensen Harris, and I both have some design background — you would think these cleanly designed projects, with this beautiful use of single typography would do best. Not so. You want it to look like a ransom note. So, you want to mix and match types. You want lots and lots of headings.

Sonal: Oh, my God. That sounds visually painful.

Kieran: You want images to be frontloaded, kind of makes sense. But a lot of what we found was not intuitive.

Sonal: Interesting.

Kieran: And so, it demonstrated for us the value of actually measuring, because the whole Kickstarter corpus is out there in the world, right. So, you can actually have great training data. You can see how well prior projects have performed. And we saw, “Hey, we’re kind of onto something here” — very painful as a product person, because the quality of your idea doesn’t matter — just looking at the content aspects, we could predict.

Michael: And how do you account, then, for all the other sort of outside variables, you know, whether it was at the beginning of the Kickstarter kind of, like, craze, whether it was a certain time of year for that matter?

Sonal: A certain type of product even.

Michael: Yeah. Or geography? How do you know that, in fact, your analysis was correct?

Kieran: I mean, you can look at some of those other factors, right, because you can see when projects are published. It turns out that doesn’t make a big difference. You can see — the only things that really moved the needle in a very short-term way are, do you have a celebrity endorsing you — because that can get you a lot of social media attention. It doesn’t make or break you, but it can help quite a bit. And generally, how good you are at your social media strategy can tip the balance a bit. But none of those other factors turned out to be as significant as we expected.

Michael: The ability to really zero in via just the text — did that surprise you?

Kieran: I mean, we started off with a hypothesis that it would be that way — that, you know, sometimes how you say things is more influential than what you’re actually saying, right. And it’s counterintuitive to any of us who have built products before, because you like to think you’re leading with a strong vision. We weren’t surprised. We were curious, as we started to apply the technology to some other verticals, whether it would extend. You know, our first big area — our first real product application — has been job listings, where we’ve looked at listings now from over 10,000 different companies.

We’ve measured who’s applied to which listings, and we do see — the content matters. We do see some tailoring by geography. It turns out what works in New York is different than what works in San Francisco. We see a lot of tailoring by industry. So, what works to hire in tech is very different than what it looks like to hire a claims adjuster, or someone in retail, right. So, you see some differentiation. But in all cases, depending on how you’re slicing and dicing the categories, that text leads — you know, we’ve looked at real estate a little bit prior to launching our jobs application, and we’ve seen the same principles apply.

Sonal: So, so far, you’ve been talking about the form of the text — like, the length and the fonts and the design — but were there particular words that popped out as well, in terms of what people said in those Kickstarter descriptions, or anything like that? I’m bringing this up because there’s a recent anecdote in the news that I read, about someone saying that you can predict the success or default of loan applications based on the words people use — like, using “God” a lot will actually mean you’re more likely to default on your loan, for example.

Michael: By God, I’ll pay you every month. I promise. Yeah.

Kieran: In Kickstarter, we didn’t look at that. We started looking at that for real estate listings and then jobs, where we’ve looked at it quite a bit. So, we saw when we were prototyping out the real estate stuff that if you say “off-street parking,” that really moves the needle for low-income homes. But for high-income homes, in terms of the number of people who go to your open house, and then the eventual sale price of your home — for higher-priced homes, it’s actually a negative, because why would you want to highlight that it has off-street parking? It’s just sort of an expectation. So, we saw, you know, vocabulary mattered quite a bit. In jobs, it matters hugely. You know, we’ve identified, at this point, over 25,000 unique phrases that move the needle on how many people will apply for a job, what demographics, how qualified they are.

Sonal: Could you share some of that insight with us, because, you know, the reason I came across your work is because I read an article about how you analyzed performance appraisals and job descriptions for insights about what moves the needle, and the differences in how people communicate. What are some of the things — I mean, just because we have a huge audience that does job descriptions.

Michael: That needs to hire some people.

Kieran: Yes. That needs to hire. Yeah. So, there is, sort of, a set of language that works really well for everybody. These are not surprising on the face of them, but when you look, you see lots of them. So, things like, “We’d love to hear from you.” Be really encouraging and positive in your listing. Using the right balance of talking to the job seeker. So, your background is in science, and you really enjoy roller skating in your free time. And talking about the company. “So, we stand for this,” in terms of the balance between “you” statements and “we” statements, can matter. You know, language like — in tech, people love to talk about hard problems and tough challenges. Curiously, we see patterns change over time. So, my favorite example of this is the phrase big data. So, a year and a half ago, if you used the phrase big data in a tech job listing, it was positive. You know, it was seen as compelling and cutting-edge. In June of 2015, it’s not negative, but it’s totally neutral.

Michael: That’s interesting. I wanted to ask, because if everybody, sort of, gloms onto these best practices, how then does the signal versus the noise shift?

Kieran: Exactly. As with any marketing content, the patterns that work change as they get popular and get adopted. And so, one of the reasons we believe software is so interesting as a solution here is that it can keep track, at broad scale, of what’s actually happening right now in the market. So, you may have published a job listing that worked really well a year ago — and probably a lot of your listeners write their job listings by going back to that one, and then they try to edit it and tweak it a little bit and fix it.

Sonal: That’s exactly what happens.

Kieran: Right. But it actually doesn’t necessarily work, because the market has changed. And so, there’s a lot there.

Key words and phrases

Sonal: Were you ever — I mean, I’m just curious about this — were you ever able to find or study associations between people’s intent and outcomes in job listings? So, for example, one of the things that we’ve seen happen a lot is that people only become real about what they actually want out of a job description when they actually put words to paper, and words have that power, to sort of help discipline what you’re looking for. You might not even know what you’re looking for until you write it down. Have you ever looked at anything around that, or found — heard interesting anecdotes around that given your work?

Kieran: We have seen that listings tend to perform better when they are originally authored. So, you can see some degradation over time when people patch, you know — I take a little bit from this listing and a little bit from this one, and I sort of stitch them together. And it’s probably because when you’re originally authoring it, you bring that coherent point of view.

Sonal: That’s really interesting.

Kieran: So, a little bit — pretty early for us to have seen that. And we also identify phrases that torpedo your listing.

Sonal: Like?

Kieran: Corporate sort of clichés and jargon.

Sonal: So buzzwords, basically.

Kieran: One of the very common — we call it a gateway term — that kind of torpedoes your listing is the word “synergy.”

Sonal: Oh, my God. That should torpedo any piece of content.

Michael: Yeah. Yeah.

Sonal: I don’t care what it is.

Kieran: But it’s a gateway term, because when people include “synergy,” they’re also significantly more likely to include, you know, “value-add” and “make it pop” — kind of silly, but they’re all over the place. And it turns out, every candidate of every different demographic group hates them. And so, there’s a lot of opportunity to improve in these jobs.

Michael: So, in the, sort of, the editorial world, we would call that jargon. And it sounds like…

Kieran: We also call it jargon, specifically.

Sonal: I think we all call it that. Jargon is jargon. No, totally. Actually, it’s interesting, because, with words like that, they’re obviously in use because they’re useful words, and it’s kind of sad, because — I mean, synergy at some point was probably a useful word. So, it’s kind of interesting, because over time, with your corpus of data, you’ll be able to sort of map how people’s language changes.

Kieran: Exactly.

Sonal: And when you think of dictionaries as, like, these static instruments for capturing text these days, it is kind of fascinating how language is changing in a way that we’re able to track differently now, thanks to online and software.

Kieran: It changes lexicography, like, just as a whole discipline. It changes lexicography for sure. I don’t know that you could do it in a static way anymore.

Sonal: Right. I totally agree.

Kieran: The internet has just exploded that.

Sonal: Right. Exactly.

Michael: So if big data is, kind of, neutral now, is there a kind of job type or job description that’s the celebrity of the job search world right now?

Sonal: Yeah. What word is, sort of, popping out that’s really moving the needle for you guys, or that you’ve observed?

Kieran: There are several. Most of your listeners are probably in tech. It varies a lot by industry. So, “at scale” right now. “At scale” is a very popular phrase.

Michael: A-ha.

Sonal: That’s popular here, too. We talk about that a lot.

Kieran: Yeah. Well, it is. You don’t want to do things and use methods that are perceived to be manual, or perceived to be limited in some way. So, “at scale” is one that shines — and it started in tech, but it spread to other industries, which is something we see commonly. One of my favorite examples, given that we spend a lot of time talking to HR people, is — it turns out “workforce analytics” is no longer a good phrase to use. You want to use “people analytics.” So, you know, you can get these highly specific, deep-in-an-industry changes — that if you’re in the industry and you’re on the cutting edge, you probably know, but if you’re just a startup trying to hire your first analytics person, you probably have no idea. You don’t have a deep background in the industry.

Sonal: That’s great.

Kieran: Right. Yeah.

Michael: So, you’ve described job listings, real estate — this approach, you think, can extend in different directions. You started with Kickstarter, but what is it that it’s doing? It seems a little bit magical, I have to say. Like — I know that this is a job listing, so therefore it’s going to have to do this. But a real estate listing has to do something kind of different.

Kieran: Right. That’s a really good question. So, you know, this approach is as powerful as the data set that you have. So, if you want to understand a document type, the very first thing you need to do is collect a lot of examples of the document type. And that means you need the documents, and you also need some information about their outcomes. So, you are publishing a Kickstarter project. We want to know, did you make money or not? That’s a signal for us. You’re publishing a job listing. We want to know, did you attract a lot of good people? Did you attract only men? Did you attract no one? So, you know, for each document type that we take on, the first thing we do is, we make sure we build out a great training data set. 

And then we apply really classical natural language processing techniques. So, we look for patterns, and we say, “Okay. These are the ones that were successful,” where successful is defined as, you know — attracted more applicants than 80% of similar listings, maybe. And then we start looking for the linguistic patterns in the successes, the ones that aren’t as successful, ones that skew in a certain way demographically, and then we play that back. So, sort of a key thing for us, is that you get that feedback in real time, as you’re typing. So, as you’re working on your document, before you ever publish it, or pay to publish it somewhere, you can make it good. And so, the training set is the, sort of, core of all of that, because without that outcomes data, then it’s just someone’s opinion.
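The pipeline outlined here — label each document by its outcome, then look for phrases that separate successes from the rest — might be sketched like this (the success threshold, smoothing, and sample listings are all invented for illustration):

```python
def label_by_outcome(listings, quantile=0.8):
    """Mark a listing 'successful' if it drew more applicants than
    roughly 80% of the corpus -- one possible success definition."""
    ordered = sorted(l["applicants"] for l in listings)
    cutoff = ordered[int(quantile * (len(ordered) - 1))]
    return [dict(l, success=(l["applicants"] >= cutoff)) for l in listings]

def phrase_lift(labeled, phrase):
    """How much more often a phrase appears in successful listings than
    in unsuccessful ones (add-one smoothing keeps the ratio finite)."""
    hits = lambda group: sum(phrase in l["text"].lower() for l in group)
    wins = [l for l in labeled if l["success"]]
    rest = [l for l in labeled if not l["success"]]
    win_rate = (hits(wins) + 1) / (len(wins) + 1)
    rest_rate = (hits(rest) + 1) / (len(rest) + 1)
    return win_rate / rest_rate

# Toy corpus: encouraging listings drew applicants; jargon-heavy ones didn't.
listings = [
    {"text": "We'd love to hear from you about hard problems.", "applicants": 90},
    {"text": "We'd love to hear from you. Tough challenges await.", "applicants": 80},
    {"text": "Join us. We'd love to hear from you.", "applicants": 85},
    {"text": "Leverage synergy and value-add.", "applicants": 5},
    {"text": "Synergy-driven rockstar wanted.", "applicants": 8},
    {"text": "We need a ninja with synergy.", "applicants": 3},
    {"text": "Claims adjuster needed.", "applicants": 20},
    {"text": "Retail associate role.", "applicants": 15},
    {"text": "Analyst position open.", "applicants": 25},
    {"text": "Engineer wanted.", "applicants": 30},
]
labeled = label_by_outcome(listings)
```

A lift above 1.0 means the phrase shows up disproportionately in successful listings; in practice candidate phrases would be mined automatically rather than named by hand.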

Michael: And then could you extend that to say, like, “Look, I want to write a screenplay for a blockbuster.” I mean, could you — people have probably tried this, but…

Kieran: In fact, a very prominent Bay Area CEO proposed to us a couple months ago that we start applying this to screenplays.

Sonal: To actually start producing content, or just analyzing them?

Kieran: Sell it to Hollywood.

Sonal: Oh, wow. That’s great.

Kieran: Yeah. So I think any time you’re writing content to sell something, this is really interesting technology. And you could be selling your company. You could be selling yourself — you’re a job seeker with a resume that you want to have optimized. You could be selling your product in an e-commerce setup. You could be marketing yourself. You could be marketing blast emails. Any time you’re writing content to get people to take an action, this is really useful technology.

How the analysis works

Sonal: Well, let’s talk about where this fits — and let’s purposely use some jargon here — where it fits in the tech trends, in that space. So, it sounds like you’re describing big data techniques applied to natural language, or machine learning techniques applied to natural language. But natural language processing has been around for over three decades. I mean, in the early days, they didn’t have this kind of corpus to train the algorithms on, obviously, so they had to use different kinds of techniques. Like, where does your work fit, and how do you see how it fits in the evolution of natural language processing — how has it been, and where are we now, kind of?

Kieran: Yeah. I mean, I think in core natural language processing, empirical strategies have always been really important. So, when I was a grad student years ago, writing a dissertation, collecting data was just a lot more work, right. So, I had to go and record people in the field, and I had to transcribe things. I mean, it feels ancient now, actually, but I actually finished my Ph.D. 12 years ago. It wasn’t that ancient. The fact that the internet has codified everything over the last 15 or 20 years, at least in English and most Western languages, means that you have this ready set of corpora available to you. The tricky part is collecting the text and the outcomes.

Sonal: Right. 

Kieran: The outcomes are the part that’s hard. Finding the content is easy.

Sonal: So, you’re describing the difference between just analyzing something and being able to predict something using that text.

Kieran: Exactly. When you analyze something, you can say, “Oh, cool. This word is really popular now. That’s an interesting fact. It might be valuable to someone to know it.” But it’s different than saying, “This word is actually helping your document in some way.”

Sonal: What are some other scenarios where you could use, sort of, this natural language text analysis to predict interesting things?

Kieran: Yeah. So, people are really starting to think broadly about this. We saw a New York City-based company recently helping people optimize the sale of their New York City apartments, using the right phrases. We’ve seen people do things in healthcare that I think are really interesting — it’s not a vertical I know deeply, but looking at the kind of notes that doctors take about a patient, and predicting the patient’s likelihood of having a major insurance incident over the next, you know, 12 to 15 months. Some really interesting things in actuarial science. Like, I think anytime people are producing text — which, by the way, in businesses, whatever your business is, text is actually the thing you produce the most of…

Sonal: Right. I believe that.

Kieran: …in any industry. People produce a lot of text, and it’s often meant to describe what they think is going to happen. And so, I mean, the field of opportunity is pretty big.

Sonal: The techniques you’re describing — is it the same underlying technique applied to all different domains, but do you have to also train each corpus on a different domain? Like, there’s a special inside language in each industry. Or are there also universals across all of them?

Kieran: That’s a really good question. You don’t know until you train, is the short answer to the question. So, we have a set of NLP libraries that look for common attributes of text, and we always start out any new vertical by turning them on the documents and seeing what happens. So, things like sentence length — almost always interesting. Things like the density of verbs and adjectives — almost always interesting. Document length — almost always interesting. But the specific phrases that matter, and what it means to write a job listing, is very different than what it means to predict whether a patient is going to become ill, right. 

And so the specifics matter. The goals matter. So, if it’s a document that’s intended for broad consumption, it really probably shouldn’t be longer than 600-700 words. If it’s a stock prospectus, where you’re giving a company some information about how their stocks are likely to perform, it’s going to be pages and pages. And so, you know, the specific benchmarks that you’re looking for often vary vertical by vertical, but the principles of the kinds of things you look for are pretty similar.
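A minimal version of those “almost always interesting” attributes — document length, sentence length, verb/adjective density — could be computed like this (the tiny word list is a toy stand-in for a real part-of-speech tagger, which the Python standard library doesn’t provide):

```python
import re

# Hypothetical lexicon standing in for a POS tagger's verb/adjective tags.
VERBS_AND_ADJECTIVES = {"build", "love", "fast", "tough", "ship", "scalable"}

def text_attributes(doc):
    """Generic attributes checked on any new vertical: document length,
    average sentence length, and verb/adjective density."""
    sentences = [s for s in re.split(r"[.!?]+", doc) if s.strip()]
    words = re.findall(r"[A-Za-z']+", doc.lower())
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_len": len(words) / len(sentences),
        "verb_adj_density": sum(w in VERBS_AND_ADJECTIVES for w in words) / len(words),
    }
```

Against such generic attributes, vertical-specific benchmarks (say, 600-700 words for broad-consumption documents) then become thresholds tuned per document type.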

Sonal: In the past, it seemed like only really big companies could do this, because they had, like, the type of computing hardware and processing power to pull this off. Like, what’s changed that a small startup could do this?

Kieran: AWS. AWS is what has changed things, right. I mean cloud compute at scale and, you know, Google Cloud and Azure. There’s a lot of competitors now, but AWS did this for startups, I think. And I say that, not because I worked at Amazon before, but it actually is. Like, for our team to set up the server infrastructure that we need is [critical]. You know, so I think that that’s a thing. And just the fact that there’s so much text data encoded on the internet. Google has democratized a lot of access to data. And so, that has helped, too.

Sonal: That’s great.

Kieran: Yeah.

Michael: Did you guys, I have to ask, did you kind of put any Kickstarter projects up there yourselves, just to give it a whirl?

Kieran: No. We were asked this a lot during our fundraising. We did look at pitch decks, by the way. One of the things…

Sonal: Oh, I want to hear about that, by the way.

Kieran: I will come back to your question. One of the things that’s been fascinating about having the beta out there in the world is the ways people are using it. So, of course, they’re using it for job listings, but people are using it for everything. Like, just a couple days ago, I had a materials science professor write to me saying, “I put all my course syllabi through.” I was like, “Really? Like, how did that work for you? I can’t imagine that that was a good result.” And he’s like, “Oh, I threw out all of the job parts. I just looked at gender bias, because that was a component that I needed for what I was doing.”

Sonal: Wow.

Michael: So, describe, when you say put it through — like, what happens? I understand, like — in my head, I have this idea that I’m typing along and, you know, suggestions come flying at me, but…

Kieran: That’s exactly what happens. So, there’s a website, and you paste or type in your content, and as you’re typing it’s getting annotated and marked up for you with patterns, suggestions, things you might want to change, scores.

Michael: And you can, in the case of the syllabi, right, you can dial it up or down depending on what you want the outcome to be. So, in his case, “Look, I’m sort of tracking for gender bias or…”

Kieran: He was looking for a specific aspect of what we provide. And, of course, the product isn’t tuned for what he wants, but he still found that aspect to be applicable to what he was doing. We’re seeing people put marketing content through, pitch deck content through. So, to your question, about did we initiate any Kickstarter campaigns? We didn’t because we weren’t making…

Michael: But you guys would be genius at it.

Kieran: We might be, yes. We’ve given a lot of advice to people on Kickstarter projects since then. But we didn’t, because we were making an enterprise product, right, and if we had followed through on a Kickstarter product and then it got funded, then we’d have to build it.

Michael: Right.

Kieran: But we helped friends, for sure.

Sonal: That’s great. So, what did you find out about the pitch decks actually? I’m totally intrigued by that, obviously, given who listens to our podcast.

Kieran: I mean, pitch decks are not always highly text oriented, right. So, great pitch decks aren’t made by text attributes alone, but there are certainly things, like the length of your deck, that matter. Slide titles end up mattering quite a bit, because people are looking to see a certain style of content.

Sonal: And less space. And we’ve all seen the kind of meeting where someone gets hung up on one word in a headline.

Kieran: Yeah. It can.

Sonal: It always happens too.

Kieran: It can. We didn’t go deep on pitch decks, but we looked at as many as we could find as we were building our own pitch deck in our last round of funding, and found some patterns in the set.

Michael: In the synergy line of questioning, were there words or phrases you should never include in your pitch deck?

Kieran: You know, I don’t know.

Michael: Okay.

Kieran: I don’t know.

Sonal: I guess there might not even actually be — yeah. I wonder if there’s — there’s never, I guess, a fixed set of rules.

Kieran: I bet there are. We didn’t identify them.

Michael: Right. Synergy is probably one.

Kieran: Yeah.

Language and gender bias

Sonal: Actually, let’s talk a little bit more about — and maybe we should wrap up on this note — let’s talk a little bit more about some of your findings around gender differences.

Kieran: Sure.

Sonal: So, you said the materials science professor tested his own syllabus — which again, I’m not sure that made sense, like you said, because there wasn’t a reference corpus to, I guess…

Kieran: There wasn’t, but when you have, you know, tens of thousands of phrases that are lighting up, and he’s writing for a science STEM student population, odds are good that there’s going to be some lexical overlap.

Sonal: Oh, that’s great. Right.

Kieran: So, you know, he found some things there.

Sonal: So, describe some of your findings around job descriptions, because — given what your product focuses on right now in terms of gender differences — and how people — what things you picked up on that?

Kieran: Yeah. So, prior to us doing this, there was some really strong qualitative research, right. The National Coalition of Women in Technology, the Clayman Institute here at Stanford — they’ve done some really interesting qualitative work, but the number of phrases that they identified was on the order of a couple hundred. Avoid “rockstar.” Avoid “ninja.” You know, we want to hire more women in technology.

Sonal: Guru.

Kieran: The interesting thing for us — first of all, we’ve talked to a lot of industries outside of tech. And so, while in technology we want to hire more women, when I talked to people who are hiring ICU nurses, or elementary school teachers, bias goes the other way. And so, it’s very important to us that we don’t judge — we just forecast and let you make the right choices for your business.

Sonal: Right. Whatever you’re optimizing for, given wherever there is a difference or imbalance.

Kieran: Right. Right. So, I will say, we have validated much of the qualitative research, which is good — there’s, you know, some alignment on those points. We have found cases where things are — it’s pretty subtle, right. So, the difference between “fast-paced environment” and “rapidly moving environment” — it’s almost head-scratchingly tiny, but statistically, one of them…

[End of Transcript]
