Performance Through Representation in Data: Adding the Human Element From Hiring to Modeling

Ally Tubis

Head of Engagement & Retention Analytics, Disney Streaming (Disney+, Hulu, ESPN+, Star+)

Learning Objectives

In a data role, one of our main objectives is to build efficient models and ensure that the recommendations from modeling efforts are as accurate and interpretable as possible. In this talk, I’ll introduce a framework that every leader and data contributor can follow, and discuss a modeling approach and the critical junctions in the modeling process where we can ensure models are less biased and more performant. I’ll talk about what has made me and my data teams most effective, based on all of my experience as an analyst, decision scientist, and data leader. It all comes down to adding the human element.


"I invite you today to go on a journey with me to prove that having a representative data team, a diverse data team is likely to have a positive impact on modeling results. "

Ally Tubis

Head of Engagement & Retention Analytics, Disney Streaming (Disney+, Hulu, ESPN+, Star+)

Transcript

Hi, everyone. Today, we’re going to talk about how representation in data can have a positive impact on modeling performance, and how adding the human element from hiring to modeling can have really great effects. My name is Ally Tubis. I’ve led small and large data teams across media and tech companies. Currently, I lead the engagement and retention analytics team and analytics vertical, focused on understanding who our subscribers are across Disney Plus, Hulu, ESPN Plus, and the soon-to-launch Star Plus: what engages them, and what we can do to give them the best experience through analytics, insights, and advanced modeling.


Today, I want to talk about building a performant data science model, and how we can get there. We, as humans, design a model. We build it, we test it to make sure it works right, and we feed data to the algorithms to learn from. In essence, algorithms are tools that take the data that people give them, identify the patterns that people show them, and echo those patterns. This is why we have this very wise saying here today: “Your data set must be very large, diverse, and indistinguishable from the reality you seek to model.”


We all know, as data professionals, that having representative data, a data set that reflects how the overall population will behave, is critical to getting accurate modeling results. So I ask the question today: should the data analyst or the data scientist also be representative of the population? And does that matter? I’ve searched long and hard to find examples where having a representative data team had a positive impact on modeling results. I came up short in finding those examples, but I did find a bunch of examples of how having a non-representative team has had dire impacts on society. You want some great examples? Cathy O’Neil’s Weapons of Math Destruction is an incredible book full of them.


I’ve also found studies showing the positive impact of diversity, within an overall organization or in leadership, on business performance and growth. But again, no studies that specifically looked at data team members. So I invite you today to go on a journey with me to prove that having a representative, diverse data team is likely to have a positive impact on modeling results.


First, let’s talk about my run-in with an algorithm: welcome to a dating app. At one point in my life, I realized that I was ready to find my perfect match and start my life with them. So I went on a dating app. This dating app asked me a whole slew of questions about who I am, how I live, whether I brush my teeth every day, and whether that’s important for me to have in a partner. It fed my data through an algorithm, and the output that I received was a list of perfect matches. But what I realized after having met some of these people is that, on paper, yes, they were a perfect match, but that was only based on the questions that I was asked. It wasn’t based on the things that were most important to me; those questions were not asked. If I had left my future partner in the hands of this algorithm, then I would never have found my perfect match. I realized that this algorithm matched my perfect partner based on questions that were not really important to me. There were other critical questions that [inaudible]. This taught me that asking the right questions and feeding algorithms the right data is really important in your modeling process.


Who typically comes up with these questions? The data scientists. Who typically feeds the data to the algorithm? The data scientists. Now, imagine this: you have two data science teams. The data science team on the left-hand side is representative of the people using the dating app, or of your product, and the data scientists on your right are all white men. They are not representative; they are a homogenous group. Which do you think will provide a more representative set of questions and data to an algorithm?


I’d like to share with you a story of when I was hiring a data science team at a global media company. This company had a very diverse audience. As I was going through the resumes I was getting from the recruiter, 90% of the applicants were white men from the US. One thing that I had asked the recruiters was to give me a representative pool. It was really critical for me that at least the pool of candidates looked like our readers. From all the people who I interviewed, I gave the candidates who met a minimum set of criteria a blind data challenge.


It turns out that a very non-traditional candidate ended up doing the best on this data challenge. She was a former Vice President of Business Development and had many years of experience outside of data, but had gone through a pretty intensive data boot camp. In her past life, she had done a pretty serious math program, so technically she was really strong. Because of her business development experience, mixed with her technical experience, mixed with the experience she had in her boot camp, she was able to ask a really important question about the data and choose the right data to build a mock model off of. So we ended up hiring this candidate.


08:12

It turned out, in the end, that she was able to contribute a whole other perspective that we did not have on the team. She was able to ask the right questions of the data, optimize for the right objective function, and think so critically about the problems that we aimed to solve. Ultimately, our results were stronger: they provided better insights, and the models were more performant. So what did I learn? From my own experience, that getting the right representation is critical to getting stronger model performance.


08:53

Now, let’s think about the modeling steps. Typically, when you’re building a model, you want to define the objective function: what are you optimizing for? You want to gather all the data, and hopefully understand it really well. Then you feed it through a model, get some results, measure the performance, test it, and ultimately get to an interpretation and action.
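The steps above can be sketched end to end. This is a deliberately minimal, hypothetical illustration: the objective name, the trivial mean-predictor "model," and the 80/20 split are all assumptions for the sake of showing the shape of the process, not a real production pipeline.

```python
# Hypothetical sketch of the modeling steps: define an objective, gather and
# split the data, fit a model, evaluate on held-out data, interpret.
import random

def run_pipeline(rows, objective="time_spent"):
    """Walk the modeling steps over a list of (features, target) rows."""
    # Step 1: define the objective function (here, just a label for what
    # we chose to optimize, after talking to the business team).
    # Step 2: gather the data and hold some out for testing.
    random.seed(0)
    random.shuffle(rows)
    cut = int(len(rows) * 0.8)
    train, test = rows[:cut], rows[cut:]
    # Step 3: "fit" a trivially simple model: predict the training mean.
    mean_pred = sum(target for _, target in train) / len(train)
    # Step 4: test: mean absolute error on the held-out 20%.
    mae = sum(abs(target - mean_pred) for _, target in test) / len(test)
    # Step 5: interpret: return the model and its error for the business.
    return {"objective": objective, "prediction": mean_pred, "test_mae": mae}

# Synthetic rows: a feature dict and a numeric target per subscriber.
rows = [({"clicks": i}, float(i % 5)) for i in range(100)]
result = run_pipeline(rows)
```

The point of the sketch is that the objective is an explicit input, chosen by people, before any data touches the model.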


09:25

But before kicking off this modeling process, there is a very critical step where I see many companies falling short: it’s so important to identify the goal of the problem, and to speak with the business leaders to understand how the results are going to be used. What should we really be optimizing for? That will help us define the right objective function, gather the right data, and test the modeling results in a way where we know the end use. In every one of these steps, the human element is critical, and identifying that goal with the business team before kicking off is critical.


10:19

Let’s look at how that works. Take this example: we have a global media company, a subscription business that is reliant on advertising revenue. Take two data teams that were given a goal by the business team: “Hey, we need to grow our advertising revenue, but we don’t want to show more ads to our subscribers; it’s a bad user experience. So how can we optimize it differently through data science models?” On the left-hand side, you see one data science team; they’re not representative of the subscriber base. On the right-hand side, you see another data science team that is representative. They both ask the business team, “Do you know if certain metrics are related to advertising revenue, outside of showing more ads?” The business team says, “Yes, click-through rate.” The more pieces of content people click into, the more they watch and read, the more ads pop up, and the more revenue we make. As well as time spent: as someone reads content or watches a video, and as they keep scrolling, they will see more and more ads.


12:05

What the non-representative data team might say is, “Well, let’s look at the relationship between click-through rate and revenue, and between time spent and revenue.” They might find that there’s a much stronger relationship between click-through rate and revenue, and pragmatically say, “Let’s optimize for click-through rate.” That’s what they base their objective function on.
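That comparison can be made concrete with a quick correlation check. The numbers below are entirely synthetic and constructed so that click-through rate happens to correlate more strongly with revenue, mirroring what the first team finds; the `pearson` helper is a hand-rolled stand-in for a stats library call.

```python
# Compare which candidate metric correlates more strongly with revenue.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic weekly metrics: revenue tracks CTR almost linearly here.
ctr        = [0.10, 0.20, 0.30, 0.40, 0.50]
time_spent = [5.0, 9.0, 6.0, 12.0, 8.0]   # hours
revenue    = [1.0, 2.1, 2.9, 4.2, 5.0]    # $M

r_ctr = pearson(ctr, revenue)
r_time = pearson(time_spent, revenue)
# The pragmatic objective choice: whichever metric correlates harder.
best = "ctr" if abs(r_ctr) > abs(r_time) else "time_spent"
```

The catch, as the talk goes on to show, is that the strongest correlation is not automatically the right objective.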


12:32

The representative team may actually be fans of this product, use the product deeply, and understand the subscribers better. They may say, “Hey, when we look at the top content by click-through rate, it’s short, it’s fluffy, and it doesn’t really give our representative group of readers anything that has substance.” But they also find that 20% of the subscribers consume 80% of the content, which is what you see in many media companies: the 80:20 rule. These 20% of subscribers are super users. They don’t like to read content that has high click-through rates; they like to read more meaningful, deeper content, and as they scroll and consume more, they see more ads. So the second data science team may say, “Let’s optimize for time spent. We want to give our subscribers the best experience, not drive them to more and more content that’s not going to give them what they’re looking for.”
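The 80:20 check the second team runs is simple to express in code. This is a hedged sketch on made-up consumption numbers, chosen so the top 20% of subscribers account for exactly 80% of consumption.

```python
# What share of total consumption comes from the top 20% of subscribers?
def superuser_share(consumption, top_frac=0.2):
    """Fraction of total consumption from the top `top_frac` of subscribers."""
    ranked = sorted(consumption, reverse=True)
    k = max(1, int(len(ranked) * top_frac))
    return sum(ranked[:k]) / sum(ranked)

# Synthetic audience: 20 heavy readers at 40 units each (800 total),
# 80 light readers at 2.5 units each (200 total).
consumption = [40.0] * 20 + [2.5] * 80
share = superuser_share(consumption)  # 800 / 1000 = 0.8
```

If a metric like click-through rate is dominated by content the super users avoid, optimizing for it risks degrading the experience of the subscribers who drive most of the consumption.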


13:53

The next step is gathering and understanding the data. The non-representative team may say, “We have all this consumption behavior and these patterns; let’s throw that into the model. That’s all really important.” The representative data team may say, “Let’s understand the data really deeply. Let’s think about the subscribers from many perspectives. Let’s engineer the features that are used in the modeling efforts in such a way that our data becomes more meaningful.”


14:32

So let’s take an example. We might find a data scientist who lived in a very low-income neighborhood. They may say, “I really love our product, but because of bandwidth issues, I actually can’t see a bunch of content.” So it’s really important to take into account zip codes, locations, and quality of service in the recommendations that we want to automate. They might bring in other data that the first, non-representative data science team might not have even thought of, because they didn’t have that perspective. Being able to build out features that help us better represent the subscribers will ultimately get us to better modeling performance.
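That feature-engineering idea might look like the sketch below: joining a quality-of-service signal by zip code so the model can distinguish "chose not to watch" from "couldn't stream it." The zip codes, bandwidth figures, threshold, and default value are all illustrative assumptions.

```python
# Augment raw subscriber behavior with a quality-of-service feature.
def engineer_features(subscriber, bandwidth_by_zip):
    """Return the subscriber record plus bandwidth-derived features."""
    # Assumed fallback when a zip code is missing from the lookup.
    mbps = bandwidth_by_zip.get(subscriber["zip"], 25.0)
    return {
        **subscriber,
        "avg_bandwidth_mbps": mbps,
        # Low bandwidth means low watch time may reflect access, not taste.
        "low_bandwidth": mbps < 10.0,
    }

# Hypothetical zip-code-level bandwidth data joined onto behavior data.
bandwidth_by_zip = {"10001": 120.0, "79936": 6.0}
sub = {"id": 1, "zip": "79936", "hours_watched": 1.5}
features = engineer_features(sub, bandwidth_by_zip)
```

With a feature like `low_bandwidth` available, a recommendation model can stop treating constrained viewers as uninterested viewers.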


15:31

So we feed the data through the model, and we want to know how it worked. It’s really critical to hold out some of your data, run it through the model, and test it to see how the model is performing, and to ask the right questions about the modeling results from different perspectives. Are the results that we’re getting really the right results? Think really critically about that before putting those models into production. Then, when we put those models into production, we want to make sure that we interpret the results and provide actionable insights to the business based on all the great modeling work, in a way that’s interpretable and, again, takes the subscribers into account. And again, the representative data science team will do that very well.
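The hold-out step, including the "different perspectives" part, can be sketched as evaluating the held-out data per subscriber segment rather than only in aggregate. Everything here is synthetic: the threshold "model," the segment labels, and the data are stand-ins.

```python
# Hold out data, then evaluate overall AND per subscriber segment.
def holdout_split(rows, holdout_frac=0.2):
    """Split rows into (train, test), keeping the last fraction held out."""
    cut = int(len(rows) * (1 - holdout_frac))
    return rows[:cut], rows[cut:]

def accuracy(predict, rows):
    """Share of (feature, label, segment) rows the model gets right."""
    return sum(predict(x) == y for x, y, _ in rows) / len(rows)

# Synthetic rows: feature, true label, and a segment tag per subscriber.
rows = [(i, i >= 50, "heavy" if i % 2 else "light") for i in range(100)]
train, test = holdout_split(rows)

def model(x):
    return x >= 50  # trivial threshold "model" for illustration

overall = accuracy(model, test)
# The representative team's extra question: is it equally good per segment?
by_segment = {
    seg: accuracy(model, [r for r in test if r[2] == seg])
    for seg in {"heavy", "light"}
}
```

A model that looks fine overall can still fail badly for one segment, which is exactly the kind of result this per-group view surfaces before production.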


16:38

So let’s look at this again in one view. It’s really critical to set the goals from the beginning, define the objectives, take a bunch of inputs, the data, feed them through the model, test the results, and get the outputs. As you can see from every step of the example that we went through, the human element is critical, but often overlooked. As [indistinct] Jane, Chair of Women in Data, puts it, “You’ve got to have diverse teams to even spot bias in the first place.”


17:24

There are three things that are really important when we start thinking about AI and machine learning; it’s not so much about data, it’s about people. Firstly, we’ve got to be aware of our own biases. Secondly, we need diverse teams to work with the technology: with 78% of people working in AI being male, there are biases that they naturally will not spot. Finally, we’ve got to make sure we’re giving the machines unbiased data sets. Now, do we have representation in data today? Not according to a recent study by Genpact, run globally across many industries, including finance, insurance, and tech. They researched and spoke with 500 executives, and found that only 34% of all companies say that they have established protocols to manage AI bias. That means that 66% of these companies have not.


18:55

So how do we build representative data teams? It starts with hiring practices, like the ones I mentioned from when I built a data science team before. It’s critical to set the right hiring practices. You want to request from recruiters that there is representation in the candidate pools; that means a lot more work for the recruiting team, and probably for yourself. You want to make sure that you have blind screenings, so everyone has a fair chance. It’s also really important to ask strategic questions, through technical challenges, about how the data scientists think and what new perspectives they can bring. You can easily ask technical questions that anyone can answer quickly and know who can answer them and who can’t, and you’ll also find that lots of people learn on the job. But you will not find that all of the people who can answer those technical questions can bring in those diverse perspectives, which are golden, trust me.


20:21

As we’ve seen so far, you also want to make sure that your interview panel is diverse. You want to have a cross-functional team interviewing candidates, so that the feedback you get is also coming from the diverse people on your team, and so that the people being interviewed feel comfortable and supported, because they are also seeing diversity in your organization.


20:50

Training. Once your diverse team starts, you want to involve the entire team in the onboarding process, so that your new team members feel supported and like they’re part of the team from the very beginning, and so that your team members who have been around for a while feel the same way. What I like to do is throw new hires into new projects to learn by doing, because that’s really when you truly learn: on the job. You also want to create an inclusive and safe cultural environment. Create safe spaces so that your team can feel comfortable asking questions and voicing concerns; ultimately, that encourages diversity of thought and brings in stronger solutions, stronger team dynamics, and a stronger relationship with our diverse subscribers in the end.


21:55

So here we have it. At the top of the rainbow, we have the crucial steps in every modeling effort. We want to understand our goals really carefully. We want to define our objective functions, so that everything we’re doing will ultimately help us get to our overall goal. But really, this rainbow starts way earlier: in your hiring practices, in creating inclusive environments, and in giving your team the support they need to do the best work that they can do.


22:39

Remember, machines are only as smart and as biased as we make them. What’s key to performance improvements, in my perspective, is the people. I’ve seen this in practice, and you hear it from really strong data leaders who have made a big impact. Cathy O’Neil, who wrote Weapons of Math Destruction, as I mentioned, tells us, and I quote, “Big data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. We have to explicitly embed better values into our algorithms, creating big data models that follow our ethical lead.” The first time I heard about this idea of moral imagination was from Jacqueline Novogratz, who has so many incredible stories from her work in impact investing. She looks at moral imagination as the basis of an ethical framework for a world that recognizes our common humanity and insists on opportunity, choice, and dignity for us all. I firmly believe that the way forward in data science, and in all the work that we do, is through moral imagination. Thank you so much.

