
“Using Statistics for Security: Threat Detection at Netflix”: Nicole Grinstead with Netflix (Video + Transcript)

October 16, 2018

At the Girl Geek X “Elevate 2018” conference, Nicole Grinstead (Senior Security Software Engineer, Netflix) discusses Netflix’s anomaly detection project, Trainman, and how it enables Netflix to find and act on high-risk corporate user behavior. Threat detection has become increasingly valuable in today’s complicated corporate security landscape.



Speakers:
Nicole Grinstead / Senior Security Software Engineer / Netflix
Angie Chang / CEO & Founder / Girl Geek X

Transcript

Angie Chang: Alright, we are live. Welcome. This is our 11:00 AM session. We have Nicole Grinstead, a senior security software engineer at Netflix. A few things she’s focused on are corporate identity and access, applying user behavior analytics to threat detection, and user-focused security. She’ll talk to us today about Netflix’s anomaly detection project, Trainman, and how it enables Netflix to find and act on high-risk corporate user behavior, as threat detection is becoming increasingly valuable in today’s complicated corporate security landscape. I’ll hand it off to you.

Nicole Grinstead: Great. Thank you. Thanks everyone for joining me virtually today, I’m really excited to be here. A huge thanks to the Girl Geek Elevate conference organizers for asking me to speak, and a huge thanks to the sponsors as well. Without further ado, I’m Nicole Grinstead and I work at Netflix as a senior security software engineer on our cloud security team, specifically on information security. Today I’ll be telling you a little bit about what we’re doing for advanced threat detection. Specifically, how we’re using statistical modeling and machine learning to detect malicious behavior.

Nicole Grinstead: Really quick, just to define user behavior analytics for everyone: it’s the industry-wide term for what we’re doing here. It’s looking at what users normally do on a day-to-day basis, and then finding deviations from that normal behavior. When we see deviations, the user might just be doing something a little out of the ordinary, but it could also be an indication that an account has been compromised, and that’s something that we as the security team want to look at.

Nicole Grinstead: For example, think about what a software engineer does on a day-to-day basis: you might look at your source code repository, some dashboards for your logs, some deployment tools. Then let’s say all of a sudden one day you access an application that holds your company’s very sensitive financial data. That’s pretty weird, and that’s something that we as a security team might want to take a look at. Maybe you were just curious, but it could also mean that someone has gained access to your credentials and is using them maliciously.

Nicole Grinstead: To give you another quick example, let’s say you’re an HR or PR employee and you spend most of your day working in documents. Say we have a baseline of the number of documents you normally read or modify, and that’s 20 on a normal day. If all of a sudden that shoots up and we see you downloading or touching a thousand documents, that looks pretty weird, and it could look like data exfiltration. Again, that’s something that we might want to take a look at.
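
To make that concrete, here is a minimal sketch of that kind of baseline check. It is not Netflix’s actual implementation, just a simple z-score test over a user’s recent daily document counts:

```python
import statistics

def is_count_anomalous(history, today, min_std=1.0, threshold=3.0):
    """Flag today's document count if it sits far above the user's baseline.

    history: daily document counts for this user (e.g., the past 30 days)
    today: today's count
    """
    mean = statistics.mean(history)
    # Floor the standard deviation so a very flat history doesn't make
    # tiny fluctuations look anomalous.
    std = max(statistics.stdev(history), min_std)
    return (today - mean) / std > threshold

# A user who normally touches ~20 documents a day suddenly touches 1,000.
baseline = [18, 22, 20, 19, 25, 21, 17, 23, 20, 22]
print(is_count_anomalous(baseline, 1000))  # True
print(is_count_anomalous(baseline, 24))    # False
```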

Nicole Grinstead: To take a quick step back: why do we think this is worth that kind of big investment? As I mentioned at the beginning, we’re using machine learning and statistical modeling, and that takes quite a bit of effort on our end. To give you some perspective, a 2017 study by IBM Security estimated that the average data breach costs around $3.6 million, at about $141 per lost or stolen record, and breaches involving sensitive data cost considerably more.

Nicole Grinstead: These are top-of-mind things: data breaches have been in the news recently, and they’re very costly. A breach can cost a company a lot of brand reputation along with other very severe monetary consequences. One way that data breaches occur is phishing, and it’s really common. It’s estimated that, on average, about one in 130 emails sent is a malicious phishing attempt. That’s not to say that one out of every 130 emails that makes it all the way to your inbox is a phishing email, since some of these get pre-filtered out.

Nicole Grinstead: They’re super prevalent and very commonly used by organized hacker groups. About 70% of organized groups use phishing emails as one of their modes of attack, because they’re very effective and successful. If you think back to some high-profile data breaches that occurred recently, the 2016 DNC breach before the election was partly caused by a successful phishing attempt.

Nicole Grinstead: Also the 2015 Anthem data breach: again, a successful phishing attempt. That’s not to say there aren’t ways to mitigate phishing attacks, or that phishing is the only way accounts and credentials can be compromised, but it is one really prevalent issue and attack vector. That gives you a sense of the kind of threat we’re facing; now let me get into what we’re doing about it.

Nicole Grinstead: Basically, this is the fun part of the talk, I think. I’m going to explain at a high level what we’re doing at Netflix to detect that malicious behavior. The data is all there in our raw logs. We have SSO data showing which users are logging into which applications and where they’re logging in from. We also have application-specific logs of what users are doing within sensitive applications, and Google Drive data, for example: what types of actions people are taking, how many documents they’re accessing, that kind of thing. We have all of that raw data, and that’s really where we find where the deviations occur.

Nicole Grinstead: The first thing we do is clean that data up a bit. As you can imagine, one raw line in your logs might not tell the full story. We enhance that data; for instance, if a user has come through VPN or something like that, we resolve the originating IP address. That’s really the first step: we enrich our data and make sure we have everything that tells the full story about what action the user has taken.
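
Here is a minimal sketch of that enrichment step. The `LogEvent` shape and the `VPN_SESSIONS` lookup are hypothetical; a real pipeline would build the lookup from VPN session logs keyed by time:

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class LogEvent:
    user: str
    app: str
    source_ip: str                        # IP as seen by the application
    originating_ip: Optional[str] = None  # filled in by enrichment

# Hypothetical lookup built from VPN session logs: VPN-assigned internal
# address -> the client's real originating address for the event's time.
VPN_SESSIONS = {"10.8.0.42": "203.0.113.7"}

def enrich(event: LogEvent) -> LogEvent:
    """Attach the originating IP if the event came in through the VPN."""
    real_ip = VPN_SESSIONS.get(event.source_ip, event.source_ip)
    return replace(event, originating_ip=real_ip)

event = LogEvent(user="alice", app="payroll", source_ip="10.8.0.42")
print(enrich(event).originating_ip)  # 203.0.113.7
```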

Nicole Grinstead: Then we start to take those actions and model what the user’s normal behavior looks like. To give you an example of a few of the things we think are interesting: if you think about what a user typically does, they’ll come in and access the same types of applications. So one thing we detect on is what types of applications a user normally uses versus what they’re doing right now, and whether that’s weird.
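
A simple way to capture that signal, sketched below under the assumption that all we care about is how rare an application is in the user’s own access history:

```python
from collections import Counter

def app_rarity(app_history, current_app):
    """Score how unusual it is for this user to touch current_app, based
    on how often they've used it historically. 1.0 means never seen."""
    counts = Counter(app_history)
    return 1.0 - counts[current_app] / len(app_history)

history = ["git", "dashboards", "git", "deploy", "git", "dashboards"]
print(app_rarity(history, "git"))         # 0.5: routine access
print(app_rarity(history, "financials"))  # 1.0: never accessed before
```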

Nicole Grinstead: Another aspect: a user probably logs in from the same device on the same browser. The user agent is a really common field in a log that tells us what kind of machine someone is coming in from, and that usually doesn’t change. Sometimes people get new machines, sometimes they upgrade their browsers, so we have some logic to dampen those upgrades. But if all of a sudden that changes, it might be a signal or an interesting thing to look at.
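
One plausible way to dampen version upgrades (an illustration, not Netflix’s actual logic) is to strip version numbers out of the user-agent string before comparing:

```python
import re

def ua_fingerprint(user_agent: str) -> str:
    """Reduce a user-agent string to the parts that rarely change:
    strip digits and dots so routine version bumps compare equal."""
    return re.sub(r"[\d.]+", "", user_agent)

def ua_changed(previous: str, current: str) -> bool:
    """True only if the change is more than a version bump."""
    return ua_fingerprint(previous) != ua_fingerprint(current)

old = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) Chrome/69.0.3497.100"
upgraded = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) Chrome/70.0.3538.77"
windows = "Mozilla/5.0 (Windows NT 10.0; Win64) Chrome/70.0.3538.77"

print(ua_changed(old, upgraded))  # False: just a browser upgrade
print(ua_changed(old, windows))   # True: different platform entirely
```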

Nicole Grinstead: Additionally, location. People do go on vacation, but normally, if you think about a user’s behavior, they’re probably logging in either from home or from their desk at work. These are all signals that we can use to model a user’s normal behavior, and when there are deviations, that might be something interesting to us.
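
As a sketch, assuming logins have already been geolocated to (country, city) pairs from the originating IP, the check can be as simple as comparing against the locations seen in the user’s history:

```python
def location_anomaly(known_locations, current):
    """Flag a login location the user has never logged in from before.

    known_locations: set of (country, city) pairs from the user's past
    logins, e.g., derived by geolocating the originating IP.
    """
    return current not in known_locations

usual = {("US", "Los Gatos"), ("US", "San Jose")}
print(location_anomaly(usual, ("US", "Los Gatos")))  # False: home or work
print(location_anomaly(usual, ("RO", "Bucharest")))  # True: worth a look
```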

Nicole Grinstead: As you can imagine, just generating anomalies and figuring out where things are different doesn’t necessarily give us a full picture of when something is malicious or might be going wrong. That’s where the next step comes in: on top of the raw anomalies we generate, we apply some business logic to be a little smarter about what we think is important to investigate. Raw anomalies can be interesting, but they can also be a little noisy; people do deviate from their normal behavior sometimes.

Nicole Grinstead: This is the step where we try to figure out whether the action is actually risky to our business. As I mentioned in one of my first slides, accessing really sensitive financial data is higher risk than, say, accessing our lunch menus. If I never accessed the Netflix lunch menus and then all of a sudden I do, yes, that was anomalous, but does the security team care if somebody is looking at lunch menus? No, we don’t. There’s no sensitive data to be gleaned there, and it’s not something we want to spend our resources investigating. That’s one aspect.

Nicole Grinstead: Also, in all of our organizations, some users have access to more sensitive data than others. Think about executives: not only do they probably have access to more sensitive data than others in the organization, but they’re also a larger target because they’re high profile and externally visible. So we also look at what type of user it is, since certain types of users might be a little more or less risky. These are the kinds of factors we apply after the fact to weed out the noise a bit and surface the really high-risk things we should be focusing on.
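
A toy version of that business logic might weight a raw anomaly score by system sensitivity and user type. The tiers below are hypothetical; in practice they would come from an inventory of which systems hold sensitive data:

```python
# Hypothetical sensitivity tiers; a real deployment would load these from
# an inventory of systems and from HR data about user roles.
APP_RISK = {"financials": 1.0, "source-code": 0.8, "lunch-menu": 0.0}
USER_RISK = {"executive": 1.5, "engineer": 1.0, "contractor": 1.2}

def risk_score(anomaly_score: float, app: str, user_type: str) -> float:
    """Weight a raw anomaly score by how sensitive the target system is
    and how exposed the user is; anomalies against non-sensitive systems
    (like the lunch menu) score zero no matter how anomalous they are."""
    return anomaly_score * APP_RISK.get(app, 0.5) * USER_RISK.get(user_type, 1.0)

print(risk_score(0.9, "lunch-menu", "engineer"))   # 0.0 -> ignore
print(risk_score(0.9, "financials", "executive"))  # 1.35 -> investigate
```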

Nicole Grinstead: The final step is when we actually display this to our team of security analysts. We are using Facebook’s open source technology, GraphQL, to enhance that anomaly. [Audio drops from 00:12:05 to 00:12:44] Hey, hopefully everyone can hear me again; I’m not sure exactly what happened, the audio dropped briefly. Okay, great. That final step is where we pull in information from outside our anomaly generation and tie it up with other interesting data sources.

Nicole Grinstead: We look at not just that interesting event, but the events around it: what the user typically does, what applications they logged into right before, what applications they logged into right after, that type of thing. Also what organization they’re in and what type of job they do; any extra data we can use to enhance the anomaly and tell the whole picture of who this user is, what they typically do, why this behavior was weird, and whether it’s risky.
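
As an illustration of the idea, here is how a client might ask a GraphQL endpoint for that surrounding context in one round trip. The schema (field names like `eventsBefore` and `employee`) and the endpoint are entirely hypothetical, not Netflix’s actual API:

```python
import json
import urllib.request

# Hypothetical GraphQL query: fetch the anomaly, the events around it,
# and who the user is, all in a single request.
QUERY = """
query EnrichAnomaly($user: String!, $eventId: ID!) {
  anomaly(id: $eventId) {
    timestamp
    application
    eventsBefore(minutes: 30) { application action }
    eventsAfter(minutes: 30) { application action }
  }
  employee(login: $user) {
    organization
    jobTitle
  }
}
"""

def enrich_anomaly(endpoint: str, user: str, event_id: str) -> dict:
    """POST the query and variables to a GraphQL endpoint; return the JSON."""
    payload = json.dumps({"query": QUERY,
                          "variables": {"user": user, "eventId": event_id}})
    request = urllib.request.Request(endpoint, data=payload.encode("utf-8"),
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Example call (requires a live endpoint):
# context = enrich_anomaly("https://graphql.example.com", "alice", "evt-123")
```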

Nicole Grinstead: That’s kind of at a high level what we’re doing. I really appreciate everyone joining today again. I think we have some time for questions.

Angie Chang: Thank you, that was excellent. Thank you for hanging on while we had minor technical difficulties. We do have some questions. The first question, from Carla, is: how do you keep customer data protected when it may be used internally to diagnose a problem?

Nicole Grinstead: Thanks a lot for the question, that’s a great one. We on the information security team are more focused on our corporate employee accounts. On the consumer-facing side, if a consumer’s account is compromised, the attacker won’t have access to intellectual property or financial data, stuff like that. The corporate side is explicitly what my team is focusing on with this particular project. That’s not to say the consumer side isn’t also a problem we face or work on, but it’s not my area of expertise, I’ll say.

Angie Chang: Thank you. All right. Another question we have here is from Sukrutha, which is, how has your knowledge of security breaches and anomalies impacted your relationship with tech?

Nicole Grinstead: Yes, great question. It definitely makes you think twice when you get a random email from someone you’re not expecting. I guess I have a lot less base-level trust in technology in general now. I always have that hat on: someone could be doing something malicious here, and there are a lot of malicious actors out there. It’s just something to be aware of.

Angie Chang: Okay. Thank you. Another question we have is how did you get into security?

Nicole Grinstead: Yes, that’s a great question. I just kind of fell into it. Before Netflix, when I was at Yahoo, I started working on an identity and access project, and you end up being a gatekeeper for sensitive information, so you have to be very security-aware. I found it super interesting being on the defending side, trying to keep things safe, so I just delved in more from there.

Angie Chang: Cool. Let’s see. A question we had from Andreas is, how do you determine what a normal behavior is?

Nicole Grinstead: That’s a great question. This is where we use statistical modeling to build a baseline of what a user normally does. We look at our logs to see what the normal behaviors are over time, and then check whether the current action, the current log line we’re looking at, deviates significantly from what the user does on a day-to-day basis. We’re using that log history over time to figure out what a user normally does.
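
A minimal sketch of that idea, assuming a numeric feature such as daily document counts: keep a sliding window of each user’s recent values and score new values against the range that window establishes. (A real system would also avoid letting flagged anomalies pollute the baseline.)

```python
from collections import deque

class RollingBaseline:
    """Sliding window of recent values per user; scores how far a new
    value sits above the window's observed range."""

    def __init__(self, window=30, min_history=5):
        self.window = window
        self.min_history = min_history
        self.history = {}  # user -> deque of recent values

    def score(self, user, value):
        hist = self.history.setdefault(user, deque(maxlen=self.window))
        deviation = 0.0
        if len(hist) >= self.min_history and value > max(hist):
            spread = max(max(hist) - min(hist), 1)
            deviation = (value - max(hist)) / spread
        hist.append(value)  # today's value feeds tomorrow's baseline
        return deviation

b = RollingBaseline()
for day_count in [18, 22, 20, 19, 25, 21]:
    b.score("alice", day_count)
print(b.score("alice", 1000) > 1)  # True: far outside the normal range
print(b.score("alice", 23))        # 0.0: within the normal range
```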

Angie Chang: We have a question here about whether the assignment of risk level happens manually or is automated by the machine learning system.

Nicole Grinstead: That’s automated, though I wouldn’t say it’s necessarily machine learning at that point; we’re using more business logic to assign the risk level. We know where our sensitive data is, and we know which systems and applications hold that data. For instance, at one level we say: if the thing that was anomalous involves a risky system, the overall risk level is a little bit higher.

Angie Chang: Does the system alert you when outlier behaviors happen?

Nicole Grinstead: It does.

Angie Chang: Okay. One last quick question: what is working as a security engineer at Netflix like?

Nicole Grinstead: Sorry, could you repeat that? It cut out a little bit for me.

Angie Chang: What is working as a security engineer at Netflix like?

Nicole Grinstead: It’s great, it’s really rewarding. There are just tons of interesting problems to solve in the security space in general. More specifically at Netflix, one of the great things about the culture here is that there’s a lot of freedom: where we see opportunity, anyone at any level is able to call that out and drive it forward. That’s a little different from other organizations I’ve worked in, where things might be more resource-constrained and you work a bit more within a specific role. I’ve had the ability here to do a lot of different things that I’ve found interesting. It’s a really exciting, fast-paced, fun place to work.

Angie Chang: Thank you, that’s awesome. Thank you, Nicole, for joining us and pulling through. We have run out of time, but thank you so much for joining us from Netflix today. People are tweeting, so feel free to answer the tweets, and we’ll see everyone next week. Thank you.

Nicole Grinstead: Great. Thanks so much everyone.

Angie Chang: Bye.
