Girl Geek X MosaicML Lightning Talks (Video + Transcript)

Like what you see here? Our mission-aligned Girl Geek X partners are hiring!


We enjoyed dinner and demos at the sold-out MosaicML Girl Geek Dinner in Palo Alto, California. 

Transcript of MosaicML Girl Geek Dinner – Lightning Talks:



MosaicML VP and Chief Growth Officer Julie Choi welcomes the audience. She emcees the evening at MosaicML Girl Geek Dinner. (Watch on YouTube)


MosaicML AI Researcher Laura Florescu talks about making ML training faster, algorithmically with Composer and Compute, MosaicML’s latest offerings for efficient ML. (Watch on YouTube)


Meta AI Research Scientist Amy Zhang speaks about her career journey in reinforcement learning, from academia to industry, at MosaicML Girl Geek Dinner. (Watch on YouTube)


Atomwise Staff Software Engineer Tiffany Williams discusses the drug discovery process with AtomNet at MosaicML Girl Geek Dinner. (Watch on YouTube)


Salesforce Research Senior Research Scientist Shelby Heinecke speaks about how to evaluate recommendation system robustness with RGRecSys at MosaicML Girl Geek Dinner. (Watch on YouTube)


OpenAI Product Manager Angela Jiang speaks about turning generative models from research into products at MosaicML Girl Geek Dinner. (Watch on YouTube)


AWS Product Manager Banu Nagasundaram speaks about seeking the bigger picture as a ML product leader at MosaicML Girl Geek Dinner. (Watch on YouTube)


Hala Systems Director of People Ops Lamya Alaoui talks about 10 lessons learned from building high performance diverse teams at MosaicML Girl Geek Dinner. (Watch on YouTube)


Thank you for joining us for our first IRL Girl Geek Dinner in over two years of the pandemic! The evening's talks on machine learning were delivered by women from MosaicML, Meta AI, Atomwise, Salesforce Research, OpenAI, AWS, and Hala Systems. The sold-out MosaicML Girl Geek Dinner on May 12, 2022 was hosted at Playground Global. (Watch on YouTube)


This site reliability engineer discusses Ukrainian borscht or machine learning, or both, at MosaicML Girl Geek Dinner.


MosaicML Girl Geek Dinner speakers after the event: Tiffany Williams, Banu Nagasundaram, Laura Florescu, Julie Choi, Lamya Alaoui, Shelby Heinecke, Angela Jiang, Angie Chang, and Amy Zhang.


“Prompt Design & Engineering for GPT-3”: Ashley Pilipiszyn with OpenAI (Video + Transcript)


Transcript

Sukrutha Bhadouria: We will move on to our next talk. Our next talk is going to be given by Ashley. Ashley leads OpenAI’s developer ecosystem and creative application strategy, where she helps accelerate developers and startups build new applications with positive impact. She has also helped lead the launches of OpenAI’s research and commercial products, including MuseNet, Jukebox, Rubik’s Cube, Multi-Agent, Image GPT, the GPT-3 API, CLIP and so, oh my goodness. That was a lot. Welcome, Ashley.

Ashley Pilipiszyn: Excellent. Thank you so much for having me. Let me go ahead and share my screen. All right. Excellent. Let me bump this over here.

Ashley Pilipiszyn: Okay, great. Well, thank you everybody for joining this session. I am very excited to walk you through prompt design and engineering with GPT-3. As mentioned, my name’s Ashley and I’m the technical director at OpenAI. So, just a quick introduction here. If you haven’t heard of OpenAI before. So, we are an AI research and deployment company with the mission to ensure that artificial general intelligence benefits all of humanity.

Ashley Pilipiszyn: And what’s unique about us is we’re actually made up of three distinct pillars: an engineering startup, a research lab, and a safety and policy group. And so, a little bit of background here in the lead up to GPT-3. So, nine months ago, we launched our very first commercial product, which was the OpenAI API.

Ashley Pilipiszyn: And this has really become our core platform for accessing our latest AI models. And unlike most AI systems that maybe you’ve interacted with before that are typically designed for one use case, our API actually provides a general purpose text in, text out interface, which I’ll walk you through in a live demo in just a bit.

Ashley Pilipiszyn: And so, this enables our users to try it on virtually any English language task. Since launching, we’ve already seen 200 production-ready applications built using the variety of capabilities that GPT-3 offers. And so, what we’ve seen is actually this incredibly new ecosystem of applications, spanning legal, HR, game development, customer support, productivity, science, and education, with both new companies and startups being built on the API as well as other companies integrating it. So, a little bit about GPT-3. So, this model doesn’t have a goal or objective other than predicting the next word.

Ashley Pilipiszyn: And so, the key thing to take away here, and this is going to be key as we begin to dive into this prompt design, is it is not programmed to do any specific task. So, this single API can perform as a chat bot. It can perform as a classifier. It could do summarization because at its root level, it’s able to understand what those tasks look like purely from a text perspective. So, really the best way to really… If there’s one thing to take away about GPT-3, it is really just trying to predict the very next word based on all of the previous text it’s seen beforehand.

Ashley Pilipiszyn: So, prompt design and engineering. What do you need to take away here? So, if you have ever played the game charades, this is actually a really great exercise for figuring out how to program with GPT-3. Because what essentially you’re trying to do, again, if it’s just trying to predict the kind of task that you’d like it to perform, you basically want to provide enough context, but not have to give all the information at once. And so, you want to be able to just provide some guidelines about what you’d like GPT-3 to do.

Ashley Pilipiszyn: So, for example, if you want to do classification, want to be able to provide some information about what you’d like done and then maybe a couple of examples. And then try to even provide some counterexamples as well. And so, I’ll show that in just a second. Before we dive in, I just want to highlight some of the settings that are going to come up. There are things called Temperature and Top P. These again, back to thinking about prediction. So, these are not necessarily creativity dials, but they’ll control randomness.

Ashley Pilipiszyn: Another thing we offer is “Best of.” And so, again, GPT-3 in the API is trying to think, “Okay, what is the best response here?” And so, what is the highest average value of the tokens being generated. Frequency, we also… Basically it’s saying, “Okay, we don’t want to repeat what’s already being generated.” And then the Presence setting is also trying to figure out, “Okay, do we want to change topics here and being able to move forward from that?”

Ashley Pilipiszyn: So, we can come back to that, but I’m going to go ahead and move over into… This is the OpenAI beta site. And so, let’s just move this down here. So, this is the Playground setting. So, here on the right hand side, you’re going to see all of these settings that I was just talking about. So, for example, you can determine what the response links will be and to generate with. As I mentioned, this is the Temperature setting. So, we have it currently set to 0.7. So, that’s a pretty standard setting. We also have the Frequency Penalty, the Presence Penalty, and Best of, which I had mentioned. We won’t dive into these just quite yet.
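
For readers who want to try these settings outside the Playground, here is a minimal sketch using the classic OpenAI Python Completions endpoint. The engine name, prompt, and parameter values are illustrative rather than the exact ones used in the demo.

```python
import openai  # classic OpenAI Python client; assumes OPENAI_API_KEY is set in the environment

# A minimal completion call mirroring the Playground settings described above.
# The engine name, prompt, and values are illustrative, not the demo's exact ones.
response = openai.Completion.create(
    engine="davinci",        # base GPT-3 model
    prompt="Once upon a time",
    max_tokens=64,           # "response length" in the Playground
    temperature=0.7,         # controls randomness; not a "creativity" dial
    top_p=1.0,               # nucleus sampling; usually tune this OR temperature, not both
    frequency_penalty=0.0,   # discourages repeating tokens already generated
    presence_penalty=0.0,    # encourages moving on to new topics
    best_of=1,               # generate N completions server-side and return the best-scoring one
)
print(response["choices"][0]["text"])
```

As Ashley notes, Temperature and Top P both control randomness; in practice it is common to adjust one and leave the other at its default.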

Ashley Pilipiszyn: So, what we have here is what’s known as a prompt library. And what we’ve done is, actually with our developer community, figured out what are some of the best prompts that people are able to get really good results on and what are those settings?

Ashley Pilipiszyn: So, for example, let’s say we want to summarize for a second grader. If you’ve ever received an NDA or any type of legal documents. Actually I, myself, am not a lawyer. And so, many times if I’m reading a legal document, I really don’t know what the essence of that document is really saying. So, actually this prompt, Summarize for a 2nd grader, is really helpful because essentially it is transforming more dense text and simplifying that into maybe how you would explain that to a second grader.

Ashley Pilipiszyn: So, the prompt here. This is actually talking about Jupiter. So, it’s saying that it’s the fifth planet from the sun, the Roman God it’s named after, et cetera. So, again, as I was talking about before, you’re providing the example, so you’re already telling GPT-3 here, “My second grader asked me what this passage means.” You’re already putting that context of putting it into something that a second grader understands, then you’re separating it here. And then you’re actually putting in the content that you would like summarized. And then you’re telling GPT-3, okay, you’d like it to be rephrased in plain language a second grader can understand. Here, it will also tell you, “What are some of the ideal settings for a prompt like this.” So, let’s go to Playground.

Ashley Pilipiszyn: And just a second, there we go. So, then all the settings, everything pops up in my Playground setting. And so, here the prompt is, and let me bump this up and let me hit submit. So, “Jupiter is a big ball of gas. It’s the fifth planet from the sun. It’s bright. You can see it in the sky at night. It’s named after the Roman God, Jupiter.” That’s pretty good. It pulled out kind of all the main pieces that we’d want from the prompt and the original text.
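
Here is a rough sketch of the same "Summarize for a 2nd grader" pattern as an API call. The passage, settings, and preset wording are approximations, not the precise prompt from the demo.

```python
import openai

# Assumed passage; in practice you would paste the text you want simplified.
passage = (
    "Jupiter is the fifth planet from the Sun and the largest in the Solar System. "
    "It is a gas giant, and one of the brightest objects visible to the naked eye "
    "in the night sky. It is named after the Roman god Jupiter."
)

# Prompt structure of the "Summarize for a 2nd grader" preset: frame the task,
# separate the passage, then ask for the plain-language rephrasing.
prompt = (
    "My second grader asked me what this passage means:\n"
    '"""\n' + passage + '\n"""\n'
    "I rephrased it for him, in plain language a second grader can understand:\n"
    '"""\n'
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    temperature=0.5,
    max_tokens=100,
    stop='"""',  # stop once the rephrased block is closed
)
print(response["choices"][0]["text"].strip())
```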

Ashley Pilipiszyn: Now, the cool thing here is, too, let’s say you don’t want to use Jupiter… Or figure out more about the solar system, but let’s say you did want a section of a legal document. What you could do is you can just edit these prompts right in your Playground. So, you could delete this and go ahead and delete this as well. And then you could go ahead and copy and paste your own text in there as well, because you’re still retaining those key guidelines. Again, imagine if this is a game of charades or even if you’re working with a coworker and you’re trying to give a set of instructions. So, the key instructions here are asking the second grader–saying, “My second grader asked me what the passage means,” and you want it rephrased. But you can always insert different types of content here.

Ashley Pilipiszyn: So, let’s do another example. So let’s go back to the prompt library. So, a very cool thing we also understand. Remember how I said GPT-3 is focused on text. However, it is able to transform text into emojis. Which actually, thanks to one of our developers who discovered this, we were actually not aware of this capability beforehand. So, if you want to convert a movie title into emoji, you could give some examples. So, Back to the Future might be, you know, boy, man, a car, and a clock. Batman might be a man and a bat. Game of Thrones will be some arrows and some swords. And again, you’ll have the settings on here to get you started.

Ashley Pilipiszyn: So, we can open this up again in Playground. And so, let’s see what we’ll come back for Spider-Man. So, it’s got some spiders, some webs, and that’s pretty good. Let’s see if… What it might come back with if we try it again. All right. So, it looks like it’ll repeat itself on that one. But also, you can begin to combine some of these as well.
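
The movie-to-emoji trick is a classic few-shot prompt: a handful of worked examples, then a new title for the model to complete. A hedged sketch, with illustrative emoji choices and settings:

```python
import openai

# Few-shot "movie to emoji" prompt: worked examples first, then a new title for
# the model to complete. Emoji choices and settings are illustrative.
prompt = (
    "Convert movie titles into emoji.\n\n"
    "Back to the Future: 👨👴🚗🕒\n"
    "Batman: 🤵🦇\n"
    "Transformers: 🚗🤖\n"
    "Spider-Man:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    temperature=0.8,
    max_tokens=20,
    stop="\n",  # stop at the end of the line so only one title is converted
)
print(response["choices"][0]["text"])
```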

Ashley Pilipiszyn: So, you can imagine using chat. So, obviously chat bots are a really popular application. And as I mentioned before, you can think about in customer support scenarios, you can think of in all different types of applications.

Ashley Pilipiszyn: Many of us have already interacted with chat bots before. So, let’s say you want to customize your chat bot. So, the base prompt here is, “The following is a conversation with an AI assistant. The assistant is very helpful, creative, clever, and very friendly.” And so, we’ll begin this dialogue. So saying, “Hello, who are you? I’m an AI created by OpenAI. How can I help you today?” Let’s say, “What movie do you recommend I watch this week?” And we’ll set AI. And submit, oops. My apologies.

Ashley Pilipiszyn: Looking at works of Christopher Nolan. Interstellar, Inception, The Prestige. That is actually a little bit freaky. Christopher Nolan is one of my favorite directors and I love, actually, all three of those movies. So, very spot on actually. But you can begin to actually customize these even more. So for example, let’s say, “The assistant is very creative, clever, very friendly, and an expert on sci-fi.” So, let’s say, “Which books should I add to my reading list?” The Left Hand of Darkness. The Gate to Women’s Country, The Ship Who Sang. Interesting.

Ashley Pilipiszyn: So anyways, you can begin to play around and begin to add that additional context. So, for example, we’ve seen people say, “Okay, this AI chat bot is a science teacher or a bookstore clerk,” and you can begin to actually create these various personas to kind of probe GPT-3, or nudge GPT-3 into the direction, or have that context that you would like it to have. So, let’s do one more.
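
One way to wire up that kind of persona-driven chat bot is to keep the persona description and the running dialogue in the prompt, appending each new turn before asking for the next "AI:" line. A sketch under those assumptions; the persona wording, helper function, and settings are illustrative:

```python
import openai

# Persona-driven chat: the opening description sets the assistant's character,
# and each turn is appended to the transcript before asking for the next "AI:" line.
# Persona wording and settings are illustrative.
persona = (
    "The following is a conversation with an AI assistant. The assistant is "
    "helpful, creative, clever, very friendly, and an expert on sci-fi.\n\n"
)
history = (
    "Human: Hello, who are you?\n"
    "AI: I am an AI created by OpenAI. How can I help you today?\n"
)

def ask(question: str) -> str:
    """Send one chat turn and append the exchange to the running history."""
    global history
    prompt = persona + history + f"Human: {question}\nAI:"
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        temperature=0.9,
        max_tokens=150,
        stop=["\nHuman:", "\nAI:"],  # keep the model from writing both sides of the dialogue
    )
    answer = response["choices"][0]["text"].strip()
    history += f"Human: {question}\nAI: {answer}\n"
    return answer

print(ask("Which books should I add to my reading list?"))
```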

Ashley Pilipiszyn: So, I mentioned earlier before, Classification. So, you can imagine this being a really useful example. Whether you think of product classification, here is an example of a list of companies and the various categories that they’ll fall into. So, if we open this up in Playground.

Ashley Pilipiszyn: So, again, we’re telling GPT-3, “Okay, Facebook. You want the tags, social media, technology.” LinkedIn will also have that, but maybe enterprise and careers. McDonald’s, you’ve got food, fast food, logistics. And so, this is an opportunity also to create different types of tags. So, let’s see. Logistics transportation. Let’s add… What’s another one. See what comes back for TikTok, social media entertainment. So, that’s pretty good.

Ashley Pilipiszyn: But you can imagine again, applying this to a variety of different products. So, let’s say you’re building a different kind of app for different types of clothing or different types of foods. These kinds of things. And so, you can begin to actually add all of these different capabilities together. So, let’s say for example, the chat bot from the previous example also was able to then help you classify the different products you had in your application.
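
The classification example boils down to the same few-shot structure: companies paired with tags, then a new company for the model to tag. A minimal sketch, with illustrative tags and settings:

```python
import openai

# Few-shot company tagging: each line pairs a company with category tags, and the
# model continues the pattern for the last company. Tags and settings are illustrative.
prompt = (
    "The following is a list of companies and the categories they fall into:\n\n"
    "Facebook: Social media, Technology\n"
    "LinkedIn: Social media, Technology, Enterprise, Careers\n"
    "McDonald's: Food, Fast food, Logistics, Restaurants\n"
    "FedEx: Logistics, Transportation\n"
    "TikTok:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    temperature=0.0,  # deterministic output is usually what you want for tagging
    max_tokens=20,
    stop="\n",
)
print(response["choices"][0]["text"].strip())
```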

Ashley Pilipiszyn: And so, as I had shown before for the different startups we’ve seen, et cetera, all the different applications you’re seeing with GPT-3, all boil down to these prompts. And so, your ability to actually help GPT-3 understand, “Okay, what is the end result that you’re trying to get GPT-3 to do,” is really where a lot of interesting things can happen. And so, some of the best applications we’ve seen have been ones where you actually combine these capabilities. So, not just doing a single classification or a single chat bot, but actually being able to integrate those because that’s where GPT-3’s strengths lie. As I said before, GPT-3 can do a lot of different things. It’s not programmed to do one or the other, but it actually is very good at, essentially, multitasking.

Ashley Pilipiszyn: So, with that, I wanted to… I’m not sure if any questions have come through, but I wanted to leave a time for just a few questions. But I know this was a very, very rapid fire, deep dive into prompt design and engineering. If A), you have any questions, please feel free to email me. If you are interested in getting access to GPT-3 or building an app or product with GPT-3, again, please email me. I’d be delighted to discuss and very excited to have more people join our developer ecosystem and build with GPT-3. So, thank you so much. And I’d be happy to take any questions with the remaining time.

Angie Chang: There’s some questions in the chat. Most of them were like, “How do we get access to GPT-3?”

Ashley Pilipiszyn: Okay.

Angie Chang: You just answered that question, but if you would like to look in the chat, there’s a question about how OpenAI overcame bias about, for example, food suggestions, American versus Western food, or summarizing New York Times, Wall Street Journal, short article or headline. Let’s see if you can answer in three minutes.

Ashley Pilipiszyn: Okay, awesome. So, and I can not see the exact question, but I think… So, on the question of bias. So, excellent question. It is, first of all, a very big industry-wide issue. At OpenAI especially, we’re really focused on addressing this with our safety and policy work.

Ashley Pilipiszyn: Actually, I highly recommend checking out, if you haven’t, the research release we put out last week about multimodal neurons in our latest CLIP model, which is our most powerful vision model.

Ashley Pilipiszyn: And the reason I bring this up is because, this is kind of demystifying what’s happening underneath the hood with these AI models. Because obviously, these models are trained on all of the internet. And so, they’re basically integrating what they’re learning from us on the internet. And so, what this multimodal analysis allows us to do, is actually peek under the hood and understand, “Okay, so how are these associations being made?”

Ashley Pilipiszyn: And this allows us to figure out, “Okay, then how can we begin to address these,” by identifying where these associations are happening. And so, this is really borrowing a lot from neuroscience. So, but to address bias in the case of prompt design and engineering.

Ashley Pilipiszyn: There is an opportunity actually to address some of this in text form as well. And so, whether it’s modifying your prompts. So, I think the example was for like foods or recipes, being able to provide a little bit more context to be able to help nudge where you’d like GPT-3 to go. And this actually will help with giving examples as well.

Ashley Pilipiszyn: So, actually one quick example that might help address this is… Question/answer. So quickly, what you can do in a situation like this is you also can provide a framing like, “If you ask me a question that’s rooted in truth, I will give the answer. If you ask me a question that’s nonsense, or it doesn’t have a clear answer, I’ll respond with unknown.” And so, you can also provide facts or essentially give those examples of how you’d like GPT-3 to respond. So, that’s another way again, is through that prompt as well.
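
That question-answering framing can be expressed as a prompt that explicitly tells the model to answer "Unknown" when a question has no clear answer, with a few examples demonstrating both cases. A sketch along those lines; the wording approximates the public Q&A preset and is not necessarily the exact text:

```python
import openai

# Q&A prompt that tells the model up front to answer "Unknown" for nonsense
# questions, with examples showing both cases. Wording approximates the public
# Q&A preset; it is not necessarily the exact text.
prompt = (
    "I am a highly intelligent question answering bot. If you ask me a question "
    "that is rooted in truth, I will give you the answer. If you ask me a question "
    "that is nonsense, trickery, or has no clear answer, I will respond with "
    '"Unknown".\n\n'
    "Q: What is human life expectancy in the United States?\n"
    "A: Human life expectancy in the United States is 78 years.\n\n"
    "Q: How many squigs are in a bonk?\n"
    "A: Unknown\n\n"
    "Q: Where were the 1992 Olympics held?\n"
    "A:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    temperature=0.0,
    max_tokens=60,
    stop="\n",
)
print(response["choices"][0]["text"].strip())
```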

Ashley Pilipiszyn: And then the second question… Angie, I forget what was the second question on?

Angie Chang: It was on headlines, for summarizing media company headlines.

Ashley Pilipiszyn: Oh yes, yes. So, summarization, I guess more broadly. So, GPT-3 is excellent at summarizing. Actually, it can do data parsing and summarization. And so, if I’m understanding the question correctly, could you take a variety of headlines and then summarize a bunch of different headlines and what’s the TLDR main takeaway from that? GPT-3 would be very good at that. Pretty much summarizing, again any text, it will be quite strong at.
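
For the headline-summarization case, one simple approach is the "tl;dr" pattern: paste the headlines, append "tl;dr:", and let the model produce the takeaway. A hedged sketch with made-up headlines and illustrative settings:

```python
import openai

# "tl;dr" summarization applied to a batch of headlines: paste the text, append
# "tl;dr:", and let the model produce the takeaway. Headlines are made up.
headlines = [
    "Markets rally as inflation data comes in cooler than expected",
    "Central bank signals it may pause rate hikes later this year",
    "Tech stocks lead gains amid renewed investor optimism",
]
prompt = "\n".join(headlines) + "\n\ntl;dr:"

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    temperature=0.3,
    max_tokens=60,
)
print(response["choices"][0]["text"].strip())
```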

Angie Chang: Great. Thank you so much, Ashley. That’s all the time we have today. I know people will definitely be signing up to join the GPT-3 beta and trying it out. And thank you for leaving your contact information on the slide.

Ashley Pilipiszyn: Yes.

Angie Chang: Where you can get in touch with Ashley directly.


Girl Geek X Planet Lightning Talks! (Video + Transcript)


  • See open jobs at Planet and check out open jobs at our trusted partner companies.
  • Does your company want to sponsor a Girl Geek X webinar in 2021? Talk to us!
Transcript of Planet Girl Geek Dinner – Lightning Talks:

    Angie Chang: It’s six o’clock and that means it’s time for another Girl Geek Dinner, and this time, however, we are coming to you virtually for the first time!

    Sukrutha Bhadouria: Just going virtual opens up our access to you, and to you to each other: a few people in various time zones, some people who say they’re in London at 2 A.M.

    Angie Chang: I’m just super excited to be able to partner with Planet and bring this evening of talks to hundreds of girl geeks.

    Adria Giattino-Johnson: So today I’m going to talk about diversity and belonging and the climate that we’re at right now and how it’s not business as usual, and rethinking what diversity is going to look like in 2020.

    Lisa Huang-North: And when you do make that leap into your new role, how long do you want to be there? Is there a stepping stone to another bigger career pivot? For example, if you’re moving to a new industry or is it a way for you to grow and really deepen your expertise, for example, within the industry or within the field?

    Sara Safavi: Along the way I’ve had to pick up some new habits, some new practices and ways of working in order to make my stay in remotesville as a remote employee sustainable.

    Barbara Vazquez: What I’m going to talk about today is agile development and estimation, because I’m a software engineer and we do agile development at Planet. These are some tips that might be useful on a day to day basis.

    Kelsey Doerksen: Today, I’m going to be talking a little bit about how to handle big data in space and the different machine learning projects I’ve been a part of over the past few years.

    Deanna Farago: My name is Deanna Farago and my team and I operate a fleet of satellites that are currently imaging the entire planet every day.

    Elena Rodriguez: I chose a topic because this is something that I’m always thinking about, and now I have the opportunity to talk about it and I’m going to take advantage of this – this is how I ended up here, so I’m going to show you my story.

    Sarah Preston: Stories are paths to community and understanding. So think about all the stories that you loved growing up. There was some kind of connection that you made, either to a character, to the author or to the setting that drew you in and made it really memorable.

    Brittany Zajic: I’m on the business development team here at Planet. Business development means something different at every company. Here we focus on strategic partnerships and the commercialization of new markets.

    Nikki Hampton: At Planet we have always been committed to diversity, but we are doubling down on our commitment, particularly with respect to attracting and retaining communities of color. For all of you online, we are looking forward to and eager to work with you to tap into a broader network of talented folks that you might want to consider referring to us, or applying yourself, and sharing with who you know. But we’re super excited to have been part of this and are grateful that you all attended!

    Angie Chang: It’s six o’clock. And that means it’s time for another Girl Geek Dinner… This time, however, we are coming to you virtually for the first time from our homes in Berkeley, California here. Sukrutha, where are you?

    Sukrutha Bhadouria: I’m in San Francisco, California.

    Angie Chang: And behind the wings we have Amy, who is coming from … Amy, where are you coming from?

    Amy Weicker: Pennsylvania.

    Angie Chang: Pennsylvania. Awesome. We have a bunch of people coming in. Can you use the chat below and tell us where you’re coming in from? While everyone does that, Oh my God.

    Sukrutha Bhadouria: Wow. Orange County, San Jose. [inaudible] India, my hometown. What were you saying, Angie?

    Angie Chang: I’m like, normally we get to see you in a beautiful office space. It’s always great to just go to these different companies and go there and meet the people, eat their food, drink some wine — and then hear from their women at the company speaking about what they’re doing at the company. From roles in engineering and product to sales … we’re going to hear from a few sales people tonight. It’s really great and exciting to hear from many of the women working at the company on what they love to do.

    Angie Chang: We learn a bit about the company. I’m just super excited to be able to partner with Planet and bring this evening of talks to hundreds of girl geeks. These videos will be available on YouTube for free later so if you can’t come because you actually had to cook dinner and eat it with your family, you can still watch it later.

    Sukrutha Bhadouria: I want to just call out a few people in various time zones. Some people who say they’re in London at 2:00 AM, that’s awesome. India, 6:30 AM. That’s amazing, where in a funny way just going virtual opens up our access to you, and to you to each other 100% across time zones and across a variety of fronts. So that’s awesome.

    Angie Chang: Cool. I guess it’s time for introductions. My name’s Angie Chang. I’m the founder of Girl Geek X. I’ve been organizing these Bay Area Girl Geek dinners, as we called them for the first 10 years. Then now we’ve been doing Girl Geek X events. We’ve done over 200 events at companies big and small, at companies you’ve heard of and companies you haven’t. I think it’s really fun to keep doing it all these years because of that. You get to learn about so many companies that you never thought of. You go in there and you hear about all the ways that the company has people working in these different departments that you never knew existed. Suddenly you’re like, “Oh my God, I guess this sounds really cool.” By the end, when they’re like, “And we are hiring,” you’re like, “Yes, I know what you do. I know what team I can join. I heard from people at that company, I know their names. I can now find them on LinkedIn and poke them and send them my resume.” Please do that. They are hiring. Sukrutha?

    Sukrutha Bhadouria: Yeah. Hi, I’m Sukrutha. I’m the CTO of Girl Geek X. Angie and I met several years ago when I had just moved to the Bay Area looking for other like-minded women like yourself to connect with. I found out that there was an upcoming event with Girl Geek Dinner and I saw Angie’s name there. I was like, that’s awesome. I should try to go. For whatever reason, I wasn’t able to go that evening, and I instead managed to get the company I was working at to sponsor. Angie and I played phone tag for a little bit, but we ended up meeting and I was like, this is so exciting because that particular event had over 200 women AND men show up — 200 people show up, basically. It was such a great energy in the room. I just couldn’t get enough of it. I wanted to come back.

    Sukrutha Bhadouria: That’s where our journey together started. That was dinner number 11. We’ve since had over 200 dinners. I’ve actually lost count. At that point it was one every few months. We ended up having the frequency just go up. We then launched into podcasts. We launched into virtual conferences. So you can see all of that content on our website (girlgeek.io). Just to catch up if you’re new to this, usually what we do in this situation is we survey the room and we ask how many of you are attending this event for the first time. I don’t know how we would do that now, but I’d be really curious to learn from virtually raising your hands. How many of you are attending for the first time? Wow. I can see the numbers, counting now over 40 people are raising their hands as the first time.

    Sukrutha Bhadouria: Wow. That number’s climbing, Angie. That’s amazing. I’m so happy to see so many first time attendees. Generally, like for us, it has been amazing because we would get so much out of these dinners, the podcast that we do, as well as the conferences, because the energy from just meeting other people specifically like you, you may not have that access in your company. We were getting so much out of it. We would hear from the sponsoring company, how they were getting access to really motivated, smart individuals like yourself, where they ordinarily wouldn’t have the access to. Likewise, the attendees would come to these events and they’d be like, “Oh my gosh, I didn’t realize that were these many people who are just like me.” And then they started to make friendships. Often Angie and I would talk about how important it is to network before you actually need it.

    Sukrutha Bhadouria: I myself was super shy and awkward. And honestly, I still am. Who knows with the pandemic and sitting at home how awkward I’m going to be in real life when all of this lifts, but I do force myself. I learned from Angie, actually, how best to get involved in a conversation and approach people that I know I can benefit from that connection and they can benefit from it, as well. We started to build our circle. From that, I learned concepts like build your own personal board of directors, people who advise you in your career and your work life balance and topics like that. Then people who give you honest feedback on how you can improve yourself. So many things like mentorship and sponsorship and how to go about seeking that for yourself and how not to directly just go up to someone and be like, “Just be my mentor,” but then not give them enough context. So how to go about it the right way. There’s usually tips and tricks like that, that we will benefit most from asking other people who’ve had shared experiences like ourselves. What do you think, Angie? What do you think people get out of this?

    Angie Chang: I really appreciate going to Girl Geek Dinners and then Girl Geek events, because we reach a wide range of women who are working in tech and engineering and product. Also a lot of startup entrepreneurs and operations and marketing people. And they all intersect. I think in our careers, which are going to span decades, we are definitely going to be changing our jobs, and our roles will be different. I remember when I first met Sukrutha, she was a software engineer in test, and now she’s a senior engineering manager and it’s been years and it’s been great watching her change her career and grow and continue to look for … I think people look for people like them.

    Angie Chang: If I were an engineer, which I was 15 years ago, I would go to a Girl Geek Dinner and I’d be like, “I want to meet other engineers,” but then you wouldn’t have that happy chance of meeting other people, women who are working in other roles, but then you’d be like, “Oh my God, this is actually really cool.” These weak ties and these relationships are actually really beneficial in the long run. I don’t think I would have asked for it when I was younger, to meet all these different types of people, but now I really see it’s fortuitous and it pays to be a little broader. I like the Girl Geek X umbrella, instead of saying I’m only in product, which I was for a few years, or I’m only an entrepreneur, which I was for a few years.

    Angie Chang: Now, it’s just a great place to meet a lot of people. They keep coming back. We actually keep seeing a lot of faces. There’s always a lot of new people and a lot of people that come back time and again, based on who is hosting. We’ll be having different companies host virtual events moving forward monthly. You can look forward to different companies. But tonight we’re really excited to bring you the Girl Geeks of Planet Labs. I am going to be introducing our first speaker from Planet Labs, Adria.

    Angie Chang: Here’s a quick bit about her. She joined Planet’s federal division in Washington, DC as a people partner, where she was able to continue her passion for innovation and data with strategic human capital. She earned her master’s degree at Georgetown University with a research focus on diversity, equity and inclusion in tech. She is co-lead of Planet’s belonging task force. Welcome, Adria.

    Adria Giattino-Johnson: Thank you so much. I’m so excited to be here. This is such a great event, and it’s my first time. Obviously my first time as a panelist, but my first time attending the event. I’m just so excited to have so many people here listening to our talks and just connecting with women in different industries. I’m excited to just attend future events later on. Thanks so much for the introduction.

    Adria Giattino-Johnson: Let’s jump into a little bit about Planet. I’m going to share my-

    Sukrutha Bhadouria: Adria, would you like to turn on your video so people can see you?

    Adria Giattino-Johnson: Oh, I’m so sorry.

    Sukrutha Bhadouria: No worries.

    Adria Giattino-Johnson: I think we can all relate. I think this has happened to probably all of us. We’re all in a remote workforce right now. Maybe everyone can raise their hand if they’ve forgotten their video once or twice. Thank you. That made me feel a little bit better. Let me share my screen really quickly with everyone. We will jump into a little bit about Planet and then … oops, sorry … I will jump into my presentation.

    Adria Giattino-Johnson: About Planet: aerospace know-how meets Silicon Valley ingenuity. From our spacecraft to our APIs, we engineer our hardware and software to service the largest fleet of earth imaging satellites in orbit and scale our seven plus petabyte imagery archive, growing daily. Planet designs, builds, and launches satellites faster than any company or government in history by using lean, low cost electronics and design iteration. Our Doves, which make up the world’s largest constellation of earth imaging satellites, line scan the planet to image the entire earth daily, which is really cool. We launch new satellites into orbit every three or four months. Most earth imaging companies don’t build their own satellites, but we’re not like most earth imaging companies. Planet designs and builds its satellites in house, allowing us to iterate often and pack the latest technology into our small satellites.

    Adria Giattino-Johnson: Complete vertical integration enables us to respond quickly to customer needs and perpetually evolve our technology. Operating one satellite is a challenge, but operating 200 is completely unprecedented. If you haven’t checked out our TED Talk on YouTube, I highly, highly suggest you do. Planet’s mission is really cool. I’ll dive into a little bit about why I love working at Planet in a little bit, but it really is unprecedented. Our mission control team uses patented automation software to manage our fleet of satellites, allowing just a handful of people to schedule imaging windows, push software into orbit and download images to 45 ground stations throughout the world. Planet processes and delivers imagery quickly and efficiently. We use the Google Cloud platform and enable custom processing so that customers can tap directly into our data the same way we do. Our data pipeline ensures easy web and API access to Planet’s imagery and archive. We make every scene available as a tile service, composite scenes into mosaics, and build time slice mosaics so you can see change over time. That’s a little bit about us.

    Adria Giattino-Johnson: I am the first speaker, so I’m just going to dive into my talk. I hope that was a high level overview of Planet. Every person that works at Planet is super passionate about our mission, what we do. I really can say that every time I’m out on the street and I do tell people that I work for Planet, our mission is just so cool, that we build our own satellites and we have daily earth imaging. It really is unprecedented. It’s a really cool place to work.

    Adria Giattino-Johnson: On to my talk. I’m the people partner for Planet Federal. I work out of Washington, DC. Planet Federal, it’s the government arm of Planet. We partner with the government. I function as the people partner, which is basically HR. The people partner does function kind of as an HR business partner. Today I’m going to talk about diversity and belonging and the climate that we’re at right now, and how it’s not business as usual. We’re rethinking what diversity and belonging looks like in 2020.

    Adria Giattino-Johnson: A little bit about me. I like to use the group identity wheel anytime I do any type of speaking related to diversity and belonging, because I think this is a really good representation, at least for me, of the way I like to represent myself and my different group identities. I am a cisgender woman. My pronouns are she/her. I’m a US national, identify as agnostic. I am a Black, queer lesbian living with disability. I’m a millennial, upper middle class, and I do hold an advanced degree. This framework is really good for me. I think it’s really good for others, just to kind of show places where I’m marginalized and places where I hold dominant group identities.

    Adria Giattino-Johnson: Let’s jump in. So why I joined Planet. It was an industry jump for me. I had about seven years in human resources. I started as a generalist. I grew into leadership and then I later expanded into consultancy. I’m really passionate about strategic HR and diversity, equity, and inclusion. I began looking for something in the tech industry. I wanted to feel really connected to the mission of the next place that I landed. I was instantly intrigued by Planet and their core values. Why I love working at Planet, and this is what keeps me passionate, keeps me engaged, it’s why I show up to work every day. I love my team. They’re brilliant. I can actually say this globally, across Planet. We just have a really talented group of individuals that work for our company. If we’re at coffee chats or happy hours or whatever you can just listen to people for hours.

    Adria Giattino-Johnson: Everyone is just brilliant at what they do, and everyone is so passionate about how they contribute to Planet’s mission. The work that I do is really great for me. It is what I’m passionate about. I get to do that every day. Planet is dedicated to agility and learning, which is something that’s really important to me, especially being in the people department. I love working on the people team because I really enjoy fostering connection and collaboration between teams.

    Adria Giattino-Johnson: Let’s dive into the topic today of what I wanted to talk about for this lightning talk, which is diversity and belonging. This year has been a tough year, and I think we’re all in agreement. We face a global pandemic. We’re facing systematic racism and police brutality, political unrest, and let us not forget the murder hornet scare in May. Just in case you did forget, I put a little slide here. It did terrify me, I think, as well as some others. Wanted to add a little bit of levity there. This was an addition to our plates, I think, that we did not need in May. But so let’s dive into the topic for today. We are a nation that’s currently experiencing trauma. Filmed police brutality and racist interactions have flooded our broadcasts as well as social media. It’s something that we’re seeing every day. Many, from all backgrounds and racial identities, have filled the streets in protest to support Black Lives Matter. In response to this, a number of companies have put out statements in solidarity, and it’s forcing many companies, including Planet, to grapple with internal diversity statistics and consequently rethink diversity, equity, and inclusion programs.

    Adria Giattino-Johnson: Let’s talk a little bit about statistics. Statistics show that Black employees are left behind. In 2014, Google released their diversity statistics, which many tech companies followed suit after that. But before that it wasn’t something that companies widely released. Statistics over the past six years have shown that despite diversity efforts by most organizations, Black representation remains extremely low with a net change that is almost nonexistent. Statistics do show a slight increase for women in tech, which shows that some diversity efforts are working, but some marginalized groups are still being left behind, which is super important to look at. Let’s look a little bit at the delta for Black employees and tech. So this is a really good representation to just show you over the past five to six years there really hasn’t been a change, despite companies having large funding towards diversity, having diversity programs in place.

    Adria Giattino-Johnson: The numbers still remain extremely low. There has been, as I said, an increase for women in tech. It’s been a small increase. There’s still so much room to go, but there has been some strides made there. So just wanted to show a little bit of visual representation of that data. Let’s talk about why diversity efforts are failing. This is what I mean when I’m talking about diversity, quote, unquote business as usual. This is what companies have been doing for decades. Despite a few new bells and whistles that came about in the ’90s, companies have been essentially doubling down on the same approaches that they’ve been doing since the ’60s, which is diversity training to reduce bias. I think many of us have held trainings like that if you’re in people operations, like I am, or maybe you’ve attended a training like that. Hiring tests and performance ratings that limit bias, and putting grievance systems in place for employees to challenge managers.

    Adria Giattino-Johnson: These tools are really designed to preempt lawsuits. I think that framework is even in the wording. When we do attend these trainings, it’s very fear-based, I would say. They don’t dive further than that. They don’t dive further to promote equity and inclusion. Now we’re seeing a shift. Employees are demanding change. Companies can no longer operate business as usual in diversity, equity, inclusion, and belonging. Employees don’t want a PR statement from the organization, but rather they want to see a clear action plan related to inclusion and anti racist efforts. This really falls in the wheelhouse of the people team.

    Adria Giattino-Johnson: It is an organizational wide effort, but it’s something that I’m proud to be involved in. I wanted to talk a little bit about that today. Moving toward belonging and the new landscape for diversity, equity, inclusion, and belonging. I really, really love this framework and I wanted to make sure I included in this talk. Diversity has no meaning without inclusion and belonging. Diversity is like being invited to the party. Inclusion is being asked to dance and belonging is dancing like no one is watching. Belonging is really being able to show up at work as your true self, and being able to be your authentic self in the workplace. We spend so much time at work that really having this piece where you’re being invited to the party without having these other pieces, it doesn’t mean anything. This is exactly why these diversity efforts are failing.

    Adria Giattino-Johnson: I’m not going to dive super into the inclusion framework here, but I did want to include a visual of the sweet spot for inclusion, which is a high level of belongingness and a high value in uniqueness. What that results in is an individual being treated as an insider, and also allowed and encouraged to retain uniqueness within their work group. Let’s talk a little bit about definitions, because a lot of times, I think you can get these trendy words that are happening within diversity or even happening within HR, within people. Belonging can be pegged as a trendy word and it’s really not. I wanted to be explicit about the definitions. Belongingness has to do with whether or not a person is and feels treated as an organizational insider. Uniqueness is measured by the degree to which an individual feels he or she can bring his or her full self to the work without needing to assimilate to cultural norm.

    Adria Giattino-Johnson: The degree to which an employee can fully engage, feel safe, and feel connected in the workplace greatly depends on these two categories. And like I said, these can often be left out of diversity programs. So let’s dive a little further into diversity without belonging. Like I said, diversity without belonging and inclusion allows marginalized groups into the organization, but then it forces them to fit into the existing dominant culture. Many Black employees, for example, experience a pass on promotion, noting that they should get to know other managers more, or network more, or connect more. There really aren’t explicit definitions in terms of what that really means. For many marginalized groups, Black employees specifically, they report not feeling safe to connect at work and be their authentic self due to cultural difference and fear of bias or repercussions. There’s a real barrier there. Statistics show that attrition rates among Black employees and those of other marginalized groups are much higher. A 2017 report surveyed over 2000 tech employees who left their jobs. It found that many people of color felt that they had unfairly been passed over for promotion, faced stereotyping or bias related to quote unquote fitting in or connecting with others.

    Adria Giattino-Johnson: Let’s talk about getting it right. I mean, that’s what I really want to talk about in this talk. When belonging and inclusion are embedded in company culture, it no longer forces employees to fit into the dominant culture, but rather it builds a culture around everyone’s unique identities. Rethinking strategy. Belonging becomes the heartbeat behind an organization’s culture and core values. I’m proud to say that that’s something Planet is working towards and I think that they value. I am the co-lead on the belonging task force. I can really say that that is embedded in Planet’s core values. Without inclusion and belonging, employees do not feel as though they can show up as their authentic self at work, like I said before. This inhibits recruitment, retention, and promotion of marginalized groups, and it also inhibits diverse voices from speaking up and being heard. Let’s talk about creating sustainable change. An internal and external audit is something that must be done.

    Adria Giattino-Johnson: Companies, including Planet, must take a long, hard look in the mirror and they must sit with what they see. What are the diversity statistics amongst marginalized groups, specifically Black employees in this climate? What are the attrition rates amongst these groups? How do these systems that organizations have in place contribute to oppression of these groups? Creating a safe space for employees and fostering belonging is also really important. I’m sure a lot of you have heard about employee resource groups, or maybe you’re a member of one.

    Adria Giattino-Johnson: They’re a great place to create a safe space for employees to connect. They’ve actually been in effect since 1964, and they were established as a response to anti-black prejudice following the 1964 riots in New York. They’ve continued to be a huge part of the tech community, but companies must really be careful to utilize these groups as a safe space, rather than placing extra burden on them by forcing them to do organizational diversity work and education on top of their jobs. Especially with us being women in tech, sometimes the burden can fall on the marginalized group to do the education, to do the work on top of their jobs. That’s not really the purpose of an employee resource group. It’s to create that safe space, to create belonging, and to create connection. Employers should really watch that and be careful of putting that burden on the employees.

    Adria Giattino-Johnson: Looking at the internal and external pipeline of candidates is also really important. Talent and recruitment reform, I think is the biggest part of this. You want to audit your hiring practices, and broadening the schools that you recruit from is really important and including HBCUs, it’s also really important. Recognizing bias against HBCUs and other university programs as being seen as a lower bar is the first step in that. I think that’s something that a lot of tech companies are looking at right now. Also auditing referral programs. So I think referral programs sometimes can fall by the wayside, especially in tech. If a workforce is already homogenous, referrals can further contribute to this as referrals from employees tend to be within their own identity groups.

    Adria Giattino-Johnson: I challenge everyone on this video to think about when you’re referring people into your organizations, are you amplifying diverse voices? Who are you referring, or is it homogenous? This is something that even as employees, we can be thinking about when we’re bringing people into our organization.

    Adria Giattino-Johnson: The addition of external efforts is something I’m really proud to partner on and be involved with at Planet. Recognizing the disparity of marginalized groups in tech and committing to investment in community partnerships and education is also huge in creating sustainable change. An example of this is investing money to give Black and Latinx students exposure to geospatial and STEM studies and potentially creating an internship pipeline based on such programs.

    Adria Giattino-Johnson: The last portion I want to talk about is mentorship programs. I think Angie highlighted, it was either Angie or Amy, highlighted mentorship in the beginning of this. People in senior roles tend to want to mentor and groom people who look like them or remind them of themselves. This is implicit bias. It’s unconscious bias. It’s not on purpose. But this means that people in marginalized groups often do not have someone to advocate for them. Organizations and managers within these organizations, if you’re a people leader on your team, you should be intentional about diversity in mentorship programs rather than leaving it up to senior management.

    Adria Giattino-Johnson: The last portion is stamina. This isn’t a checklist. This isn’t a quick fix. This isn’t a measurable ROI. ROI is like always what executives want to hear is if you’re on the operations team or maybe you’re a people leader on your team I’m sure you talk a lot about ROI, building business cases for everything that you want to pass through. But that’s not the case here. This is systemic change that we’re trying to create at the organizational level, which is sustained over years of hard work to see measurable results. Companies must commit to sustainable change over time at every level of the company to value and prioritize diverse and inclusive workforces.

    Adria Giattino-Johnson: I’ll end this just by saying, I am so excited to be a part of these efforts at Planet. I look forward so much to seeing sustainable change within our company, and I hope that your companies are also working to create sustainable change. I hope that your voices are being heard. This is a really important time for all of our companies, especially within the tech community. I’ll be excited to see what type of change happens within the tech community in years to come. So thank you so much.

    Sukrutha Bhadouria: Hi. Thank you so much, Adria. That was wonderful. It was really inspiring for sure for me. We’re going to switch over to our next amazing panelist, Lisa Huang-North. I’m going to do a quick introduction and then we can jump into Lisa. Wow, great background, Lisa! Lisa is a product and program lead at Planet. The team is responsible for delivering product solutions that help customers scale their business. Before joining Planet, Lisa worked for over a decade in strategic consulting, finance, digital marketing, and full stack software engineering. In her free time, you can find Lisa building Lego Technic sets, coaxing her sourdough starter, and dreaming of the day when we can all travel to see friends and family again. Oh my gosh, don’t we all? Welcome, Lisa.

    Lisa Huang-North: Thank you very much, and thank you for the intro. Let me share my screen. Hopefully, everyone had a great time listening to Adria’s talk. I’m really excited to be following such a fantastic speaker. Can you all see my screen?

    Sukrutha Bhadouria: Mm-hmm (affirmative).

    Lisa Huang-North: Hopefully, yes. Okay, wonderful. Yeah. Really today I’m hoping to speak with you around pivoting, and I think especially with 2020, it’s really thrown a spanner into a lot of people’s plans, whether that be life plans or career plans. With career pivots, there’s never really a good time for them, but it’s even more stressful when there are uncertainties around that. I’m hoping today I can share three lessons from our satellite operation team and really get you to think around how you can plan for your career pivot.

    Lisa Huang-North: To start, let’s see. Here we go. All right. Firstly, about me, I’m currently a product and program lead here at Planet, and I’m also a part of our wonderWomen ERG group that Adria mentioned earlier, [inaudible] taskforce. I call myself a Pivoteur with five career pivots. Prior to the COVID shutdown, I loved to travel. Hopefully that’s something that resonates with everyone. And here, I just included a short quote because that was part of what inspired the brief for the talk, Robert Frost’s poem about traveling, or taking the road less traveled.

    Lisa Huang-North: The first lesson, what are your areas of interest? A lot of the time for our satellite operation team, the first thing they need to know about tasking a satellite is, where do you want to look, and what do you care about? I will use two use cases to try to explain. The first one, perhaps you’re in agriculture. Perhaps you are a farmer, in which case, the area that interests you could be roads. You’re trying to find the roads that will help you travel to your farms versus if you’re a civil government, for example, someone in San Francisco who is doing city planning, the things you care about will probably be buildings or infrastructure, and not so much about the road itself to a farm land area.

    Lisa Huang-North: Using these sample lessons similarly for you, when you’re planning your career pivots or career changes, that will be my question to you, what are your areas of interest? That can be an industry, a vertical, perhaps you’re really into tech or you want to try out finance or non-profit. Maybe it’s a skillset that you want to gain along the way, or perhaps it’s really about a national or geographic location, you want to move to the city or you want to be closer to family. So those are interesting points to consider around your area of interest.

    Lisa Huang-North: In my case, it was a combination of all of those when I did my first two career pivots, I will say. I started off in Chicago, my career as a mutual fund data analyst. So, that was at Morningstar. And one of the things that I personally felt was really important was a chance to work abroad because I think it’s important to learn about different culture and get a chance to work and live in those places [inaudible 00:39:30] traveler.

    Lisa Huang-North: And that’s what brought me to my first opportunity where the company went through a merger and acquisition and I volunteered, interviewed, and ended up moving to Cape Town, South Africa, where I headed up the data operations for our Sub-Saharan African office. And that’s the picture on the left. And after doing that for a couple years, I realized, hey, data analyst is great. I get to learn a lot about data operations and logistics and business analytics, but I really want to do something more creative now. And I love something that’s more customer facing and somewhere where I can work on my marketing or communication skills. So that was my second pivot where I moved and became a food writer. I know, I know a little off course, but it was something fun. I was in my early twenties and for me, it was about the skillset that I wanted to gain and in the immediate format.

    Lisa Huang-North: All right, lesson number two: what is your time of interest? A lot of the time, our satellite operations team needs to know the targeted time period our customers and users will want to see imagery of. Again, going back to the earlier examples, if you’re in agriculture, for example a farmer, your time of interest is probably quite seasonal. For example, with this picture, you actually see a lot of the circular fields that you’ll spot throughout the U.S. And in their case, their time of interest would probably be spring, because they’re planning for the growing season and they really need to know what the health of their fields is. However, going back to civil government, if you’re looking at zoning or city planning, or even thinking about where you want to develop the city, building more infrastructure, building new highways, some of those times of interest could be longer term instead of a season. You’re looking at year or even multi-year horizons.

    Lisa Huang-North: So think about that when you’re going through a career change or planning for it, what is your time of interest? Are you looking at something that will happen within the next 12 months, two years? And when you do make that leap into your new role, how long do you want to be there? Is there a stepping stone to another bigger career pivot, for example, if you’re moving to a new industry or is it a way for you to grow and really deepen your expertise, for example, within the industry or within the field. And feel free to put your thoughts in the Q and A as well, it’s always fun to make it interactive as you are pondering through these lessons.

    Lisa Huang-North: So in my case, I would say while I was becoming a food writer, I fell into digital marketing, because a lot of writing and communication are augmented by social media. And from there I discovered one of my passions, which is public speaking. So for me, my time of interest at the time was really to hone my public speaking and communication skills. And one of the capstone projects or goals I set for myself was to speak at a TEDx event. And at the time, Cape Town held or organized various TEDx events; there are ones organized by the university and ones organized by the city itself. And I was able to, again, submit a talk proposal, be selected, and present. And that was where I had the unique opportunity to meet Archbishop Desmond Tutu as well. Still one of the highlights of that time of my life.

    Lisa Huang-North: And carrying that forward, my next time of interest was a two to three year horizon, where I said, “I have my data analytics skills down. I have my creative marketing skills down. What do I want to learn next?” And I really wanted to be able to build a product, so that I’m not just talking about it or selling it or analyzing it, but building the end-to-end user experience. And that’s what brought me to my next pivot, into a full stack software engineer role. I went through a coding bootcamp where I really learned the full stack: Ruby on the backend and JavaScript on the frontend, using frameworks such as Ember.js and React.js. And that’s the photo you see on the top right. Again, I like to have milestones or capstone projects for myself, and for that one, I really wanted to present some of my learnings in the form of a conference talk. And I was able to present at GDG in Madrid, that’s Google Developer Groups, during my travels when I was in Madrid. So think about the time of interest as you pursue your next career change.

    Lisa Huang-North: All right, lesson number three, and I think this one is actually one of the most important ones. It’s a reasonable or logical extension from area of interest and time of interest: what are your success criteria? Using the earlier examples, if we’re looking at this as an agricultural farmer, the image on the screen is probably not very successful, because I don’t see a lot of farming or agricultural land near downtown San Francisco. Whereas if the photo was of [inaudible] with garlic farming, or even of Napa Valley with the wine industry there, that probably makes a lot more sense and that image would be successful, right?

    Lisa Huang-North: But again, going back to the city, if you are the San Francisco government and you’re doing city zoning and infrastructure development, this image is probably perfect for your use case. You’re able to see downtown, you’re able to see the Embarcadero. And in fact, you can even see the Presidio at the top and the bridge, the Golden Gate Bridge. And even with Karl the Fog, the clouds (we’re always looking out for cloud cover at Planet), even though the clouds obscure the left side of the city, you really get to see 90% of the city.

    Lisa Huang-North: So this image, for civil government, would be successful. So, linking to that, what are the factors in your success criteria? Is it about the job, the scope of the role? Maybe it’s about salary, because you’re at the time of your life where you need to provide for your family and financial stability is key. Or perhaps you’re younger and earlier in your career journey, and for you, personal growth and learning is the key factor in your success criteria. So think about that as you’re planning your career change and planning for the next pivot.
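
Lisa frames her three lessons in the same terms a satellite tasking request is specified in: where to look, when to look, and what counts as a usable result. As a rough illustration only (field names and values below are made up, not Planet’s actual tasking API), such a request might be sketched like this:

```python
# Hypothetical tasking-style request illustrating the three lessons.
# Field names and values are illustrative only, not Planet's actual API.
tasking_request = {
    "area_of_interest": {                      # lesson 1: where do you want to look?
        "type": "Polygon",
        "coordinates": [[[-122.52, 37.70], [-122.35, 37.70],
                         [-122.35, 37.83], [-122.52, 37.83],
                         [-122.52, 37.70]]],   # a rough box around San Francisco
    },
    "time_of_interest": {                      # lesson 2: over what period?
        "start": "2020-03-01",
        "end": "2020-05-31",                   # e.g., spring, for a growing season
    },
    "success_criteria": {                      # lesson 3: what makes an image usable?
        "max_cloud_cover": 0.1,
        "min_area_coverage": 0.9,
    },
}
```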

    Lisa Huang-North: In my case, I would say that through those different career changes, initially the success criteria were pretty immediate. Which are: what skills can I learn? And am I having fun with it? Am I having fun while I’m changing these different jobs or learning new things? And I would say, on the top left, this was at a friend’s wedding in Durban, South Africa. And for me at the time, the social aspect was a huge thing, too. I really wanted to meet people. I wanted to experience different cultures, and those lifestyle choices were integral pieces of my success criteria beyond professional growth.

    Lisa Huang-North: And slowly, as I moved back to the U.S., I would say that my success criteria have changed over time. And now, instead of just focusing on immediate and personal gains, I’m really looking at how I can be closer to family and what that means for my lifestyle and what I want in the long term: starting a family, for example, or mentoring other women in tech. And that’s how I’ve been involved in Women in Product and Tech Ladies. And in some ways, I’m still trying to stay connected with my roots from when I ran a startup, by attending startup conferences and just keeping a finger on the pulse of what’s happening in the startup space. So that was a really key shift, from personal growth and lifestyle to professional growth, family, as well as mentorship impact.

    Lisa Huang-North: And that ultimately was what brought me to Planet. I think, as Adria mentioned, a lot of us here at Planet are fully aligned with Planet’s mission. And one of the success criteria for me when I went through the latest round of job searching was impact. I really wanted to join a company where I myself can contribute to something that is impactful at a global scale. And really, Planet way surpassed that and then some, because I would say beyond global, this is really at a planetary and species level. And hopefully, with the use cases I have shared, you can see how it impacts industries. And I’m sure some of the speakers later will share even more interesting stories, such as forestry or crisis management, and you’ll get to hear a lot more. So take this time in the Q&A area: if you can, think about what your success criteria are and start sharing them with us.

    Lisa Huang-North: So finally, savor the journey. Bringing back the three lessons about area of interest, time of interest, and your success criteria, another thing to remember is that while we are in the midst of a career change or any pivot, the uncertainties are probably quite stressful. And you may feel like you don’t really know where you’re going, or whether you’re going to be able to attain the goals you have set out for yourself. But as the famous saying goes, hindsight is always 20/20. And while you’re in it, you may feel like you’re going through a rough divergence, snaking around from place to place, and it doesn’t feel like a linear path. But looking back, or if you zoom out and take a bird’s eye view, you’ll probably realize that you’ve made something beautiful and you have created this fantastic journey for yourself, where all those different skills and experiences you picked up along the way were pieces of a puzzle. And ultimately, when you piece all of them together, they look really stunning.

    Lisa Huang-North: So I hope that will help to lessen some of the stress and anxiety you’re feeling as you push through these uncertain times. And to close, obviously, if you have any questions, feel free to reach out and let’s chat. You can connect with me on Twitter or on LinkedIn. I will be here for the networking event later on as well. So definitely reach out, and we are hiring, so I’m always happy to chat about Planet. Thank you.

    Angie Chang: Thank you, Lisa. We are running a little behind, so we’re going to skip the Q&A but feel free to ask the questions and we will ask Lisa and we will share them later in a blog post with everyone. But right now our next speaker is Sara. And we’ll bring her right up. Hey, Sara.

    Sara Safavi: Hey, how’s it going?

    Angie Chang: Good. How are you?

    Sara Safavi: All right.

    Angie Chang: So… you can get your slides…

    Sara Safavi: Mm-hmm (affirmative).

    Angie Chang: Perfect. So Sara, by means of intro and [inaudible]. She leads the developer relations team at Planet Labs. Welcome, Sara.

    Sara Safavi: Thank you. All right. So yes, I will get started. Like Angie said, I lead the DevRel team here at Planet Labs. And what I want to talk to you all about today is my experience working remote. I’ve been working remotely, both here at Planet and prior to Planet for about five or six years. So about three years here at Planet and then a couple different companies before. Along the way, I’ve had to pick up some new habits, some new practices and ways of working in order to make my stay in Remotesville as a remote employee sustainable.

    Sara Safavi: Tonight, I just wanted to share some of those tips with you and go through them really quick. I want to give you a starting point, not so much teach you everything, but a starting point you can reference if you’re also somewhere at the beginning of this journey. I know a lot of us are, especially in the last couple of months, so it’s a topic that we’ve all been talking about. And this, if you ask somebody for their one tip for working remotely, this one is probably what you’ll hear most of, establish a routine, make sure you have a routine.

    Sara Safavi: I’m putting this first because it is so common that you’ll hear it. I have a couple of things I’ll mention after this that are less common, but I do think that this is important. But something important to notice here is the word “new”, because I’m talking about establishing a new routine. You need to develop some new routine that works for you, because this isn’t the same as your pre-Remotesville routine. Your life is no longer in the same patterns. You’re not going to get up in the morning and pack a lunch, probably. You’re not going to get into your car and stop at the gas station on the way. You’re probably not even going to put your shoes on in the morning.

    Sara Safavi: So it’s a completely different scenario, which means it’s going to take a different routine. But routines are still important, because our brains can be stupid and we want to trick them. A routine helps you trick your brain into understanding that we’re getting ready for work, we’re going to work, we’re no longer sitting at home in bed; it’s not the weekend, it’s still a weekday. So take that time to get dressed in the morning, do your hair, put on something that makes you feel powerful and professional. It really helps separate that home-versus-work situation in your head.

    Sara Safavi: So build a morning routine that takes care of you. Maybe do some yoga, meditate, go for a run, whatever it takes to establish that new routine. But some other things that people don’t necessarily talk about, a friend of mine shared this concept with me a couple of months ago, and I really love it. So I had to stick it in here. Teach yourself and give yourself permission to put your body first. What I really mean by this is a lot of times when we’re working solo at home, it can become really easy to just stop listening to our body’s needs. If we’re not changing what we’re doing or interacting with other people, if we’re just sitting at our desks for eight hours a day with a cat or a dog sitting under the desk, then you can really start ignoring your own body’s needs.

    Sara Safavi: So if you catch yourself feeling out of sorts, or not able to get into that workflow like you usually do, or just feeling like something’s wrong, or you keep beating your head against the same bug for 10 minutes, take a minute and check in with yourself. See if there are some of your body’s needs that you’ve been ignoring. Did you skip lunch? Have you not stood up from your desk for four hours? Since you don’t have a water cooler to walk towards, maybe you forgot to get a drink of water; hydration is important. But just take a moment and check in with yourself, because a lot of times, the ways that we’re feeling are actually directly related to ignoring what our body’s asking for.

    Sara Safavi: And similarly, talking about stepping away from your desk, when you’re working remotely, you really have to make space for scene changes. If you’re in an office, many times a day, you’re going to get up, you’re going to go to a conference room, you’re going to go visit your coworker’s desk, you’re going to go to somebody else’s desk and ask to see what they’re working on. You’ve got all these opportunities to change your scene, but when you’re working at home, you don’t have those opportunities anymore. So you have to deliberately make space for them. Schedule them into your daily routine. Maybe you’re going to take your dog for a walk for a half hour every afternoon. Put that on your work calendar. Or maybe every Monday morning, you water all your plants, put that on your calendar. Put dancing breaks on your calendar, I have friends that do that and I love it. You’re working remotely though, your schedule can be flexible, maybe you can do a yoga class at 1:00 PM. Maybe you have the freedom to do that, but you have to deliberately seek out those opportunities to change your scene.

    Sara Safavi: Similarly, you have to seek out connection. You really have to rethink what it means to make connection. If you’re working remotely, like I said, you don’t have those coworkers desks to walk to. You don’t have a water cooler. You don’t have a break room to go make a cup of coffee or grab your lunch and heat it up. You don’t have those natural opportunities for connection. So as a Remotesville citizen, you need to be deliberate and intentional about this. Instead of just telling a coworker on Slack, “Hey, we should get coffee sometime,” you should send them a calendar invite for 2:00 PM on Wednesday and say, “Hey, I’m going to be on Zoom, having coffee. Let’s chat.” Make it an intentional and easy way for them to accept and say, “Yeah, let’s connect.”

    Sara Safavi: Find opportunities to network. Find a network of other people working remotely, whether it’s at your current company or friends that you know who are in different companies. And if you don’t have a network already and you can’t find one, maybe that’s a perfect time for you to make your own. Something that’s really great that we overlook in remote work is coworking. It can be really great to just cowork with somebody. And I don’t mean an active Zoom chat, like a coffee break, where you’re talking back and forth, but maybe you just open a video call with a coworker and you guys just sit there in silence doing your own work together. It’s really companionable.

    Sara Safavi: So rethinking what we mean when we’re thinking about human connection and then being deliberate and intentional about it, is what’s going to make that remote work environment more sustainable. Something to watch for is to be aware about the creeping attraction of home comforts. So if you’re working in Remotesville, you’ve got a comfy couch, you’ve got a comfy bed, you’ve got all of the comforts of home, but I strongly recommend that you don’t work from your bed.

    Sara Safavi: So I know Deanna is going to talk to us later about satellite operations from bed, and I totally fully endorse it. I think that’s awesome. But what I mean when I say don’t work from bed is, don’t make this your normal Monday to Friday, nine to five office space. Like I said, brains are stupid. You need to trick your brain into understanding home versus workspace. You have to use sensory cues to signal that difference. You have to let yourself close an office door at the end of the day. So maybe you don’t actually have an office at your house, but maybe you have to mentally be able to close that door.

    Sara Safavi: If you’re working from your bed all day, it’s super comfortable. It’s awesome. Maybe you’re even really productive, but then the problem comes when it’s time to go to bed and you want to sleep, but your brain is like, “Oh, this is where I’ve been working all day.” So you start thinking about work again, and your brain starts turning the last problem you’re working on over in your head. And it’s really difficult to have that isolation. So maybe at home, you don’t have a lot of space, maybe you’re working from your dining table. That was me for the first two years of my remote career. But something you could do is put a lamp on that table and turn that lamp on only when you’re working. And when you’re done working, the lamp’s off. Little stuff like that, those sensory cues can really make a difference in being able to mentally close that office door.

    Sara Safavi: I’ve given you a lot of advice and I do want you to remember, these are interesting times where we’re living through right now. This isn’t the normal time that you would be switching to working remote in tech. So give yourself permission to practice a little self compassion and be kind to yourself, but also be honest because compassion doesn’t mean lying to yourself. So if you forget to step away from your desk for eight hours, or maybe you fail to put anything besides coffee and LaCroix in your body since 8:00 AM today, it’s okay. But it’s important to be honest and name that and understand that it happened and then just try again tomorrow. You understand that it’s important to listen to your body, to stay hydrated, to take those opportunities for scene change, and just try again tomorrow.

    Sara Safavi: So try to create a routine that works for you. A new routine. You’re not going to make your old routine work here. Take breaks. Remember to move around. Listen to your body and brain’s needs. Intentionally seek out human connection and make invitations to people that are easy to act upon that are not passive. And don’t let comfort creep overtake you. Try not to work from bed all day every day. Don’t ignore your body and your brain’s needs. Don’t skip meals. It’s okay to take a break and step away from your desk, but above all, don’t be too hard on yourself.

    Sara Safavi: So I don’t know if we have time for Q&A. I would love to take questions if I can, but otherwise that’s my contact info. I would love to hear from any and all of you.

    Sukrutha Bhadouria: That was great. Thank you so much. We’re definitely going to take questions later, like Angie mentioned, but thank you so much. All right, next up… Barb is a software engineering manager and developer on the applications team at Planet. Take it away, Barb. Welcome.

    Barbara Vazquez: Thank you. Hey, everybody. My name is Barbara Vazquez. I go by Barb, and I’m a software engineering manager and developer as well at Planet. A little bit about myself: I was born and raised in Puerto Rico. I have been working in the geospatial industry as a software engineer since 2008, when I moved to the DC area. Right now I’m in Maryland, but I’ve been in the DC area since then. I joined Planet about three years ago, in 2017. And I’m part of the web applications team. We build some of the tools that help people have easier access to our data.

    Barbara Vazquez: The main thing, if you’re familiar with Planet, is an application called Planet Explorer. If not, go check it out: planet.com, Explorer. Now, what I’m going to talk about today is Agile Development and estimation. It’s mostly focused on software because I’m a software engineer and we do Agile Development at Planet, and these are some tips and things that might be useful for people doing Agile. Even if you’re not doing Agile, thinking about estimation and how long something will take you to do is useful on a day-to-day basis. But without further ado: if you’ve done Agile Development and you do the daily scrums or the daily meetings, you’ve had these thoughts: what are points?

    Barbara Vazquez: Why are people asking me so many questions so many times, when will it be done? Why do I have to give status every day? And it can get tiresome. And you might just want to flip the table and say, this is not what I signed up for. This is not why I want to do software engineering. But through the years, I’ve learned that it can work in your favor. It can actually help you be more organized and communicate better, to have less stress.

    Barbara Vazquez: So, estimating with points, if you’re not familiar with Agile or points: points are a system that tells people, mostly managers, how difficult you think a thing is and how long it will take you. But in my perspective, yes, that’s one benefit, to tell your manager when things will get done, but it will also help you be honest with yourself.

    Barbara Vazquez: Can I really do this? Is two weeks enough? Or however long you have to develop something. That doing the mental exercise will get you in a better spot where you might not need to pull all nighters. If you have to work weekends to meet your deliverables, you’re probably signing up for too much. Or you might be underestimating what is being asked from you.

    Barbara Vazquez: In Agile, the way it works is you sign up for work and you have X weeks to do something. I’ll use our example: we do two weeks of development. If after those two weeks you’re rolling things over every time (rolling over means that you did not complete them), that means something is wrong in the process. It’s not necessarily you. It’s a team thing. The work is being underestimated.

    Barbara Vazquez: Scope creep happens. You’re midway. You’re almost done. And then somebody is like, did you think about this? What about you do that? And you go on a tangent and you forget about your original goalpost, or the biggest one that nobody wants to admit is you probably don’t have enough information, but how do you tell your manager that you don’t have enough information?

    Barbara Vazquez: Shouldn’t you be able to do it on your own? Not really. That’s the whole point of Agile and team development. And points are there to help you communicate that.

    Barbara Vazquez: How to start doing better estimates: one thing I do with my team is ignore numbers. Just give me T-shirt sizes: small, medium, large, or extra large. Extra large: can I do this in two weeks? If it’s an extra large, no. It probably needs to be broken apart; you probably need to talk more about it. A large will probably take me the two weeks; I’m treading on the borderline of not completing it, but let’s give it a shot and see how it goes. Medium: I can get this done. I don’t know how long it will take me, it’s definitely going to be more than a day, but I can get it done. And small is: I can do this with my eyes closed. It doesn’t matter.

    Barbara Vazquez: That’s my rule of thumb. When I go to do estimates, it’s: give me a sense of how you feel about this, so that we can have that conversation about how long it will take. As soon as you start doing this mental exercise, you’ll get into a better habit and you’ll start recognizing better: I don’t have enough information, or, this is super easy, why am I even thinking about it? Let’s get it done.

    Barbara Vazquez: So once you get the T-shirt sizes down, you can map this to whatever point system your team uses if that’s the preferred methodology. A lot of people use the Fibonacci sequence where it’s one, two, three, five, up to 13, where a 13 is the extra large equivalent.
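
As a tiny illustration of the mapping Barb describes, the T-shirt-to-points translation can literally be a lookup table. The point values below are one possible team convention, not a standard:

```python
# Hypothetical mapping from T-shirt sizes to Fibonacci story points.
# The exact values are a team convention, not a standard.
TSHIRT_TO_POINTS = {
    "S": 1,    # "I can do this with my eyes closed"
    "M": 3,    # doable, definitely more than a day
    "L": 8,    # will probably take the whole two-week sprint
    "XL": 13,  # too big: break it apart before committing to it
}

def estimate(size: str) -> int:
    """Return the story-point estimate for a given T-shirt size."""
    return TSHIRT_TO_POINTS[size.upper()]

print(estimate("m"))  # 3
```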

    Barbara Vazquez: So once you get used to this and you’re able to do T-shirt sizes, you can move up to using the point systems. In any case, even if you don’t do Agile, thinking about your tasks in T-shirt sizes can help you think about difficulty, can help you keep yourself organized, and is just a good mental exercise about what you need to get done that week.

    Barbara Vazquez: The other point (two points, no pun intended) is keeping your other responsibilities in mind. Add some buffer. You might only be able to sign up for, keeping with the example, two medium things, because life happens. Add some buffer; COVID has taught us that life is unpredictable and your normal cadence is not the same anymore. Distractions happen, you might have family at home. Take that into consideration as well when you’re doing these estimates.

    Barbara Vazquez: And the other thing to think about with points is that they help you negotiate. They help you make sure priorities are clear: what needs to be done first versus what needs to be done later. If your plate is full, whether it’s with actual tasking or with life, use the points to help you drive conversations. I can only do so many medium stories; if I sign up for one more, I will definitely roll it over, because that’s what I’ve learned.

    Barbara Vazquez: And in the end, having slightly more predictable cadence is valuable for everybody. And again, I say slightly because life happens and we cannot be 100% predictable, but we can get there. And that’s all I have. Thank you everybody. I know we don’t have time for Q and A, but that’s my email, barb@planet.com. If you want to reach out or we can talk later.

    Angie Chang: Awesome. Thank you, Barb. That was really great. I’m going to find Kelsey. Video, it’s perfect. Great. We can see you. So Kelsey is a space systems engineer at Planet. Welcome, Kelsey.

    Kelsey Doerksen: Thank you. Perfect. So good evening, everyone. My name is Kelsey Doerksen and I am a space systems engineer at Planet. I started about four weeks before work from home was an order for the San Francisco office. So I got only a little taste of what it was like to work in the physical San Francisco office, but I’m really happy with my past five months being a part of the team.

    Kelsey Doerksen: And today I’m going to be talking a little bit about how to handle big data in space and the different machine learning projects I’ve been a part of over the past few years. And so I’m just going to jump right into it. So first I wanted to start off with what is machine learning and what do I really mean by big data?

    Kelsey Doerksen: So big data is really just that, it’s a large volume of data or a lot of data. And we use machine learning with this big data to seek statistical patterns, to enable computers and algorithms to make either a classification, such as differing between pictures of dogs and cats, or prediction about the data.

    Kelsey Doerksen: I really like this three step image here that basically breaks down what machine learning is really at a high level, where you start with this big conglomerate of data, you can’t really make sense of it or extract any meaningful information from it. You apply analytics to it. And in this case it would be a machine learning algorithm. And from those analytics, you’re able to make informed decisions about the data in question.

    Kelsey Doerksen: I’m going to be talking about three different projects I’ve worked on at a very high level. Don’t be worried if you don’t know anything about machine learning. And I’m going to start off with my first project I worked on, which has to do with machine learning on Mars.

    Kelsey Doerksen: For those of you who are unfamiliar with the Mars Exploration Rover mission, this was a NASA mission that launched in 2003 and sent twin Mars rovers, Spirit and Opportunity, to the surface of Mars. Unfortunately for the Spirit rover, its wheel got stuck in the Martian soil. You can see that in the black and white gif image there, which was taken by the Spirit rover itself. And unfortunately the mission was lost in 2010 for the Spirit rover, because its wheel was stuck in the sand and they weren’t able to get it free.

    Kelsey Doerksen: How could we have used machine learning to prevent this from happening on future Mars rover missions? As we know, Perseverance is launching, hopefully soon, barring any delays. This is a project I worked on at the NASA Jet Propulsion Laboratory called the Barefoot Rover project. Essentially, the purpose of the Barefoot Rover project was to use what is physically felt by the Mars rover wheels to detect different things about the surface the rover was rolling across.

    Kelsey Doerksen: My work was specific to making sure the wheels were not slipping or sinking into the different types of sand material we had at the JPL campus. I also worked on terrain classification and on detecting whether there were any subsurface rocks that could penetrate and cause damage to the wheels.

    Kelsey Doerksen: How this worked from a machine learning perspective, at a very high level: essentially, what we had was a yellow pressure pad wrapped around the outside of the Mars rover wheel. We took those pressure pad readings and trained a classifier on them to detect the things listed at the bottom of the slide there. So we were able to tell the hydration content of the soil, do anomaly detection, assess the safety and stability of the rover, detect slip and sinkage (which is what I worked on), do terrain classification and rock detection, and estimate other terramechanical properties.
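
At a high level, the setup Kelsey describes (pressure-pad readings in, labels like terrain type out) is a standard supervised-classification problem. The sketch below is illustrative only, using synthetic stand-in data and an off-the-shelf classifier rather than the actual JPL data or model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: each row is a flattened pressure-pad reading,
# each label is the terrain class the wheel was rolling over at that moment.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))        # 500 readings, 64 pressure cells (made up)
y = rng.integers(0, 3, size=500)      # 3 hypothetical terrain classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```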

    Kelsey Doerksen: This is a really cool project I worked on, and it’s going to be implemented on future Mars rover missions. The second project I’ll talk about is machine learning for the sun and for our Earth’s atmosphere. This very terrifying image you see on the slide here is a picture of a Coronal Mass Ejection event. A Coronal Mass Ejection event is a huge explosion on the surface of the sun.

    Kelsey Doerksen: And essentially what happens is these huge explosions send out high energy particles into space. You can see there, Earth is to scale in terms of the size of a Coronal Mass Ejection and the sun as compared to the size of our Earth. The distance is not to scale, but the size of the two planetary bodies is. So why this is of concern other than the fear that it strikes of course from this image, don’t worry. It’s not going to cause any … The flames will not reach our surface. But what they do do is send these high energy particles to our Earth’s atmosphere that essentially push our satellites around. So from a satellite operator perspective, the satellites can actually be moved off of their orbit path and collide with other objects in space, which is obviously really detrimental to the satellite operators.

    Kelsey Doerksen: How can we use machine learning to tackle this sort of problem? Well, we can’t stop these Coronal Mass Ejection events from happening. Pictured there is a gif image from the SOHO telescope showing what a Coronal Mass Ejection looks like. So we can’t stop these huge events from happening, but we can at least try to learn as much as possible about them and how they are affecting our satellites. And this was my master’s thesis work: using satellite accelerometer data to detect these solar storms. So I mentioned before that these solar storms send out huge amounts of high energy particles, and they reach our Earth’s atmosphere. The way you can think about this is: if you’re walking outside and it’s very, very windy and you’re getting blown back by the wind, that’s kind of what’s happening to our satellites when these particles reach our atmosphere.

    Kelsey Doerksen: And that can be captured in the satellite acceleration data. Of the two graphs I have pictured on the slide here, the top graph shows the acceleration of the satellite when there’s a solar storm happening. You can see the signal is quite erratic, and the linear acceleration of the satellite itself actually doubles or more. Whereas during a period when there is no solar storm, the signal is very periodic and isn’t fluctuating at any alarming rate.
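
One very simple way to capture the “erratic versus periodic” distinction Kelsey describes is to flag stretches where the signal’s variability jumps well above its quiet-time baseline. This is a toy sketch on synthetic data, not the method from her thesis:

```python
import numpy as np

def flag_storm_windows(accel: np.ndarray, window: int = 100, factor: float = 3.0) -> np.ndarray:
    """Mark windows whose rolling standard deviation exceeds `factor` times
    the median rolling standard deviation (a crude storm indicator)."""
    n = len(accel) - window + 1
    rolling_std = np.array([accel[i:i + window].std() for i in range(n)])
    return rolling_std > factor * np.median(rolling_std)

# Synthetic signal: quiet periodic motion with an injected "storm" segment.
t = np.linspace(0, 100, 5000)
signal = np.sin(2 * np.pi * t)
signal[2000:2500] += np.random.default_rng(0).normal(scale=3.0, size=500)
print("storm windows flagged:", int(flag_storm_windows(signal).sum()))
```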

    Kelsey Doerksen: The last project I worked on and want to introduce is, of course, using Planet data, and this is machine learning for our Earth. I’m really happy to be a part of the new partnership between the Frontier Development Lab and Planet, which is an eight-week research sprint with NASA and the SETI Institute. Planet is working with the Waters of the United States team, which is using Planet’s daily imagery with machine learning to assist with drought detection and prediction in small streams in the continental United States.

    Kelsey Doerksen: Pictured here is the Seminole reservoir in Wyoming, United States. And the first signs of droughts can be identified in the small streams that branch off of large bodies of water like these. So by comparing pixel values in these streams using Planet’s daily imagery of sites, similar to this, the team of researchers will be able to detect and predict future droughts across America with the aim to scale this work to other areas across the globe.
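
The pixel-comparison idea can be sketched very simply: average a water-sensitive index over a fixed stream mask on each date and watch the trend. The example below uses synthetic arrays; a real workflow would read PlanetScope bands and a stream mask from GIS data, and the drop-detection logic here is purely illustrative:

```python
import numpy as np

def mean_ndwi(green: np.ndarray, nir: np.ndarray, stream_mask: np.ndarray) -> float:
    """Mean Normalized Difference Water Index over the stream pixels.
    NDWI = (green - nir) / (green + nir); lower values suggest less surface water."""
    ndwi = (green - nir) / (green + nir + 1e-9)
    return float(ndwi[stream_mask].mean())

# Synthetic two-date comparison.
rng = np.random.default_rng(1)
stream_mask = rng.random((100, 100)) > 0.9               # pretend ~10% of pixels are stream
green_apr, nir_apr = rng.random((100, 100)) + 0.4, rng.random((100, 100))
green_jul, nir_jul = rng.random((100, 100)) + 0.1, rng.random((100, 100))

print("April NDWI:", mean_ndwi(green_apr, nir_apr, stream_mask))
print("July  NDWI:", mean_ndwi(green_jul, nir_jul, stream_mask))  # a drop may hint at drying streams
```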

    Kelsey Doerksen: I can’t get to my … There we go. I really hope you were interested and able to follow along with those three different projects I worked on. I think machine learning, it’s such a new and growing field and space is the perfect application for machine learning because we have so much data. And if you have any questions, you can feel free to reach out to me, and thanks very much for your time.

    Sukrutha Bhadouria: That was excellent. Kelsey, are you seeing the comments? Awesome, Kelsey [crosstalk].

    Kelsey Doerksen: I can’t see them, but thanks a lot.

    Sukrutha Bhadouria: Someone said I want to be all the speakers. That was just amazing. I learned so much. So moving on to our next speaker, Deanna. Deanna leads the team at Planet responsible for operating and maintaining the over a hundred imaging satellites, or Doves, currently on orbit. Welcome, Deanna.

    Deanna Farago: Thank you. I’m so happy to be here. This is my first Girl Geek event. I’m excited also just to hear from other Planeteers because, sadly, it’s a large enough company that you don’t automatically know everyone. I love hearing everyone else’s stories, as well. All right, so I will present. Hopefully everyone can see that okay.

    Deanna Farago: All right, as I mentioned, my name is Deanna Farago, and my team and I operate a fleet of satellites that are currently imaging the entire planet every day. Traditionally, satellite operations can be very time and resource intensive. For example, in order to operate one spacecraft, you could have a room full of engineers around the clock, 24/7, monitoring telemetry, contacts, and system performance.

    Deanna Farago: And our satellites operate in a different paradigm and risk posture. This has allowed us to be able to automate a lot of the operations. Even before COVID, we could operate essentially anywhere as long as we had a good internet connection and our laptop. Before I describe what that looks like, it’s important to understand what the mission is and the scale of our operations.

    Deanna Farago: Our company’s mission one is to image the entire planet every day. And you need a lot of satellites in order to do that. And we actually, in addition to operating satellites, we design, build, and test all of our satellites in house. And this is a big advantage for us as operators, because if and when we run into issues on orbit, we can work directly with the engineers that designed the satellite in order to troubleshoot the problems and help come up with on orbit mitigations, as well as design out these bugs/features in the next spacecraft iteration.

    Deanna Farago: And then, once in space, we use the little bit of atmosphere that we have, with something called differential drag, to space out the satellites over time. As one satellite images over a strip of land, the one right after it images the strip of land just adjacent to it, and this essentially creates a line scanner. What you’re seeing here is a 24 hour snapshot of what the imaging strips captured by the satellites could look like. And we have a distributed team operating our satellites: four people in San Francisco, one person in Toronto, and a team of four in Berlin. We send tasks to the ground stations, which then send the schedules up to the satellites. And just a fun fact for this group: at Planet, we have three satellite operations teams, and they’re all managed by women.

    Deanna Farago: The concept of operations is actually quite simple for these Doves. We don’t image over the ocean; we only image over land. Basically, anytime they’re over land, they just point down and take pictures. When they’re over ground stations, we downlink those pictures and logs and we communicate with them. And then in the background we’ll just run maintenance activities; think of them as tuneups, checking in on subsystems, keeping an eye on any degradation that might be happening, or running experiments. In theory, if the satellites are performing well, they should be as easy as this man’s rotisserie grill, where we just set it and forget it. We can even run custom experiments, where we set up the tasks and don’t have to worry about it.

    Deanna Farago: However, things don’t always go smoothly. There are a lot of fires that can happen. And that’s kind of how we know we’ll never really be able to automate ourselves out of a job. These are just some examples of issues that we’ve seen on our satellites. A satellite suddenly starts spinning up, and we have to figure out why it is spinning up, and we need to detumble it. We notice that a satellite has low battery voltage, and we need to take action before it starts browning out and rebooting rapidly. We see that telemetry sensors are reading a zero value: is this a real thing, or is the sensor just being faulty and we have to reset it? Or sometimes satellites are just unresponsive out of the blue, and we have to spend time figuring out: did something change, did something break on the satellite?

    Deanna Farago: Or can we just set up some automation to keep an eye on it? All of these actions started out as manual: we would detect these problems, and then operators would spend time triaging them and eventually taking action. Now our teams have automated responses to all of these, so that they trigger off of telemetry from the satellite. As soon as our automation sees, say, the readings trending up, the robot basically sends a task to respond to it, so an operator doesn’t actually have to. This decreases latency in the system and gets the satellite back into production as quickly as possible. And there are always going to be unknown unknowns, so we’re constantly trying to find these new problems and automate responses to them.
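
The pattern Deanna describes, a rule that watches telemetry and queues a response task instead of paging an operator, can be sketched in a few lines. Names, fields, and thresholds below are made up for illustration and are not Planet’s actual system:

```python
# Hypothetical telemetry-triggered automation rules, purely illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    trigger: Callable[[dict], bool]   # inspects one telemetry sample
    response_task: str                # task to queue when the rule fires

RULES = [
    Rule("low_battery", lambda t: t["battery_voltage"] < 6.5, "enter_safe_charge_mode"),
    Rule("spin_up",     lambda t: abs(t["spin_rate_dps"]) > 5.0, "detumble"),
]

def evaluate(telemetry: dict, task_queue: list) -> None:
    """Queue a response task for every rule the latest telemetry sample trips."""
    for rule in RULES:
        if rule.trigger(telemetry):
            task_queue.append((telemetry["sat_id"], rule.response_task))

queue: list = []
evaluate({"sat_id": "dove-0c42", "battery_voltage": 6.1, "spin_rate_dps": 0.3}, queue)
print(queue)  # [('dove-0c42', 'enter_safe_charge_mode')]
```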

    Deanna Farago: What does a day in the life of an operator look like? Well, we work nine to five and we have a checklist that we rotate among the team members. This enables our team to have weekend and holiday coverage. Even though we’re working normal office hours, we want to make sure that there are always satellite operators with eyes on the system every day. And for this number of satellites, we have to aggregate our data. Aggregating our data is key. What that means is we build lots of dashboards based off of our telemetry and off of the logs from the satellites. That allows us to easily see if there are any satellites that are responding and acting out of family, and that will then trigger an operator to say, this one’s not behaving the same as its fellow satellites, I’m going to dig in further and try to triage it.

    Deanna Farago: We have weekday team standups, and we’re supported by amazing other teams in mission control. Those teams also have their own on-call, so if something does break in the middle of the night that affects the whole fleet, those teams help support us. I wanted to show this because it’s one of my favorite things that we’ve taken a picture of at Planet. It’s actually a series of pictures that we stitched together into a video. Just before a rocket launch, we were able to opportunistically schedule a Dove to take a series of images of a rocket delivering more Doves to space. Just a real quick, cool shot, and it’s shot by one of our satellites. So very cool. And then, sadly, we won’t be doing high fives and hugs in mission control in person anytime soon, like our former coworker here, Rob Zimmerman. But we can still enjoy having first contacts and commissioning with one another virtually. And this is, I guess, our equivalent version of that from a few years ago, when we were able to successfully make contact with 88 satellites right after launch. And with that, that’s all I really wanted to share. I couldn’t go into too much detail, but I’m happy to answer questions if you’d like to email me. I am at deanna@planet.com. Thank you for having me.

    Angie Chang: Thank you, Deanna. That’s really awesome. And you … Let’s see. And now we are going to bring up Elena, who has over two decades of experience in sales and she’ll be telling us her journey.

    Elena Rodriguez: Excellent. Good evening, everyone. I’m so happy for this invitation. I just joined Planet three months ago and I really wanted to talk about … sorry, this is my first time, I wanted to talk about the adventure of making a decision, how important it is for our career. But first, let me introduce what I do here at Planet.

    Elena Rodriguez: As I said, I joined the company three months ago. I’m the salesperson for Mexico, Central America, Ecuador, and the Caribbean. I have been in the business for more than 20 years, and I am so, so honored to be part of the Planet team. I’m so happy and so proud of working for a company that is offering solutions that are critical to mitigating some of the main challenges we are facing right now, like climate change, food crises, fighting poverty, so many applications. And I feel so proud to talk about our business when I go out there and meet my clients and listeners. I chose this topic because it’s something I’ve always been thinking about, and now I have the opportunity to talk about it, so I’m going to take advantage of this: it is how I ended up here. I want to show you my story.

    Elena Rodriguez: Ever since I started back in the 80s, I had all these dreams, like wanting to be a fashion designer, because that’s something I really enjoyed since I was a little girl. But it was difficult for me, because fashion was a very expensive career in Venezuela, and I had a scholarship, so I moved from Venezuela to Seattle to study sales and advertising. I had no choice. So let me tell you, that was the first time I didn’t really make the decision myself.

    Elena Rodriguez: I had to choose what I thought was available for me at that time. I remember my sales teacher, Mr. Fine, it’s impossible to forget him, was always saying that a good salesperson is capable of selling anything. Anything! Selling water to a fish. That idea was growing in my mind, but I was thinking, I don’t know if I’m really right for this career, sales is like, I don’t know. However, when I was a little girl, I was already drawing paper dolls and selling them to my friends at school. I was making bracelets with the colorful telephone wires, and I was selling those. I was a salesperson already!

    Elena Rodriguez: I went back to Venezuela and I graduated, but I was still thinking, I don’t know what I want to do, this is my passion. I want to be a fashion designer. And it took me four years to graduate. It was the beginning of this career in Venezuela. And it was a lot of work. It was very expensive. There were times that I couldn’t sleep, doing all the drawings, the designs, and making all these dresses, this yellow one, and the one along here, I made them. And I was so inspired, because that’s exactly what I wanted to do.

    Elena Rodriguez: But then something funny happened during this practice — is that every time my friends called me and asked me for a dress, because they chose the fabrics, I have my [inaudible] they chose what they liked. And I made the dresses. Then when they came home to pick them up, I didn’t want to sell them! I was like no, I keep them. So I decided that’s not for me.

    Elena Rodriguez: It took me a while, and I was thinking, you know [inaudible], what am I going to do? We are almost through this and I need to make a decision. I needed to plan, because I had strong pressure from society and my country, and I made a decision: I thought it was time for me to have a family. And that was a decision that I really thought about a lot, because I knew what it meant for me at the time: that I had to give up some things that were important, for some time.

    Elena Rodriguez: But with those changes, I always asked myself whether I was giving up my passion to adapt to a new reality, because I had that question on my mind. And the answer is definitely no, I was just growing up. And it was time for me to make that decision, get prepared, and be responsible for the decision that I had made.

    Elena Rodriguez: In 1995, there was a huge revolution in Venezuela, because that’s when the Internet arrived in our country. It was also the time when my boy was born; he’s 25 right now. And I remember I was taking care of my son and I was hearing all this noise outside, my husband and his friends talking about the Internet: let’s go, let’s navigate, let’s check. They were looking for some topics and they were celebrating, and I was feeding my baby and thinking, oh my gosh, I think I’m missing something, something’s happening here. I don’t want to sound selfish, but I had that on my mind: what am I going to do with technology? I don’t know if I can even think about that! Would I ever touch a computer again? I had all these questions at that time. [inaudible] years, things turned out to be kind of difficult in my country, and I had to work. I had to live outside, definitely, my [inaudible]. And I had to go out and find a different job, because I needed to bring money in; I had a family and things were difficult. I was ready to get back on track, but I wasn’t ready for the technology. I had missed a year of all these changes! So selling was becoming more challenging: new terminologies, services, a new way of communicating, new communication skills.

    Elena Rodriguez: The first job I got out there was selling ads for a magazine called Computerworld, with names like Microsoft, Sun Microsystems, IBM, HP, names that had never been familiar to me. It all started to be new, and that was nice; I was in a completely different world. This job was the one that allowed me to meet the people who helped me, who guided me, who inspired me to be in this field. And to be honest, selling had never been so gratifying for me.

    Elena Rodriguez: Five years later, I had to make a very difficult decision. By the way, this week, when I was practicing this presentation, I realized how much your country, your family, your culture really touch you. I didn’t realize it before; it’s like I was keeping that inside myself, but it was a big decision. It wasn’t something that I was prepared for, but that was the time when the political situation in my country was unsustainable and getting even worse. I had a job offer in Mexico, and I didn’t think twice. I moved here. And as you can see in the picture, I think that was my first week here in Mexico, and you can see all the disaster. And remember, I was asking if I would ever touch a computer again.

    Elena Rodriguez: Well, here is a computer, but I was only able to touch it because it was impossible to carry, it was so heavy. Everything has changed since then, as we know; that’s funny. So that’s when I started. For me, that was my own revolution: geospatial, learning new terminologies. It was such an exciting world. I was working with geographers, engineers, and so many people that I met in the industry. I really was in love with this new market. I was, like, wow. And I’m very proud, because I participated in the first high-resolution satellite sale to the Mexican government. And I had all these questions from people: I mean, what is it that you do? Are you a spy? What is it? And that was very funny. But every time I had more challenges, it was time for me to learn more.

    Elena Rodriguez: And that’s really… That was very interesting. I don’t regret it. I’ve been doing this for more than 20 years now. I still live in Mexico. I’ve met such interesting, nice people being in this environment. And I feel pride in selling something that I know is going to go out there to help people, to help people make good decisions. This is something I feel so proud about. And I’m here; this is what I do now. The geospatial world got me. I’ve been doing this work, as I said, for more than 20 years. I’ve been in the drone industry as well; I learned how to fly a drone, and I was so proud about it. This picture here is from mining; it was something very scary, because I was in Peru and I had to sleep there. So, many nice adventures. I am so happy that I decided to stay here. I don’t [inaudible] change from fashion design to the geospatial world. I can always be creative, and I use fashion design for myself. I like clothes. I like that. I mean, that’s inevitable, I can’t leave that behind, but the right decisions brought me here. No regrets about how I did it. I don’t know.

    Elena Rodriguez: As you see, sometimes we need to do what we need to do. I’ve been humble. I know that I’m not an expert. I’ve been learning and I always learn; it’s very challenging, this work. I rely on those experts who are willing to teach me, and I take that very seriously. I understood that there are many interesting ways to explore different options. I learned that we have to capitalize on the knowledge, because after you invest so much time in learning about something, changing is probably not such a good idea.

    Elena Rodriguez: Well, I don’t want to discourage the people that are doing this, but for me, I said, no, this is what I’ve learned, took me a long time. I want to be here. I wanted to be… to decide to be part of the change was very… That’s something that really pushed me as well. So that keeps me investigating and asking. So I’m curious about the technology and especially about the things that I do. Every time I made the decision, of course, I had to ask myself how it was going to benefit or affect my loved ones and understanding that it’s not always about me, that I have to care for my family. The company that I work for, there’s a world outside.

    Elena Rodriguez: I have faith in people. Trust me, I believe in people. I think we can always… We are a big team, and I have a real commitment to the environment. I don’t know, I take care of my garden, my little dog, and I actually care about that. And, well, that’s it. Thank you. I think we don’t have time for questions. Thank you for listening.

    Sukrutha Bhadouria: Thank you so much, Elena. That was amazing. We learned so much from you. So our next speaker is Sarah Preston. Sarah is a marketing manager at Planet Labs, exploring how to use space-based imagery to improve life on Earth. Just pulling Sarah up. Hi, Sarah, how’s it going, right in front of the Golden Gate Bridge?

    Sarah Preston: Thanks. Out here in San Francisco. You can hear me alright, right?

    Sukrutha Bhadouria: Mm-hmm (affirmative), Yeah. So, welcome.

    Sarah Preston: Okay. So I’m going to share my screen and… Okay, can you all see that?

    Sukrutha Bhadouria: Yep.

    Sarah Preston: Okay, great. Thanks. Yeah, my name is Sarah Preston. I’m a product marketing manager at Planet. Now, product marketing can mean a lot of different things in a lot of different organizations. But what I do is work across our product, marketing, and sales teams to really find the right fit for our imagery and to understand what our prospects and audiences need out of imagery, even if they don’t know it yet. As you can imagine, narratives are an extremely important part of what I do. So I’m super excited to be here with you all to geek out about data-driven storytelling.

    Sarah Preston: Okay. First, why do we tell stories in the first place? Stories are paths to community and understanding. Think about all the stories that you loved growing up. There was some kind of connection that you made, either to a character, to the author, or to the setting that drew you in and made it really memorable. You joined that community that was telling that story. And within that story, whether it’s fact or fiction, there was information, and you got to learn from others in that community and to build an understanding about the world around you.

    Sarah Preston: What is a good story? So, “a good story is driven by emotion and balanced by fact.” That’s one of my favorite quotes, actually, that I heard. I can’t claim ownership of it, but, really, when we listen to a great story and we create a connection to a story, we’re really feeling some emotion and emotions can be extremely powerful motivators. I think, in or outside of the workplace even, an emotion can be excitement. It can be fear. It can be confusion. It can be ambition, but also a very human desire to understand the world around us. Emotions, they get us engaged in a story and interested. But facts and data, they keep us grounded.

    Sarah Preston: As an example of how you might be able to see this, Planet took this image of Pripyat, Ukraine back in April. This was when Pripyat was experiencing massive wildfires, right outside of the Chernobyl exclusion zone that you can see in the center there. It was an extremely dangerous time in an already dangerous area. Radiation levels had spiked to 16 times more than usual, and Ukrainian officials were telling the world, basically, that these fires had been controlled and extinguished. Clearly not the case. Now, when we talk about emotions: hearing this story in the news, you can’t help but feel a sense of fear, maybe helplessness and anxiety, and all these emotions are driving, maybe not necessarily the international community, but officials to understand what is happening and how we can solve it. Well, Planet came in and we captured this image, and this image has a lot of data in it to help move these decisions forward and to channel these emotions.

    Sarah Preston: When we look at this image, we can see where the smoke is drifting. That tells us where the wildfire might be spreading to. We can see how far the wildfire has already spread on a grander scale. We can see how close it is to the Chernobyl exclusion zone. How radiation levels might continue to increase. And it tells us a lot about where we can deploy resources and where we can deploy flame retardant and, at the same time, keep all of our first responders safe. We had these emotions that we were feeling at the beginning, and a really good way to think about it is: Emotions, they move us forward. They encourage us to do something, but facts and data, they move us forward in the right direction. They give us an idea or an insight about where to go.

    Sarah Preston: How do we craft great stories? Crafting a great story is really about taking our audience or, on a business scale, our prospects, on a journey from ignorance to understanding. Now, there are not just three key points to creating a great story. This could be an hour-long seminar and I've been to them before. It's such a fascinating subject, but, given the time we have, I narrowed it down to three points that I think are really important.

    Sarah Preston: Know your audience. You want to understand what are their motivations? What are their expectations? Maybe what do they feel themselves on a daily basis? What’s their vocabulary? How do they communicate with each other and interact with the rest of the world? You want to really clarify the problem. Every story has its key conflict. You want to understand: what exactly is the conflict of the story you’re building and what is driving it, whether that is the emotions. And then you want to create some insight. What is the data showing us? This is the second half of the storytelling. How do we get past the conflict and use that data to create insight, to move us all forward?

    Sarah Preston: And here is an example, also at Planet, of how we recently used those points to create a broader story. We started work with the New Mexico State Land Office and they were looking to monitor permitting activity in the Permian Basin. You can see that on the right side of the screen, the sample image. And there’s a lot of mining activity out there, but they just couldn’t see in the way they wanted to.

    Sarah Preston: First, what we did here is we had to know your audience, right? We came to understand how exactly the office itself functions, how it fits in with the broader civil government, what exactly their legal mandate is, who our main point of contact is, and how to best work with them in the first place. This is knowing how to communicate with them. Now once we know how to communicate with them, we can clarify the problem. Why is the office really experiencing this challenge? Why did they have very poor visibility into the more remote Permian Basin? Well, aerial photography, like they'd tried, was very slow and resource-intensive, as were manned surveys. Sending people out there to actually see what's going on was growing expensive. They were growing frustrated, really, that they didn't have a good way to monitor this land.

    Sarah Preston: What Planet did was, now that we knew our audience and had clarified the problem, we were able to deliver the data to really create a good insight to solve their challenge. This is sample data, again, right here on the right of the screen. We deliver near-daily imagery to them so they can see change, activity, and what's actually happening. And once they see that activity, then they can deploy resources, whether that's people or anything else, to solve that issue.

    Sarah Preston: Before I wrap up, I want to put another little plug. If you’re interested in learning more about storytelling at Planet, we actually have a customer conference coming up in October and we’re going to be featuring customers and partners talking about how they used our imagery for their own storytelling and how they’ve been able to build their own paths to understanding and building their own communities. The reason I want to feature this here is because it’s actually completely free this year and online, so very, very accessible. And before I completely close out, my last point, really, is: We are in a hugely data-driven world, and it’s really not so much about just collecting data anymore. It’s about collecting the right data and really understanding how to use it, how we get insights and go from that, go from that ignorance to that understanding to create solutions and to create great stories around our world. I don’t think I have time for questions, but that is my short brief. Again, this is a topic I could talk about at length, but hopefully you captured something out of this.

    Angie Chang: Great. Thank you so much for that, Sarah, and we are now going to be bringing up Brittany, who is a natural disaster research scientist turned businesswoman.


    Brittany Zajic: Alright. Thanks, everyone. Hi everyone. Thanks for the opportunity to speak with you all tonight. My name is Brittany Zajic and I’m on the business development team here at Planet. Business development means something different at every company. And here, we focus on strategic partnerships and the commercialization of new markets. I also lead our disaster response operations, which is part of our social impact initiatives, where we provide satellite imagery to first responders and official stakeholders in the event of a large, natural disaster anywhere in the world. And, while not exactly a natural disaster, COVID-19 is very much a global public health crisis reshaping all of our behaviors and our environmental systems. So, today I’m going to talk about how satellite imagery is helping us better understand the impacts of this pandemic.

    Brittany Zajic: By capturing a series of places in different points of time, satellite imagery is able to tell an important story. When millions of people began sheltering in place earlier this year, many looked to Planet, asking how we could help. So, how can satellite imagery help during a pandemic? Tonight I am going to showcase a few of the many applications surrounding the economic and environmental impacts of COVID-19.

    Brittany Zajic: First, we head to Wuhan, China to see the start of their shelter-in-place. In these first two comparisons, we see a stark difference in traffic patterns in these images taken only two weeks apart, with not a single car in sight starting January 28. And I'll go back one more time. I know this is quick. We then shift to expand further beyond just the limited car transportation and, instead, think about the closures of factories, construction sites, and all other industrial activities that had a dramatic impact on the air quality in regions and parts of China. Here is a comparison over a portion of Beijing from the start of the year on the left to March 2020 on the right. We then shift to Italy, the next epicenter of COVID-19. Many media outlets spoke of the now quiet canals and the cleaner waters running through the city, which was largely captured in this series of images here. I'll run through these one more time. This is October 2019, March 2020, February 2020, and March 15th.

    Brittany Zajic: Finally, we have the next epicenter that migrates to the United States, where it continues to remain today. New York was hit hardest and here we can see the construction of a temporary hospital in none other than Central Park, Manhattan, in the heart of New York. The rest of the United States followed suit soon after and shut down as well, from the Bay Bridge Toll (that you take going from Oakland to downtown San Francisco) to the decrease in air travel (here's a Southern California logistics airport — and just to highlight, we can see all the airplanes stacked up, not being in use), to the empty beaches (of Miami Beach, Florida) and then also the empty parking lots of Disney World in Orlando, Florida.

    Brittany Zajic: So, it’s pretty incredible for satellites to be able to so clearly capture this pause on life that has been experienced, that we’ve all been experiencing these past couple of months. Now, there is no question that one data set has been able to tell a great story, but Planet imagery combined with multiple other data sets is going to be able to tell us even more. So I’m going to spend the remainder of this talk today, talking about EOdashboard.org, an international collaboration among space agencies that is central to the success of satellite Earth observation and data analysis.

    Brittany Zajic: The tri-agency COVID-19 Dashboard is a concerted effort between the European Space Agency, the Japanese Space Agency, and NASA. The Dashboard combines the resources, technical knowledge and expertise of these three partner organizations to strengthen our global understanding of the environmental and economic impacts of COVID-19. So, if we remember back to my earlier example in Venice, Italy, we visually saw the difference in boat traffic and water turbidity. Now, with the EO Dashboard, using information from several different satellites and sensor types, we're able to turn that visualization into a quantitative assessment and observation, which is incredibly valuable when measuring environmental and economic indicators or factors.
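    To give a rough sense of what "turning a visualization into a quantitative assessment" can look like, here is a minimal sketch, not Planet's or the EO Dashboard's actual pipeline: given two co-registered images of the same area from different dates, it reports what fraction of pixels changed beyond a threshold. The synthetic data and the 0.1 threshold are illustrative assumptions.

```python
import numpy as np

def change_fraction(before: np.ndarray, after: np.ndarray, threshold: float = 0.1) -> float:
    """Fraction of pixels whose absolute difference exceeds `threshold`."""
    if before.shape != after.shape:
        raise ValueError("images must be co-registered and the same size")
    diff = np.abs(after.astype(float) - before.astype(float))
    return float((diff > threshold).mean())

# Synthetic stand-ins for two acquisition dates of the same scene.
rng = np.random.default_rng(0)
before = rng.random((512, 512))      # e.g., a turbidity proxy band, date 1
after = before.copy()
after[:128, :128] += 0.3             # simulate a localized change, date 2

print(f"changed pixels: {change_fraction(before, after):.1%}")
```

    In a real workflow the inputs would be calibrated, atmospherically corrected bands rather than random arrays, and the metric would be tailored to the indicator being tracked, whether that is water turbidity, traffic, or air quality.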

    Brittany Zajic: A second example of these quantitative metrics is the air quality in Beijing. Again, deriving these insights from an entire suite of different satellites, the ability to analyze these trends from space aids the effort to fight and defeat this pandemic. I'll leave you by encouraging you to further explore this Dashboard, learn more about how COVID-19 is impacting people all over the world, and explore it through the lens of satellite imagery, because together we can defeat this. Thank you.

    Sukrutha Bhadouria: Hi, thank you so much. That was great. Next speaker is Nikki Hampton. Nikki is Planet’s VP of People and Talent, and she would like to share a few words on their commitment to diversity and inclusion. Welcome, Nikki.

    Nikki Hampton: Thank you. I want to thank all the speakers. Even though I know all of these women, I learned so much about them and the work they do and how they got to where they are. So, I'm pretty excited about that. I mostly wanted to say that at Planet, we have always been committed to diversity, but we are doubling down on our commitment, particularly with respect to attracting and retaining communities of color. And for all of you online, we are looking forward to and eager to work with you, to tap into a broader network of talented folks that you might want to consider referring to us, or applying yourselves, or sharing with people you know. We're super excited to have been part of this and are grateful that you all attended.

    Angie Chang: Thank you so much for that, Nikki. Now we’re going to just move into the Q&A. If there are a few questions, I think we have literally like five minutes till 8:00 PM when we kick off networking. So, if you have any questions, please ask them in the Q&A section and we will be sharing them with Planet and you’ll be getting a follow-up email with job links. They are hiring for some positions like senior corporate counsel, systems engineer, software engineer, account executives. So, you can be like Elena. Sales development reps, customer success managers, and more, and the job links are usually in our Girl Geek X Planet emails that you’re receiving. So, just scroll down and click on those links or forward it to a friend who is looking for a new role.

    Angie Chang: We will be heading over to our networking hour at 8:00 PM. It is on a platform called icebreaker.video and you will have the link in your email, if you look in your email, or we can put it in this chat and we’ll be doing some facilitated one-on-one networking where you literally meet one-on-one with people in a non-Zoom environment. It’s going to be a little more fun and you actually get to talk to people and see their faces. So, if you can hop-

    Sukrutha Bhadouria: And I wanted to call out, thank you so much to everybody speaking and thanks to everybody who has been commenting. I definitely see that it has been super valuable for you all. I wanted to mention, because I've also been getting asked, how you can get your company to partner with us to do a virtual Girl Geek Dinner. Definitely reach out to us through the website, sponsor@girlgeek.io — that's our email — and if you want to reach out individually to Angie or me, our emails are listed on the website as well. The other thing I wanted to say is, if you do get your company to sponsor, you must sign up to be one of the speakers. Own it, use the stage that you are creating for everyone else to promote yourself as well. So, that's all I had.

    Angie Chang: Great. So thank you all for being so good at the chat, and we’ll see you over at icebreaker.video so we can chat one-on-one with everyone. Thank you all and we’ll see you there. We’re going to keep this on so people can see the link and click on it — and hopefully we’ll rejoin and see you over there in a minute. Alright, bye.

    Like what you see here? Our mission-aligned Girl Geek X partners are hiring!

AI Overlords, Battling Covid-19 and Algorithmic Bias: a conversation about the importance of Human Goodness in AI.

Julie Shin Choi, VP & GM of AI Marketing at Intel AI, at Girl Geek X, Elevate 2020

On Friday, March 6th, senior female tech leaders & engineers came together to celebrate International Women’s Day with over a dozen tech talks & panels during the Girl Geek X Elevate 2020 virtual conference. Today’s blog includes takeaways from a talk by Julie Shin Choi, VP & GM of Artificial Intelligence Products & Research Marketing at Intel AI. Prior to joining Intel, Julie led product marketing at HPE, Mozilla, and Yahoo. In addition to the YouTube video replay, a full transcript from Julie’s talk is also available.


One of the reasons that Julie Shin Choi chose to join Intel, she told us, was the opportunity and the scale that Intel's AI technology platform would provide from a career perspective, but she never anticipated falling in love with the people of Intel.

“It is really this human goodness at Intel that keeps me here.”

One of the things that we’ve learned in recent years is that AI is a powerful agent for helping people around the world. Intel CEO Bob Swan shared an example from the Red Cross earlier this year at CES. As we all know, the Red Cross is an amazing relief organization dedicated to helping people in times of disaster.

Julie explains that Intel, the Red Cross, Mila (an AI think tank in Montreal), and other organizations recently formed a data science partnership alliance — their objective was to map unmapped parts of Uganda and to identify, through deep learning, different bridges that the Red Cross could take to deliver aid in times of disaster.

In addition to viral outbreaks (a case of Ebola emerged last June), Uganda is also prone to severe flooding.

“Bridges are often washed out or impassable,” said Red Cross CEO Dale Kunce. That “can mean that your 20-minute drive all of a sudden becomes several hours.”

Ultimately, Intel and their data partners were able to examine huge satellite images and come up with algorithms that could automatically identify bridges that could be utilized by disaster relief workers — they labelled and identified over 70 previously unmapped bridges in southern Uganda.
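As a loose illustration of the kind of model that can flag bridges in satellite imagery, here is a hedged sketch of a tiny convolutional classifier that labels small image tiles as "bridge" or "no bridge." It is not the model Intel, Mila, and the Red Cross built; the tile size, channel count, and synthetic training batch are assumptions made purely for the example.

```python
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    """Tiny CNN that scores a small satellite tile as bridge / no bridge."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 2),   # logits: [no bridge, bridge]
        )

    def forward(self, x):
        return self.net(x)

model = TileClassifier()
tiles = torch.randn(8, 3, 64, 64)            # a batch of 64x64 RGB tiles (synthetic)
labels = torch.randint(0, 2, (8,))           # synthetic bridge / no-bridge labels
loss = nn.CrossEntropyLoss()(model(tiles), labels)
loss.backward()                               # one illustrative training step
print("loss:", loss.item())
```

A production system would train on thousands of labeled tiles and would more likely use detection or segmentation architectures rather than whole-tile classification, but the core idea, learning a label per patch of imagery, is the same.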

This is just one example of why human goodness matters when we think about AI application development. There are endless applications, some of which are especially current and relevant right now.

AI is playing a huge role in fighting the spread of Covid-19.

Everyone has heard about and is taking precautions against the global Covid-19 pandemic, but are we talking about the important role AI is playing in fighting the spread of this deadly virus?

“Globally,” Julie informs us, “We’re using big data — we’re analyzing different databases of where people have gone and the different symptoms that they may present.”

State, federal and local governments are turning to big data to make policy decisions and measure the impact and effectiveness of their policies in near real-time.

“One novel use case that we [at Intel AI] identified in Singapore is of a company that’s using IoT [Internet of Things] technology to help scan people and identify thermal readings — so basically fevers — without human contact.

Intel AI’s technology is powering thermal screening that’s helping keep people safe by catching more Covid-19 cases earlier, and with less manual input from healthcare professionals.

This AI-aided screening method is proving to be about three to four times more efficient, so they can scan 7 to 10 people with this AI device, as compared to using human healthcare practitioners. They’re able to free up limited resources and keep more healthcare workers on the front lines where they’re most needed right now.”

The utilization of AI is really helping manage a lot of the issues related to coronavirus in Singapore.

We’re seeing other innovations like this cropping up all around the world as technologists team up with big data partners, healthcare providers and policy makers to help track and slow the spread of Covid-19.

AI is new to us, so folks sometimes fear the capabilities… but our kids understand it. And they’re the ones who will be programming them.

“I have two children, 8 and 12. A couple of months ago, we were talking about the world, and the one in junior high, he said, ‘Well, I think that my generation is going to be spending most of its time solving the problems that your generation created.'”

Julie continued, “And then my little one, who’s still in elementary, chimed in right away, and he said, ‘With the help of our AI overlords, right?’

These kids already, they’re so aware, and I think the advice to our children would be to really read books, play with one another, learn how to have friends from many different backgrounds, become the best humans they can be, because it’s not going to be robot overlords. We’re going to need good humans to program those AIs.

Good humans are the key.

“In AI, good humans are needed because it’s such a powerful technology and it’s such an accelerant that really depends on algorithms at the heart, and these algorithms are coded based on assumptions that we make about data.

AI starts with data but ends with humans. It’s technology that’s being built for humans. I think it’s very important that we partner with people who really understand the human problems that we’re trying to solve. We need to partner with domain experts.”

AI is going to take a diversity of talents and tools.

There’s really no one size fits all, Julie explains: “We’re going to need CPUs, GPUs, FPGAs, these are all different kinds of hardware. Tiny edge processors. We’re going to need a host of different software tools. We’re going to need data scientists and social scientists, psychologists and physicists, marketers and coders to all work together to come up with solutions that are creative. It’s really going to take a village. Be open-minded.”

“And let us always be thoughtful,” she added.

“I know that in Silicon Valley, people often say it’s important to go fast and to fail fast, but in AI, I don’t think so. I think we need to take time. We should be thoughtful and really, really careful and considerate about the assumptions we make as we create the tools that create the algorithms that feed the AIs.”

Good humans will be needed every step of the way.

A lot of people worry that AI is going to take our jobs and replace humans.

Julie Shin Choi, Vice President & General Manager, AI Marketing at Intel AI

“I’m a firm believer that AI will not be replacing humans, it will be augmenting humans. So it’s helping us, not replacing us.

For example, radiology is a major area that is being transformed by AI faster than most because of the applicability of computer vision for x-ray imaging. "But what we're seeing is that physicians actually are welcoming the help of AI. It's a great double check.

When you have a 97% accurate algorithm that’s going to ensure that your patient gets the right diagnosis — even though the algorithm is sometimes even more accurate than you, especially if you’re tired — it’s an absolutely phenomenal double check. The end goal for the human in that case, in medicine, is to go and help that patient with the most accurate information that the human doctor has.

What we’re seeing is that AI is helpful to humanity. It’s truly an augmenting type of technology and not a replacement.”

We talk a lot about the impact of bias in AI and how to limit it.

“Bias is certainly a problem and it’s something that we, as a community of technologists, policy makers and social scientists — all different backgrounds — we need to attack this together.

A lot of it just comes down to being intentional. There are audits of algorithms. There are ethics checklists, actually. There are best practices that have been set up, and I can actually introduce [the Girl Geek X community] to Intel’s AI for Good leader, Anna Bethke, who is an expert in this domain and a wealth of knowledge.

We need to address bias with intentional and very purposeful conversations, because again, the algorithms are based on assumptions that humans code. So the only way that we can eradicate and deal with the bias issue is by talking to one another. The right experts in the room ensuring and asking, ‘have we checked that bias off the list?’

Don’t just assume that coders know how to create a fair algorithm. I don’t think we can assume that. This is a very intentional action that we need to build into our AI development life cycles. The bias check.”
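For readers wondering what a concrete "bias check" might look like inside a development life cycle, here is a minimal sketch of one common audit: comparing positive-prediction rates across groups (a demographic parity gap). The synthetic predictions, group labels, and the 0.1 tolerance are assumptions for illustration, not a standard from Intel or any particular toolkit.

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])    # model decisions (synthetic)
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_gap(preds, groups)
print(f"parity gap: {gap:.2f}")
if gap > 0.1:                                        # illustrative tolerance
    print("bias check failed: revisit features, labels, and training data")
```

Real audits look at several metrics together (parity, error rates by group, calibration) and, as Julie stresses, put domain experts in the room to decide which trade-offs are acceptable.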

For more from Julie Shin Choi, watch the full video on YouTube, read the transcript of Julie’s talk during Girl Geek X Elevate, or follow her on Twitter.

To be notified of future Girl Geek X events and receive our weekly newsletter, subscribe to the Girl Geek X mailing list.

Interested in partnering with Girl Geek X to feature your female leaders or promote your current job openings to our community of 20,000+ mid-to-senior level women in technology? Email sponsors@girlgeek.io


Girl Geek X Microsoft Lightning Talks & Panel (Video + Transcript)

Like what you see here? Our mission-aligned Girl Geek X partners are hiring!

Angie Chang speaking

Girl Geek X Welcome: Angie Chang kicks off a sold-out Microsoft Girl Geek Dinner at Microsoft Reactor in San Francisco, California.  Erica Kawamoto Hsu / Girl Geek X

Transcript of Microsoft Girl Geek Dinner – Lightning Talks & Panel:

Angie Chang: So hi, everyone. My name is Angie Chang and I’m the founder of Girl Geek X. I want to thank you so much for coming out tonight to the Microsoft Reactor. I’m super excited to see everyone here and to introduce you to all of Microsoft’s girl geeks, to see this amazing art and tech demos. Who here signed up for a demo? I saw a lot of people interested in demos and getting tours, so I’m really excited that you are able to do that. Thank you once again to Microsoft and to all the people who helped plan this night.

Angie Chang: How many of you this is your first Girl Geek Dinner? Wow. And how many of you consider yourself like a regular at Girl Geek Dinners? Thank you so much for coming back again and again. We do this almost every week, going to different tech companies, meeting the girl geeks, and we hope you tune into our podcast. We have a regular podcast on topics from internet security, to emotional security, to management, to working in the Silicon Valley. So please tune in on iTunes or Spotify. We also have a very active social media. So if you follow us at Girl Geek X, you can also tweet and share with Girl Geek X Microsoft tonight and we will retweet and reshare.

Angie Chang: Now I would like to introduce our first presenter. Her name is Kaitlyn Hova and she is the co-owner of Hova Labs, where they have designed and produced the Hovalin, which is a 3D printed violin. Kaitlyn.

Kaitlyn Hova: Thank you so much for having me. This is wonderful. So my name is Kaitlyn Hova. I currently work at Join and I also co-own a company called Hova Labs, where we like to make a bunch of weird projects. It’s kind of like one of those like, “If I had time, why wouldn’t I make this?” kind of companies. So it’s just me and my husband and the biggest thing that we really wanted to do was to find a way to convey what synesthesia was like in real time. Who here knows what synesthesia is? Yeah, it’s not very many people. It’s all right. So synesthesia is a neurological phenomenon in which two senses are inherently crossed, causing sensations from one sense to lead to an automatic but also involuntary experience in another. A version of this is called chromesthesia, which is when people can physically see sounds.

Kaitlyn Hova: I didn’t know this was in any way unusual until I was around 21 years old when I was in my final music theory course and our professor just mentioned, “Isn’t it crazy? That some people can see sounds?” Yeah, I ended up dropping my music degree and going into neuroscience, because that’s way more interesting, right?

Kaitlyn Hova: So, ever since then, I’ve been trying to find a way to display what synesthesia was like, because when you’re discussing it with people, it tends to end up going into the more like psychedelic conversation, and it’s not really. So, how to display it? I play violin, so we thought, “Wouldn’t it be wonderful if there was a violin that we could light up with the colors that I see in real time?” This didn’t exist, so of course you have to go to the drawing board, and the first thing on our list was, “What if we had a clear violin and we just put LEDs in that?” We couldn’t find a clear violin and if we could, it was probably too expensive.

Kaitlyn Hova: So, ended up deciding like, “Well, how hard would it be to 3D print one?” It took a year and a half to figure out how not to make a violin and then to figure out how to. I think we went through about like 30 or 40 iterations because you end up getting really desperate and saying like, “Well, what is the violin anyway?” because it’s really hard to make this. It started out as a stick with strings and then kind of grew from there.

Kaitlyn Hova: So now, here it is. Once we got our first prototype, we ended up deciding that this violin on its own, LEDs aside, was a really great product, so why not release it open source for people to 3D print their own music programs? We’re still seeing a trend in schools where music is systematically underfunded, while these same schools are getting STEM grants, so why not? Seems like a connection there. Thank you.

Kaitlyn Hova violin playing synesthesia

Violinist Kaitlyn Hova plays a few songs at Microsoft Girl Geek Dinner.   Erica Kawamoto Hsu / Girl Geek X

Emily Hove: Let’s hear it for Kaitlyn. Kaitlyn, thank you so much.

Kaitlyn Hova: Thank you.

Emily Hove: This is fantastic. What a great way to start off such an inspirational evening.

Kaitlyn Hova: Thanks.

Emily Hove: So thank you very much.

Kaitlyn Hova: Cheers.

Emily Hove speaking

Program Manager Emily Hove welcomes the Girl Geek X community to Microsoft Reactors around the world, from San Francisco to London!  Erica Kawamoto Hsu / Girl Geek X

Emily Hove: Welcome, everybody. Welcome to the San Francisco Microsoft Reactor and the Girl Geek Dinner.

Kaitlyn Hova: Thank you, Chloe.

Emily Hove: My name is Emily Hove. I’m part of the global Microsoft Reactor program and we have a lot of synergies between Girl Geek and the Microsoft Reactors. Similar to the way Girl Geek inspires and connects women in technology, our Reactors are all about being community hubs and everything that is related to developers and startups, giving developers and startups the tools where they can learn, connect, and build. So, we hope you all find a night that is inspiring and where you’re able to connect and build today.

Emily Hove: If you’re interested in a little bit more about the Reactor program, we’ve got some cards around the room and they talk about some of the fantastic upcoming workshops and meetups that we have. So we’d love to encourage you to check out our calendar of events and invite you all to attend. With that, I’d like to bring up Chloe Condon, who will be our MC for the evening, and help introduce some of the inspiring people and inspiring women in technology that we have for you tonight. So Chloe, cloud developer advocate extraordinaire.

Chloe Condon: Hello. Thank you so much for coming. This is theater in the round. So I’m just going to keep walking in a circle like I’m giving a very serious keynote so you all don’t see my back. Thank you so much for coming tonight. We are so excited to have you here at the Reactor. Who’s first time at the Reactor, this event? Incredible. That is so exciting. I hope we see you here a lot more. If you want to participate in one of the Fake Boyfriend workshops that I put on here, you can build a button to get you out of awkward social situations, come see me after. We are doing those all the time here. They’re so much fun. Also ask me about my smart badge. This is a little scrolling LED badge that we’re probably going to do a workshop for pretty soon, as well. So come see me after if you’re interested at all in learning about those events and we’ll get you signed up for them.

Chloe Condon: I’m going to tell a little story before I introduce our first guest. I am so, so excited to be your MC tonight. I actually met Angie because I went to Hackbright. Do we have any Hackbright or bootcamp grads in the audience? No. Amazing. So, Angie spoke at my bootcamp and told us all about Girl Geek Dinner and I thought, “That sounds so cool. I would love to go to one someday.” So it’s literally a dream come true to be here with all of you today. This is my first Girl Geek Dinner ever, and I get to be your MC.

Chloe Condon: So, I’m so excited to introduce our first speaker tonight. She is incredible. Please, please show everybody how cool your dress is when you come up here, or I’ll be very upset. I would like to introduce Kitty who is going to tell us all about the incredible technology and fashion that she uses to make things like the amazing dress that I’m sure she’s about to tell you about. So Kitty, come on up. All right.

Kitty Yeung Microsoft Girl Geek Dinner

Microsoft Garage Manager Kitty Yeung gives a talk on “Hacking at the Microsoft Garage” at Microsoft Girl Geek Dinner.  Erica Kawamoto Hsu / Girl Geek X

Kitty Yeung: Hi, everybody. Good evening. Thank you so much Chloe for introducing me. In fact, I’m not going to talk about my dress. That’s for the demo later. I’m going to talk about actually what’s behind that, all the innovation work that we’ve been doing at Microsoft. So, I’m the manager of The Garage at Microsoft. How many of you have heard of The Garage before? Some of you, some of you I’ve met actually.

Kitty Yeung: So, this is a program that drives innovation, drives a culture of innovation and experimentation. How do we do that? We say, "Doers not talkers." We actually get our hands dirty. When we think about something, we act on it. These are the culture pillars for Microsoft. To a lot of us, when we first see them, they seem like just words, but how do we actually implement them and achieve this? We have all kinds of programs and mechanisms to drive innovation in Microsoft. Hacking, global sites, internship programs, an experimental outlet for how we ship projects out, an intrapreneurs program, and storytelling. So I'm going to go into each of these.

Kitty Yeung: Hacking at Microsoft has become part of the culture. We actually organize the world's largest global hackathon at Microsoft, and The Garage is the organization that organizes it. Guess how many people attended this year? Globally, there were 27,000 people attending our hackathon, and everyone was excitedly bringing their great ideas to the hackathon and forming teams all around the world. Whether or not you know them, whether or not you're from the same org or the same team, you can put your skills together and build something that you feel passionate about. We have thousands of projects submitted to the hackathon every year, and The Garage helps people not only get these ideas submitted, we help them grow their ideas into prototypes, and we help them ship.

Kitty Yeung: Satya is a big supporter of our hackathon. He walks into the tent and looks at the projects. He said last year, "Bigger ideas, more customers." So, we can hack on anything we want. It could be small things. It could be something that we use every day. It could be something that has real impact on society. We can really help our customers achieve their industry-scale ideas. So we also work with our customers, and we bring our customers here to hack.

Kitty Yeung: The experimental outlet, we also call it a ship channel. This is a mechanism for us to take those ideas in but also provide them with a business model, idea building, and a path to market, and we help our employees ship those projects out. So if you go to The Garage website, you will see about 100 projects that are already in the market, and we feature the employees who came up with those good ideas. You can see all the teams on the website, everyone who put their spare time together to really achieve something. We also have very big projects that we collaborated on with industry partners and customers.

Kitty Yeung: The intrapreneurs program is kind of an internal startup program. It enables these hackathon teams to actually pitch their ideas to the leaders and get support. Some of these projects can grow into a feature of an existing Microsoft product, or sometimes they become a product of Microsoft.

Kitty Yeung: We also run our internship program very differently. If you are familiar with traditional internships, usually students come in and they work under one manager in a big team working on a small part of a big project. Instead, our interns come in as a team and inside a team usually we hire like 30 students per site. Silicon Valley just started our first pilot program, so we only had one team, but we have six really, really good students. Usually we’ll have teams of six to eight, and they have developers, usually a PM, and a designer, forming a complete skill set. Then business teams at Microsoft pitch their ideas to our interns and the interns pick which one they like to do, and they drive it like a startup in the company for 12 weeks. Then they can deliver the projects back to the team, or even better, we can ship it directly into the market. It’s a very, very competitive and rewarding program. So if you’re undergrad, think about applying to that internship program at The Garage.

Kitty Yeung: We also engage in storytelling around those ideas and projects that got shipped out. We tell a story, we have a PR team, and you will see a lot of news articles about Microsoft innovation. Pay attention next time you read an article like that to whether they mention The Garage.

Kitty Yeung: The global sites are also a key feature. We have seven global locations right now for The Garage, and we are expanding. Each location has its own ecosystem, and each location has its own facility. We have maker spaces, and we have technologies that we provide to our employees. They can do prototyping, they can bring their ideas to share with their colleagues. We do startup pitching. We do show and tell and workshops to educate our people and also give them a platform to achieve their collaborations.

Kitty Yeung: So these are the seven sites worldwide. We’re in Silicon Valley and we are now called The Garage Bay Area. And as you can imagine, we have a unique ecosystem of a lot of startups, a lot of big companies and universities. So we work with all of these people in the ecosystem and we collaborate to really build projects that can impact the world. So, as I mentioned, we work with our employees and engage with all of our business teams inside Microsoft, and we work with customers. We bring them to work on projects and hack with us.

Kitty Yeung: Here are some numbers. You can see that we have a very global and diverse team, but we actually only have 20 people worldwide. So, those 20 people drive all of the activities that I just mentioned. The 27,000 hackers this year is an updated number; last year it was 23,000. You can see that it's growing every year. It's only going to get bigger. 76 countries participated, and we've hosted more than 100 interns already, from the most competitive schools around our local areas. You can find more than 100 projects that are in the market on the global website. 19 of them became actual Microsoft products, and there are lots of social media posts and news articles about Microsoft innovation. So, make sure you follow us on social media.

Kitty Yeung: Some of the Bay Area's specific projects: Seeing AI. We build a lot of projects that help people with needs, people who have disabilities. Seeing AI is a project that we shipped a few years ago that helps blind people see through technology. You can hold up a phone, and the camera will detect what's in front of you, read it out, and interpret it. It can also detect facial expressions and people's age. So it gives blind people information about their surroundings.
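To make that description a bit more concrete, here is a hedged sketch of the general pattern, not the Seeing AI implementation: run an off-the-shelf object detector on a camera frame and announce what it sees. The random tensor standing in for a camera frame, the 0.7 confidence threshold, and the use of print() in place of a real text-to-speech engine are all simplifying assumptions.

```python
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

# Pretrained COCO detector and its label names (downloads weights on first run).
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]

frame = torch.rand(3, 480, 640)                 # stand-in for a camera frame, values in [0, 1]
with torch.no_grad():
    detections = model([frame])[0]              # dict with boxes, labels, scores

seen = [
    categories[int(label)]
    for label, score in zip(detections["labels"], detections["scores"])
    if float(score) > 0.7                       # keep only confident detections
]
print("I see: " + (", ".join(seen) if seen else "nothing I recognize yet"))  # would be spoken aloud
```

The actual app does far more (text reading, face and scene descriptions, on-device speech), but the detect-then-describe loop above is the simplest version of the idea.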

Kitty Yeung: Sketch 360, a project we shipped just last year, is by an artist inside Microsoft, Michael Scherotter. He had an idea: "Why don't we sketch 360 pictures directly?" So, we can build a full environmental canvas and you can draw anything you want. You can also put that into VR or AR to visualize it. We also shipped some apps last year. Spend is by the MileIQ team. So, lots of local projects. We're just going through our hackathon projects this year.

Kitty Yeung: So personally, that's why I'm also here to do a demo. I've built some of my projects in The Garage to satisfy personal ambitions; anyone at Microsoft can use The Garage as a resource to build their communities and build their projects. I have built a lot of wearable technologies. I'm doing a demo right there. We have these different dresses with different sensors and AI, machine learning functionality, and robotic dresses that I can show you later on. But I also have a passion for quantum computing because of my physics background. I'm a physicist, actually. So, I saw the need to build a community of people learning about quantum. This is a study group that I founded in the Bay Area, teaching people how quantum computing works, including the physics, the maths, the hardware, and the software. Any employee with good ideas can do this. We have a lot of employees who want to build, say, an AR tech community; they can come to The Garage and do that. Or if they have a passion for IoT, they can come to The Garage and do that. So, these are just some examples.

Kitty Yeung: So, since Girl Geek is also sort of about careers, I think this will be my last slide, to show you something about your aspiration. This is a guide: see where you are in this chart of Ikigai and figure out what you would like to be. For me, I can feel Ikigai at Microsoft because I'm doing something I love, something the world needs, something I can be paid for, which is important, and something I'm good at. If you can get to that sweet spot, that should be your goal. Also, think about how you're aligned to the global goals. I highlighted some of the goals that I could work toward in the company as well as through my personal projects. I would love to expand this, and I think this will be a good guide for everyone, for how we can do more impactful work for the world. Thank you.

Chloe Condon: Okay. Wait. You cannot leave the stage without sharing this dress. I’m going to make you model it. It is so incredible. So, do you want to say a little bit about it first?

Kitty Yeung: Okay. This is one of my designs, among the other ones I brought. All of these prints are my own paintings. This is a painting of Saturn and I wanted to simulate Saturn on the dress. How do I do that? Because Saturn has a ring, so why don’t I make a ring that when I rotate it will show Saturn. It also has an angle detector. There’s an accelerometer in here. So if it achieves a certain angle it will light up like the stars.
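For anyone curious how that angle-triggered effect might be wired up, here is a minimal, board-agnostic sketch. The read_accelerometer() and set_leds() helpers are hypothetical placeholders for whatever the microcontroller's real drivers provide, and the 30-degree threshold is an assumption; this is not Kitty's actual firmware.

```python
import math
import random
import time

TILT_THRESHOLD_DEG = 30.0   # illustrative angle at which the "stars" light up

def read_accelerometer():
    """Placeholder: return (x, y, z) acceleration in g; real code would read the sensor."""
    return random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(0.2, 1.0)

def set_leds(on: bool):
    """Placeholder: drive the LED strip; real code would talk to NeoPixels or similar."""
    print("LEDs on" if on else "LEDs off")

for _ in range(20):                      # real firmware would loop forever
    x, y, z = read_accelerometer()
    tilt = math.degrees(math.atan2(math.hypot(x, y), z))   # angle of the dress from vertical
    set_leds(tilt > TILT_THRESHOLD_DEG)
    time.sleep(0.1)
```

The design choice is simply to derive a single tilt angle from the accelerometer vector and compare it to a threshold, which keeps the effect responsive without any extra sensors.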

Chloe Condon: Amazing, amazing.

Kitty Yeung: Thank you.

Chloe Condon: Thank you so much. When you wear such a fabulous dress, we should have had a catwalk. I’m so sorry everyone. Amazing. Thank you so much, Kitty. I really, really love that and I loved that final slide. I took pictures of it so I can look at it later and map out my own plan. I am so excited to introduce our next guest that is going to tell us all about machine learning. Priyanka, come on up to the stage. I have a little … do you need a clicker? Amazing. Here you go.

Priyanka Gariba speaking

Head of TPM for AI Priyanka Gariba gives a talk on “Leading a large scale and complex machine learning program at LinkedIn” at Microsoft Girl Geek Dinner.  Erica Kawamoto Hsu

Priyanka Gariba: Hi, everyone. First off, I'm not showing off anything as cool as what the other women did, but I also want to say this is my first time here at Girl Geek Dinner and I think this is amazing. Look at the energy, a room full of women. How many times in a day do we get to see that, or even in a month, right? So thank you for having me. My name is Priyanka Gariba and I lead the Artificial Intelligence Technical Program Management group at LinkedIn. My talk today is going to be about how we are scaling machine learning at LinkedIn. This is one of the large and complex programs that has been funded by our engineering group.

Priyanka Gariba: So, I’ve structured my talk into four different areas. I’ll give a quick introduction on LinkedIn and some of the products that are really powered very heavily by machine learning. I will then get into the problem statement of what we are trying to do in order to scale machine learning. Then talk a little bit about our technology, and then wrap it up with sure, we can scale with building a solution and with technology, but there’s also an aspect of people, and so how do we scale that, and what is LinkedIn doing about it? Okay. All right. With that, let’s get started with the vision and mission for LinkedIn.

Priyanka Gariba: Our vision is to create economic opportunity for every single member of the global workforce. Our mission, the way we are going to realize that vision, is of course by connecting the world's professionals to make them more productive. Let's take the example of this room itself, right? So many cool things were shown, so many cool people, so many cool women that we spoke to. Just imagine if we were connected to one another; there's so much value we can bring to each other's lives, and LinkedIn can help us do that. So, how are we trying to realize our vision and our mission? Through some of our products.

Priyanka Gariba: I'm hoping, and I think, everyone here at least has a profile on LinkedIn, and if you're not connected to the cool women here in the room, I encourage you, before you leave, to definitely connect with one another. But one of the products that really helps us do that is People You May Know. This is a product line that really helps us build our connections. There is a recommendation system that runs behind it, there are machine learning models that run behind it, very heavily AI powered, and it really allows us to know who the like-minded people are that we need to be connected to, and the value we can bring to each other's lives just by having that connection.

Priyanka Gariba: Then of course there is Feed. Everybody who goes on LinkedIn as a platform is going to see Feed as the first product. Jobs is another product, which is very heavily powered by machine learning behind it. Why am I talking about all these products? AI at LinkedIn is like oxygen, and one thing that all these products have in common is AI. With that, what that means is we know that machine learning is everywhere. It’s powering every single product line that we build, it’s helping us bring the best experiences to all our members across the board. So, because of that one reason, we know that what we need to do is we need to enable more people to do machine learning at LinkedIn.

Priyanka Gariba: So, there are two pieces to my talk. One, which I think I'll dive into more than the second one, is going to be technology. One way we can scale is through technology, by building a solution: how do we enable our machine learning engineers to build and deploy models faster, so that they can bring new experiences to all our members at a faster rate? The second one is by scaling people.

Priyanka Gariba: So, to tap into the exact problem that we are trying to solve, let's look at our machine learning development life cycle. It's as simple as any software development life cycle, right? Basically a machine learning engineer has an idea, there's something you want to solve for, and what are the first couple of things that they would do? They'll think about what machine learning features are available to them. How do you bring all these features together? Try and test it in an offline model, train with some datasets, and once you evaluate it and feel comfortable that this is something good, the next big piece is going to be actually serving it in production and then seeing results through A/B testing and all of that.
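As a toy illustration of that life cycle, not LinkedIn's actual stack, here is a minimal sketch: assemble features, train and evaluate a model offline, then expose a scoring function of the kind an online service would call. The synthetic dataset and the logistic regression model are stand-ins chosen only to keep the example short.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# 1. Feature assembly (synthetic stand-in for real member/activity features).
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Offline training and evaluation.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print("offline AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# 3. "Serving": score a single request, as an online service would.
def score(features):
    return float(model.predict_proba([features])[0, 1])

print("online score:", score(X_test[0]))
```

In production, step 3 sits behind a serving system, and ramp decisions come from the A/B testing platform rather than a single offline metric.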

Priyanka Gariba: I’m not going to dive too much into this. This really just is an extension of that life cycle. Basically you start with an idea and then there are different functions along the way. There is a product management, there’s dev, and the way we really make decisions on product is very heavily powered by our AB testing platform. We make ramp decisions only based on that. Once we see the results, only then do we believe that that is a model that we want to ramp further to our members.

Priyanka Gariba: Why talk about all of this? Why talk about the life cycle, right? If all these products are being built at LinkedIn and if so many people are doing it and all the teams are doing this, what that means is every single team is doing and deploying models in a very different way. There are many, many technologies, they are all on different stacks, it’s not standardized across the board, and one thing we encourage at LinkedIn is for people to move around within teams. So today if you want to work on a Feed team, tomorrow you want to work on a Job Recommendation team, how do you do that? Your stack is different. Half the days are going to be spent in just ramping up.

Priyanka Gariba: So, we introduced something called Productive Machine Learning. Really our goal is to make the end-to-end experience of the machine learning development life cycle more robust, reliable, consistent, and standardized. The experience we are looking for is: as an ML engineer, all you have to worry about is coming up with an idea, and then everything else is opaque to you. There is a big box and you don't have to worry about how you move from one phase to the other: ideation to machine learning features to training to scoring to serving it in production. You don't have to worry about this. And how are we going to do that?

Priyanka Gariba: So, we've put together this program. To give you context, this is a really large-scale program, with about 6,200 engineers across the board working on it, in different geolocations. The way we are structuring it is by talking about three different phases.

Priyanka Gariba: Model creation: going back to that life cycle that you saw, everything from ideation to training and evaluating your model comes under model creation. We have multiple components that blend into that. Then the next piece for us is deployment. Once you believe that your model is really good and ready for serving, you deploy it in production. The third piece, which is not really a phase but something that cuts across, is making sure your quality is accurate, meaning the features that you used for your offline training are very similar to what you see online. So, online/offline consistency.
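Here is one way such an online/offline consistency check could look in miniature: compare each feature's distribution in the offline training data with what is being logged at serving time, and flag big gaps. The synthetic data and the 0.2 tolerance are assumptions for illustration, not LinkedIn's actual checks.

```python
import numpy as np

def feature_drift(offline: np.ndarray, online: np.ndarray) -> np.ndarray:
    """Absolute difference in per-feature means, scaled by the offline standard deviation."""
    std = offline.std(axis=0) + 1e-9
    return np.abs(offline.mean(axis=0) - online.mean(axis=0)) / std

rng = np.random.default_rng(1)
offline_features = rng.normal(size=(10_000, 5))   # features as seen at training time
online_features = rng.normal(size=(1_000, 5))     # features as logged at serving time
online_features[:, 2] += 0.5                      # simulate a skewed online pipeline

for i, d in enumerate(feature_drift(offline_features, online_features)):
    status = "OK" if d < 0.2 else "INCONSISTENT"  # illustrative tolerance
    print(f"feature {i}: drift={d:.2f} {status}")
```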

Priyanka Gariba: So, I just wanted to, because I had 10 minutes, I just wanted to give you a flavor of this big undertaking that we are doing at LinkedIn and also give you a little bit of flavor of how we are structured. Typically, every time we build something, we follow a traditional model. You have a leader, you have multiple managers, you have engineers, and you come up with a goal on a project and everyone works together. This one, we wanted to do something different. What we did is, let’s bring every single person in LinkedIn who is really passionate about solving this problem.

Priyanka Gariba: So, putting together the team, we had everyone across the board, in different geolocations too. There is someone who is infrastructure heavy. There is a machine learning engineer who can really give us input when we are building the solution, so that it's really going to work for them. Then there are product managers, CPMs, and engineers across the board, but it's really all of these coming together, forgetting the boundaries of management, realizing that there is one goal that we have, which is to get an end-to-end machine learning life cycle ready. That was the key thing for us. I already mentioned that, as a team of teams, we're spread across geolocations. That is also one reason why we wanted to do that: we wanted engineers across the board, because if we were solving a problem just for headquarters, which is in Mountain View, we would not be solving for everyone at LinkedIn.

Priyanka Gariba: Then of course, with any product that you build in any company, there is a big piece around adoption. So, for us, the strategy we used was: for the three big phases that we spoke about, let's build small components underneath them and let every product team pick up a component and adopt it depending on what their pain point is. So, for example, if the Feed team is really struggling with how you train a model, then what we wanted to offer them is: pick up that component and get adopted on that. Once you buy into the idea, then slowly and gradually navigate into the adoption of the other components too. This helped both ways. It helped us get real early feedback from our customers and users, and it also allowed us to load balance, so we could develop things while something was already being tested and we were getting that iteration loop from our users.

Priyanka Gariba: So, I spoke about the technology, and I spoke about the solution. The second thing that LinkedIn is doing, and I'm just giving a very high-level preview of this, is, in order for us to democratize AI, to make it readily available and to enable more engineers to do it, there's a program that LinkedIn has kicked off called AI Academy. There are three different levels of coursework in the program: AI 100, 200, and 300. As you graduate from one to the other, the intensity of the techniques and the machine learning increases. AI 100 is really just getting a flavor of what AI is, what machine learning is, and getting you familiarized with it. In 200 you start understanding how you build a model, and 300 is when you actually build your own model and put it in production. I can talk all about this and I'm happy to talk about it later on, but this is just a preview, and there are a lot of blogs and things that we've already put on LinkedIn.

Priyanka Gariba: This is another blog for Productive Machine Learning for those of you who are interested in reading more about it, and I’ll share my slides as well. That’s it. Just a quick flavor. I had 10 minutes, so I thought at least I’ll come up here and talk to you and give you a flavor of what we are doing to democratize machine learning at LinkedIn. But happy to, I don’t know if I have time for questions, but I can take questions later on as well. Thank you.

Priyanka Gariba: Okay. I can take a question or two if … After. Okay. All right. Sure.

Chloe Condon: Thank you so much. All right. So, next up, I will take that from you. Next up we have a very special treat, but before I introduce our very special guest, I’m going to show you my favorite LinkedIn feature. How many people have added someone on LinkedIn tonight? Okay. Well now you’re going to add more people. So, if you go to your LinkedIn app in the very top in the search bar, there is a barcode, a scanning barcode, and if you click on that, instead of having to type out the person’s name and awkwardly ask for spelling, you can just scan their barcode tonight. So you can share that secret tip that I learned recently from someone else at a meet up that I now pass onto you to make spelling people’s names less awkward. So definitely scan everyone’s badge here tonight. My best advice always in tech is to meet as many people as you can, and tell your story and share their stories while you’re here tonight with all these amazing people.

Chloe Condon: I am going to welcome our very, very special guest for tonight, Charlotte. Come on down. We are so excited to welcome Charlotte Yarkoni to the SF Reactor. Here you go.

Charlotte Yarkoni speaking

Corporate Vice President, Cloud + AI Division, Charlotte Yarkoni gives a warm welcome at Microsoft Girl Geek Dinner.  Erica Kawamoto Hsu

Charlotte Yarkoni: Thank you. I need to start out and tell you guys, I’m sick. I really, really apologize for my voice. I’ve been told I don’t look as bad as I sound, so I thought it’d still be okay to show up, but hopefully you’ll manage to go with me this evening. It was important for me to come. So again, I hope you can work with me on the sound quality. But my problem is as I’m watching everybody on stage, I wanted one of these mics so I can put it down, cough, and anywhere I go I’m going to … somebody’s in my blast radius. So, if I come over here and stand by the post, please don’t be offended.

Charlotte Yarkoni: Anyways, good to be here tonight. Thank you guys all for coming. I thought what I would do is first share with you a little bit about my journey of being a woman in tech and what that’s meant to me in my career. I do need a clicker. My telepathic PowerPoint clicking slides are not on today due to the head cold. So, I actually go talk a lot to universities. I go to some high schools. I love talking to young girls about STEM, but I always kind of have to ground in. Let me tell you what tech looked like when I was in middle school and high school.

Charlotte Yarkoni: This was it, by the way. There were no smartphones, there were no tablets, there were no laptops. I remember when Asteroids came out and me and my brothers thought it was amazing. Right? So that’s kind of where we were. Then this was our social network. There was no Twitter, there was no WeChat, there was no Snapchat. It was pretty much a bonfire in somebody’s field when their parents were out of town in the town I grew up in. So, that’s kind of where I come from.

Charlotte Yarkoni: I actually, I grew up in South Carolina. I was super fortunate to get a scholarship to come to UC Berkeley. I'm pretty sure I'm the only person from South Carolina to ever go to Berkeley. I was actually part of an inaugural program at the time called Electrical Engineering and Computer Science, or EECS as it was known. This is what code looked like when I was coding. Has anybody ever written in Lisp? Anyone? Did anyone? Yeah. Kicking it old school. All right. So, that was sort of my education, if you will, and my real foray into tech.

Charlotte Yarkoni: Then, I got out of college and started working and figuring out how to use technology as an applied science, not just in an academic sense, and this was kind of the world I was in. Actually cell phones came out and yes, that’s what they looked like for those of you that weren’t born then, because I know there’s a few of you here. Windows 95 was all the rage, right? You remember that? Then we get to today and it’s just a very, very different world.

Charlotte Yarkoni: One of the things that I love about technology is the fact that it has actually opened up all of our worlds, in so many ways that we can have so much more impact. We can instantly connect to people that we could never connect to 30, 40, 50 years ago. I'm not that old, I'm just framing my comments. But you think about that, and it's not just connecting to those people, it's the access to information that you also have immediately at your fingertips. It's amazing. It's amazing what you can harness with that kind of resource at your fingertips.

Charlotte Yarkoni: The challenge is, though, it comes with a responsibility, and I will tell you, at Microsoft, and GitHub, and LinkedIn, we spend a lot of time on that. In fact, it’s not just about innovating, it’s about innovating with purpose, and really making sure that you’re actually leaving the world in a better place than you found it before you introduced your solutions. So it’s those unintended consequences that you have to be very thoughtful about. As we continue to get more and more technology at our disposal, how do we use it for good? That kind of brings me to really, what’s my role.

Charlotte Yarkoni: Today, in my role at Microsoft, I run a group called Commerce and Ecosystems. You can tell I’m not a marketing person, so there you go. But really, I focus on answering three questions. The first is, how do people actually discover who we are and what we do in our products and services? And Microsoft’s a very big company, it’s a global landscape. We offer lots of different products and services across our portfolio, but there are a lot of ecosystems and communities that actually don’t know who we are and what we do.

Charlotte Yarkoni: Five years ago it was a lot about open source, and I remember I actually went to … I started at Microsoft about three years ago and I went to an open source conference. By the way, I grew up in open source, so my background actually started out in Unix and moved to Linux. I never wrote a piece of code in .NET. Would probably look and feel a little bit like Lisp to me, honestly, if I tried to do it now. So when I came to Microsoft, I went to a familiar conference, and people were like, “Why are you here, man? Azure doesn’t run Linux.” I’m like, “What are you talking about? Yeah, it does.” People need to know, right? So we had to go fix that.

Charlotte Yarkoni: Second thing I focus on is, after you discover us, how do you engage with us in a way that’s meaningful to you? And most of that is online. People don’t always want to have to go somewhere to learn how to do something. They don’t want to have to sign up for a week-long course, necessarily, to know how to build a solution using the technology that they have. So we spend a lot of time and energy focused on that and what’s the set of tooling or resources that we can offer.

Charlotte Yarkoni: Then the final point is, how do we just get easier to do business with for our customers and partners? That’s where the commerce piece comes in, and it’s all about what new business models we need to create and how we run all those capabilities across all our products and all our channels today. So there is a good bit of engineering that comes into each one of these aspects, but there’s also a lot of business work that I have to focus on. And again, it comes with that overarching layer of responsibility: how do we think about continuing to make progress in a positive way so we can have a positive impact on the communities we serve?

Charlotte Yarkoni: So that’s kind of who I am, and I think what we’re going to do at this stage is a little bit of an AMA, and I’m really hoping you guys don’t ask me too many questions because the more I talk, I think the worse I sound, but I will try to answer everything for sure. I was going to have Chloe join me, and I was going to have Shaloo Garg join me. So, just as a reminder, both Chloe and Shaloo are part of my team and they’re part of the drive-discovery effort. I’ll let you guys talk a little bit more about yourselves, I’m sure, but I’m going to turn it over to our master of ceremonies. Kick us off. Do you want that mic or you want–

Chloe Condon: Sure. Mics all round here.

Charlotte Yarkoni: This one may be contaminated.

Chloe Condon: All right. I wouldn’t want to catch the virus, the Charlotte virus. Amazing. So, I figure we’ll have a seat. Have a seat wherever. We had a bunch of people submit questions earlier in our fishbowl, thank you so much for all of the questions that we got earlier. So, what I figured I would do is we would start with an introduction with Shaloo. Would you like to tell everyone who you are, what you do?

Shaloo Garg, Chloe Condon, Charlotte Yarkoni

Microsoft girl geeks: Senior Cloud Developer Advocate Chloe Condon, Corporate Vice President for Cloud + AI Charlotte Yarkoni, and Managing Director of Silicon Valley’s Microsoft for Startups Shaloo Garg answer audience questions with candor at Microsoft Girl Geek Dinner.  Erica Kawamoto Hsu

Shaloo Garg: Yeah. Absolutely. Firstly, thank you guys so much for coming here today. It means a lot. My name is Shaloo Garg and I lead the startup business growth for Silicon Valley for Microsoft, and all of California as well. It’s an exciting space to be in. I’m part of Charlotte’s team, and part of what we do is not only engage with founders and CTOs and CIOs of startups here, but also drive meaningful partnerships. This is Silicon Valley, there are a lot of partners here, so how do we work with them to drive awareness of how Microsoft can help entrepreneurs? So good to be here.

Chloe Condon: Amazing. Thank you so much. I have these randomly selected questions here.

Shaloo Garg: Those are a lot of questions.

Chloe Condon: It’s a lot of questions. I don’t know if we’re going to get through all of them. We may do kind of a rapid inside the actor’s studio type of lightning round at the end here. But I love this first one. I chose this one first and this is for Charlotte. It says, “What’s it like being an executive at one of the top companies? Do you have a life?” Great phrasing, whoever wrote this.

Charlotte Yarkoni: I’d like to think I have a life. Yes, I do have a life. I have two children, both girls, one–

Chloe Condon: Great. Are they coding already?

Charlotte Yarkoni: One is 23, just graduated. She went to Reed College, and by the way, back to Berkeley, I thought when I went to Berkeley from South Carolina, I was an enlightened liberal. And when I dropped my daughter off at Reed College, I felt like I was the most conservative person on the planet. I was a little worried about my life choices at that point. But she graduated there in linguistics and she actually is starting school this week, getting her master’s at University of Washington.

Charlotte Yarkoni: She would be very offended if I called her a developer or an engineer, yet she spends a lot of time writing programs and doing statistical analysis on languages, because she focuses on Russian, Japanese, and Spanish language and language heritage.

Chloe Condon: Wow.

Charlotte Yarkoni: So, that’s my oldest. My youngest is 13, and a prolific gamer and developer. Python is her language of choice. She has lots of opinions about every other language.

Chloe Condon: As she should.

Charlotte Yarkoni: It kind of takes me longer these days to set up an environment for her to code in than it does for her to whip out a new game that she’s thinking about. So, I’m pretty sure she’s going to end up somewhere in the engineering community as a professional at one point. I also have three horses. I ride. I grew up three-day eventing, for those of you who know what that is. Now that I’m older and have kids, I wondered what my parents were thinking when they let me do that. But I still ride and I still compete. Then I do my day job.

Chloe Condon: That is a fun fact.

Charlotte Yarkoni: I think the thing about today’s technology is, the good and the bad is it allows you to be accessible all the time. So, you have to know how to be at the right place at the right time, which is usually the conflict that occurs, but you are able to go do what you need to do personally and do things professionally as you go. So that’s something I feel really privileged about, who I work for, the industry I’m in, and the technologies that we’ll be bringing for all the working moms out there.

Chloe Condon: Wow. That’s actually a great segue into the next question, which I’ll direct to Shaloo first, which is, how do you relax and unwind? Like with how long and tough your day jobs are, how do you get to chill?

Shaloo Garg: So, best is tennis. I love playing tennis and that’s how I unwind, and when I go out and play tennis, I try not to take my cell phone with me, or my kids. I have a 13-year-old daughter too, and a nine-year-old son who is quite a handful.

Charlotte Yarkoni: Do you have any Serena moments on the court?

Shaloo Garg: I do. But that’s how I unwind, which is just completely unplug, just a moment of Zen and just go out there and hit it.

Chloe Condon: I’m very similar. I craft. I like to do things with my hands and not look at a screen and just build something fun, like a costume or something that lights up. And you’re riding horses.

Charlotte Yarkoni: Yeah, but I could not build a costume. So, we each have our strengths.

Chloe Condon: Hit me up for Halloween. We’ll get you guys–

Charlotte Yarkoni: I’m going to hit you up for Halloween. Okay.

Chloe Condon: This one says, “What would be your advice for your past self coming straight out of college?” I love that question.

Charlotte Yarkoni: Who you asking?

Chloe Condon: Anyone can jump in. Yeah.

Shaloo Garg: I think coming out of college, I wish I was more aware of getting a coach or a mentor, which I was not aware of. During my career I sort of looked up to women leaders and requested them to be mentors and coaches. So what I try to do now is go out and coach and mentor women or young girls myself, because I realize that they may be in the same situation as I was in, which is, “Hey, I can ask a woman leader to say, ‘Would you mind spending 30 minutes with me?’” But they don’t ask. Right? So I preemptively do that in schools and colleges here in Silicon Valley. Actually, right at our Market Street office, which is another office of ours, every month I host open office hours for young women who are out there, budding entrepreneurs. It doesn’t have to have anything to do with Microsoft. As soon as you walk in the door, it doesn’t have to be, “Hey, you have to sign up to work with us,” it’s just coaching, and I love it. So, I wish I had that, but a part of me is just giving back, just making sure that someone out there is benefiting.

Chloe Condon: Yeah, that’s great advice. Charlotte.

Charlotte Yarkoni: I think, for me, one of the things that it’s taken me a long time to appreciate, and I really encourage everybody to have some thought about this for their own journey, both personally and professionally, is that resilience is such an important thing. When I look back on my career, I feel, again, very privileged to have worked in all the places and spaces that I have. But the successes I had weren’t one success right after the other. It was success built off of, quite frankly, a mountain of failures and trials to get there. It was about taking those learnings, applying them, and getting better. I think a lot of what we do as an industry is about solving a problem, seizing an opportunity, and getting better as we go, and iterating, and it’s really hard to do that as a person.

Charlotte Yarkoni: I’m going to go out on a limb and assume all you people here are somewhat overachievers. So every time that you have a failure, you want to prosecute the failure and you want to prosecute yourself, and that’s okay as long as you make it a constructive thing and learn from it, and the older you get and the more experienced you get, the more you start to really embrace and almost be proud of those failures for what they taught you, because you wouldn’t be wherever you are without them. That’s just a fact. I don’t know that I appreciated that at a younger age. I was certainly an overachiever and thought I knew a lot more than I knew at the time. I know that’s shocking, but it’s true. But as I went through my career, it was a process for me to understand how to really get value from the mistakes, how to really find value in the failures, and use them to move forward.

Charlotte Yarkoni: I just would encourage everybody, get out there and try. That’s step one, and step two is make sure you learn and embrace the mistakes, right? And it is that resilience that will just make you so much better a person, whatever you decide to do, however you decide to do it.

Chloe Condon: My advice would be, I don’t think I knew right when I graduated what I wanted to do with the rest of my life. I wish I had taken a little time to travel or maybe to explore different industries and fields that maybe I wanted to dip my toe in. Because I think the wonderful thing about working in tech is that you don’t have to commit to doing the same thing for your entire life. You can always change and learn a completely new technology or … There was a tweet that I think I retweeted this morning, which was, “Your job that you have in five years may not even exist. So try not to plan out your life too strategically,” and I think that’s really wonderful advice because technology is growing at a rapid rate and we may be working on something we don’t even know exists yet. The new, I don’t know, a new iPhone. Who knows?

Chloe Condon: Great. Next question that I have is, I love this one, “What’s the best book you’ve read this year?” Does anyone have one? I know mine. I can go first while people think.

Shaloo Garg: Go, go for it.

Chloe Condon: I read a book. Oh no, you go first because I want to make sure I get her name right, the author’s name right.

Shaloo Garg: So I think the life-changing moment for me was the book that I read by Eckhart Tolle. It’s called The Power of Now, and it teaches you a lot about what Charlotte talked about, failure. It also teaches you how to stay engaged but not attached, which is you’re really passionate about something that you’re doing. Keep that passion, but don’t get so emotionally sucked into it that you break down. So it also teaches you mindfulness and awareness. And then how to be an A player, which is you’re mindful, you’re aware of what you’re doing, but guess what? You got to go and get it. So I thought that was completely life-changing for me because I learned quite a bit in terms of just being strong, being very passionate about what I do, but not emotional, and then just chasing it, chasing the ball and just chasing the heck out of it.

Charlotte Yarkoni: Mine’s an oldie but a goodie, because my youngest was doing a book report on this one, the Life of Pi.

Chloe Condon: That’s a good one.

Charlotte Yarkoni: I just loved that. I haven’t read it in many years and so she brought it home and I brought out my copy so we could read it together. It is just an amazing book.

Chloe Condon: That is on my list. You said yours was The Power of Now?

Shaloo Garg: Power of Now.

Chloe Condon: Okay. Write that one down, everyone. I recently read Just the Funny Parts by Nell Scovell, she’s a female comedy writer, and I found … it’s an autobiographical piece. She used to write for Saturday Night Live, David Letterman, and it’s a completely male dominated field. It was the first time I had read about an industry other than tech that was similarly structured and formatted and it talked about, she’s a comedy writer, so it comes from this place of empathy and humor, and I would highly recommend it. She helped write Sheryl Sandberg’s book. She also wrote a lot of Obama’s jokes, I found out in that book. So, a lot of the things that made us chuckle from Obama came from her.

Chloe Condon: So, next one is, “Who has influenced you most in your life and why?”

Charlotte Yarkoni: That one’s actually really hard. I will tell you, both my parents passed away in the last year. They were quite a bit older. I’m the youngest of a large family. Pretty sure I was an accident, so, it’s okay. But you spend a lot of time reflecting on your nuclear family when those kinds of things happen, and they happen inevitably to everyone. So I definitely think my parents had a large influence on my life. I think my teachers had a large influence on my life. I’m the proud product of the public education system of South Carolina, which I think at the time I was growing up was like 49th in the country. But I went from there to UC Berkeley, which was an amazing school. And I had some amazing teachers to help me learn how to learn, is what I got from that.

Charlotte Yarkoni: I’ve been super fortunate to have some great mentors and what I would call guidance counselors throughout my career, that I still do lunch with and dinners with and catch up with. So, I feel like I’ve had a lot of influences and I do think for the last 20 plus years, though, my kids have probably taught me more humility and patience and resilience and all the other virtues we speak so highly of. They’ve probably been the biggest forcing function in my life in recent years.

Chloe Condon: What about the horses?

Charlotte Yarkoni: The horses are my sanity. I will tell you, we moved to Australia for a couple of years and I couldn’t take my horses with me and I was, my husband will tell you, I was a miserable person for the time I was gone.

Chloe Condon: I’m picturing you writing postcards back to your horses at home.

Charlotte Yarkoni: I came home. I came home every two months to see them.

Chloe Condon: Aww. How about you, Shaloo?

Shaloo Garg: So, parents, but I think my mom. I lost my parents at a very young age. Thinking back to growing up, I was born in India, but I grew up in the Middle East, and I grew up in a community where there was a lot of domestic violence and girls were not allowed to go to school. And so there were a lot of changes that were happening around me. In fact, while growing up, I went to 14 different schools between elementary, middle, and high school. So you can imagine moving from Saudi Arabia to Iraq, to Kuwait during the war zone time. But going through all this, I remember my mom always taught me and my sister that if there’s ever a problem in life and there is a simpler solution and there is a hard solution, guess what? Pick the hardest one, because it’s going to make you go through that process, whereas with the simpler one, you’re just going to take it and just sit with it and you’re not going to learn anything. So I do look back and I think that she’s had an amazing influence on me.

Shaloo Garg: And as Charlotte said, my kids, I keep learning from them every single day. They teach me so many things; if I get upset about something, they’ll just say, “Hey mom, just relax. This is just a small thing, just move on.” I think that’s how I keep learning more and more. And of course, amazing coaches and mentors and some really amazing female leaders who I look up to.

Chloe Condon: I would have to agree. My mother passed away when I was 16, but she was a costume designer, graphic designer, creative arts person, and I try to bring my creative arts training and background into all the technology that I do and create. So I think that was probably the biggest influence on me, would have to be my mom as well.

Chloe Condon: What is the biggest challenge we are facing in tech currently? A tough one.

Charlotte Yarkoni: I actually think our biggest challenge as a society is climate change. I think technology can be a solution for that. So, that’s an indirect answer to a direct question, but I would say that is the thing that I would love to see all of us, I don’t care what you’re doing, where you’re working, but to start having serious thoughts about how we can go reverse decades of adverse effect on the planet. It helps everybody, and I do think the real accelerants are going to lie not just in changing our behavior and our consumption, but also in having technology help us. I don’t think we’ve really gone there yet as a society at large. So for me, it’s something I’m kind of anxious to push along however I can in whatever small way that I can. I think that’s how I think about it.

Charlotte Yarkoni: With technology, you have things like quantum, which is just amazing. The beauty of working somewhere like Microsoft is we are doing a ton of research and we have really crazy people, crazy smart people, working on this, and every now and then if I have to go give a talk and I need to give my five minutes of quantum computing update for the cloud, I always ask, “Are there any theoretical physicists in the audience? Because if there are, I’m not going to do this because you know way more than me,” kind of thing.

Chloe Condon: Come on up.

Charlotte Yarkoni: But it’s amazing, and in essence you take what sits in a data center the size of a football field today and you can run it in something the size of a refrigerator in your house. But the cooling you need to do that takes extraordinarily more power than we’re consuming today, and the impact that will have, by the way, if it’s not done right, either we’re not producing it correctly and/or we’re not cooling it correctly, can be devastating. So how do we think about things like that, these new trends, with this aspect of sustainability around the climate, I think is super important. So I apologize, I kind of rambled on that answer, but I actually think this one’s a really important one.

Chloe Condon: I agree. I actually met someone at Open Source Summit recently who works on our IoT team here at Microsoft in Redmond, and his job on the IoT team is to help offset the carbon emissions from our server centers. So I thought, “That’s such an important, important way for us to help make the environment a better place with Microsoft.” So, yeah.

Charlotte Yarkoni: Absolutely, and the lady who runs our data centers, her name is Noelle, she’s a peer of mine. I love her dearly. She’s just an amazing woman. She actually grew up as a chemical engineer.

Chloe Condon: Wow.

Charlotte Yarkoni: A lot of her time on how we run our data centers is spent in areas that you and I wouldn’t know how to go solve, because it is about how you think about power, how you think about new sources like geothermal, and things like that. I think it’s great. I think it’s great we’re thinking that way, but we’ve got to do more.

Chloe Condon: Yeah.

Shaloo Garg: I think the biggest challenge is the lack of awareness of the power of technology. I often see this, and I keep bringing up edtech as a very common example; in fact, here in the Valley, edtech is right now the hottest topic in the social impact circle. I can guarantee you, when I throw the word school out here and ask you to just close your eyes and tell me what you think of, you’re going to think of a building. You’re going to think of kids running, a blackboard, and a teacher. But that’s not all education is. Education can be a seven-year-old girl sitting in Uganda who’s not allowed to go to school, but she can sit at home and do schooling using an iPad, right? Just because she’s a girl, she’s not allowed to go to school.

Shaloo Garg: That is the power of technology, and it kills me every single day when I read about places like Somalia and Syria, and so many other places, where companies could easily enable this. Microsoft does an amazing job, and that’s one thing I’m really proud of, to be part of this company. We do amazing work globally in enabling this. I think we need to continue to talk about the power of technology, which we do in our jobs and outside our jobs, but we need more and more people to go out there and coach people and say, “Hey guys, education is just not about textbooks. It can be digital education powered by technology.” I think that to me is the biggest challenge right now, which is lack of awareness.

Chloe Condon: Yeah, accessibility and access to that is so important.

Charlotte Yarkoni: Can I interrupt this broadcast? Do we have any recruiters in the audience? Because I think we have our newest recruit. She did an awesome walk-in by the way.

Chloe Condon: Love the pants. Great pants. This is a very fun question. What emoji do you use most often?

Charlotte Yarkoni: I don’t use them correctly, as my children … I always send them stuff–

Chloe Condon: It’s the horse one, right?

Charlotte Yarkoni: … and they’re like, “Why did you send me this? Do you know what this means?” I’m like, “No. No.”

Chloe Condon: I think that’s part of your job as a mom, right?

Charlotte Yarkoni: Well, I have gotten in this habit of sending random ones just to freak my kids out.

Chloe Condon: Love it.

Charlotte Yarkoni: I usually am pretty clean at work with the okay and the goofball face, and the smiley face, but it cracks me up because we were just having this discussion the other day, because I sent something that apparently I shouldn’t have sent as a parent.

Chloe Condon: It’s like a secret hidden emoji language.

Charlotte Yarkoni: It really is.

Chloe Condon: Yeah.

Charlotte Yarkoni: And you, what do you use?

Chloe Condon: I would say it’s a tie between the sobbing emoji and the laugh crying emoji, because I don’t have any other two emotions other than those two extremes. There’s no in between for me. I’m either hysterically laughing or hysterically crying.

Charlotte Yarkoni: What do you use, Shaloo?

Shaloo Garg: Smile and laughter, and that’s it. For the kids, with the kids, I’ll just use hearts, and sometimes my daughter says, “Mom, just stop using those… You’re embarrassing me, mom.”

Chloe Condon: Yeah. What are the most important decisions you face every day? Or what is the most important decision you face every day?

Shaloo Garg: How to make founders successful, and especially in a market like this. I just love it. It’s an upstream market, constantly challenging ourselves. What else can we do? What else can we do in this market? I absolutely love it. It is challenging. It’s extremely challenging.

Chloe Condon: It’s a huge question.

Shaloo Garg: It’s a huge question. I’ve been with the company for eight months, and when I joined initially, I was a bit nervous. I was like, “Great, I’m so excited about this job,” and when I went out there and talked to founders, everyone gave me a standard response, “Well, yeah, okay.” But now, slowly and slowly, we’ve started building it into the narrative that we have in all the meetings, which is how do we help the founders, and once we switched to that, our jobs became much easier, which is, “I’m here to help you and this is how I can help you.” So I think that to me is absolutely the most fun part.

Chloe Condon: Yeah.

Charlotte Yarkoni: By the way, as part of my team, that’s a great answer for these little startups. I think my job is really making the set of decisions that best serve our customers, our partners, best serve the team. It’s always a balance, right? We have so much we’ve got to get done. We love innovating, we love getting new capabilities out there, making sure that we’re doing that with the right sense of urgency and the right balance for the teams delivering them. Most of my day, in any one of my teams that I look at, is just making the right calls to make sure that we’re doing right by the community, as both our community that’s working on it and the communities we’re trying to serve.

Chloe Condon: Yeah. I would say for me it’s how to get people excited to learn, and what is going to get them having fun. Because I think we work all day, we work like an eight-hour plus day sometimes in front of machines using technology, and what are fun creative ways to get people excited about that and to build really cool, amazing things together that can solve these big questions and problems like the environment and getting accessibility to folks who don’t have the access to this technology. So, it’s always fun to enable that power to people.

Chloe Condon: How much time do we have? Do we want to do maybe one or two more questions? One more question. Okay, cool. Let’s see. I think this is a really good … Actually, I would love to end with your advice to all of our amazing women in this audience, and men in the audience. What would be your advice to someone who’s looking to move up in their career and have a successful career as a person in tech?

Charlotte Yarkoni: I think being you is the most important part. Whatever that means, right? Just be your most authentic self. It’s a hard thing to do. It’s a hard thing in our industry. It’s a hard thing in super competitive environments like here in San Francisco. Seattle is very similar in that regard. I have found people get the most reward and have the most success when they’re actually themselves, whatever that means. I also think being the authentic you will not just make you better, it will actually make whatever team you’re on better. It will make whatever company you’re at better, it will make whatever product or service you’re working on better. Just be you and be proud to be you.

Chloe Condon: I love that.

Shaloo Garg: So, I would say do what you’re passionate about because when you’re passionate, you bring your best. Do not be afraid to take risks, and I know this sounds like a cliche, but really challenge yourself. If there is a risk, if you want to do something and it looks very risky, just go ahead and do it. At worst, you’re going to fail, but you’ll learn something from it. If you come out victorious, that’s great. Then the last thing I would say is just trust yourself and just believe in your instinct that you’re doing good for the business, you’re doing good for the company, you’re also doing good for those startups or customers or whoever your stakeholders are, and just go chase it. If you keep it straight, and if you keep what I call the compass straight, there’s going to be lots of amazing learning in the process.

Chloe Condon: My advice is actually a great segue into our mingling and happy hour section. Mine would be to talk to as many people as you can in this industry. If you have the opportunity to get coffee with someone you really idolize or a mentor, or someone who’s doing what you want to be doing in this industry, having conversations, I think, is so wonderful and you are all about to use that LinkedIn feature that I just taught you, and meet some really amazing people. So make connections and network and yeah, have the most amazing time.

Chloe Condon: I want to thank both of our…

Shaloo Garg: Thank you.

Chloe Condon: … panelists today. Round of applause for Shaloo and Charlotte.

Charlotte Yarkoni: Thank you for hosting.

Chloe Condon: Of course. Thank you to Kitty. Thank you to Priyanka. Thank you to everyone, to Kaitlyn who’s not here, but oh my gosh, that amazing, amazing musical performance we had to start off the evening. Please, enjoy yourselves. I think we still have some beverages and snacks here, so have a wonderful time. Make sure you get some swag and stickers and we will be around to chat. All right. Thanks everyone.

Microsoft girl geeks, Microsoft Reactor fun

Microsoft girl geeks and allies: Thank you to all the Redmond, San Francisco and Silicon Valley teams who worked together to make this happen!   Erica Kawamoto Hsu / Girl Geek X

Kitty Yeung Microsoft Girl Geek Dinner

Microsoft Garage Manager Kitty Yeung is a creative technologist with a skirt that lights up when she spins.  Erica Kawamoto Hsu

girl geek experiencing Microsoft mix reality

Principal Program Manager Lead Jane Fang and SF Academy Head of Marketing Jo Ryall demo mixed reality to a girl geek at Microsoft Girl Geek Dinner.   Erica Kawamoto Hsu / Girl Geek X


Our mission-aligned Girl Geek X partners are hiring!

Girl Geek X OpenAI Lightning Talks and Panel (Video + Transcript)

Like what you see here? Our mission-aligned Girl Geek X partners are hiring!

Gretchen DeKnikker, Sukrutha Bhadouria

Girl Geek X team: Gretchen DeKnikker and Sukrutha Bhadouria kick off the evening with a warm welcome to the sold-out crowd to OpenAI Girl Geek Dinner in San Francisco, California.   Erica Kawamoto Hsu / Girl Geek X

Transcript of OpenAI Girl Geek Dinner – Lightning Talks & Panel:

Gretchen DeKnikker: All right, everybody, thank you so much for coming tonight. Welcome to OpenAI. I’m Gretchen with Girl Geek. How many people, is this your first Girl Geek dinner? All right, okay. Lots of returning. Thank you for coming. We do these almost every week, probably like three out of four weeks a month, up and down the peninsula, into the South Bay, everywhere. We also have a podcast that you could check out. Please check it out, find it, rate it, review it. Give us your most honest feedback because we’re really trying to make it as awesome as possible for you guys. All right.

Sukrutha Bhadouria: Hi, I’m Sukrutha. Welcome. Like Gretchen said, Angie’s not here, but there’s usually the three of us up here. Tonight, please tweet, share on social media, use the hashtag GirlGeekXOpenAI. I also, like Gretchen, want to echo that we love feedback, so reach out any time you have anything that you want to share with us. Someone talked about our podcast episodes today. If there are any specific topics you want to hear, either at a Girl Geek Dinner or on our podcast, do share that with us. You can either find us tonight or you can email us. Our website is girlgeek.io and all our contact information’s on there. Thank you all. I don’t want to keep you all waiting because we have amazing speakers lined up from OpenAI, so.

Sukrutha Bhadouria: Oh, one more quick thing. We’re opening up sponsorship for 2020, so if your company has not sponsored a Girl Geek dinner before, or has and wants to do another one, definitely now’s the time to sign up because we fill up pretty fast. We don’t want to do too many in one month. Like Gretchen said, we do one every week, so we definitely would love to continue to see a diverse set of companies like we did this year. Thank you, all. Oh, and over to Ashley.

Ashley Pilipiszyn speaking

Technical Director Ashley Pilipiszyn emcees OpenAI Girl Geek Dinner.   Erica Kawamoto Hsu / Girl Geek X

Ashley Pilipiszyn: All right, thank you.

Sukrutha Bhadouria: Thanks.

Ashley Pilipiszyn: All right. Hi, everybody.

Audience: Hi.

Ashley Pilipiszyn: Oh, awesome. I love when people respond back. I’m Ashley and welcome to the first ever Girl Geek Dinner at OpenAI. We have a … Whoo! Yeah.

Ashley Pilipiszyn: We have a great evening planned for you and so excited to see so many new faces in the crowd, but before we get started, quick poll. How many of you currently work in AI or machine learning? Show of hands. All right, awesome. How many of you are interested in learning more about AI and machine learning? Everybody’s hands should be up. All right. Awesome. We’re all in the right place.

Ashley Pilipiszyn: Before we kick things off, I’d like to give just a brief introduction to OpenAI and what we’re all about. OpenAI is an AI research lab of about 100 employees, many of whom you’re going to get to meet this evening. Definitely come talk to me, I love meeting you. We’ve got many other folks here, and our mission is to ensure that safe artificial general intelligence benefits all of humanity.

Ashley Pilipiszyn: To that effect, last year we created the OpenAI Charter. The charter is our set of guiding principles as we enact this mission and serves as our own internal system of checks and balances to hold ourselves accountable. In terms of how we organize our research, we have three main buckets. We have AI capabilities, what AI systems can do. We have AI safety, so ensuring that these systems are aligned with human values. We have AI policy, so ensuring proper governance of these systems.

Ashley Pilipiszyn: We recognize that today’s current AI systems do not reflect all of humanity and we aim to address this issue by increasing the diversity of contributors to these systems. Our hope is that with tonight’s event, we’re taking a step in the right direction by connecting with all of you. With that, I would like to invite our first speaker to the stage, Brooke Chan. Please help me welcome Brooke.

Brooke Chan speaking

Software Engineer Brooke Chan from the Dota team gives a talk on reinforcement learning and machine learning at OpenAI Girl Geek Dinner.  Erica Kawamoto Hsu / Girl Geek X

Brooke Chan: Yeah. Hello. Is this what I’m using? Cool. I’m Brooke Chan. I was a software engineer on the Dota 2 team here at OpenAI for the past two years. Today, I’m going to talk a little bit about our project, as well as my own personal journey throughout the course of the project.

Brooke Chan: We’re going to actually start at the end. On April 13th, we hosted the OpenAI Five Finals where we beat the TI8 world champions OG at Dota 2 in back-to-back games on stage. TI stands for The International, which is a major tournament put on by Valve each year with a prize pool upwards of $30 million. You can think of it like the Super Bowl but for Dota.

Brooke Chan: There have been previous achievements and milestones of superhuman AI in both video games and games in general, such as chess and Go, but this was the first AI to beat the world champions at an eSports game. Additionally, as a slightly self-serving update, OG also won the world championship this year at TI9 just a few weeks ago.

Brooke Chan: Finals wasn’t actually our first unveiling. We started the project back in January of 2018 and by June of 2018, we started playing versus human teams. Leading up to finals, we played progressively stronger and stronger teams, both in public and in private. Then most recently, right before finals, we actually lost on stage to a professional team at TI8, which was the tournament that OG later went on to win.

Brooke Chan: Let’s go back to the basics for a minute and talk about what is reinforcement learning. Essentially, you can think of it as learning through trial and error. I personally like to compare it to dog training so that I can show off pictures of my dog. Let’s say that you want to teach a dog how to sit, you would say sit and just wait for the dog to sit, which is kind of a natural behavior because you’re holding a treat up over their head so they would sit their butt down and then you would give them that treat as a reward.

Brooke Chan: This is considered capturing the behavior. You’re making an association between your command, the action and the reward. It’s pretty straightforward for simple behaviors like sit but if you want to teach something more complicated, such as like rolling over, you would essentially be waiting forever because your dog isn’t just going to roll over because it doesn’t really understand that is something humans enjoy dogs doing.

Brooke Chan: In order to kind of teach them this, you instead reward progress in the trajectory of the goal behavior. For example, you reward them for laying down and then they kind of like lean over a little bit. You reward them for that. This is considered to be shaping rewards. You’re like teaching them to explore that direction in order to achieve ultimately your goal behavior.

Brooke Chan: Dota itself is a pretty complicated game. We can’t just reward it purely on winning the game, because that would be relatively slow, so we applied this technique of shaped rewards in order to teach the AI to play the game. We rewarded it for things like gold and kills and objectives, et cetera. Going more into this, what is Dota?
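
Before the Dota overview, here is a rough illustration of the shaped-reward idea just described, as a minimal Python sketch. The event names and weights below are invented for illustration; they are not OpenAI Five’s actual reward values.

```python
# Hypothetical shaped-reward sketch: small, dense rewards for intermediate progress
# (gold, kills, objectives) on top of the sparse win signal. Weights are made up.
SHAPED_WEIGHTS = {
    "gold_gained": 0.006,      # tiny reward per unit of gold keeps the signal dense
    "kill": 0.3,
    "objective_taken": 1.0,
    "win": 5.0,                # the sparse outcome we ultimately care about
}

def shaped_reward(events: dict) -> float:
    """Convert a dict of per-tick game events into a single scalar reward."""
    return sum(SHAPED_WEIGHTS.get(name, 0.0) * count for name, count in events.items())

# Example: a tick where the agent earned 40 gold and got one kill.
print(shaped_reward({"gold_gained": 40, "kill": 1}))  # 0.54
```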

Brooke Chan: Dota is a MOBA game which stands for multiplayer online battle arena. It’s a little bit of a mouthful. It’s a game that was developed by Valve and it has an average of 500,000 people playing at any given time. It’s made up of two teams of five and they play on opposite sides of the map and each player controls what’s considered a hero who has a unique set of abilities.

Brooke Chan: Everyone starts off equally weak at the beginning of the game, which means that they’re low levels and they don’t have a lot of gold and the goal is that over the course of a 30 to 60-minute game, they earn gold and become stronger and eventually, you destroy your opponent’s base. You earn gold and experience across the map through things like small fights or like picking people off, killing your enemy, taking objectives, things like that. Overall, there’s a lot of strategy to the game and a lot of different ways to approach it.

Brooke Chan: Why did we pick Dota? MOBAs in general are considered to be one of the more complex video games and out of that genre, Dota is considered the most complex. Starting off, the games tend to be pretty lengthy, especially in terms of how RL problems typically are, which means that strategy tends to be hard with a pretty delayed payoff. You might rotate into a particular lane in order to take an objective that you might not be able to take until a minute or a minute and a half later. It’s something that’s kind of like hard to associate your actions with the direct rewards that you end up getting from them.

Brooke Chan: Additionally, as opposed to games like Go and chess, Dota has partial information to it, which means that you only get vision around you and your allies. You don’t have a full state of the game. You don’t know where your enemies are and this leads to more realistic decision-making, similar to our world where you can’t like see behind walls. You can’t see beyond what your actual vision gives you.

Brooke Chan: Then, finally, it has both a large action and observation space. It’s not necessarily solvable just by considering all the possibilities. There are about 1,000 actions that you can take at any given moment, and the state you’re getting back has a value size of about 20,000. To put it in perspective, on average, a game of chess takes about 40 moves and Go takes about 150 moves, and Dota is around 20,000 moves. That means that the entire duration of a game of chess really wouldn’t even get you out of the base in Dota.

Brooke Chan: This is a graph of our training process. On the left, you have workers that all play the game simultaneously. I know it’s not super readable but it’s not really important for this. Each game that they’re playing in the top left consists of two agents where an agent is considered like a snapshot of the training. The rollout workers are dedicated to these games and the eval workers who are on the bottom left are dedicated to testing games in between these different agents.

Brooke Chan: All the agents at the beginning of the training start off random. They’re basically picking their actions randomly, wandering around the map doing really awfully and not actually getting any reward. The machine in green is what’s called the optimizer so it parses in all of these rollout worker games and figures out how to update what we call the parameters which you can consider to be the core of its decision-making. It then passes these parameters back into the rollout workers and that’s how you create these continually improving agents.
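
As a loose sketch of the rollout-worker and optimizer loop described here, the snippet below shows the general pattern in Python. The queue layout is a simplification of the real distributed system, and `play_one_game` and `policy_gradient_update` are trivial stand-ins, not the actual OpenAI Five code.

```python
# Simplified rollout-worker / optimizer pattern (illustrative only).
import random
from multiprocessing import Queue

def play_one_game(params):
    # Stub: pretend a self-play game produced some reward-labelled experience.
    return [("observation", "action", random.random()) for _ in range(10)]

def policy_gradient_update(params, batch):
    # Stub: a real optimizer would apply gradients computed from the batch.
    return params + 1

def rollout_worker(params_queue: Queue, experience_queue: Queue):
    params = params_queue.get()                   # start from the latest parameter snapshot
    while True:
        experience_queue.put(play_one_game(params))
        while not params_queue.empty():
            params = params_queue.get()           # pick up freshly optimized parameters

def optimizer(experience_queue: Queue, params_queue: Queue, params=0):
    while True:
        batch = [experience_queue.get() for _ in range(64)]
        params = policy_gradient_update(params, batch)
        params_queue.put(params)                  # broadcast back to the rollout workers
```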

Brooke Chan: What we do then is we take all of these agents and we play them against all the other agents in about 15,000 games in order to get a ranking. Each agent gets assigned a TrueSkill rating, which is basically a score calculated from its win-loss record against all the other agents. Overall, in both training and evaluation, we’re really not exposing it to any kind of human play. The upside of this is that we’re not influencing the process. We know that they’re not just emulating humans and we’re not capping them out at a certain point or adding a ceiling based on the way that humans play.

Brooke Chan: The downside of that is that it’s incredibly slow. For the final bot that we had play against OG we calculated that it had about 45,000 years of training that went into it. Towards the end of training, it was consuming about approximately 250 years of experience per day. All of which we can really do because it’s in simulation and we can do it both asynchronously and sped up.
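
The ranking step mentioned above can be approximated with the open-source `trueskill` Python package (pip install trueskill). The sketch below is an assumption about how such a ranking could be computed from head-to-head results; it is not OpenAI’s actual evaluation pipeline, and the agent names and results are made up.

```python
from trueskill import Rating, rate_1vs1

ratings = {"agent_v1": Rating(), "agent_v2": Rating(), "agent_v3": Rating()}

# Hypothetical head-to-head results between training snapshots: (winner, loser).
results = [("agent_v2", "agent_v1"), ("agent_v3", "agent_v1"), ("agent_v3", "agent_v2")]

for winner, loser in results:
    ratings[winner], ratings[loser] = rate_1vs1(ratings[winner], ratings[loser])

# Higher mu means a stronger snapshot; sigma is the remaining uncertainty.
for name, r in sorted(ratings.items(), key=lambda kv: kv[1].mu, reverse=True):
    print(f"{name}: skill {r.mu:.1f} (uncertainty {r.sigma:.1f})")
```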

Brooke Chan: The first time they do get exposed to human play is during human evaluations. They don’t actually learn during any of these games because we are taking an agent, which is a snapshot and frozen in time and it’s not part of the training process. We started off playing against our internal team and our internal team was very much not impressive. I have us listed as 2K MMR, which is extremely generous. MMR means matchmaking rating which is a score that Valve assigns to the ranked play. It’s very similar to true skill. 2K is very low.

Brooke Chan: We were really quickly surpassed. We then moved on to contract teams who were around like 4K-6K MMR and they played each week and were able to give us feedback. Then in the rare opportunities, we got to play against professional teams and players. Overall, our team knew surprisingly little about Dota. I think there are about four people on our team who had ever played Dota before and that’s still true post-project, that no one really plays Dota.

Brooke Chan: This leads us to our very surprising discovery that complicated games are really complicated, and we dug ourselves into this hole. We wanted a really complicated game and we definitely got one. Since the system was learning in a completely different way than humans, it became really hard to interpret what it was actually trying to do, and not knowing what it was trying to do meant we didn’t know if it was doing well, if it was doing poorly, if it was doing the right thing. This really became a problem that we faced throughout the lifetime of our project.

Brooke Chan: Having learned this, there was no way to really ask it what it was thinking. We had metrics and we could surface like stats from our games but we were always leveraging our own intuition in order to interpret what decisions it was making. On the flip side, we also had human players that we could ask, but it turned out it was sometimes tough to get feedback from human players.

Brooke Chan: Dota itself is a really competitive game, which means that its players are very competitive. We got a lot of feedback immediately following games, which would be very biased or lean negatively. I can’t even count the number of times that a human team would lose and be like, “Oh, this bot is terrible,” and I was like, “Well, you lost. How is it terrible? What is bad about it?” This would create this back and forth that led to this ultimate question of, is it bad or is it just different? Because, historically, humans have been the source on how to play this game. They make up the pro scene, they make up the high-skill players. They are always the ones that you are going to learn from. The bots would make a move, and the humans would say it was different and not how the pros play and therefore it’s bad. We always had to take the human interpretation with this kind of grain of salt.

Brooke Chan: I want to elaborate a little bit more about the differences because it goes just beyond the format of how they learn. This game in general is designed to help humans understand the game. It has like tooltips, ability descriptions, item descriptions, et cetera. As an example, here’s a frozen frame of a hero named Rana who’s the one with the bright green bar in the bottom left. She has an ability that makes you go invisible and humans understand what being invisible means. It means people can’t see you.

Brooke Chan: On the right, what we see is what the AI sees, which is considered its observation space; it’s our input from the game. We as engineers and researchers know that this particular value is telling you whether or not you’re invisible. When we hit this ability, you can see that she gets this little glow to her which indicates that she’s invisible, and people understand that. The AI uses this ability and sees that the flag that we marked as invisible goes from 0 to 1, but they don’t see the label for that and they don’t really even understand what being invisible means.

Brooke Chan: To be honest, learning invisibility is not something trivial. If you’re walking down the street and all of a sudden, you were invisible, it’s a little bit hard to tell that anything actually changed. If you’ve ever seen Sixth Sense, maybe there’s some kind of concept there, but additionally, at the same time, all these other numbers around it are also changing due to the fact that there’s a lot of things happening on the map at once.

Brooke Chan: Associating that invisibility flag changing directly with you activating the ability is actually quite difficult. That’s something that’s easy for a human to do because you expect it to happen. Not to say that humans have it very easy, the AI has advantages too. The AI doesn’t have human emotions like greed or frustration, and they’re always playing at their absolute 100% best. They’re also programmatically unselfish, which is something that we did. We created this hyperparameter called team spirit which basically says that you share your rewards with your buddy. If you get 10 gold or your buddy gets 10 gold, it’s totally interchangeable. Theoretically, in a team game, that should be the same case for humans, but inherently, it’s not. People, at their core, are going to play selfishly. They want to be the carry. They want to be winning the game for the team.
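
The team spirit idea can be sketched as a simple blend between an agent’s own reward and the team average. This is a hedged illustration of the concept as described in the talk; the exact values and schedule used in the project are not reproduced here.

```python
# team_spirit = 0 means fully selfish rewards, team_spirit = 1 means fully shared.
from typing import List

def apply_team_spirit(raw_rewards: List[float], team_spirit: float) -> List[float]:
    team_mean = sum(raw_rewards) / len(raw_rewards)
    return [(1 - team_spirit) * r + team_spirit * team_mean for r in raw_rewards]

# One hero earns 10 gold worth of reward, the four teammates earn nothing.
print(apply_team_spirit([10, 0, 0, 0, 0], team_spirit=1.0))  # [2.0, 2.0, 2.0, 2.0, 2.0]
print(apply_team_spirit([10, 0, 0, 0, 0], team_spirit=0.0))  # [10.0, 0.0, 0.0, 0.0, 0.0]
```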

Brooke Chan: All these things are going to influence pretty much every decision and every behavior. One pretty good example we have of this is called buybacks. Buybacks is a mechanic where when you die in the game, you can pay money in order to immediately come back to life and get back on the map. When we first enabled the AI to do this, there was a lot of criticism that we got. People were saying, “Oh, that’s really bad. They shouldn’t be wasting all their money” because the bots would always buy back pretty much immediately.

Brooke Chan: Over time, we continue doing this behavior and people kept saying, “Oh, that’s bad. You should fix it.” We’re like, “Well, that’s what they want to do.” Eventually, people started seeing it as an advantage to what we had, as an advantage to our play style because we were able to control the map. We were able to get back there very quickly and we were able to then force more fights and more objectives from it.

Brooke Chan: As a second self-serving anecdote, at TI9, there were way more buybacks way earlier and some people pointed this out and maybe drew conclusions that it was about us but I’m not actually personally going to make any statement. But it is one example of the potential to really push this game forward.

Brooke Chan: This is why it was difficult to have human players give direct feedback on what was broken or why because they had spent years perfecting the shared understanding of the game that is just like inherently different than what the bots thought. As one of the few people that played Dota and was familiar with the game and the scene, in the time leading up to finals, this became my full-time job. I learned to interpret the bot and how it was progressing and I kind of lived in this layer between the Dota community and ML.

Brooke Chan: It became my job to figure out what was most critical or missing or different about our playstyle and then how to convert that into changes that we could shape the behavior of our bot. Naturally, being in this layer, I also fell into designing and executing all of our events and communication of our research to the public and the Dota community.

Brooke Chan: In designing our messaging, I had the second unsurprising discovery that understanding our project was a critical piece to being excited about our results. We could easily say, “Hey, we taught this bot to learn Dota” and people would say, “So what? I learned to play Dota too. What’s the big deal?” Inherently, it’s like the project is hard to explain because in order to understand it and be as excited as we were, you had to get through both the RL layer which is complicated, and the Dota layer which is also complicated.

Brooke Chan: Through planning our events, I realized this was something we didn’t really have a lot of practice on. This was the first time that we had a lot of eyes on us belonging to people with not a lot of understanding of reinforcement learning and AI. They really just wanted to know more. A lot of our content was aimed at people that came in with the context and people that were already in the field.

Brooke Chan: This led me to take the opportunity to do a rotation for six months on the communications team actually working under Ashley. I wanted to be part of giving people resources to understand our projects. My responsibilities are now managing upcoming releases and translating our technical results to the public. For me, this is a pretty new and big step. I’ve been an engineer for about 10 years now and that was always what I loved doing and what I wanted to do. But experience on this team and growing into a role that didn’t really exist at the time allowed me to tackle other sorts of problems and because that’s what we are as engineers at the core, we want to be problem solvers.

Brooke Chan: That’s kind of my takeaway, and it might seem fairly obvious, but sometimes deviating from your path and taking risks lets you discover new problems to work on. They do say that growth tends to be the inverse of comfort, so the more you push yourself out of your comfort zone and what you’re used to, the more you give yourself opportunities for new challenges and discovering new skills. Thank you.

Lilian Weng

Research Scientist Lilian Weng on the Robotics team gives a talk on how her team uses reinforcement learning to learn dexterous in-hand manipulation policies at OpenAI Girl Geek Dinner.  Erica Kawamoto Hsu / Girl Geek X

Lilian Weng: Awesome. Cool. Today, I’m going to talk about some research projects at the OpenAI robotics team. One big-picture problem at our robotics team is to develop the algorithms to power general-purpose robots. If you think about how we humans live in this world, we can cook, we lift and move stuff, we handle all sorts of items with different tools. We fully utilize our body and especially our hands to do a variety of tasks. To some extent, we are general-purpose robots, okay?

Lilian Weng: We apply the same standard to our definition of such a thing. A general-purpose robot should be able to interact with the very complicated environment of the real world and be able to manipulate all kinds of objects around it. However, unfortunately, most consumer-oriented robots nowadays are either just toys, or very experimental, or focused on specific functionalities. Then there are robots like factory arms or medical robots. They can interact with the environment and operate tools, but they’re really operated by humans, so a human controls every move, or they just play back a pre-programmed trajectory. They don’t really understand their environment and they cannot move autonomously.

Lilian Weng: In our project, we’re taking a small step towards this goal, and we try to teach a human-like robot hand to do in-hand manipulation by moving an object. This is a six-faced block with OpenAI letters on it, and the task is to move it to a target orientation. We believe this is an important problem because a human-like robot hand is a universal end effector. Imagine we can control it really well; we could potentially automate a lot of tasks that are currently done by humans. Unfortunately, not a lot of progress has been made on human-like robot hands due to the complexity of such a system.

Lilian Weng: Why is it hard? Okay. First of all, the system has very high dimensionality. For example, our robot, which you can see in this cool illustration, is the Shadow Dexterous Hand. It has 24 joints and 20 actuators. The task is especially hard because during the manipulation, a lot of observations are occluded and they can be noisy. For example, your sensor reading can be wrong, or your sensor can be blocked by the object itself. Moreover, it’s virtually impossible to simulate your physical world 100% correctly.

Lilian Weng: Our approach for tackling this problem is to use reinforcement learning. We believe it is a great approach for learning how to control robots, given that we have seen great progress and great success in many applications of reinforcement learning. You heard about OpenAI Five and the story of AlphaGo, and it will be very exciting to see how reinforcement learning can not only interact with the virtual world but also have an impact on our physical reality.

Lilian Weng: There is one big drawback of reinforcement learning models. In general, today, most of the models are not data efficient. You need a lot of training samples in order to get a good model trained. One potential solution is to build a robot farm. You just collect all the data in parallel with hundreds of thousands of robots, but given how fragile a robot can be, it is very expensive to build and maintain. If you think of a new problem, or you want to work with new robots, it’s very hard to change. Furthermore, your data can get invalidated very quickly due to small changes in your robot’s status.

Lilian Weng: Given that, we decided to take the sim2real approach, that is, you train your model entirely in simulation but deploy it on physical robots. Here you can see how we control the hand in simulation. The hand is moving the object to a target orientation. The target is shown on the right, so whenever the hand achieves the goal, we just sample a new goal. It just keeps on doing that, and we cap the number of successes at 50.

Lilian Weng: This is our physical setup. Everything is mounted in this giant metal cage. It's like this big. The hand is mounted in the middle, surrounded by a motion capture system. It's actually the same kind of system people use for filming special-effects films, where the actor has dots on their body, kind of similar. This system tracks the five fingertip positions in 3D space. We also have three high-resolution cameras for capturing images as input to our vision model. Our vision model predicts the position and orientation of the block. However, our proposed sim2real approach might fail dramatically because there are a lot of modeling differences between simulation and reality. If your model overfits to the simulation, it can perform super poorly on the real robots.

Lilian Weng: In order to overcome this problem, we decided to take… we use reinforcement learning, okay. We train everything in simulation so that we can generate a technically, theoretically infinite amount of data. In order to overcome the sim2real difference, we use domain randomization.

Lilian Weng: Domain randomization refers to the idea of randomizing different elements in simulation so that your policy is exposed to a variety of scenarios and learns how to adapt. Eventually, we expect the policy to be able to adapt to physical reality.
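As a loose illustration of the idea, a randomization step might look like the snippet below; the parameter names and ranges are invented, not the actual randomizations used:

```python
import random

# Illustrative only: every training episode sees a different draw of physics
# and visual parameters, so the policy must learn behavior that adapts.
def randomize_simulation(sim):
    sim.set_mass("object", random.uniform(0.5, 2.0) * sim.nominal_mass)
    sim.set_friction(random.uniform(0.7, 1.3) * sim.nominal_friction)
    sim.set_motor_gain(random.uniform(0.75, 1.5))
    sim.set_observation_noise(std=random.uniform(0.0, 0.01))
    sim.set_lighting(random.choice(["dim", "bright", "colored"]))
    return sim
```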

Lilian Weng: This idea is relatively new. I think it was first proposed in 2016. The researchers tried to train a model to control a drone to fly across furniture in indoor scenarios. They randomized the colors and textures of the walls and furniture, and without seeing any real-world images, they showed that it performed pretty well in reality.

Lilian Weng: At OpenAI, we used the same approach to train a vision model to predict the position and orientation of the object. As you can see, some of the randomizations look totally unrealistic, but somehow it worked very well when we fed the model real images. Later, we also showed that you can randomize all the physical dynamics in simulation, and the robot trained with domain randomization worked much better than the one without.

Lilian Weng: Let's see the results. Okay. I'm going to click the… It really struggles a little bit at the first goal. Yes, okay. The ding indicates one success. This video will keep on going until goal 50, so it's very, very long, but I personally find it very soothing to look at. I love it.

Lilian Weng: I guess that's enough. This is our full setup for training. In box A, we generate a large number of environments in parallel, in which we randomize the physical dynamics and the visual appearance. Based on those, we train two models independently. One is a policy model, which takes in the fingertip positions, the object pose, and the goal, and outputs the desired joint positions of the hand so that we can control it. The other model is the vision model, which takes in three images from different camera angles and outputs the position and orientation of the object.

Lilian Weng: When we deploy this in the real world, we combine the vision prediction based on real images with the fingertip positions tracked by the motion capture system, feed that into our policy control model, and send the output action to the real robot, and everything starts moving just like the video shows. When we train our policy control model, we randomize all kinds of physical parameters in the simulator, such as masses, friction coefficients, motor gains, damping factors, as well as noise on the actions and observations. For the vision model, we randomize camera positions, lighting, materials, textures, colors, blah, blah, blah, and it just worked out.
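A rough sketch of that deployment loop, with all interfaces assumed for illustration:

```python
# Hypothetical control step: the vision model estimates the block pose from
# three camera images, the motion-capture system gives fingertip positions,
# and the recurrent policy turns both plus the goal into desired joint
# positions that are sent to the real hand.
def control_step(cameras, mocap, vision_model, policy, goal, hidden):
    images = [cam.capture() for cam in cameras]       # three RGB cameras
    object_pose = vision_model.predict(images)        # position + orientation
    fingertips = mocap.fingertip_positions()          # five fingertips in 3D
    action, hidden = policy.act(fingertips, object_pose, goal, hidden)
    return action, hidden                              # action goes to the robot
```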

Lilian Weng: For our models' architecture, I'll just go very quickly here. The policy is a pretty simple recurrent unit: it has one fully-connected layer and an LSTM. The vision model is a straightforward multi-camera setup: all three cameras share a ResNet stack, followed by a spatial softmax.
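A toy PyTorch sketch of those two shapes; the layer sizes and pose dimensions below are invented, and only the overall structure (fully-connected layer plus LSTM for the policy, a shared CNN trunk with a spatial softmax across three cameras for vision) follows the talk:

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Recurrent policy: one fully-connected layer followed by an LSTM."""
    def __init__(self, obs_dim, act_dim, hidden=512):
        super().__init__()
        self.fc = nn.Linear(obs_dim, hidden)       # single fully-connected layer
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)     # desired joint positions

    def forward(self, obs_seq, state=None):
        x = torch.relu(self.fc(obs_seq))
        x, state = self.lstm(x, state)
        return self.head(x), state

def spatial_softmax(features):
    # Turn each feature map into expected (x, y) keypoint coordinates.
    b, c, h, w = features.shape
    probs = torch.softmax(features.view(b, c, -1), dim=-1).view(b, c, h, w)
    xs = torch.linspace(-1, 1, w).view(1, 1, 1, w)
    ys = torch.linspace(-1, 1, h).view(1, 1, h, 1)
    return torch.cat([(probs * xs).sum(dim=(2, 3)),
                      (probs * ys).sum(dim=(2, 3))], dim=1)

class VisionNet(nn.Module):
    """One CNN trunk shared across all three cameras, then a pose head."""
    def __init__(self, backbone, feat_channels, pose_dim=7):
        super().__init__()
        self.backbone = backbone                   # any CNN returning feature maps
        self.head = nn.Linear(3 * 2 * feat_channels, pose_dim)  # e.g. xyz + quaternion

    def forward(self, cam_images):                 # list of three image batches
        keypoints = [spatial_softmax(self.backbone(img)) for img in cam_images]
        return self.head(torch.cat(keypoints, dim=1))
```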

Lilian Weng: Our training framework is distributed, synchronized PPO, proximal policy optimization. It's actually the same framework used for training OpenAI Five. Our setup allowed us to generate about two years of simulated experience per hour, which corresponds to roughly 17,000 physical robots, so we get the gigantic robot factory in simulation, which is awesome.
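A quick back-of-envelope check of that figure:

```python
# Two years of simulated experience generated every real-time hour is
# equivalent to running this many robots in real time.
hours_per_year = 365 * 24                       # 8,760
robot_equivalents = 2 * hours_per_year          # 17,520
print(robot_equivalents)                        # roughly the 17,000 robots quoted
```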

Lilian Weng: When we deployed our model in reality, we noticed a couple of strategies learned by the robot, like finger pivoting, sliding, and finger gaiting. These are also commonly used by humans, and interestingly, we never explicitly rewarded or encouraged those strategies. They just emerged autonomously.

Lilian Weng: Let's see some numbers. In order to compare different versions of the models, we deployed them on the real robots and counted how many successes the policy could get, up to 50, before it dropped the block or timed out. We first tried to deploy a model without any randomization at all. It got perfect performance in simulation, but look, you can see it has a median of zero successes. Super bad on the real robot.

Lilian Weng: Then we added domain randomization. The policy becomes much better: a median of 13 successes, a maximum of 50. Then we used RGB cameras with our vision model to track the object. The performance only dropped slightly, still very good. The last one I think is very interesting, because I just mentioned that our policies are recurrent units, so like an LSTM, it has internal memory.

Lilian Weng: We wanted to see how important this memory is, so we replaced the LSTM policy with a feed-forward network, deployed that on the robot, and the performance dropped a lot, which indicates that memory plays an important role in sim2real transfer. Potentially, the policy might be using the memory to learn how to adapt.

Lilian Weng: However, training in randomized environments does come with a cost. Here we plot the number of successes in simulation as a function of simulated experience measured in years. If you don't apply randomization at all, the model can learn to achieve 40 successes with about three years of simulated experience, but getting to the same number, around 40 successes, in a fully randomized environment takes about 100 years.

Lilian Weng: Okay, a quick summary. We've shown that this approach, reinforcement learning plus training in simulation plus domain randomization, works on the real robot, and we would like to push it forward. Thank you so much. Next is Christine.

Christine Payne speaking

Research Scientist Christine Payne on the Music Generation team gives a talk on how MuseNet pushes the boundaries of AI creativity, both as an independent composer, and as a collaboration tool with human artists.  Erica Kawamoto Hsu / Girl Geek X

Christine Payne: Thank you. Let’s see. Thank you. It’s really great to see all of you here. After this talk, we’re going to take a short break and I’m looking forward to hopefully getting to talk to a lot of you at that point. I’ve also been especially asked to announce that there are donuts in the corner and so please help us out eating those.

Christine Payne: If you've been following the progress of deep learning in the past couple of years, you've probably noticed that language generation has gotten much, much better, noticeably better, in the last couple of years. But as a classical pianist, I wondered: can we take that same progress and apply it instead to music generation?

Christine Payne: Okay, I'm not Mira. Sorry. Hang on. One moment, I think we're on the wrong slide deck. All right, sorry about that. Okay, trying again. Talking about music generation. You can imagine different ways of generating music, and one way might be a programmatic approach where you say, "Okay, I know that drums are going to follow a certain pattern. Harmonies usually follow a certain pattern." You can imagine writing rules like that, but there are whole areas of music that you wouldn't be able to capture with that. There's a lot of creativity, a lot of nuance, the sort of things that you really want a neural net to be able to capture.

Christine Payne: I thought I would dive right in by playing a few examples of MuseNet, which is this neural net that’s been trained on this problem of music generation. This first one is MuseNet trying to imitate Beethoven and a violin piano sonata.

Christine Payne: It goes on for a while, but I'll cut it off there. What I'm really going for in this generation process is long-term structure, so both the nuance and the intricacies of the pieces, but also something that stays coherent over a long period of time. This is the same model, but instead trying to imitate jazz.

Christine Payne: Okay, and I'll cut this one off too. As you maybe could tell from those samples, I am more interested in the problem of composing the pieces themselves, so sort of where the notes should be, and less in the actual quality of the sound and the timbre. I've been using a format called MIDI, which is an event-based system for writing music. It's a lot like how you would write down notes in a music score: this note turns on at this moment in time, played by this instrument, maybe at this volume, but you don't capture how this amazing cellist actually made it sound, so I'm throwing out all of that kind of information.

Christine Payne: But the advantage of throwing that out is that you can then get this longer-term structure. Building this sort of dataset involves a little bit of begging for data. A bunch of people, like BitMidi and ClassicalArchives, were nice enough to just send me their collections, plus Google Magenta's MAESTRO dataset, and then a bunch of scraping of online sets.

Christine Payne: The architecture itself, here I’m drawing really heavily from the way we do language modeling and so we use a specific kind of neural net that’s called a transformer architecture. The advantage of this architecture is that it’s specifically good at doing long-term structure so you’re able to look back not only at things that have happened in the recent past but really, you can look back like what happened in the music a minute ago or something like that, which is not possible with most other architectures.

Christine Payne: In the language world, I like to think of it this way: the model itself is trained on the task of predicting what word is going to come next. It might initially see just a question mark, so it knows it's supposed to start something. In English, we know it's maybe "the" or "she" or "how" or something like that. There are some good guesses and some really bad guesses. If we know the first word is "hello", then we've kind of narrowed down what we expect our next guesses to be. It might be "how", it might be "my", it's probably not going to be "cat". Maybe it could be "cat". I don't know.

Christine Payne: At this point, we're getting pretty sure. A trained model should actually give a good 90% chance that the next word is "name", and then it should be really 100% sure, or 99.5% sure or whatever, that the next word is going to be "is". Then we hit kind of an interesting branching point where there are tons of good answers, so lots of names could be great answers here, and lots of things could also be really bad answers, so we don't expect to see some random verbs, some random… There are lots of things that we think would be bad choices, but we get a point here to branch in good directions.

Christine Payne: The idea is once you have a model that’s really good at this, you can then turn it into a generator by sampling from the model according to those probabilities. The nice thing is you get the coherent structure. When you get a moment like this, you know like I have to choose … In music, it’s usually like I have to choose this rhythm, I have to choose … like if I choose the wrong note, it’s just going to sound bad, things like that. But then there are also a lot of points like this where the music can just go in fun and interesting different directions.
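A minimal sketch of that sampling idea (this is not MuseNet's actual code; the model interface is assumed):

```python
import torch

# Once a model assigns a probability to every possible next token, you can
# generate by repeatedly sampling from that distribution and feeding the
# chosen token back in as context.
def generate(model, prompt_tokens, num_steps, temperature=1.0):
    tokens = list(prompt_tokens)
    for _ in range(num_steps):
        logits = model(torch.tensor([tokens]))[0, -1]       # scores for next token
        probs = torch.softmax(logits / temperature, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1).item()
        tokens.append(next_token)
    return tokens
```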

Christine Payne: But of course, now we have the problem of how you translate this kind of music into a sequence of tokens that the model can work with. The system that I'm using is very similar to how MIDI itself works. I have a series of tokens that the model will always see. Initially, it'll always see the composer, or the band, or whoever wrote the piece. It'll always see what instrument, or what set of instruments, to expect in the piece.

Christine Payne: Here, it sees the start token, because it's at the start of this particular piece, and a tempo. Then as the piece begins, we have a symbol saying that this C and that C each turn on with a certain volume, and then we have a token that says to wait a certain amount of time. Then as it moves forward, the volume zero means that first note just turned off, and the G means the next note turns on. Then we have to wait, and similarly, here the G turns off, the E turns on, and we wait. You can progress through the whole piece of music like this.
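A hypothetical token sequence in the spirit of that encoding; the token names and values below are invented, only the pattern of metadata tokens followed by interleaved note-on, note-off (volume zero), and wait events follows the description:

```python
# Illustrative event-token encoding of the opening described above.
example_sequence = [
    "composer:chopin",
    "instrument:piano",
    "start",
    "tempo:120",
    "note:C4:vol:80",   # lower C turns on at some volume
    "note:C5:vol:80",   # higher C turns on
    "wait:240",         # wait a certain amount of time
    "note:C4:vol:0",    # volume zero means the first note turns off
    "note:G4:vol:72",   # G turns on
    "wait:240",
    "note:G4:vol:0",
    "note:E4:vol:72",
    "wait:240",
]
```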

Christine Payne: In addition to this token by token thing, I’m helping the model out a little bit by giving it a sense of the time that’s going on. I’m also giving it an extra embedding that says everything that happens in this purple line happens in the same amount of time or at the same moment in time. Everything in blue is going to get a different embedding that’s a little bit forward in time and so forth.

Christine Payne: The nice thing about an encoding or a system like this is that it's pretty dense but also really expressive. This is the first page of a Chopin Ballade, and it actually encapsulates how the pianist played it: the volumes, the nuances, the timings, everything like that.

Christine Payne: The model is going to see that sequence of numbers. That first 1444, I think, must mean Chopin, and the next one probably means piano, and the next one means start, that sort of thing. In the first layer of the model, what it has to do is translate each number into a vector of numbers, and then it can learn a good vector to represent it, so it gets a sense of what it means to be Chopin, or what it means to be a C on a piano.

Christine Payne: The nice thing you can do once… The model will learn. Initially it starts out totally random, so it has no idea what those numbers should mean, but in the course of training, it'll learn better versions. What you can do is start to map out what it's learned for these embeddings. For example, this is what it's learned for a piano scale, all the notes on a piano, and it's come to learn that all of these As are kind of similar, that the notes relate to each other. This is like moving up on a piano. It's hard to tell here, but it's learned little nuances, like going up a major third is closer than going up a tritone, stuff like that. Actually really interesting musical stuff.

Christine Payne: Along the same lines, given that I'm always giving it this genre token and then the instrument token, you can look at the embeddings it's learned for the genres themselves. Here, the embeddings it's learned for all these French composers end up being pretty similar. I actually like that Ravel wrote in the style of Spanish pieces, and there's the Spanish composer that's connected to him, so it makes a lot of good sense musically. Similarly, over in the jazz domain, a lot of them cluster together. I think there were a couple of random ones that made no sense at all. I can't remember now off the top of my head; it's like Lady Gaga was connected to Wagner or something, but mostly, it made a lot of great sense.

Christine Payne: The other kind of fun thing you can do once you have the style tokens is to try mismatching them. You can try things like literally taking 0.5 of the embedding for Mozart plus 0.5 of the embedding for jazz, just adding them together, and seeing what happens. Or, in this case, what I'm doing is giving it the token for Bon Jovi, instruments for a band, but then giving it the first six notes of a Chopin Nocturne. The model then just has to generate as best it can from that point.
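A toy illustration of that style-mixing trick; the embedding table size and token ids below are made up:

```python
import torch

# Average the learned embedding vectors for two style tokens and use the
# result in place of a single style embedding when conditioning the model.
embedding = torch.nn.Embedding(num_embeddings=2000, embedding_dim=512)
mozart_id, jazz_id = 17, 42                       # hypothetical token ids
mixed_style = 0.5 * embedding(torch.tensor(mozart_id)) \
            + 0.5 * embedding(torch.tensor(jazz_id))
# mixed_style can then be fed to the model as the style conditioning vector.
```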

Christine Payne: You’ll hear at the start of this, it’s very much how the Chopin Nocturne itself sounds. I’ve cut off the very, very beginning of it but you’ll hear–so that left-hand pattern is going to be like straight out of Chopin and then well, you’ll see what happens.

Christine Payne: Sorry, it's so soft, but it gets very Bon Jovi at this point; the band kicks in. I always loved that Chopin looks a little shocked, but I really love that it manages to keep the left-hand pattern of the Nocturne going even though it now thinks it's in this pop sort of style.

Christine Payne: The other thing I’ve been interested in this project is in how musicians and everyone can use generators like this. If you go to our OpenAI blog you can actually play with the model itself. We’ve created, along with Justin and Eric and Nick, a sort of prototype tool of how you might co-compose pieces using this model. What you can do is you can specify the style and the instruments, how long a segment you want the model to generate and you hit start and the model will come back with four different suggestions of like how you might begin a piece in this style. You go through and you pick your favorite one, you hit the arrow again to keep generating and the model will come up with four new different ways. You can continue on this way as long as you want.
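A sketch of that co-composition loop, with the model and chooser interfaces assumed rather than taken from the actual tool:

```python
# The model proposes several continuations, a human picks one, and generation
# continues from that choice until the musician decides the piece is done.
def co_compose(model, style_tokens, segment_length, choose):
    piece = list(style_tokens)
    while True:
        candidates = [model.generate(piece, segment_length) for _ in range(4)]
        pick = choose(candidates)       # e.g. ask the musician to pick a favorite
        if pick is None:                # musician is done
            return piece
        piece.extend(pick)
```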

Christine Payne: What I find kind of fun about this is that it really feels like I'm composing, but not at a note-by-note level, and so I was really interested in how humans and musicians will be able to guide composing this way. Just kind of wrapping up, I thought I would play an example. This is one guy who used GPT-2 to write the lyrics, which I guess is where "Covered in Cold Feet" comes from, and then MuseNet to do the music. It's a full song, but I'll just play the beginning of it, which he then recorded himself.

Christine Payne: (singing)

Christine Payne: Visit the page to hear the whole song but it’s been really fun to see those versions. The song, I ended up singing it the entire day. It gets really catchy but it’s been really fun to see musicians start to use it. People have used it to finish composing symphonies or to write full pieces, that sort of thing.

Christine Payne: In closing, I just wanted to share I’ve gone through this crazy path of two years ago being a classical pianist to now doing AI research here and I just wanted to … I didn’t know that Rachel was going to be right here. Give a shout out to fast.ai. She’s the fast.ai celebrity here but yeah. This has been my path, been doing it. These are the two courses I particularly love, fast.ai and deeplearning.ai and then I also went through OpenAI’s Scholars program and then the Fellows Program. Now I’m working here full-time, but happy to talk to anybody here if they’re interested in this sort of thing.

Christine Payne: The kind of fun thing about AI is that there’s so much that’s still wide open and it’s really helpful to come from different backgrounds where you bring a … It’s amazing how if you bring a new perspective or a new insight, there are a lot of things that are still just wide open that you can figure out how to do. I encourage anyone to come and check it out. We’ll have a concert. Thank you.

Mira Murati speaking

RL Team Manager Mira Murati gives a talk about reinforcement learning and industry trends at OpenAI Girl Geek Dinner.  Erica Kawamoto Hsu / Girl Geek X

Mira Murati: Hey, everyone, I’m Mira Murati and I’ll talk a little bit about the advancements in reinforcement learning from the lens of our research team here at OpenAI. Maybe I’ll kick things off by just telling you a bit about my background and how I ended up here.

Mira Murati: My background is in mechanical engineering but most of my work has been dedicated to practical applications of technology. Here at OpenAI, I work on Hardware Strategy and partnerships as well as managing our Reinforcement Learning research team alongside John Schulman, who is our lead researcher. I also manage our Safe Reinforcement Learning team.

Mira Murati: Before coming to OpenAI, I was leading the product and engineering teams at Leap Motion, which is a company that’s focused on the issue of human machine interface. The challenge with the human machine interface, as you know, is that we’ve been enslaved to our keyboard and mouse for 30 years, basically. Leap Motion was trying to change that by increasing the bandwidth of interaction with digital information such that, just like you see here, you can interact … Well, not here, with the digital space in the same natural and high bandwidth way that you interact with your physical space. The way you do that is using computer vision and AI to track your fingers in space and bring that input in virtual reality or augmented reality in this case.

Mira Murati: Before that, I was at Tesla for almost three years leading the development and launch of the Model X. That’s enough about me. I’ll touch a bit about on the AI landscape as a whole, just to offer a bit of context on the type of work that we’re doing with our Reinforcement Learning team. Then I’ll talk a bit about the impact of this work, the rate of change in the field as well as the challenges ahead.

Mira Murati: As you know, the future has never been bigger business. Every day we wake up to headlines like this, and a lot of stories talk about the ultimate convergence, where all the technologies come together to create the ultimate humankind invention, that of general artificial intelligence. We wonder what this is going to do to our minds and to our societies, our workplaces and healthcare. Even politicians and cultural commentators are aware of what's happening with AI to some extent, to the point that a lot of nations out there have published their AI strategies.

Mira Murati: There is definitely a lot of hype, but there is also a ton of technological advancement that's happening. You might be wondering what's driving these breakthroughs. Well, a lot of advancements in RL are driving the field forward, and my team is working on some of these challenges through the lens of reinforcement learning.

Mira Murati: Both Brooke and Lilian did a great job going over reinforcement learning so I’m not going to touch too much upon that, but basically, to reiterate, it is you’re basically learning through trial and error. To provide some context for our work, I want us to take a look at …

Mira Murati: Oh, okay. There's music. I wanted to take a look at this video where first you see this human baby, nine months old, and how he is exploring the environment around him. You see this super-high-degrees-of-freedom interaction with everything around him. I think this is four hours of play in two minutes. Some of the things that this baby does, like handling all these objects, rolling around all this stuff, are almost impossible for machines to do, as you saw from Lilian's talk.

Mira Murati: Then… Well, he's going to keep going, but let's see. Okay, now that… What I want to show you is… Okay, this is not working, but basically, I wanted to show you that, by contrast, there's this video game over there where you would see an AI agent that's basically trying to cross a level and makes the same mistakes over and over again. The moral of the story is that AI agents are very, very limited when they're exploring their environment. Human babies just nine months old have this amazing ability to explore their environment.

Mira Murati: The question is, why are humans so good at understanding the environment around them? Of course, humans … We have this baby running in the playground. Of course, humans are very good at transferring knowledge from one domain to another, but there is also prior knowledge from evolution and also, from your prior life experiences. For example, if you play a lot of board games and I asked you to play a new one that you have never seen before, you’re probably not going to start learning that new game from scratch. You will apply a lot of the heuristics that you have learned from the previous board game and utilize those to solve this new one.

Mira Murati: It’s precisely this ability to abstract, this conceptual knowledge that’s based on or learned from perceptual details of real life that’s actually a key challenge for our field right now and we refer to this as transfer learning.

Mira Murati: What's the state of things? There's been a lot of advancement in machine learning and particularly in reinforcement learning. As you heard from the talks earlier, new datasets drive a lot of the advancements in machine learning. Our Reinforcement Learning team built a suite of thousands of games. In itself, you might think playing video games is not so useful, but actually, they're a great test bed, because there's a lot of problem-solving and the content is already there. It comes for free, in a way.

Mira Murati: The challenge that our team has been going after is how can we solve a previously unseen game as fast as a human, or even faster, given prior experiences with similar games. The Gym Retro dataset helps us do that. I was going to say that some of the games look like this but the videos are not quite working. But in a way, the Gym Retro dataset, you can check it out on the OpenAI blog, emphasizes the weaknesses of AI which is that of grasping a new task quickly and the ability to generalize knowledge.
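For anyone who wants to try it, a minimal Gym Retro example looks roughly like this, using the freely bundled Airstriker-Genesis game and the classic Gym step interface (other titles require importing their ROMs):

```python
import retro  # OpenAI's Gym Retro package

# Run a random agent for one episode in a bundled Retro game.
env = retro.make(game="Airstriker-Genesis")
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())  # random actions
env.close()
```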

Mira Murati: Why do all these advancements matter and what do the trends look like? It’s now just a bit over 100 years after the birth of the visionary mathematician Alan Turing and we’re still trying to figure out how hard it’s going to be to get to general artificial intelligence. Machines have surpassed us at very specific tasks but the human brain sets a high bar for what’s AI.

Mira Murati: In the 1960s and '70s, this high bar was the game of chess. Chess was long considered the summit of human intelligence. It was visual, tactical, artistic, mathematical, and chess masters could remember every single game they played, not to mention those of their competitors, so you can see why chess became such a symbol of mastery, a huge achievement of the human brain. It combined insight, forward planning, calculation, imagination, intuition, and this was until 1996, when Deep Blue, the chess machine from IBM, was able to beat Garry Kasparov. If you had brought someone from the 1960s to that day, they would have been completely astonished that this had happened, but in 1996, it did not elicit such a reaction, because in a way, Deep Blue had cheated by utilizing the power of hardware and Moore's law. It leveraged the advancements in hardware to beat Garry Kasparov at chess.

Mira Murati: In a way, this didn't show so much the advancement of AI, but rather that chess was not the pinnacle of human intelligence. Then human sights were set on the Chinese game of Go, which is much more complex; with brute force alone and where we stand with hardware today, you'd be quite far from solving the game of Go. Then of course, in 2016, we saw DeepMind's AlphaGo beat Lee Sedol in Korea, and that was followed by the advancements of AlphaGo Zero. The OpenAI robotics team, of course, used some of the algorithms developed by the RL team to manipulate the cube, and then very recently, obviously, we saw the Dota 5v5 system beat the world champions.

Mira Murati: There's been a very strong accelerating trend of advancements pushed by reinforcement learning in general. However, there's still a long way to go. There are a lot of open questions with reinforcement learning, in figuring out where the data comes from and which actions you take early on get you the reward later. There are also issues of safety: how do you learn in a safe way, and how do you continue to learn once you've gotten really good? Think of self-driving cars, for example. We'd love to get more people thinking about these types of challenges, and I hope that some of you will join us in doing so. Thank you.

Amanda Askell speaking

Research Scientist Amanda Askell on the Policy team gives a talk on AI policy at OpenAI Girl Geek Dinner.  Erica Kawamoto Hsu / Girl Geek X

Amanda Askell: Okay, can everyone hear me? Cool. We’ve had like a lot of talks on some of the technical work that’s been happening at OpenAI. This talk is going to be pretty introductory because I guess I’m talking about what is quite a new field, but as Ashley said at the beginning, it’s one of the areas that OpenAI focuses on. This is a talk on AI policy and I’m a member of the policy team here.

Amanda Askell: I realize now that this picture is slightly unfortunate because I’m going to give you some things that look like they’re being produced by a neural net when in fact this is just an image because I thought it looked nice.

Amanda Askell: The core claims behind why we might want something like AI policy to exist in the world are really simple. Basically, AI has the potential to be beneficial. Hopefully, we can agree with this. We've had lots of talks showing how excellent AI can be and things that it can be applied to. AI also has the potential to be harmful, so I'll talk a little bit about this on the next slide, but you know we hear a lot of stories about systems that just don't behave the way their creators intended when they're deployed in the world, systems that can be taken over by people who want to use them for malicious purposes. Anything that has this ability to do great things in the world can also be either misused or lead to accidents.

Amanda Askell: We can do things that increase the likelihood that AI will be beneficial, so hopefully, that's also fairly agreed-upon. But this also includes making sure that the environment the AI is developed in is one that incentivizes responsible development. There are nontechnical things that we can do to make sure that AI is beneficial.

Amanda Askell: I think these are all really simple claims, and this leads to the idea that we should be doing some work in non-technical fields just to make sure that AI is developed responsibly and well. Just to kind of reiterate the claims of the previous slide, the potential benefits of AI are obviously kind of huge, and I feel like to this audience I don't really need to sell them, but we can go over them. Language models provide the ability, potentially, to assist with writing and other day-to-day tasks.

Amanda Askell: We can see that we can apply them to large, complex problems like climate change, potentially. This is the kind of hope for things like large-scale ML. We might be able to enable innovations in healthcare and education, so we might be able to use them for things like diagnosis or finding new treatments for diseases. Finally, they might drive the kind of economic growth that would reduce the need to do work that people don't find fulfilling. I think this is probably controversial. This is one thing that's highly debated in AI ethics, but I will defend it. I've done lots of unfulfilling work in my life, and if someone could just pay me to not do that, I would have taken that.

Amanda Askell: Potential harms: language models of the same sort could be used by malicious actors to misinform people. There are concerns about facial recognition as it improves, and about privacy. People are concerned about automation and unemployment if it's not dealt with well: does this just lead to massive unfairness and inequity? Then people are also worried about things like decision making and bias. We already see in California that there are ML systems being used for things like decisions about bail being set, but also historically, we've used a lot of systems for things like whether someone gets credit, so whether your loan is approved or not. Given that there's probably a huge amount of bias in the data, and that we don't know yet how to completely eliminate that, this could be really bad and it could increase systemic inequity in society, so that's bad.

Amanda Askell: We're also worried about AI weapons and global security, and finally, just a general misalignment of future AI systems. A lot of these are very classic examples of things that people are thinking about now, but we can expect these to be the sorts of problems that we see on an ongoing basis in the future as systems get more powerful.

Amanda Askell: I don't think AI is any different from many other technologies in at least some respects here. How do we avoid building things that are harmful? The same kinds of worries apply to, say, the aviation industry. Planes can also be taken over by terrorists. Planes can be built badly and lead to accidents. The same is true of cars or pharmaceuticals or many other technologies: with the potential to do good, there can be accidents, and they can be harmful.

Amanda Askell: In other industries we invest in safety, we invest in reducing accidents, we invest in security, so that's reducing misuse potential, and we also invest in social impact. In the case of aviation, we are concerned about things like the impact that flying might have on the climate. That's the kind of third thing that people invest in a lot.

Amanda Askell: All of this is very costly so this is just a kind of intro to like one way in which we might face problems here. I’m going to use a baking analogy, mainly because I was trying to think of a different one and I had used this one previously and I just couldn’t think of a better one.

Amanda Askell: The idea is, imagine you’ve got a competition and the nice thing about baking competitions, maybe I just have watched too many of them, is like you care both about the quality of what you’re creating and also about how long it takes to create it. Imagine a baking competition where you can just take as much time as you want and you’re just going to be judged on the results. There’s no race, like you don’t need to hurry, you’re just going to focus purely on the quality of the thing that you’re creating.

Amanda Askell: But then you introduce this terrible thing, which is like a time constraint or even worse, you can imagine you make it a race. Like the first person to develop the bake just gets a bunch of extra points. In that case, you’re going to be like well, I’ll trade off some of the quality just to get this thing done faster. You trade off some quality for increased speed.

Amanda Askell: Basically, we can expect something similar to happen with things like investment in areas like the areas that I talked about in the previous slide, where it’s like it might be that I would want to just like continue investing and making sure that my system is secure essentially like forever. I just never want someone to misuse this system so if I was given like 100 years, I would just keep working on it. But ultimately, I need to produce something. I do need to put something out into the world and the concern that we might have is that competition could drive down the incentive to invest that much in security.

Amanda Askell: This, again, happens across lots of other industries. This is not isolated to AI, and so there's a question of, what happens here? How do we ensure that companies invest in things like safety? I'm going to argue that there are four things. Some of the literature might not mention this first one, but I think it's really important. The first one is ethics. People and companies are surprisingly against being evil. That's good, that's important. I think this doesn't get talked about enough. Sometimes we talk as if the people at companies would just be totally happy turning up at 9:00 a.m. to build something that would cause a bunch of people harm. I just don't think that people think like that. People are… I have fundamental faith in humanity. I think we're all deeply good.

Chloe Lin software engineer OpenAI Girl Geek Dinner

Software Engineer Chloe Lin listens to the OpenAI Girl Geek Dinner speakers answer audience questions.  Photo credit: Erica Kawamoto Hsu / Girl Geek X

Amanda Askell: It’s really great to align your incentives with your ethical beliefs and so regulation is obviously one other component that’s there to do that. We create these regulations and industry norms to basically make sure that if you’re like building planes and you’re competing with your competitor, you still just have to make your planes. You have to establish that they reach some of … Tripped over all of those words.

Amanda Askell: You have to establish that they reach some level of safety, and that's what regulation is there for. There's also liability law, so companies have to compensate people who are harmed by failures. This is another thing that's driving that incentive to make sure your bake is not going to kill the judges. Well, yeah, everyone will be mad at you, and also, you'll have to pay a huge amount of money.

Amanda Askell: Finally, the market. People just want to buy safe products from companies with good reputations. No one is going to buy your bake if they’re like, “Hang on, I just saw you drop it on the floor before you put it into the oven. I will pay nothing for this.” These are four standard mechanisms that I think are used to like ensure that safety is like pretty high even in the cases of competition between companies in other domains like aviation and pharmaceuticals.

Amanda Askell: Where are we with this on AI? I like to be optimistic about the ethics. I think that coming to a technology company and seeing the kind of tech industry, I’ve actually been surprised by the degree to which people are very ethically engaged. Engineers care about what they’re building. They see that it’s important. They generally want it to be good. This is more like a personal kind of judgment on this where I’m like actually, this is a very ethically engaged industry and that’s really great and I hope that continues and increases.

Amanda Askell: With regulation, currently there are not many industry-specific regulations. I missed an s there but speed and complexity make regulation more difficult. The idea is that regulation is very good when there’s not an information asymmetry between the regulator and the entity being regulated. It works much less well when there is a big information asymmetry there. I think in the case of ML, that does exist. It’s very hard to both keep up with like, I think for regulators keeping up with contemporary ML work is really hard and also, the pace is really fast. This makes it actually quite difficult as an area to build very good regulation in.

Amanda Askell: Liability law is another thing where it’s just like a big question mark because like for ML accidents and misuse, in some cases it’s just unclear what existing law would say. If you build a model and it harms someone because it turns out that there was data in the model that was biased and that results in a loan being denied to someone, who is liable for that harm that is generated? You get easier and harder cases of this, but essentially, a lot of the kind of … I think that contemporary AI actually presents a lot of problems with liability law. It will hopefully get sorted out, but in some cases I just think this is unclear.

Amanda Askell: Finally, like market mechanisms. People just need to know how safe things are for market mechanisms to work well. In the case of like a plane, for example, I don’t know how safe my planes are. I don’t go and look up the specs. I don’t have the engineering background that would let me actually evaluate, say, a new plane for how safe it is. I just have to trust that someone who does know this is evaluating how safe those planes are because there’s this big information gap between me and the engineers. This is also why I think we shouldn’t necessarily expect market mechanisms to do all of the work with AI.

Amanda Askell: This is to lead up to this … to show that there’s a broader problem here and I think it also applies in the case of AI. To bring in a contemporary example, like recently in the news, there’s been concern. Vaping is this kind of like new technology that is currently not under the purview of the FDA or at least generally not heavily regulated. Now there’s concern that it might be causing pretty serious illnesses in people across the US.

Amanda Askell: I think this is a part of a more broad pattern that happens a lot in industries and so I want to call this the reactive route to safety. Basically, a company does the thing, the thing harms people. This is what you don’t want on your company motto. Do the thing. The thing harms people. People stop buying it. People sue for damages. Regulators start to regulate it. This would be really uninspiring as your company motto.

Amanda Askell: This is actually a very common route to making things more safe. You start out and there’s just no one who’s there to make sure that this thing goes well and so it’s just up to people buy it, they’re harmed, they sue, regulators get really interested because suddenly your product’s clearly harming people. Is this a good route for AI? Reasons against hope … I like the laugh because I’m like hopefully, that means people agree like no, this would be terrible. I’m just like well, one reason, just to give like the additional things of like obviously that’s kind of a bad way to do things anyway.

Amanda Askell: AI systems can often be quite broadly deployed almost immediately. It's not like you just have some small number of people consuming your product who could be harmed by it, in the way a small bakery might. Instead, you could have a system where you're like, I've built the system for determining whether someone should get a loan, and in principle, almost every bank in the US could use it the next day. The potential for widespread deployment makes it quite different from technologies or products where you have just a small base of users.

Amanda Askell: They have the potential for a really high impact. The loan system that I just talked about could, basically, could in principle really damage the lives of a lot of people. Like apply that to things like bail systems as well, which we’re already seeing and even potentially with things like misinformation systems.

Amanda Askell: Finally, in a lot of cases it's just difficult to attribute the harms. If you have something that's spreading a huge amount of misinformation, for example, and you can't directly attribute it to something that was released, this is concerning, because the reactive route requires you to be able to see who caused the harm, and whenever that's not visible, you just don't expect it to lead to good regulation.

Amanda Askell: Finally, I just want to say I think there are alternatives to this reactive break things first approach in AI and this is hopefully where a lot of policy work can be useful.

Amanda Askell: Just to give a brief overview of policy work at OpenAI. I think I’m going to start with the policy team goals just to give you the sense of what we do. We want to increase the ability of society to deal with increasingly advanced AI technology, both through information and also through pointing out mechanisms that can make sure that technology is safe and secure and that it does have a good social impact. We conduct research into long-term issues related to AI and AGI so we’re interested in what happens when these systems become more powerful. Not merely reacting to systems that already exist, but trying to anticipate what might happen in the future and what might happen as systems get more powerful and the kind of policy problems and ethical problems that would come up then.

Amanda Askell: Finally, we just help OpenAI to coordinate with other AI developers, civil society, policymakers, et cetera, around this increasingly advanced technology. In some ways trying to break down these information asymmetries that exist and it can cause all of these problems.

Amanda Askell: Just to give a couple of examples of recent work from the teams to the kind of thing that we do. We released a report recently with others on publication norms and release strategies in ML. Some of you will know about like the GPT-2 language release and the decision to do staged release. We discussed this in the recent report. We also discussed other things like the potential for bias in language models and some of the potential social impacts of large language models going forward.

Amanda Askell: We also wrote this piece on cooperation and responsible AI development. This is related to the things I talked about earlier about the potential for competition to push this bar for safety too low and some of the mechanisms that can be used to help make sure that that bar for safety is raised again.

Amanda Askell: Finally, since this is an introduction to this whole new and emerging field, here are examples of questions I think are really interesting and broad but can be broken down into very specific, applicable questions. What does it mean for AI systems to be safe, secure, and beneficial, and how can we measure this? This includes a lot of traditional AI ethics work; my background is in ethics. A lot of these questions about how you make a system fair and what it means for a system to be fair, I would think of as falling under the question of what it is for a system to be socially beneficial, and I think that work is really interesting. I do think that there's just this broad family of things like policy and ethics and governance. I don't think of these as separate enterprises.

Amanda Askell: Hence, this is an example of why. What are ways that AI systems could be developed that could be particularly beneficial or harmful? Again, trying to anticipate future systems and ways that we might just not expect them to be harmful and they are. I think we see this with the existing technology. Maybe it’s like trying to anticipate the impact that technology will have is really hard but like given the huge impact that technology is now having, I think trying to do some of that research in advance is worthwhile.

Amanda Askell: Finally, what can industry policymakers and individuals do to ensure that AI is developed responsibly? This relates to a lot of the things that I talked about earlier, but yeah, what kind of interventions can we have now? Are there ways that we can inform people that would make this stuff all go well?

Amanda Askell: Okay, last slide except the one with my email on it, which is the actual last slide. How can you help? I think that there’s this interesting, this is just like … I think that this industry is very ethically engaged and in many ways, it can feel like people feel like they need to do the work themselves. I know that a lot of people in this room are probably engineers and researchers. I think the thing I would want to emphasize is, you can be really ethically engaged and that doesn’t mean you need to take this whole burden on yourself.

Amanda Askell: One thing you can also do is advocate for this work to be done, either in your company, or just anywhere where people are like … in your company, in academia or just that your company is informed of this stuff. But in general, helping doesn’t necessarily have to mean taking on this massive burden of learning an entire field yourself. It can just mean advocating for this work being done. At the moment, this is a really small field and I would just love to see more people working in it. I think advocacy is really important but I also think another thing is you can technically inform people who are working on this.

Amanda Askell: We work closely with a lot of the teams here, and I think that's really useful. I think that policy and ethics work is doing its best, basically, when it's really technically informed. If you find yourself working in a position where a lot of the things that you're doing feel like they are important and would benefit from this sort of work, helping the people who are working on it is a really excellent way of helping. It's not that the only thing you can do is spend half of your time doing the work that I'm doing and the others on the team are doing. You can also get people like us to do it. We love it.

Amanda Askell: If you’re interested in this, so thank you very much.

Brooke Chan, Amanda Askell, Lilian Weng, Christine Payne, Ashley Pilipiszyn

OpenAI girl geeks: Brooke Chan, Amanda Askell, Lilian Weng, Christine Payne and Ashley Pilipiszyn answer questions at OpenAI Girl Geek Dinner.  Erica Kawamoto Hsu / Girl Geek X 

Audience Member:  I have a question.

Amanda Askell: Yes.

Audience Member: For Amanda.

Amanda Askell: Yes.

Audience Member: Drink your water first. No, I think the ethics stuff is super interesting. I don't know of a lot of companies that have an ethics department focused on AI, and I guess one thing that I'm curious about is, you pointed out your papers, and I know you talked about educating and all this other stuff, but what do you guys… do? Do you know what I mean? Other than write papers.

Amanda Askell: Yeah.

Ashley Pilipiszyn: Oh, Christine.

Amanda Askell: Which one? Yeah, so I think at the moment there are a few kinds of roles. I can say what we do, but also what I think people in these roles can do. In some cases it can be looking at what you're building internally. We have the charter, and so you want to make sure that everything that you're doing is in line with the charter. Things like GPT-2 and release decisions I think of as a kind of ethical or ethical/policy issue where I would like to see the ML community build really good norms. Even if people don't agree with what OpenAI tried to do with its release decisions, it was coming from a place of trying to build good norms, and so you can end up thinking about decisions like that.

Amanda Askell: That's more of an example of something where it's not writing a paper, it's just thinking through all of the consequences of different publication norms and what might work and what might not. That's one aspect, the internal component. I think of the external component as, on the one hand, writing papers, so saying what the problems here are that people could work on, and in a lot of ways that's just outreach, trying to get people who are interested in working on this to work on it further. For that, there are a few audiences, so you might be interested in attracting people to the field if you think that there are these ongoing problems within both companies and maybe with other relevant actors. Maybe you also want people going into government on this stuff.

Amanda Askell: But the audience can also be internal, to make people aware of these issues, and it can also be policymakers, to inform them of the kind of structure of the problem here. I think of it as having this internal plus external component, and you can end up dividing your time between the two of them. We spend some time writing these papers and trying to get people interested in these topics and just trying to solve the problems. That's the nice thing about papers: you can just say, what's the problem, I will try and solve it, and I'll put my paper on arXiv. Yeah, and so I think there's both of those.

Amanda Askell: It's obviously fine for companies to have people doing both, and I think it's great if a company just has a team that's designed to look at what they're doing internally, so that if anyone has ethical concerns about it, that team can take them on and own them and look at them. I think that's a really good structure, because it means that people don't feel… if you're just having to raise these concerns yourself and maybe feel kind of isolated, that'd be bad, but if you have people that you know are thinking about it, I think that's a really good thing. Yeah, internal plus external, and I can imagine different companies liking different structures. I hope that answers the question.

Rose: My question is also for Amanda. So the Google AI Ethics Board was formed and disbanded very quickly kind of famously within like the span of less than a month. How do you kind of think about that like in the context of the work that OpenAI is doing and like how do you think about like what they failed at and like what we can do better?

Amanda Askell: This was a really difficult case, so I can give you… I remember personally kind of looking at this and thinking about one thing that was in it… I don't know if people know the story about this case, but basically, Google formed a board and said, "We want this to be intellectually representative," and it garnered a lot of criticism because it had the head of the Heritage Foundation, a conservative think-tank in the US, as one of its members, and this was controversial.

Amanda Askell: I remember having mixed views on this, Rose. I do think it’s great to … Ultimately, these are systems that are going to affect a huge number of people and that includes a huge number of people who have views on how they should be used and how they should affect them. They’re just very different from me and I want those people to be represented and I want their views on how they do or do not want systems to affect them to be at the table. We talked earlier about the importance of representativeness and I genuinely believe that for people who have vastly different views for myself. If they’re affected by it, ultimately, their voice matters.

Amanda Askell: At the same time, I think I also… there's a lot of complicating factors. You're getting my deeply mixed emotions here, because there's a strange sense in which handpicking people to be in the role of a representative of a group, where you're like, I don't know, we select who the intellectual representatives are, also struck me as somewhat odd. It's a strange kind of… It set off my old political philosophy concerns where I'm like, "Oh, this just doesn't…" It feels like it's imitating democracy but isn't getting there. And also, for the people who come to the table, there are certain norms of respect toward lots of groups of people that just have to be upheld if you're going to have people with different views have an input on a topic.

Amanda Askell: I think some of the criticisms were that people felt those norms had not been upheld, and that this person had been insulting to key groups of people, the trans community and immigrants. Largely, mixed feelings, where I see the intention, and it actually seems to me to be a good one, but I see all of these problems with trying to execute on it.

Amanda Askell: I can’t give an awesome response to this. It’s just like yeah, here it is, I’ve nailed it. It’s just like yeah, these are difficult problems and I think if you came down really strongly on this where it was like this was trivially bad or you were like this was trivially good, it just feels no, they were just like there are ways that I might have done this differently but I see what the goal was and I’m sympathetic to it but I also see what the problems were and I’m sympathetic to those. Yeah, it’s like the worst, the least satisfying answer ever, I guess.

OpenAI Girl Geek Dinner audience women in AI.

OpenAI Girl Geek Dinner audience enjoys candor from women in AI.  Erica Kawamoto Hsu  / Girl Geek X

Audience Member: Hi, I have a question for Brooke. I’m also a fan of Dota and I watched TI for two years. My question is, if your model can already beat the best team in the world, what is your next goal?

Brooke Chan: Currently, we’ve stopped the competitive angle of the Dota project because really what we wanted to achieve was to show that we could get to that level. We could get to superhuman performance on a really complex game. Even at finals, we didn’t necessarily solve the whole game because there were a lot of restrictions, which people brought up. For example, we only used 17 out of the you know 100 and some heroes.

Brooke Chan: From here, we’re just looking to use Dota more as a platform for other things that we want to explore because now we know that it’s something that is trainable and can be reused in other environments, so yeah.

Audience Member: Hi, my question is about what are some of the limitations of training robots in a simulator?

Lilian Weng: Okay, let me repeat. The question is, what's a limitation of training the robot control models in simulation? Okay, there are lots of benefits, I would say, because in simulation you have the ground truth. You know exactly where the fingertips are, you know exactly which joints are involved. We can do all kinds of randomization and modification of the environment. The main drawback is we're not sure what the difference is between our simulated environment and reality. Our eventual goal is to make it work in reality. That's the biggest problem. That's also what decides whether our sim2real transfer is going to work.

Lilian Weng: I will say one thing that confuses or puzzles me personally the most is, when we are running all kinds of randomizations, I'm not sure whether it's getting us closer to reality, because we don't have a good measurement of what reality looks like. But one thing I didn't emphasize a lot in the talk is that because we design all kinds of environments in the simulation and we ask the policy model to master all of them, there actually emerges some meta-learning effect. With meta-learning, your model can learn how to learn. We expect this meta-learning effect to empower the model to handle things it has never seen before.

Lilian Weng: That is something we expect with domain randomization: that our model can go above what it has seen in the simulation and eventually adapt to reality. We are working on all kinds of techniques to make the sim2real thing happen, and that's definitely the most difficult thing for robotics, because it's easy to make things work in simulation. Okay, thanks.
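To make the domain randomization idea a little more concrete, here is a minimal, hypothetical Python sketch (not OpenAI's actual code): every training episode samples new simulator parameters, so the policy has to cope with the whole distribution of worlds rather than one fixed setting. The parameter names, ranges, and the `policy`/`simulator` objects are all illustrative assumptions.

```python
# Hypothetical sketch of domain randomization: resample physics parameters
# every episode so the policy must generalize across many simulated worlds.
import random

def sample_randomized_params():
    # Ranges are illustrative placeholders, not real calibrated values.
    return {
        "friction": random.uniform(0.5, 1.5),
        "object_mass": random.uniform(0.05, 0.5),   # kg
        "motor_gain": random.uniform(0.8, 1.2),
        "observation_noise": random.uniform(0.0, 0.02),
    }

def train(policy, simulator, episodes=10_000):
    """policy and simulator are assumed objects with reset/run/update hooks."""
    for _ in range(episodes):
        params = sample_randomized_params()   # a new "world" every episode
        simulator.reset(**params)
        rollout = simulator.run(policy)
        policy.update(rollout)                # e.g. a policy-gradient step
```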

Audience Member: I was just curious as kind of another follow-up question to Brooke’s answer for earlier but for everybody on the panel too. What do you consider to be some of the longer-term visions for some of your work? You did an impressive thing by having Dota beat some real people but where would you like to see that work go or what kinds of problems do you think you could solve with that in the future too, and for some other folks on the panel too?

Brooke Chan: Sure, I would say that pretty honestly when we started the Dota project we didn’t actually know whether or not we would be able to solve it. The theory at the time was that we would need a much more powerful algorithm or a different architecture or something in order to push it kind of all the way. The purpose of the project was really to demonstrate that we could use a relatively straightforward or simple algorithm in order to work on this complex game.

Brooke Chan: I think going out from here, we're kind of looking into environments in general. We talked about how Dota might be one of our last kind of games, because games are still limited. They're helpful and beneficial in that you can run them in simulation, you can run them faster, but we want to also get closer to real-world problems. Dota was one step toward getting to real-world problems in the parts that I talked about, like the partial information and the large action space and things like that. Going on from there, we want to see what other difficult problems you could also apply this sort of thing to. I don't know if other people …

Christine Payne: Sure. In terms of a music model, I would say there are two things I find fascinating. One is that I really like the fact that it's this one transformer architecture which we're now seeing applied to lots of different domains. It can do both language and music, and it's really interesting to find these really powerful algorithms that don't care what they're learning; they're just learning. I think that's going to be a really interesting path going forward.

Christine Payne: Then, also, I think that music is a really interesting test because we have a lot of sense as humans: we know how we would want the music to go, we know how the music affects us emotionally, and there's all this human interaction that we can explore in the music world. I hear from composers saying they want to be able to give the shape of the music or the sense of it or the emotion of it, and I think there's a lot of space to explore there. It's the same sort of thing we'll want in any field, being able to influence how we interact with a program, but music is a fun area to play with it.

Ashley Pilipiszyn: Actually, as a follow-up, if you look at all of our panelists and everything everyone presented, it's not just human and AI interaction, but human and AI cooperation. For anyone who followed our Dota finals event as well, not only was it a huge success, but, for anyone who is a Dota fan in the crowd, I'd be curious if anyone participated in our co-op challenge. Anyone by chance? No, all right. That's all right.

Ashley Pilipiszyn: But actually, you were able to insert yourself onto a team with OpenAI Five, and I think across all of our research here we're trying to explore the boundaries of what human-AI cooperation looks like. I think that's going to be a really important question going forward, so we're trying to look at that more.

Speaker: And we have time for two more questions.

Audience Member: Thank you. Just right on time. I have a question for you, Christine. I was at a conference earlier this year and I met this person named Ross Goodwin, who wrote a screenplay using a natural language processing model that he trained. I think it's called Sunspring or something like that. It's a really silly script that doesn't make any sense, but it's actually pretty fun to watch. But he mentioned that in the media the credit was mostly given as "an AI wrote this script," and his name was actually never mentioned, even though he wrote the model and he gathered the training data. What is your opinion on authorship in these kinds of tools, also the one you mentioned where you say you're actually composing? Are you the composer or is the AI the composer? Should it be like a dual authorship?

Christine Payne: That is a great question. It's a difficult question that I've tried to explore a little bit. I've actually tried to talk with lawyers about what copyright is going to look like. Who owns pieces like this? Because in addition to who wrote the model and who's co-composing or co-writing something, there's also who's in the dataset. If your model is imitating someone, are they in any part the author of that?

Christine Payne: Yeah, I mean I have my own sort of guesses of where I think it might go, but I think I'm a little bit [inaudible 01:37:11] in terms of the more you think about it, the more you're like, this is a hard problem. It's hard to come down firmly on one side or the other, because clearly you don't want to be able to just press go, have the model generate a ton of pieces, and say, "I now own all these pieces." You could just own a ridiculous number of pieces. But if you're the composer who has carefully worked and crafted the model, and you write a little bit of a piece, then the model writes some and you write some, and there's some interaction that way, then sure, that should be your piece. Yeah, I think it's something that we probably will see in the near future, law trying to grapple with this, but it's an interesting question. Thanks.

Audience Member:  Okay, last question. Oh no.

Ashley Pilipiszyn: We’ll also be around so afterwards you can talk to us.

Audience Member: This is also a followup question and it’s for everyone on the panel. Could you give us some examples of real-life use cases of your research and how that would impact our life?

Ashley Pilipiszyn: An example.

Christine Payne: It’s not an easy one to close on. You want to take it. Go for it.

Lilian Weng: I will say, if eventually we can build general purpose robots, just imagine using the robot to do a lot of dangerous tasks, I mean tasks that might be dangerous to humans. That can definitely reduce the risk to human laborers and the amount of repetitive work. For example, on an assembly line, there are some tasks that involve human hands but are kind of boring. I heard from a friend that there's a very high churn rate of people who are working on the assembly line, not because it's low pay or anything, but mostly because it's very boring and repetitive.

Lilian Weng: It's not really good for people's mental health, and the factories struggle to hire enough people because lots of people will just leave the job after a couple of months or half a year. If we can automate all those tasks, we're definitely going to leave other, more interesting and creative positions for humans to do, and I think that's going to improve the overall productivity of society. Yeah. That's still a very far-off goal. We're still working on it.

Amanda Askell: I can also give a faraway thing. I mean, I guess with my work, you know, with the direct application, I'm like, "Well, hopefully, ML goes really well." Ideally, we have a world where all of our institutions are actually both knowledgeable of the work that's going on in ML and able to react to it really well, so a lot of the concerns that people have raised around things like what happens to authorship, what happens to employment, how do you prevent things like the misuse of your model, how can you tell it's safe? I think if policy work goes really well, then ideally you live in a world where we've just made sure that we have all of the right checks in place to make sure that you're not releasing things that are dangerous or that can be misused or harmful.

Amanda Askell: That just requires a lot of work to ensure that’s the case, both in the ML community, and in law and policy. Ideally, the outcome of great policy work is just all of this goes really smoothly and awesomely and we don’t have any bad things happen. That’s like the really, really modest goal for AI policy work.

Brooke Chan: I have two answers. On the shorter-term side, in terms of AI being applied to video games: AI in video games historically is really awful. It's either really bad and scripted, and you can beat it easily and you get nothing from it, or it's crazy good because it's basically cheating at the game, and that's also not really that helpful. Part of what we found out through the Dota project was people actually really did like learning with the AI. When you have an AI that's at your skill level or slightly above, you have a lot of potential, first of all, to have a really good competitor that you can learn from and work with, but also to be constantly challenged and pushed forward.

Brooke Chan: For a longer-term perspective, I would leverage off of the robotics work and the stuff that Lilian is doing: the system that we created in order to train our AI is more general and can be applied to other sorts of problems. For example, it got utilized a little bit for the robotics project as well, and so I feel it's more open-ended in that sense in terms of the longer-term benefits.

Christine Payne: Okay and I’ll just wrap up saying yeah, I’ve been excited already to see how musicians and composers are using MuseNet. There are a couple examples of performances that have happened now of MuseNet pieces and that’s been really fun to see. The main part that I’m excited about is that I think the model is really good at just coming up with lots and lots of ideas. Even though it’s imitating what the composers might be doing, it opens up possibilities of like, “Oh, I didn’t think that we could actually do this pattern instead.” Moving towards that domain of getting the best of human and the best of models I think is really fun to think about.

Ashley Pilipiszyn: So, kind of how I started the event this evening, our three main research areas are really capabilities, safety, and policy. You've been able to hear that from everyone here. I think the big takeaway and a concrete example I'll give you is, think about your own experience going through primary education. You had a teacher and you most likely went to science class, then you went to math class, and then maybe music class and then art class and gym. You had different teachers, and probably for most people, they just assumed you were all at the same level.

Ashley Pilipiszyn: How I think about it is, we're working on all these different kinds of pieces and components that are able to bring all of these different perspectives together, so you have a system where you're able to bring in the math and the music and the gym components of it, but it's also able to understand what level you're at and personalize that. That's what I'm really excited about, this human-AI cooperation component and where that'll take us and help unlock our own capabilities. I think, to quote Greg Brockman, our CTO: while all our work is on AI, it's about the humans. With that, thank you for joining us tonight. We'll all be around and would love to talk to you more. Thank you.

Speaker: We have a quick update from Christina on our recruiting team.

Ashley Pilipiszyn: Oh, sorry.

Christina Hendrickson: Hey, thanks for coming again tonight. I’m Christina. I work on our recruiting team and just briefly wanted to talk to you about opportunities at OpenAI. If you found the work interesting that you heard about from our amazing speakers tonight and would be interested in exploring the opportunities with us, we are hiring for a number of roles across research, engineering and non-technical positions.

Christina Hendrickson: Quickly going to highlight just a couple of the roles here, and then you can check out more on our jobs page. We are hiring a couple of roles within software engineering. A couple of them are on robotics, so that would be working on the same type of work that Lilian mentioned. We are also hiring software engineers on our infrastructure team, where you can help us build some of the world's largest supercomputing clusters.

Christina Hendrickson: Then the other thing I wanted to highlight is one of our programs. So we are going to have our third class of our scholars program starting in early 2020. We’ll be opening applications for that in a couple weeks so sneak peek on that. What that is, is we’re giving out eight stipends to people who are members of underrepresented groups within engineering so that you can study ML full-time for four months where you’re doing self-study and then you opensource a project.

Christina Hendrickson: Yeah, we’re all super excited to chat with you more. If you’re interested in hearing about that, we have a couple recruiting team members here with us tonight. Can you all stand up, wave? Carson there in the back, Elena here in the front, myself. Carson and I both have iPads if you want to sign up for our mailing list to hear more about opportunities.

Elena Chatziathanasiadou waving

Recruiters Christina Hendrickson and Elena Chatziathanasiadou (waving) make themselves available for conversations after the lightning talks at OpenAI Girl Geek Dinner.  Erica Kawamoto Hsu / Girl Geek X

Christina Hendrickson: Thank you all again for coming. Thanks to Girl Geek X. We have Gretchen, Eric, and Erica here today. Thank you to our speakers: Brooke, Amanda, Lilian, Christine, Ashley, and thank you to Frances for helping us in organizing and to all of you for attending.

Ashley Pilipiszyn: Thank you, everybody.


Our mission-aligned Girl Geek X partners are hiring!

“Enterprise to Computer (a Star Trek Chatbot)”: Grishma Jena with IBM (Video + Transcript)

Transcript:

Sukrutha Bhadouria: Hi everyone, I hope you’ve been having a great day so far. Hi, Grishma. Hi, so yes, we are ready for our next talk. I’m Sukrutha and Grishma is here to give the next talk. Just before we get started, the same set of housekeeping rules. First is, we’re recording. We’re gonna share in a week. Please post your questions, not in chat, but in the Q and A. So you see the Q and A button at the bottom? Click on that and post there. If for some reason we run out of time, and we can’t get to your questions, we’ll have a record of it and it’s easy for us to find later and get you your answers later.

Sukrutha Bhadouria: So please share on social media #GGXelevate and look for job postings on our website at girlgeek.io/opportunities. We’ve also been having, throughout the day, viewing parties at various companies. So shout-out to Zendesk, Strava, Guidewire, Climate, Grand Rounds, Netflix, Change.org, Blue Shield, Grio, and Salesforce Portland office.

Sukrutha Bhadouria: So now, on to Grishma. Grishma is a cognitive software engineer at IBM. She works on the data science for marketing team at IBM Watson. So today her talk is about Enterprise to Computer: a Star Trek chatbot. I’m sure there’s a lot of Star Trek fans out there because I know I am one, and I can’t wait to hear about your talk, Grishma.

Grishma Jena: Thank you, Sukrutha.

Sukrutha Bhadouria: Go ahead and get started. You can share your slides.

Grishma Jena: Okay, I’m gonna minimize this. Alright, can you see my slides? Okay. Hi, everyone, I’m Grishma. As Sukrutha mentioned I work as a cognitive software engineer with IBM in San Francisco. So, a lot of my job duties involve dealing with a lot of data, trying to come up with proprietary data science or AI solutions for our Enterprise customers. My background is in machine learning and natural language processing which is why I’m talking on a chatbot today.

Grishma Jena: I've also recently joined this non-profit called For Her, where we're working on creating a chatbot that could act as a health resource center for people who are going through things like domestic abuse or sexual violence, so I'm very interested to see, you know, a totally different social application of chatbots. But for today we'll focus on something fun. And before I begin, a very happy Women's Day to all of you out there. So, yeah.

Grishma Jena: When was the last time you interacted with a chatbot? It could have been a few minutes before, when, you know, Akilah was talking and your Alexa probably got activated by mistake and you had to be like, “Alexa, stop.” It could be with Siri. We interact with Siri every day. It could be on a customer service chat or it could be on a customer service call.

Grishma Jena: Basically, there are so many different avenues and applications of chatbots today that sometimes it's even hard to distinguish if we are talking to a human. Is it a chatbot in the guise of a human? And it's quite interesting to see where chatbots have come in the past few years.

Grishma Jena: So, this was a grad school project that we did. Our idea was, okay, chatbots are amazing. We really like that they help take some of the workload off humans, but how can we make them seem a little more human, a little less mechanical? Could we give them some sort of a fun personality?

Grishma Jena: And we brainstormed for a bit and we finally came up with the idea, hey, why don't we, I mean … Well, to be honest we weren't that big of Star Trek fans, but we did become fans during the course of this project, and we were like, "Okay, let's think of Star Trek." It has a wide fan base, and let's try to not pick one single character from Star Trek but take all of the characters, make this huge mix of references and trademark dialogues, and see what kind of personality the chatbot would have.

Grishma Jena: So, like I mentioned, the motivation was to make a chatbot a little more human-like. And we wanted to have a more engaging user experience. So the application of this could be, it doesn’t have to be something related to, you know, like an entertainment industry. It could be also something like a sports lover bot so that would be very chatty and extroverted and it would support your favorite sports team. Or it could be something a little more sober like a counselor bot who is very understanding and supportive and listens to you venting out or asks you about how your day was. So yeah, we chose Star Trek infused personality.

Grishma Jena: So our objective with Star Trek was that we wanted it to incorporate references from the show. [inaudible 00:05:17] wanted to [inaudible 00:05:20] Spock and live long and prosper. We wanted it to be a data-driven model; we did not want to feed in dialogues, we wanted to just feed in a corpus and have it generate dialogues on its own. We obviously wanted it to give interesting responses and to keep the user engaged, because that is one of the things that a chatbot should do, right? So in really simple words, just think of a friend of yours, or it could be yourself, who is this absolutely big fan of Star Trek, and just transfer that personality to a chatbot.

Grishma Jena: So this is what the schema of our bot looked like. We had the user utterance, which is basically anything that you say or that you provide as input to the chatbot. And then we had a binary classifier. I'll delve deeper into why exactly we wanted it, but the main point is that we wanted it to be able to distinguish whether what you're saying to the chatbot is something related to Star Trek or something a little more general conversation, like "How are you feeling today?" or "What is the weather like?" And depending on that, we had two different routes which the bot would take to generate a response.

Grishma Jena: So before we begin, we obviously need some sort of data, and we decided that we would take all of the data that was available for the different Star Trek movies and the TV series. You'd be surprised at how little data is available, actually. We initially thought of just doing a Spock bot, but Spock himself has very limited dialogue, so we expanded our search to the entire Star Trek universe. And that's why we took dialogues from movies and TV series. We didn't want to have any sort of limitations as far as the data was concerned. We ended up with a little over 100,000 pairs of dialogues.

Grishma Jena: Then we also went and got this database, which is known as the Cornell Movie Database. This database was created by Cornell University and has a collection of raw movie scripts. It's just a really good data set to train your bot on the way humans interact, what kinds of topics they talk about, and what the responses are like.

Grishma Jena: And finally, we also had a Twitter data set because we wanted some topics that were related to the ongoing affairs in the world, the current news topics. Because we envisioned that if you had a chatbot then people do like to talk to the chatbot or ask for the chatbot’s opinion on something that’s happening in real time.

Grishma Jena: So the very first component of the chatbot was having a binary classifier. Like I mentioned, we had two different routes for our chatbot: one would be the Star Trek route and the other would be a general conversation route. So we had the binary classifier that would help us distinguish whether whatever the user is giving as input is related to Star Trek or is general conversation, which was handled by the Cornell Movie Database. We used an 80:20 split, that is, the training data set and the testing data set. And the features that we used were the top 10,000 TF-IDF unigrams and bigrams.

Grishma Jena: TF-IDF stands for term frequency and inverse document frequency. Term frequency is nothing but how many times a given word occurs in your corpus, and inverse document frequency is kind of a weight that is attached to a word. So think of a textbook or think of a document that you have. Words like prepositions, like "the," "of," "and," would occur multiple times. But the words that are really important, that carry some sort of conceptual representation, perhaps the topic of it, would be rarer in occurrence compared to prepositions and commonly used words, and that's why they should be given more weight. So that's the whole idea behind TF-IDF.

Grishma Jena: Unigrams and bigrams are nothing but dividing the entire document that you have into words. A unigram would be one [bit kilo word inaudible 00:09:17] bigram would be a set of two consecutive words that occur in the document. There's an example later on in the slides to explain it better. Stop words are just filler words, like I mentioned, similar to the prepositions. And we were very happy with the performance of the binary classifier. We were able to get 95% accuracy on the test set, and we decided that was good enough, let's move on to the next one.
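To make that classifier setup concrete, here is a minimal sketch of how a Star Trek vs. general-conversation classifier could look in Python with scikit-learn. The talk does not name the classifier, so logistic regression stands in here, and `star_trek_lines` / `cornell_lines` are assumed to be lists of dialogue strings; this is an illustration, not the project's actual code.

```python
# Sketch of the routing classifier: TF-IDF unigrams/bigrams + a stand-in
# logistic regression, with the 80:20 split described in the talk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_route_classifier(star_trek_lines, cornell_lines):
    """star_trek_lines / cornell_lines: assumed lists of utterance strings."""
    texts = star_trek_lines + cornell_lines
    labels = [1] * len(star_trek_lines) + [0] * len(cornell_lines)  # 1 = Star Trek

    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, random_state=42, stratify=labels)

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2),      # unigrams and bigrams
                        max_features=10_000,     # top 10,000 features
                        stop_words="english"),   # drop filler words
        LogisticRegression(max_iter=1000),
    )
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)        # classifier + test accuracy
```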

Grishma Jena: And finally, this is the main core of it, where deep learning comes into play. So with deep learning, we used a model called Seq2seq, which is a particular type of recurrent neural network. If you can see the image on the right, it is a simplified version of a neural network where you give it an input and it gives an output, and that output is also the input for the next cycle, so it's kind of like a feedback looping mechanism.

Grishma Jena: The specific type of neural network that we used, Seq2seq, is just two recurrent neural networks, so just think of a really big component that has two smaller components: an encoder and a decoder.

Grishma Jena: So the encoder actually takes in the input from the user and tries to provide some sort of context. What do the words mean? What exactly is the semantics behind the sentence that the user has given? And the decoder generates the output based on the context that it has understood and also based on the previous inputs that were given to it, which is where the feedback mechanism comes into play.

Grishma Jena: So just to go a little deeper into it, this is a representation of what a Seq2seq with encoder and decoder would look like. The input over here would be, "Are you free tomorrow?" and the encoder takes in that input and tries to understand what exactly the context or the meaning of this sentence is. And finally the decoder understands, okay, this is something someone is asking about, either they want to make an appointment or they are asking about someone's availability or schedule. And that's where the reply is something like, "Yes, I am. What's up?"
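For readers who want to see the encoder-decoder shape described above, here is a minimal PyTorch sketch. Vocabulary handling, attention, and the training loop are omitted, and this is an illustration rather than the project's actual architecture or code.

```python
# Minimal Seq2seq skeleton: an encoder GRU summarizes the input utterance
# into a hidden state, and a decoder GRU generates the reply token by token,
# feeding each generated token back in as the next input.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)

    def forward(self, src):                   # src: (batch, src_len) token ids
        _, hidden = self.rnn(self.embed(src))
        return hidden                          # context summarizing the input

class Decoder(nn.Module):
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, prev_tokens, hidden):    # previous outputs fed back in
        output, hidden = self.rnn(self.embed(prev_tokens), hidden)
        return self.out(output), hidden        # logits over the next tokens
```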

Grishma Jena: So these are some statistics about how exactly we went on training this on AWS. We used a p2.xlarge instance with one Nvidia Accelerator GPU and then we had the Star Trek Seq2seq. So we had one Seq2seq for just Star Trek dialogues and we had another one, the Cornell Seq2seq which is on Cornell data, which is more for just a general conversation purpose.

Grishma Jena: So we went ahead, we generated some sentences, but then we realized that the ones for Star Trek were really good because you’re giving it Star Trek as input so obviously the output is also going to be Star-trekky. But for the general conversation ones, for things like, “What is the weather like?”, “How are you doing today?”, “What is the time?” it was a little difficult for us because obviously the input is not Star Trek related, right? So the output also wouldn’t be Star Trek related, but we wanted this to be a Star Trek chatbot.

Grishma Jena: So we brainstormed a bit and we thought, “Hey, why don’t we try something called a style shifting?” Which is basically like you take a normal sentence, a sentence from the general conversation, and you try to shift it into the Star Trek domain.

Grishma Jena: And the way we did this was, we went through the entire corpus, the data set for Star Trek, and we created a word graph out of it. A word graph would be, just think of it as you parse different sentences in the data set, each of the words forms a node, and the edges between them tell you how they occurred in relation to one another, so whether they occurred right next to each other or within the same sentence.

Grishma Jena: And along with the words in the nodes, we also had a part-of-speech tag. So we indicated whether it was an adjective, or a noun, or a pronoun, or a conjunction. So let's say for example our sentence was, "Live long and prosper." You break it down into four words, which are four different nodes, then we label each with its part-of-speech tag and we connect them because they come one after the other in the sentence.

Grishma Jena: So what we did was, after we built out this really huge word graph, we looked it up to see what would be appropriate words to insert between two given words in the input. So once we had the sentence, we would check every two words in the sentence and see what words we could insert in between to give it more of a Star Trek feel, to just, you know, shift the domain into Star Trek.
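Here is a minimal Python sketch of that word-graph lookup, assuming `star_trek_lines` is a list of dialogue strings. The real project also attached part-of-speech tags to each node; this illustration leaves that out and only tracks which words the corpus has seen adjacent to one another.

```python
# Word-graph sketch: record which words follow and precede each word in the
# Star Trek corpus, then suggest words the corpus supports inserting between
# two adjacent words of a candidate response.
from collections import defaultdict

def build_word_graph(star_trek_lines):
    follows = defaultdict(set)    # word -> words seen immediately after it
    precedes = defaultdict(set)   # word -> words seen immediately before it
    for line in star_trek_lines:
        tokens = line.lower().split()
        for left, right in zip(tokens, tokens[1:]):
            follows[left].add(right)
            precedes[right].add(left)
    return follows, precedes

def insertable_between(follows, precedes, w1, w2):
    """Words the corpus has seen both after w1 and before w2."""
    return follows[w1.lower()] & precedes[w2.lower()]

# e.g. insertable_between(follows, precedes, "I", "go") might suggest corpus
# words such as "will", depending on what actually appears in the dialogues.
```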

Grishma Jena: We went ahead and did that, and these were the kind of results that we got. "I am sorry" was the input, and the word graph went ahead and inserted "Miranda" at the end. For "I will go," it inserted "back" at the end of the sentence, because "go" and "back" occur very commonly with each other. And similarly, at the start of sentences, it tried to insert names like "Uhura" or "Captain." So one thing we noticed was it was really good at inserting names at the start and the end of the sentence, and using the character names from the show did end up giving it a slightly more Star Trek feel than before.

Grishma Jena: So we went ahead and just randomly tried to insert words that occurred more frequently between two words, but then we realized that some of the sentences were ungrammatical. So what do we do? We came up with this idea: let's use the word graph as it is and then apply some sort of a filter to our responses. So, like I said, we realized that the word graph was giving a few incoherent and incorrect responses. What we did was we went ahead and constructed an n-gram model.

Grishma Jena: So n over here could be unigram, bigram, trigram. You can see the example over here: if n is equal to one, which is a unigram, you break down the sentence into individual words, so "this" would be one unigram and "is" would be another unigram. If n is two, which is a bigram, you take two words that co-occur together. So in this case the first bigram would be "This is," the second one would be "is a," and then similarly for trigrams it would be "This is a" and then "is a sentence."

Grishma Jena: So we created an n-gram model, which was just to understand what exactly the Star Trek dataset looks like. And then finally we wanted to get a probability distribution over the sequences of words that we had.

Grishma Jena: So once we got this, we started to filter the responses, and we ran the sentences through the bigram models that we trained on the Star Trek data set. Because of this we kind of got a reference for seeing what structures are grammatically correct. We kept the ones that fit, and the ones that were a little odd sounding or that didn't really occur anywhere in the data set we went ahead and removed.

Grishma Jena: Another metric that we used for this was perplexity. Just think of perplexity as a measure of how well a probability distribution is able to predict a sample. We went ahead and used that to help us tell how well the model was able to predict a given sentence.
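A minimal sketch of that bigram model and perplexity filter might look like this in Python. The add-one smoothing and the cutoff value are illustrative choices, not the project's reported settings, and `star_trek_lines` is again assumed to be a list of dialogue strings.

```python
# Bigram language model with add-one smoothing, plus a perplexity-based
# filter over candidate responses. Perplexity is exp of the negative average
# log-probability of the sentence under the model.
import math
from collections import Counter

def build_bigram_model(star_trek_lines):
    """Return a perplexity(sentence) function trained on the given lines."""
    unigrams, bigrams = Counter(), Counter()
    for line in star_trek_lines:
        tokens = ["<s>"] + line.lower().split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams)

    def prob(w1, w2):                       # add-one smoothed bigram probability
        return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab_size)

    def perplexity(sentence):
        tokens = ["<s>"] + sentence.lower().split() + ["</s>"]
        log_p = sum(math.log(prob(a, b)) for a, b in zip(tokens, tokens[1:]))
        return math.exp(-log_p / (len(tokens) - 1))

    return perplexity

def filter_responses(candidates, perplexity, cutoff=200.0):
    # The cutoff is an arbitrary illustrative value; the talk reports a corpus
    # baseline perplexity of 65 for the Star Trek dialogues.
    return [c for c in candidates if perplexity(c) < cutoff]
```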

Grishma Jena: Finally, we had all of the pieces in place and we had to evaluate the performance of the chatbot. So we came up with two categories of evaluation metrics. The first one was quantitative metrics, where we used perplexity, which was mentioned on the earlier slide. And the second part of that was we wanted to see how often it was using words that were very particular to Star Trek, words you don't really use in normal daily life, you know, like maybe "spaceship" or "engage."

Grishma Jena: And the second category was human evaluations, where we got a user group and asked them to just read the input and the output and rate how good it was in terms of grammar, whether the response actually made sense, and whether it was appropriate. And finally, on the Star Trek style: just how Star-trekky did it sound?
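As an illustration of the "Star Trek words" metric mentioned above, here is one way it could be computed. The exact definition the project used isn't given in the talk, so treating "Star Trek-specific" as "seen only in the Star Trek data" is an assumption here, and the variable names are illustrative.

```python
# Sketch of a domain-vocabulary usage rate: what fraction of generated tokens
# are words that appear in the Star Trek corpus but not in the general corpus.
def star_trek_vocab(star_trek_lines, cornell_lines):
    trek = {w for line in star_trek_lines for w in line.lower().split()}
    general = {w for line in cornell_lines for w in line.lower().split()}
    return trek - general              # words seen only in the Star Trek data

def trek_word_rate(responses, vocab):
    tokens = [w for r in responses for w in r.lower().split()]
    if not tokens:
        return 0.0
    return sum(w in vocab for w in tokens) / len(tokens)

# e.g. trek_word_rate(bot_responses, star_trek_vocab(star_trek_lines, cornell_lines))
```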

Grishma Jena: And we also came across another bot online, which is called the Fake Spock Pandora Bot, which took the opposite approach from ours. Our bot was data-driven; this one was rule-based, so it was actually given human-generated responses as input.

Grishma Jena: We wanted to see how well a data-driven model would perform compared to a human-generated one. So this is just what the Fake Spock Pandora Bot looked like, and these were the kind of responses that the Pandora Bot gave. If you said, "I'm hungry, Captain," it said, "What will you be eating?" So it's giving really good, appropriate responses, because humans were the back end for this.

Grishma Jena: And then, what we did was we went ahead and evaluated the results. We saw that our bot was performing better for Star Trek style, and it also was a little more coherent. For grammar, Pandora Bot was much better, and that's not surprising because humans were the ones who actually wrote it out. For perplexity, the Star Trek dialogues had a perplexity of 65, so that was our baseline number, and we found that the responses our bot was generating, at 60.9, were a little closer, compared to Pandora, which was way far off at 45.

Grishma Jena: So we were pretty happy with our performance. I’m just gonna give you a few examples of what the different bots generated. So the yellow ones are the Pandora Bot and the blue ones are the E2Cbot. So let’s see, if the user says, “Beam me up, Scotty” the yellow one, that is the Fake Pandora Bot, gives, “I don’t have a teleportation device” which is a good answer. And the blue one is, “Aye, Sir” which is also a good answer. A little curt, but nothing wrong with it.

Grishma Jena: In the second example, if you see, our bot answered, "Bones, I like you." So the "Bones" part actually comes from the word graph, which gives it a little more of a Star Trek feel. And in the last one over here, the Fake Bot, the human-generated one, just says, "I am just an AI chatting on the internet," which is kind of not the response that you are looking for.

Grishma Jena: A few more examples over here. The user says, “My name is Alex” and then the Fake Spock Bot says, “Yes, I know Christine.” I just told you my name was Alex, why would you call me Christine? But our bot says, “What do you want me to do, Doctor?”, which is a better response. And, yeah, these are the kind of responses.

Grishma Jena: I think some of our human focus group people said that the responses might be appropriate, but they might not be factually correct, which was a challenge for us as well as for the Fake Spock Bot. We didn't really delve deeper into it, because that would dive more into having a question answering system and trying to check whether it's factually correct or not, but we tried to make our focus group users understand that it's just a bot at the end of the day.

Grishma Jena: So finally, we were able to generate Star Trek style text. We were very happy with that, and we were able to use the data-driven approach, which meant we could automate it. And we did find that it performed better than the human-generated responses that Pandora Bot would give, at least on style and on appropriateness. It still needs a little bit of improvement in grammar, but we were pretty happy with it.

Grishma Jena: So that’s me. Live long and prosper. And feel free to reach out to me on Linkedin or on Twitter if you have any questions about this. Thank you.

Sukrutha Bhadouria: Thank you, Grishma. This was great. So just to close I just wanted to mention to everybody that you actually sent your speaker submission to us and that’s how we got connected. So thank you for doing that. We got a lot of comments from people who are Star Trek fans, but yeah, what inspired you to build this project?

Grishma Jena: Yes, so this was actually a grad school project. We were taking a deep learning course, so all of us had to build a chatbot as an Alexa skill. We brainstormed a lot, and we actually thought of Spock, because Star Trek has a really huge fan base, so Spock would be a good idea to do. But Spock had very little dialogue across all of the movies and the television series, and then we were like, "You know what, let's not stick to just one character, let's have the entire Star Trek universe." And the bonus was that during my semester, I could continuously binge watch Star Trek and say, "Yeah, I'm doing research because I want to see how well my chatbot works," but I was just binge watching, to be honest.

Sukrutha Bhadouria: Nice. That’s awesome. Well, thank you so much, Grishma, for your time. We really appreciate it and for your enthusiasm in signing up through our speaker submissions.

Grishma Jena: Thank you so much, Sukrutha.