
AI Overlords, Battling Covid-19, and Algorithmic Bias: A Conversation About the Importance of Human Goodness in AI

April 21, 2020
Julie Shin Choi, VP & GM of AI Marketing at Intel AI, at Girl Geek X, Elevate 2020

On Friday, March 6th, senior female tech leaders & engineers came together to celebrate International Women’s Day with over a dozen tech talks & panels during the Girl Geek X Elevate 2020 virtual conference. Today’s blog includes takeaways from a talk by Julie Shin Choi, VP & GM of Artificial Intelligence Products & Research Marketing at Intel AI. Prior to joining Intel, Julie led product marketing at HPE, Mozilla, and Yahoo. In addition to the YouTube video replay, a full transcript from Julie’s talk is also available.


One of the reasons that Julie Shin Choi chose to join Intel, she told us, was the opportunity and the scale that Intel’s AI technology platform would provide from a career perspective, but she never anticipated falling in love with the people of Intel.

“It is really this human goodness at Intel that keeps me here.”

One of the things that we’ve learned in recent years is that AI is a powerful agent for helping people around the world. Intel CEO Bob Swan shared an example from the Red Cross earlier this year at CES. As we all know, the Red Cross is an amazing relief organization dedicated to helping people in times of disaster.

Julie explains that Intel, the Red Cross, Mila (an AI research institute in Montreal), and other organizations recently formed a data science partnership alliance. Their objective was to map unmapped parts of Uganda and, through deep learning, to identify bridges that the Red Cross could use to deliver aid in times of disaster.

In addition to viral outbreaks (a case of Ebola emerged last June), Uganda is also prone to severe flooding.

“Bridges are often washed out or impassable,” said Red Cross CEO Dale Kunce. That “can mean that your 20-minute drive all of a sudden becomes several hours.”

Ultimately, Intel and their data partners were able to examine huge satellite images and develop algorithms that could automatically detect bridges usable by disaster relief workers, labelling over 70 previously unmapped bridges in southern Uganda.
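To make the approach concrete, here is a minimal, hypothetical sketch of that kind of workflow: a large satellite image is cut into tiles, each tile is scored by a small binary classifier, and high-scoring tiles are flagged for human mappers to verify. This is an illustration only, not Intel’s or Mila’s actual model; the PyTorch architecture, tile size, and threshold are all assumptions.

```python
# Hypothetical sketch only: a tiny tile classifier for "does this satellite
# tile contain a bridge?" A real project would train a far larger model on
# labelled imagery; this just illustrates the tiling-and-scoring idea.
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        # x: batch of RGB tiles, shape (N, 3, H, W) -> bridge probability per tile
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

model = TileClassifier().eval()  # assume weights were trained on labelled tiles

def flag_bridge_tiles(tiles: torch.Tensor, threshold: float = 0.9):
    """Return (tile_index, score) pairs worth sending to human mappers."""
    with torch.no_grad():
        scores = model(tiles).squeeze(1)
    return [(i, float(s)) for i, s in enumerate(scores) if float(s) >= threshold]

if __name__ == "__main__":
    fake_tiles = torch.rand(8, 3, 64, 64)  # stand-in for real 64x64 satellite tiles
    print(flag_bridge_tiles(fake_tiles, threshold=0.5))
```

In a real pipeline, the flagged tiles would feed back into mapping and verification by people on the ground, which is exactly where the human side of the partnership comes in.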

This is just one example of why human goodness matters when we think about AI application development. There are endless applications, some of which are especially current and relevant right now.

AI is playing a huge role in fighting the spread of Covid-19.

Everyone has heard about and is taking precautions against the global Covid-19 pandemic, but are we talking about the important role AI is playing in fighting the spread of this deadly virus?

“Globally,” Julie informs us, “We’re using big data — we’re analyzing different databases of where people have gone and the different symptoms that they may present.”

State, federal and local governments are turning to big data to make policy decisions and measure the impact and effectiveness of their policies in near real-time.

“One novel use case that we [at Intel AI] identified in Singapore is of a company that’s using IoT [Internet of Things] technology to help scan people and identify thermal readings — so basically fevers — without human contact.

Intel AI’s technology is powering thermal screening that’s helping keep people safe by catching more Covid-19 cases earlier, and with less manual input from healthcare professionals.

This AI-aided screening method is proving to be about three to four times more efficient, so they can scan 7 to 10 people with this AI device, as compared to using human healthcare practitioners. They’re able to free up limited resources and keep more healthcare workers on the front lines where they’re most needed right now.”

The utilization of AI is really helping manage a lot of the issues related to coronavirus in Singapore.

We’re seeing other innovations like this cropping up all around the world as technologists team up with big data partners, healthcare providers and policy makers to help track and slow the spread of Covid-19.

AI is new to us, so folks sometimes fear its capabilities… but our kids understand it. And they’re the ones who will be programming it.

“I have two children, 8 and 12. A couple of months ago, we were talking about the world, and the one in junior high, he said, ‘Well, I think that my generation is going to be spending most of its time solving the problems that your generation created.'”

Julie continued, “And then my little one, who’s still in elementary, chimed in right away, and he said, ‘With the help of our AI overlords, right?’

These kids already, they’re so aware, and I think the advice to our children would be to really read books, play with one another, learn how to have friends from many different backgrounds, become the best humans they can be, because it’s not going to be robot overlords. We’re going to need good humans to program those AIs.”

Good humans are the key.

“In AI, good humans are needed because it’s such a powerful technology and it’s such an accelerant that really depends on algorithms at the heart, and these algorithms are coded based on assumptions that we make about data.

AI starts with data but ends with humans. It’s technology that’s being built for humans. I think it’s very important that we partner with people who really understand the human problems that we’re trying to solve. We need to partner with domain experts.”

AI is going to take a diversity of talents and tools.

There’s really no one size fits all, Julie explains: “We’re going to need CPUs, GPUs, FPGAs, these are all different kinds of hardware. Tiny edge processors. We’re going to need a host of different software tools. We’re going to need data scientists and social scientists, psychologists and physicists, marketers and coders to all work together to come up with solutions that are creative. It’s really going to take a village. Be open-minded.”

“And let us always be thoughtful,” she added.

“I know that in Silicon Valley, people often say it’s important to go fast and to fail fast, but in AI, I don’t think so. I think we need to take time. We should be thoughtful and really, really careful and considerate about the assumptions we make as we create the tools that create the algorithms that feed the AIs.”

Good humans will be needed every step of the way.

A lot of people worry that AI is going to take our jobs and replace humans.

Julie Shin Choi, Vice President & General Manager, AI Marketing at Intel AI

“I’m a firm believer that AI will not be replacing humans; it will be augmenting humans. So it’s helping us, not replacing us.”

For example, radiology is being transformed by AI faster than most fields because of the applicability of computer vision to x-ray imaging. “But what we’re seeing is that physicians actually are welcoming the help of AI. It’s a great double check.

When you have a 97% accurate algorithm that’s going to ensure that your patient gets the right diagnosis — even though the algorithm is sometimes even more accurate than you, especially if you’re tired — it’s an absolutely phenomenal double check. The end goal for the human in that case, in medicine, is to go and help that patient with the most accurate information that the human doctor has.

What we’re seeing is that AI is helpful to humanity. It’s truly an augmenting type of technology and not a replacement.”

We talk a lot about the impact of bias in AI and how to limit it.

“Bias is certainly a problem and it’s something that we, as a community of technologists, policy makers and social scientists — all different backgrounds — we need to attack this together.

A lot of it just comes down to being intentional. There are audits of algorithms. There are ethics checklists, actually. There are best practices that have been set up, and I can actually introduce [the Girl Geek X community] to Intel’s AI for Good leader, Anna Bethke, who is an expert in this domain and a wealth of knowledge.

We need to address bias with intentional and very purposeful conversations, because again, the algorithms are based on assumptions that humans code. So the only way that we can eradicate and deal with the bias issue is by talking to one another, with the right experts in the room asking, ‘Have we checked that bias off the list?’

Don’t just assume that coders know how to create a fair algorithm. I don’t think we can assume that. This is a very intentional action that we need to build into our AI development life cycles. The bias check.”

For more from Julie Shin Choi, watch the full video on YouTube, read the transcript of Julie’s talk during Girl Geek X Elevate, or follow her on Twitter.

To be notified of future Girl Geek X events and receive our weekly newsletter, subscribe to the Girl Geek X mailing list.

Interested in partnering with Girl Geek X to feature your female leaders or promote your current job openings to our community of 20,000+ mid-to-senior level women in technology? Email sponsors@girlgeek.io.

