Jia Li: Machine learning and artificial intelligence could transform health care and education


[APPLAUSE]
>>Hi everyone, thanks for being here. It’s my honor to be here. My name is Jia Li. I’m the head of R&D at Google Cloud AI. I’ve also been a researcher throughout my career. These roles have allowed me to participate in every step that AI needs to go through to get from vision to reality. I’ve been fortunate enough to participate in algorithm research, building datasets, and shipping products. AI is an incredibly interesting and exciting technology.
And today, I’m here to share some of the ways it could change life for millions of people. To start with, I’d like to talk about two very traditional industries: education and healthcare.
As we know, healthcare is a very complex field with a lot of challenges. AI has the potential to change outcomes for individual patients and for entire hospitals. Healthcare typically starts with the patient’s lifestyle. AI could help provide accurate guidance on lifestyle, diet, and so on, based on their past disease history, genetics, prescriptions, and more. It can also provide automated monitoring and early assessment of critical conditions by picking up subtle precursor signals that correlate with emerging critical conditions, signals a human would not be able to detect. When a hospital visit is necessary, AI can play additional roles, helping to provide deep insights before, during, and after the session between patient and doctor. It could also help ease the workflow for doctors by automatically transcribing the session and filling out paperwork. AI could even provide assisted diagnosis so that our doctors are able to deliver more sophisticated diagnoses. Once a diagnosis recommendation is made, artificial intelligence can also help provide further treatment strategies, including changes in lifestyle, prescriptions, surgery, or all of the above. When a long-term stay is necessary, whether for a surgical patient or a senior in senior care, intelligent systems can provide further help to reduce the burden on nurses and doctors making rounds. They can help detect abnormal events such as falls and agitated movement. In addition, machine learning can also help the entire hospital run much more efficiently. Patient triage can take multiple patients’ medical records into account and help ensure care is carefully designed and distributed. In some cases, medical conversation agents can help patients understand their symptoms without leaving their house in the first place.
Here, I’d like to talk more about some of the new research I have participated in, specifically thoracic disease identification and localization. As some of you probably know, diagnosis is a very delicate skill, and even a very tiny mistake can have severe consequences. In fact, 10% of patient deaths are related to diagnostic errors. And according to Professor Kurt, 4% of all radiological interpretations contain clinically significant errors.
This number is especially significant if we consider
that over 400 million such medical interpretations
are carried out each year in the United States alone.
So let’s look at the chest X-ray disease identification problem further. Chest X-rays remain a significant radiology challenge. Radiologists have to invest significant effort to understand and go through every single radiology image in order to make a diagnosis recommendation. If we could give them an AI-assisted tool to get more insights from the radiology image, for example by predicting the abnormal area of a potential disease in the image, that would help ease their work and make the entire process much more efficient. However, we’re facing a chicken-and-egg problem here. We know our radiologists are already spending a lot of effort to get through all the medical images and give their interpretations. We want to build an AI-assisted tool to make that entire process much more efficient. But in order to do so, we need additional data, and we would have to ask our radiologists to label a lot of data to train and build our models. This goes back to the exact problem we want to help the radiologists ease. So to solve this problem, we turned to the open-source NIH Chest X-Ray dataset.
This is a fairly large dataset with over 100,000 radiology images. Each image is associated with up to 14 disease labels, mined automatically from the reports, which are relatively easy to obtain. And as you can see here, fewer than 1,000 of the images have bounding boxes associated with them, because each bounding box requires a board-certified radiologist to label it, and that takes a lot of effort. So this kind of dataset is typically not well suited for traditional supervised learning, which requires a lot of detailed labeled data. To address this problem, we came up with a novel approach that combines the holistic, image-level information about the disease with the small amount of detailed local annotation. We are able to predict both the disease type, based on the global information, and the local area, highlighting where the abnormal regions for each disease type could be. This combination of overall disease prediction and suspicious-region highlighting works much better than state-of-the-art machine learning approaches. We’re just at the beginning of this direction, and we’re not alone.
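
To make this a little more concrete, here is a minimal sketch of one way to combine image-level disease labels with a small number of bounding-box annotations: a convolutional backbone produces a grid of per-cell disease probabilities, the image-level prediction is pooled from that grid, and the few annotated boxes supervise the cells they cover. The backbone, the pooling choice, and the loss weighting below are illustrative assumptions, not the exact model used in the research.

```python
# Minimal sketch: weak supervision from image-level labels plus a few boxes.
# Backbone, pooling choice, and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

NUM_DISEASES = 14  # label set mined from the NIH Chest X-Ray reports


class WeaklySupervisedLocalizer(nn.Module):
    def __init__(self, num_classes: int = NUM_DISEASES):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
        # Keep the convolutional trunk; drop the average pool and FC head.
        self.trunk = nn.Sequential(*list(backbone.children())[:-2])
        # A 1x1 conv turns the feature map into per-cell disease scores.
        self.head = nn.Conv2d(2048, num_classes, kernel_size=1)

    def forward(self, x):
        cell_logits = self.head(self.trunk(x))          # (B, C, H, W)
        cell_probs = torch.sigmoid(cell_logits)
        # Noisy-OR pooling: an image is positive for a disease if any cell is.
        image_probs = 1 - torch.prod(1 - cell_probs.flatten(2), dim=2)
        return image_probs, cell_probs


def combined_loss(image_probs, cell_probs, labels, box_masks, has_box):
    """labels: (B, C) float image-level labels mined from reports.
    box_masks: (B, C, H, W) cell-level targets built from bounding boxes.
    has_box: (B, C) floats flagging which (image, disease) pairs have a box."""
    eps = 1e-6
    image_loss = F.binary_cross_entropy(image_probs.clamp(eps, 1 - eps), labels)
    per_cell = F.binary_cross_entropy(
        cell_probs.clamp(eps, 1 - eps), box_masks, reduction="none")
    # Only supervise cells for the ~1,000 images that actually have boxes.
    weight = has_box[:, :, None, None]
    box_loss = (per_cell * weight).sum() / weight.sum().clamp(min=1.0)
    return image_loss + 0.5 * box_loss  # relative weight is an assumption
```

At inference time, the per-cell probability map can be thresholded and upsampled to highlight suspicious regions alongside the image-level disease prediction.
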
There are many partners and customers who are leveraging Google Cloud. For example, Zebra Medical is using Google Cloud to analyze new scans and deliver insights to hospitals, informing clinical decisions at scale. But much more remains to be explored and innovated in this space. Hopefully, in the future, our specialists can spend less time on repetitive and error-prone tasks by working together with AI-assisted tools. Another area where AI
could help is education. As we know, education is another very traditional field that is facing a lot of challenges. It needs to balance the needs of students and teachers with the complexity of schools and resources. AI could unlock a lot of unique potential solutions here. To start with, AI could help ensure our students have a very safe environment to study in, and protect them from dangerous situations such as falls, fighting, or other dangerous activities, so that our educators can focus on teaching and artificial intelligence systems can help take care of the rest. Even more potential exists in the education experience itself. Artificial intelligence algorithms could help customize courses that are personalized to each student, based on their past experience, strengths, weaknesses, personal preferences, and so on. They can also turn abstract examples into very vivid real-world applications and examples. And they could help our teachers scale up their effort through automated homework and exam assessment. This kind of experience can repeat over the course of a semester, a year, and even the entire education experience, so that we can provide a highly personalized experience to each individual student. And best of all, such technologies can be applied both to STEM and to the arts. For example, we can easily extend some of the technology to a student’s dance or violin performance. So, I have talked about how AI
could potentially help change healthcare and education in the future. What about the countless other businesses beyond healthcare and education? The real power of AI will be felt once it can be leveraged by every possible business. But that’s a very challenging problem. As we know, machine learning development is a very complex and resource-consuming process. It requires investment and expertise in every single step of machine learning development: collecting the data, designing the model, tuning model parameters, evaluating, deploying, and finally updating and iterating on the entire process.
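
To give a sense of what those steps look like even in a toy setting, here is a generic sketch of a conventional, hand-built workflow; the dataset, model family, and parameter grid are arbitrary placeholders, and each numbered stage is something AutoML aims to take off the developer’s plate.

```python
# Generic sketch of a hand-built ML workflow; every concrete choice here
# (dataset, model family, parameter grid) is a placeholder.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
import joblib

# 1. Collect the data (a bundled toy dataset stands in for real data).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# 2. Design the model.
pipeline = Pipeline([("scale", StandardScaler()), ("clf", SVC())])

# 3. Tune model parameters.
search = GridSearchCV(
    pipeline,
    param_grid={"clf__C": [0.1, 1, 10], "clf__gamma": ["scale", 0.01]},
    cv=3)
search.fit(X_train, y_train)

# 4. Evaluate on held-out data.
print("held-out accuracy:", search.score(X_test, y_test))

# 5. "Deploy" by saving the fitted model, then update and iterate over time.
joblib.dump(search.best_estimator_, "model.joblib")
```
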
It will be challenging for most businesses because, of the over 21 million developers, only 1 million of them have a data science background, and even fewer, on the order of thousands, have a deep learning background. How do we solve this problem? We have made some attempts toward a solution by introducing AutoML technology. All you need to bring to AutoML is the data you want to label and predict on, and AutoML will handle everything from there. It gives any business or organization the opportunity to create customized models with very limited machine learning expertise. Earlier this year, we introduced the AutoML Vision product. Basically, the idea is that customers can upload their labeled images, and AutoML technology will generate a customized visual recognition model based on the data they want to predict on. Here is an example: let’s see how we could do weather prediction through weather image classification. There are over ten different kinds of clouds, each of which indicates a different weather pattern. If we use a generically trained visual model, here is what we are going to get: it can easily predict that there is sky and cloud, but we won’t be able to know what kind of weather or what kind of cloud there is. Now, if we upload all these domain-specific training images to AutoML Vision, here is what we get: AutoML Vision can learn which specific cloud or weather pattern it is and give the prediction, for example, cirrus.
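
For reference, this is roughly what querying such a custom model looked like with the Cloud AutoML Python client of that era; the project, region, and model IDs below are placeholders, and exact client signatures differ between library versions.

```python
# Rough sketch of querying a trained AutoML Vision model with the
# google-cloud-automl client (v1beta1 API of that era). Project, region,
# and model IDs are placeholders; signatures vary across client versions.
from google.cloud import automl_v1beta1 as automl

project_id = "my-project"       # placeholder
compute_region = "us-central1"
model_id = "ICN1234567890"      # placeholder ID of the trained custom model

automl_client = automl.AutoMlClient()
prediction_client = automl.PredictionServiceClient()
model_full_id = automl_client.model_path(project_id, compute_region, model_id)

with open("cloud_photo.jpg", "rb") as image_file:
    content = image_file.read()

payload = {"image": {"image_bytes": content}}
response = prediction_client.predict(model_full_id, payload)

for result in response.payload:
    # e.g. "cirrus" plus a confidence score, for the weather example above
    print(result.display_name, result.classification.score)
```
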
AutoML is a product based on multiple advanced technologies, including learning to learn, neural architecture search, transfer learning, hyperparameter tuning, and more.
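
As an illustration of just one of those ingredients, transfer learning, here is a generic, textbook-style sketch of reusing a model pretrained on a large dataset and training only a new classification head on a small, domain-specific image set such as the cloud photos; this is an illustrative assumption, not Google’s AutoML implementation.

```python
# Generic transfer-learning sketch (not the AutoML implementation): reuse a
# pretrained backbone and train only a new head on a small labeled dataset.
import torch
import torch.nn as nn
import torchvision

NUM_CLOUD_TYPES = 10  # e.g. cirrus, cumulus, ... (placeholder label set)

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False           # freeze the pretrained features
model.fc = nn.Linear(model.fc.in_features, NUM_CLOUD_TYPES)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step over a batch from the small custom dataset."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```
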
Now let’s take a look at how our customers are using AutoML. The Zoological Society of London is a very good example. It is a non-profit organization that uses camera traps to track wildlife populations around the world. But that generates millions of unlabeled images that would otherwise have to be labeled manually, tagging each image with one of the wild animal types. So the Zoological Society of London
has been closely collaborating with our team to shape the AutoML product, and now they are able to automatically label the different wild animal types using AutoML. We’re very excited about the potential AI could bring to the way we protect wild animals. Another example is Disney. Disney is an early adopter of machine learning and cloud platforms. That has changed the way they interact with their customers, and they have extended their visual recognition capability to recognize product images using AutoML. Now they are able to automatically detect characters and brand elements, such as logos and color schemes. By leveraging this ability, they are now able to provide more relevant search results and product recommendations. Another example is
tactile graphics. For those who are not familiar with tactile graphics, they are a special type of image designed to help blind readers understand content. It is very challenging to design such graphics, because they need to be drawn without perspective and need to be very simple and clear, so that blind readers can understand the content without being distracted by unnecessary details. Because of the challenge of designing them, different countries all over the world have been collecting these tactile graphics into repositories for reuse. However, these repositories are not connected. So a group of researchers is trying to use AutoML to differentiate which images are good tactile graphics and which are not, and then they can search online and find good tactile graphics candidates. Now, content publishers for the blind are able to find good tactile graphics candidates for their readers. So AutoML is part of
the effort toward democratizing AI. The real meaning of democratization is not just about how powerful a technology is; it is also about how accessible it is. AutoML Vision is just one of the capabilities we’ve democratized, and we’ve seen powerful examples of its impact from Disney, from the Zoological Society of London, and from the tactile graphics search engine. With Disney, we’ve been able to enhance the retail experience of one of the world’s largest retailers. We’ve been able to empower wildlife conservation at a scale that has never been possible in the past. And we’ve also helped improve interaction with the blind. AutoML Vision is just the beginning. We are also going to extend this to more capabilities, such as speech, natural language processing, translation, and more, and bring them to other fields. AutoML Vision, as a single capability, can already do so much. We’re very excited to see what
the next wave can unlock. Technologies like AutoML point to an exciting future in which AI is available to everyone, in a format that is easy to use regardless of what kind of problem you want to solve. But solving a concrete problem isn’t enough; it’s equally important to understand what kinds of problems we need to solve and to understand what people need, in business, in academia, in healthcare, in entertainment, and in the countless other fields that are driving our society today. AI is an incredibly exciting direction, and the most exciting thing about it is its potential to make life better for all of us. I hope that every one of us can contribute to this effort to make AI even more impactful. Thank you.
>>[APPLAUSE]
>>Great, thank you very much, Jia. So we have plenty of time for questions, so please, you know the drill: raise your hand if you have a question, and the mic will come through. Here, yep.
>>Thank you. Hi, I had a question regarding, as the models are abstracted and even combined, and this becomes more accessible, what tools do you have for introspection on why and how a prediction was made? So for example, say a retailer wants to identify a potential shoplifter.
>>Very good question. Basically, in order to understand what kind of technology we can offer to different users, we are also trying to understand what kind of problems they want to solve, right? So in the case of a retailer that wants to identify shoplifters, they would help us define what a shoplifter is, and we can help them come up with the technology to support that.
>>Hi, my name’s Samantha and I work at USAA as a software engineer. My question to you is, it’s really obvious to everyone in this room that the need for machine learning and artificial intelligence in our community is prominent. But what are you doing, or how do you build a product that recognizes the complexity around these types of techniques? You’re going for education and scalability with these products so that everyone can utilize these techniques, but what are you doing to mitigate the risk of the misuse of these techniques, and the misuse of these products? Because we’ve heard it today from Latonya and, I think, from Daniella that there is a huge risk in using these techniques, and the need for basic statistics and so on is obviously prominent. So, what are you doing to mitigate that when you’re building products for widespread use?
>>That’s a very good question. I think as technologists and researchers, this is a very important question for us to explore, to make sure AI technology is used only for good purposes. In fact, at Google we have an internal team who are especially focused on this kind of problem: how to understand bias, and how to make sure there is no misuse of the technology. I have to say all of us are at an early stage. This is a serious topic that we should all contribute to and explore down the road.
>>Hi, great, by the way, thank you. So AI in education and the arts for our children actually scares me, especially when you’re talking about courses, tests, and even learning music tailored and customized to each child’s preferences and maybe even their biology. As humans we get to challenge each other to think outside the box, to dream, to learn what we thought we couldn’t learn, to become wiser. What is Google’s vision and promise around AI in education?
>>[COUGH] Thank you. Wow, [LAUGH] that’s
a very big question. Here I’m laying out some of the potential AI research through which we could make education software more powerful in assisting our teachers. The goal, the hope, is that with more intelligent systems and intelligent algorithms, our teachers can focus on creative and less repetitive work, and that we can maximize every student’s interest and capability during the education experience.
>>[INAUDIBLE]
>>Just a second, we have a mic there. Yeah, it’s just a-
>>Hi, I had a question about some data you showed earlier, on the slide where we were looking at chest X-ray images and you needed labeled data. I have a naive question, but I always wondered if you could just look across time to when a patient eventually did show symptoms of the disease you were trying to diagnose, and then go back and say, yes, this patient did have this disease, and use that as the label. Do you know if that’s possible or if that’s too ambitious?
>>That’s definitely possible, and it’s a very good question. We have been working closely with radiologists to understand what their real need is. In the field, people have been focusing on taking a radiology image and trying to come up with a disease label. But after we talked to many specialists, they told us this is not what they want, because they have so much other information that they can get, either from the patient, the disease history, or from other signals and reports. It’s more helpful to give them an indicator or a proposal of an abnormal area. That’s eventually how we came up with the idea of giving assisted recommendations, trying to highlight abnormal areas in our research. Hopefully, by leveraging the useful information coming from the AI-assisted tool, combined with the many other information sources they have, the specialists can come to the best solution or decision in their radiology analysis.
>>So I’m currently a student in business analytics, and since you’re an expert in artificial intelligence and machine learning, I was wondering what your experience has been with artificial intelligence potentially creating a feedback loop. In the example of a potential shoplifter, if we’re identifying what a shoplifter is, that can create a feedback loop about shoplifters, and in a future state that could change, so are these models dynamic? What are some of the challenges that you’ve experienced with feedback loops in the different types of artificial intelligence studies that you’ve done?
>>Exactly, I think it’s totally
possible to create that kind of feedback loop, and a feedback loop can make any AI system more powerful and effective. A simple example, as some of you probably know, is a recommendation system: based on which links you have clicked among those proposed by the previous AI system, we can learn a better AI algorithm. That’s one simple example in a more mature direction, but there are many other fields where we are still experimenting and trying to learn how much we can improve.
>>Hi,
>>One last question, yes.
>>Thank you, the lucky one.
>>[LAUGH]
>>So, when you talked about AI-assisted diagnostics in the healthcare industry, there are other players in this industry, particularly IBM Watson, which has had a lot of coverage in that space. As a leader in the AI space, in the machine learning and NLP space, can you tell us about the different approach that Google has taken versus the other vendors? What’s the niche area that you play in compared to the rest of the players in this industry?
>>A very good question. I have less access to other
[LAUGH] companies’ solutions, but at Google we really focus on collaborating closely with our customers, for example hospitals and specialists, trying to understand what the real need is and trying to bridge the gap between the technology and the real solution. You mentioned that there are many players in this field. I want to say that healthcare is a field where we want as many players as possible. We want everybody to contribute to this space, to help all of our lives be better.
>>That is a very nice and political answer.
>>[LAUGH]
>>Thanks very much again, Jia, for talking to us today, congrats.
>>[APPLAUSE]
>>Yes
