Examples of artificial intelligence (AI) are all around us. We probably use AI
more than we think and in many ways take it for granted. Our smartphone
assistant is an excellent example of AI, even though we may not think of it as
such. In many cases, we take for granted our interaction with Siri or Google
Assistant because they consistently work. Likewise, face recognition has
become a standard unlock feature on new smartphones.
AI, of which machine learning is a subset, works by training a computer-based
neural network model to recognize a given pattern or sound. Once the neural
network has completed training, it can then infer a result. For example, if we
train a neural network with hundreds of images of dogs and cats, it should
then be able to correctly identify a new picture as either a dog or a cat.
The network model determines an answer and indicates the class probability of
its prediction.
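To make that concrete, here is a minimal sketch in Python, with invented
numbers and no connection to any particular NXP model, of how a trained
two-class network turns its raw output scores into class probabilities:

```python
import numpy as np

def softmax(logits):
    """Convert raw network scores into probabilities that sum to 1."""
    exps = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exps / exps.sum()

# Hypothetical raw outputs (logits) from a cat-vs-dog model for one image.
logits = np.array([2.1, 0.3])  # [cat, dog]
probs = softmax(logits)
print(f"cat: {probs[0]:.2f}, dog: {probs[1]:.2f}")  # cat: 0.86, dog: 0.14
```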
As machine learning-based applications become more deeply ingrained in our
daily lives, system developers have become more aware that the current way
neural networks operate is not necessarily the right approach. Using the above
example, if we showed the neural network a picture of a horse, the network,
trained only to infer either a cat or a dog, would have to pick one of the two
classes it was trained on. Of more concern, it would likely give that
incorrect prediction with a high class probability, something you might not
even notice. The model has failed silently.
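Reusing the hypothetical sketch above, a horse photo still produces scores
over only the two classes the model knows, so the probabilities are forced to
sum to one across cat and dog, and one class can look very confident:

```python
import numpy as np

def softmax(logits):
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

# Hypothetical raw outputs for a horse photo: the model only knows
# {cat, dog}, so the probabilities still sum to 1 over those classes.
horse_logits = np.array([3.0, -0.5])
probs = softmax(horse_logits)
print(f"cat: {probs[0]:.2f}, dog: {probs[1]:.2f}")  # cat: 0.97, dog: 0.03
# A confident, silently wrong answer: nothing in the output says "horse"
# or "I don't know".
```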
As humans, our approach to a similar scenario would be very different. We use
a more reasoned decision-making approach. We would expect the neural network
to answer that it didn't know, or that it had not seen an image of a horse
before. The example above, while very simple, serves to illustrate a flaw in
how a neural network has to operate in the human world of surprises and
uncertainties. The reality is that many industrial and automotive systems
continue development even though there are concerns about how they might
operate in certain situations.
At NXP, we’ve been investing in building our AI capabilities for many
years and are concerned with such shortcomings. Your smartphone assistant
incorrectly inferring a spoken word is far removed from the consequences that
might occur in an industrial or healthcare environment. We are delivering
advanced machine learning solutions for our customers and have been working on
an approach termed ‘explainable AI’ (xAI). xAI expands on the
inference and probability capabilities of machine learning by adding a more
reasoned, human-like decision-making approach and the additional dimension of
certainty. xAI combines all the benefits of AI with an inference mechanism
that is closer to how a human would respond in a situation.
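NXP has not published the internals of its xAI approach, but one widely used
technique for attaching an uncertainty estimate to a prediction is Monte Carlo
dropout: run the network several times with dropout still active and measure
how much the outputs disagree. The sketch below uses a noisy stand-in for a
real model:

```python
import numpy as np

def softmax(logits):
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def predict_with_uncertainty(stochastic_forward, image, n_samples=30):
    """Run a dropout-enabled model repeatedly and measure disagreement."""
    samples = np.stack([stochastic_forward(image) for _ in range(n_samples)])
    mean_probs = samples.mean(axis=0)               # averaged probabilities
    uncertainty = float(samples.std(axis=0).max())  # spread = model doubt
    return mean_probs, uncertainty

# Stand-in for a real network with dropout active at inference time; the
# injected noise mimics the run-to-run variation dropout would produce.
rng = np.random.default_rng(0)
def fake_stochastic_forward(image):
    return softmax(np.array([2.1, 0.3]) + rng.normal(0.0, 0.5, size=2))

probs, doubt = predict_with_uncertainty(fake_stochastic_forward, image=None)
print(probs, round(doubt, 3))  # a high 'doubt' is the model saying "not sure"
```

Under a scheme like this, the horse picture from earlier would ideally come
back with high uncertainty rather than a confident wrong label.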
Consider the following example. Imagine you were a passenger in an autonomous
vehicle. If the vehicle were proceeding slowly and cautiously, you would
naturally wonder why it was being so careful. If the driver were human, you
could ask why they were going so slowly, and they would explain that
visibility was poor in the heavy rain and that they were uncertain what
hazards lay ahead. The explanation is based on uncertainty. xAI decision
making behaves in a similar way by communicating the aspects of inference that
the model is uncertain about.
At NXP we are already investigating ways we might incorporate xAI capabilities
in the machine learning solutions we are developing for automotive, industrial
and healthcare systems.
Amid the unprecedented global COVID-19 pandemic, our xAI research teams
believe that NXP xAI might help enable the rapid detection of the
disease in patients. It is still early days, but we are encouraged by the
proof points we have seen and have established interactions with some leading
hospitals to see how our xAI technology might help address the healthcare
challenges our planet currently faces.
The use of CT radiology and X-ray imaging provides a fast alternative
detection capability alongside the prescribed PCR testing and diagnosis
protocols. CT and X-ray images could be processed by a suitably trained xAI
model to differentiate between clean and infected cases. xAI allows for
real-time inference confidence and explainable insights to aid clinical staff
in determining the next stage of treatment.
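As a hedged illustration of that idea, with an invented threshold and labels
that are not clinical guidance, a system could act only on confident
predictions and route uncertain scans to a clinician:

```python
UNCERTAINTY_THRESHOLD = 0.15  # hypothetical; a real cutoff would be set clinically

def triage(probs, uncertainty):
    """probs: [p_infected, p_clear]; uncertainty: model doubt, as sketched earlier."""
    if uncertainty > UNCERTAINTY_THRESHOLD:
        return "refer to clinician: the model is uncertain about this scan"
    label = "infected" if probs[0] > probs[1] else "clear"
    return f"model suggests '{label}' (confidence {max(probs):.0%})"

print(triage([0.92, 0.08], uncertainty=0.04))  # confident: suggest a label
print(triage([0.55, 0.45], uncertainty=0.30))  # uncertain: human review
```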
Our xAI research team believes it is well advanced with a mature model and is
engaging in discussions with medical and AI experts in Europe and across
the Americas. However, to further our research, we must have access to larger
anonymized datasets and would welcome hearing from researchers and potential
partners engaged with COVID-19 who would like to collaborate with us to
advance this detection technique.
xAI gets us closer to how humans react in situations where decision-making
involves uncertainty. It adds a dimension of certainty and confidence to class
probability-based decisions. NXP sees opportunities for xAI across
safety-critical systems for automotive, industrial and healthcare
applications.
Stay safe.