Worksheet Explainability
This project worksheet is licensed under a Creative Commons Attribution Non-Commercial Share-Alike License
http://creativecommons.org/licenses/by-nc-sa/4.0/
7. Click the pencil buttons to name the classes after your four objects
Notice how I avoided having my fingers in the photos. If you can’t do that,
try to keep the position and shape of your hand consistent in all of your
photos, so only the object is different.
Important: You should vary the location of your object in all your photos.
Move the object around in the webcam view while you hold down the
“Hold to Record” button.
10. Use the “Preview” to check that your model is working well
You can try adding more examples and training again if it isn’t.
14. Go to https://machinelearningforkids.co.uk/pretrained
22. Hold one of your objects up to the webcam and press “w” on your
keyboard to take a photo
Important: The background must be identical to the background you used
for your training photos.
Try to position your object centrally.
Try to hold your object close enough to the webcam so that it fills a lot of
the picture.
If you’re not happy with the photo, press “w” again to take a new one.
It doesn’t tell you which parts of the image were significant for the prediction,
or which parts of the image the model thought were irrelevant.
Next, we’ll see how you can learn a little about what your model thinks is most
important in your test image.
27. Drag the sprite to a position on the stage that is as far away from
your object as possible
29. Compare the prediction your machine learning model makes with
the prediction from Step 24
The confidence level should be very similar to the confidence from Step 24.
Covering up this area has not made much difference to the prediction.
The area you covered up was not very significant to the prediction.
The contents of that square did not have much to do with why the model
thought this image looked like your object.
30. Move the square sprite to a position that covers something you
think is going to be very significant
You’ve seen that although a machine learning model makes a prediction for an
image as a whole, different areas of the image have different levels of
significance to the prediction.
You’ve seen that a simple way to measure this is to cover parts of the image and
see what difference that makes to the model’s confidence.
Finally, you will try a more organised way to use this technique – moving the
cover square to every possible location and seeing the difference it makes in
each position.
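This systematic approach is known in machine learning research as occlusion sensitivity. As a rough sketch of the idea (not the worksheet’s actual Scratch code), assuming a hypothetical `predict` function that takes an image array and returns the model’s confidence:

```python
import numpy as np

def occlusion_map(image, predict, patch=8):
    """Slide a grey cover square over every position in the image and
    record how much covering each region lowers the model's confidence.
    `predict` is a hypothetical function: image array -> confidence score."""
    h, w = image.shape[:2]
    baseline = predict(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            covered = image.copy()
            covered[i:i + patch, j:j + patch] = 0.5  # grey cover square
            # a big drop in confidence means this region mattered
            heatmap[i // patch, j // patch] = baseline - predict(covered)
    return heatmap
```

Each cell of the returned heatmap corresponds to one position of the cover square: large values mark regions the model relied on, values near zero mark regions it mostly ignored.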
35. Click on the “coordinator” sprite and find the Green Flag code
You will need to experiment to find the right value for this variable.
Change the number in the code
Then re-run the test by:
* clicking the Green Flag
* pressing the z key on the keyboard
44. If you get the right value, the visualisation will look like this:
The overall visualisation gives you an approximate idea of the parts of the
image that the machine learning model found to be most relevant. The
more a section is covered, the less relevant it was to the prediction.
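That “more covered means less relevant” mapping can be sketched directly: given a heatmap of confidence drops like the one built above, convert each drop into a cover opacity. This is an illustrative sketch, not the worksheet’s actual drawing code:

```python
import numpy as np

def cover_opacity(heatmap):
    """Map confidence drops to cover opacity: squares whose covering
    barely changed the prediction (small drop) get covered the most,
    so only the relevant parts of the image stay visible."""
    drops = np.clip(heatmap, 0, None)                      # ignore negative drops
    peak = drops.max()
    scaled = drops / peak if peak > 0 else drops           # 0..1, relevance
    return 1.0 - scaled                                    # least relevant -> fully covered
```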
The following links can help you learn more about the sort of work
that is happening in Explainable AI.
Royal Society
Go to ibm.biz/explainableai-royalsociety
AI Explainability 360
Go to ibm.biz/explainableai-ibmresearch
IBM
Go to ibm.biz/explainableai-ibm