2 Matching Annotations
- Jul 2021
ai.googleblog.com
An “attention map” of each prediction shows the important data points considered by the models as they make that prediction.
This gets us closer to explainable AI, in that the model shows the clinician which variables informed the prediction (a rough sketch of the idea follows below).
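As a rough, hypothetical illustration of the excerpt above (not the model described in the Google AI post), the sketch below shows how a single attention head can yield a per-prediction "attention map": each input data point is scored against a prediction-time query, and the resulting softmax weights indicate which points the model relied on most. All names and shapes here are assumptions chosen only to make the mechanism concrete.

```python
# Minimal sketch of scaled dot-product attention used as an explanation:
# the weights over input data points form the "attention map" for one prediction.
import numpy as np

def attention_map(query, keys, values):
    """Attention of one query over a set of input data points.

    query:  (d,)    representation of the prediction being made
    keys:   (n, d)  one vector per input data point (e.g. per clinical event)
    values: (n, d)  the data-point representations that get aggregated
    Returns the aggregated context vector and the per-point attention weights.
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)        # similarity of each point to the query
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    context = weights @ values                # weighted summary used downstream
    return context, weights

# Hypothetical toy data: 6 data points with 8-dimensional representations.
rng = np.random.default_rng(0)
n_points, d = 6, 8
keys = values = rng.normal(size=(n_points, d))
query = rng.normal(size=d)

_, weights = attention_map(query, keys, values)
# Rank data points by weight: the top entries are the ones the model
# "looked at" most for this particular prediction.
for i in np.argsort(weights)[::-1]:
    print(f"data point {i}: weight {weights[i]:.3f}")
```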
- Feb 2021
www.sciencedaily.com
Koo's discovery makes it possible to peek inside the black box and identify some key features that lead to the computer's decision-making process.
Moving towards "explainable AI".