This lecture is from the course Generative AI for Business.
This lecture covers the challenging area of AI explainability and how an experimental approach to AI reliability assessment could be a viable solution.
We explain that while AI explainability is rightly desired when deploying AI systems, it is unfortunately not achievable today.
Modern AI systems based on neural networks are essentially ‘black boxes’: scientists do not yet fully understand how they work.
This lecture explores an alternative method for assessing AI system reliability: the experimental approach.
Drawing parallels with how humans establish trust in one another, this method relies on observing AI behaviour across multiple tests, rather than seeking explanations for each action.
Despite the complexities of both AI and human decision-making processes, this approach suggests that confidence can be built through consistent performance evaluation.
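To make the experimental approach concrete, here is a minimal, illustrative sketch of what such repeated testing might look like in practice. It is not from the lecture: the `reliability_estimate` function, the `toy_model`, and the test cases are hypothetical placeholders. The idea is simply to run a system over a test suite several times and report how consistently it passes, with a confidence interval rather than an explanation of each individual answer.

```python
"""Sketch of an 'experimental' reliability check: repeat tests, measure consistency."""
import math
from typing import Callable, List, Tuple


def reliability_estimate(
    model: Callable[[str], str],
    test_cases: List[Tuple[str, str]],
    runs_per_case: int = 5,
    z: float = 1.96,  # z-value for an ~95% confidence interval
) -> Tuple[float, Tuple[float, float]]:
    """Run every test case several times and return the observed pass rate
    together with a Wilson score confidence interval."""
    passes, trials = 0, 0
    for prompt, expected in test_cases:
        for _ in range(runs_per_case):
            trials += 1
            if model(prompt).strip() == expected:
                passes += 1

    p = passes / trials
    # Wilson score interval: better behaved than the naive interval for small samples.
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half_width = (z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))) / denom
    return p, (centre - half_width, centre + half_width)


if __name__ == "__main__":
    # Toy stand-in for an AI system; in practice this would call a deployed model.
    def toy_model(prompt: str) -> str:
        return "4" if prompt == "2+2" else "unknown"

    cases = [("2+2", "4"), ("capital of France?", "Paris")]
    rate, (low, high) = reliability_estimate(toy_model, cases)
    print(f"Observed pass rate: {rate:.2f} (95% CI approx. {low:.2f}-{high:.2f})")
```

The design choice mirrors the lecture's analogy with human trust: rather than demanding an explanation for every output, confidence comes from accumulated, repeatable evidence of performance.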
The lecture provides insights into how businesses might develop…