Emily Sullivan: How values encroach on understanding from opaque machine learning models
10 June 2021
Online 17h00
To what extent is model opacity a problem for explanation and understanding from machine learning models? I argue that there are several ways in which non-epistemic values influence the extent to which model opacity undermines explanation and understanding. I consider three stages of the scientific process surrounding ML models where the influence of non-epistemic values emerges: 1) establishing the empirical link between the model and the phenomenon, 2) explanation, and 3) attributions of understanding.

The speaker

Emily Sullivan is Assistant Professor of Philosophy & Ethics in the Department of Industrial Engineering & Innovation Sciences at Eindhoven University of Technology (personal webpage: http://www.eesullivan.com).