Facets of Transparency in Predictive Analytics

THU 12:00 - 12:30

Lecture hall 6

Modern predictive analytics and machine learning techniques contribute to the massive automation of data-driven decision making and decision support. It is becoming better understood and accepted, in particular due to the new General Data Protection Regulation (GDPR), that the predictive models employed may need to be audited. Regardless of whether we deal with so-called black-box models (e.g. deep learning) or more interpretable models (e.g. decision trees), answering even basic questions like “why is this model giving this answer?” and “how do particular features affect the model output?” is nontrivial. In reality, auditors need tools not just to explain the decision logic of an algorithm, but also to uncover and characterize undesired or unlawful biases in predictive model performance; for example, by law, hiring decisions cannot be influenced by race or gender. In this talk I will give a brief overview of the different facets of comprehensibility of predictive analytics and reflect on the current state of the art and the further research needed to gain a deeper understanding of what it means for (interpretable) predictive analytics to be truly transparent and accountable.
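
The two audit questions above can be made concrete with standard tooling. Below is a minimal, illustrative sketch (not taken from the talk) that uses scikit-learn's permutation importance to probe how individual features affect a model's output, followed by a simple per-group accuracy comparison on a hypothetical sensitive attribute as a first, coarse bias check. The synthetic data, the sensitive attribute, and the feature names are assumptions made purely for illustration.

    # Illustrative sketch only: probes feature effects and group-wise performance
    # of a trained classifier on synthetic data. The "sensitive" attribute is a
    # hypothetical protected group added for demonstration.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic "hiring-like" data: 5 ordinary features plus one hypothetical
    # sensitive attribute (0/1) that by law must not drive the decision.
    X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
    sensitive = rng.integers(0, 2, size=len(y))
    X = np.column_stack([X, sensitive])

    X_train, X_test, y_train, y_test, s_train, s_test = train_test_split(
        X, y, sensitive, test_size=0.3, random_state=0
    )

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # 1) "How do particular features affect the model output?"
    #    Permutation importance: drop in test score when a feature is shuffled.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature_{i}: permutation importance = {imp:.3f}")

    # 2) Coarse bias check: does predictive performance differ across the
    #    sensitive attribute? Large gaps would warrant a deeper audit.
    pred = model.predict(X_test)
    for group in (0, 1):
        mask = s_test == group
        print(f"group {group}: accuracy = {accuracy_score(y_test[mask], pred[mask]):.3f}")

Checks like these are only a starting point; a full audit would also examine individual explanations and legally relevant fairness criteria, which is part of what the talk addresses.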

  • Theme
    GDPR / AVG

    There was much ado about the introduction of the GDPR (AVG) at the end of May 2018. Some hyped the moment up to Millennium-bug proportions. That turned out to be somewhat exaggerated. Nevertheless, the privacy legislation does give organisations real headaches. What, for example, must a web shop or a government agency comply with? Data is worth its weight in gold; anyone who really wants to know all the ins and outs of the legislation should not miss the Big Data Expo and can visit the presentations that address the GDPR/AVG theme.