Toward a sociology of machine learning explainability: Human-machine interaction in deep neural network-based automated trading

Publication: Contribution to journal › Journal article › peer-reviewed

Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Given the importance of addressing this opacity, this paper calls for research that studies, empirically and theoretically, how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm's quest to explain its deep neural network system's actionable predictions. We demonstrate that this explainability effort involves a particular form of human-machine interaction containing both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human-machine companionship.

Original language: English
Article number: 20539517221111361
Journal: Big Data & Society
Volume: 9
Issue number: 2
Number of pages: 13
ISSN: 2053-9517
DOI
Status: Published - Jul 2022
Externally published: Yes

ID: 319801028