Toward a sociology of machine learning explainability: Human-machine interaction in deep neural network-based automated trading

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Toward a sociology of machine learning explainability: Human-machine interaction in deep neural network-based automated trading. / Borch, Christian; Hee Min, Bo.

In: Big Data & Society, Vol. 9, No. 2, 20539517221111361, 07.2022.


Harvard

Borch, C & Hee Min, B 2022, 'Toward a sociology of machine learning explainability: Human-machine interaction in deep neural network-based automated trading', Big Data & Society, vol. 9, no. 2, 20539517221111361. https://doi.org/10.1177/20539517221111361

APA

Borch, C., & Hee Min, B. (2022). Toward a sociology of machine learning explainability: Human-machine interaction in deep neural network-based automated trading. Big Data & Society, 9(2), [20539517221111361]. https://doi.org/10.1177/20539517221111361

Vancouver

Borch C, Hee Min B. Toward a sociology of machine learning explainability: Human-machine interaction in deep neural network-based automated trading. Big Data & Society. 2022 Jul;9(2):20539517221111361. https://doi.org/10.1177/20539517221111361

Author

Borch, Christian; Hee Min, Bo. / Toward a sociology of machine learning explainability: Human-machine interaction in deep neural network-based automated trading. In: Big Data & Society. 2022; Vol. 9, No. 2.

Bibtex

@article{7a4914d94acc4eb7ae736a60cbc29360,
title = "Toward a sociology of machine learning explainability: Human-machine interaction in deep neural network-based automated trading",
abstract = "Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm's quest for explaining its deep neural network system's actionable predictions. We demonstrate that this explainability effort involves a particular form of human-machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human-machine companionship.",
keywords = "Algorithmic ethnography, automated trading, deep neural networks, explainability, machine learning, human-machine companionship, ROBOT",
author = "Christian Borch and {Hee Min}, Bo",
year = "2022",
month = jul,
doi = "10.1177/20539517221111361",
language = "English",
volume = "9",
journal = "Big Data & Society",
issn = "2053-9517",
publisher = "SAGE Publications",
number = "2",
}

RIS

TY - JOUR

T1 - Toward a sociology of machine learning explainability

T2 - Human-machine interaction in deep neural network-based automated trading

AU - Borch, Christian

AU - Hee Min, Bo

PY - 2022/7

Y1 - 2022/7

N2 - Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm's quest for explaining its deep neural network system's actionable predictions. We demonstrate that this explainability effort involves a particular form of human-machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human-machine companionship.

AB - Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm's quest for explaining its deep neural network system's actionable predictions. We demonstrate that this explainability effort involves a particular form of human-machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human-machine companionship.

KW - Algorithmic ethnography

KW - automated trading

KW - deep neural networks

KW - explainability

KW - machine learning

KW - human-machine companionship

KW - ROBOT

U2 - 10.1177/20539517221111361

DO - 10.1177/20539517221111361

M3 - Journal article

VL - 9

JO - Big Data & Society

JF - Big Data & Society

SN - 2053-9517

IS - 2

M1 - 20539517221111361

ER -
