As technologists reckon with the social consequences of their work, how do they place moral values alongside numerical ones? How are values like trust, fairness, and privacy reformulated using the concepts and models of fields like computer science, statistics, economics, and cryptography? What determines which values are valuable?
Machine learning research has emerged as my primary site for pursuing these questions. In particular, I am following the current turns to “Human-Centered AI” and “AI Alignment” among US university and corporate machine learning researchers. What is the human in Human-Centered AI? And what is a value towards which AI can be aligned?
I previously studied trust among blockchain/Web3 proponents who claim to circumvent it using what they call trustless technologies. During ethnographic fieldwork in Berlin, I encountered multiple forms of trust relations at work across various media, from the chain itself to face-to-face interactions. Some of these trust relations seemingly contradict the “trustless” ideology of Web3, yet they enable the very existence of a Web3 community.
I presented initial findings at the Web3 Workshop, where I argued for the importance of physical place in the production and experience of virtual technologies like NFTs. You can read that talk here. I have also presented this work elsewhere, and an article is in preparation.