An e-mail arrives from the Smart Water Infrastructures Lab at Aalborg University: ‘I think we found something you can add to the questionnaire’. Puzzled, but intrigued, I (Jonas) arrange a meeting. A couple of weeks later, my colleagues—an engineering PhD student and a professor and ‘maker of algorithms’ (as he likes to call himself) with doctoral degrees in mathematics and engineering—introduce me to the basics of ‘game theory’.
The engineers and I are colleagues in a cross-disciplinary and engineering-led project, Smart Water Infrastructures (SWIft), which works to optimise water flows and management by developing algorithms and automation technologies without compromising data security and privacy. From its outset, ethnographic observations about the socio-technical aspects of such systems were seen as vital to the project. The hope was that these insights would help foster a sense of ownership, expertise, and trust in automation among water utility personnel in Denmark and integrate actual utility practices that would enrich the technical research.
With game theory as a shared frame of reference, my colleagues were suggesting that I collect empirical data about decision-making processes at water utility companies, which they could then model into their predictive algorithms. They were trying to reach across the methodological and epistemological divide between our disciplines, and I saw game theory as an invitation to create a shared space of practice in which ethnography could contribute to their development of algorithms. But how might we transform the ethnographic richness of my data-material into the kind of contribution that the engineers were imagining? And how could game theory productively engage with and contribute to their epistemic practices, without compromising the ethnographic quality of my work?
This article presents reflections on cross-disciplinary collaboration between us—Jonas, Adrienne and Astrid (three anthropologists)—and computational engineers during two consecutive research projects. Both projects aimed to optimise resources in electronic and digital systems by automating them, while simultaneously developing methods that secure dataflows and privacy. Our colleagues are mathematicians specialising in cryptography and engineers working in the field of systems, control, and automation. For practical reasons, we refer to them as computational engineers throughout this article.
Some Background
These research collaborations began in 2017 with the formulation of the first research project (SECURE)2 and run until 2024, when the second project (SWIft) ends. Both projects are engineering-centred and led. The SECURE project (2018–2021) worked to further develop optimised and secure computation through a cryptographic method called Secure Multiparty Computation (MPC). The second project is the ongoing SWIft project (2021–2024) from which the opening vignette originates. SWIft focuses on the development of smart water infrastructures for more efficient water management at water utilities while also employing secure computational methods. ‘Smart’ is the idiom used by our engineering colleagues to refer to technologies that are responsive and somewhat automated, based on the computation of large datasets.
In this article, we show how participating in cross-disciplinary research projects with computational engineering is not enough to make fruitful collaborations happen. It takes the crafting of extra-ordinary spaces of shared practice and new conceptualisations to actually alter disciplinary boundaries. We argue that an altering of disciplinary boundaries in collaborations between anthropologists and engineers can happen when there is (1) a shared project, (2) a practice of engaging with one another's theoretical universes, and (3) physical spaces for shared intellectual practice. The research is still ongoing, and so is our thinking about these shared modes of collaboration. For this reason, what follows will focus on how the first two elements of this triplet have led us to experiment with designing the third. For now, let us simply clarify that when referring to physical spaces, we mean regular meetings, seminars, conference participation, workshops, and laboratory experiments. By design, they allow for ongoing conversations and co-creation across disciplines, which can lead to a curiosity about and engagement with each other's theoretical logics. This triplet for collaboration, we suggest, is not only a model of our teams’ cross-disciplinary collaborations but also holds the potential to become a model for practice (Geertz 1973: 93) in teams working across anthropology and computational engineering.
In his influential study of religion as a model of and a model for reality, Clifford Geertz defines religion as a system of symbols that provides its practitioners not only with a symbolic representation—or a model of—the general order of reality but also with a blueprint—or a model for—practice (Geertz 1973: 90–93, 127). To explain, Geertz refers to the example of a dam: A theory of hydraulics, he suggests, helps us understand how dams work. It acts as a model of reality. But hydraulic theory also assists the construction of a dam. In this case, theory serves as a model for reality (ibid.). Geertz emphasises the analytical richness of moving back and forth between those two perspectives—the symbolic and practical—in the interpretation of ethnographic phenomena (ibid.: 121–123). Similarly, we suggest a blueprint for how to collectively ‘tack back and forth’ (Helmreich 2009; Mannov et al. 2020) between a different set of models of and for practice, namely, what Mannov et al. refer to as the ideal, the real and the actual (2020). As we shall see, this framework has helped us articulate and collectively navigate the complexity that ethnographic insights from actual empirical settings bring into a cryptographic world that is otherwise populated by theoretical ideal models, against which imagined real case-scenarios are measured.
By drawing on our collaborations with computational engineers in the SECURE and SWIft projects, we do not only wish to respond to this Special Issue's call for ‘productive interferences’ in cross-disciplinary endeavours. We also wish to make an intervention into how anthropologists and computational engineers might think and work together by means of applying the relation between ideal, real, and actual as a blueprint for the crafting of physical spaces for shared intellectual practice.
With the growth of ‘ubiquitous computing’ (Dourish and Bell 2011; Mackenzie 2017) anthropologists and other social science and humanities scholars have studied the social life of big data and computing in a variety of contexts. Some have addressed the risks that AI, big data, and automation pose to the sustainability of social lives (boyd and Crawford 2012; Dourish 2016; Fisch 2013; Irani et al. 2010; Lowrie 2018; Lustig et al. 2016; Mackenzie 2015; Richards and Hartzog 2019; Seaver 2018; Taylor 2017; Zuboff 2015). Others have attended to the practices and logics of data scientists in different contexts (Lowrie 2018; Breslin 2022). Knox and Walford highlight ‘the potential of ethnographies of digital technologies to disrupt anthropological ways of thinking and doing’ (Knox and Walford 2016: 2). They see the digital as an opportunity to alter disciplinary practices from within anthropology. Yet, most anthropological research on ‘the data moment’ (Douglas-Jones et al. 2021; Maguire et al. 2020) has focused more on how to practise anthropology as a critical discipline in a digital era and less on the potentials and challenges of bringing anthropological insights (big, quick, algorithmic, or thick) to work in collaboration with data scientists and the technologies they develop. Recognising that ‘working with’ shifts the ethics of ethnography, we aim to contribute to a critical anthropology in action with computational sciences. We situate our arguments alongside critical data studies and ‘machine anthropology’—an umbrella term covering scholarly practices that venture into direct collaborations with data scientists (Madsen et al. 2018; Blok and Pedersen 2014; Seaver 2014) or that develop digital ethnography approaches with big data (Munk et al. 2022).
How might anthropology and related disciplines contribute positively to and work with technologies that are being deployed as tools that—in addition to optimising resources and profits—also offset and manage the negative effects of, say, climate change and other major challenges of the Anthropocene?
We begin with some background from the SECURE project that focused on data security and optimisation and involved some of the same computational engineers that we encounter in the opening vignette of this article. Here, our productive interference began as an empirical insight: how computational engineers understand their theories and models through notions of ideal and real, and how we used ethnography not only to gain insight into their epistemic framings but also to reach across the scientific divide between us, by introducing the actual. Despite the fact that our focus has changed from data security to optimisation in water management, we begin by suggesting that these insights—ideal-real-actual—can act as a blueprint for interaction with our colleagues in the SWIft project. Thereafter, we show how our colleagues reached out to us with their own epistemic framings—namely, game theory—as a way to embed ethnographic insights in our shared project. By letting game theory inform our ethnographic attention, we show how ethnographic insights can be made legible for our colleagues but also where limitations occur. We conclude by showing how this approach is not only a model of how we collaborate across scientific silos but may also function as a model for further collaboration for like-minded scientists from anthropology and engineering.
Ethnographic Explorations of Ideal-Real-Actual
Our first collaboration with the computational engineers began with the SECURE project. As we have written elsewhere (Mannov et al. 2020), collaboration across disciplines requires trust and relation-building over time. This was where the idea of the triplet—a shared project, an engagement in each other's theoretical universes, and spaces of shared intellectual practice—emerged as a collaborative and theoretical device. The idea for the shared project across engineering, cryptography, and anthropology originated with Professor Rafal Wisniewski. Andersen was approached by him because, as he said, he did not know how to make people act properly in smart and automated systems and he needed a discipline familiar with human behaviour. This resulted in a successful research proposal with disciplinary work packages and a shared project. But that was not enough. The SECURE team met regularly for research meetings, but we remained firmly in our disciplinary silos. We also held a series of workshops during the project's three years in which more time together was allocated and the meeting structure was more flexible. Within those shared physical spaces, we were able to ask dumb questions (Verran 2013: 156) of each other, debate our scientific epistemologies, and become familiar with each other's ways of theorising (see Andersen et al. 2021). It was in these workshops that our understanding of the computational engineers’ ideal and real could be explored empirically. This led us to offer up a third analytical framing that our colleagues seemed to be missing in their work: the actual.
The ideal in cryptographic models refers to secure computations done by a central ‘trusted third party’. Here, all parties in a network send their sensitive data to a third party who does the computation on behalf of the collective, sends only the result back, and does not disclose the sensitive data to any party. This way, the collective gains the benefit of a shared analysis without ever disclosing data other than to the trusted third party. This is referred to as ideal because this model assumes that the third party is not corrupted and is fully trusted. All other computational methods are measured against this ideal (Mannov et al. 2020: 38). This is where the cryptographic notion of real comes in. Here, secure computation methods are used, such that all parties have the benefit of a shared analysis of sensitive data, without disclosing this data to one another, and significantly, without using a trusted third party. The data is computed within the collective, also called decentralised computation. The robustness of such methods, whether they are MPC, fully homomorphic encryption, or zero-knowledge proofs, is measured against this ideal (see e.g., Lopez-Alt et al. 2011). Such methods were referred to as real not because they took their point of departure in actually existing empirical settings, but because they modelled imagined real settings, populated by cryptography's usual (fictitious) characters, such as Bob, Alice, Mallory and Eve (Mannov et al. 2020: 39).
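The principle of decentralised computation can be illustrated with a minimal additive secret-sharing sketch. This is not the SECURE project's actual protocol, only a toy illustration of the idea, with invented figures: each party splits its sensitive number into random-looking shares, shares are summed locally, and only the collective result is ever reconstructed.

```python
import random

PRIME = 2**31 - 1  # all arithmetic is done modulo a public prime

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Three parties want the sum of their sensitive figures without revealing
# them to one another or to a trusted third party.
secrets = [120, 340, 95]
all_shares = [share(s, 3) for s in secrets]

# Each party i collects one share from every input and adds them locally;
# an individual share looks like a uniform random number and leaks nothing.
partial_sums = [sum(all_shares[j][i] for j in range(3)) % PRIME
                for i in range(3)]

# Combining the partial sums reveals only the collective result.
assert reconstruct(partial_sums) == sum(secrets)
```

Any single share, taken alone, is indistinguishable from random noise; it is this property of such real protocols that cryptographers measure against the ideal of a fully trusted third party.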
The computational engineers struggled to further develop these existing methods because when they tested actual data in their new decentralised protocols, they did not compare well to the ideal. The problem was that they were not making a distinction between the real methods and the challenges of working with actual data. These two worlds were very different. It took much questioning by the anthropologists to realise that their colleagues’ real was, in fact, still theory. Bob and his friends were just points on a graph, not actual actors (outside of theory) in the empirical world that wished to compute their data. As demonstrated in the SECURE project's Science TV in the Cryptic Commons exhibition (see source, Figure 1), the actual became a helpful term and was adopted into the mathematicians’ and engineers’ language:
Jaron (mathematician): But the problem is that even though we can show that the protocol, in this situation [b], is as secure as in this [a], then it might not actually be as secure as when we have this ‘actual world’ here [c]. So, that's the reason why we need to maybe come up with a new way of defining (...) what is security, because we might not be able to achieve this situation [a], when we have a situation like this [c].
Qiong Xiu (engineer): …from the engineering side, or more applied side, what I found is their [ideal world (a)] is actually unachievable. It's (...) impossible to achieve (...) what we in engineering can do and what the mathematicians assume in the ‘ideal world’.
Jaron: I actually found this problem very interesting. When I was talking to Qiong Xiu (...), it seemed like there was a gap in the literature. (...) So, I think that we have to, kind of, redefine what ‘ideal’ is. If this is their actual world—that we do not have this full connectivity—then I think the theory should be made such that it fits the ‘actual world’.
Our colleagues had not used the term actual before our collaboration, and it does not exist in the cryptography literature. That the graph is not fully connected (c in Figure 1) on the ‘more applied side’ as Qiong Xiu explained, was a practical problem of the theory not corresponding to the empirical settings. By digging into our colleagues’ theoretical universe, we were able to offer terminology that helped them express their problem and address it. The addition of the actual to our colleagues’ ideal and real became a model of the insights that the SECURE project generated together. But because the next project, SWIft, faced similar challenges of how to collaborate across disciplines, we found it useful to transfer insights from ideal-real-actual to the work with computational and automation technologies in the new project. With this move, ideal-real-actual came to function as a model for this collaboration, as well.
The Engineers Want to Play
Let us return to the game theory meeting. At the time, Jonas did not exactly know what game theory was beyond what he had seen in the movie A Beautiful Mind (Howard 2001) about the Nobel Prize-winning economist John Nash, nor did he know how it could be applied to water management. At the meeting—a shared space in the project—he understood that our colleagues were developing and modelling algorithms that would allow them to calculate and predict optimal water management practices. In the social sciences and economics, game theory rests on the assumption that ‘instrumentally rational agents’ act in an optimising and strategic way to satisfy given and well-defined objectives (Heap and Varoufakis 2004: 4–5; Tesfatsion 2017: 384). It provides a way of describing the rationales that drive decision-making practices among ‘rational’ actors in, for example, water management at specific water utility companies and enables predictions about human decisions for the achievement of a shared agenda (Marden and Shamma 2015: 862–866). By contrast, game theory is also perceived by some engineers as a ‘suggestion’ of how actors in the water sector ought to manage water flows, considering the sometimes conflicting agendas and strategies of decision-makers. This is referred to as a prescriptive model (ibid.). In other words, game theory seeks to either describe the most probable decision taken by rational actors given the knowledge available to them or to prescribe the smartest strategy available to each ‘player’ to achieve a shared desired outcome. This outcome is referred to as equilibrium (Heap and Varoufakis 2004: 41–45; Nash 1951). Our engineering colleagues sought an equilibrium between the ideal practice—what is theoretically feasible in an optimal best-case scenario—and what they addressed as real practices, that is, models of computable and generalisable insights based on how they imagined water management negotiations take place in real life.
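The notion of equilibrium can be made concrete with a toy two-player game. The strategies and payoffs below are invented for illustration and do not come from the SWIft project: two hypothetical utilities choose when to run their pumps, and a pure-strategy Nash equilibrium is any pair of choices from which neither player would deviate alone.

```python
# Payoff matrices for two hypothetical utilities choosing when to pump.
# Rows: utility A's strategy; columns: utility B's. The payoffs are
# illustrative energy savings, not data from any actual utility.
strategies = ["pump_night", "pump_day"]
payoff_A = [[3, 1],   # A saves most when both pump at night (off-peak tariff)
            [0, 2]]
payoff_B = [[3, 0],
            [1, 2]]

def pure_nash_equilibria(pA, pB):
    """Return all strategy pairs where neither player gains by deviating alone."""
    equilibria = []
    for i in range(2):
        for j in range(2):
            a_best = all(pA[i][j] >= pA[k][j] for k in range(2))
            b_best = all(pB[i][j] >= pB[i][k] for k in range(2))
            if a_best and b_best:
                equilibria.append((strategies[i], strategies[j]))
    return equilibria

print(pure_nash_equilibria(payoff_A, payoff_B))
# → [('pump_night', 'pump_night'), ('pump_day', 'pump_day')]
```

The brute-force search finds two equilibria, which makes this a coordination game: the ‘smartest strategy’ for each player depends on what the other is expected to do, which is precisely why prescriptive models need assumptions about the other actors.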
Our colleagues’ explanations and Jonas's subsequent reading of game theory pointed to several ways in which we were working together. Firstly, our colleagues invited us to engage in their theoretical universe, an invitation that required us to think about our scientific practice anew. Secondly, our insights from the SECURE project helped us navigate the computational engineers’ logics in game theory. One layer was described as computationally ‘optimal’, or ideal. But this did not consider the social context. The next layer was how our colleagues envisioned that descriptive data (Jonas's ‘questionnaire’) about utility workers’ decision-making could be included in their model. This reminds us of the cryptographers’ real. The idea was that data could be generalised and embedded in a model, rules could be established, and equilibrium could be reached. But as soon as situated and thick ethnographic data from actual practice is inserted into a model, its context is lost. In order for us to communicate this concern to our computational colleagues, it was important that we agreed on these different layers when engaging with game theory.
It was clear that our colleagues were already thinking with ideal-real-actual. For example, they were developing a model in the laboratory for ideal control in water distribution networks (Misra et al. 2023), and they were also planning on embedding this model with generalised data from actual decision-making processes and practices at water utilities. But, as Jonas explained to them, the kind of predictive decision-making and equilibrium that is inherently embedded in their understanding of game theory is quite distant from how situated practices and agency (read: the actual) are understood in anthropology. Many questions remained before ideal-real-actual could function as a model for our collaboration. Could we translate actual ethnographic material into computable, quantitative real models? And what would happen to the inherent richness, complexity, and contradictions of the ethnographic actual, when it became a part of the game theoretical real? From his interactions with the computational engineers on the SWIft team and the fieldwork he had been doing at a water utility in western Denmark, Jonas knew that he could not simply ‘collect’ generalised decision-making practices among utility workers, to be implemented into a game theory model. There were many complexities and situational nuances in the decisions he observed, so, if he was to let game theory inform his ethnographic attention, he needed to find a way to understand and work with these complexities.
Jonas decided to start from the insights that our colleagues wanted to compute in their models; namely what they expressed as ‘human specificities on decisions’ or, as they elaborated, ‘what people in specific situations and particular contexts assess as high-priority and low-priority factors or interests, in a situation where there are conflicting interests’. During the meeting, our engineering colleagues had raised questions like: ‘Which reflections have moved the decisions that agents in the water sector take? How have project managers gained the knowledge that they possess? How do they use such knowledge? What factors influence their assessments?’. These questions are well-suited to ethnographic methods, and they accompanied Jonas during the next months of fieldwork.
Water Utility: Situated Negotiations of Ideal-Real-Actual
As an indirect consequence of the Danish Water Sector Act (Vandsektorloven 2009) passed in 2009, a number of minor Danish water utility companies had been compelled to either close or merge with neighbouring utilities. This was the case for the water utility of Thyborøn-Harboøre in western Denmark, which was merged with the utility of Lemvig when Jonas started his six months of fieldwork there. Jonas learned that the management had recently decided to transition to a new SCADA3 system, a kind of graphical user interface (see Figure 2). The SCADA provides an overview of the total system of pipes and pumps in the utility infrastructure and allows the employees to supervise how water moves through it. In addition, the SCADA interacts with the computers that control and automate specific processes in water management. According to the employees, the transition to the new SCADA system was mainly a managerial decision to simplify operations across the newly merged utilities. Brad, the technical coordinator of water meters at the utility, explained: ‘From an operations perspective, they are both quite intuitive and very similar to each other’. The 20-year-old SCADA system used in Lemvig still worked. For the majority of the employees, it had been their primary digital tool since they had started working there. So, why get rid of it?
The Ideal Is Not Ideal
The new system had one key functionality that the old one did not: its controlling unit is more easily accessed and the processes and automations that it runs can be adjusted according to new needs or circumstances at any time. According to Frances, the Chief Operations Engineer at the utility, the old system ‘was not programmed correctly’. In addition, he explained, it ran through an:
optimised management system on our pumps that we cannot control. It's all computed into this automated ‘optimization’ that we cannot access. (...) And while I really think that we would be able to make those pumps work more efficiently if we could programme them ourselves, we are bound by the fact that they are designed to be automatic and autonomous, so we cannot adjust the software! (...) I am sure that what the company has designed is ideal in terms of the assumptions it is based on. But it's just that I don't quite agree with some of those assumptions about how the pumps should run. Their energy consumption is just too high.
When Frances spoke about ‘programming’ the new pumps, it seemed like this might be a place where the decision-making agenda could be of game theoretical interest.
The transition to the new SCADA offered a rich opportunity for the utility employees’ otherwise unarticulated considerations to surface. This offered Jonas an opportunity to ethnographically explore how decisions about water management practices were debated, negotiated, challenged, and assessed, and how doubts, situated practices, and experience informed the employees’ decisions about which smartification and optimisation practices to adopt. In addition, the discussions and negotiations taking place around the new SCADA seemed to reflect the layers in ideal-real-actual. In the actual everyday practice of water management at the Lemvig utility, the old SCADA lacked the flexibility that would allow for the contextual decision-making that was required for the system to run optimally. The system was made in relation to an ideal scenario, which did not fit Lemvig's specific situations, nor did it reflect its current priorities in terms of water management. Frances's criticism spoke directly to our insights about the cryptographic ideal, which was based on the assumption that total computational security could be achieved. Similarly, the old SCADA system was based on theoretical assumptions about efficiency, optimisation, and automation that did not take actual contexts into account.
Game theory had redirected Jonas's attention to negotiation and decision-making and the distinctions in ideal-real-actual helped him identify the complexity of a new computational and digital system. First, the conceptual distinction describes the kind of world phenomena with which we were involved. In this sense, ethnographically rich data—which in the eyes of engineers is often fluffy and too messy to work with—when seen as the actual world, becomes legible to our computational engineers because it is integrated in the logics they work with. Secondly, ideal-real-actual explains the kinds of problems that often emerge when generic technologies designed in a lab—as ideal or real—are implemented in actual complex contexts.
Hands-On Actual
Some employees have worked at the Lemvig utility for decades. They know the flaws and strengths of the piping and pumping network like the backs of their own hands. Brad is one of them. He used to operate the utility's water meters in the field. He has since received further training and is now responsible for the oversight of the whole system's pressure and flow of water through the SCADA system. In close collaboration with the utility engineers, he follows the current state of the physical network and its water flows and assesses whether or not the system works optimally (see Figure 3). Based on his experience with the SCADA and the daily and yearly rhythms of the local communities’ water consumption, Brad monitors water-consumption patterns and pressure and flow-graphs from the pumping stations that the utility manages. He does this in order to identify what he refers to as ‘irregularities’: potential leaks and damages in the network, which he then investigates in the field (see Figure 4). This requires technical skills, a deep, situated knowledge about the local neighbourhoods—how they consume water and for which purposes—and an eye for how global and geopolitical circumstances manifest locally. For instance, it is key for Brad's work to know which areas of the local community are affected by population fluctuations due to tourism, which industrial areas use water as part of their production and when, and which scarcely populated areas are made up of farmland that requires sudden and large amounts of water for irrigation due to a changing climate. This is important contextual information that helps him understand what should be interpreted as an ‘irregularity’ and what should not. As Brad explained: ‘Normally, the fishermen consume a lot of water by the harbour when they come back and start to clean up their ships and catch. But in the past two months, their consumption has been close to zero (...). Since the war in Ukraine started, it [the diesel] is too expensive for them to go fishing’.
Brad's situated knowledge remained external to the old, fully automated SCADA system. He could make suggestions about how to react to problems based on his hands-on knowledge, but it was not integrated in the SCADA because the system could not incorporate that kind of situational information. Regularities in consumption-patterns are easily modelled into automated systems. Irregularities, however—such as extreme weather events, geopolitics, market-changes, and infrastructural breakdowns or damages—are hard to model and predict. Whenever Brad identifies such potential irregularities, he consults with his colleagues to assess his judgement before deciding how to react. These colleagues are engineers who can make theoretical calculations that help him make the right decision, but he also consults fieldworkers and operators. ‘They [field workers and technical operators] usually know what is currently happening in the area. Some of them even remember if some water-taps have been installed incorrectly and which service-connections are in bad condition’, Brad explained. They are the ones with extensive knowledge about the neighbourhood and the people, pumps, valves, and pipes that actually populate it.
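Why regularities are easy to model and irregularities are not can be sketched with a deliberately naive detector. All figures here are invented for illustration: a seasonal baseline is compared against observed consumption, and any large deviation is flagged.

```python
# A deliberately naive irregularity detector: flag any day whose consumption
# deviates from the seasonal baseline by more than a fixed tolerance.
# The numbers are invented, loosely evoking a harbour district.

baseline = [100, 100, 140, 140, 100]   # expected m3/day from historical rhythms
observed = [102, 98, 60, 62, 101]      # the fleet stayed in port on days 3-4

def flag_irregularities(baseline, observed, tolerance=0.2):
    """Return indices where observed use strays more than tolerance from baseline."""
    return [i for i, (b, o) in enumerate(zip(baseline, observed))
            if abs(o - b) / b > tolerance]

print(flag_irregularities(baseline, observed))  # → [2, 3]
```

The detector can flag the drop, but it cannot tell a leak from fishermen kept in port by diesel prices; that interpretive step is exactly the situated knowledge that Brad and his colleagues supply.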
Brad and his colleagues’ situated knowledge of the conditions and the seasonal rhythms of the community lead to a particular kind of decision-making and negotiation. They are based on the iterative relationship between day-to-day circumstances and the models that are embedded in the SCADA system. This is part of what the SWIft engineers were looking for in the game theoretical ‘decision-making practices’.
Getting Real
Brad's work is an example of how decision-making processes at the utility function through a feedback-loop between different layers of knowledge. Those ways of knowing derive from real descriptions—that is, models based on imagined real-world scenarios and needs that are built into the SCADA system—but they are always interpreted against the backdrop of inherently situated knowledges about the local surroundings: They are evaluated through actual observations from the sensed physical world.
This feedback-loop functions the other way around, too. As the utility transitioned to the new SCADA system, Frances saw this as an opportunity to re-evaluate the (infra)structure of the system, asking: ‘Is there anything that we can do differently in order to avoid having to change the physical infrastructure, without compromising the efficiency of our water supply?’ He wanted to incorporate Brad's hands-on reading of the SCADA—his situated practice that requires complex and local knowledge—into the new system. During this evaluation, key suggestions for how to change the infrastructure came from actual observations made by experienced fieldworkers and network-operators who knew the physical system and its context inside-out. This informed the solutions developed by the engineers at the utility. In other words, actual observations were generalised and inserted into the new SCADA, making them a real model for optimisation at the utility. The actual had, in other words, become real, since certain observations were considered to be likely to occur, and therefore generalisable. They could be integrated into the system in a way that was truer to the actual lived circumstances in the municipality. This testifies to how the boundaries and relationship between the actual and the real are continuously blurred, negotiated, and reworked in practice. Nevertheless, the utility employees knew that there would be situations that could not be predicted in a model. The actual remained relevant, and it was important that the new system was flexible enough to consider situated elements outside of it, as well.
Jonas needed to bring these different layers of knowledge back to the SWIft engineers. It was not just a question of collecting ‘decision-making practices’ for the purpose of developing—in game theory jargon—descriptive models that could be reworked as prescriptive models, or actual observations that could challenge the imagined real of our engineering colleagues’ models. He needed to show the multiple layers—ideal-real-actual—in these practices, as well as the recursive relationship between them. Otherwise, the SWIft engineers risked reproducing a new system that, in Frances's words, ‘was not programmed correctly’.
Reworking Boundaries
The project is still ongoing, but we would be remiss not to close with a description of how some of the insights presented in this article have been put to use in smart water management systems currently being developed. Early in 2023, the employees of five different Danish water utilities, representatives from a Danish water management consultancy, the SWIft research team, and half a dozen Techno-Anthropology4 students and colleagues gathered for a one-day workshop on ‘Human and Artificial Intelligence in Future Water Systems’. The purpose of the workshop was to craft a space where utility operators, consultants, researchers, and algorithms could interact with and inform one another through actual cases, but in a future-oriented manner. It marked the conclusion of a six-month period of research collaboration between anthropologists (the authors), our computational engineering colleagues at SWIft, and a handful of engineering consultants. The insights presented in this article served as the workshop's analytical framework. Informed by the relational recursivity of ideal-real-actual, the workshop participants worked together with the idea that automation and control algorithms could be improved by incorporating actual world scenarios that included human skills and sociality. The interdisciplinary and trans-sectoral workshop was yet another physical space that made valuable insights and moments of serendipity possible.
In close collaboration with our engineering and consultancy colleagues, we designed the workshop around three interconnected stages. The first stage consisted of a laboratory exercise in which our computational engineering colleagues assisted the participants in engaging with water management software developed as part of the SWIft project. The second stage was a mapping exercise in which water utility operators portrayed how digital and physical infrastructures affected their daily work in the field. Finally, the third stage engaged all the participants in a shared discussion about how future water management practices could be made socially intelligent. The three stages allowed different layers of knowledge to emerge and interact. The real of a laboratory experiment was tested and evaluated through the actual working habits of the various utility operators. The actual of the current digital and physical infrastructures was held up against the real of the imagined futures of the different water utilities. Finally, the ideal was reworked in terms of the shared imaginaries, needs, and situated knowledges present at the workshop.
Through our ongoing collaboration over the course of six years (2018–2024) and two consecutive research projects, disciplinary boundaries were altered through a shared project, through engagement with each other's theoretical and epistemological universes, and through the creation of physical spaces for shared intellectual practices. Game theory—although not in a linear and straightforward manner—helped alter our anthropological practices by creating a conceptual space for a shared intellectual endeavour. Jonas used the distinction between the ideal, the real, and the actual to attune his ethnographic attention to processes and practices of decision-making and negotiation in relation to game-theoretic logics. This made it easier to connect the observations made in the field—the actual world—with the ideal and real work carried out by our computational engineering colleagues in the smart water lab and around the SCADA at the Lemvig utility.
Drawing on Clifford Geertz's famous distinction between religion as a model of and a model for practice, we have suggested that by adding the actual to the distinction between the ideal and the real world—understood as orders or levels of reality in which computational engineers and data scientists do their work—anthropologists gain an epistemic space for contributing to work carried out in data science. We have proposed the actual—the space of ethnography, where lifeworlds unfold and are experienced in unexpected ways—as a concrete anthropological tool, intervention, and contribution that attunes computational scientists to the lived worlds they increasingly enter and change. Further, we have shown how ideal-real-actual became a way for anthropological insights to become legible to computational engineers and to gain currency in the development of optimisation algorithms and computational technologies. As the workshop exemplified, the ideal, the real, and the actual are unstable orders, as they intertwine, change character, and inform one another in different situations and contexts. Attention to how the actual informs the ideal and the real in the development of computational technologies holds the potential not only of optimising the work of computational engineers and data scientists, but also of making it more socially accurate and just. In this way, ideal-real-actual functions as a generalisable model for collaboration across anthropology and the computational sciences.
Acknowledgements
The SECURE project was supported by Aalborg University Strategic Funds and the SWIft project by the Poul Due Jensen Foundation (Grundfos Foundation). Inspiration and patience were offered by our colleagues. Any mistakes are our own.
Notes
1. Corresponding authors: Jessen (jonasfj@ikl.aau.dk) and Mannov (mannov@cas.au.dk).
2. SECURE is an acronym for Secure Estimation and Control using Recursion and Encryption (www.secure.aau.dk).
3. SCADA stands for Supervisory Control and Data Acquisition.
4. Techno-anthropology is a degree programme offered at Aalborg University. The curriculum brings social and technical insights together for the purpose of developing sustainable technology and policy.
References
Andersen, A. O., M. H. Bruun, and A. Mannov (2022), ‘Antropologiske eksperimenter med fremtiden’ [Anthropological experiments with the future], Jordens Folk 56, no. 2: 158–172.
Blok, A., and M. A. Pedersen (2014), ‘Complementary Social Science? Quali-Quantitative Experiments in a Big Data World’, Big Data & Society 1, no. 2: 1–6, https://doi.org/10.1177/2053951714543908.
boyd, d. and K. Crawford (2012), ‘Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon’, Information, Communication & Society 15, no. 5: 662–679, https://doi.org/10.1080/1369118X.2012.678878.
Breslin, S. (2022), ‘Studying Gender While “Studying Up”: On Ethnography and Epistemological Hegemony’, Anthropology in Action 29, no. 2: 1–10, https://doi.org/10.3167/aia.2022.290201.
Douglas-Jones, R., A. Walford, and N. Seaver (2021), ‘Introduction: Towards an Anthropology of Data’, Journal of the Royal Anthropological Institute 27, no. S1: 9–25. https://doi.org/10.1111/1467-9655.13477.
Dourish, P. (2016), ‘Algorithms and Their Others: Algorithmic Culture in Context’, Big Data & Society 3, no. 2: 205395171666512. https://doi.org/10.1177/2053951716665128.
Dourish, P., and G. Bell (2011), Divining a Digital Future: Mess and Mythology in Ubiquitous Computing (Cambridge, MA: MIT Press).
Fisch, M. (2013), ‘Tokyo's Commuter Train Suicides and the Society of Emergence’, Cultural Anthropology 28, no. 2: 320–343.
Geertz, C. (1973), The Interpretation of Cultures: Selected Essays (New York: Basic Books).
Heap, S. H., and Y. Varoufakis (2004), Game Theory: A Critical Text, 2nd rev. ed. (New York: Routledge).
Helmreich, S. (2009), Alien Ocean: Anthropological Voyages in Microbial Seas (Berkeley: University of California Press).
Howard, R., dir. (2001), A Beautiful Mind. https://www.imdb.com/title/tt0268978/.
Irani, L., P. Dourish, and K. Philip (2010), ‘Postcolonial Computing: A Tactical Survey’, Science, Technology, & Human Values 27, no. 1: 3–39. https://doi.org/10.1177/0162243910389594.
Knox, H. and A. Walford (2016), ‘Is There an Ontology to the Digital?’, Theorizing the Contemporary, Fieldsights (blog), March 24, 2016, https://culanth.org/fieldsights/is-there-an-ontology-to-the-digital.
Lopez-Alt, A., E. Tromer, and V. Vaikuntanathan (2011), ‘Cloud-Assisted Multiparty Computation from Fully Homomorphic Encryption’, IACR Cryptology ePrint Archive.
Lowrie, I. (2018), ‘Algorithms and Automation: An Introduction’, Cultural Anthropology 33, no. 3: 349–359, https://doi.org/10.14506/ca33.3.01.
Lustig, C., K. Pine, B. Nardi, L. Irani, M. K. Lee, D. Nafus, and C. Sandvig (2016), ‘Algorithmic Authority: The Ethics, Politics, and Economics of Algorithms That Interpret, Decide, and Manage’, in Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems—CHI EA ‘16, 1057–1062 (San Jose, CA: ACM Press), https://doi.org/10.1145/2851581.2886426.
Mackenzie, A. (2015), ‘The Production of Prediction: What Does Machine Learning Want?’, European Journal of Cultural Studies 18, no. 4–5: 429–445, https://doi.org/10.1177/1367549415577384.
Mackenzie, A. (2017), Machine Learners: Archaeology of a Data Practice (Cambridge, MA: MIT Press).
Madsen, M. M., A. Blok, and M. A. Pedersen (2018), ‘Transversal Collaboration: An Ethnography in/of Computational Social Science’, in Ethnography for a Data-Saturated World, (eds) H. Knox and D. Nafus (Manchester: Manchester University Press).
Maguire, J., H. Langstrup, P. Danholt, and C. Gad (2020), ‘Engaging the Data Moment: An Introduction’, STS Encounters 11, no. 1, https://doi.org/10.7146/stse.v11i1.135273.
Mannov, A., A. O. Andersen, and M. H. Bruun (2020), ‘Cryptic Commonalities: Working Athwart Cryptography, Mathematics and Anthropology’, STS Encounters 11, no. 1: 27–58.
Mannov, A., A. O. Andersen, and J. Magnussen (prods.) (2021), Graph Topology: Ideal versus Real World between Mathematics and Engineering, video and sound recording (digital), https://youtu.be/EWwjvG3iqEY.
Marden, J. R., and J. S. Shamma (2015), ‘Game Theory and Distributed Control’, in Handbook of Game Theory with Economic Applications 4 (Amsterdam: Elsevier), 861–899, https://doi.org/10.1016/B978-0-444-53766-9.00016-1.
Misra, R., C. S. Kallesøe, and R. Wisniewski (2023), ‘Decentralized Control of a Water Distribution Network Using Repeated Games’, in 27th International Conference on Methods and Models in Automation and Robotics (MMAR), Międzyzdroje, Poland, 181–186, https://doi.org/10.1109/MMAR58394.2023.10242432.
Munk, A. K., A. G. Olesen, and M. Jacomy (2022), ‘The Thick Machine: Anthropological AI between Explanation and Explication’, Big Data & Society 9, no. 1: 205395172110698, https://doi.org/10.1177/20539517211069891.
Nash, J. (1951), ‘Non-Cooperative Games’, The Annals of Mathematics 54, no. 2: 286–295, https://doi.org/10.2307/1969529.
Richards, N., and W. Hartzog (2019), ‘The Pathologies of Digital Consent’, Washington University Law Review 96: 1461, https://ssrn.com/abstract=3370433.
Seaver, N. (2014), ‘Bastard Algebra’, For Prickly Paradigm Press, July.
Seaver, N. (2018), ‘What Should an Anthropology of Algorithms Do?’, Cultural Anthropology 33, no. 3: 375–385, https://doi.org/10.14506/ca33.3.04.
Taylor, L. (2017), ‘What Is Data Justice? The Case for Connecting Digital Rights and Freedoms Globally’, Big Data & Society 4, no. 2: 205395171773633, https://doi.org/10.1177/2053951717736335.
Tesfatsion, L. (2017), ‘Modeling Economic Systems as Locally-Constructive Sequential Games’, Journal of Economic Methodology 24, no. 4: 384–409, https://doi.org/10.1080/1350178X.2017.1382068.
‘Vandsektorloven’ (2009), ‘Lov om vandsektorens organisering og økonomiske forhold’ [Act on the organisation and economic conditions of the water sector], LOV nr. 469 af 12/06/2009, https://www.retsinformation.dk/eli/lta/2009/469.
Verran, H. (2013), ‘Engagements between Disparate Knowledge Traditions: Toward Doing Difference Generatively and in Good Faith’, in Contested Ecologies: Dialogues in the South on Nature and Knowledge, (ed.) L. Green (Cape Town: HSRC Press), 141–161.
Zuboff, S. (2015), ‘Big Other: Surveillance Capitalism and the Prospects of an Information Civilization’, Journal of Information Technology 30, no. 1: 75–89, https://doi.org/10.1057/jit.2015.5.