Atelier CODA
Computing, Data and Arts: implementing responsible and plural algorithms and data practices to design responsible scientific and artistic futures
Coordinators
- Genoveva VARGAS-SOLAR, LIRIS, Computer Science
- Frédéric BEVILACQUA, STMS, Computer Science
- Baptiste CARAMIAUX, ISIR, Computer Science
ComDIR correspondent: Frédéric Bimbot
Themes
Art, computation and data: forms, methods, mediations
- Creative use of data: artistic visualisations, interactive installations, and generative art aided by AI algorithms.
- Hybrid methods: code, sensors, interactivity, multimodality (sound, image, movement) in both the arts and data science.
- Mediation between audiences, artefacts, and data: how art interrogates computing and vice-versa.
Plurality, inclusion and alterity in algorithms and data practices
- Designing algorithms that take into account the diversity of subjects, cultures, genders, and artistic expressions.
- Open practices, co-creation with under-represented communities, subversion of dominant norms.
- Sensitive and “other” data: how can the arts question biases, representations, and exclusions in computing and data science?
Interdisciplinarity, infrastructures and digital ecosystems for art and data
- Shared infrastructures to host, process, and expose artistic data and algorithms.
- Collaboration models between computer scientists, artists, humanities researchers, and communities.
Sovereignty, “data care,” and non-extractivist practices
- The concept of “data care”: treating data with respect, dignity, and attention to the subjects (human or non-human).
- Practices of data sovereignty: who owns the data? Who controls it? What are the rights of the subjects?
- Non-extractivist art: the artistic approach as a counter-model to massive, passive data collection, favouring instead co-construction, sharing, and redistribution.
Ethics and data governance in the artistic and scientific context
- Practices of consent, anonymisation, secure storage and reuse of data in artistic and scientific projects.
- Participatory governance: who decides on uses, algorithms, representations?
- Issues of responsibility, transparency, data provenance (data lineage) and algorithmic audits.
Scientific-artistic trajectories and societal impacts of algorithms
- How algorithms and data, transformed within an artistic-scientific framework, can generate new knowledge and new forms of research (open science, digital humanities).
- Socio-economic and cultural impacts: art and data as vectors of innovation, but also as spaces of critique and paradigm questioning.
- Social responsibility of digital technologies: equity, accessibility, side-effects, environmental footprint (energy consumption, ecological impact).
Data concerned
- Digital creative-arts data: multimedia files, interactive installations, sensor logs (sound, image, motion)
- Research/scientific data: datasets from data science, AI (training sets, models, results), digital humanities datasets, textual and visual data
- Production/exploitation data: metadata from cultural databases, participatory collected data, community datasets, collaborative platform data
- Governance, provenance, and quality data: metadata about governance, consent forms, traceability logs, pipeline logs, data dossiers
- Plurality / alterity data: data from under-represented communities, multilingual datasets, non-dominant cultural data, artistic data from non-Western traditions, sensitive data (gender, identity, alterity)
Scientific context
The contemporary digital revolution, marked by an unprecedented proliferation of data, whether from sensors, social platforms, scientific infrastructures, or artistic devices, has fundamentally transformed research, innovation and creative practices. At the heart of this transformation, data science and artificial intelligence (AI) are no longer mere tools but sociotechnical and cultural devices in their own right; they participate in the construction of knowledge, art and innovation in a highly connected world. Yet this renewal is accompanied by major questions: who produces the data? For what purposes? Which algorithms are used to process this data? Following which objectives and according to which values?
It is in this landscape that the project “CODA: Computing, Data and Arts: implementing responsible and plural algorithms and data practices to design responsible scientific and artistic futures” situates itself, seeking to articulate, from a technofeminist, pluralist and alterity-oriented perspective, the technical and artistic dimensions of data and algorithms.
First, equitable, ethical and responsible data management has become a key issue. Stahl (2025) argues that ethics should not only accompany but underpin data governance: “data governance has an important role to play in supporting the intended and beneficial use of these data and in preventing unintended and malicious use” (Stahl, 2025). Traditional frameworks assuming the neutrality of data are now being challenged. Especially, the increasing combination of sensors, connected objects, machine learning algorithms and data-hungry platforms raises questions about who controls these flows, for what ends, and according to what participatory and inclusive processes. The project therefore proposes to study and discuss principles of informed consent, anonymisation, traceability, distributed governance and “data care”. Data care is understood as a reflective, caring approach to data, considering it not only as model input but as a shared artefact of culture and creation. Data care is an emerging notion in feminist and critical data studies that reframes data work as relational, situated, and ethically charged “care work” rather than neutral technical processing.
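The governance principles listed above (consent, anonymisation, traceability) can be made concrete in a small sketch. This is an illustration only, not the project's implementation: the field names, the salt handling, and the log format are hypothetical, and real anonymisation would need a securely stored random salt and a proper risk analysis. The sketch processes a record only if consent is present, pseudonymises the direct identifier with a salted hash, and appends each step to a provenance log so the data's lineage remains auditable.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative only: in practice the salt must be random and kept secret.
SALT = "project-specific-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hashlib.sha256((SALT + identifier).encode()).hexdigest()[:12]

def process_record(record: dict, log: list):
    """Apply consent check + pseudonymisation, recording provenance events."""
    if not record.get("consent", False):
        log.append({"event": "rejected_no_consent",
                    "time": datetime.now(timezone.utc).isoformat()})
        return None
    # Drop direct identifiers (hypothetical field names) and keep the rest.
    cleaned = {k: v for k, v in record.items() if k not in ("name", "email")}
    cleaned["subject_id"] = pseudonymise(record["name"])
    log.append({"event": "pseudonymised",
                "subject_id": cleaned["subject_id"],
                "time": datetime.now(timezone.utc).isoformat()})
    return cleaned

provenance_log: list = []
out = process_record({"name": "Ada", "email": "ada@example.org",
                      "consent": True, "gesture_data": [0.1, 0.7]},
                     provenance_log)
print(json.dumps(out))
```

The point of the sketch is that care-oriented practices are enforceable in the pipeline itself: refusal without consent and lineage logging happen in code, not only in policy documents.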
Data care asks who cares for data, whose interests this care serves, and what consequences result. Baker and Karasti (2018) articulate “data care and its politics” in scientific infrastructures, showing how local data advocates perform largely invisible labours of maintenance, repair, documentation, and negotiation that make collective data management possible. Jarke and Büchner (2024) extend this by theorising “data care arrangements” in which organisational actors ascribe values to datasets and continuously generate, maintain, and repair them, making care for data an ongoing accomplishment rather than a given. Feminist data ethics and Data Feminism emphasise that data care must explicitly confront power, inequality, and historical harms, reorienting attention and resources toward those most affected by data systems and insisting that data practices be used to challenge, rather than reproduce, oppression (D’Ignazio & Klein, 2020). In archival and memory institutions, Caswell and Cifor’s (2016) notion of “radical empathy” frames data care as a political commitment to communities whose lives are recorded, demanding that records be created, described, and governed with attentiveness to their vulnerabilities and claims to justice. Together, these strands define data care as an ethos and set of practices that centre responsibility, reciprocity, and justice throughout the entire data lifecycle, from collection and curation to analysis, sharing, and deletion.
Second, the field of data science and AI, in its current modalities, still harbours effects of reproducing structural inequalities, both in data sets and in algorithm design teams. Kuhlman, Jackson & Chunara (2020) show that “structural inequalities in society are reflected in the data used to train predictive models and in the design of objective functions” (Kuhlman et al., 2020). They assert that there should be “no computation without representation,” meaning that diversity among designers and subjects is a condition for algorithmic fairness. This echoes D’Ignazio & Klein (2020) who, in Data Feminism, argue that “data science is a form of power, and that power is distributed unequally” (D’Ignazio & Klein, 2020). The project is positioned precisely within this critical dynamic: it is not merely about correcting bias but about imagining data architectures, algorithmic models and visualisation/interaction devices in which alterity, understood as the recognition of difference (cultural, gendered, social, epistemic), is an integral part of design and governance. Therefore, data analytics pipelines and AI models become “spaces of epistemic and aesthetic co-construction”.
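One way to make the bias-reproduction argument operational is a simple audit statistic. The sketch below (the group labels and toy decisions are illustrative, not project data) computes a demographic-parity gap: the largest difference in positive-decision rates between groups defined by a protected attribute, a common first check in algorithmic audits.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between any two groups.

    decisions: iterable of 0/1 model outputs
    groups: parallel iterable of protected-attribute values (illustrative)
    Returns (gap, per-group positive rates).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy audit: a model granting 75% of group "a" but only 25% of group "b".
gap, rates = demographic_parity_gap([1, 1, 1, 0, 1, 0, 0, 0],
                                    ["a", "a", "a", "a", "b", "b", "b", "b"])
print(gap, rates)  # gap of 0.5 between the two groups
```

A single statistic like this does not, of course, exhaust the project's broader claim that alterity must shape design itself; it only shows how one dimension of inequity can be measured and tracked within a pipeline.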
Third, we observe the connection between computing/algorithms/data and artistic creation, a hybrid and fertile terrain. In digital arts, the algorithm ceases to be purely a functional black box; it becomes an object of critique, mediation, and exhibition. Artists use data as raw material and algorithms as medium, questioning the invisible architectures of data (e.g., the Feminist Data Set by Caroline Sinders, https://regainingpoweroverai.org/docs/salons/caroline-sinders/, and the work of Stéphanie Dinkins). This dynamic gives rise to a loop between scientific research (modelling, visualisation, data processing) and artistic creation (installation, interactivity, expression) (Caramiaux & Fdili Alaoui, 2022).
The project CODA thus aims to build a bridge between science and art, leveraging data science and machine learning methods in creative contexts, while using artistic practices to question computing models and open up new potentials. In this way, art becomes a laboratory of alterity, where dominant schemas are de-constructed, re-imagined, and transformed; data science becomes a platform for plurality, co-creation, and experimentation.
The positioning of the project is articulated along three essential axes:
- Governance and equitable data management. The project adopts a posture of inclusive, reflexive, distributed data governance. It incorporates the principles of informed consent, anonymisation, traceability, auditability and algorithmic responsibility. It draws on recent work showing that “governance of ethical and trustworthy AI systems must fully integrate data-governance dimensions” (Agbese et al., 2021) as well as recommendations for participatory processes in technology governance (Burr & Leslie, 2021). In other words, the project does not treat data as capital to be exploited, but as a shared good to be co-governed.
- Integration of alterity in data science, analytics and algorithm practices. Here, alterity is not considered as a secondary add-on (for example, a diversity criterion to apply a posteriori) but as a design structure. This means that data must be collected, captured, processed, and visualised in ways that embed the voices, knowledge, practices, art forms, languages, and cultures often marginalised in the models. It implies diversifying profiles in the design team, rethinking success metrics, offering multiple visualisations/multiple modelling paths, and adopting hybrid formats. D’Ignazio & Klein’s (2020) work on data feminism as well as governance-inclusive reviews (e.g., “The principle of justice in AI governance”, 2024) emphasise that “innovations in AI have raised new concerns … AI participates in systems of oppression unless special attention is given to how it is developed and deployed”. The project consequently aims to make alterity an epistemic and aesthetic resource.
- Articulation of data science/art/technologies. In this third axis, computing, AI, data science, and digital arts are not siloed but co-systemic: art interrogates data science (its norms, its representations, its societal effects), data science questions art (its devices, forms, publics), and the algorithm operates both as a technical artefact and a media object. This approach enables the design of hybrid creation devices (interactive installations, generative visualisations, participatory algorithms) that experiment with new modalities of research, dissemination, and mediation. It aligns scientific objectives (modelling, visualisation, and explainable algorithms) with artistic goals (expression, alterity, and mediation) and societal aims (inclusion, diversity, and participatory governance).
The epistemic stakes of this positioning are manifold: the goal is to redefine what we mean by “data” and “algorithm” (no longer merely technical products or economic resources), to dismantle the notion that data are neutral, and to affirm that algorithms embody societal choices (D’Ignazio & Klein, 2020; Kuhlman et al., 2020). The artistic stakes involve making the invisible visible (biases, exclusions, technical mechanisms) and transforming the algorithm into a medium, not just a tool. The societal stakes consist in ensuring that research and innovation de-centre expert monopoly, becoming an inclusive, co-constructed endeavour rooted in distributed knowledges, in which alterity is not merely studied but acts. This aligns with European and French research and innovation policies that advocate for open science, responsible innovation, plural disciplines, and socially engaged research.
From a methodological standpoint, the project is structured over several phases: (i) mapping existing scientific, artistic and data/algorithm practices that already operationalise alterity; (ii) co-design with communities, artists, collectives, engineers, researchers around data and algorithm governance principles, requiring a participatory protocol to gather their knowledges, expectations and values; (iii) experimentation of data science pipelines, algorithms and hybrid artistic devices constructed under equity, plurality and transparency principles; (iv) reflexive and artistic evaluation of results: visualisations, installations, prototypes, but also community feedback, algorithmic audits, and a governance viability review. This approach allows coupling research, creation and societal intervention.
Moreover, the project situates itself in an international, interdisciplinary frame. It aligns with the European “smart, sustainable and inclusive growth” agenda and with France’s strategy for digital sovereignty, responsible innovation and data valorisation. It responds to the expectations of more open science, more responsible technology and more engaged art. In this sense, it constitutes a strategic contribution at national and European scales: in France, the objective is to develop AI and data uses that respect plurality, ethics and sovereignty; in Europe, programmes such as Horizon Europe encourage interdisciplinary research-innovation, inclusion, diversity and societal engagement. CODA therefore aims to move beyond the extractive logic of data and algorithms by envisaging distributed governance, co-constructed design, and a science–art practice that enacts alterity.
Finally, this positioning translates into a vision for the future: it is not only about producing artefacts (“algorithms”, “models”, “visualisations”) but about creating responsible scientific-artistic futures in which the practices of data, algorithms, computing and art are simultaneously critical, inclusive, generous and creative. This calls for rethinking data-science curricula, making the invisible labour of data visible, valorising minoritized artistic and scientific forms, and embedding alterity not as an afterthought but as the foundation of research and creation. As D’Ignazio and Klein state: “The narratives around big data and data science are overwhelmingly white, male, and techno-heroic” (D’Ignazio & Klein, 2020). CODA aims to reverse that trend by placing alterity, plurality and co-construction at the heart of its challenge.

