- Jan 2019
CTP is a key method for reflective design, since it offers strategies to bring unconscious values to the fore by creating technical alternatives. In our work, we extend CTP in several ways that make it particularly appropriate for HCI and critical computing.
Ways in which Sengers et al. describe extending CTP for HCI needs:
• incorporate both designer/user reflection on technology use and its design
• integrate reflection into design even when there is no specific "technical impasse" or metaphor breakdown
• be driven by critical concerns, not simply technical problems
CTP synthesizes critical reflection with technology production as a way of highlighting and altering unconsciously-held assumptions that are hindering progress in a technical field.
Definition of critical technical practice.
This approach is grounded in AI rather than HCI.
(verbatim from the paper) "CTP consists of the following moves:
• identifying the core metaphors of the field
• noticing what, when working with those metaphors, remains marginalized
• inverting the dominant metaphors to bring that margin to the center
• embodying the alternative as a new technology"
- Aug 2018
But addressing the problematic internal culture of design teams is not enough. As an industry we must also confront the real-world socio-political outcomes of our practice. If we accept a code of conduct as necessary, we must also accept a code of outcomes as necessary. We must create ethical frameworks to evaluate our work at all stages, especially once it is alive in the world. Without ongoing critical evaluation of our profession, design continues to reinforce a harmful status quo, creating exploitable systems at the expense of societies.
Beyond adopting better design paradigms, designers must look outside the field, toward practices that directly criticise or oppose their work. In particular, security research and user experience design overlap significantly in practice and goals, yet the relationship between them is often antagonistic. Both fields focus primarily on the systems of wide-scale interaction between users and technology, but their goals are diametrically opposed: design aims to create the best possible experience for a user, while security aims to create the worst possible experience for an attacker. Judged by outcomes, security research is a form of user experience design. Design should reciprocate, and become a form of security research.
Design is inherently political, but it is not inherently good. With few exceptions, the motivations of a design project are constrained first by the encompassing platform or system, and second by the experiences and values of its designers. The result is designers working in a user-hostile world, where even seemingly harmless platforms or features are exploited for state or interpersonal surveillance and violence.

As people living in societies, we cannot be separated from our political contexts. Yet design practitioners research and implement systems based on a process of abstracting their audience through user stories. A user story is "a very high-level definition of a requirement, containing just enough information so that the developers can produce a reasonable estimate of the effort to implement it" [23]. In most cases, users are grouped through shared financial or biographical data, by their chosen devices, or by their technical or cognitive abilities.

When designing for the digital world, user stories ultimately determine what is or is not an acceptable area of human variation. The practice empowers designers and engineers to communicate via a common problem-focused language. But design that views users through a politically naive lens leaves practitioners blind to the potential weaponisation of their work. User-storied design abstracts an individual user from a person of lived experience into a collection of designer-defined generalisations. In this approach, users' political and interpersonal experiences are likewise generalised or discarded, creating a shaky foundation on which the biases of the design team harden into assumptions.
This is at odds with the personal lived experience of each user, and with the complex interpersonal interactions that occur within a designed digital platform. When a design transitions from theoretical to tangible, individual user problems and motivations become part of a larger interpersonal and highly political human network, affecting communities in ways we do not yet fully understand. In Infrastructural Games and Societal Play, Eleanor Saitta writes of the rolling anticipated and unanticipated consequences of systems design: "All intentionally-created systems have a set of things the designers consider part of the scope of what the system manages, but any nontrivial system has a broader set of impacts. Often, emergence takes the form of externalities — changes that impact people or domains beyond the designed scope of the system" [24]. These are challenges even in an empathetically designed system, but in the context of design homogeny, they cascade.

In a talk entitled From User Focus to Participation Design, Andie Nordgren argues that participatory design is a step toward developing empathy for users: "If we can't get beyond ourselves and our [platforms] – even if we are thinking about the users – it's hard to transfer our focus to where we actually need to be when designing for participation, which is with the people in relation to each other" [25].

Through inclusion, participatory design extends a design team's focus beyond the hypothetical or ideal user, considering the interactions between users and other stakeholders rather than user stories alone. When implemented with the aim of engaging a diverse range of users throughout a project, participatory design becomes more political, forcing teams to confront opportunities for weaponised design at every stage of the process.