#necs2017 Panel "Sense-app-ility"

or: How the interconnection of human senses and app-technology en- or disables users

Join us for this panel at NECS 2017 in Paris on 1 July, 11:00-12:45!

In the digital age, electronics no longer only provide distractions, but also educational tools and assistive apps "for those with significant vision, […] hearing, mobility, language and communication or learning needs" (Microsoft Accessibility 2017). The frequently used notion of 'assistive technology' (cf. Roulstone 2016), however, turns out to be less useful, since it tends to be misunderstood as a (neutral) actor of normalisation and thus ultimately forecloses the possibility of observing and analyzing the techno-situatedly produced app-practices of en- or disablement (Schillmeier 2014). Starting from what Callon calls a considerable "asymmetry […] between agencies" (Callon 2005, 5) – between macro-actors like our current app-culture (Gardner/Davies; Binczek/Jäger/Linz) and the minority of people with sensory disabilities – is a strategic decision that provides a framework for analyzing relations of inclusion-exclusion and modes of participation between agencies. The term 'attachment' (Hennion 2010), as an element operating beyond the difference between active and passive, thus becomes a relational factor that reciprocally affects and is affected, that mediates and is mediated between assistive apps, users and their ecologies. The panel will therefore analyze exemplary materials like advertising clips, testimonials, reports or articles in journals and periodicals about the digital communication practices of 'disabled' individuals confronted with usability problems in everyday situations. We try to slow down our reading and retransform, in a cosmopolitical perspective, the so-called 'matters of fact' – the will of deaf people to communicate with the hearing world, their wish to participate, their minorization – into a situated "matter of concern open to technical device" (Stengers 2005, 188) in order to offer a contrast to normalisation processes.

Overview

Robert Stock
Be my eyes and the media environment of distributed dis-appilities

Markus Spöhrer
'Audio Games': Playing with Sound in Snake 3D

Axel Volmar
The Image of Technology: Videophones between enhancement and assistive technology (cancelled)


Abstracts

Robert Stock
Be my eyes and the media environment of distributed dis-appilities

In this talk, I analyze the app Be my eyes (http://www.bemyeyes.org/) from a media studies and disability studies perspective in order to critically reflect on contemporary app cultures (Miller/Matviyenko 2014). According to the app's homepage, Be my eyes "enables blind people to contact a network of sighted volunteers for help with live video chat". By drawing on ads, testimonials and other material related to this app, I will demonstrate how the media practice of seeing is produced in contemporary digital mobile culture under a new technological condition. While a "call for assistance" is being performed, non-seeing, seeing, as well as hearing and speaking are being processed through the media environment of the app, encompassing software, miniaturized cameras, loudspeakers and microphones, HD displays, smartphones, wireless connectivity and, last but not least, the users themselves. One can conceive of this as a form of co-creation, "a world of mediations and effects" (Hennion 2010) in which disparate elements are not only attached to each other but in which their very meaning is created in the first place.
Developed by a start-up to support people with visual disabilities by putting them into instantaneous contact with members of the app-community, Be my eyes is meant to enable its users to manage daily life situations in which reading or perceiving visually is necessary but no screen reader or other so-called 'assistive technology' (Roulstone 2016) is at hand. Against this background, it also becomes urgent to observe processes of community building (Ellis/Kent 2016) related to the app and its user network. That is, one has to question the formation of so-called blind or disabled users and seeing or able-bodied users in order to understand how these categories emerge and are formatted through this media environment.

Markus Spöhrer
'Audio Games': Playing with Sound in Snake 3D

Audio Games highlight 'audio' as the major narrative, ludic and interactive element in the process of gaming. While some of the most popular 'mainstream' games sporadically feature ludic auditory sequences, these games heavily rely on visuals in the interactive process established between the player and the gaming dispositive (cf. Waldrich 2016). Audio games, however, enroll the player in this process and distribute agency by translating auditive (and tactile) cues into interactive 'pings' (Pias 2005) and thus provide a potential for what can be called an 'auditory virtual space' (cf. Cayatte 2014). The games I will look at are designed either for blind persons, dismissing graphical elements altogether, or as 'learning software' for hard-of-hearing people that uses the auditory ludic element as a means of practicing 'spoken language'; in both cases they foreground auditory perception as a main condition for 'playing the game'. I will demonstrate this using the example of 3D Snake (abandonware 2004), which is usually played with headphones or surround speakers and uses both verbal instructions and different sound effects (with corresponding tone pitches, sound locations and volume levels) to produce an auditory image of a snake that can be moved with the computer keyboard or a game controller. I will not discuss how playing such games can be considered a practice of dis- or enabling, in- or excluding persons from visual cultures; instead, I intend to analyze how, in such an 'auditory environment', the relations between human and non-human elements (e.g. controller devices, the arrangement of speakers, cultural practices of gaming, aesthetic devices and software configurations) produce and translate a specific mode of perception and a 'gaming situation' "that can potentially neutralize conceptions of even the most inclusive definition of video games" (Cayatte 2014, p. 204).

Axel Volmar
The Image of Technology: Videophones between enhancement and assistive technology

This paper aims to deepen the conversation between media studies, software studies, and disability studies. Users with hearing impairments have been addressed and presented as a main target group for videophone services since the introduction of the AT&T Picturephone in the 1960s. By focusing on the discourse of the "assistive app" in the history of visual telephony, I first assess the role of the "disabled user/user in need" for the development and marketing of videophones. Along with alleged benefits for education, the assistive potential of videophones formed part of promotional campaigns in search of new users and public interest. In light of this quest for rendering technology "assistive," I discuss the impact of videophones, e.g. as tools for communicative empowerment, for deaf, hard-of-hearing and speech-impaired individuals.
In the second part I show that technologies for "normal" and hearing/speech-impaired users paradoxically parted ways when digital codecs for image and video compression were being developed in the 1980s. Turning to the history of standardizing digital video telephony and conferencing codecs in international standardization bodies, I argue that due to certain conceptualizations of potential crisis (especially drops in bandwidth), codecs for video conversations became "bridges" for users with "normal" hearing but "barriers" for users with hearing and speech impairments (Star 1999, 388–389). Even today's app-based videophone infrastructures are not optimized for signing, as they primarily cater to users with "normal" hearing, leaving companies such as Microsoft with the necessity of explicitly advertising their engagement with "assistive technology" rather than making their standard technology accessible for users with different needs. Special apps such as Skype Translator, however, aim at bridging "language barriers" by means of machine learning. I will close with a discussion of language, communication and image technology between enhancement and assistance.

Speakers' CVs

Beate Ochsner is a professor of media studies at the University of Konstanz (Germany). Since 2015 she has been the speaker of the DFG research group "Media and Participation. Between Demand and Entitlement" (www.mediaandparticipation.com). She is also responsible for a DAAD exchange program with the University of Valparaiso (Chile). Her main research fields include media participation, the audiovisual production of dis/ability, assistive technology, and monsters and monstrosities. She is co-editor of "Mediale Praktiken des Sehens und Hörens" ("Media practices of seeing and hearing") (together with Robert Stock, Bielefeld: transcript 2016) and of Applying the Actor-Network Theory in Media Studies (together with Markus Spöhrer, IGI Global 2016). Her recent publications include "Documenting Neuropolitics: Cochlear Implant Activation Videos", in: Documentary and Disability, ed. by Helen Hughes/Catalin Brylla, London: Palgrave Macmillan 2016, and "Human, Non-Human, and Beyond: Cochlear Implants in Socio-Technological Environments", in: NanoEthics 9.3, 237-250, together with Robert Stock and Markus Spöhrer.

Markus Spöhrer studied American Cultural Studies, German Studies and English Literature at the University of Tübingen, Germany, as well as Film Production, Film History and Popular Music at the University of Miami, Coral Gables. He received his Ph.D. in Media Studies from the University of Konstanz, Germany. Currently he is a postdoctoral researcher in the DFG project "Mediale Teilhabe" (Media and Participation). He also works as a lecturer in Game Studies and the theory of media, culture and film. His research interests are film production, Game Studies, philosophy of science, Science and Technology Studies, and participation cultures of the cochlear implant.

Robert Stock is the coordinator of the DFG research group "Media and Participation" at the University of Konstanz, Germany. He holds a Master's degree in European Ethnography from the Humboldt University of Berlin. His main research interests are the mediality of dis-abling processes in digital culture, media practices of hearing and seeing, representations of disability, museums and the question of accessibility, as well as documentary film and audiovisuality. He is co-editor of ReClaiming Participation. Technology – Mediation – Collectivity (with Mathias Denecke/Anne Ganzert/Isabell Otto, Bielefeld: transcript 2016) and senseAbility – Mediale Praktiken des Sehens und Hörens (with Beate Ochsner, Bielefeld: transcript 2016). Recent publications also include "Singing altogether now. Unsettling images of disability and experimental filmic practices", in: Documentary and Disability, ed. by Catalin Brylla/Helen Hughes, London: Routledge, forthcoming, and "Human, Nonhuman, and Beyond. Cochlear Implants", in: NanoEthics 9.3 (2015, with Markus Spöhrer and Beate Ochsner).

Axel Volmar is a postdoctoral fellow at the collaborative research centre "Media of Cooperation" (SFB 1187 Medien der Kooperation) at the University of Siegen. From 2014 to 2016, he was a Mellon Postdoctoral Fellow in the Department of Art History and Communication Studies at McGill University. His research interests reside at the intersections of media studies, the history of science and technology, and sensory studies; his current research focuses on the history of audiovisual telecommunications. He is the author of Klang-Experimente. Die auditive Kultur der Naturwissenschaften 1761–1961 (Frankfurt/M.: Campus, 2015) and has co-edited a number of special issues and collected volumes in sound studies, among them Auditive Medienkulturen. Techniken des Hörens und Praktiken der Klanggestaltung (Bielefeld: transcript, 2013) and Das geschulte Ohr. Eine Kulturgeschichte der Sonifikation (Bielefeld: transcript, 2012).
