Seeing machine vision as more than a technology
From image manipulation to generative artificial intelligence (AI) and facial recognition, machine vision has become a central part of our everyday lives. But how do these machines, and their ability to register, analyse, process and represent visual information, affect us? To answer that question, the EU-funded Machine Vision project decided to look beyond the technology. “While there is a lot of information on the technology itself and how it can be used, there is a lack of research that sees machine vision as something cultural and aesthetic, and as a medium,” says Jill Walker Rettberg, a professor of Digital Culture at the University of Bergen and the Machine Vision project’s principal investigator. The project, which received support from the European Research Council, focused on understanding how everyday machine vision affects the way ordinary people understand themselves and their world. To do so, it analysed digital art, games and narratives that use machine vision as a theme or interface. It also examined the use of consumer-grade machine vision applications in social media and personal communication.
Understanding machine vision situations
At the heart of the project is the concept of machine vision situations, which Rettberg defines as the moment in which machine vision technologies come into play and make a difference in the course of events. “Machine vision technologies must be understood within the specific context in which they are put to use,” explains Rettberg. This concept laid the foundation for creating a database that includes 500 creative works using or representing machine vision technologies. “This database will be useful to humanities and social science scholars interested in the relationship between technology and culture, and for designers, artists and scientists developing machine vision technologies,” notes Rettberg.
Machine vision technologies in everyday life
To understand how machine vision technologies are used in everyday life, the project conducted ethnographic research in a range of contexts, from physical places such as Taipei, Hong Kong and Chicago to digital spaces such as social media and online communities. For example, researchers traced the role of neighbourhood surveillance infrastructure in Chicago, connecting it to the city’s history and present-day politics. They also documented the social practices of Chinese deepfake creators gathering on specific video streaming platforms. “Through participant observation, interviews and qualitative data collection, this ethnographic research allowed us to triangulate our analyses of creative works with the everyday use of machine vision technologies,” remarks Rettberg.
A go-to source for cultural approaches to artificial intelligence
A lot has changed since the project launched in 2018. “When we first started researching machine vision, generative AI models were at an early stage and deepfakes were new and strange but not yet easy to create,” notes Rettberg. When generative AI came roaring onto the scene in 2023, the project had already been researching the subject for years. “When machine vision went mainstream, we were ready and quickly became a go-to source for information, providing advice to policymakers, teaching students and giving talks to industry and academic audiences,” concludes Rettberg. Rettberg and her team are continuing their research on AI via the EU-funded AI STORIES project, along with the ALGOFOLK project, which is funded by the Trond Mohn Research Foundation.
Keywords
Machine vision, technologies, generative AI, facial recognition, digital art, social media, artificial intelligence, AI STORIES