 

Imaging Futures Lab

Members

 
 

Associate Professor Adrian Dyer, Face recognition, forensic data validation and information access

Associate Professor Adrian Dyer


Who controls the narrative? When Steven Sasson invented the digital camera at Kodak in 1975, few people could have envisaged mobile phones with high-resolution cameras that instantly transmit images around the world. At that time, photographic evidence-chain protocols for law enforcement were embedded in work practice: only authorised persons were permitted to collect and manage images in a formal way. Now, widespread camera deployment, for example police body cameras, has jumped ahead of legal consideration of what counts as legitimate evidence. This matters for who can supply and subsequently access multiple sources of visual evidence, and for what is potentially admissible in a given court jurisdiction. When such information can be networked with powerful deep-learning face recognition AI, we have a dangerous cocktail that even dystopian authors fell short of envisaging.

Associate Professor Adrian Dyer RMIT Staff Profile

 

Dr Jair Garcia


We increasingly trust algorithms to make decisions about everyday issues, from sorting images by content and relevance to adjusting the speed of driverless vehicles according to traffic levels. Yet we question how these algorithms use digital images to inform their decisions, particularly when the images are recorded without human intervention. What visual constructs does the learning machine encode from an image? This question links to ideas posed to photography in its infancy, when cameras began capturing reality without humans and, as a correspondent for Le Commerce observed of one of the first daguerreotypes in 1839, Nature appeared to be depicting itself. There is a bridge between the photographic aesthetic and its interpretation by computer algorithms, one that, perhaps surprisingly, is inspired by the way information is coded by the visual systems of humans and other animals.
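As a minimal illustration of that bridge (a sketch added here for clarity, not part of the project itself), the example below applies an oriented edge filter to an image by convolution: the same operation that models orientation-selective cells in biological vision and that forms the first layers of convolutional neural networks.

```python
# Minimal sketch: an oriented edge filter applied by convolution.
# Illustrative only; assumes NumPy and SciPy are available.
import numpy as np
from scipy.signal import convolve2d

# A 3x3 vertical-edge kernel, a crude stand-in for an oriented receptive field.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

def edge_response(image: np.ndarray) -> np.ndarray:
    """Return the response of a greyscale image to the oriented filter."""
    return convolve2d(image, kernel, mode="same", boundary="symm")

# Synthetic example: a bright square on a dark background.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0
response = edge_response(image)
print(response.shape, float(response.min()), float(response.max()))
```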

Dr Jair Garcia RMIT Staff Profile

 

Nirma Madhoo, Obsidian V2, 2021

Nirma Madhoo


School of Fashion and Textiles
HDR Doctoral Candidate


“OBSIDIAN 2.0
A minor planet
A rogue planet captured by AXP 1e-11’s gravitational well
Basaltic. Melanated. Noir
A virtual world inhabited by multiple symbiont and avatar, the I.N.A
homage to Octavia Butler’s protagonist in Fledgling.”

Fashion becomes image in the digital, and Obsidian XR explores the notion of digitally fashioned bodies in VR and AR at a physical exhibition launched at the Melbourne Fashion Festival at MARS Gallery in March 2021.

Obsidian is an embodied way to experience fashion in what Anna Munster (Materializing New Media, 2006) theorizes as the enmeshing of digital and baroque aesthetics. The motivation for the project is posthumanist, as are the ongoing and evolving methodologies. Obsidian was initiated collaboratively as an open-ended project hosted in social VR, garnering further contributors from the VRChat community.

For Obsidian 2.0, the first iterations of the I.N.A performing with an animated kinetic sculpture are a direct reference to the collaboration of fashion designer Iris van Herpen with artist Anthony Howe, but rendered in a world not constrained by physics. Future iterations will further explore the potential of this transmaterial assemblage for performance in VR.

Obsidian 2.0 are: Nirma Madhoo | Jason Stapleton | Kiara Gounder | Ponz | Aaron

anatomythestudio.com

 

J Rosenbaum, Untitled
Photo credit: GSPF2 and Bernie Phelan

J Rosenbaum


School of Art
HDR Doctoral Candidate


Gender bias in machine learning image detection and classification systems is a significant issue because these systems are implemented on the assumption of a gender binary. That assumption often excludes binary and non-binary, passing and non-passing transgender people. I will focus my practice-based research on computer visions of gender and the way figures created by image generation algorithms are interpreted on a gender spectrum. I will train biased image generation algorithms, observe how they change as I introduce new data, and make artworks with the sample images generated during training. I will work with both existing and custom-trained neural networks to explore the differences between out-of-the-box gender creation and perception and fine-tuned results.
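A minimal sketch of that out-of-the-box versus fine-tuned comparison appears below, assuming a pretrained torchvision classifier and a hypothetical folder new_data/ of labelled images; the framework, model and folder name are illustrative assumptions only, and the research itself works with image generation networks, which follow the same pretrain-then-fine-tune pattern.

```python
# Sketch only: compare an out-of-the-box pretrained classifier with a copy
# fine-tuned on newly introduced data. "new_data/" is a hypothetical folder
# arranged with one sub-folder per label.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Out-of-the-box model: kept as-is for baseline predictions.
baseline = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Fine-tuned copy: the final layer is replaced and retrained on the new data.
new_data = datasets.ImageFolder("new_data/", transform=preprocess)
loader = DataLoader(new_data, batch_size=16, shuffle=True)

tuned = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
tuned.fc = nn.Linear(tuned.fc.in_features, len(new_data.classes))
optimizer = torch.optim.Adam(tuned.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

tuned.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(tuned(images), labels)
        loss.backward()
        optimizer.step()

# After training, the two models' predictions on the same images can be
# compared to observe how the introduced data has shifted the learned categories.
tuned.eval()
```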

We are on the cusp of a new vision of gender and a new revolution in the way AI works and intervenes in our daily lives. There is no better time to address these issues, because the magnification of bias will only worsen unless it is addressed.

Key Projects
Frankenstein’s Telephone
‘Frankenstein’s Telephone’ is a conceptual artwork created using artificial intelligence to examine how computer vision expresses gender classifications. Each image in the sequence is a snapshot documenting how various neural networks have generated an image from a prompt phrase.

jrosenbaum.com.au

 
 
 

Banner image credit: Nirma Madhoo, OBSIDIAN XR, 2021