Innovation & Human-Centered Designer, specializing in the 0-1 phase of emerging technology

MFA HCI | AI/ML UX Prototyper

Overview:

A research study exploring video-faking algorithms and the defacing of everyday digital identities. “Deepfakes” are fabricated videos that are becoming absurdly difficult to distinguish from authentic footage. They replace one person’s face with another’s to create the illusion of assuming that person's identity. With this technology, the everyday person could manufacture video “proof” as evidence for any claim put forth.

Mundane Deepfakes

A generative autoencoder exploration into deepfakes

Client

Creative Technology at Art Center College of Design

Scope

September 2020–May 2021

Role

Design Researcher, Prototyper

Technology

AI/ML: Generative Autoencoder

Deliverables

Video demo reel, Prototype


Overview

A research study exploring video-faking algorithms known as deepfakes and the defacing of everyday digital identities. The project entailed prototyping algorithms with generative autoencoders and applying design methods that can explain how AI systems reach their decisions or predictions, making them more transparent. The generative autoencoder was the foundation for creating deepfakes: it is a neural network trained to take an input image and output a near-identical reconstruction of it, as sketched below.
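To make that foundation concrete, here is a minimal autoencoder sketch in PyTorch. It is not the project's actual model; the layer sizes, latent dimension, and 64x64 face-crop input are assumptions chosen purely for illustration.

# Minimal convolutional autoencoder sketch (illustrative only; layer sizes,
# latent dimension, and the 3x64x64 face-crop input are assumptions).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        # Encoder: compress a 3x64x64 face crop into a latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # -> 32x32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # -> 64x16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        # Decoder: reconstruct the face crop from the latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # -> 32x32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # -> 3x64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training objective: make the output match the input image.
model = AutoEncoder()
loss_fn = nn.MSELoss()
x = torch.rand(8, 3, 64, 64)   # a batch of face crops (placeholder data)
loss = loss_fn(model(x), x)    # reconstruction loss

The single design goal at this stage is reconstruction: the network is rewarded only for reproducing the face it was given.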

Training the deepfake mechanism starts with two parallel autoencoders, one for the original face and another for the new face. During this process, each autoencoder is trained to produce only images that resemble its own subject. Once both autoencoders are trained, reconstruction begins: the deepfake is produced by switching the decoders. The output is a reconstructed image that shows the new face but keeps the head alignment and expression of the original photograph.
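The sketch below extends the previous snippet to show that training-and-swap step. The shared encoder with two person-specific decoders is a common deepfake arrangement assumed here for illustration, not the project's exact code; the optimizer settings and placeholder frames are likewise assumptions.

# Sketch of the train-then-swap step (reuses the AutoEncoder class from the
# previous snippet). Shared-encoder layout, optimizer settings, and the
# placeholder data are assumptions for illustration.
import torch
import torch.nn as nn

latent_dim = 256
encoder = AutoEncoder(latent_dim).encoder      # shared encoder
decoder_a = AutoEncoder(latent_dim).decoder    # decoder for person A (original face)
decoder_b = AutoEncoder(latent_dim).decoder    # decoder for person B (new face)

loss_fn = nn.MSELoss()
params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)

def train_step(faces_a, faces_b):
    """Each decoder learns to reconstruct only its own person's faces."""
    recon_a = decoder_a(encoder(faces_a))
    recon_b = decoder_b(encoder(faces_b))
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# The swap: encode a frame of person A, then decode it with person B's decoder.
# The result keeps A's head alignment and expression but shows B's face.
with torch.no_grad():
    frame_a = torch.rand(1, 3, 64, 64)   # placeholder frame of person A
    fake = decoder_b(encoder(frame_a))

Because the encoder only captures pose and expression while each decoder renders a specific identity, swapping decoders at inference time is what turns an ordinary reconstruction into a deepfake.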

This provided a unique opportunity to explore how users developed, understood, navigated, and interacted with the generative autoencoder to deploy deepfakes. Several use-case studies were outlined as part of this research.



Reflection

  • AI systems are not sentient; rather, they are driven by sets of ethical or unethical goals and rules.

  • AI/ML systems have memory, learn, and change over time. This evolving nature may be very crude.

  • When machines confront a morally ambiguous situation, what do they do? React based on a rule? Do they ask a human? How would this work? Who is responsible for the decision?

What might be explored further in the future?

  • AI/ML co-existing with human workers

  • Coded inequities by goals and rules

  • Exploitation via AI/ML