Researchers from Penn, the Massachusetts Institute of Technology, Google, and the Max Planck Institute for Informatics have released an artificial intelligence-based image editing tool that lets users manipulate image features by dragging them.
The tool, called DragGAN, allows users to click and drag a few points in an image until they achieve the desired modification. For example, mouths can be opened or closed, sleeves and shorts can be extended or shortened, and mountains can be made taller or shorter, according to the paper released by the researchers.
DragGAN is based on a generative adversarial network, which can be trained to generate photorealistic images. A GAN consists of two neural networks: a generator and a discriminator. The generator creates an image, and the discriminator penalizes results that are discernibly fake. The training process continues until the generator produces images that fool the discriminator into thinking that they are real, according to Google Machine Learning.
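The adversarial loop described above can be sketched with a toy example. The snippet below trains a one-dimensional GAN: a scalar affine generator learns to match a target normal distribution while a logistic discriminator tries to tell real samples from generated ones. The target distribution, the scalar models, and the hand-derived gradient steps are all illustrative assumptions for this minimal sketch; they bear no relation to DragGAN's actual architecture, which operates on full images.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: x = w_g * z + b_g, with noise z ~ N(0, 1).
w_g, b_g = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w_d * x + b_d), the probability x is real.
w_d, b_d = 0.0, 0.0

lr, batch = 0.02, 64
for step in range(3000):
    real = rng.normal(4.0, 1.25, batch)   # "real" data: samples from N(4, 1.25)
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g                  # generated samples

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. reward correct real/fake classification.
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b_d += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: gradient ascent on log D(fake) (the non-saturating
    # loss), i.e. reward fooling the discriminator.
    d_fake = sigmoid(w_d * fake + b_d)
    w_g += lr * np.mean((1 - d_fake) * w_d * z)
    b_g += lr * np.mean((1 - d_fake) * w_d)

# After training, generated samples drift toward the real distribution's mean.
samples = w_g * rng.normal(0.0, 1.0, 1000) + b_g
print(float(samples.mean()))
```

Training stops in practice when the discriminator can no longer distinguish the two sources better than chance; here, the generated samples' mean moving from 0 toward 4 plays the same role on a far smaller scale.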
The group of researchers, which included Penn Computer and Information Science assistant professor Lingjie Liu, trained DragGAN to precisely follow user movements while maintaining photorealism.
DragGAN is capable of recreating hidden content, such as the teeth inside a lion’s mouth, and deforms objects in a way that respects their anatomy, such as the bending of a horse’s leg, according to the paper. Each operation takes only a few seconds to complete, allowing users to edit images in real time.
The project website contains video demonstrations of the tool in action but is experiencing intermittent downtime due to its popularity, The Verge reported. The code behind the tool will be released in June, according to the GitHub repository linked from the project website.
The paper warns that due to the capabilities of the tool and its potential for misuse, “any application or research that uses [the researchers’] approach has to strictly respect personality rights and privacy regulations.”
Penn faculty and students have raised concerns about the use of AI tools in academia, as students may use them to cheat on homework assignments, and the data used to train models may contain stereotypes. In the absence of a University-wide policy on these tools, professors have taken a variety of approaches to address the use of AI in their classes.