The creative future of generative AI

Few technologies have shown as much potential to shape our future as artificial intelligence. Specialists in fields ranging from medicine to microfinance to the military are evaluating AI tools, exploring how these might transform their work and worlds. For creative professionals, AI poses a novel set of challenges and opportunities, particularly generative AI, which uses algorithms to transform vast amounts of data into new content.

The future of generative AI and its impact on art and design was the topic of a sold-out panel discussion on Oct. 26 at the MIT Bartos Theater. It was part of the annual meeting of the Council for the Arts at MIT (CAMIT), a group of alumni and other supporters of the arts at MIT, and was co-presented by the MIT Center for Art, Science, and Technology (CAST), a cross-school initiative for artist residencies and cross-disciplinary projects.

Introduced by Andrea Volpe, director of CAMIT, and moderated by Onur Yüce Gün SM ’06, PhD ’16, the panel featured multimedia artist and social science researcher Ziv Epstein SM ’19, PhD ’23; MIT professor of architecture and director of the SMArchS and SMArchS AD programs Ana Miljački; and artist and roboticist Alex Reben MAS ’10.


Panel Discussion: How Is Generative AI Transforming Art and Design?
Thumbnail image created using Google DeepMind AI image generator.
Video: Arts at MIT

The discussion centered on three themes: emergence, embodiment, and expectations.

Emergence  

Moderator Onur Yüce Gün: In much of your work, what emerges is often a question, an ambiguity, and that ambiguity is inherent in the creative process in art and design. Does generative AI help you reach those ambiguities?

Ana Miljački: In the summer of 2022, the Memorial Cemetery in Mostar [in Bosnia and Herzegovina] was destroyed. It was a post-World War II Yugoslav memorial, and we wanted to find a way to uphold the values the memorial had stood for. We compiled video material from six different monuments and, with AI, created a nonlinear documentary, a triptych playing on three video screens, accompanied by a soundscape. With this project we fabricated an artificial memory, a way to seed those memories and values into the minds of people who never lived them. That is the kind of ambiguity that can be problematic in science, and one that is fascinating for artists and architects and designers. It’s also a bit scary.

Ziv Epstein: There’s some debate over whether generative AI is a tool or an agent. But even if we call it a tool, we need to keep in mind that tools are not neutral. Think about photography. When photography emerged, a lot of painters feared it meant the end of art. But it turned out that photography freed up painters to do other things. Generative AI is, of course, a different kind of tool because it draws on an enormous quantity of other people’s work. There is already artistic and creative agency embedded in these systems. There are already ambiguities in how these existing works will be represented, and in which cycles and ambiguities we will perpetuate.

Alex Reben: I’m often asked whether these systems are actually creative, in the way that we’re creative. In my own experience, I’ve often been surprised at the outputs I create using AI. I see that I can steer things in a direction that parallels what I might have done on my own but is different enough from it, amplified or altered or modified. So there are ambiguities. But we need to keep in mind that the term AI is itself ambiguous. It’s actually many different things.

Embodiment

Moderator: Most of us use computers every day, but we experience the world through our senses, through our bodies. Art and design create tangible experiences. We hear them, see them, touch them. Have we attained the same sensory interaction with AI systems?

Miljački: As long as we’re working in images, we’re working in two dimensions. But for me, at least in the project we did around the Mostar memorial, we were able to produce affect on a variety of levels, levels that together produce something bigger than a two-dimensional image moving in time. Through images and a soundscape we created a spatial experience in time, a rich sensory experience that goes beyond the two dimensions of the screen.

Reben: I guess embodiment for me means being able to interface and interact with the world and modify it. In one of my projects, we used AI to generate a "Dali-like" image, and then turned it into a three-dimensional object, first with 3D printing and then by casting it in bronze at a foundry. There was even a patina artist to finish the surface. I cite this example to show just how many humans were involved in the creation of this artwork at the end of the day. There were human fingerprints at every step.

Epstein: The question is, how can we embed meaningful human control into these systems, so that they could be more like, for instance, a violin. A violin player has all kinds of causal inputs, physical gestures they can use to transform their artistic intention into outputs, into notes and sounds. Right now we’re far from that with generative AI. Our interaction is largely typing a bit of text and getting something back. We’re basically yelling at a black box.

Expectations

Moderator: These new technologies are spreading so rapidly, almost like an explosion. And there are enormous expectations around what they’ll do. Instead of stepping on the gas here, I’d like to test the brakes and ask what these technologies are not going to do. Are there promises they won’t be able to fulfill?

Miljački: I’m hoping that we don’t go to "Westworld." I understand we do need AI to solve complex computational problems. But I hope it won’t be used to replace thinking. Because as a tool AI is actually nostalgic. It can only work with what already exists and then produce probable outcomes. And that means it reproduces all the biases and gaps in the archive it has been fed. In architecture, for example, that archive is made up of works by white male European architects. We have to figure out how not to perpetuate that kind of bias, but to question it.

Epstein: In a way, using AI now is like putting on a jetpack and a blindfold. You’re going really fast, but you don’t really know where you’re going. Now that this technology appears to be capable of doing human-like things, I think it’s a great opportunity for us to think about what it means to be human. My hope is that generative AI can be a kind of ontological wrecking ball, that it can shake things up in a really interesting way.

Reben: I know from history that it’s pretty hard to predict the future of technology. So trying to predict the negative, what may not happen, with this new technology would be nearly impossible. If you look back at what we thought we’d have by now, at the predictions that were made, it’s quite different from what we actually have. I don’t think anyone today can say for certain what AI won’t be able to do someday. Just as we can’t say what science will be able to do, or humans. The best we can do, for now, is try to drive these technologies toward the future in a way that will be useful.
