RabbleRouse News

"The final Story, the final chapter of western man, I believe lies in Los Angeles." – Phil Ochs

Paris AI Summit, Deepfakes, Deregulation & the WGA Takes a Stand

I hope everyone is safe from the fires and recovering as best as possible. The strength and resilience of the LA community during this difficult time remind us of the power of solidarity, especially in creative spaces. The local support networks and recovery efforts have been incredible, and we’ll continue sharing information and resources for artists and the broader community.

Meanwhile, we return to our ongoing exploration of AI for Artists. Recently, I had the opportunity to attend the Paris AI Summit, where I presented my study on AI and fairness. The summit gathered world leaders, tech executives, and academics to explore AI’s societal impact and its implications for governance and the environment. To kick off the event, French President Emmanuel Macron showcased a montage of deepfake videos featuring himself in scenes from popular films and TV series, commenting, “Nicely done.” (Deepfakes are AI-generated videos that manipulate real footage, often placing people in situations they were never in.)

While the use of deepfakes by a national leader raises concerns about their normalization, the summit concluded with discussions on reducing regulatory barriers, increasing funding for AI, and pledges from several countries (though the UK and US chose not to sign) to promote open and inclusive AI. A key development is Europe’s shift toward ‘deregulation’ to encourage AI innovation. While Europe has led in AI regulation, growing competition from the US and China is pushing this change. This deregulatory trend in Europe could also influence policies in other countries, and artists should pay attention, as it may have immediate impacts on the creative industry.

WGA Calls for Production Companies to Challenge AI Firms

Back home, Brady Corbet, director of The Brutalist (a 10-time Oscar nominee), recently told Deadline that while actors Adrien Brody and Felicity Jones spent months with a dialect coach perfecting their accents, AI was used in post-production to refine their Hungarian vowel sounds. In a recent letter, the Writers Guild of America (WGA) also called for production companies to challenge AI firms that use studio intellectual property to train generative models. The Guild argues that tech companies are appropriating works created by generations of union labor without permission, raising critical questions of copyright and artistic ownership.

As AI tools and the news cycle evolve rapidly, it’s crucial to understand generative AI, the powerhouse driving the exponential growth of these technologies.

How Does Generative AI Work?

In the last column, we covered the basics of AI, Machine Learning (ML), and Generative AI. We discussed how Generative AI is based on deep learning, a form of machine learning that processes vast amounts of data to create mathematical models or ‘representations’ of that data. These representations are then used to predict, classify, or, in the case of generative AI, create entirely new content. Now, let’s break down how generative AI works by looking at the key steps: data collection, training, testing, and content generation.

1. Data Collection & Preparation:

Gathering data is the first step in any generative AI project. This data consists of large datasets reflecting the content the AI will eventually generate. For example, Midjourney, an AI art generator, is fed several billion images tagged with descriptive text. This helps the AI learn the features, styles, and patterns that define various artworks. If an artist inputs a prompt like “a futuristic cityscape at sunset with neon colors and flying cars,” the AI will draw on the patterns it learned from cityscapes, sunsets, neon color schemes, and cars to create a unique, new image.
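To make the data-preparation step concrete, here is a minimal sketch of what an image–caption dataset looks like before training. The file names, captions, and the `Example` class are invented for illustration; real systems ingest billions of such pairs.

```python
# Toy data-preparation step: pair image files with text captions
# and normalize the text. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Example:
    image_path: str
    caption: str

# A tiny stand-in for a web-scale image-caption collection.
raw = [
    ("city_001.jpg", "  A futuristic cityscape at sunset "),
    ("city_002.jpg", "Neon-lit street with flying cars"),
]

def prepare(rows):
    """Strip whitespace and lowercase captions, wrap as Examples."""
    return [Example(path, caption.strip().lower()) for path, caption in rows]

dataset = prepare(raw)
print(len(dataset), "examples;", "first caption:", dataset[0].caption)
```

Real pipelines add many more steps (deduplication, filtering, tokenizing text into numbers), but the shape of the task is the same: turn messy raw pairs into clean, uniform training examples.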

2. Training the Model:

Once the data is gathered, the AI is “trained” to recognize patterns and relationships using different statistical methods (or ‘architectures’ in machine learning terms). For example, AI tools like Runway are trained on extensive film footage to learn how lighting, camera angles, and actor movements combine to create engaging scenes. After training, the AI can generate or manipulate video clips based on learned knowledge and simple inputs, like a scene description. A recent example is the Coca-Cola Christmas commercial, which used tools like Runway to create a two-minute AI-generated video.
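“Training” ultimately means repeatedly adjusting a model’s internal numbers to shrink its prediction error. A real generative model adjusts billions of such numbers; the toy loop below fits a single parameter with gradient descent, purely to illustrate that idea.

```python
# Toy training loop: learn the rule y = 2 * x from examples
# by gradient descent on one parameter w.
data = [(1, 2), (2, 4), (3, 6)]  # (input, target) pairs

w = 0.0           # the model's single "weight", starts uninformed
lr = 0.05         # learning rate: how big each adjustment is

for epoch in range(200):
    # Average gradient of squared error (w*x - y)^2 with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge w to reduce the error

print("learned w =", round(w, 2))  # converges near 2.0
```

After enough passes, the weight settles near 2, i.e. the model has “learned the pattern” in the data; generative models do the same thing at vastly larger scale over images, audio, or text.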

3. Testing and Refining the Model:

Once the model has been trained, it goes through a testing phase. This phase evaluates how well the AI can generate content based on learned patterns. If the output isn’t quite right, the model is fine-tuned. For example, when an artist uses Soundraw, an AI music generator, to compose a piece, the AI may initially produce a melody that doesn’t fit the artist’s requested genre or emotional tone. The AI adjusts its approach through repeated testing and refinement, learning from the artist’s feedback and improving the output over time.
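The test-and-refine cycle can be pictured as a loop that scores the output against what was requested and nudges it until it fits. Everything below, the trait tags, the scoring rule, and the refinement rule, is a made-up stand-in for the far more complex feedback real systems use.

```python
# Toy test-and-refine loop: adjust a track's style tags until they
# match the artist's request. Tags and rules are hypothetical.
requested = {"ambient", "calm", "piano"}     # what the artist asked for
candidate = {"upbeat", "piano", "drums"}     # model's first attempt

def score(cand, target):
    """Fraction of requested traits the candidate already has."""
    return len(cand & target) / len(target)

rounds = 0
while score(candidate, requested) < 1.0:
    missing = requested - candidate          # traits still to add
    wrong = candidate - requested            # traits to remove
    if missing:
        candidate.add(missing.pop())         # fix one missing trait
    if wrong:
        candidate.discard(wrong.pop())       # drop one off-target trait
    rounds += 1

print("refined in", rounds, "rounds:", sorted(candidate))
```

The real mechanics (loss functions, validation sets, human feedback) are more involved, but the loop structure, generate, measure, adjust, repeat, is the same.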

4. Content Generation:

Once fully trained and tested, the model can generate new content based on the learned patterns and the user’s input. For instance, Artbreeder allows artists to generate portraits or landscapes by blending multiple images. An artist can select specific traits—like facial features, color schemes, or abstract art elements—and the AI uses its trained knowledge to create a new image. These AI tools introduce new ways for artists to explore and experiment with different ideas, influencing the creative process in various ways.
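Trait blending of the kind Artbreeder exposes can be pictured as interpolation between two sets of numeric traits. The trait names and values below are invented; real tools interpolate learned ‘latent’ vectors with thousands of dimensions.

```python
# Toy "blend" of two portraits represented as numeric trait vectors.
# Trait names and values are illustrative, not a real API.
portrait_a = {"hair_length": 0.9, "smile": 0.2, "abstract": 0.1}
portrait_b = {"hair_length": 0.3, "smile": 0.8, "abstract": 0.7}

def blend(a, b, t=0.5):
    """Linear interpolation of shared traits; t=0 gives a, t=1 gives b."""
    return {k: round((1 - t) * a[k] + t * b[k], 2) for k in a}

child = blend(portrait_a, portrait_b, t=0.5)
print(child)  # halfway between the two parents on every trait
```

Sliding `t` between 0 and 1 moves the result smoothly from one parent image toward the other, which is essentially the control Artbreeder’s sliders give an artist.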

What Is the WGA Asking?

With this understanding of how generative AI works, let’s turn our attention to the concerns raised by the WGA and how the increasing use of AI is reshaping the creative industries. As copyright holders, studios are responsible for protecting creative works from unauthorized use, including AI training. The fact that studios have not acted against AI companies exploiting their intellectual property raises concerns about the future of artistic labor and ownership in a world increasingly shaped by AI.

At the same time, companies like Promise Studios and ElevenLabs are showcasing how AI can be leveraged in the production process. Promise Studios, for example, collaborates with artists to produce original films and series using generative AI as a creative tool. ElevenLabs generates synthetic voices with the potential to disrupt voice-over work for audiobooks, commercials, and video. The landscape is still evolving, and as AI companies and Hollywood studios continue to negotiate the use of generative AI, the future of AI in entertainment remains uncertain. As AI tools become more accessible, we must advocate for ethical practices and fair compensation for artists.

Looking Ahead

In the next column, we’ll dive into prompt engineering—the art of crafting inputs to guide generative AI tools effectively. We’ll also discuss the latest AI governance efforts in Hollywood and other creative spaces. But it’s not just about challenges—we’ll also explore how artists can harness AI to streamline workflows and automate tedious administrative tasks. Stay tuned for a balanced take on the risks and opportunities ahead.

I’d love to hear your thoughts. This column is meant to be a conversation where we explore AI and its impact on artists together. Feel free to contact me at swaptikchowdhury16@gmail.com with your comments, questions, or suggestions for future topics. I look forward to reviewing your feedback and creating content that is helpful. – Swaptik