A Conversation with Responsible AI Researcher Swaptik Chowdhury, “AI for Artists,” RabbleRouse News, by James Scarborough
Tower of Babel by Pieter Bruegel the Elder - Creative Commons
This interview first appeared on What The Butler Saw, March 21, 2025. Used with permission.
Swaptik Chowdhury’s column “AI for Artists” serves as a critical bridge between rapidly evolving AI technologies and a creative community caught in their crosshairs. His approach is twofold: he explains the technical underpinnings of generative AI and contextualizes them within the real challenges artists face. What makes Chowdhury’s perspective valuable is his dual positioning as both an AI researcher and an advocate for artistic integrity.
The column walks a tightrope between technological enthusiasm and ethical caution. Chowdhury acknowledges AI’s creative possibilities while directly addressing concerns about copyright infringement, job displacement, and the devaluation of authentic artistic expression. His reference to Adorno’s philosophy – that art should challenge rather than conform to mass expectations – reveals a deeper concern: can AI-generated work truly embody the human resistance and vision central to meaningful artistic creation?
Most telling is his coverage of the Paris AI Summit, where French President Macron’s casual embrace of deepfakes contrasts sharply with the Writers Guild of America’s (WGA) urgent call for intellectual property protection. This juxtaposition highlights the disconnect between political and economic interests driving AI deregulation and the labor concerns of working artists. Chowdhury’s commitment to “co-create knowledge” with artists positions his column not merely as informational but as a tool for advocacy during a pivotal moment when creative industries are being fundamentally reshaped by technological forces that many creators barely understand but must increasingly confront.
Below is an email conversation between Swaptik Chowdhury and James Scarborough.
JS: Your column aims to “demystify” generative AI for artists. What sparked your interest in the intersection of AI technology and artistic practice? What gaps did you identify in how these technologies were being explained to creative communities?
SC: Over the last couple of years, I’ve been lucky to work closely with artists and have made some great friends who are in the industry. The pace of AI development, especially since the launch of ChatGPT, has been nothing short of exponential. In my conversations with artist friends about different AI developments, I noticed many felt overwhelmed by the sheer volume of information, with no clear way of discerning what was useful. I worried this could lead to disengagement from a technology that could massively impact the creative world. That concern became my primary motivation to explore the intersection of AI and artistic practice.
From my research on responsible AI development, I’ve also realized that the technical language surrounding AI often makes it feel inaccessible to those without specialized knowledge. This creates a barrier for artists, preventing them from actively participating as stakeholders in the adoption and evolution of AI. My goal is to break down these complexities and make AI more approachable for creative communities.
Creative Workflows
JS: You mention several AI tools adopted across different artistic domains: ChatGPT for writing, Midjourney for visuals, and Suno for music. Could you elaborate on which specific tools you’ve seen being most effectively integrated into professional creative workflows?
SC: I don’t think there is a universal “best” AI tool for artists; it truly boils down to the specific task at hand and what the artist is trying to achieve, as well as the fine print of using that particular software. For example, if an artist is looking for highly artistic and visually appealing image generation, Midjourney is usually a great option, but it comes with the caveat that, by default, your creations are publicly viewable. But if seamless integration with existing workflows is key, especially if they’re already using Adobe products, then Adobe Firefly might be a more practical choice, as it’s designed to work directly within Photoshop and Express.
The same principle applies to image editing. Adobe Photoshop, with its AI features, is an industry standard for a reason. It offers a comprehensive set of tools, but it comes with a subscription cost. For photographers who want an AI-powered approach specifically for photo enhancement, Luminar Neo might be more appealing. For quick, online edits, Pixlr offers accessible AI tools without even requiring a signup for basic use.
Even in music composition, the “best” tool depends on the goal. AIVA is known for generating emotionally resonant music, particularly in classical and orchestral styles, while Mubert focuses on creating personalized, royalty-free music suitable for various digital content. And if an artist wants to generate complete songs with lyrics, Suno AI is a unique option, though it’s currently facing some copyright concerns.
So, it’s about understanding the specific needs of the creative’s workflow, the particular tasks they need to accomplish, and carefully considering the terms of use and any potential limitations of the software.
Deepfakes
JS: Your reporting on the Paris AI Summit revealed a concerning normalization of deepfakes. How do you evaluate the implications of President Macron’s lighthearted presentation of deepfake technology for visual artists whose likenesses or stylistic approaches might be appropriated?
SC: Like any tool, deepfake technology can be used for innovation or for more harmful purposes. Initially, deepfakes gained notoriety in 2017 when the technology was used to create non-consensual pornographic images of actress Gal Gadot. Since then, the issue has only grown. Just this year, a woman in France lost $850,000 after scammers used an AI-generated image of Brad Pitt to deceive her. Similarly, Steve Harvey and Scarlett Johansson have recently spoken out about the need for stricter regulations on computer-generated synthetic images and videos (deepfakes) to curb their more sinister applications.
Legislators are also stepping in. Congress is working on policies like the No Fakes Act and the Take It Down Act, which have bipartisan support. However, beyond legal measures, there’s a pressing need for greater public awareness and education about AI-generated content. People need the tools to recognize and understand different types of computer-generated images.
I don’t think President Macron’s intention was to normalize the harmful use of deepfakes but rather to highlight their potential positive applications and shift public perception. That said, any effort to showcase the “good” side of deepfake technology must go hand in hand with public education and awareness campaigns to ensure people are informed about both the risks and benefits.
Fair Use?
JS: You highlight the tension between copyright protections and AI training methods. How do you respond to the claim made by some AI developers that their use of artists’ work for training constitutes “fair use” rather than appropriation?
SC: I’m not a lawyer, but as I understand it, fair use is a part of copyright law that allows copyrighted material to be used without permission in certain cases, like teaching, research, or news reporting. AI developers argue that when they train their models, they’re not simply copying artwork—they’re teaching AI to recognize styles and patterns to create something entirely new. They compare it to how human artists learn by studying the work of others. They also emphasize the potential benefits of this technology for everyone.
But for artists, it’s a different story. They see their work being used to generate images that can compete with them, potentially affecting their income. Many also point out that AI can be trained to mimic specific artists’ styles, which feels like a direct rip-off. Just because AI eventually creates something new doesn’t mean the initial use of copyrighted work was fair—especially when artists never gave permission for their work to be used in the first place.
Right now, multiple lawsuits are trying to sort all of this out. The key issue is transformative use—AI developers claim their process is transformative because they turn images into code and generate something new. But many artists argue that AI isn’t truly transforming the original work; it’s just imitating styles. Beyond the legal debate, there’s also a real economic concern that AI-generated art could disrupt artists’ incomes and the broader art market.
A Crucial Conversation
JS: The WGA has called for production companies to challenge AI firms using studio intellectual property. What specific protective measures do you believe would most effectively safeguard creative workers’ rights while still allowing for technological innovation?
SC: I believe that a participatory approach and a broad stakeholder analysis should be the first steps in integrating AI into critical areas of the creative industry. Right now, there’s a clear asymmetry in decision-making when it comes to adopting AI in different aspects of the creative process. To address this, we need to engage diverse stakeholders, ensuring creative professionals have a voice in shaping how AI is used.
Organizations like SAG-AFTRA and the WGA have a crucial role to play in this conversation. They can facilitate stakeholder exercises, such as world-building sessions, to better understand how their members envision AI’s role in their art and where they see it adding value.
Additionally, they can hold educational sessions to equip members with knowledge about different AI technologies. These initiatives would ensure that creatives are well-prepared to advocate for their interests in discussions about AI’s role in the industry.
Through my columns, my goal is to spark these conversations and demystify AI so that creatives feel confident engaging in these discussions and shaping the future of their industries.
It Comes Down To Equity
JS: Both of your columns reference examples of AI being integrated into high-profile commercial projects, such as Coca-Cola’s Christmas advertisements. How do these implementations differ from the way smaller independent artists might utilize AI tools?
SC: To really understand the difference, we need to look at how AI is implemented in high-profile commercial projects versus how independent artists use it. For me, it comes down to equity—who benefits from AI in the creative process, and who bears the cost?
In big commercial projects, AI is often used to streamline production, allowing for faster turnarounds and more iterations. Companies like Coca-Cola can use AI to generate high-quality visuals quickly, saving time and resources. But the question is: where do those savings go? Are they reinvested in creative workers, or do they just lead to job losses and reduced royalties for artists who would have traditionally been part of the process?
For independent artists, AI can be a game-changer in a different way. It allows them to experiment, create more efficiently, and compete with larger studios without needing huge budgets. So, the key difference isn’t just how AI is used, but who benefits from it.
Stronger Copyright Protections Are Key
JS: You’ve explained that generative AI works by creating “mathematical representations” of vast datasets. For artists concerned about their work being incorporated into these datasets, what technical or policy solutions might allow them greater control over how their creative output is used?
SC: The biggest concern for artists is that their work is being used to train AI models without their knowledge or consent. Since generative AI works by creating mathematical representations of massive datasets, once an artwork is part of the training data, it’s not as simple as just “removing” it.
On the technical side, solutions include opt-out mechanisms, such as the noai and noimageai metadata tags, which signal that a piece of content shouldn’t be scraped for AI training. Another approach is digital watermarking, where artists embed invisible markers in their work so its use can be tracked.
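These opt-out directives are typically delivered either as a robots meta tag inside a page’s HTML or as an X-Robots-Tag HTTP response header. As a minimal sketch of how a well-behaved scraper might honor them (assuming the noai/noimageai directive names popularized by art-hosting platforms; this is an illustration, not any specific crawler’s implementation):

```python
from html.parser import HTMLParser


class RobotsDirectiveParser(HTMLParser):
    """Collect directives from <meta name="robots" content="..."> tags."""

    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            for token in attrs.get("content", "").lower().split(","):
                self.directives.add(token.strip())


def allows_ai_training(html_text, header_value=""):
    """Return False if the page opts out of AI training via a 'noai' or
    'noimageai' directive, found either in a robots meta tag or in an
    X-Robots-Tag response header value."""
    parser = RobotsDirectiveParser()
    parser.feed(html_text)
    header_tokens = {t.strip().lower() for t in header_value.split(",")}
    opt_out = {"noai", "noimageai"}
    return not (opt_out & (parser.directives | header_tokens))


# Example: a page carrying the opt-out meta tag
page = '<html><head><meta name="robots" content="noai, noimageai"></head></html>'
print(allows_ai_training(page))  # → False
```

The catch, of course, is that these tags are purely advisory: they only protect artists to the extent that scrapers choose to check for and respect them, which is why policy measures matter alongside the technical ones.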
On the policy side, stronger copyright protections are key. Some proposed regulations, like the No Fakes Act in the U.S., aim to give artists legal recourse if their work is used to train AI without consent. Collective bargaining efforts, such as recent actions from the WGA and SAG-AFTRA, could also push for industry standards that require transparency and compensation when artists’ works are included in training datasets.
Nuance and Context
JS: Looking beyond immediate concerns about job displacement or copyright, how do you envision the long-term relationship between human creativity and AI technologies evolving over the next decade?
SC: This is really the fundamental philosophical question—what is art, and what role does it play in our lives? Personally, I see AI as a powerful “effort equalizer.” It can supercharge the execution of tasks on a massive scale. But at its core, most AI architectures function by identifying common patterns within a dataset, patterns that can then be used to process new data or even generate new content that reflects those patterns. As such, AI has a kind of flattening effect, where nuance and context are often sacrificed in favor of finding a broad “average” that represents most, but not all, of the data.
However, for me, art is all about nuance and context. When I look at The Kiss by Gustav Klimt or The Tower of Babel by Pieter Bruegel the Elder, I’m not just taking in the aesthetic of the painting. I’m also thinking about why the artist chose a particular color scheme, who funded the project and how that influenced the work, how the subjects in the painting were impacted by the time period, how the artist depicted light, and how the social and political climate—wars, plagues, major cultural events—shaped the final piece. These layers of meaning, these deeply human decisions, are what make art exciting for me.
As AI technology continues to evolve, so will the debate about its long-term relationship with human creativity. But at the heart of it, I think the real question is: What is art meant to do for us? And that’s something only individuals can answer for themselves.