Notizie IA

News and analysis on Artificial Intelligence

When Artificial Intelligence Makes Music: The Phenomenon of The Velvet Sundown and Iam Between Technology and Ethics

by Dario Ferrero (VerbaniaNotizie.it)

Imagine discovering that your favorite song, the one that has accompanied you for months, the one you shared with friends and that has garnered millions of listens, was never played by human hands. There is no singer who performed it with their voice, no guitarist who found those perfect chords, no musicians who gathered in a studio to create that sonic alchemy. Everything was born from algorithms, neural networks, and artificial intelligence.

The Introduction of a New Paradigm

Welcome to the era of synthetic music, where the line between human and artificial creativity is becoming increasingly blurred, and where two emblematic cases are sparking a global debate: The Velvet Sundown, the mysterious band that conquered Spotify by hiding its true nature, and Iam, the first Italian singer entirely generated by AI.

The revolution is silent but unstoppable. While we are still debating whether artificial intelligence can truly be creative, millions of people are already listening to, sharing, and even falling in love with tracks created entirely by machines. The phenomenon is no longer confined to laboratory experiments or technological curiosities: it has entered the charts, personal playlists, and the soundtrack of our daily lives. This raises profound questions that go far beyond simple technological innovation. What does it mean to be "authentic" in art? Can a machine express emotions and convey them through music? And above all: are we ready to redefine the very concept of artistic creativity?

The story of The Velvet Sundown and Iam represents much more than two successful technological experiments. They are the symbol of an epochal transformation that is sweeping the music industry, with implications that touch on economic, cultural, and ethical aspects. On the one hand, we have the democratization of music creation, and on the other, the risk of standardization that could impoverish artistic diversity. On one hand, the possibility for anyone to bring their musical ideas to life without years of studying instruments, and on the other, the fear that professional musicians could be gradually replaced by increasingly sophisticated algorithms.

The Velvet Sundown Phenomenon: When AI Conquers Spotify

The story of The Velvet Sundown begins like a mystery worthy of a technological thriller. In mid-2025, this apparently unknown band starts releasing tracks that immediately capture the attention of listeners. The sound is immersive, the melodies catchy, the lyrics deep and evocative. In a matter of weeks, their numbers on Spotify grow exponentially, passing 500,000 monthly listeners and attracting the attention of playlist curators and specialized media.

What is initially striking is the professional quality of the productions and the stylistic coherence that runs through their entire catalog. The songs seem to be born from a mature artistic vision, with sophisticated arrangements and a production curated down to the smallest detail. Fans begin to form an online community, discussing the meaning of the lyrics and sharing interpretations on social media. No one suspects that behind those tracks there are no flesh-and-blood musicians.

The revelation comes gradually, through a carefully orchestrated communication strategy. First the suspicions, then increasingly evident clues, and finally the full confession: The Velvet Sundown is a project entirely based on artificial intelligence, an "artistic provocation" presented as a synthetic music project guided by human creative direction. The technology used is mainly Suno AI, a platform whose stated mission is "building a future where anyone can make great music. No instrument needed, just imagination."

Image from the Instagram profile thevelvetsundownband

The Velvet Sundown case has set a precedent for several reasons. First of all, it has shown that AI-generated music can compete in quality with that produced by human artists, at least from the point of view of casual listening. Suno, founded in 2022, uses neural networks trained on millions of songs of all genres, allowing for the creation of compositions that respect musical conventions while maintaining elements of originality.

But perhaps the most interesting aspect of the project is its conceptual dimension. The creators have transformed what could have been a simple technological experiment into an artistic reflection on authenticity and the perception of music in the digital age. The band has become a "mirror" that reflects our prejudices and our expectations regarding creativity. How many of those 500,000 monthly listeners would have continued to appreciate the music if they had known from the beginning of its artificial origin?

The evolution of The Velvet Sundown's communication strategy has been particularly refined. Initially, when direct questions were asked about the nature of the project, the managers denied or evaded the topic of artificial intelligence. This "hiding" phase lasted long enough to allow the music to find its audience based solely on its intrinsic value. Only when the listening base had consolidated did the revelation come, accompanied by a broader reflection on the meaning of authenticity in contemporary music.

The success of The Velvet Sundown has also highlighted the creative potential of current AI music tools. In March 2024, Suno released version V3 for all users, allowing the creation of 4-minute tracks with free accounts, while version 4.5+ introduces "professional audio production tools never seen before." This rapid technological evolution is making the creation of professional-quality music increasingly accessible.

Iam: The First Italian AI Singer

Parallel to the international phenomenon of The Velvet Sundown, Italy has seen the birth of its first emblematic case of a completely artificial musical artist. Iam, the virtual singer created by director Claudio Zagarini in collaboration with the Artificial Intelligence Italian Creators (AIIC) collective, represents a different but equally significant approach to the integration of AI in music.

The Iam project was born in April 2025 with a declaredly experimental objective: to create not only artificial music, but a true digital public figure. Unlike The Velvet Sundown, which maintained a mysterious profile, Iam was presented from the beginning as an AI artist, complete with interviews, a presence on social media, and a personality defined through advanced conversation algorithms.

The debut single "Pazzesco" immediately captured the attention of the Italian media, not only for the quality of the production but also for the way it was presented to the public. The track mixes contemporary pop sounds with electronic influences, creating a sound that is familiar yet innovative. Iam's voice, generated through advanced vocal synthesis systems, has distinctive characteristics that make it recognizable and memorable.

What makes the Iam project unique is the narrative approach that accompanies it. The virtual singer has been given a biography, musical preferences, artistic opinions, and even personal quirks that emerge during interviews. This has created a curious phenomenon: the public finds itself interacting with an artificial intelligence that not only creates music, but can also talk about it, explain its creative choices, and answer journalists' questions.

Image from the YouTube video "Pazzesco"

The media impact of Iam has been significant precisely because of its ability to "exist" as a public figure. The interviews given by the AI singer have aroused conflicting reactions: on the one hand, fascination with the technological capabilities demonstrated, and on the other, unease at the naturalness with which the artificial presents itself as authentic. Claudio Zagarini and the AIIC team have stated that the goal is not to deceive the public, but to explore the expressive possibilities offered by new technologies and to stimulate a critical reflection on the future of entertainment.

The comparison with the traditional Italian music scene has highlighted some interesting aspects. While that scene is often characterized by a strong attachment to tradition and a certain resistance to radical innovation, the reception given to Iam has shown an unexpected openness to technological experimentation. Critics and industry professionals have been divided between enthusiastic supporters of innovation and conservatives worried about the impact on human creativity.

The Iam project has also raised specific questions related to the Italian cultural context. How does an artificial artist fit into a musical tradition strongly linked to territorial identity and lived experience? Can an algorithm capture and reinterpret the cultural nuances that make Italian music unique? These questions have become central to the debate that accompanied the launch of the project.

Technical and Creative Aspects: How AI Music is Born

To fully understand the phenomenon of music generated by artificial intelligence, it is necessary to explore the technologies that make these results possible. The process of creating music through AI involves several sophisticated techniques, from text-to-audio generation to advanced vocal synthesis, through the analysis and recombination of existing musical patterns.

Suno AI is a generative artificial intelligence music creation platform, designed to allow users to generate realistic songs that incorporate both voice and instrumentation based on text prompts. The process begins with a textual description of the desired track: musical genre, atmosphere, lyrical themes, preferred instrumentation. The algorithm analyzes these inputs and generates a complete composition that includes melodies, harmonies, rhythms and, when requested, lyrics and vocal performances.
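To make the workflow concrete, the prompt-driven process described above can be sketched in a few lines of Python. Everything here is hypothetical: the `TrackBrief` class and its fields do not correspond to any real Suno API (real platforms accept free-text prompts); the sketch only illustrates how a structured description of genre, mood, themes, and instrumentation might be flattened into the single text prompt a text-to-music model consumes.

```python
from dataclasses import dataclass, field

@dataclass
class TrackBrief:
    """Hypothetical structured description of the desired track.

    Field names are illustrative only; they are not a real
    text-to-music API schema."""
    genre: str
    mood: str
    themes: list = field(default_factory=list)
    instrumentation: list = field(default_factory=list)

    def to_prompt(self) -> str:
        # Flatten the structured brief into the free-text prompt
        # a text-to-music model would actually receive.
        parts = [f"{self.mood} {self.genre} song"]
        if self.themes:
            parts.append("about " + ", ".join(self.themes))
        if self.instrumentation:
            parts.append("featuring " + ", ".join(self.instrumentation))
        return ", ".join(parts)

brief = TrackBrief(
    genre="dream pop",
    mood="melancholic",
    themes=["memory", "summer evenings"],
    instrumentation=["reverb-heavy guitar", "analog synth"],
)
# → "melancholic dream pop song, about memory, summer evenings,
#    featuring reverb-heavy guitar, analog synth"
print(brief.to_prompt())
```

The point of the structure is that each field maps onto one of the inputs the article lists: musical genre, atmosphere, lyrical themes, and preferred instrumentation.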

The underlying technology is based on deep neural networks trained on huge music databases. Suno is different from other AI music generators because it can create complete songs with singing, lyrics, and even an album cover. This training allows the AI to recognize musical patterns, compositional structures, and stylistic conventions typical of different musical genres, and then recombine them in new and creatively interesting ways.

A particularly advanced aspect is the ability to generate convincing vocal performances. The vocal synthesis used in these systems goes far beyond simple text-to-speech conversion. The algorithms are able to interpret the emotional content of the lyrics and to modulate timbre, intonation, dynamics, and vocal articulation accordingly. The result is performances that convey emotions and expressive nuances comparable to those of human singers.

An advanced feature allows users to take real-world sounds—such as ambient noise, spoken words, or simple rhythms—and transform them into complete musical compositions. This functionality opens up new creative possibilities, allowing any sound input to be transformed into structured musical material.

However, it is important to emphasize that behind every AI-generated track there is always an element of "human creative direction." In the case of The Velvet Sundown and Iam, the final results are the fruit of an iterative process in which human operators guide the AI through increasingly specific prompts, select the best results, and often process them further through traditional music production software.

This raises a fundamental question: how "artificial" is this music really? The creative process, although mediated by technology, maintains elements of intuition, taste, and artistic choice that are typically human. The creators of these projects describe their role as that of "creative directors" who use AI as a tool, albeit a very advanced one, to realize their artistic visions.

The current limitations of the technology are still evident in several areas. Narrative coherence in long texts, the ability to create complex musical progressions, and the interpretation of specific cultural nuances still represent significant challenges. However, the evolution is very rapid: at the end of 2024, Suno launched a promotional campaign with Timbaland, one of the most influential hip hop producers, signaling a growing recognition from the professional music industry.

The Sector's Reactions: Voices from the Music World

The emergence of projects like The Velvet Sundown and Iam has triggered conflicting reactions within the music industry, revealing deep divisions between those who see AI as a revolutionary opportunity and those who consider it an existential threat to musical art.

The reactions of the artists have polarized along predictable but no less significant lines. Musicians and creatives argue that AI-created music lacks the essential elements of human creativity, such as emotion, lived experience, and cultural context. Grammy-winning composer Hans Zimmer, who has experimented with AI-assisted music, argues that AI cannot replicate the emotional depth that comes from direct human experience.

On the other end of the spectrum, a growing number of artists are embracing these technologies as creative tools. Some emerging musicians have begun to use AI as a creative collaborator, generating initial ideas that they then develop and refine through traditional processes. This hybrid approach is creating a new creative paradigm that combines algorithmic efficiency with human artistic intuition.

Streaming platforms are facing unprecedented challenges. Spotify, Apple Music, and other industry majors must develop policies to manage AI-generated content, balancing technological innovation with the protection of the interests of traditional artists. Some platforms are experimenting with specific labels for AI-generated content, while others prefer to maintain a neutral approach, letting the market decide.

Hundreds of artists have signed an open letter warning against the "predatory" use of AI in music, asking technology companies not to use artificial intelligence to violate the rights of human artists. This initiative, promoted by the Artist Rights Alliance, highlights the growing concerns about the economic impact of AI on the artistic community.

The public, for its part, shows complex and often contradictory reactions. While many listeners appreciate AI music when they do not know its origin, the revelation of its artificial nature often leads to a critical re-evaluation. However, a growing part of the public, especially among the younger generations, shows greater openness to technological innovation in the artistic field.

Music critics find themselves in a particularly delicate position. How to artistically evaluate a track that does not come from traditional human creativity? What criteria to use to judge the authenticity and aesthetic value of algorithmically generated music? Some specialized publications have begun to develop new critical frameworks specifically designed for the era of musical AI.

Marketing experts in the music industry identify three main conclusions: the ethical use of AI-generated music is scalable, we are in the "Wild West" of this technology where the decisions made now will set the precedents for the future, and creating music with AI can be fun. This pragmatic perspective highlights how the sector is gradually adapting to the new technological reality.

The Ethical Dimension: Authenticity, Rights, and Creativity

The ethical implications of musical AI raise fundamental questions that go to the very heart of artistic creativity and the entertainment industry. The case of The Velvet Sundown, with its initial strategy of hiding the artificial origin of the music, has highlighted the problem of transparency towards the public. Is it ethical to allow listeners to develop emotional connections with artificial music without their being aware of it?

This in turn raises questions about the authenticity of the art form. Some argue that AI-generated music lacks the emotional depth and personal expression that human-created music possesses. Understandable as it is, this position opens up deeper questions: what defines authenticity in an era in which most commercial music is already heavily mediated by technology?

The concept of creativity is at the center of the ethical debate. Is creativity an exclusively human prerogative or can it be replicated and even surpassed by artificial systems? Projects like Iam and The Velvet Sundown suggest that AI can produce creative results that resonate emotionally with the public, regardless of their non-human origin. This could indicate that creativity is more a process of innovative recombination of existing elements rather than a mysterious divine spark exclusively human.

Copyright and author's rights issues represent a legal and ethical minefield. AI music developers must prioritize ethical licensing practices and collaborate closely with composers and copyright owners. But how are the rights to music generated by algorithms trained on millions of existing tracks defined? Who owns the copyright of an AI song: the programmer of the algorithm, the user who provided the prompt, or no one?

A US court ruling has established that completely AI-generated compositions (where an artist simply presses a button and lets the AI create a song from start to finish) cannot be copyrighted. This puts purely AI-generated tracks in the public domain, making them freely available for use or reproduction by anyone. The crucial distinction is the level of human input: the United States Copyright Office has established that "the human touch makes all the difference."

Tennessee has passed the ELVIS Act (Ensuring Likeness Voice and Image Security Act), the first US law that protects musicians from the unauthorized use of artificial intelligence, updating the state's Protection of Personal Rights law to include protections for voice and image. Since the law came into effect on July 1, 2024, it has been forbidden to use AI to imitate an artist's voice without permission. But the technology, skilled in copying the voices and styles of real artists, moves too fast for a single law to keep up. This race between technological innovation and legal regulation highlights the need for more agile and adaptable ethical and legal frameworks.

The impact on professional musicians is perhaps the most immediate and concrete ethical issue. Research identifies two urgent problems: a growing surplus of creative workers and the falling cost of creative labor. If AI can produce commercial-quality music at marginal cost, what economic future awaits musicians, composers, and producers?

However, not all scenarios are necessarily negative. Some experts propose a man-machine collaboration model where AI amplifies human creative abilities rather than replacing them. In this scenario, musicians could use AI to explore new creative directions, overcome compositional blocks, or carry out projects that would otherwise require prohibitive resources.

Organizations like Sound Ethics are "embracing AI in the music industry by protecting and supporting artists, ensuring our future careers. Through partnerships with educational institutions, legal experts, and stakeholders, we are establishing new standards and promoting policies that protect artists' rights."

The democratization of music creation presents both opportunities and ethical risks. On the one hand, AI can allow people without formal musical training to express their creativity and reach a global audience. On the other hand, this ease of access could lead to a saturation of the music market with content of variable quality, making it even more difficult for artists to emerge.

The issue of cultural diversity is just as complex. AI music algorithms are trained mainly on commercial Western music, risking the perpetuation of cultural biases and the homogenization of global musical expression. How to ensure that AI does not become a tool of cultural standardization but maintains and celebrates the diversity of world musical traditions?

Future Scenarios: Where Is AI Music Headed?

The technological evolution in the field of musical AI is proceeding at an accelerated pace, suggesting future scenarios that could radically transform the entertainment industry in the coming years. The forecasts of industry experts outline possibilities that are as fascinating as they are disturbing for the future of music creation.

From a purely technological point of view, the expected improvements are significant. By 2026-2027, it is likely that we will see musical AI systems capable of creating extended-duration compositions while maintaining narrative coherence and thematic development. The integration with virtual and augmented reality technologies could allow for immersive musical experiences where AI generates soundtracks in real time based on the user's emotions and behaviors.

The recent evolution of Suno, with partnerships with leading producers like Timbaland, suggests a transition from a tool for amateurs to a professional platform. This trend could lead to the emergence of new professional figures: "AI creative directors" who specialize in algorithmic guidance for commercial music productions.

The optimistic scenario sees AI as an amplifier of human creativity. In this vision, musicians and producers will use artificial intelligence as an inexhaustible collaborator, capable of suggesting infinite variations, exploring unexplored sound territories, and creating complex arrangements in record time. Small record labels could compete with the majors thanks to the democratization of production tools. Independent artists could create complete albums with limited budgets, focusing on the creative vision while AI manages the most complex technical aspects.

In this positive scenario, a new creative economy would emerge where value shifts from technical ability to artistic vision and the ability to connect emotionally with the public. Human artists could specialize in live performances, conceptual narratives, and artistic experiences that AI cannot replicate, while artificial intelligence manages mass production and the creation of personalized content.

However, the pessimistic scenario presents considerable risks. The ease of production could lead to a saturation of the music market with algorithmic content of average quality, making it increasingly difficult for human artists to emerge and maintain economic sustainability. Algorithmic standardization could homogenize musical tastes, reducing the stylistic and cultural diversity that has always characterized musical art.

A particular risk is represented by the possible loss of connection between artist and audience. If music becomes primarily an algorithmic product optimized for maximum engagement, we could lose that dimension of vulnerability and human authenticity that has always been the heart of the musical experience. Music could be transformed from a form of artistic expression to a consumer product optimized for recommendation algorithms.

The issue of regulation will become crucial. It will be necessary to develop legal frameworks that balance technological innovation with the protection of artists' rights and transparency towards consumers. Some countries may require mandatory labeling of AI-generated content, while others may adopt more liberal approaches. In the United States, the Generative AI Copyright Disclosure Act of 2024 has been proposed, which would require companies that develop generative AI to disclose the datasets used for training. Record labels that hold contracts with artists could take legal action against parties that infringe on rights, although these remedies are currently geographically limited.

Music education will necessarily have to evolve. Music schools and conservatories will have to integrate AI into their curricula, teaching students not only to play instruments and compose, but also to collaborate effectively with intelligent systems. New disciplines will emerge such as "algorithmic creative direction" and "AI music experience engineering."

The business model of the music industry will undergo profound transformations. "Music on demand" services could emerge where users commission personalized tracks for specific occasions. Playlists could become real-time algorithmic compositions, dynamically adapting to the listener's mood and activities.

Conclusions: The Inevitability of Change

The analysis of the cases of The Velvet Sundown and Iam reveals that the integration of artificial intelligence in music is no longer a future prospect, but a present reality that is already redefining the creative and commercial paradigms of the industry. These pioneering projects have shown that AI can produce music that is not only technically competent, but also emotionally engaging and commercially viable.

The true lesson that emerges from this silent revolution is that the value of music does not lie exclusively in its human origin, but in its ability to connect with the emotional experience of listeners. The Velvet Sundown won over half a million monthly listeners before anyone discovered its artificial nature. Iam has generated passionate discussions in the Italian media not only as a technological curiosity, but as an authentic artistic phenomenon.

However, this transformation raises ethical questions that require thoughtful answers. Transparency towards the public, the protection of the rights of human artists, the preservation of cultural diversity, and the economic sustainability of the music sector are challenges that require the coordinated commitment of technologists, artists, regulators, and civil society.

The ongoing evolution suggests that the future of music will not necessarily be a replacement of the human with the artificial, but rather a re-articulation of creative roles where human and artificial intelligence collaborate in new and unprecedented ways. The artists of the future may not be replaced by AI, but they will have to learn to work with it, using it as a tool to amplify their creative abilities.

The democratization of music creation offered by AI presents extraordinary opportunities for creative expression, allowing people without formal musical training to bring their artistic visions to life. At the same time, this accessibility requires new criteria for evaluating quality and originality in a world where anyone can produce professional music with a few clicks.

Change is inevitable, but its direction is not predetermined. The choices we make today - as a society, as an industry, as consumers - will determine whether musical AI will become a tool of creative liberation or commercial standardization, whether it will broaden artistic diversity or reduce it, whether it will support human artists or replace them.

The story of The Velvet Sundown and Iam has just begun. They represent the first chapters of a transformation that will continue to evolve in the coming years, challenging our conceptions of creativity, authenticity, and artistic value. Their success reminds us that, ultimately, the music that matters is the one that manages to touch the human soul, regardless of its origin. The challenge for the future will be to ensure that this ability for emotional connection is not lost in the transition to the era of artificial intelligence music.

In the meantime, the "foundations of copyright in music as we know it" continue to be shaken by the rapid evolution of AI, at a time when the music industry is at the center of the debate on global intellectual property. The game has just begun, and the rules of the game are still taking shape.

Digital Resurrection: Making Those Who Are Gone Sing

Given the topic, I thought it fitting to place this section after the conclusions; I imagine you will understand the irony. I would like to mention the latest "frontier" of musical artificial intelligence: digitally "resurrecting" artists who have been dead for decades and making them sing new songs.

The case exploded just a few days ago when a song titled "Together" appeared on the official Spotify page of Blaze Foley, a country singer-songwriter murdered in 1989, followed by "Happened To You" attributed to Guy Clark, a Grammy winner who passed away in 2016.

The tracks, marked with the copyright of a mysterious "Syntax Error" and uploaded via TikTok's SoundOn platform without authorization from heirs or labels, sparked fierce controversy before being removed.

Craig McDonald, owner of the label that manages Foley's catalog, accidentally discovered the unauthorized publication, raising disturbing questions about the ethical limits of this technology.

If we are already discussing transparency and authenticity with "living and breathing" AI artists, imagine the implications when it comes to making the dead sing without their consent. Evidently, in the era of artificial intelligence, not even death is a limit to a recording career.

But this is another story that would deserve a separate article, perhaps titled "When AI Meets the Afterlife: A Practical Guide to Digital Necromancy."