While the large language models that generate text have exploded over the past three years, a different type of AI, based on so-called diffusion models, is having an outsized impact on creative domains.
By transforming random noise into coherent patterns, diffusion models can generate new images, videos, or speech, guided by text prompts or other input data. The best can produce results that are nearly indistinguishable from human work, as well as weird, surreal output that feels distinctly nonhuman.
Now these models are moving into a creative field that is arguably more vulnerable to disruption than any other: music. AI music models can now create songs capable of eliciting real emotional responses, offering a stark example of how difficult it is becoming to define authorship and originality in the age of AI. Read the full story.
—James O’Donnell
This story is from the next edition of our print magazine, which is all about how technology is changing creativity. Subscribe now to read it and to get a copy of the magazine when it lands!
A small American city is experimenting with AI to find out what residents want
Bowling Green, Kentucky, is home to 75,000 residents, and it recently concluded an experiment in using AI for democracy: Can an online polling platform, powered by machine learning, capture what residents want to happen in their city?
After a month of advertising, the Pol.is portal launched in February. Residents could go to the website and anonymously submit an idea (in fewer than 140 characters) for what a 25-year plan for their city should include. They could also vote on whether they agreed or disagreed with other ideas.
But some researchers question whether soliciting input in this way is a reliable means of understanding what a community wants. Read the full story.
—James O’Donnell