Can artificial intelligence generate live therapeutic music?
How might artificial intelligence affect live therapeutic music? How can Certified Music Practitioners prepare?
A new way of interacting with artificial intelligence (AI) has dominated headlines in recent months: we can now generate pictures, videos, and even music just by typing a description of what we want to see and hear.
That means we might be quickly approaching a future where we can say, “Hey Siri, play me music to make me feel better.”
How can a computer make therapeutic music?
The latest generation of AI models can analyze patterns in large amounts of data, propose answers based on those patterns, and learn from their mistakes.
For example, an AI can be trained to detect breast cancer by being shown thousands of mammograms that a human has already determined show signs of breast cancer. Then the AI is shown thousands of mammograms that a human knows are cancer free. The AI looks for patterns that differentiate the cancerous images from the healthy ones. Humans then test the system, telling the AI when it gets an answer right or wrong, until it performs as well as or better than a human mammogram reader.
In music, AI can be given millions of sound samples that humans have already labeled with all of the elements of music each piece represents. (You might have experienced something like this if you use the music service Pandora.) The AI can take every sample marked “blues,” analyze what patterns are common, and thus learn the blues. The AI can take every sample marked “piano,” determine what a piano sounds like, and thus learn the piano. A human can type “play blues piano” and give the result a thumbs up or thumbs down, helping the AI to refine its blues piano output.
This is similar to how many humans learn skills or crafts. Certified Music Practitioners might find it reminiscent of their own training to play live therapeutic music and modify it on the fly according to feedback they observe from patient reactions.
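For the curious, the label-and-refine process described above can be sketched as a toy program. This is only an illustration of the idea, not how any real music AI works; every name and musical feature in it is made up for the example:

```python
from collections import defaultdict

class ToyMusicLabeler:
    """Toy illustration of learning from human-labeled examples:
    count which musical features co-occur with each label, then
    score new feature sets against those counts. Thumbs-up/down
    feedback strengthens or weakens the learned associations."""

    def __init__(self):
        # counts[label][feature] = how strongly feature is tied to label
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, features, label):
        # A human has already labeled this sample (e.g. "blues").
        for f in features:
            self.counts[label][f] += 1

    def feedback(self, features, label, thumbs_up):
        # Human feedback refines the association after training.
        delta = 1 if thumbs_up else -1
        for f in features:
            self.counts[label][f] += delta

    def score(self, features, label):
        # Higher score = this set of features looks more like the label.
        return sum(self.counts[label][f] for f in features)

labeler = ToyMusicLabeler()
labeler.train(["twelve-bar form", "blue notes", "shuffle rhythm"], "blues")
labeler.train(["blue notes", "call and response"], "blues")
labeler.train(["waltz meter", "soft dynamics"], "lullaby")

print(labeler.score(["blue notes", "shuffle rhythm"], "blues"))    # 3
print(labeler.score(["blue notes", "shuffle rhythm"], "lullaby"))  # 0
```

Real systems use neural networks over millions of audio samples rather than simple counts, but the loop is the same: humans label, the system finds patterns, and feedback nudges the patterns toward better output.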
What does AI-generated music sound like?
Here are some examples created with Suno.ai:
• The request was for “Lullaby. Soothing. Acoustic piano. Very slow tempo. 3/4 meter. Aeolian mode.” The AI created Sleepy Melodies.
• “Lullaby. Soothing. Harp. Very slow tempo. Three beats per measure. Aeolian mode,” became Dream Serenade.
• And “Soothing lullaby. Acoustic guitar. Very slow rhythm,” turned into Sweet Dreams.
No, the AI did not follow all my instructions. No, the sound quality is not the same as a live, acoustic instrument.
But it’s available 24/7, and it’s free.
Is having AI-generated music better than having no music at all?
Can a computer change therapeutic music based on patient observations?
The Apple Watch and other wearables currently track things like heart rate, blood oxygen, respiration rate, sleep patterns, and even state of mind. It seems plausible that this biometric data could be connected to the AI music generator.
Companies like LUCID Therapeutics have already developed “a deep learning system that learns the human emotional response to music, helping LUCID understand which musical features elicit particular responses for desired health & wellness outcomes.”
Should Certified Music Practitioners be concerned?
I typed “Should musicians who play music in healthcare facilities for patients be concerned about therapeutic music generated by artificial intelligence?” into the popular AI chatbot ChatGPT. Here are excerpts from the response:
“While AI-generated therapeutic music offers some benefits in terms of customization and accessibility, it may not fully replace the human touch and authenticity that live musicians provide. Human musicians bring a level of emotional intelligence and adaptability that AI may struggle to replicate fully. Human musicians can tailor their music in real-time based on patient reactions, emotions, and interactions.
“On the other hand, AI-generated music can be customized to individual preferences and needs more easily than live music. It can adapt in real-time based on data and algorithms, potentially enhancing its therapeutic effectiveness in some cases.
“Musicians in healthcare settings may want to stay informed about AI developments in therapeutic music and consider how they can complement and enhance rather than replace human interaction and empathy. Ultimately, musicians in healthcare settings should view AI-generated music as a tool that can enhance the therapeutic environment rather than as a replacement for their own contributions.”
What is the best-case scenario for CMPs and AI?
One of the biggest hurdles CMPs face is getting the word out about live therapeutic music. Many hospital administrators still don’t know what it is or how beneficial it can be for patients, staff, and caregivers.
With all the attention that AI is getting, and with the low cost of AI-generated music in comparison to live musicians, it’s conceivable that AI-generated therapeutic music could be implemented in many hospitals in the future.
This would mean that many more administrators in healthcare facilities would finally know that there is such a thing as therapeutic music.
Once someone experiences AI-generated therapeutic music, they may wonder, “What if a real person were here with me, creating healing music on a real instrument?” CMPs may suddenly be in demand like never before.
Because while AI-generated music might be acceptable in many health circumstances, it’s difficult to imagine choosing AI over a real human when you or your loved one is truly suffering or actively dying.
What’s next with AI-generated therapeutic music?
Today, healing music can be live or recorded. Live healing music can be played by a Music Therapist, a certified therapeutic musician, or a volunteer musician. Recorded music can be played by the patient themself, a family member, or a healthcare professional.
Tomorrow, recorded AI-generated therapeutic music will just be one more option among many.
But if the appearance of AI-generated music in healthcare finally gives the field of live therapeutic music the recognition it deserves, MHTP and its almost 1,100 graduates will be ready to provide authentic, human connection, a healing presence, and the pure sound of live music.
If you’d like to learn more about AI, a very accessible, concise podcast episode with Tom Merritt of Daily Tech News and comedian Andrew Heaton may be a good starting point for you: “What People Get Wrong About A.I.”