I know computers are capable of determining how music affects human physiology and the brain. We just haven’t figured out how yet.
Computers can certainly generate music. It could be as simple as uploading common MIDI patterns, providing a selection of instruments and samples, and batch exporting full tracks. But the results would almost invariably be bad without training the computer first.
Where could we start the journey? I have some ideas. First we need a few tools:
- Basic music loops. For melodic or harmonic content, a layman’s perception should be that it’s all “one instrument”, even if the synths themselves are layered. For rhythmic content, drum loops might be sufficient, but some custom-built loops would be more helpful. More on that in a moment.
- Full tracks by professional musicians, for comparison. I recommend grabbing 50-100 tracks from Anjunabeats, since they will have the same “sound” and fanbase.
- An apparatus to measure a participant’s physiological response to the music. Measure whatever is possible: heart rate, blood pressure, muscle tension. Researchers might find this article useful. The apparatus could measure more than one participant at a time.
- Participants. They could be selected randomly, or grouped by shared taste (everyone who likes x, y and z types of music) to test the responses people have to music they already like.
- A computer to collect the data and search for trends.
Now, I am not a scientist. But experiments will surely reveal commonalities. For instance, dance music producers commonly raise the pitch of a sound by 1, 2 or 3 octaves over 4, 8, 16, etc. bars of music to direct the listener’s attention. I’d bet an entire paycheque it shows up in the heart rate of listeners, especially if they like the track.
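That octave-rise trick is easy to sketch numerically. A minimal sketch (the function name and parameters are my own): since each octave doubles frequency, a pitch rise that is linear in octaves is exponential in Hz.

```python
import numpy as np

def riser_pitch_curve(base_freq: float, octaves: int, bars: int,
                      steps_per_bar: int = 16) -> np.ndarray:
    """Pitch ramp rising `octaves` octaves over `bars` bars.

    Each octave doubles frequency, so the curve is base_freq * 2**t
    with t running linearly from 0 to `octaves`.
    """
    t = np.linspace(0.0, octaves, bars * steps_per_bar)
    return base_freq * 2.0 ** t

# A 2-octave riser over 8 bars starting at 440 Hz ends at 1760 Hz.
curve = riser_pitch_curve(440.0, octaves=2, bars=8)
```

Feeding a curve like this to a synth’s pitch input is all a “riser” really is; the interesting question is what the listener’s heart rate does alongside it.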
The procedure would be:
- Prepare participant and apparatus.
- Play music (loop or full track).
- Record physiological results.
- Repeat one hundred or two thousand times (depending on budget).
- Use data science to find correlations between the music’s audio content and the listener’s physiological reaction.
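The last step of that procedure can be sketched in a few lines. This uses fabricated numbers in place of real measurements — the linear heart-rate response below is invented purely to make the example run — but the shape of the analysis is the same: line up an audio feature with a physiological signal and measure the correlation.

```python
import numpy as np

# Hypothetical per-second measurements from one listening session:
# normalized RMS loudness of the track, and the participant's heart rate.
rng = np.random.default_rng(0)
loudness = rng.uniform(0.2, 1.0, size=300)
heart_rate = 60 + 30 * loudness + rng.normal(0, 2, 300)  # fabricated response

# Pearson correlation between the audio feature and the physiological signal.
r = np.corrcoef(loudness, heart_rate)[0, 1]
```

In a real study you would repeat this across many features (loudness, note density, pitch height) and many participants, then look for the correlations that survive.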
No subjective reporting is necessary from the participants, though it could be useful. Frankly, the participant is not necessary, since the discography of music from a sufficiently large music label (e.g. Anjunabeats) or radio show (ABGT) should reveal trends on its own.
Presumably, some kind of function will be revealed in the data. I was bad at math so I can’t speculate what it might look like, except that physiological arousal should rise roughly with volume: y ≈ f(x), with f increasing, where x is volume and y is arousal. I’d also say that suddenly doubling a one-note instrument’s rate (e.g. a snare going from eighth notes to sixteenth notes — “8/4 to 16/4”) never results in less excitement, though it might be boring on its own.
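Reading “8/4 to 16/4” as eighth notes versus sixteenth notes, the doubling claim can be stated concretely: the denser grid contains every hit of the sparser one, so nothing the listener already heard is taken away — hits are only added.

```python
# One 4/4 bar, beat positions of each snare hit (assumed reading of "8/4"
# and "16/4" as eighth-note and sixteenth-note grids).
eighths = [i * 0.5 for i in range(8)]       # 8 hits, one every half beat
sixteenths = [i * 0.25 for i in range(16)]  # 16 hits, one every quarter beat

# Doubling the rate strictly adds hits; every old hit is still present.
only_adds = set(eighths) <= set(sixteenths)
```

That containment is why the transition reads as “never less exciting”: the pattern the ear has locked onto keeps playing, with new events layered between.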
From there, instruct the computer program to generate music which also follows the patterns discovered previously, but does not necessarily replicate them exactly.
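One simple way to “follow the patterns without replicating them exactly” is a Markov chain over note transitions. This toy sketch (the corpus and MIDI note numbers are made up) counts note-to-note transitions in example melodies, then random-walks a new melody that is statistically similar to, but not a copy of, the sources.

```python
import random
from collections import defaultdict

def train_transitions(sequences):
    """Count note-to-note transitions in example melodies (MIDI note numbers)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, length):
    """Weighted random walk through the learned transitions."""
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:
            break  # dead end: no observed successor for this note
        notes, weights = zip(*nxt.items())
        out.append(random.choices(notes, weights=weights)[0])
    return out

# Toy corpus: two short A-minor phrases (hypothetical MIDI numbers).
corpus = [[57, 60, 64, 62, 60], [57, 60, 62, 64, 60]]
model = train_transitions(corpus)
melody = generate(model, start=57, length=8)
```

A real system would condition on more context (previous two or three notes, bar position, the physiological targets from the experiments above), but the principle is the same.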
Boom. Music generated by a computer.
Someone who’s a better programmer than me could even go directly from Ableton project file -> NumPy data. At that point, you could mine project files from professional musicians to discover trends in music that was good enough for DJ support.
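Ableton’s .als files are, to my knowledge, gzip-compressed XML, so the first step of that pipeline might look like the sketch below. The `MidiNoteEvent` tag name is what I’ve seen in Live exports — verify it against your own project files — and the demo file here is synthetic, standing in for a real export.

```python
import gzip
import numpy as np
import xml.etree.ElementTree as ET

def als_note_times(path: str) -> np.ndarray:
    """Extract note start times from an Ableton Live Set (.als).

    .als files are gzip-compressed XML; note data appears in
    MidiNoteEvent elements (tag name assumed from Live exports).
    """
    with gzip.open(path, "rb") as f:
        root = ET.parse(f).getroot()
    times = [float(e.get("Time")) for e in root.iter("MidiNoteEvent")]
    return np.array(sorted(times))

# Minimal synthetic file standing in for a real project export.
xml = b'<Ableton><MidiNoteEvent Time="0.0"/><MidiNoteEvent Time="1.5"/></Ableton>'
with gzip.open("demo.als", "wb") as f:
    f.write(xml)

times = als_note_times("demo.als")
```

From arrays like this you could compute note density, pitch contours, and arrangement timing across a whole label’s worth of project files.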