"DAY" a pure data generated performance in the key of E.
* This page is a work in progress and might be repetitive.
Random vs. expression calculations in generating a composition.
When constructing code that generates music, the go-to choice for the bulk of the compositions heard across the net is to use random-number-generating objects or code. To the musician's ear this creates a rather dull & non-harmonious score, or as I call it, "bad next note choice". Example... [random 10] generates an output such as 8, 2, 9, 4, 3, 5, 1... etc. (the object outputs 0 through 9). If we assigned these generated numbers to notes, the end result is a non-harmonious fumbling around within an octave: C D D F E G C. Whereas if we instead use expressions, [expr x+1] or [expr x-1] (where variable x is a chosen first note, let's say x=6=A), we generate a pattern that stays more centered on this first chosen note, resulting in an output of 6, 7, 8, 7, 6, 5, 4, 5. Using the aforementioned x=6=A, this group of generated numbers maps to the more harmonious pattern A B C B A G F G.
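A minimal Python sketch of this contrast (the ten-entry note table is a hypothetical stand-in, chosen so that x=6 lands on A as in the example above):

    import random

    # Hypothetical note table for outputs 0-9; index 6 is A, as in the text.
    NOTES = ["B", "C", "D", "E", "F", "G", "A", "B", "C", "D"]

    def random_notes(count=8):
        # Like [random 10]: each value 0-9 is picked independently of the last.
        return [NOTES[random.randrange(10)] for _ in range(count)]

    def expr_notes(x=6, count=8):
        # Like [expr x+1] / [expr x-1]: each value steps off the previous one.
        out = []
        for _ in range(count):
            x = max(0, min(9, x + random.choice([1, -1])))
            out.append(NOTES[x])
        return out

    print(random_notes())  # jumps anywhere, e.g. C D D F E G C
    print(expr_notes())    # stays near A, e.g. A B C B A G F G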
Consider "Twinkle Twinkle Little Star" (the traditional tune Mozart penned his famous variations on), or as the notes go: C C G G A A G. Its chosen notes are in harmony with one another. The notes climb up the scale and then conclude by falling one note back. Code can be written to expand these expression choices to include [expr x+1] through [expr x+?] & [expr x-1] through [expr x-?], which would give your calculation the ability to jump from C to G, step up to A, & then step back down to G. Attempting to produce this outcome with random code or objects, as observed in the previous random set (C D D F E G C), is all but impossible.
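You can see what that expanded expression set has to cover by reading the step sizes straight off the melody; a quick Python check, using scale degrees in C major:

    # "Twinkle Twinkle" as scale degrees in C major: C C G G A A G.
    DEGREES = [1, 1, 5, 5, 6, 6, 5]

    # Each interval is the "+?" that some [expr x + ?] object must supply.
    steps = [b - a for a, b in zip(DEGREES, DEGREES[1:])]
    print(steps)  # [0, 4, 0, 1, 0, -1]

So a bank running from [expr x-1] up to [expr x+4] (plus repeating the current note) already covers this melody, while a [random]-only patch has no notion of a previous note to step from.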
The video "DAY" that I have posted displays this difference in employing [expression] code over random. The result is a more humanistic & harmonious approach to generating a musical score.
All notes and chords for my piano performance "DAY" are generated in the key of E. The algorithm can run indefinitely, and does so in my home. Enjoy!
AI technology employs a process of frequency-analyzing the audio samples of other musicians' work in order to create melody, chords & harmonies. An ethically questionable procedure. In order for AI to compete with human musicians it requires samples of talented musicians. No samples of other musicians' work were used to generate the piano performance heard in this video. This code directs its output to the Microsoft GS Wavetable Synth found on every Windows desktop.
-----------------------------------------------------------
Determining MIDI notes using the [expr] object instead of the [random] object.
In the pure data alternative purr data there is an object called [drunk]. The [drunk] object outputs random numbers in a moving range. Rather than using, say, [random 100], where the output can jump from 1 to 99 and then from 75 to 25, the purr data object [drunk] staggers its random output up and down, centering the random result more closely around its initial numerical starting point.
- Example... pure data [random 10] = output = 8, 2, 9, 4, 3, 5, 1... etc.
- Example... purr data [drunk 10] = output = 5, 6, 4, 5, 3, 4, 6, 7, 5... etc.
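A minimal Python sketch of the idea behind [drunk], assuming a maximum step of 2 and clamping at the range edges (the real object's defaults may differ):

    import random

    def drunk(maximum=10, max_step=2, x=5):
        # Endless [drunk]-style stream: each output moves at most max_step
        # from the previous value and stays inside 0..maximum-1.
        while True:
            x = max(0, min(maximum - 1, x + random.randint(-max_step, max_step)))
            yield x

    gen = drunk()
    print([next(gen) for _ in range(9)])  # e.g. [5, 6, 4, 5, 3, 4, 6, 7, 5]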
Building patches that use one over the other creates two different outcomes. I was very excited at first with the purr data [drunk] object, as the resulting MIDI score was a more central and consolidated random choice of the next note. [random], even in its randomness, creates a predictable outcome that is not really very musical and becomes repetitive when composing score after score. This statement is arguable, for what is musical is determined by the listener. Maybe a better word is harmonious. When a human plays an instrument, the human moves from one note to the next in a pattern that is harmonious to the musician's ear.

When Mozart penned his variations on "Twinkle Twinkle Little Star" (TTLS), or as the notes go, C C G G A A G, the chosen notes were in harmony with one another. The notes continue to climb up the scale and then conclude by falling one note back. The result is a melody that has been cherished, whistled & played for centuries. Attempting to produce this outcome with a [random] object outcome such as A G C A C G A G is next to impossible. With the [drunk] object you might get a bit closer: C G C G A G A. But you will never achieve the memorable & melodic C C G G A A G through randomness. Solving this in pure data will be quite the challenge, as the TTLS melody is far too human & certainly not a melody that was achieved through random means (Mozart's Dice Game notwithstanding).

Remember, random numbers are not calculated, nor are they chosen sequentially. Random numbers are chosen randomly from a set that has a max & min value. So when using random objects in pure data or purr data, you can route the results to values & functions which can be made to apply to a specific group of notes, octave(s), or note properties... but in the end there is no calculated move or mathematically related approach to the choice of the note that is to be played next. It will be determined randomly. As a musician I seldom write, play or perform my music by jumping randomly around the instrument's available octaves. Melodies & chords develop through a more centralized exercise of the notes within a particular octave and develop outward from this location on the instrument.
The ease of dropping in the [random] object, a Boolean scheme or a fractal equation inside an [expr] object is probably why so many programmers of pure data, who are not musicians, employ it as the go-to choice for determining the next note. We have so many pure data examples that are of a random determination. I can only say this: there are just too few musicians who are programmers, or programmers who are musicians, focused on constructing patches that attempt to create a more harmonious musical composition over a randomly created experimental composition. This is one of the reasons I have chosen to work in pure data in a MIDI-only format rather than manipulating the [dac~] object output. I want to stay more focused on the note than on processing audio when working in pure data.

There is a bigger conundrum here than any to be found in the frequency analysis procedures of the highly talked-about music AI programs in the news: to find the human in the algorithm structure. The other route that could be taken is to recreate the human form from the sounds we make or the songs we sing. Both may be fruitless, as both endeavors are attempting to make something humanistic from a device that humans created. These conversations remind me of a drinking game using the Drake Equation.
After a week or so of tearing through paper designs, attempting to arrive at a simple pure data construction that calculates the next note rather than randomly choosing one, I arrived at this piano performance patch. I haven't in any way solved this problem, but what I have stumbled upon is a hybrid structure that creates a more harmonious outcome. Hybrid in that the patch, presented in this journal entry, employs both [random] & [expr] elements. I arrived at the construction by thinking about the left & right hand movement of a pianist. The left hand (bottom right half of the patch) plays chords that are triggered off the rhythm of the 8-step sequencer of the [metro] tree; the chord that is played is randomly determined from the 20 available 3-note chords. The right hand (top right half of the patch) then randomly chooses from a group of [expr] objects, which is a calculated move to the next MIDI note to be played: [expr x + 1], [expr x - 1], [expr x + 2], [expr x - 2]... etc., up to [expr x +/- 4]. At this stage I have limited the expressions to a max of 4, even though the human hand can reach a full octave up or down. These [expr] objects calculate changes off of a variable [value x], the starting value of x being selected by the user before the patch is performed. In the patch displayed here you will notice that I have set the selection of [value x] to numbers 9-15. These values correspond to the center of the row of [select] object values, which in turn play the predetermined MIDI notes of the right hand in the key of E across 3 octaves.
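A minimal Python sketch of this left/right hand split, under stated assumptions: the three chords and the note row below are hypothetical stand-ins (the patch itself holds 20 chords and a hand-picked [select] row in the key of E):

    import random

    # Left hand: a few key-of-E triads as MIDI notes (the patch has 20 of these).
    CHORDS = [(40, 44, 47), (45, 49, 52), (47, 51, 54)]  # E, A, B major

    # Right hand: 25 key-of-E notes (indices 0-24), built upward from E3.
    E_MAJOR = [0, 2, 4, 5, 7, 9, 11]
    RIGHT_HAND = [52 + 12 * (i // 7) + E_MAJOR[i % 7] for i in range(25)]

    STEPS = [1, -1, 2, -2, 3, -3, 4, -4]  # the bank of [expr x +/- n] objects

    x = 12  # user-chosen starting value, mid-row (cf. the 9-15 selection)

    def tick():
        # One beat of the [metro] tree: left hand picks a chord at random,
        # right hand picks a random [expr] and steps x by it.
        # Simple clamp here; the patch's [moses] conditional is sketched below.
        global x
        chord = random.choice(CHORDS)
        x = max(0, min(24, x + random.choice(STEPS)))
        return chord, RIGHT_HAND[x]

    for _ in range(8):  # one pass of the 8-step sequencer
        print(tick())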
A necessary patch within this patch is the use of [moses 0] & [moses 24] objects as a conditional statement. If [value x] is < 0 or > 24, then [expr x + 4] or [expr x - 4] is calculated, and the value of variable x is kept within the range of notes I have determined to be allowed for the right hand. If this expression were not in the patch, the value of x would wander out of the range of the [select] object values. Rather than just nudge the note back into the [select] object range, I throw the note back into the range using a +4 or -4 offset. This use of [moses] as a conditional keeps the calculated note from banging its head against the limits of the [value x] range.
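The same conditional as a small Python sketch (a direct mirror of the described [moses] split, not the patch itself):

    def keep_in_range(x):
        # [moses 0]: anything below 0 is thrown back up by 4 ([expr x + 4]);
        # [moses 24]: anything above 24 is thrown back down by 4 ([expr x - 4]).
        if x < 0:
            return x + 4
        if x > 24:
            return x - 4
        return x

    print(keep_in_range(-2))  # 2: bounced back in, not pinned at the edge
    print(keep_in_range(26))  # 22
    print(keep_in_range(12))  # 12: already in range, untouched

Because the step sizes top out at +/-4, a single bounce is always enough to land back inside 0-24.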
The final result of this patch is a more humanistic approach to playing a piano improv in the key of E than anything I have achieved using random objects. I have run the patch for almost 3 hours continuously, & what I found interesting were the comments by listeners inside my house. Those hearing the patch asked, "Who is this playing the piano?". Now, that's what I'm hoping to hear! The performance of the patch is not unlike the improv you might hear from a piano player contemplating a new song. This response inspired me to create the YouTube video "DAY a pure data performance with visuals". The gamma visuals were created with a separate software app and are not generated through pure data's GEM. I like to think of this patch as a simple achievement toward my goal of creating a pure data patch that generates a MIDI composition exhibiting a more humanistic approach to the music it performs.
Most of the songs that we enjoy in life are performed using a scale in a specific key, but you will usually find that a musician rarely uses all of the notes from that scale during the performance. At this stage of my pure data patches, I am using all of the notes found within the scale of a specific key. I'm thinking, for the next step in this development, of taking a recognizable melody, determining the notes of that melody, and limiting the [select] objects to only those notes, even repeating them as notes to be selected. It will be interesting to hear during the patch's performance if, or how often, the patch arrives at the recognized melody that inspired the patch's notes through its [expr] calculations. Or a better thought might be: how will I have to construct the [expr] objects so that they might reach that recognizable melody? There is this other idea I have, and that is to found the chords or the notes off of each other: a chord in the key of E initiating a melody in the key of E, or the notes in the key of E building off of a chord in the key of E.
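A quick sketch of what that restricted [select] row might look like (the melody and its key-of-E transposition are my own assumptions for illustration):

    # TTLS transposed to E major, repeats kept: E E B B C# C# B as MIDI notes.
    MELODY_ROW = [52, 52, 59, 59, 61, 61, 59]

    def note_for(x):
        # The [expr]-driven x can only ever land on a melody tone,
        # wrapping around the row instead of spanning a full scale.
        return MELODY_ROW[x % len(MELODY_ROW)]

    print([note_for(x) for x in range(8)])  # [52, 52, 59, 59, 61, 61, 59, 52]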
For the historical record: in developing a piano composition patch I studied Philippe Manoury's groundbreaking Max composition, Pluton.
While I have found no original patch for Philippe Manoury's Pluton available on the internet to study, a description of his composition can be found in this article. From what I have read, Manoury's patch was more expressive than generative: it followed the pianist's playing and then altered the MIDI characteristics to manipulate the synthesis processing.
* KicKRaTT; MUSIC, ALGORITHMS, DOCUMENTS, GRAPHICS & LOGOS: ISNI# 0000000514284662 *


