Friday, April 26, 2024

KicKRaTT Letters

From the start, this blog has run alongside the audio developments uploaded to SoundCloud. The journal entries have all been about the directive to create generative forms of music. When using a programming environment, AI procedure or other software to create music, it is important to convey as much as possible about how you did it. My best attempt at demonstrating the Pure Data algorithms that developed into the KicKRaTT songs, and the development of synthetic datasets, is available on YouTube. Hopefully an enjoyment of the music develops an interest in the process. These generative KicKRaTT songs are not composed in the traditional way; that process can best be observed through the video demonstrations on YouTube.

I was asked, "How did you write the music & what is KicKRaTT?"

My reply: "KicKRaTT is a trade name used for years & currently given to the first algorithm developed for this endeavor." "KicKRaTT" is now the name of the band. The names KicKRaTT & KaOzBrD have also been given to the algorithms.

The inquiry opens a can of worms... "How did you write that music?"

From KicKRaTT's Band Theory tracks to KaOzBrD's Arboreal, there was no preconceived idea, emotion or mood that drove the process. The driver in composing these tracks was to explore a process by which a computer could write its own music. With that, the structured arrangement of this computer-generated music has been a three-piece band. This group of tracks is algorithm-performed compositions, except for Arboreal, which is algorithm developed and AI predicted. Three tracks display this effort: Keys to The Lock, Arboreal & island. Each has its merits. The tracks were formed by algorithms executing a probability-conditional mathematical structure that results in a midi score. If emotion is expressed or displayed, then it is a result of the notes selected by the algorithm's performance. The idea here is to explore the processor's ability to create music, and this is what will be continued in this blog. There is much to be developed to create an algorithm structure that, on execution, performs a structured song.
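To make the idea of a "probability-conditional mathematical structure that results in a midi score" concrete, here is a minimal Python sketch of that kind of process. The scale, weights and note durations are all illustrative assumptions, not values from the actual KicKRaTT patches, which are built in Pure Data.

```python
import random

# Illustrative sketch: probability-conditional note selection emitting a
# simple midi-like score of (note, start_time, duration) tuples.
A_MINOR = [57, 59, 60, 62, 64, 65, 67, 69]  # midi note numbers (assumed scale)

def generate_score(steps=16, seed=None):
    rng = random.Random(seed)
    score = []
    t = 0.0
    for _ in range(steps):
        # probability condition: favor A-minor chord tones (A, C, E)
        weights = [3 if n % 12 in (9, 0, 4) else 1 for n in A_MINOR]
        note = rng.choices(A_MINOR, weights=weights)[0]
        dur = rng.choice([0.25, 0.5])  # quarter or half beat, illustrative
        score.append((note, t, dur))
        t += dur
    return score

score = generate_score(steps=16, seed=42)
print(score[:4])
```

Any emotion a listener hears comes out of selections like these, exactly as the paragraph above describes.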

The Pure Data patch developments, synthetic dataset development, AI learning diversions & studio workings are the elements of how this music is created. The generative directive is still very much a part of the process. We are just at the beginning of this particular direction in electronic music. The beginning of a creative process is always the best. KicKRaTT has crept out of the closet, and with that have come all these other online requirements that take time away from creating music.

This journal & blog for KicKRaTT are backwards as to how it all should be read. The KicKRaTT namesake, once a stage cart and now the project title, is an entity with music available online, actively developing into somewhat of an online avatar. The project is electronic music; the directive is composing generative forms of music that are more genre-centric than experimental. YouTube hosts all of the Pure Data videos, which you can view here on the journal or blog. I have moved some of the performance videos over to Vimeo. If you're only interested in the music, it can all be heard at SoundCloud, SoundClick and ReverbNation. I have and will continue to edit, re-edit & redo journal & blog entry posts. It's only after I have had a chance to reread what I have written that I realize I must go back and rewrite the entry for clarity. Thank you for the authoring understanding; I am new to logging & blogging.

Growing up, my favorite bands would release a new album every 1-2 years. KicKRaTT has reached the end of its initial development and released a first album, SOL. I think it's time for the KicKRaTT music to float around on the net with the least amount of influence from me. I will always review the entries that I have posted & rewrite the confusing moments of ramble. If you know or have learned Pure Data, you can take from or rebuild the poly-rhythmic structures I have presented for your own performances. With an understanding of Pure Data & a little bit of C programming, you can start generating the same music that I am developing. Only the song Arboreal has been composed using AI predicting. There is still so much to be developed off the Pure Data patches that generated the tracks island & Keys to The Lock. The midi generating possibilities of the process we are developing are endless. We have named the process Algorithm Music Development Systems. This covers algorithm development generating music for the AI predicting models.

The algorithmic piano patch that started in DAY evolved into Keys to The Lock, and this patch still has so much room to develop. Using [expr] versus [random] to determine the next note has produced two ways to generate notes and two different file types for midi input files. Using [spigot] to turn the generating process ON/OFF for different instruments creates part variation, & that variation can be looped like in Arboreal. The generating patches for island & Keys to The Lock will play out forever but can be perceived as complete songs.
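The two note-selection approaches and the [spigot] gate can be sketched outside Pure Data as well. Below is a small Python illustration, assuming a hypothetical scale: a [random]-style uniform pick, an [expr]-style deterministic formula over a step counter, and a gate that mutes a part to create variation.

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # illustrative C-major midi notes

def next_note_random(rng):
    # [random]-style: pick uniformly from the scale
    return rng.choice(SCALE)

def next_note_expr(step):
    # [expr]-style: a deterministic formula over the step counter
    return SCALE[(step * 3) % len(SCALE)]

def play_bar(step0, gate_open):
    # [spigot]-style: when the gate is closed, the part emits rests (None)
    return [next_note_expr(step0 + i) if gate_open else None for i in range(4)]

rng = random.Random(7)
# alternate the gate every bar to create part variation
bars = [play_bar(b * 4, gate_open=(b % 2 == 0)) for b in range(4)]
print(bars)
```

Looping a gated pattern like `bars` is the same variation-then-loop idea used in Arboreal, just written linearly instead of as a patch.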

We have developed two types of Pure Data poly-rhythmic patch structures: patches that allow the algorithms to generate forever & patches that perform out to a point. Call them INFINITE & FINITE Pure Data patch structures. Using [spigot] to turn the generating process ON/OFF for different instruments creates part variation, & that variation can be looped. These two algorithms & patch structures allow for a lot of generative creativity that can produce backup bands, grooves that play out forever, composed songs and learning/training files for the AI models. The process can generate midi information, the process can predict midi information, and the process can generate & predict together, as demonstrated in Arboreal.
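The INFINITE/FINITE distinction maps naturally onto generators in code. Here is a hedged Python sketch (scale and seed values are illustrative): the infinite groove yields notes forever and the listener decides when to stop, while the finite phrase performs out to a point and ends.

```python
import itertools
import random

SCALE = [57, 60, 62, 64, 67]  # illustrative A-minor pentatonic midi notes

def infinite_groove(seed=0):
    # INFINITE structure: generates forever; the caller slices off what it wants
    rng = random.Random(seed)
    while True:
        yield rng.choice(SCALE)

def finite_phrase(length=8, seed=0):
    # FINITE structure: performs out to a point, then stops
    rng = random.Random(seed)
    for _ in range(length):
        yield rng.choice(SCALE)

first_bar = list(itertools.islice(infinite_groove(seed=3), 8))
phrase = list(finite_phrase(length=8, seed=3))
print(first_bar, phrase)
```

With the same seed the two structures emit the same notes; the only difference is whether the process itself ever halts, which is exactly the INFINITE/FINITE split described above.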

The algorithm building blocks that I will continue are best demonstrated in the tracks Keys to The Lock, Arboreal and island. These algorithms, developed in Pure Data, instruct my home PC on how to generate or create a new song (Arboreal) or a new style of song (island), and to perform them in a humanistic way, like Keys to The Lock. It will be the success of this journal & blog to convey these creative ideas to other musicians in the hope they develop their own music generating structures and advance this new genre of music. There will be two groups of generating artists: those who generate from an algorithm structure (C++, Max, Pure Data, patch programming) & those who predict from either their own music or the music of others. Generating new midi information & predicting new midi information are essentially two different ways to go about doing the same thing. In generating midi info, you set the parameters and start the process. The generative process proceeds with forward-thinking calculations to determine the next note played without any consideration of the past. Generating midi info with prediction considers the past midi info to determine the options for the next note played.
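The generation/prediction contrast can be shown in a few lines of Python. This is a sketch under assumed values, not the actual models: generation draws each note with no memory, while prediction conditions the next note on the previous one, here via a tiny hand-written first-order transition table.

```python
import random

SCALE = [60, 62, 64, 67]  # illustrative midi notes

def generate(steps, rng):
    # forward-only: each note drawn with no consideration of the past
    return [rng.choice(SCALE) for _ in range(steps)]

def predict(steps, rng, start=60):
    # past-aware: the allowed next notes depend on the previous note
    # (hypothetical transition table; a trained model would learn this)
    transitions = {60: [62, 64], 62: [60, 64, 67], 64: [62, 67], 67: [60, 64]}
    out = [start]
    for _ in range(steps - 1):
        out.append(rng.choice(transitions[out[-1]]))
    return out

rng = random.Random(1)
print(generate(8, rng))
print(predict(8, rng))
```

Both produce a stream of midi notes; the only structural difference is whether the past is consulted, which is the whole distinction drawn above.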

Is there a generative genre? First, there are a number of ways to generate a song using a computer processor: C++ or Csound, Pure Data, AI models and numerous commercial options. These can all be very powerful compositional tools for generating audio. I can only speculate, from what I have read, that there is an argument being made for AI models and the generative midi they produce to avoid the copyright protections of the song data the AI derived its compositions from. IMO software is software; it's a tool, and the process of prediction is equal to that of generating in Pure Data. Like I said, they both do it, in two different ways. The process of prediction is extremely time consuming and slow to reach a final solution to a musical composition. In our music's development, the generating & predicting formats that we have investigated basically yield the same results.

Should we create a genre catalog for generated audio that includes such classifications as algorithmic, predictive, infinite & finite? The BIG industry is pretty hung up on royalties and on classifications that are either YES audio samples were used or NO samples were used. Artists should stay more interested in their own developments and the creative developments of others. Where does your generated music fit in? Developing genres takes time. This new genre needs more musicians that are programmers & more programmers that are musicians to really take off. Let's not forget there is a pursuit here. If more musicians of this type (programmer/musicians) don't enter this field to develop the generative music form, then the only generative form of music will be that created from samples and commercially recycled music. Let the BIG industry waste its time developing music dispensing machines. The troubling factor is this: using the predictive technology, the AI music machines they develop will have to derive content from frequency-analyzing successful music in order to compete. Their excuse to steal music for their own machines' needs will only be matched by the amount of money they have invested. As for my idealistic generated music? It will more than likely be found in some genre of progressive electronic music. Today's music billboards make no distinction between creative procedural differences in any of their categories.

