KicKRaTT
Monday, November 24, 2025
SOLARIUM - 2025 AI Song Contest
computers composing music through predicted variations
SOLARIUM is a presentation of midi score predictions generated by LLMs trained on in-house synthetic datasets. The score is entirely electronic, consisting of three predicted midi movements. This original Ai-predicted score targets genre music ranging from the progressive rock of Canada to the acid house of the United States and the kosmische of Germany, enhanced with vocal performances from Great Britain. The GitHub LLMs were hosted in-house on a learning workstation build and trained on unique midi input datasets built in three distinct formats: algorithms, XOXBOX (tb-303 clone) & Moog 960 sequencer (performed in this order). The first, progressive-rock-band movement showcases a layered predicted jam by LLMs trained on datasets built with Roland divided pickups virtualizing human guitar & bass guitar performances.
"Genre music can be achieved by Ai without infringing on copyright!"
No commercial, historical, online-service, opt-in or freely available midi input data was used to train the LLMs. The midi input datasets are generated by algorithm or midi device, and the final score is developed through the predicted variations. The first & third movements are in the key of A major; the second movement changes key to G major. All notes, chords, drum patterns and triggered vocals heard are part of the predicted score. A triptych of musical grooves conceived and developed into a performance by the machine.
Ai 2025 Contest Process Document
The introduction is a decomposition of the first movement's first 8 measures, slowed to 60 bpm. The conclusion presents the LSTM sequence predicted from the Moog 960 synthetic dataset.
"Dylan Thomas Poem" read by Michael Caine "Sunshine" sung by Alison David "Day to Night" voiced by Mike Kelley "Tomorrow" spoken by Jane LancasterKicKRaTT (all Drums & Vocal triggering) & KaOzBrD (Left & Right hand piano, Bass & Chords) are names given to the two pure data midi input generating structures used in both 2024 & 2025 Ai contest song entrees.
Ai 2025 Additional Attached Document
The contest judges were a panel of tech, music & musician industry insiders - Jordan Rudess, Portrait XO, Mark Simos, Julian Lenz, LJ Rich, Andrew Fyfe, Neutone, Riley Knapp, Natasha Mangai, John Ashley Burgoyne, Alan Lau, Sandra Blas, Sydney Schelvis, Max Shafer, Dorien Herremans, Atser Damsma, Anja Volk, Seiya Matsumiya, Miguel Oliveira, Katerina Kosta, Der Kuchenchef, Alexander Rehding, Joe Bennet, Bernika, Andres Mondaca, Ben Camp - Thank you all!
Thursday, November 6, 2025
Ai Contest YouTube Shorts
Monday, October 20, 2025
SOLARIUM in the Top 10
As announced in the webinar on 10/18/2025, SOLARIUM has been chosen for the 2025 Ai Song Contest Top 10. Songs are scored with a combined judges' score & public vote.
Vote for your favorite now!
Congratulations to HEL9000, ~marts, DJ Swami, Genealogy, BRNRT Collective, Dadabots, Nikki, GANTASMO, Auditory Nerve!
Thank you!
The Ai Song Contest Team: Rebecca Leger, Alia Morsi, Ryan Groves, Marcei Vasquez, Seva Markov, Pedro Sarmento, Natasha Mangai, John Ashley Burgoyne, Rujing Stacy Huang, Anna Huang
Associated: Jordan Rudess, Portrait XO, Mark Simos, Julian Lenz, LJ Rich, Andrew Fyfe, Neutone, DJ Swami, Riley Knapp, Natasha Mangai, John Ashley Burgoyne, Alan Lau, Sandra Blas, Sydney Schelvis, Max Shafer, Dorien Herremans, Atser Damsma, Anja Volk, Seiya Matsumiya, Miguel Oliveira, Katerina Kosta, Der Kuchenchef, Alexander Rehding, Joe Bennet, Bernika, Andres Mondaca, Ben Camp
Tuesday, July 1, 2025
Ai2025 Song Contest Entry - AWAY!
The 2025 Ai Song Contest has started accepting submissions. The song (mp3), video (mp4), graphics (jpg) and process document (pdf) were uploaded on July 1st, 2025. The SOLARIUM video uploaded to YouTube has been set to public and the link is active. In all, three audio versions on the SoundCloud KaOzBrD account & three matching videos on YouTube were made public.
SOLARIUM - 2025 AI Song Contest Video
Theme: computers composing music through predicted variations
SOLARIUM is a presentation of midi score predictions generated by LLMs trained on in-house synthetic datasets. SOLARIUM is entirely electronic, consisting of three predicted midi movements executed over the course of four minutes. This original predicted score celebrates genre music ranging from the progressive rock of Canada to the acid house of the United States and the kosmische of Germany, enhanced with voice performances from Great Britain. The GitHub LLMs were hosted in-house on a learning workstation build and trained on unique midi input datasets built in three distinct formats: algorithms, XOXBOX (tb-303 clone) & Moog 960 sequencer (performed in this order). The first, progressive-rock-band movement showcases a layered predicted jam session made by LLMs trained on datasets built with Roland divided pickups virtualizing human guitar & bass guitar performances. No commercial, historical, online-service, opt-in or freely available midi input data was used to train the LLMs. The initial midi input is generated by algorithm or midi device, and the final score is developed through the predicted variations. The first & third movements are in the key of A major; the second movement changes key to G major. All notes, chords, drum patterns and triggered vocals heard are part of the predicted score. A triptych of musical grooves conceived and developed into a performance by computers.
"Sunshine" sung by Alison David
"Day to Night" voiced by Mike Kelley
"Tomorrow" spoken by Jane Lancaster
Ai2025 Song Contest
KicKRaTT linktree
The video contains the introduction, which makes it longer than the four-minute contest entry. The introduction is not heard in the contest entry. The decomposed introduction is the first 8 measures of SOLARIUM's 1st movement slowed to 60 bpm.
2025 AI SONG CONTEST ENTRY (no decomposed introduction)
The algorithm that generated the KicKRaTT composition Island was used to construct the initial learning dataset. This song serves as a reference to where the final predicted score began. For SOLARIUM, the original algorithm was reprogrammed to generate midi input data in the key of A major.
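As a rough illustration of that reprogramming step, here is a minimal Python sketch (my own stand-in for the Pure Data patch; the note choices, density and velocity range are assumptions) of generating step-sequenced midi input data constrained to the A major scale:

import random

# Assumed illustration: midi note numbers for one octave of A major starting at A3.
A_MAJOR = [57, 59, 61, 62, 64, 66, 68]   # A3 B3 C#4 D4 E4 F#4 G#4

def generate_bar(steps=16, density=0.7):
    """One bar of 16th-note steps: (pitch, velocity) tuples, with None for rests."""
    bar = []
    for _ in range(steps):
        if random.random() < density:                     # chance that a step sounds
            pitch = random.choice(A_MAJOR) + 12 * random.choice([0, 1])   # spread over two octaves
            bar.append((pitch, random.randint(70, 110)))   # pitch always stays in A major
        else:
            bar.append(None)
    return bar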
Tuesday, April 22, 2025
Ai2025 Contest Announcement
The Ai2025 International Song Writing Contest will be held in Amsterdam.
July - August : Open for Submission on the Website
September : Semifinal & Jury Vote
October : Public Vote
November : Award Show
The official website can be found here
The KicKRaTT Directive: The directive for this contest song entry is to examine the predicted midi score made by LLMs trained on a generated midi input learning dataset. The synthetic dataset used in this contest entry will be built in a number of unique ways. The entry will be a mix of the predicted scores within the four-minute frame. The final score must be built on the predicted variations made by the LLMs trained on the generated synthetic datasets. The midi score will be (from the beginning) conceived electronically by algorithm or device and developed (to the end) through predicted variation by the LLMs. A totally ethical Ai systematic approach to electronic midi composition.
Sunday, January 12, 2025
First prediction - part 4
A pure data designed algorithm (part 1) generated about 10 hours of midi data using five different configurations (conditional statement alterations in the same key & pitch) to produce a dichotomy in the 10 separate hour-long performances. From these 10 hours of midi performances, around 150 midi files between 4-35kb in size have been edited out, constructing a synthetic dataset almost 5mb in size. At this point, all of the original midi song files have been broken up into individual instrument files: drums, bass, chords & melody. Each instrument is assigned to a specific instrument dataset. In total there are 4 datasets of 150 midi files. I convert all the midi files with Music21 into text and then back to midi after prediction. Breaking up the instruments into separate midi files allows for moving the tracks around within datasets for different LLM models (GPT & LSTM) and with the ongoing arranging of the current score in development.
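For readers who want to try the same round trip, here is a minimal Python sketch of the Music21 conversion (the tokenization here is my own assumption for illustration, not necessarily the exact format used for these datasets): each midi file is flattened to a space-separated string of pitch tokens for training, and a predicted token string is written back out as midi.

from music21 import converter, note, chord, stream

def midi_to_text(midi_path):
    """Flatten a single-instrument midi file into space-separated pitch tokens."""
    parsed = converter.parse(midi_path)
    tokens = []
    for element in parsed.flatten().notes:
        if isinstance(element, note.Note):
            tokens.append(str(element.pitch))                          # e.g. "A3"
        elif isinstance(element, chord.Chord):
            tokens.append(".".join(str(p) for p in element.pitches))   # e.g. "A3.C#4.E4"
    return " ".join(tokens)

def text_to_midi(token_string, out_path, step=0.5):
    """Rebuild a midi file from a predicted token string (fixed note length assumed)."""
    out = stream.Stream()
    offset = 0.0
    for token in token_string.split():
        element = chord.Chord(token.split(".")) if "." in token else note.Note(token)
        element.quarterLength = step
        out.insert(offset, element)
        offset += step
    out.write("midi", fp=out_path)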
The video in this post is a unique one. It is the first prediction made from the original Island midi score by an LLM trained on the synthetic dataset.
TRAINING STATS
number of training files = 150
batch size = 30 files
number of iterations (training files / batch size) = 5
one epoch = 5 iterations

The bass guitar heard in this video is the first instrument predicted, for a single epoch. For this audio, the first predicted track has been placed back into the original midi score. Comparing the bass guitar in the island video of my previous post (part 3) to the bass guitar in this video (part 4) demonstrates a first step in using the LLM to predict a new midi score for a single instrument in the group.
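For context, here is a minimal PyTorch sketch (an assumed illustration, not the actual training code) of how those stats map onto an epoch loop: with 150 token files and batches of 30, one epoch is 5 iterations of next-token training.

import torch.nn as nn

class NoteLSTM(nn.Module):
    """Tiny next-token LSTM over the tokenized midi vocabulary."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):                      # x: (batch, seq_len) token ids
        out, _ = self.lstm(self.embed(x))
        return self.head(out)                  # (batch, seq_len, vocab)

def train_epoch(model, batches, optimizer, loss_fn):
    """One epoch = one pass over all batches (150 files / 30 per batch = 5 iterations)."""
    for inputs, targets in batches:            # targets are inputs shifted by one token
        optimizer.zero_grad()
        logits = model(inputs)
        loss = loss_fn(logits.transpose(1, 2), targets)   # CrossEntropyLoss wants (batch, vocab, seq)
        loss.backward()
        optimizer.step()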
Note the interesting artifacts in the bass guitar performance. While these notes are in key, they are reminiscent of stray (bad) midi notes often produced in live midi performances. Velocity and note durations can be a prediction issue. The predicted outcome will change even more with increased epochs. Ai prediction is a very cyclic procedure. When auditioning and recording these midi outcomes, little attention is given to the audio, so please excuse the lower-quality instrument sounds used to demonstrate in this video.
Also of note are the different drum performances heard in the two videos. Drum pattern prediction differs from note prediction and will be explained in a future post.
initiating a synthetic dataset - part 3
The drums are set to a fixed-scale (can be any key) midi percussion zone configuration, which helps out greatly when drum pattern predicting. Structuring the inbound generated midi drum notes to a fixed scale/zone makes it easy to set up an outbound predicted fixed scale/zone from the AI model. This way every predicted midi note will be assigned to a percussion instrument in your midi rig; even if it's scrambled, you might hear what you are looking for. Drum pattern prediction can produce some zany results. The bass & right-hand piano parts are broken up into two separate harmonious instrument sequences. The chords are drawn from the triads (around 20 three-note combinations) predetermined from the scale (F#), as sketched below. These chord combinations are selected when triggered from the patch during performance, with a harmonic relation to the notes chosen in the bass & melody sequences. There is always room for design improvement when it comes to harmony. A closer look at the harmony of the left & right hand of a piano is examined in my first music video post, "day".
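As a rough illustration of that chord pool (an assumption on my part; the actual patch logic may filter differently), here is how the three-note combinations available in a scale could be pre-computed before the patch picks among them at trigger time:

from itertools import combinations

# Assumed pitch spelling of the F# major scale used by the patch.
F_SHARP_MAJOR = ["F#", "G#", "A#", "B", "C#", "D#", "E#"]

def scale_triads(scale):
    """All three-note combinations drawn from the scale (35 for a 7-note scale);
    the ~20 chords mentioned above would be a filtered subset of these."""
    return list(combinations(scale, 3))

print(len(scale_triads(F_SHARP_MAJOR)))   # 35 raw combinations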
For the purpose of presenting how I use AI to create original (no copyright issues) music in this series of posts, "Island" represents the song that will be used to initiate a synthetic dataset for AI prediction: the creative initiative, if you will, for the dataset's theme. It represents a point in the song composing process where the artist decides whether to start building a dataset based on the algorithm's design performance, or to continue on the algorithm in pure data to achieve something different.

Saturday, January 11, 2025
Ethical Ai - call to create synthetic datasets
Ethical Ai - a call to create original "synthetic" datasets.
With regards to music prediction, what is ethical Ai?
The issue lies in the datasets utilized. The ethical considerations surrounding the technology underpinning large language models (LLMs) are often overstated. LLMs derive their predictive capabilities from the syntax, semantics, and ontologies embedded within human-generated corpora; in doing so, they inherit the inaccuracies and biases present in the training data. The nature of the learning data within a given dataset ultimately determines whether the outputs produced by the LLM are deemed ethical. If an organization develops an AI system and generates modified versions of existing musical works for personal enjoyment, this practice is generally regarded as ethical. Conversely, if the objective is to distribute or profit from these alternative renditions of established music, such actions would be considered unethical.
A studio might develop two ethical AI systems. The first is trained on a dataset of all the music the artist has produced over the years; based on those earlier compositions, it could be used to forecast new musical ideas or compositions. The second, the endeavor pursued here, is an LLM system that uses a dataset made out of generated input data. I'm going to demonstrate a system that can be applied to the quest for new music. As AI prediction advances and the original dataset and predictions are re-incorporated into the dataset, the second idea ultimately becomes the first.
There are many ways to generate MIDI input data for AI datasets, including sequencers, drum machines, MAX, Pure Data, Audiomulch, noise, voltages, and scientific data. The potential for discovering new music genres is limitless when exploring the predicted MIDI outputs derived from these original sources. It is clear that large language models (LLMs) can play a significant role in music exploration and can facilitate the creation of new compositions from original datasets. The music industry would benefit from more artists developing their own AI models and utilizing unique datasets to advance this emerging genre. I employ Pure Data to generate my MIDI input, focusing on structuring algorithms that produce MIDI data sculpted to a genre type and in a given musical key (A#b). I advocate for artists building their own datasets and moving away from a reliance on historical and commercial data to produce music.
Technologies establish cyclical procedures for our adherence. Engaging in the iterative refinement of a process is essential to achieving the desired outcome. Is there work to be accomplished? Indeed: determining the specific type of input data required for your dataset and devising methods for generating that data is a time-intensive endeavor. If the objective is to create something original, then, like band practice, it becomes a labor of love.
pure data polyrhythmic metronome - part 2
With a little modifying of the conditional statements & performance auditioning of the pure data patch presented in part 1 of my posts, the results have produced this song (groove): Island. There was no preconceived idea for this song; it came about from building the pure data patch and listening to the results. All of my algorithm patches are structured around a 4-piece instrument band concept: drums, bass, left-hand chords & right-hand melodies. I'm structuring my algorithms for improvised pop genre song types, but pure data song design patches can be structured for any genre. In pure data patch design, I strive to make the generated instrument performance sound like a band playing in key. Achieving results that sound like music from different genres takes some tooling. Creating an algorithm that generates a specific type of song is only limited by your understanding of MAX, pure data or other midi generating systems, and of music.
The drums are set to a fixed-scale (can be any scale) midi percussion zone configuration, which helps out greatly when drum pattern predicting. Structuring the inbound generated midi drum notes to a fixed scale/octave makes it easy to set up an outbound predicted fixed scale/octave from the AI model (see the sketch after this paragraph). This way every predicted midi note will be assigned to a percussion instrument in your midi rig; even if it's scrambled, you might hear what you are looking for. Drum pattern prediction can produce some zany results. The bass & right-hand piano parts are broken up into two separate harmonious instrument sequences. The chords are drawn from the triads (around 20 three-note combinations) predetermined from the scale (F#). These chord combinations are selected when triggered from the patch during performance, with a harmonic relation to the notes chosen in the bass & melody sequences. There is always room for design improvement when it comes to harmony. A closer look at the harmony of the left & right hand of a piano is examined in my first music video post, "day".
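Here is a minimal sketch of that fixed-zone idea (my own Python stand-in; the zone boundaries are an assumption, using General MIDI drum notes 35-50): any predicted midi pitch is folded back into the percussion zone so it always lands on a kit piece.

# Assumed fixed percussion zone: General MIDI drum notes 35 (kick) through 50 (high tom).
ZONE_LOW, ZONE_HIGH = 35, 50

def to_drum_zone(pitch):
    """Fold any predicted midi pitch (0-127) into the fixed drum zone."""
    span = ZONE_HIGH - ZONE_LOW + 1
    return ZONE_LOW + (pitch % span)

# Example: a stray predicted note 73 becomes 35 + (73 % 16) = 44 (pedal hi-hat in GM).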
For the purpose of presenting how I use AI to create original (no copyright issues) music in this series of posts, "Island" represents the song that will be used to start a synthetic dataset for AI prediction: the creative initiative, if you will, for the dataset's theme. It represents a point in the song composing process where the artist decides whether to start building a dataset based on the algorithm's design performance, or to continue on the algorithm in pure data to achieve something different.








