Meristream Artist Statement - Letter from the composer

April 28, 2024

Meristream has just launched, and with it comes a new day for generative music. Aidan Vass, composer and founder of Meristream, discusses the origins of the application, the compositional ideas that went into it, and the creative implications for the broader field of generative music.

Aidan Vass
founder @ Meristream

To all of the listeners, creatives, and anyone else interested,

Today I set Meristream live. It’s been a year of constant artistic and technical discovery, and at this point I feel ready to share my biggest project to date.

Meristream is a music algorithm and platform I wrote that generates infinite ambient soundscapes. Because of browser audio restrictions that were only recently loosened, truly web-based generative music has only just become possible, and today I share this creation with the world.

While I’ve seen many attempts at layering or compiling music in the browser, this is, in my opinion, a real step toward truly real-time, web-generated music. As you listen, your browser window is actually building that soundscape purely from code. No imported looping samples, just raw, organic musical textures.
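For anyone curious what “purely from code” can look like in practice, here is a minimal sketch using the browser’s standard Web Audio API: a sine tone is synthesized and scheduled on the fly, with a slow envelope giving it an ambient contour, and no audio files are involved. The pitch set, timings, and structure are invented for illustration; this is not Meristream’s actual algorithm.

```ts
// A minimal sketch of sample-free, real-time synthesis in the browser
// using the standard Web Audio API. Illustrative only: the scale and
// timing values below are invented for this example.

const ctx = new AudioContext(); // note: browsers require a user gesture before audio starts

// A small, composer-chosen pitch set (an A-minor pentatonic fragment, in Hz).
const pitches = [220.0, 261.63, 293.66, 329.63, 392.0];

function playTone(frequency: number, start: number, duration: number): void {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();

  osc.type = "sine";
  osc.frequency.value = frequency;

  // A slow attack and release give the tone a pad-like, ambient contour.
  gain.gain.setValueAtTime(0, start);
  gain.gain.linearRampToValueAtTime(0.2, start + duration * 0.4);
  gain.gain.linearRampToValueAtTime(0, start + duration);

  osc.connect(gain).connect(ctx.destination);
  osc.start(start);
  osc.stop(start + duration);
}

// Every few seconds, synthesize a new tone from scratch. No samples involved.
function scheduleNext(): void {
  const freq = pitches[Math.floor(Math.random() * pitches.length)];
  const duration = 4 + Math.random() * 4; // 4 to 8 seconds
  playTone(freq, ctx.currentTime + 0.1, duration);
  setTimeout(scheduleNext, (1 + Math.random() * 3) * 1000);
}

scheduleNext();
```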

This letter is not intended to sell you on why you need to sign up for the platform. If you just want to check it out yourself, click here to browse the site. Free trials are available so you can play around with the software at no cost. This letter is simply my invitation to welcome you into my artistic process and share what I believe this project means, not just for generative music, but for art in a post-AI world.

Meristream and AI

Let me start by saying: Meristream is not AI. Meristream is not trained on musical data in the machine-learning sense, and it is not built on any kind of artificial neural network or deep learning framework.

Meristream is a “restricted” music algorithm. This means that decisions are made at random, but only within the confines of parameters I have set as the composer.
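To make that concrete, here is a toy sketch, in code, of what one “restricted” decision can look like: the randomness is free to choose, but only from sets and ranges the composer has fixed in advance. The parameter names and values are hypothetical and are not taken from Meristream.

```ts
// A toy "restricted" decision: random choices are confined to
// composer-defined boundaries. All names and values here are hypothetical.

interface CompositionParameters {
  allowedPitches: number[];        // MIDI note numbers the composer permits
  durationRange: [number, number]; // note-length bounds, in seconds
  restProbability: number;         // chance that a slot stays silent
}

const parameters: CompositionParameters = {
  allowedPitches: [57, 60, 62, 64, 67], // an A-minor pentatonic fragment
  durationRange: [2, 6],
  restProbability: 0.3,
};

interface MusicalEvent {
  pitch: number | null; // null means a rest
  duration: number;
}

// The algorithm decides; the composer's parameters decide what it may decide.
function nextEvent(p: CompositionParameters): MusicalEvent {
  const [minDur, maxDur] = p.durationRange;
  const duration = minDur + Math.random() * (maxDur - minDur);

  if (Math.random() < p.restProbability) {
    return { pitch: null, duration };
  }

  const pitch = p.allowedPitches[Math.floor(Math.random() * p.allowedPitches.length)];
  return { pitch, duration };
}

console.log(nextEvent(parameters)); // e.g. { pitch: 62, duration: 3.7 }
```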

So what does this mean in the context of Gen-AI music engines?

Well, let me take you back in time a bit, all the way back to the 9th and 10th centuries, where we’ll unpack what it means to make “generative” music.

Gregorian chant took shape at some point in this window, and with it came the concept of the psalm tone. There are eight psalm tones, each of which constitutes a formula for deriving chanted melodies from liturgical text. As far as we know, this is the earliest example of people making music while handing off musical parameters to external factors, here being the text.

Fast forward to 1787, and Wolfgang Amadeus Mozart is having fun experimenting with his latest musical invention: the Musikalisches Würfelspiel, or musical dice game. In a Musikalisches Würfelspiel, the composer rolls dice and uses the results to select and assemble pre-written musical fragments into a finished piece. Dice games like this were popular among his contemporaries, including the composer C.P.E. Bach.

In 1952, American composer John Cage premiered his “Music of Changes”, a piece for solo piano based on the I Ching, the ancient Chinese divination text from which Cage derived a chance-based composition process using sequences of coin tosses. Just a few years later, Raymond Scott invented his Electronium, one of the first electronic machines to offer real-time music generation.

Using fixed parameters as the basis of musical composition is far from new.

So what does it mean today? Every major tech company is rushing to reach AGI, the pinnacle of offloading decision-making to algorithms. Maybe by the time you’re reading this, AGI will have been developed and you’ll know what the world looks and feels like after this (presumably) drastic shift.

Let me pose an idea – the same idea that inspired Meristream in the first place.

AI excels at offering absolute abundance at very low cost, at least from an economic standpoint.

So what if musical artists could create works that were also abundant and “cheap”, meaning low cost to the end-user or listener, but that still carried artistic vision, intent, and humanity?

While I believe Meristream fits nicely into the historical line of indeterminate music, that question was the real inspiration for what I hoped (and still hope) Meristream will accomplish.

For now, Meristream is just a small platform, with a simple interface, where users can stream my algorithm.

In the future, I aim to grow Meristream into a community of artists interested in exploring parameter-based music more seriously, and perhaps once that community takes shape we will be able to prove that our music still has economic purpose even with AI in the marketplace.

“Linear” composers (linear meaning constrained to time), myself included, will always be the leaders in propelling artistic meaning in music, no matter what the marketplace for music looks like.

AI doesn’t present the risk of putting us artists out of relevance, nor does it present the risk of diminishing the meaning of what we do.

But maybe there’s a way we can triumph over AI at the thing AI is good at, or at least exist in harmony with it.

Artistic thesis

That’s the conception behind this project. Recently, however, I’ve found an additional source of meaning in the artistry behind what I’ve been working on, one I want to emphasize as a point of artistic direction going forward.

In my work as a composer, one of my biggest influences is the work of several architects (Gehry and Hadid stand out among others). Many of my pieces grapple with abstract concepts of shape, especially forms that embody natural formations in an organic sense.

The nature of experiencing a work of architecture is quite different from that of experiencing a musical work.

Earlier I mentioned how we work as “linear” composers (setting Meristream aside for now), meaning we write artistic works that are experienced through time.

Someone who experiences a work of architecture doesn’t have this confinement. Architecture is an “alinear” experience, meaning that the observer decides how long each view of the work takes.

I can go to a building such as the Walt Disney Concert Hall in Downtown Los Angeles, and I can spend an hour or two really observing each unique angle the building has to offer.

Or, I can fly past it in a car and witness the same structure morph from my perspective.

This is a fascinating experience, in my opinion. And until now, music has never been able to exhibit this quality.

Algorithmic music, in the sense of what I’m working on here, is a doorway into the possibility of that kind of experience.

Meristream’s music algorithm (with more to come) is an entryway into real, genuine musical architecture: a musical experience shaped by the listener’s real-time perspective on the musical parameters they’re hearing.
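To hint at what that could look like mechanically, here is a small, hypothetical sketch: a running generator re-reads the listener’s current “perspective” every time it makes a decision, so the music reshapes itself live as the listener moves through the parameter space. The control names are my own invention for this example, not Meristream’s interface.

```ts
// A hypothetical sketch of "musical architecture": the generator keeps
// running, but the listener can move through its parameter space in real
// time, the way a visitor chooses their own path around a building.

interface ListenerPerspective {
  density: number;    // 0..1, how many musical events occur per cycle
  brightness: number; // 0..1, how high the chosen pitches sit
}

const perspective: ListenerPerspective = { density: 0.5, brightness: 0.5 };

// A slider, a scroll position, even mouse movement could update this at any moment.
function onListenerInput(density: number, brightness: number): void {
  perspective.density = density;
  perspective.brightness = brightness;
}

// The generator reads the current perspective each time it decides, so the
// same underlying piece sounds different depending on how you "walk" through it.
function chooseEventCount(): number {
  return Math.round(1 + perspective.density * 9); // between 1 and 10 events this cycle
}

function choosePitchHz(lowHz: number, highHz: number): number {
  return lowHz + perspective.brightness * (highHz - lowHz);
}
```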

Last thoughts

This is an interesting time to be alive, where our very understanding of what makes our conscious activity special is being truly put to the test.

Right now, Meristream is a very small musical experiment, still in its infancy.

I really do believe in what I’m doing here, and I hope something in this project leads to an increase in artistic optimism for those who may be having any doubts.

And for those of you who are artists, keep creating. Every moment of change in society is a moment to create new meaningful things, things which have never been experienced before and will permanently represent the artistic output of this era.

- Aidan Vass
