Season 4 Episode 4 Mar 2, 2021

How to Combine Music and Code

Pitch

Putting music and code together can be very generative

Description

In this episode, we talk about music and code with Amirreza Amouie, aka Amu, indie artist and software engineer, and Jérémie Astor, creator of Gwion, a programming language aimed at making music.

Hosts

Ben Halpern

Forem - Co-founder

Ben Halpern is co-founder and webmaster of DEV/Forem.

Jess Lee

Forem - Co-founder

Jess Lee is co-founder of DEV.

Guests

Jérémie Astor

Gwion - Creator

Jérémie Astor is a musician, hobbyist programmer, and creator of Gwion, the programming language for music.

Amirreza Amouie (Amu)

Funktional Stdo - Software Engineer

Amu is a computer engineer currently focusing on creating tools related to sound/music and an indie artist making electronic music and soundtracks for video games and films.

Show Notes

Audio file size

54,952,912 bytes (approximately 55 MB)

Duration

00:38:10

Transcript

[MUSIC BREAK]

 

[AD]

 

[00:00:01] BH: A common scene in technology companies everywhere, big conference table with the CTO on one end, developer teams on the other, the showdown. We have an idea, “Will it get funded?” More companies are feeling the pressure to go faster and stay ahead of the competition. Projects that have long timelines or no immediate impact are hard to justify. DataStax is sponsoring a contest with real projects, real money, and real CTOs. If you have a Kubernetes project that needs a database, the winner will get funded with a free year of DataStax Astra. Follow the link in the podcast description to submit your project. It’s time to impress the CTO and get your project funded.

 

[00:00:41] New Relic helps engineering teams all over the world visualize, analyze, and troubleshoot their software. Discover why some of the most influential companies trust the New Relic One Observability Platform for better uptime and performance, greater scalability, faster time to market, and more software. Go to developer.newrelic.com to find out more.

 

[00:01:02] Eyes glaze over from debugging a remote Kubernetes service? Instead, run your service locally in your favorite debugger and instantly find the problem. Ambassador Telepresence is the easiest way to debug microservices on Kubernetes. Spend more time fixing problems instead of reproducing them. Ambassador Telepresence is free to use for teams with unlimited developers. Get started today at getambassador.io/devdiscuss.

 

[00:01:28] Educative.io is a hands-on learning platform for software developers. Learn anything from Rust to system design without the hassle of setup or videos. Text-based courses let you easily skim back and forth like a book while cloud-based developer environments let you get your hands dirty without fiddling with an IDE. Take your skills to the next level. Visit educative.io/devdiscuss today to get a free preview and 10% off an annual subscription.

 

[AD ENDS]

 

[00:02:01] AMU: I’ve been thinking about the philosophical problem of this matter, like, “Is this really music when it is not created by humans?”

 

[00:02:23] BH: Welcome to DevDiscuss, the show where we cover the burning topics that impact all our lives as developers. I’m Ben Halpern, a co-founder of Forem. 

 

[00:02:32] JL: And I’m Jess Lee, also a co-founder of Forem. Today, we’re talking about music and code with Amu, indie artist and software engineer, and Jérémie Astor, creator of Gwion, a programming language aimed at making music. Thank you both so much for joining us.

 

[00:02:45] AMU: I’m excited to be here.

 

[00:02:46] JA: Thanks for having me.

 

[00:02:47] BH: It’s so awesome to have both of you and we can’t wait to get into music, the software side, and how things intersected and how they came together. Can we start with Amu? Can you tell us a little bit about your background and how music and coding have intersected in your life?

 

[00:03:03] AMU: So I’ve been programming since I was in high school, so six or seven years. I’ve been mostly doing web dev projects, and slowly, over time, I started to lose interest. And then I discovered a program called Sonic Pi, which is basically a high-level programming language and environment that enables you to code music. So you can have a loop in which you choose an instrument in the environment and have it play notes a certain number of times, and you can assign some effects to it, like a reverb or an echo. That inspired me to look into this field and research more. And after some time, I started doing some hardware projects with Arduino. Right now, at the moment, I’m working on two projects. The first one is an open source DAW. A DAW is basically a program that sound designers and music producers make music with. And the other project that I’m working on is a digital hardware synthesizer, which is my university thesis project.
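
[EDITOR’S NOTE: For readers who want to try what Amu describes, here is a minimal Sonic Pi sketch: a loop that plays notes from a chosen instrument a set number of times, wrapped in a reverb effect. The synth, scale, and timing choices are illustrative, not from the episode.]

    # choose an instrument, repeat a phrase, and run it through a reverb
    use_synth :piano
    with_fx :reverb do
      8.times do
        play scale(:e3, :minor_pentatonic).choose  # pick a random note from the scale
        sleep 0.25                                 # wait a quarter of a beat
      end
    end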

 

[00:04:18] BH: You mentioned DAW. You said what it is. Do you know what that stands for? The acronym?

 

[00:04:23] AMU: It’s short for Digital Audio Workstation. Famous programs that most people use are like FL Studio or Ableton Live or Pro Tools.

 

[00:04:35] JL: Awesome. And Jérémie, can you tell us about your musical and coding background?

 

[00:04:39] JA: I started playing violin when I was three and a half because I had problems with my hips. I don’t have the skills in English to explain any further. I did Bizet and Zen. When I was 12, I started playing guitar, and a few years later, I was gigging every weekend as a jazz guitarist. And then I had several bands. Had more bands. Had even more bands. That’s about it. As far as the coding, I only got a computer when I was 18. I had Windows. I have to confess it was terrible. So I quit at one point. I tried to install, I think it was Debian. It was terrible too. So I quit. Keep in mind, I never had a computer before. And then a few years later, I tried, I guess, Ubuntu, and I went into hacking on small programs and learning C, and then I met the ChucK programming language, from CCRMA, I think, which I loved. But at one point, I became too fluent with it, or maybe my machine was too much of a potato, as you say, so I had to get a better tool to do what I wanted, and it just grew from there.

 

[00:06:24] JL: Amu, what about your music?

 

[00:06:26] AMU: So I work as a solo artist, and these days I’m mostly working on soundtrack projects for games and a short film. I also have my own projects that are mostly electronic music or lo-fi hip-hop.

 

[00:06:49] BH: Does your knowledge in software development give you any perspective when you’re making music for games? I’m wondering if there’s like a crossover because you have a bit of perspective there.

 

[00:07:00] AMU: Yeah, sure. Before I started making music, I had seen a lot of game designers and game developers and observed how they approach sound in their games. And when I started making music, I used that knowledge, and I’m also applying it in my own DAW to make the process easier.

 

[00:07:24] BH: When does the music in a game come into the production process? Is it when the game is fully done and it needs a soundtrack or does it come before the end to possibly inform decisions in the game development?

 

[00:07:37] AMU: So I’d say it depends on the team and the game itself. For example, with the game that I’m currently working on, the team approached me when they were pretty much done with the game. So I had a short time to get to know the game and make music based on the game’s vibes and story. But the ideal situation is that you’re already in contact with the team before they start the project, and you’re involved with the idea of the game as they’re developing it. And yeah, I think the best results come from teams that approach music from the beginning of the process.

 

[00:08:20] BH: Jess, you have some background in music. Can you tell us a little bit about your experience in this craft?

 

[00:08:28] JL: I learned piano at a pretty young age, and then ultimately went to school and studied performance piano in a school of music, very classically based. And since then, I’ll play in bands every now and then, but much less so these days, and those bands tend to be pretty out there. One of my favorite projects more recently was an ode to The Rite of Spring by Stravinsky. We called it Riot of Spring. For anyone who doesn’t know the background of that piece, when it first came out, it caused a riot. People were up in arms. They were like, “This isn’t music,” and they were all mad and people stormed out. And so our take on it was a totally electric version of it. Instead of an orchestra, we had two electric guitars, two synths, two drum sets, and just went all out, and we were trying to evoke the same feelings there. But, yeah, I’ve never actually combined my two interests in music and coding in any way, but I find it really fascinating and very exciting to learn more about Gwion, the music coding language you’ve created. Can you share some of the functionality it has that makes it particular to music and how you even approached creating the language?

 

[00:10:07] JA: As I said before, I used ChucK before. I even had a few gigs where I added a few Raspberry Pis or Kinects. I could sonify people moving in spaces, which means turning data into music, into sound. So it’s pretty easy in ChucK, which is an object-oriented programming language, which is pretty cool. You can define a class, say, Guitarist, which has some kind of style, some kind of skills, some kind of sound, and you can chuck it. The chuck operator is an equals and a greater-than, which is kind of an arrow. So you can chuck it into an instance of an effects pedal class and then chuck it into an instance of an amplifier class. That workflow is pretty good to get your sound chain going. But in ChucK, as I was coming from C and C++, I really missed function pointers, enums, generics, and this kind of stuff. So when I went into making Gwion, it just felt natural for those to exist.
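
[EDITOR’S NOTE: A minimal ChucK sketch of the workflow Jérémie describes: the => “chuck” operator wires a sound source through an effect into the speakers. The oscillator, reverb, and note choices here are illustrative, not from the episode.]

    // chuck a sine oscillator into a reverb, then into the soundcard output
    SinOsc s => JCRev r => dac;
    0.1 => r.mix;  // how much reverb to blend in
    while (true) {
        // pick a random MIDI note and convert it to a frequency
        Std.mtof(60 + Math.random2(0, 12)) => s.freq;
        250::ms => now;  // advance time by 250 milliseconds
    }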

 

[00:11:25] JL: Can you share why Gwion was necessary and why you chose to build that tool instead of using existing tools?

 

[00:11:35] JA: Yes. Simply, at some point, I was building quite a big project with it and my CPU just wouldn’t follow. So I needed something which would better fit my needs. That’s mainly because of the things I wanted to explore conceptually about scales, be it frequency scales or time scales. I could do this with ChucK, but my computer just wouldn’t follow. So I had to build a tool.

 

[00:12:15] JL: How did you build it differently in a way that your computer could handle that?

 

[00:12:20] JA: I was pretty naive at the beginning. It was mainly, “Okay, so I should build a lighter version in C and it’ll do.” And it was not quite that, but it enabled me to clean up the structures into a simpler workflow, be it for the virtual machine or the sound processing. And also I put in some C stuff. For instance, I did a memory pool. It was pretty impressive because at that time it almost cut my benchmark times by 30%. So that was huge. Also, I have some kind of VM with computed gotos, which is pretty C-specific, but that’s also very, very fast, whereas ChucK does not have a proper VM and uses pointers to functions, which makes a lot of indirections and is pretty slow.
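
[EDITOR’S NOTE: A minimal sketch of the computed-goto dispatch Jérémie mentions, using the GCC/Clang labels-as-values extension. The toy opcodes are illustrative; a real VM such as Gwion’s is far more involved. Each handler jumps straight to the next instruction’s handler instead of bouncing back through a switch or a function-pointer call, which removes a layer of indirection.]

    #include <stdio.h>

    enum { OP_INC, OP_DEC, OP_PRINT, OP_HALT };

    int main(void) {
        /* table of label addresses, indexed by opcode */
        static void *dispatch[] = { &&op_inc, &&op_dec, &&op_print, &&op_halt };
        int code[] = { OP_INC, OP_INC, OP_INC, OP_DEC, OP_PRINT, OP_HALT };
        int *ip = code;
        int acc = 0;

        goto *dispatch[*ip];  /* jump to the first handler */
    op_inc:   acc++;               goto *dispatch[*++ip];
    op_dec:   acc--;               goto *dispatch[*++ip];
    op_print: printf("%d\n", acc); goto *dispatch[*++ip];
    op_halt:  return 0;
    }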

 

[00:13:29] BH: So it seems like it isn’t necessarily the case that you had an aha moment about the architecture of how a music language should work in a general sense, but that you took on the problem and discovered a lot of wins along the way as you worked on it.

 

[00:13:48] JA: Yeah. Obviously, when I started, I had only done things like tiling window managers or stuff like that. So I knew next to nothing about making a programming language. At that time, I did not have the internet, so I would go to my friends and tell them, “Okay, can I borrow your computer for 20 minutes?” And I downloaded a lot of theses, read them, tried to understand them, and put that into my work. Also, at one point, I discovered the programming language design community on Reddit. There’s also a programdesign.com or .net, I don’t remember, and there are a lot of language designers over there. So I learned so many things. I mean, I didn’t even know what functional programming was. And then I was able to try Haskell and things like that and understand what pattern matching is, or features you don’t have in C.

 

[00:14:57] BH: In terms of the intersection of music and software development, how do you two feel about computers composing their own music? So not necessarily as a tool for humans to develop what’s in their mind and put it out there, but for computers to make the choices and come up with new music. I’m sure that’s a subject that’s crossed your mind.

 

[00:15:21] JA: Oh, yes. I think that’s something pretty interesting. When I started working with ChucK, there was the idea that I needed a tool to get new scale systems, and it’s great to have some rules, potentially rules that are different from what I do as a professional musician, and the computer iterates and reads them for you. I really like this idea of auto-generating music or styles of music.

 

[00:15:53] AMU: So this idea might have stirred a lot of people, but it has been around for a long time. I think it was introduced in the 1970s by Brian Eno. It is basically referred to as generative music, music that the artist doesn’t have full control over the result of. So basically, an artist sets and initializes the parameters and lets the system, the computer, or the instrument do the job for them. And I’ve been thinking about the philosophical problem of this matter, like, “Is this really music when it is not created by humans?” I think that’s many people’s concern too. Like, many artists are scared of the rise of AI and its very fast progress, that they may lose their jobs or something. But I think what makes music relatable is that human element behind the music. When you’re reading the notes that were written 300 years ago by a dead artist and you’re playing them for yourself, or you’re listening to music on Spotify, that human element is what makes music relatable for you. So I think that machine music doesn’t have a bright future because of the fact that it’s not human, because that’s what makes art art.

 

[00:17:22] JL: Is the human element completely taken away though in generative or machine music? I imagine there’s a human behind the algorithm, or a human can step in to influence it? Or is it purely from the computer?

 

[00:17:38] AMU: There are certain types of music, especially amongst the artists who are interested in modular synthesizers, which are basically types of instruments that come in smaller modules, and you have to build the whole instrument piece by piece yourself. And artists interested in modularity make a lot of generative music. And that feels human because the artist is constantly changing the parameters and modulating the sounds and creating different feelings. But a few months back, I was working on a neural network that I fed the notes of about 100,000 songs. As a result, I exported a new song from that neural network. And although it sounded great as music, it wasn’t as relatable, because everything that program gave me was something artificial. It’s so hard to explain because music is something that we can feel with our soul.

 

[MUSIC BREAK]

 

[AD]

 

[00:19:12] BH: Sick of your laptop overheating every time you try to run your Kubernetes application locally? With Ambassador Telepresence, you can intercept your services on your local machine so that you can develop on your services as if your laptop was running in the cluster. Never worry about running out of memory again no matter how complex your Kubernetes application gets. Ambassador Telepresence is free to use for teams with unlimited developers. Get started today at getambassador.io/devdiscuss.

 

[00:19:43] New Relic’s Application Monitoring Platform gives you detailed performance metrics for every aspect of your software environment. Manage application performance in real time, troubleshoot problems in your stack, and move beyond traditional monitoring with New Relic One, your complete software observability solution. Get started for free at developer.newrelic.com.

 

[00:20:04] To connect with the team behind New Relic directly, join The Relicans. The Relicans is a new community hub designed to help developers create cool projects, inspire one another, level up and learn in public. You can start a discussion about your favorite programming language, ask a question about software observability, share tutorials, and lots more. Join today at therelicans.com.

 

[AD ENDS]

 

[00:20:33] JL: Jérémie, do you agree that the human element is lost when working with generative music?

 

[00:20:40] JA: I tried a lot with generative systems and Kinects. So there’s obviously human input in that, as you use the data of the body. Also, I think what we feel about music is pretty much on social grounds. In my opinion, what makes us feel music is human is that we refer to some kind of sociality behind it. I’m not quite sure this element is necessarily lost when you make generative music, because you can tweak either your parameters or your algorithm with a social meaning behind them.

 

[00:21:22] BH: Jess, how do you feel about this subject?

 

[00:21:23] JL: I can definitely relate to and understand what Amu was trying to say. Like, if it’s completely made by the computer, I can just see how it doesn’t capture that. Like, I’d have a hard time picturing a computer pulling off a really beautiful symphony. But yeah, if we’re talking about mimicking, say, electronic music, I think it can get a lot of it. But the human element is really a crucial part of music making.

 

[00:21:52] JA: I don’t remember the name, but I think it was 30 years ago. There was a study with an algorithm. Basically, it was trained on Bach’s music, and the guys had the algorithm write a piece, and they had some university music experts analyze it. As I remember, I think they were all fooled and said it was from a human, from this date to this date. So it kind of works in the classical field too, mostly because, well, if you study Bach or [INAUDIBLE 00:22:34] writing, there are very precise rules. So you can’t just put notes somewhere because you want to, because some guy is going to say, “Oh, no. At that point, you’ve got parallel octaves or parallel fifths. So you’ve got to shift this.” Those are the rules.

 

[00:22:56] JL: Yeah. I was actually going to say that, of all the less modern genres, baroque music is probably the one that would be most mimicable. I guess when I think about Bach, I think about it from the piano perspective, but Bach was originally played on the clavier, and that instrument itself is kind of limited. So it doesn’t have quite as much range. It’s a lot more pingy, and I think that that’s more replicable than other genres that have a lot more depth and detail in a different way, because Bach really is all about precision, and computers are great at precision.

 

[00:23:34] AMU: So Google has a whole music research division called Magenta, and they recently launched a new website which can turn any sound that you give it as an input into multiple other instruments. So I could upload my voice talking there and turn that voice into, like, a violin or a trumpet. And one interesting fact for me was that they trained all their networks with Bach pieces, and the places where you could feel that the program had Bach baked in were when you put melodies into it that sounded too different from Bach’s type of writing music. So Bach is pretty popular with scientists when they want to do research on music and writing melodies.

 

[00:24:28] BH: I feel like a lot of music has a lot to do with how the audience feels about the artist, popular music at least. How one feels about The Beatles probably has a lot to do with the humans, and how many people in the ’60s loved and respected the artists. I think the same is true for any modern popular artist. And then there’s music in other contexts where it might be fun to know that it’s been generated, maybe even generated on the fly just for you by a computer. I could think of an application, hypothetically, where music is played to match the way I’m driving my car, to kind of give you the sensation of being in a soundtrack. With something like that, there might be enough quality in the novelty of the music working in reaction to you. That might be, in a special way, awesome when the computer does it. And when you’re talking about popular music, it has such a human relationship dynamic. Maybe it’s not just the music, but a lot of the context and the novelty.

 

[00:25:39] AMU: So a few months back, I made a plugin that generated melodies all by itself, and I didn’t have to do anything with the program. It generated a piece that shocked me so much that I spent a whole night just listening to it and wondering how it was possible that a computer, which doesn’t have that human element, could create such a weird and fascinating piece that constantly gave me goosebumps. I call it the same wisdom. So I generally think that most of these types of music, whether created by humans or not, are enjoyable, but each in its own way.

 

[00:26:46] JL: What kind of tools do you need to make it happen?

 

[00:26:49] AMU: So I want to approach it in the AI way. I want to train a neural network that generates music for me. You have two types of networks: they are either discriminative or generative. Discriminative networks give you information about different subjects, like they can predict the price of housing in the next year, but generative networks make something new out of some given data. And I saw lawsuits in recent years suggesting that if you want to train a generative neural network, the data that you use to train the network might be subject to copyright. So I should find a way to either release those songs under Creative Commons licenses or something similar. And I think this area of research in music is quite a new one, and there are many, many subjects that you can work on to innovate something new that hasn’t even existed before.

 

[00:28:04] JL: Yeah. That copyright piece is really interesting. It really limits the number of people who would have the type of access to the data that we’d need to really capture a lot of music in there.

 

[00:28:18] BH: So computers probably don’t really understand copyrights and plagiarism well.

 

[00:28:24] AMU: I don’t either.

 

[00:28:26] BH: Yeah. In another way, a computer might be exceptionally good at understanding these things. Given the right constraints, I think maybe a program could do the best job at being technically correct about copyright law and dynamically import that understanding into the composition. But I also suspect that our notions of copyright law in general need to keep evolving if we’re going to keep doing more and more things with computers, and music might be a bit of a canary in the coal mine. Will the court rule that the computer drew too much inspiration from a past composition? With a human, they might be able to find evidence as to motivations, whether it was accidental or malicious. It seems like that might be a weird future in terms of how society works around computers in general, but within music in an interesting way.

 

[00:29:28] JA: Yeah. I’d say if we keep the musical system we mostly use, the Western music system with these 12 notes and chords [INAUDIBLE 00:29:39], this kind of stuff, the major chord changes, we are going to hit a wall anyway. It could take a long while, because the number of things you can do with those rules might be huge, but it’s obviously finite. Plus, we mostly do songs in 4/4 or maybe 3/4 or 12/8, and we tend to go to the same rhythms, the same kinds of harmonies, the same kinds of progressions. So there’s a limit. We’re going to reach the limit at some point.
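
[EDITOR’S NOTE: A back-of-envelope illustration of “huge but finite”: ignoring rhythm and dynamics, a 16-note melody drawn from the 12 chromatic pitches allows at most 12^16 ≈ 1.8 × 10^17 sequences. That is astronomically many, yet still a finite pool, and the stylistic rules Jérémie mentions rule out the vast majority of them.]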

 

[00:30:18] BH: Do you think we’ll ever get to a time when computers can learn to appreciate music?

 

[00:30:26] AMU: Oh, that’s a hard question.

 

[00:30:29] JA: I think we could do some stuff: put in social data about how people react to this, give it parameter-type rules, some kind of consistency, or have it favor some kind of structure, some kind of sounds, and you’ll have it prefer some pieces of [INAUDIBLE 00:30:47].

 

[MUSIC BREAK]

 

[AD]

 

[00:31:06] BH: Chances are, like other software developers, you learn better by doing than just watching. Unfortunately, most online learning platforms still have you passively sit through videos instead of actually getting your hands dirty. Educative.io is different. Their courses are interactive and hands-on with live coding environments inside your browser so you can practice as you go. They’re also text-based, meaning you can skim back and forth like a book to the parts you’re interested in. Step up your learning in 2021. Visit educative.io/devdiscuss today to get a free preview and 10% off of an annual subscription.

 

[00:31:42] A common scene in technology companies everywhere, big conference table with the CTO on one end, developer teams on the other, the showdown. We have an idea, “Will it get funded?” More companies are feeling the pressure to go faster and stay ahead of the competition. Projects that have long timelines or no immediate impact are hard to justify. DataStax is sponsoring a contest with real projects, real money, and real CTOs. If you have a Kubernetes project that needs a database, the winner will get funded with a free year of DataStax Astra. Follow the link in the podcast description to submit your project. It’s time to impress the CTO and get your project funded.

 

[AD ENDS]

 

[00:32:26] JL: So since you both have experience with generative music, for our listeners who are interested in playing around, how would you recommend that they get started?

 

[00:32:38] AMU: A great place to start would be to experiment with tools like Sonic Pi, or, if they have experience in music production and sound design, a program called VCV Rack would be useful. Also, a friend of mine recently made a cool open source synthesizer called Bespoke, which is pretty fun to play with. And if you want to start coding generative music, I think it would be very useful to learn a framework called JUCE, which basically provides you with functions and abstractions that help you create synthesizers, and you can use your own algorithms with them to generate melodies or anything that you want.

 

[00:33:31] JL: Very cool. We’ll be sure to link to those resources in our show notes. Jérémie, what recommendations do you have for someone who’s looking to explore here?

 

[00:33:40] JA: Well, I think [INAUDIBLE 00:33:42] seems a good fit. Of course, I think Gwion is always a pretty good fit, but since it has less documentation and stuff, maybe it’s not that easy. When I started with ChucK, I found a community on electro-music.com, I think, and it was an amazing forum. They would share one-liners, which sometimes were just amazing, just a few generators chained in different ways, and a wide variety of sounds.

 

[00:34:22] JL: And one last question on that similar note. For people who are just starting out, what are some common mistakes that you see people make, anything to avoid, to help save time?

 

[00:34:34] JA: I think mistakes are part of the process. So just like when you play, maybe you improvise, just fail, and try not to fail the same way the next time. Try to fail some other way, some other way.

 

[00:34:53] JL: That's good general life advice too.

 

[00:34:56] AMU: One thing that I’ve experienced myself is that most of the time, I run into problems that don’t have any answers on the internet, and I have to spend, like, I don’t know, 10 or 12 hours not only looking at my code, but looking at the whole framework that I’m using. That might be exhausting, but it’s important to keep going, and even to share the solution to the problem you ran into when the internet didn’t have the answer, to help the next people who may run into the problem in the future, because this field is not that popular worldwide and it’s important to have a supportive community.

 

[00:35:45] BH: Awesome. Yeah. I think the best way for coding and music to intersect is in a way that’s going to be broadly accessible in both areas. The best coding platforms, even for the most advanced users, are ones that are not inherently complex. I mean, we try to remove complexity from the best kind of software development and not make something that’s experts-only. So I would think well-done software in the space of music, even if it truly is a programming language, is one that’s not only for the greatest experts in software development, but one that anyone with an ear for music and training in music can actually get their feet wet with.

 

[00:36:37] JA: I think, as a parallel, as far as I see it, as a professional musician who is also a coder: if you are doing some kind of modern jazz improvisation, you might be using different scales than the ones implied by the chords behind you. But if you want to be able to do this with some efficiency, you’ll have to internalize it. So you just put a small brick over a small brick, over a small brick, over a small brick, and that’s it. And I think good development is just the same, a simple pile of bricks, one small brick at a time.

 

[00:37:25] BH: Thank you both so much for joining us.

 

[00:37:28] JA: Thank you.

 

[00:37:29] AMU: Thank you for inviting me.

 

[00:37:39] JL: This show is produced and mixed by Levi Sharpe. Editorial oversight by Peter Frank and Saron Yitbarek. Our theme song is by Slow Biz. Additional music provided by Amu and Jérémie. We’ll put links to the songs in our show notes. If you have any questions or comments, email [email protected], and make sure to join our DevDiscuss Twitter chats on Tuesdays at 9:00 PM Eastern. Or if you want to start your own discussion, write a post on DEV using the #discuss tag. Please rate and subscribe to this show on Apple Podcasts.