
Procedural music generation techniques

optionbox 2020. 9. 8. 07:53



I've been giving a lot of thought to procedural content generation for a while now, and I've never seen much experimentation with procedural music. We have fantastic techniques for generating models, animations, and textures, but music is still either completely static or simple layered loops (e.g. Spore).

Because of that, I've been thinking about optimal music generation techniques, and I'm curious what other people have in mind. Even if you haven't considered it before, what do you think would work well? One technique per answer, please, and include examples where possible. The technique can use existing data or generate the music entirely from scratch, perhaps based on some kind of input (mood, speed, whatever).


Cellular automata - read up on them.

You can also try them out here.

Edit:

rakkarage provided another resource: http://www.ibm.com/developerworks/java/library/j-camusic/
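To make the idea concrete, here is a minimal sketch (not from the linked articles; the rule choice and note mapping are arbitrary) that steps a one-dimensional Rule 110 automaton and maps its lowest live cell onto a pentatonic scale:

    # Minimal sketch: map a 1-D cellular automaton (Rule 110) onto a scale.
    # Each generation is one "beat"; the lowest live cell picks the note.
    # All of these choices are illustrative, not canonical.
    RULE = 110
    WIDTH = 16
    PENTATONIC = [0, 2, 4, 7, 9]              # scale degrees in semitones

    def step(cells, rule=RULE):
        """Advance the automaton one generation (edges wrap around)."""
        out = []
        for i in range(len(cells)):
            left, mid, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
            idx = (left << 2) | (mid << 1) | right
            out.append((rule >> idx) & 1)
        return out

    def ca_melody(generations=16, root=60):   # root 60 = middle C (MIDI)
        cells = [0] * WIDTH
        cells[WIDTH // 2] = 1                 # single live seed cell
        notes = []
        for _ in range(generations):
            live = [i for i, c in enumerate(cells) if c]
            if live:
                degree = live[0] % len(PENTATONIC)
                octave = (live[0] // len(PENTATONIC)) % 2  # clamp to 2 octaves
                notes.append(root + 12 * octave + PENTATONIC[degree])
            else:
                notes.append(None)            # silence for an empty row
            cells = step(cells)
        return notes

    print(ca_melody())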


The most successful system will likely combine several techniques. I doubt you'll find one technique that works well for melody, harmony, rhythm, and bass sequence generation across all genres of music.

Markov chains, for instance, are well suited to melodic and harmonic sequence generation. This method requires analyzing existing songs to build the chain's transition probabilities. The real beauty of Markov chains is that the states can be whatever you want (a minimal sketch follows the list below):

  • For melody generation, try key-relative note numbers (e.g. if the key is C minor, C would be 0, D would be 1, D# would be 2, and so on).
  • For harmony generation, try a combination of a key-relative note number for the chord's root, the chord type (major, minor, diminished, augmented, etc.), and the chord inversion (root, first, or second).
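As promised above, a minimal sketch of the harmony case, assuming first-order transitions over (key-relative root, chord type, inversion) tuples; the training progression is invented purely for illustration, and the melody case is identical with plain integers as states:

    import random
    from collections import defaultdict

    # First-order Markov chain whose states are chord tuples
    # (key-relative root, chord type, inversion). Training data is made up.
    progression = [
        (0, "maj", 0), (5, "maj", 1), (7, "maj", 0), (0, "maj", 0),
        (9, "min", 0), (5, "maj", 0), (7, "maj", 2), (0, "maj", 0),
    ]

    transitions = defaultdict(lambda: defaultdict(int))
    for a, b in zip(progression, progression[1:]):
        transitions[a][b] += 1            # count observed successions

    def generate(length=8, start=(0, "maj", 0)):
        seq, state = [start], start
        for _ in range(length - 1):
            options = transitions[state]
            if not options:               # unseen state: restart at tonic
                state = start
                options = transitions[state]
            states, weights = zip(*options.items())
            state = random.choices(states, weights=weights)[0]
            seq.append(state)
        return seq

    for chord in generate():
        print(chord)                      # e.g. (0, 'maj', 0), (5, 'maj', 1), ...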

Neural networks are well suited to time-series prediction (forecasting), which means they are equally suited to 'predicting' a musical sequence when trained on existing popular melodies and harmonies. The end result will be similar to that of the Markov chain approach, and I can't think of any advantage over Markov chains other than a smaller memory footprint.
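For comparison, a toy version of the neural approach, assuming a single softmax layer trained with plain gradient descent on the same kind of next-note data (a real system would use a recurrent network and a much larger corpus):

    import numpy as np

    # Toy "next note" predictor: one softmax layer, trained by gradient
    # descent on a made-up melody of key-relative note numbers 0..7.
    rng = np.random.default_rng(0)
    VOCAB = 8
    melody = [0, 2, 4, 2, 0, 4, 5, 4, 2, 0, 2, 4, 4, 2, 2, 0]
    X = np.eye(VOCAB)[melody[:-1]]            # one-hot current note
    y = np.array(melody[1:])                  # next note as class label

    W = rng.normal(0, 0.1, (VOCAB, VOCAB))    # the only weight matrix
    for _ in range(500):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)     # softmax probabilities
        grad = p.copy()
        grad[np.arange(len(y)), y] -= 1       # d(cross-entropy)/d(logits)
        W -= 0.5 * X.T @ grad / len(y)

    def predict_next(note):
        """Most likely successor of a single note under the trained model."""
        return int(np.argmax(np.eye(VOCAB)[note] @ W))

    print(predict_next(0))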

In addition to pitches, you will need durations to determine the rhythm of the generated notes or chords. You can choose to incorporate this information into the Markov chain states or the neural network outputs, or you can generate durations separately and combine the independent pitch and duration sequences.
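A minimal sketch of the 'generate separately, then combine' option; both streams below are drawn uniformly for brevity, but either could come from its own Markov chain:

    import random

    # Independent pitch and duration streams, zipped into (pitch, duration)
    # events. Values in beats; all choices illustrative.
    SCALE = [0, 2, 4, 5, 7, 9, 11]        # major-scale degrees
    DURATIONS = [0.25, 0.5, 0.5, 1.0]     # weighted toward eighth notes

    pitches = [random.choice(SCALE) for _ in range(16)]
    durations = [random.choice(DURATIONS) for _ in range(16)]
    events = list(zip(pitches, durations))
    print(events)                         # e.g. [(4, 0.5), (0, 1.0), ...]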

Genetic algorithms can be used to evolve rhythm sections. A simple model could use a binary chromosome in which the first 32 bits represent the kick drum pattern, the second 32 bits the snare, the third 32 bits the closed hi-hat, and so on. The downside in this case is that continuous human feedback is required to assess the fitness of the newly evolved patterns.
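A sketch of that chromosome layout with mutation and crossover; the human feedback step is stubbed out as a comment, and all parameters are illustrative:

    import random

    # Binary chromosome: 32 bits per drum voice, evolved with mutation and
    # crossover. The user is the fitness function.
    VOICES = ["kick", "snare", "hat"]
    BITS = 32

    def random_chromosome():
        return [random.getrandbits(1) for _ in range(BITS * len(VOICES))]

    def mutate(chrom, rate=0.05):
        return [b ^ 1 if random.random() < rate else b for b in chrom]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def show(chrom):
        for v, name in enumerate(VOICES):
            bits = chrom[v * BITS:(v + 1) * BITS]
            print(f"{name:>6}: " + "".join("x" if b else "." for b in bits))

    # One hand-driven generation: the human is the fitness function.
    population = [random_chromosome() for _ in range(4)]
    for i, chrom in enumerate(population):
        print(f"--- pattern {i} ---")
        show(chrom)
    # keep = input("indices to keep, e.g. '0 2': ")   # human feedback step
    parents = population[:2]                          # pretend 0 and 1 won
    child = mutate(crossover(*parents))
    show(child)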

An expert system can be used to verify sequences generated by the other techniques. The knowledge base for such a validation system could probably be lifted from any good music theory book or website. Try Ricci Adams' musictheory.net.
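A minimal sketch of such a validator, with the rules invented here for illustration (a real knowledge base would be transcribed from a theory text):

    # Rule-based validator: each rule is a predicate over a MIDI note
    # sequence; validate() reports which rules a sequence breaks.
    def no_big_leaps(seq, max_leap=9):
        """Reject melodies with leaps larger than a major sixth."""
        return all(abs(b - a) <= max_leap for a, b in zip(seq, seq[1:]))

    def stays_in_key(seq, scale=frozenset({0, 2, 4, 5, 7, 9, 11})):
        """Reject notes outside the (major) scale, pitch-class-wise."""
        return all(n % 12 in scale for n in seq)

    def ends_on_tonic(seq):
        return seq[-1] % 12 == 0

    RULES = [no_big_leaps, stays_in_key, ends_on_tonic]

    def validate(seq):
        return [rule.__name__ for rule in RULES if not rule(seq)]

    print(validate([60, 62, 64, 65, 67, 71]))   # -> ['ends_on_tonic']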


There are over 50 years of research into these techniques, often overlooked by developers unfamiliar with the history of computer music and algorithmic composition. Numerous examples of systems and research that address these problems can be found here:

http://www.algorithmic.net


An easy and somewhat effective algorithm is to use 1/f noise, aka 'pink noise', to select durations and notes from a scale. This sounds sort of like music and can be a good starting point.
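A sketch of one common way to get 1/f behavior, the Voss 'dice' trick: several random values that re-roll at different rates are summed, so successive notes are correlated but still wander. All parameters are illustrative:

    import random

    # Voss "dice" approximation of 1/f noise: N_SOURCES dice re-roll at
    # rates 1, 1/2, 1/4, ... and their sum picks a scale degree.
    SCALE = [0, 2, 4, 5, 7, 9, 11]            # C major degrees (semitones)
    N_SOURCES = 4

    def pink_melody(length=16, root=60):      # root 60 = middle C (MIDI)
        dice = [random.randrange(len(SCALE)) for _ in range(N_SOURCES)]
        notes = []
        for step in range(length):
            for k in range(N_SOURCES):
                if step % (1 << k) == 0:      # die k re-rolls every 2^k steps
                    dice[k] = random.randrange(len(SCALE))
            total = sum(dice)                 # 0 .. N_SOURCES*(len(SCALE)-1)
            degree = total * len(SCALE) // (N_SOURCES * (len(SCALE) - 1) + 1)
            notes.append(root + SCALE[degree])
        return notes

    print(pink_melody())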

A better algorithm is to use Markov chains. Scan some example music and build a table of probabilities. In the simplest case, it would be something like: C has a 20% chance of following A. To make this better, look at the sequence of the past few notes; for example, 'C A B' has a 15% chance of being followed by B, a 4% chance of being followed by Bb, and so on. Then just pick notes using the probabilities of the previously chosen notes. This remarkably simple algorithm generates pretty good results. (A sketch of the higher-order variant follows.)
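A sketch of the higher-order variant described above, keying the table on the last two notes; the training data is invented for illustration:

    import random
    from collections import defaultdict

    # Order-2 Markov table: key = last two notes, value = counts of what
    # followed them in the (made-up) training tune.
    tune = "C A B C A B B C A Bb C A B C".split()

    table = defaultdict(lambda: defaultdict(int))
    for a, b, c in zip(tune, tune[1:], tune[2:]):
        table[(a, b)][c] += 1

    def generate(length=12):
        state = random.choice(list(table))    # random starting pair
        out = list(state)
        for _ in range(length - 2):
            options = table.get(state)
            if not options:                   # unseen pair: restart
                state = random.choice(list(table))
                options = table[state]
            notes, weights = zip(*options.items())
            nxt = random.choices(notes, weights=weights)[0]
            out.append(nxt)
            state = (state[1], nxt)
        return out

    print(" ".join(generate()))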

Markov chains for music generation


Dmitri Tymoczko has some interesting ideas and examples here:

http://music.princeton.edu/~dmitri/whatmakesmusicsoundgood.html


My software uses applied evolutionary theory to "grow" music. The process is similar to Richard Dawkins' The Blind Watchmaker program - MusiGenesis adds musical elements randomly, and then the user decides whether or not to keep each added element. The idea is to just keep what you like and ditch whatever doesn't sound right, and you don't have to have any musical training to use it.

The interface blows, but it's old - sue me.


I have always liked the old Lucasarts games that used the iMuse system, which produced a never-ending, reactive soundtrack for the game and was very musical (because most of it was still created by a composer). You can find the specs (including the patent) here: http://en.wikipedia.org/wiki/IMUSE

Nintendo seems to be the only company to still use an approach similar to iMuse to create or influence the music on the fly.

Unless your project is very experimental, I would not abandon the use of a composer - a real human composer will produce much more musical and listenable results than an algorithm.

Compare it to writing a poem: you can easily generate nonsense poems that sound very avant-garde, but replicating Shakespeare with an algorithm is difficult, to put it mildly.


Have you taken a look at SoundHelix (http://www.soundhelix.com)? It's an open-source Java framework for algorithmic random music creation that produces pretty neat music. You can use SoundHelix as a standalone application, as an applet embedded in a web page, as a JNLP-based applet, or you can include it in your own Java program.

Examples generated with SoundHelix can be found here: http://www.soundhelix.com/audio-examples


Research on non-boring procedural music generation goes way back. Browse old and new issues of Computer Music Journal http://www.mitpressjournals.org/cmj (no real domain name?). It has serious technical articles of actual use to music synthesis tinkerers, soldering iron jockeys, bit herders, and academic researchers. It's not a fluffy reviews-and-interviews rag like several of the mags you can find in major bookstores.


Such a big subject. You could take a look at my iPad app, Thicket, or my Ripple software at morganpackard.com. In my experience, most of the academic approaches to dynamic music generation come up with stuff that sounds, well, academic. I think the more successful stuff is found on the fringes of the club/electronica world. Monolake is my hero in this respect. Very listenable stuff, very much computer-generated. My own music isn't bad either. Paul Lansky's "Alphabet Book" is a nice example of extremely listenable algorithmic music, especially considering that he's an academic guy.


The technique I've been considering is to create small musical patterns, up to a bar or so. Tag these patterns with feeling identifiers such as 'excitement', 'intense', etc. When you want to generate music for a situation, pick a few patterns based on these tags and pick an instrument you want to play it with. Based on the instrument, figure out how to combine the patterns (e.g. on a piano you may be able to play it all together, depending on hand span, on a guitar you may play the notes in rapid succession) and then render it to PCM. In addition, you could change key, change speed, add effects, etc.
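A minimal sketch of that tag-and-combine idea; the patterns, tags, and per-instrument arranging rules are all invented for illustration, and rendering to PCM is left out:

    import random

    # One-bar patterns tagged with a mood, selected by tag, then combined
    # differently depending on the instrument's capabilities.
    PATTERNS = {
        "excitement": [[0, 4, 7, 12], [0, 7, 12, 16]],
        "intense":    [[0, 1, 0, -1], [0, 3, 1, 3]],
        "calm":       [[0, 4, 7, 4],  [0, 5, 9, 5]],
    }

    def pick_patterns(mood, count=2):
        return random.sample(PATTERNS[mood], k=count)

    def arrange(patterns, instrument):
        if instrument == "piano":       # wide hand span: stack as chords
            return [tuple(notes) for notes in zip(*patterns)]
        if instrument == "guitar":      # play notes in rapid succession
            return [n for notes in zip(*patterns) for n in notes]
        raise ValueError(instrument)

    print(arrange(pick_patterns("excitement"), "guitar"))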


The specific technique you're describing is something Thomas Dolby was working on ten or fifteen years ago, though I can't remember now what he called it so I can't give you a good search term.

But see this Wikipedia article and this Metafilter page.


The book Algorithmic Composition is a good tour of the several methods used:

"Topics covered are: markov models, generative grammars, transition networks, chaos and self-similarity, genetic algorithms, cellular automata, neural networks and artificial intelligence."

It is a good starting point on this wide topic; however, it never describes in depth how each method works. It provides a good overview of each, but it will not be enough if you do not already have some knowledge of them.


Back in the late '90s, Microsoft created an ActiveX control called the "Interactive Music Control" which did exactly what you're looking for. Unfortunately, they seem to have abandoned the project.


Not quite what you're after, but I knew someone who looked at automatically generating DJ sets called Content Based Music Similarity.


If you're into deeper theories about how music hangs together, Bill Sethares' site has some interesting twists.


I've been looking into doing this project proposal - "8.1" from the "Theory and praxis in programming language" research group at the University of Copenhagen's department of CS:

8.1 Automated Harvesting and Statistical Analysis of Music Corpora

Traditional analysis of sheet music consists of one or more persons analysing rhythm, chord sequences and other characteristics of a single piece, set in the context of an often vague comparison of other pieces by the same composer or other composers from the same period.

Traditional automated analysis of music has barely treated sheet music, but has focused on signal analysis and the use of machine learning techniques to extract and classify within, say, mood or genre. In contrast, incipient research at DIKU aims to automate parts of the analysis of sheet music. The added value is the potential for extracting information from large volumes of sheet music that cannot easily be done by hand and cannot be meaningfully analysed by machine learning techniques.

This - as I see it - is the opposite direction of your question, but the data generated could - I imagine - be used in some instances of procedural music generation.


My opinion is that generative music only works when it goes through a rigorous selection process. David Cope, an algorithmic music pioneer, would go through hours of musical output from his algorithms (which I think were mostly Markov Chain based) to pick out the few that actually turned out well.

I think this selection process could be automated by modeling the characteristics of a particular musical style. For instance, a "disco" style would award lots of points for a bassline that features offbeats and drum parts with snares on the backbeats but subtract points for heavily dissonant harmonies.
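A sketch of the rhythm side of such an automated critic; the 16-step grid, the rules, and the weights are all invented for illustration:

    # Automated "disco critic": score a candidate pattern against simple
    # style rules like the ones described above.
    def score_disco(bass, snare):
        """bass/snare: 16-step patterns of 0/1 (sixteenth-note grid)."""
        score = 0
        offbeats = [2, 6, 10, 14]        # the "and" of each beat
        backbeats = [4, 12]              # beats 2 and 4
        score += 2 * sum(bass[i] for i in offbeats)    # reward offbeat bass
        score += 3 * sum(snare[i] for i in backbeats)  # reward backbeat snare
        score -= 1 * sum(snare[i] for i in range(16)   # punish snare spam
                         if i not in backbeats)
        return score

    offbeat_bass = [1 if i % 4 == 2 else 0 for i in range(16)]
    tight_snare  = [1 if i in (4, 12) else 0 for i in range(16)]
    print(score_disco(offbeat_bass, tight_snare))      # high score: 14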

The fact is that the music composition process is filled with so many idiomatic practices that they are very difficult to model without specific knowledge of the field.


I've been working on a Python module for procedural music. I just programmed out what I know about notes, scales, and chord construction, then have been able to let it randomly generate content from those constraints. I'm sure there's more theory and patterns a system like that could be taught, especially by someone who understands the subject better. Then you can use those systems as constraints for genetic algorithms or randomized content generation.

You can go over my implementation here, especially the randomly generated lead example may be useful to you. Someone with a solid understanding of chord progressions could create a song structure from techniques like that and implement constrained random melodies like this over it. My knowledge of music theory does not extend that far.

But basically, you'll need to encode the theory of the kind of music you want to generate, and then use that as a constraint for some algorithm for procedurally exploring the range of that theory.
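A minimal sketch of that approach, assuming made-up constraint rules (chord tones on strong beats, scale tones elsewhere); this is not the module linked above, just an illustration of theory-as-constraints:

    import random

    # Encode a bit of theory (scale and triad construction), then generate
    # randomly inside those constraints.
    MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]

    def build_scale(root):
        notes, n = [root], root
        for step in MAJOR_STEPS[:-1]:
            n += step
            notes.append(n)
        return notes

    def triad(scale, degree):
        """Stack thirds on a 0-based scale degree, staying in key."""
        return [scale[(degree + k) % 7] + 12 * ((degree + k) // 7)
                for k in (0, 2, 4)]

    def melody(scale, chords, beats_per_chord=4):
        out = []
        for chord in chords:
            tones = triad(scale, chord)
            for beat in range(beats_per_chord):
                if beat % 2 == 0:            # strong beat: chord tone
                    out.append(random.choice(tones))
                else:                        # weak beat: any scale note
                    out.append(random.choice(scale))
        return out

    c_major = build_scale(60)                # C major from middle C
    print(melody(c_major, chords=[0, 3, 4, 0]))   # I-IV-V-I progression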

Source: https://stackoverflow.com/questions/180858/procedural-music-generation-techniques
