asc2svg

asc2svg was intended to be an ASCII-diagram-to-SVG converter, like `ditaa` and Svgbob

commit 6210c91bee79cb185778f933dbc3a5482b2478f0
parent e887120ad48b4936beb021f59b69bca80e42751c
Author: bkopf <vetlehaf@stud.ntnu.no>
Date:   Thu, 29 Nov 2018 20:15:47 +0100

Add a bunch of .note files as examples of good style

Actually just moved them here because I don't want to clutter
the other repos with .note files until asciinote actually works :))

Diffstat:
A notes/aprog_ideas_brainsynth_ideas.note | 65 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A notes/aprog_ideas_mini_projects.note    | 46 ++++++++++++++++++++++++++++++++++++++++++++++
A notes/aprog_ideas_taact.note            | 43 +++++++++++++++++++++++++++++++++++++++++++
A notes/aprog_ideas_vinstruments.note     | 29 +++++++++++++++++++++++++++++
A notes/aprog_pynblinc_readme.note        | 59 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
5 files changed, 242 insertions(+), 0 deletions(-)

diff --git a/notes/aprog_ideas_brainsynth_ideas.note b/notes/aprog_ideas_brainsynth_ideas.note
@@ -0,0 +1,65 @@
+
+ Brainsynth Modules
+================================================================================
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 1 Csound synthesis                                                           |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Give parameters as input.
+- Produces a .wav file based on parameters and a predefined synth. ALT:
+  Synth can be given as an argument.
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 2 Data generation                                                            |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Make parameter combinations (either by randomly changing param. values
+  on each iteration, or by iterating through
+  values with predefined step lengths).
+- Make wavefiles from the parameters using the synthesis module.
+- Save the parameters in a .txt file along with the file name of the
+  synthesized sound.
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 3 Model training                                                             |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Go through the .txt file line by line.
+- For each line, read the .wav file in as input (train x) and the parameters
+  as output (train y).
+
+
+== 3.1 =========================== Some notes ==================================
+
+- Might have to deal with *lots* of wav files, so it could be a good idea to
+  make them in batches, train the network on a batch and then delete that batch
+  before repeating with new params.
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 4 Generation                                                                 |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- Take a *new* periodic wave (with the same length as the training data) and
+  feed it into the network.
+- Use the resulting parameters as input to the Csound synthesis.
+
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| Old description                                                              |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+1. Use the Csound Python API to make the Brainsynth idea, i.e. a neural
+   network to get Csound parameters.\*
+2. Use the same principle for AR modelling, i.e. train a network to choose
+   the best AR coefficients (or any other [speech] synthesis technique).
+
+(Generally lots of possibilities with Python + Keras + Csound.)
+
+\* In more detail: Make a DNN that...
+- is trained (with supervision) on an input wavefile and its corresponding
+  Csound synth parameters as output.
+- The wavefile is of course the sound produced by Csound when supplied with the
+  parameters used for the training *output*. So:
+  Lots of data can be generated simply by giving the Csound synth lots of
+  different parameters (either randomly chosen, or iterated through all
+  params, or both) and creating the corresponding wave files.
diff --git a/notes/aprog_ideas_mini_projects.note b/notes/aprog_ideas_mini_projects.note
@@ -0,0 +1,46 @@
+
+ Mini Projects
+================================================================================
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 1 C(++) Speech Features                                                      |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Make a lower-level version of Python Speech Features that can perform the
+following:
+- FFT
+- Filtering (for the filter bank, aka MFSC)
+- DCT (for MFSC decorrelation)
+
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 2 Pitch detection                                                            |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- Python script / program that lets the user whistle melodies and get a Csound
+  / OSC / MIDI melody out (with some adjusting)
+- First use simple autocorrelation.
+  Maybe use some open-source pitch detection
+  later.
+
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 3 (JACK) Looper program                                                      |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- Triggered by a single button (/ USB foot pedal)
+- Window functionality for crossfades / smooth transitions
+
+
++------------------------------------------------------------------------------+
+| 4 CLI sampler component: `detect_start()`                                    |
++------------------------------------------------------------------------------+
+- `detect_start()` helps find where the actual audio starts in an audio file.
+- Just find the biggest leap in RMS (using a window) to approximate where the
+  sound starts.
+- Arguments: `(double * sound, uint frame_size)`. That should be it...
+
+
++------------------------------------------------------------------------------+
+| V Far-future / Never                                                         |
++------------------------------------------------------------------------------+
+- A RaspPi can be used as a sound module for MIDI keyboards / dig. piano w/
+  Csound.
+- Can make a complete workstation out of a keyboard connected to a RaspPi (or
+  a more powerful computer...)
+
diff --git a/notes/aprog_ideas_taact.note b/notes/aprog_ideas_taact.note
@@ -0,0 +1,43 @@
+
+ taact
+================================================================================
+
+
+**T**erminal **A**udio **Act**ion
+
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 1 What?                                                                      |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+A TUI program that lets the user perform audio editing in the terminal, with
+the ability to "scroll" through the audio file and play segments, as one
+would in programs like Audacity.
+
+_If I find that it's worth it, taact might also implement some audio effects,
+mixing and other functionality found in Audacity.
+It could also be a good
+opportunity to use some of the ideas found in mini\_projects.md.
+I suspect that sox, or at least Csound, already has a lot of effects covered,
+though, so I will focus on creating a program for quickly editing audio
+samples in the terminal._
+
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 2 Why?                                                                       |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The program sox seems like a good tool for certain audio editing tasks;
+however, a command-based program can only go so far. Rerunning `play` and `trim`
+commands until the timing is right seems tedious. `taact` will be a vim-like
+audio editing program that lets you scroll through an audio clip at a chosen
+level of precision (sec, ms, samples) and, with a single button press, play
+from the current position in the audio clip.
+
+This way it should be easier to find the right place to slice the audio, or to
+set markers that can be used by other programs, for instance Csound or sox.
+
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 3 How?                                                                       |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+As this will be a TUI, the easiest way to implement it might be with (n)curses,
+although it is worth looking into using ANSI escape sequences directly to
+determine which keys are pressed...
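The scroll-at-a-chosen-precision idea from the "Why?" section can be sketched independently of the UI layer. Below is a minimal Python sketch; the `Cursor` class, its method names, and the fixed 44.1 kHz sample rate are illustrative assumptions, not code from this repo:

```python
# Hypothetical sketch of taact's cursor model: scroll through an audio
# clip in seconds, milliseconds or samples, clamped to the clip bounds.
SAMPLE_RATE = 44100  # assumed fixed rate for the sketch

class Cursor:
    # samples moved per unit step
    STEPS = {"sec": SAMPLE_RATE, "ms": SAMPLE_RATE // 1000, "samples": 1}

    def __init__(self, n_frames, unit="sec"):
        self.n_frames = n_frames  # clip length in samples
        self.pos = 0              # current offset in samples
        self.unit = unit          # current precision level

    def step(self, count):
        """Move by `count` units, clamped to the clip boundaries."""
        self.pos += count * self.STEPS[self.unit]
        self.pos = max(0, min(self.pos, self.n_frames - 1))

    def seconds(self):
        return self.pos / SAMPLE_RATE

cur = Cursor(n_frames=5 * SAMPLE_RATE)  # a 5-second clip
cur.step(2)                             # two seconds forward
cur.unit = "ms"                         # switch precision
cur.step(-500)                          # half a second back
```

A (n)curses or raw-ANSI front end would then only have to bind keys such as `h`/`l` to `cur.step(-1)` / `cur.step(+1)` and a play key to playback from `cur.pos`.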
diff --git a/notes/aprog_ideas_vinstruments.note b/notes/aprog_ideas_vinstruments.note
@@ -0,0 +1,29 @@
+
+ Csound / JACK Instrument ideas
+================================================================================
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 1 Drum synth                                                                 |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Analyze some common (synthetic) drum sounds and make a synth based on these
+components, among others:
+- Initial (higher frequency) punch
+- Tail (lower frequency)
+- Noise (for snare)
+- etc...
+
+Some available effects and functionality:
+- Distortion (start with clipping. All compo
+- Graphical representation of the synthesized signal and all effects applied
+
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 2 Glitchy sequencer                                                          |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+See GlitchySequencerID.
+
+
++------------------------------------------------------------------------------+
+| 3 Gross Beat knock-off                                                       |
++------------------------------------------------------------------------------+
+Just an open-source JACK / Csound knock-off of Gross Beat...
diff --git a/notes/aprog_pynblinc_readme.note b/notes/aprog_pynblinc_readme.note
@@ -0,0 +1,59 @@
+
+ pynblinc
+================================================================================
+
+`pynblinc` is the Python implementation of `nblinc` (*n*early *bli*nd
+*c*omposition), which is kind of a middle ground between a tracker / step
+sequencer and a piano roll. The goal of (py)nblinc is to achieve the following:
+
+- Keyboard-based pattern / melody composition without the need to write text.
+- VIM-like keyboard bindings
+- TUI instead of GUI based, i.e. using [curses](https://docs.python.org/3/howto/curses.html).
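The keyboard-driven, text-free composition the list above aims for could start out as a plain dispatch table mapping keys to editing actions. A hedged sketch follows; the bindings, names, and 16-step pattern are invented for illustration and are not from pynblinc:

```python
# Hypothetical vim-like keymap sketch: keys dispatch to editing actions
# on a one-bar, 16-step pattern grid. A curses loop would feed real
# keystrokes; here feed() simulates them with a string.
pattern = [0] * 16       # one bar, sixteen steps, all silent
cursor = {"step": 0}     # current step under the cursor

def move(delta):
    """Move the cursor, wrapping around the bar."""
    cursor["step"] = (cursor["step"] + delta) % len(pattern)

def toggle():
    """Toggle a note on/off at the cursor position."""
    step = cursor["step"]
    pattern[step] = 0 if pattern[step] else 1

KEYMAP = {
    "h": lambda: move(-1),   # left, vim-style
    "l": lambda: move(+1),   # right, vim-style
    "x": toggle,             # toggle note
}

def feed(keys):
    """Dispatch a string of keystrokes through the keymap."""
    for k in keys:
        KEYMAP[k]()

feed("llx")  # move right twice, then toggle step 2
```

The same dispatch-table shape drops straight into a `curses` main loop (`KEYMAP[screen.getkey()]()`), which keeps the bindings declarative and easy to remap.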
+
+Although it can be extended to support MIDI or OSC output, pynblinc will first
+and foremost be developed with *Csound* in mind. The basic idea is that
+pynblinc will be used for composing a score, while nvim is used to define the
+orchestra (instruments).
+
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 1 Suggested use                                                              |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- Paper plan (high-level composition). You can't avoid composition on paper.
+  Don't try to!
+- Jamming (recording and melody composition)
+- Putting melodies and samples together.
+- Use Csound with separate score (`.sco`) and orchestra (`.orc`) files.
+- Use (py)nblinc to compose score files and (n)vim to define the orchestra.
+
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 2 Programming resources                                                      |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+This section links to documentation for some of the libraries used in pynblinc.
+
+== 2.1 ============================== Mido =====================================
+For MIDI support:
+- [MIDO General](http://mido.readthedocs.io/en/latest/)
+- [MIDI Files](https://github.com/olemb/mido/blob/master/mido/midifiles/midifiles.py)
+- [MIDI Tracks](https://github.com/olemb/mido/blob/master/mido/midifiles/tracks.py)
+
+== 2.2 ============================== TUI ======================================
+
+- [curses](https://docs.python.org/3/howto/curses.html)
+- [more curses](https://docs.python.org/3/library/curses.html)
+
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+| 3 Notes on programming                                                       |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- As the score will be edited by pynblinc and the orchestra by vim, it makes
+  sense to use separate files.
+- For this to work with the current program, the score must be limited to
+  *i* statements only.
+  This should be fine for now, because:
+  1. *f* statements can be moved to the orchestra (using the *ftgen* opcode,
+     which is really the preferred way to load tables anyway).
+  2. *t* statements aren't supported by ctcsound anyway, it seems.
+
+  To deal with tempos, allow a single *t* statement as the first line of the
+  score and parse this manually in pynblinc.
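The restriction described in that last bullet — one optional leading `t` statement, then `i` statements only — could be enforced with a small parser along these lines. The function name, error handling, and comment/blank-line policy are assumptions for the sketch, not pynblinc code:

```python
# Hypothetical validator for the restricted score format: an optional
# single "t" statement on the first line, then only "i" statements,
# Csound ";" comments and blank lines.
def parse_score(text):
    tempo_line = None
    i_statements = []
    lines = [ln.strip() for ln in text.splitlines()]
    for idx, ln in enumerate(lines):
        if not ln or ln.startswith(";"):  # blank line or comment
            continue
        if ln.startswith("t"):
            if idx != 0:
                raise ValueError("only a single leading t statement is allowed")
            tempo_line = ln               # kept aside for manual parsing
        elif ln.startswith("i"):
            i_statements.append(ln)       # passed on to Csound/ctcsound
        else:
            raise ValueError(f"unsupported statement: {ln!r}")
    return tempo_line, i_statements

score = """t 0 120
i 1 0 1 440
; a comment
i 1 1 1 660
"""
tempo, notes = parse_score(score)
```

Here the `t` line never reaches ctcsound; pynblinc would interpret it itself (e.g. to scale `i` statement start times), sidestepping the missing `t` support.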