                               Brainsynth Modules
================================================================================

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|  1                            Csound synthesis                               |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Give parameters as input.
- Produces a .wav file based on the parameters and a predefined synth. ALT:
  the synth can be given as an argument. (See the sketch below.)
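
A minimal sketch of what this module might look like in Python, assuming the
ctcsound wrapper and a hypothetical predefined synth.csd whose instrument 1
reads its parameters from the score line (file names and parameter layout are
placeholders, not a fixed design):

    # Hypothetical synthesis module: render one .wav from a parameter list.
    # Assumes ctcsound (the Csound Python API wrapper) and a predefined
    # synth.csd whose instrument 1 reads p4, p5, ... as synth parameters.
    import ctcsound

    def render(params, out_wav, csd_path="synth.csd", dur=1.0):
        cs = ctcsound.Csound()
        cs.setOption("-o" + out_wav)   # write rendered audio to out_wav
        cs.compileCsd(csd_path)
        cs.start()
        # One score event: instr 1, start 0, duration dur, then the parameters.
        cs.readScore("i1 0 {} {}".format(dur, " ".join(str(p) for p in params)))
        cs.perform()
        cs.cleanup()
        cs.reset()

    # e.g. render([440, 0.5, 0.2], "out.wav")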

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|  2                            Data generation                                |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Make parameter combinations (either by randomly changing parameter values
  on each iteration, or by iterating through values with predefined step
  lengths).
- Make wav files from the parameters using the synthesis module.
- Save the parameters in a .txt file along with the file name of the
  synthesized sound. (A sketch of this module follows below.)
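
A possible shape for this module, assuming the render() helper from the
synthesis sketch above; both a parameter grid and random sampling are shown,
and the ranges are made-up examples:

    # Hypothetical data-generation module: build parameter combinations,
    # render each one, and log "<wav name> <param 1> <param 2> ..." per line.
    import itertools
    import random

    PARAM_RANGES = [            # example ranges: (min, max, step) per parameter
        (200.0, 800.0, 200.0),
        (0.1, 0.9, 0.4),
    ]

    def steps(lo, hi, step):
        vals, v = [], lo
        while v <= hi + 1e-9:
            vals.append(round(v, 6))
            v += step
        return vals

    def grid_params():           # iterate with predefined step lengths
        return itertools.product(*(steps(lo, hi, st) for lo, hi, st in PARAM_RANGES))

    def random_params(n):        # randomly change parameter values each iteration
        return ([random.uniform(lo, hi) for lo, hi, _ in PARAM_RANGES] for _ in range(n))

    def generate(combos, index_file="params.txt"):
        with open(index_file, "w") as idx:
            for i, params in enumerate(combos):
                wav = "sound_{:06d}.wav".format(i)
                render(list(params), wav)        # from the synthesis sketch
                idx.write(wav + " " + " ".join(str(p) for p in params) + "\n")

    # generate(grid_params())  or  generate(random_params(1000))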

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|  3                             Model training                                |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Go through the .txt file line by line.
- For each line, read the .wav file in as input (train x) and the parameters
  as output (train y). (See the training sketch below.)
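
A rough Keras sketch of this step, assuming the params.txt layout from the
data-generation sketch (wav name followed by the parameter values) and
equal-length 16-bit mono wav files; layer sizes are arbitrary:

    # Hypothetical training module. Reads params.txt line by line, loads each
    # .wav as the network input and the logged parameters as the target.
    import numpy as np
    from scipy.io import wavfile
    from tensorflow import keras

    def load_dataset(index_file="params.txt"):
        xs, ys = [], []
        with open(index_file) as idx:
            for line in idx:
                fields = line.split()
                wav_name, params = fields[0], [float(p) for p in fields[1:]]
                _, samples = wavfile.read(wav_name)
                xs.append(samples.astype("float32") / 32768.0)  # 16-bit mono assumed
                ys.append(params)
        return np.array(xs), np.array(ys)

    x_train, y_train = load_dataset()

    model = keras.Sequential([
        keras.layers.Dense(256, activation="relu", input_shape=(x_train.shape[1],)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(y_train.shape[1]),     # one output per synth parameter
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(x_train, y_train, epochs=10, batch_size=32)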

== 3.1 =========================== Some notes ==================================

- Might have to deal with *lots* of wav files, so it could be a good idea to
  make them in batches, train the network on a batch and then delete that
  batch before repeating with new parameters. (See the loop sketch below.)
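
One way that loop could look, reusing the hypothetical generate(),
random_params() and load_dataset() helpers plus the model from the sketches
above:

    # Hypothetical batch-wise loop: make a small batch of wav files, train on
    # it, then delete the files before generating the next batch, so the whole
    # dataset never has to exist on disk at once.
    import glob
    import os

    for batch in range(100):
        generate(random_params(500), index_file="params.txt")  # data-generation sketch
        x_batch, y_batch = load_dataset("params.txt")          # training sketch
        model.fit(x_batch, y_batch, epochs=1, batch_size=32)
        for wav in glob.glob("sound_*.wav"):
            os.remove(wav)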

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|  4                               Generation                                  |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Take a *new* periodic wave (with the same length as the training data) and
  feed it into the network.
- Use the resulting parameters as input to the Csound synthesis. (See the
  sketch below.)
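
A sketch of this step, assuming the trained model, the render() helper and the
training-data length from the sketches above, and using a naive sawtooth as
the new periodic wave:

    # Hypothetical generation step: make a new periodic wave of the same
    # length as the training examples, ask the network for parameters, and
    # feed those parameters back into the Csound synthesis module.
    import numpy as np

    n_samples = x_train.shape[1]        # same length as the training data
    sr, freq = 44100.0, 330.0
    t = np.arange(n_samples) / sr
    saw = 2.0 * (t * freq - np.floor(0.5 + t * freq))   # sawtooth in [-1, 1)

    predicted = model.predict(saw[np.newaxis, :].astype("float32"))[0]
    render(list(predicted), "resynthesized.wav")         # synthesis sketch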

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|                               Old description                                |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. Use the Csound Python API to make the Brainsynth idea, i.e. a neural
   network to get Csound parameters.\*
2. Use the same principle for AR modelling, i.e. train a network to choose
   the best AR coefficients (or any other [speech] synthesis technique).
   (A Levinson-Durbin sketch appears at the end of this note.)

(Generally lots of possibilities with Python + Keras + Csound.)

\*  In more detail: Make a DNN that...
- is trained (with supervision) on an input wav file and its corresponding
  Csound synth parameters as output.
- The wav file is of course the sound produced by Csound when supplied with
  the parameters used for the training *output*. So: lots of data can be
  generated simply by giving the Csound synth lots of different parameters
  (either randomly chosen, or iterated through all params, or both) and
  creating the corresponding wave files.
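
For the AR-modelling variant in point 2, the training targets could be LPC/AR
coefficients rather than Csound parameters. A small pure-numpy Levinson-Durbin
sketch for extracting them (the order of 12 and the per-frame framing are
assumptions):

    # Hypothetical target extraction for the AR idea: estimate the AR (LPC)
    # coefficients of one frame with the Levinson-Durbin recursion; a network
    # could then be trained to predict these from the frame.
    import numpy as np

    def ar_coefficients(frame, order=12):
        # Autocorrelation of the frame for lags 0..order.
        r = np.array([np.dot(frame[:len(frame) - k], frame[k:])
                      for k in range(order + 1)])
        a = np.zeros(order + 1)
        a[0], err = 1.0, r[0]
        for i in range(1, order + 1):
            acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
            k = -acc / err
            prev = a.copy()
            for j in range(1, i + 1):
                a[j] = prev[j] + k * prev[i - j]
            err *= 1.0 - k * k
        return a          # a[0] is 1.0; a[1:] are the AR coefficients

    # Resynthesis would drive the all-pole filter 1/A(z) with an excitation,
    # e.g. scipy.signal.lfilter([1.0], a, excitation).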