
I'm at the point where I need to design how the user will (optionally, of course) string all the AudioUnits together to make their custom DSP graph. There are two approaches that come to mind, but there are probably others I haven't thought of.
Approach one is the simplest, but the least flexible. The editor window would have a list of available AudioUnits and a list of the AudioUnits in the current graph. The order of the units in the second list would determine the order in which the DSP is applied.
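For concreteness, here's a rough sketch of how that ordered list might be wired up with Core Audio's AUGraph API. The file-player, delay, and filter units are just placeholders, and error checking and stream-format setup are omitted entirely.

```c
// Sketch: turning an ordered list of effect AudioUnits into a serial chain
// with the AUGraph API. The specific units are placeholders.
#include <AudioToolbox/AudioToolbox.h>

static AUNode AddNode(AUGraph graph, OSType type, OSType subType)
{
    AudioComponentDescription desc = {
        .componentType         = type,
        .componentSubType      = subType,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AUNode node;
    AUGraphAddNode(graph, &desc, &node);
    return node;
}

int main(void)
{
    AUGraph graph;
    NewAUGraph(&graph);

    AUNode source = AddNode(graph, kAudioUnitType_Generator, kAudioUnitSubType_AudioFilePlayer);
    AUNode output = AddNode(graph, kAudioUnitType_Output, kAudioUnitSubType_DefaultOutput);

    // The "current AudioUnits" list, in the order the user arranged it.
    AUNode effects[] = {
        AddNode(graph, kAudioUnitType_Effect, kAudioUnitSubType_Delay),
        AddNode(graph, kAudioUnitType_Effect, kAudioUnitSubType_LowPassFilter)
    };
    const size_t count = sizeof(effects) / sizeof(effects[0]);

    // Approach one: list order is processing order, so each unit's output 0
    // simply feeds the next unit's input 0.
    AUGraphConnectNodeInput(graph, source, 0, effects[0], 0);
    for (size_t i = 0; i + 1 < count; ++i)
        AUGraphConnectNodeInput(graph, effects[i], 0, effects[i + 1], 0);
    AUGraphConnectNodeInput(graph, effects[count - 1], 0, output, 0);

    AUGraphOpen(graph);
    AUGraphInitialize(graph);
    return 0;
}
```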
Approach two is considerably more complex, but also more flexible. The editor window would still have a list of available AudioUnits, but the list of current AudioUnits would be replaced by a graphical representation of the DSP graph. The inputs and outputs of each AU could then be connected graphically, achieving the same result as above.
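Under the hood, the difference between the two approaches is really just the data model: approach one stores an ordered array, while approach two would store an explicit list of connections, each of which maps one-to-one onto an AUGraphConnectNodeInput() call. A sketch of what that might look like, with illustrative names:

```c
// Sketch of the connection record a graphical editor might keep for each
// patch cord the user draws. The struct and function names are hypothetical.
#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    AUNode sourceNode;
    UInt32 sourceOutput;   // output bus on the source AU
    AUNode destNode;
    UInt32 destInput;      // input bus on the destination AU
} AUConnection;

static void ApplyConnections(AUGraph graph, const AUConnection *connections, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        AUGraphConnectNodeInput(graph,
                                connections[i].sourceNode, connections[i].sourceOutput,
                                connections[i].destNode, connections[i].destInput);
}
```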
I say that approach two is more flexible because at some point it could allow multiple inputs and outputs per AU: things like sidechains feeding compressors, or anything else people could come up with. The logical question for me is this: does an audio player really need that kind of DSP capability? Shouldn't a simple graph of one input to one output suffice? Is there a compelling reason to allow more complicated processing?