[dimond@xxxxxxxx: Re: [SIGMusic] Architecture thoughts]
- To: sigmusic-l@xxxxxxxxxxxx
- Subject: [dimond@xxxxxxxx: Re: [SIGMusic] Architecture thoughts]
- From: Al Dimond <dimond@xxxxxxxx>
- Date: Thu, 23 Feb 2006 08:29:06 -0600
- Reply-to: Al Dimond <dimond@xxxxxxxx>
- User-agent: Mutt/1.5.11
I hit "reply" on a message and it sent it just to Jacob and not to the list.
I think. So here the message is, for the list:
----- Forwarded message from Al Dimond <dimond@xxxxxxxx> -----
Date: Thu, 23 Feb 2006 08:26:47 -0600
From: Al Dimond <dimond@xxxxxxxx>
To: Jacob Lee <jelee2@xxxxxxxx>
Subject: Re: [SIGMusic] Architecture thoughts
Reply-To: Al Dimond <dimond@xxxxxxxx>
Content-Type: text/plain; charset=us-ascii
One potential problem with this scheme is that if a character exits
with notes still in the ALSA queue, then returns shortly, there might still
be notes left in the queue upon return. How big of a problem it is depends
on just how far ahead the loop can run. Though I'm not sure how critical it
is that we worry about entrances and exits of individual characters based on
the direction the program is taking right now.
I won't be at the meeting today, because I have to watch all the ECE 391
kiddies take their exam and then grade it until some unholy hour of the
morning (for some reason we're grading night-of but the class doesn't meet
again until Tuesday... I don't get it.)
- Al Dimond
On Tue, Feb 21, 2006 at 02:03:06AM -0600, Jacob Lee wrote:
> I think I have a credible architecture to connect our program to its
> inputs and to chuck. I don't know much about how we're planning to
> construct the music generation routines, so someone who does can tell me
> if those will mesh with my model.
> Shared data:
> map<int, voice> voices
> maps MIDI note number to a particular "voice" (the musical
> representation of a single character on screen).
> misc. state variables
> e.g. beats per measure (time signature), intensity of action, etc.
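A minimal C++ sketch of that shared state (all type and field names here are my assumptions, not settled design):

```cpp
#include <map>
#include <vector>

// Hypothetical per-character voice; real state TBD.
struct Voice {
    int base_pitch = 60;  // private musical state carried from beat to beat
};

// Misc. shared state variables, as in the proposal.
struct GlobalState {
    int beats_per_measure = 4;  // time signature
    int intensity = 0;          // intensity of on-screen action
};

// Maps MIDI note number -> the voice for one on-screen character.
std::map<int, Voice> voices;
GlobalState state;
```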
> 1. MIDI input
> This uses the ALSA sequencer api to listen for events; a callback runs
> whenever new data arrives. This callback will add or remove voices based
> on note on/off events, set shared variables based on controller events,
> and so forth.
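The callback's logic might look like the following; the `Event` struct is a stand-in for the fields we would read off ALSA's event type, and the handler name is an assumption:

```cpp
#include <map>

struct Voice { int pitch; };

// Stand-in for the fields we'd read off an ALSA sequencer event.
struct Event {
    enum Type { NOTE_ON, NOTE_OFF, CONTROLLER } type;
    int note;   // MIDI note number (for note events)
    int param;  // controller number
    int value;  // controller value
};

struct Shared {
    std::map<int, Voice> voices;  // MIDI note -> on-screen character's voice
    int intensity = 0;
};

// Runs whenever the sequencer delivers new data.
void handle_event(const Event& ev, Shared& s) {
    switch (ev.type) {
    case Event::NOTE_ON:
        s.voices[ev.note] = Voice{ev.note};  // character enters
        break;
    case Event::NOTE_OFF:
        s.voices.erase(ev.note);             // character leaves
        break;
    case Event::CONTROLLER:
        if (ev.param == 1)                   // e.g. mod wheel -> intensity
            s.intensity = ev.value;
        break;
    }
}
```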
> 2. Main loop
> This loop must get woken up once per beat. I think that POSIX timers can
> do this; if not, it is also ok to have the main loop simply sleep for
> the duration of slightly less than a beat (see below). It queries every
> voice to ask it "what notes are you playing at this particular beat?"
> Because each voice is an object, voices can keep whatever state they
> want from beat to beat. The public function that each voice exposes
> accepts the current global state and returns a vector of notes.
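One beat of that loop could be sketched like this; the placeholder "composition" inside `play` is made up purely for illustration:

```cpp
#include <map>
#include <vector>

struct GlobalState { int beat = 0; int intensity = 0; };
struct Note { int pitch; int velocity; };

// Each voice keeps whatever state it likes between beats; its one public
// entry point takes the global state and returns the notes for this beat.
struct Voice {
    int base = 60;  // private per-voice state, carried beat to beat
    std::vector<Note> play(const GlobalState& g) {
        // Trivial placeholder: pitch walks with the beat, velocity with intensity.
        return { Note{ base + (g.beat % 4), 64 + g.intensity } };
    }
};

// One iteration of the main loop: query every voice, advance the beat.
std::vector<Note> run_one_beat(std::map<int, Voice>& voices, GlobalState& g) {
    std::vector<Note> out;
    for (auto& kv : voices) {
        std::vector<Note> notes = kv.second.play(g);
        out.insert(out.end(), notes.begin(), notes.end());
    }
    ++g.beat;  // the real loop would now sleep (or wait on a timer) until the next beat
    return out;
}
```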
> 3. MIDI output
> The main loop fires MIDI events to chuck. The ALSA sequencer api gives
> us the ability to schedule events for a particular time, meaning that
> there is nothing wrong with sending the MIDI data early.
> At this point, you may be wondering "if our main loop runs fast, what
> happens when a character leaves? Won't there be a bunch of messages
> still in the queue?" Read on...
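The scheduled-delivery idea can be modeled abstractly as a priority queue keyed on delivery time; this is only a sketch of the behavior ALSA provides for us, not how we'd implement it:

```cpp
#include <functional>
#include <queue>
#include <vector>

// An event stamped with the beat at which it should sound.
struct TimedEvent {
    int tick;
    int pitch;
    bool operator>(const TimedEvent& o) const { return tick > o.tick; }
};

struct OutputQueue {
    std::priority_queue<TimedEvent, std::vector<TimedEvent>,
                        std::greater<TimedEvent>> q;

    // The main loop may enqueue events well ahead of time...
    void schedule(TimedEvent e) { q.push(e); }

    // ...and only events whose time has come are delivered.
    std::vector<TimedEvent> deliver_up_to(int now) {
        std::vector<TimedEvent> due;
        while (!q.empty() && q.top().tick <= now) {
            due.push_back(q.top());
            q.pop();
        }
        return due;
    }
};
```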
> 4. Chuck
> Every instrument is active for the whole time; an instrument receives
> MIDI events to play a note, to activate, and to deactivate. The activate
> and deactivate messages are delivered immediately (alsa has this ability
> as well), and the note events are delivered at the appropriate time. The
> instrument keeps track of whether it is active or inactive; it simply
> ignores any note events that are scheduled for a time when the instrument is
> inactive.
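The instrument-side gating amounts to a small state machine; sketched in C++ for consistency with the rest of the thread (the real thing would live in chuck):

```cpp
#include <set>

// Activate/deactivate arrive immediately; note events arrive at their
// scheduled time; notes that land while inactive are simply dropped.
struct Instrument {
    bool active = false;
    std::multiset<int> sounded;  // pitches actually played (for illustration)

    void activate()   { active = true; }
    void deactivate() { active = false; }

    void note(int pitch) {
        if (!active) return;  // stale event from before the character left
        sounded.insert(pitch);
    }
};
```

This is exactly why the main loop can safely run ahead: any notes still queued when a character exits arrive at an inactive instrument and are discarded.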
> I think that this architecture handles temporal issues correctly and
> provides a workable dataflow from the MIDI input to chuck. Is the method
> of providing information to voices sufficiently expressive? Does anyone
> see any logical flaws?
> Jacob Lee <jelee2@xxxxxxxx>
> SIGMusic-l mailing list
----- End forwarded message -----