[07:32] <ailo> persia: We will use puredata for the system load, so we should quite easily be able to scale the system load for any specific machine. I'm sure we could use our own program for that, but puredata makes things easier at least at this early stage.
[07:33] <ailo> My first idea was just to have two system loads, one low and one high. We can turn that into a ladder later.
[07:34] <ailo> ScottL: I made this hierarchy for the test results, device - kernel - test, and for each device, test all kernels
[07:36] <ailo> When we post the results of our tests, we just need to post for the same device, but different kernels.
[18:20] <persia> ailo, Does puredata support arbitrary numbers of threads/processes?  The key load issue isn't usually raw CPU cycles, but context switches.
[18:39] <ailo> persia: I need to find out more about that. I know you can have some sort of separation of processes, but I need to find out how exactly.
[18:41] <ailo> persia: Could you give me an example of what would be desirable.
[19:02] <persia> ailo, Basically, the goal would be to mimic someone running multiple JACK clients simultaneously.  Consider a recording session with lots of effects, or a polyphonic synthesizer (using different engines), or similar.
[19:03] <persia> Unfortunately, context switches are expensive, and are one of the major causes of missing scheduling deadlines.  Hard RT forces deadlines to be met, but may drop processes to achieve this.  Soft RT maintains all the processes, but may drop deadlines to achieve this.
[19:03] <persia> So it would be interesting to find the failure points for both soft- and hard- strategies on a variety of hardware.
[19:04] <persia> Similarly, it becomes interesting to compare soft RT handling of failures from -generic, -server, and -lowlatency
[19:07] <persia> Forcing the failure is a matter of increasing the demand for scheduled events under processor load/context switch pressure.
[19:07] <persia> This can be because you have a lot of different things happening at the same time, or something happening exceedingly often.
[19:08] <persia> So, latency measurement affects this: lower latency requires more demand events, which then limits the system's ability to handle multiple simultaneous event generators within the schedule (considering the cost of context switches)
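The relationship persia describes (lower latency means more demand events per second) can be sketched with a little arithmetic. The buffer sizes and sample rate below are illustrative assumptions, not values from the discussion:

```python
# Rough sketch: how often the scheduler must wake each JACK client
# per second for a given period size (frames) and sample rate (Hz).
# The specific values below are illustrative assumptions.

def wakeups_per_second(period_frames: int, sample_rate: int) -> float:
    """One period of audio must be delivered once per period interval."""
    return sample_rate / period_frames

for frames in (1024, 256, 64):
    rate = 48000
    print(f"{frames:5d} frames @ {rate} Hz -> "
          f"{wakeups_per_second(frames, rate):7.1f} wakeups/s per client")
```

Halving the period doubles the wakeups, and with N simultaneous clients each wakeup multiplies into N context switches per period, which is where the pressure comes from.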
[19:10] <ailo> persia: Puredata can have subprocesses, which are created manually. I guess that is what we need.
[19:12] <persia> That works fine then, and gains the benefits of scriptability.
[19:15] <ailo> I don't know how that compares with having different actual programs running, since they may have different JACK code too.
[19:16] <ailo> And what about multiple cores, won't that affect how many processes need to be started?
[19:20] <ailo> persia: thanks for the info. I will look it over and try to put all that into the puredata patch
[19:55] <persia> Multiple cores do affect it, but in a complex way.  You can run more threads on more cores simultaneously, which helps scheduling, but if you're running the kernel, JACK, and a few generators/effects, you can run out of cores quickly.
[19:55] <persia> For that matter, the kernel ends up being something like 25 threads these days.
[22:56] <rlameiro> hey everyone
[22:57] <rlameiro> the last iso build isn't installing....
[22:58] <ailo> rlameiro: http://irclogs.ubuntu.com/2011/01/30/%23ubuntustudio-devel.html
[23:00] <rlameiro> ailo: well, you could open different instances of pd...
[23:01] <ailo> rlameiro: That's true. Don't know which is better, but that would work.
[23:01] <rlameiro> also if you use [pd~] you open a pd graph inside another, making another process
[23:01] <rlameiro> multithreaded, so to speak
[23:02] <ailo> Yes, that is what I was thinking about
[23:03] <rlameiro> but i think persia was talking about switching between different processes
[23:03] <rlameiro> even connecting in jack etc
[23:03] <persia> rlameiro, Precisely.  Mind you, this could be several pd instances
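Launching "several pd instances" each as its own OS process (and JACK client) could be scripted; the sketch below only builds the command lines rather than running them, and the patch name and flags are illustrative, so check pd's actual command-line options before relying on them:

```python
# Sketch: build command lines for launching several independent pd
# instances, each its own OS process and JACK client. The patch name
# and the -nogui/-jack flags are assumptions to verify against pd's
# own documentation.
def pd_commands(n_instances: int, patch: str = "load.pd") -> list[list[str]]:
    return [["pd", "-nogui", "-jack", patch] for _ in range(n_instances)]

# To actually launch them, something like:
#   import subprocess
#   procs = [subprocess.Popen(cmd) for cmd in pd_commands(4)]

if __name__ == "__main__":
    for cmd in pd_commands(3):
        print(" ".join(cmd))
```

Separate instances give real inter-process switching (and separate JACK connections), whereas [pd~] sub-processes stay under one parent, which matters for the midi-latency-between-instances test rlameiro suggests.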
[23:03] <rlameiro> that way, we could also do some internal midi latency tests between pd instances
[23:05] <ailo> I don't understand, switching between processes?
[23:05] <persia> Yes.  This is an expensive operation for most hardware, which makes it prone to causing issues with meeting timing contracts.
[23:07] <ailo> Opening pd will likely cause xruns. Don't think we have that problem when switching between different pd~ objects (pd~ being an object that creates a new sub-process)
[23:07] <rlameiro> ailo, in a nutshell, each process or PID has a scheduling timer in the kernel
[23:08] <rlameiro> the more demanding processes are active, the more the CPU, RAM, and system buses are pushed
[23:08] <persia> ailo, Precisely.  Opening processes causes xruns for low-latency environments.  Switching between processes causes xruns.  The key is that if we're only running one process in some environment and can get latencies of 2ms at sampling rates of 192kHz, it doesn't apply well to real-world usage.
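The 2 ms at 192 kHz figure corresponds to a concrete per-period workload; a quick check of the arithmetic, assuming the simple single-period model latency = frames / rate:

```python
# 2 ms at 192 kHz: how many frames per period, and how many deadlines
# per second? Assumes latency = frames / sample_rate (one period of
# buffering), which is a simplifying assumption.

def frames_for_latency(latency_s: float, sample_rate: int) -> int:
    return round(latency_s * sample_rate)

def deadlines_per_second(latency_s: float) -> float:
    return 1.0 / latency_s

print(frames_for_latency(0.002, 192000))   # frames per 2 ms period
print(deadlines_per_second(0.002))         # deadlines to meet each second
```

So a single client must meet 500 deadlines a second; each additional simultaneous client multiplies the context-switch cost within each 2 ms window, which is why the one-process result doesn't transfer to real-world sessions.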
[23:09] <rlameiro> if you have less demanding processes the scheduler simply gives more time to them
[23:11] <persia> Well, not always.  If all current contracts are met, and all current non-RT processes have no pending processing, and all buffers are populated, the processor will idle.
[23:12] <persia> (a frequent cause of "no pending processing" is waiting on I/O, which can be dealing with a slow 44100 input feed, or reading off disk, or slowest, waiting for network content)
[23:12] <ailo> I didn't realize pd~ opened a new instance of pd. 
[23:12] <rlameiro> persia: or even Firewire overhead...
[23:12] <persia> rlameiro, Or USB :)  Indeed, any sort of I/O.
[23:12] <rlameiro> ailo: yes, in the background
[23:12] <ailo> btw, I can't believe the performance I'm getting from the latest -generic.
[23:13] <ailo> Interesting to find out in more detail
[23:13] <rlameiro> I remember having to change my bluetooth dongle because it was messing with my audio interface
[23:13] <rlameiro> same hub channel
[23:14] <rlameiro> persia: that's another problem people seem to disregard
[23:14] <rlameiro> we focus too much on CPU.... i think it's a legacy thing
[23:15] <persia> Yeah, IO optimisation is mostly considered a "server thing", sadly.
[23:15] <rlameiro> for composing on a DAW with lots of synths/samplers the most needed stuff is memory and fast I/O
[23:15] <rlameiro> for instance using RAID 0
[23:15] <rlameiro> for data 
[23:15] <persia> How does the -server kernel compete?
[23:15] <rlameiro> and having the OS on a SSD
[23:16] <rlameiro> in audio persia ?
[23:16] <persia> Yes.
[23:16] <rlameiro> good question
[23:16] <rlameiro> never tried it out...
[23:16] <persia> OS on SSD doesn't always help: the path of the SSD is important, and the speed of the SSD (some aren't actually that fast)
[23:17] <rlameiro> well, the RAID 0 disks i mentioned weren't 5400 rpm either :D
[23:17] <rlameiro> i was talking 10k rpm
[23:17] <persia> -server is supposed to be optimised for IO and fair scheduling under resource constraint, as opposed to the "Ooh, shiny, the user moved the mouse, let's update the screen!!!!!" focus of the -generic kernels
[23:17] <rlameiro> at least 7200rpm
[23:18] <rlameiro> persia: I wonder how that is differentiated...
[23:18] <rlameiro> audio could be considered shiny...
[23:20] <persia> I don't know the details.  My limited understanding is that it's something like the -generic kernel giving priority to new processes, redraws, and similar, whereas the -server kernel gives priority to ensuring that IO isn't waiting on the processor.
[23:20] <rlameiro> hummm
[23:20] <persia> So for -generic, foreground vs. background is more important, etc.
[23:21] <rlameiro> RT kernels were made for that stuff....
[23:21] <persia> But I don't understand the details.
[23:21] <rlameiro> stock market and stuff
[23:21] <persia> RT kernels are about ensuring the scheduling contracts are met, not about optimising performance one way or another.
[23:22] <rlameiro> yeap, on the stock market they need big I/O and strict scheduling
[23:22] <persia> This ends up looking like optimised performance, but it's kinda by accident, because the RT-using applications are meeting their contracts, although the rest of the system may be resource-starved.
[23:22] <persia> 22 seconds :)
[23:23] <rlameiro> well, a wrong app with RT permission can make a computer unusable for some time :D
[23:23] <persia> Any app with RT permission can, it's just a matter of the contract requirements vs. the capacity of the system.
[23:24] <persia> Personally, I'm of the view that RT is only required when either pushing the system beyond normal limits *OR* dealing with a system that has unpredictable resource requirements due to non-essential processes.
[23:24] <persia> That said, it's not very hard to push a current-tech system beyond its limits, if one tries :)
[23:25] <rlameiro> yeap
[23:25] <rlameiro> in my case, Firewire with a suboptimal FW chipset from ricoh
[23:26] <persia> The driver is so bad you need RT constraints to be able to stream audio?
[23:32] <rlameiro> at low latencies
[23:33] <rlameiro> it's a chipset problem too