[07:32] persia: We will use puredata for the system load, so we should quite easily be able to scale the system load for any specific machine. I'm sure we could use our own program for that, but puredata makes things easier, at least at this early stage.
[07:33] My first idea was just to have two system loads, one low and one high. We can turn that into a ladder.
[07:34] ScottL: I made this hierarchy for the test results, device - kernel - test, and for each device, test all kernels.
[07:36] When we post the results of our tests, we just need to post for the same device, but different kernels.
=== ScottL_ is now known as ScottL
[18:20] ailo, Does puredata support arbitrary numbers of threads/processes? The key load issue isn't usually raw CPU cycles, but context switches.
[18:39] persia: I need to find out more about that. I know you can have some sort of separation of processes, but I need to find out how exactly.
[18:41] persia: Could you give me an example of what would be desirable?
[19:02] ailo, Basically, the goal would be to mimic someone running multiple JACK clients simultaneously. Consider a recording session with lots of effects, or a polyphonic synthesizer (using different engines), or similar.
[19:03] Unfortunately, context switches are expensive, and are one of the major causes of missing scheduling deadlines. Hard RT forces deadlines to be met, but may drop processes to achieve this. Soft RT maintains all the processes, but may drop deadlines to achieve this.
[19:03] So it would be interesting to find the failure points for both soft and hard strategies on a variety of hardware.
[19:04] Similarly, it becomes interesting to compare soft RT handling of failures from -generic, -server, and -lowlatency.
[19:07] Forcing the failure is a matter of increasing the demand for scheduled events under processor load/context switch pressure.
[19:07] This can be because you have a lot of different things happening at the same time, or something happening exceedingly often.
[19:08] So, latency measurement affects this: lower latency requires more demand events, which then limits the system's ability to handle multiple simultaneous event generators within the schedule (considering the cost of context switches).
[19:10] persia: Puredata can have subprocesses, which are created manually. I guess that is what we need.
[19:12] That works fine then, and gains the benefits of scriptability.
[19:15] I don't know how that compares with having different actual programs running, since they may have different jack coding too.
[19:16] And what about multiple cores, won't that affect how many processes need to be started?
[19:20] persia: thanks for the info. I will look it over and try to put all that into the puredata patch.
[19:55] Multiple cores do affect it, but in a complex way. You can run more threads on more cores simultaneously, which helps scheduling, but if you're running the kernel, JACK, and a few generators/effects, you can run out of cores quickly.
[19:55] For that matter, the kernel ends up being something like 25 threads these days.
[22:56] hey everyone
[22:57] the last iso build isn't installing....
[22:58] rlameiro: http://irclogs.ubuntu.com/2011/01/30/%23ubuntustudio-devel.html
[23:00] ailo: well, you could open different instances of pd...
[23:01] rlameiro: That's true. Don't know which is better, but that would work.
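The load-generation approach discussed above (several independent pd instances, each appearing as its own JACK client) can be scripted from outside pd. Below is a minimal Python sketch along those lines; the pd flags are real, but the patch name load.pd, the instance count, and the run length are illustrative assumptions, not part of the plan above.

    #!/usr/bin/env python3
    """Sketch: scale system load by launching several independent pd instances.

    Assumes the 'pd' binary is on PATH and that 'load.pd' is a DSP-heavy patch
    (both placeholders); each instance registers as a separate JACK client, so
    more instances mean more processes competing for the scheduler.
    """
    import subprocess
    import sys
    import time

    def spawn_load(patch="load.pd", instances=4, duration=60):
        """Run `instances` copies of pd for `duration` seconds, then stop them."""
        procs = []
        for _ in range(instances):
            # -nogui keeps pd headless; -jack selects the JACK audio backend.
            procs.append(subprocess.Popen(["pd", "-nogui", "-jack", patch]))
        try:
            time.sleep(duration)
        finally:
            for p in procs:
                p.terminate()
            for p in procs:
                p.wait()

    if __name__ == "__main__":
        n = int(sys.argv[1]) if len(sys.argv) > 1 else 4
        spawn_load(instances=n)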
[23:01] also if you use [pd~] you open a pd graph inside another, making another process
[23:01] multithreaded, so to speak
[23:02] Yes, that is what I was thinking about
[23:03] but I think persia was talking about switching between different processes
[23:03] even connecting in jack etc
[23:03] rlameiro, Precisely. Mind you, this could be several pd instances
[23:03] like that, we could make some internal midi latency test also between pd instances
[23:05] I don't understand, switching between processes?
[23:05] Yes. This is an expensive operation for most hardware, which makes it prone to causing issues with meeting timing contracts.
[23:07] Opening pd will likely cause xruns. Don't think we have that problem when switching between different pd~ objects (pd~ being an object that creates a new sub-process)
[23:07] ailo, in a nutshell, each process or PID has a scheduling timer in the kernel
[23:08] The more demanding processes are active, the more the CPU, RAM, and system buses are pushed
[23:08] ailo, Precisely. Opening processes causes xruns for low-latency environments. Switching between processes causes xruns. The key is that if we discover that when we're only running one process in some environment we can get latencies of 2ms at sampling rates of 192kHz, it doesn't apply well to real-world usage.
[23:09] if you have a less demanding process the scheduler simply gives more time to that process
[23:11] Well, not always. If all current contracts are met, and all current non-RT processes have no pending processing, and all buffers are populated, the processor will idle.
[23:12] (a frequent cause of "no pending processing" is waiting on I/O, which can be dealing with a slow 44100 input feed, or reading off disk, or, slowest, waiting for network content)
[23:12] I didn't realize pd~ opened a new instance of pd.
[23:12] persia: or even Firewire overhead...
[23:12] rlameiro, Or USB :) Indeed, any sort of I/O.
[23:12] ailo: yes on the back
[23:12] btw, I'm not believing the performance I'm getting from the latest -generic.
[23:13] Interesting to find out in more detail
[23:13] I remember having to change my bluetooth dongle because it was messing with my audio interface
[23:13] same hub channel
[23:14] persia: that's another problem people seem to disregard
[23:14] we focus too much on CPU.... I think it's a legacy thing
[23:15] Yeah, IO optimisation is mostly considered a "server thing", sadly.
[23:15] for composing on a DAW with lots of synths/samplers, the most needed stuff is memory and fast I/O
[23:15] for instance using RAID 0
[23:15] for data
[23:15] How does the -server kernel compete?
[23:15] and having the OS on an SSD
[23:16] in audio, persia?
[23:16] Yes.
[23:16] good question
[23:16] never tried it out...
[23:16] OS on an SSD doesn't always help: the path of the SSD is important, and the speed of the SSD (some aren't actually that fast)
[23:17] well, the RAID0 disks I mentioned weren't 5400 rpm either :D
[23:17] I was talking 10k rpm
[23:17] -server is supposed to be optimised for IO and fair scheduling under resource constraint, as opposed to the "Ooh, shiny, the user moved the mouse, let's update the screen!!!!!" focus of the -generic kernels
[23:17] at least 7200 rpm
[23:18] persia: I wonder how that is differentiated...
[23:18] audio could be considered shiny...
[23:20] I don't know the details. My limited understanding is that it's something like the -generic kernel giving priority to new processes, redraws, and similar, whereas the -server kernel gives priority to ensuring that IO isn't waiting on the processor.
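One way to put numbers on the context-switch pressure described above is to sample the per-process counters Linux exposes in /proc/<pid>/status before and after a test run. A rough Python sketch, assuming a Linux system and using jackd as an example target; the process name and the 10-second window are illustrative choices:

    #!/usr/bin/env python3
    """Sketch: sample context-switch counts for a process from /proc (Linux only).

    The PID is found by name with pidof; 'jackd' is just an example target.
    The voluntary/nonvoluntary counters come from /proc/<pid>/status.
    """
    import subprocess
    import time

    def ctxt_switches(pid):
        """Return (voluntary, nonvoluntary) context-switch counts for pid."""
        vol = nonvol = 0
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("voluntary_ctxt_switches:"):
                    vol = int(line.split()[1])
                elif line.startswith("nonvoluntary_ctxt_switches:"):
                    nonvol = int(line.split()[1])
        return vol, nonvol

    if __name__ == "__main__":
        pid = int(subprocess.check_output(["pidof", "-s", "jackd"]).decode().strip())
        before = ctxt_switches(pid)
        time.sleep(10)          # run the load patch during this window
        after = ctxt_switches(pid)
        print("voluntary/s:    ", (after[0] - before[0]) / 10)
        print("nonvoluntary/s: ", (after[1] - before[1]) / 10)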
[23:20] hummm
[23:20] So for -generic, foreground vs. background is more important, etc.
[23:21] RT kernels were made for that stuff....
[23:21] But I don't understand the details.
[23:21] stock market and stuff
[23:21] RT kernels are about ensuring the scheduling contracts are met, not about optimising performance one way or another.
[23:22] yeah, on the stock market they need big I/O and strict scheduling
[23:22] This ends up looking like optimised performance, but it's kinda by accident, because the RT-using applications are meeting their contracts, although the rest of the system may be resource-starved.
[23:22] 22 seconds :)
[23:23] well, a wrong app with RT permission can make a computer unusable for some time :D
[23:23] Any app with RT permission can, it's just a matter of the contract requirements vs. the capacity of the system.
[23:24] Personally, I'm of the view that RT is only required when either pushing the system beyond normal limits *OR* dealing with a system that has unpredictable resource requirements due to non-essential processes.
[23:24] That said, it's not very hard to push a current-tech system beyond its limits, if one tries :)
[23:25] yeah
[23:25] in my case Firewire with a suboptimal FW chipset from Ricoh
[23:26] The driver is so bad you need RT constraints to be able to stream audio?
[23:32] at low latencies
[23:33] it's a chipset problem too
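As an illustration of the point that any app with RT permission can hog the machine, the Python sketch below requests SCHED_FIFO and then busy-loops. It assumes Linux with Python 3.3 or later and an rtprio privilege (root, or a limits.conf entry like the one the audio group gets on Ubuntu Studio); the priority value and the five-second bound are arbitrary choices for the demo, and default kernel RT throttling may still leave the core a small slice of time.

    #!/usr/bin/env python3
    """Sketch: why a runaway process under SCHED_FIFO can starve a single core.

    Needs RT privileges (root, or an rtprio limit such as the audio group's);
    Linux-only, and the busy loop is deliberately short so the demo ends on
    its own.
    """
    import os
    import time

    def go_realtime(priority=70):
        """Ask the kernel for the SCHED_FIFO policy at the given priority."""
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))

    if __name__ == "__main__":
        go_realtime()
        # A SCHED_FIFO task does not yield to ordinary SCHED_OTHER tasks on
        # its core, so this loop mostly monopolises one CPU for five seconds:
        # a bounded version of "a wrong app with RT permission".
        end = time.monotonic() + 5
        while time.monotonic() < end:
            pass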