[01:20] ogasawara: hi! https://bugs.freedesktop.org/show_bug.cgi?id=41059 - there are two new patches to test in comment 31 and comment 34. can you maybe build two new kernels for soren to test (unless he wants to build them himself)?
[01:20] soren: ^^
[01:20] Freedesktop bug 41059 in Driver/intel "XRANDR operations very slow unless (phantom) HDMI1 disabled" [Major,Assigned]
[03:47] bjf: you're quick
=== jibel_ is now known as jibel
[07:01] Dry morning .+
[07:10] morning
[08:45] where do i find tar files with the vmlinuz and initrd for different ubuntu kernels? trying to do a live CD customization
[08:47] brendand: tar files? You can find the packages and pull them apart, the source packages and pull them apart, the kernel git tree
[08:53] what i'm really looking for then is the package with the proposed kernel
[08:55] for natty, that is
[09:19] brendand: specifically the proposed kernel package or just the kernel source? git is the easiest way to get that
[09:21] jjohansen - i'm just trying to test a fix, so i only need the package
[09:22] brendand: and you only want the kernel package not the rest of packages in -proposed?
[09:24] brendand: also i386 or x86-64
[09:24] jjohansen - definitely just the kernel. it's a fix for an install issue with one system, so i need to do a live cd customization to include the proposed kernel. the steps recommend just copying over the original vmlinuz and initrd with the new ones
[09:24] jjohansen - it's i386
[09:25] brendand: oh also which release, natty?
[09:27] jjohansen - it is, but i'm doing the customization on an oneiric box
[09:28] brendand: well there is no oneiric-proposed kernel yet, so you are looking for the natty-proposed kernel
[09:29] jjohansen - absolutely. i'm just inferring that i don't (think i) have the option to just enable -proposed and install it
[09:32] brendand: http://us.archive.ubuntu.com/ubuntu/pool/main/l/linux/linux-image-2.6.38-9-generic_2.6.38-9.43_i386.deb
[09:32] brendand: you can find them by going to http://us.archive.ubuntu.com/ubuntu/dists/
[09:33] clicking through to the package set natty-proposed
[09:33] jjohansen - duly bookmarked. thanks
[09:33] natty-proposed/main/binary-i386/Packages.gz
[09:34] unzip and look for linux-image in the Packages text file. It will give you the pool address
[09:34] e.g. pool/main/l/linux/linux-image-2.6.38-9-generic_2.6.38-9.43_i386.deb
[09:34] and then you need to prepend which archive mirror, ...
[09:35] it's not as easy as it should be, you can use packages.ubuntu.com for updates, backports, and for release packages but not proposed
=== arun_ is now known as arun
[13:48] apw: just so someone else knows what I've done, per the request of the release team I'm uploading just the fix for the linux-image-extra's install dependency issue
[13:49] apw: only they've requested I upload to -proposed, that way if it takes too long to build, they'll just let it sit in -proposed and promote to -updates as a day 0, otherwise they'll promote to the actual release pocket.
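(A rough sketch of the manual steps jjohansen outlines above for pulling the natty-proposed kernel image by hand; the mirror URL and package filename are the ones quoted in the channel and may since have been superseded.)

    # fetch the natty-proposed package index for i386 and locate the linux-image entry
    wget http://us.archive.ubuntu.com/ubuntu/dists/natty-proposed/main/binary-i386/Packages.gz
    zgrep -A25 '^Package: linux-image-2.6.38-9-generic$' Packages.gz | grep '^Filename:'
    # prepend the archive mirror to the Filename: path it reports, e.g.
    wget http://us.archive.ubuntu.com/ubuntu/pool/main/l/linux/linux-image-2.6.38-9-generic_2.6.38-9.43_i386.deb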
[13:49] ogasawara, ack, and i assume that the linux-meta changes went up too
[13:49] apw: yep, linux-meta went up yesterday
[13:49] ogasawara, sounds good, sorry for the mess
[13:50] ogasawara, also once it's in the queue get them to score it up
[13:50] apw: no worries, like you said it's an elective install, I'm actually surprised they just don't prefer to wait for the first SRU
[13:51] it may be making errors appear in their builds, though why we didn't find out about it sooner from those same errors i am unsure
[13:51] apw: yah, I'm indeed surprised it went undetected until yesterday
[13:51] also they seem to have a lot of other problems meaning they need a rebuild mondayish anyhow
[13:51] apw: yep, I think that's the plan.
[13:52] we release unity updates on mondays too :)
[14:26] bjf, sconklin - you free to talk testing?
[14:26] brendand: yup, i'm here
[14:30] * ogasawara back in 20
[14:30] * roadmr also here for the testing thing
[14:30] bjf, sconklin - so, as i mentioned yesterday we plan to do as much as we can next cycle to improve the SRU test suite that we run in our certification labs
[14:31] bjf, sconklin - a major area we've neglected so far has been wireless testing. which is a big oversight, but we're on it now and we want to make sure our efforts are as worthwhile as possible
[14:32] also audio testing has been minimal and graphics testing is okay, but not as good as it should be (admittedly graphics testing is tricky to do automatically)
[14:34] obvious points to address in wireless testing are scanning, and some sort of connectivity test
[14:34] brendand: as i mentioned yesterday, i look at QA testing as being fewer systems involved but deeper testing and cert. as being more shallow testing but across more systems
[14:34] brendand: and i think you agreed
[14:34] bjf - exactly, so we're not trying to cover every possible use case, just the most important ones
[14:34] brendand: i think it would be a *huge* win if tests can be shared between cert. and QA
[14:35] brendand: I'm here, sorry I'm late
[14:35] sconklin - no problem
[14:35] brendand: for wireless, what are you asking / thinking ?
[14:36] brendand: we agree it needs testing
[14:36] bjf - you said yesterday it was one of the areas where you most commonly saw regressions
[14:37] bjf - the nature of the regressions, was it mainly total breakage of the wireless?
[14:37] brendand: it's the area where regressions can have a major impact on users, people are just so dependent on wireless these days
[14:38] one way that wireless problems show up is in low throughput or in repeated disconnect/connect.
[14:38] brendand: one area is rfkill, there occasionally seem to be changes that cause rfkill to stop working or work in unexpected ways
[14:38] Throughput can be tested automatically, not sure about the other, unless by log inspection
[14:38] ack on rfkill
[14:39] for wireless, you need to test scanning, association, throughput
[14:39] sconklin, bjf - excellent. do you know if QA are already testing throughput?
[14:39] it would be nice to test supplicants
[14:39] don't know
[14:39] brendand: no, we are not testing wireless at all
[14:39] don't think so, but there should be some QA folks here
[14:40] (except for ad hoc testing)
[14:40] okay, so wireless regressions are wholly the responsibility of the community?
[14:40] hggdh: do we have any ethernet testing that we could leverage on to automate some wireless testing?
[14:40] (not causing them obviously. catching them)
[14:40] not yet, no
[14:41] brendand: yes
[14:41] bjf - i was thinking including different security tests, e.g. WPA, WEP and no security
[14:41] brendand: another area is we get reports that "with latest update/upgrade my wireless is 50% slower"
[14:42] bjf - we have a bandwidth test, but obviously it's only wired at the moment and probably could be improved
[14:42] hm, trending is something we don't do at the moment, but it looks like it would be useful in this case
[14:42] brendand: the wireless can be tricky because it depends on how noisy the wireless environment is and how good an AP the bug submitter is talking to
[14:43] bjf - our testing environment would be *extremely* noisy
[14:43] yes I see a problem if all systems under test start slamming the AP at the same time
[14:43] bjf: but in a testing lab, all those things are sort of controlled
[14:43] gema - i would say the opposite in ours
[14:43] or at least, consistent
[14:43] gema - theoretically it could be, but since we are testing so many systems at once
[14:43] gema, depends on your setup
[14:44] gema - and they all have different speeds, an unknown % of them might be exercising the wireless at a given point
[14:44] gema, also some of the wireless nics don't work as well in very noisy environs like conferences with hundreds of attendees
[14:44] I think that in general, trending is something we should strive for in any metric that makes sense. Boot speed, benchmarks, network, file systems. But I think that falls more under QA 'deep' testing than cert 'broad' testing
[14:44] gema - we could control it but that would involve some clever server side synchronisation
[14:45] gema: that's actually good testing (of a sort)
[14:45] * gema is thinking faster than she can write
[14:46] sconklin - being able to detect trends would be great, again we'd need to invest a lot of time in infrastructure and results analysis to achieve that across all our systems though
[14:46] brendand: gema: we talked to the intel wireless folks this week, they are very interested in working with us on better testing for the community as well as upstream developers
[14:46] bjf: brendand: I am thinking we probably should put some sort of test environment together in our new lab for this testing
[14:46] bjf: do the intel folks have testing tools we could use?
[14:46] gema - the new lab is in Lexington, right?
[14:46] brendand: yep
[14:47] gema - half our hardware is there, approximately
[14:47] roadmr: some, yes, and we will be talking to them about those, and we'll be sure to include cert. and qa
[14:47] brendand: simply collecting the data and presenting it graphically in a report might be cheaper, and then humans can spot anything worrisome. That might be low hanging fruit
[14:48] sconklin: that's an interesting idea, like we are doing with boot-speed testing right now
[14:48] sconklin - the question is if we have the bandwidth in this cycle. i'll make a note of it and perhaps we can brainstorm something between ourselves and QA
[14:48] bjf: awesome, thanks, proven tools will be quite helpful
[14:48] i'd like to interject though that our first goal needs to be to fill in the obvious gaps
[14:48] more than proven tools, an idea of how they set up the environment will give us clues
[14:48] +1
[14:49] brendand: are you in london next week?
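(A minimal sketch of the functional wireless checks being discussed above, i.e. rfkill state, scanning and throughput; the interface name, ESSID and iperf server address are placeholders, and the association step is left to whatever supplicant/NetworkManager configuration the lab would use for its WPA/WEP/open variants.)

    #!/bin/sh
    IFACE=wlan0                 # placeholder wireless interface
    ESSID=cert-lab-ap           # placeholder test access point
    IPERF_SERVER=192.168.1.10   # placeholder iperf server on the wired side

    # rfkill: the radio should not be soft- or hard-blocked
    rfkill list

    # scanning: the test AP should show up in the scan results
    iwlist "$IFACE" scan | grep "$ESSID"

    # association is assumed to be handled separately (wpa_supplicant or
    # NetworkManager, one configuration per security mode)

    # throughput: a rough bandwidth figure against a wired iperf server
    iperf -c "$IPERF_SERVER" -t 30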
[14:49] hggdh: I don't understand what you mean
[14:49] gema - no, UDS is my next destination
[14:49] roadmr: http://linuxwireless.org/en/developers/Testing/wifi-test I have *no* idea if this is used at all or not
[14:49] brendand: ok, so let's talk about it at UDS
[14:49] hggdh: that'd be good, although given the bulk nature of what we do in cert, that might not be feasible for us (i.e. can't put a faraday cage around the lab or isolation-test 100+ systems)
[14:50] sconklin: I am worried about noise -- we would need a few APs, and tens of machines will be chatting at the same time. As a result tests will potentially be contaminated unless we set it right
[14:50] roadmr - +1
[14:50] roadmr: hggdh this could be a good place where the testing that QA does is different than the cert. testing, cert can be in a noisy environ. QA in a "quiet" environ. ?
[14:51] can we agree to discuss this topic at UDS and focus on purely functional tests for now?
[14:51] brendand: +1
[14:51] we have two more topics to discuss as well
[14:51] bjf: +1
[14:51] bjf: +1
[14:52] brendand: +1 for going ahead
[14:52] (and flesh it out at UDS)
[14:52] i think we have a really good base for improving wireless testing now, so i'll be adding these notes to the blueprint and everyone is welcome to subscribe: https://blueprints.launchpad.net/certify-planning/+spec/hardware-p-cert-sru-coverage
[14:52] just a Q -- sconklin, bjf: are there any plans for meeting with Intel soon?
[14:52] let's talk video/graphics testing
[14:53] brendand: what's your definition of 'functional test'?
[14:53] sconklin - one where we don't depend on previous results or heuristics to determine a result. it either works or doesn't
[14:54] would that include a test in which we want to ensure that network bandwidth meets a certain lower bound?
[14:54] sconklin: if there is a value defined for that lower bound, yes
[14:54] sconklin - if we can say for sure what that lower bound is and it doesn't depend on environmental factors then yes
[14:55] hggdh: we just had our twice yearly meeting with them
[14:55] sconklin: we have a test like that for ethernet bandwidth, it gives false positives often because throughput ends up depending on network load
[14:55] sconklin: I think that kind of info would be more useful if collected and analyzed for trends than for pass/fail tests
[14:56] roadmr, sconklin - also for disk read speed. we put an arbitrary lower limit of 30MB p/s and one system achieves only 29
[14:56] ok, then I think I'm clearer on the limitations of what's under discussion today. Thanks
[14:57] as i said, video/graphics
[14:58] what we do at the moment is quite simplistic. check if x is running. check if compiz *can* be run (not if it is)
[14:58] we also take a screengrab and anlayse those in a batch. but the tool used is not 100% reliable
[14:58] s/anlayse/analyse/
[14:59] brendand: do you test external monitors?
[14:59] Do you test hotplugging of those?
[14:59] multi-monitors ?
[14:59] sconklin - no, and we don't, unfortunately
[14:59] not in the scope of this. we'll have to achieve testing of that in a different way
[15:00] but that's a major area where we see problems, and don't we certify that behavior?
[15:00] sconklin - we do. but certification is done over 2 weeks
[15:00] oh, right, that's the full cert, and what we're talking about here is the quick test that's done for SRU kernels
[15:00] sconklin: we just don't test it during the automated SRU and/or weekly tests
[15:00] sconklin - so we would have a 2 week regression-testing phase for each SRU and the 7 person hwcert team would have no other role but testing SRUs
[15:01] problem is, these are problems that show up as regressions in SRUs.
[15:01] sconklin - yep. really, no manual tests
[15:01] brendand: that works for us, sold!
[15:01] heheh :)
[15:01] bjf - oh forgot
[15:02] :)
[15:02] brendand: that bit is always flipped on me
[15:02] brendand: i really don't know how to give guidance on video testing, i think you need to talk to the xorg guys
[15:03] bjf, sconklin - actually the lab engineers (like roadmr) already have enough unavoidable manual work
[15:03] brendand: our issue is that we *do* see regressions there and they can be really painful
[15:03] e.g. powering on laptops, pressing keys to go through bios screens, resuming systems from suspend
[15:04] I'm trying not to focus on your processes, but on which actual failures we have seen which aren't currently covered by cert testing.
[15:04] sconklin - that's good, but i need to point out when something won't be feasible
[15:04] I shouldn't have to worry a lot about how you do your testing, only whether the results are valuable to us :-)
[15:05] so take it or leave it, but I want you to know where we see problems
[15:05] brendand: i'd like to change that "not feasible" to "cert. is unable to perform that testing on sru kernels today"
[15:05] yes, well if we're in brainstorm mode anything goes, determining whether it's feasible or not can come later
[15:05] sconklin: it's really useful to know that's a regression-prone area
[15:05] ack +1
[15:06] brendand: that way, QA can pick up testing that cert. can not do or we identify testing that could be done sometime in the future
[15:07] bjf - true. 'not feasible' is always qualified with 'at this time, with the resources we have'
[15:07] ok, we sidetracked while talking about graphics testing . . .
[15:07] I would rather go, then, in brainstorm mode, and later on decide what can, or cannot be done right now, in 6 months, in 1 year, etc
[15:07] but we need to know what the kernel team see as problem areas
[15:07] hggdh: agreed, talk about what we want to do then figure out how much it would cost, time frames, etc later
[15:08] so ... during the dev cycle we can see a lot of churn in the upstream drivers themselves which causes problems
[15:08] agreed. i'll put notes of things that might be worth brainstorming like the external monitor and wireless speed trends
[15:08] once a kernel goes "stable" we see less churn, we see more 'quirking' to enable graphics for specific hw
[15:09] i shall be inviting bryce to the session, to get some input on graphics testing
[15:09] however, some of this 'quirking' can impact systems other than those it was intended to target
[15:09] outside of external monitors, what else is there?
[15:10] and there are some, more general, bug fixes that also impact multiple systems
[15:10] bjf - how do these bugs manifest themselves, is it usually corruption?
[15:10] often it's that they boot to a black screen
=== kamalmostafa is now known as kamal
[15:11] brendand: failures generally occur either at boot or at login, based on my memory
[15:11] or on resume
[15:11] and the symptom is mostly a black screen
[15:11] yes, no graphics at boot is the most common, along with "sparkles", "lines" or other symptoms of incorrect timing.
[15:11] brendand: often yes
[15:12] brendand: can you explain "batch" testing of screen shots ?
[15:13] bjf - the imagemagick 'import' command is used to do a capture (which as i mentioned may not be reliable). this is stored with the test results
[15:14] bjf - and we have a script which downloads all of these into a single directory and i just browse over them and check for blank ones or ones with graphical glitches. takes about 5 minutes
[15:14] brendand: thanks
[15:15] bjf - this week we found one blank one when testing the maverick SRU. turns out the system failed to resume from suspend (not a regression, it was like that since release)
[15:15] brendand: sidetracking a bit here ... do you do suspend/resume testing ?
[15:16] bjf - basic suspend/resume testing. i.e. make sure the system still can
[15:16] brendand: that causes major issues with everything we've been talking about, wireless, graphics, audio
[15:16] bjf - i would like to expand that to include testing subsystems after suspend
[15:16] the system will rfkill a device on suspend but not restore it on resume
[15:17] bjf - we'll probably include some element of post-S3 testing for each area
[15:17] the graphics worked perfectly fine before suspend but it resumes to a black screen
[15:17] audio, bluetooth, graphics, wireless
[15:17] and so forth
[15:17] etc.
[15:17] Yeah, I think just by tatking what we have and also running it all post-resume, we gain a lot
[15:17] taking
[15:17] sconklin: +1
[15:18] sorry to throw yet another thing in ... do we do any hibernate / resume testing ?
[15:19] bjf - no. isn't that a dodgy area? i.e. not all systems support it and that's just one of those things?
[15:19] and we should probably move on to audio
[15:19] yep
[15:19] brendand: yes, thought i'd ask though
[15:19] before that though, just one question
[15:20] I've heard a lot of factoids thrown around like "only server people care about it", but never any good data to back up who actually cares, or what the use cases are. I know we get bug reports for it, so some people use it.
[15:20] hibernate, that is
[15:20] bjf, sconklin - to get an idea of the suitability of the screen capture tools we're using, it would be good to know when one of these regressions is seen (hopefully on a system we can get access to)
[15:21] i want to know if we've maybe been missing stuff
[15:21] brendand: noted
[15:21] onto audio anyway
[15:21] also about suspend/resume, do we have any stress testing? like suspend/resume 100 times, some problems are intermittent, and only appear after some iterations
[15:21] noted, we can get some historical data I think
[15:22] herton - no, we are even struggling with the problem of how to achieve that in our certification
[15:23] herton - i'll note it down though
[15:23] so, audio
[15:23] brendand: cking is worth talking to about suspend/resume testing (if you haven't already)
[15:24] if diwic is around maybe he can speak to the kinds of regressions he sees
[15:24] bjf - not directly in relation to SRU testing
[15:24] and the kinds of testing that would be helpful
[15:24] yep, I'm happy to discuss this with you brendand
[15:24] we've had a look at cking's fwts testing, but it may be worth revisiting
[15:24] we have a couple of fwts tests in there
[15:24] * diwic is around
[15:24] especially with the improved goodness of the Oneiric features
[15:24] cpu_scaling and wake_alarm
[15:25] cking - will you subscribe to the blueprint? https://blueprints.launchpad.net/certify-planning/+spec/hardware-p-cert-sru-coverage
[15:25] brendand: for audio you want to test internal and external equally
[15:25] sure
[15:25] diwic - you're discussing regressions with audio in -proposed
[15:26] usb speakers would be nice (yes i'm completely ignoring if these can be automated or not)
[15:26] brendand: we could also add the other fwts test as it's full auto and runs most of Colin's automated tests
[15:26] For testing audio regressions in general I'd just say test playback and recording of various inputs and outputs, the more the better.
[15:27] diwic, bjf - so if we can cover record/playback on internal, external mic and usb then it's all good?
[15:27] usb will be most difficult i think
[15:27] external we can use a patch cable
[15:27] brendand: that would be pretty good, yes
[15:27] brendand: lots of bluetooth headsets in use these days
[15:28] brendand, yeah to a reasonable degree...I mean, you could go on by testing volume controls, low-latency / high-latency scenarios
[15:28] diwic - if the audio breaks does it usually break completely? testing audio quality might be tricky
[15:28] so are we ultimately hoping to be able to write per-SRU tests to catch regressions?
[15:28] brendand: diwic is the domain expert for audio
[15:29] diwic - if you could attend the session about this at UDS that would be great (link just a little bit up in the scrollback)
[15:29] brendand, that's a good question. In general I think we haven't had that many regressions in SRUs for audio in the past - do you agree bjf?
[15:30] brendand, ok, added myself to blueprint
[15:30] i think we've got a load of good ideas together now
[15:30] i'll sit down on monday and try and sift through this
[15:30] brendand, maybe testing sound after suspend/resume could make sense as well
[15:31] brendand: yah, more than an hr is just asking for trouble (no sarcasm)
[15:31] diwic - yeah. a question there though. if the mixer settings get changed after suspend, is that a proper problem?
[15:31] actually we should test sound, video, and network on resume
[15:32] yeah, this is a good start
[15:32] hggdh - that's the plan. perhaps bluetooth too
[15:32] brendand: yes, bluetooth, I forgot it
[15:32] brendand, hmm, I think it is, but minor in the sense that if it just happens to one of the machines, it should be fixed, but maybe it is not enough to block an SRU with fixes to thousands of users
[15:33] actually, i don't know if we'll be able to address it much, but at least one functional test for bluetooth will be nice (before and after suspend)
[15:33] brendand: for bluetooth, you want to actually pair to a device
[15:33] diwic - at the moment i think we already have certified a lot of systems which won't keep the mixer settings after suspend (roadr, bladernr?)
[15:34] brendand: do you do targeted testing for each SRU or do you run all the test cases on each or all the test cases on a mix of SRUs?
[15:34] brendand: and be able to do that after resume
[15:34] brendand: yeah, that's a pain point, personally, but we have
[15:34] bjf - we would plan to include a file transfer. we're doing this automatically in cert so it shouldn't be too hard
[15:34] brendand: nice
[15:34] gema - same test suite on each one
[15:34] separately?
[15:34] and then together?
[15:35] gema - separately what?
[15:35] brendand, interesting. I was not aware of this problem (and haven't seen loads of bugs about it either)
[15:35] I am trying to figure out whether the SRUs would interfere with each other and in which order you test them
[15:35] brendand, diwic: yep, as long as audio does work, we don't care that much about the mixer going up/down after resume
[15:36] roadmr, it feels like one of those minor annoyances we should fix for the P cycle
[15:36] gema - i'm not quite sure what you're trying to say
[15:36] brendand: no worries, I will ask offline
[15:36] gema - sorry
[15:36] diwic: yep, it's annoying, a papercut if you will
[15:37] cking: answering your question about per-SRU tests: ideally, we should have a collection of tests that grows as time goes by, regarding regressions
[15:37] hggdh - we can't afford to let our test suite grow unchecked
[15:38] brendand: indeed, for certification, but not so for QA
[15:38] brendand, as long as tests don't require manual intervention I guess testing is cheap, but for manual tests we should carefully consider every one
[15:38] hggdh - yeah, you guys feel free ;)
[15:38] heh
[15:38] diwic: tests are never for free, they need to be maintained
[15:38] diwic: like any other code
[15:39] brendand: will you be sending out an edited version of this discussion ?
[15:39] gema +1
[15:40] bjf - yeah, it's going to be what i do on monday probably. i'll probably attach it to the blueprint
[15:40] gema, fair point
[15:40] brendand: ok, look forward to it
[15:41] brendand: looking forward to it too, thanks for all the information :)
[15:41] regarding the mixer settings, if we got a baseline of systems which can restore them properly then at least we could look for sudden regressions in those
[15:41] for the ones that could never restore them, we can't really hold SRUs back for that
[15:41] brendand, makes sense
[15:41] that's my last though
[15:41] last thought, that is
[15:42] thanks for everyone's input. i hereby return ubuntu-kernel to its normally scheduled programming
[15:42] move to adjourn
[15:42] :-)
[15:42] thanks!
[15:42] brendand: one of the things we should look for is deviation from "standard" -- i.e. those tests that consistently failed/succeeded and are now succeeding/failing
[15:43] sconklin: +1
[15:43] thanks everyone, it was really useful, hope to continue this at some point / UDS
[16:01] .
[16:02] ..
[16:02] no. just dot. ;-)
[16:02] ....
[16:03] ........
[16:03] I'm getting tired.
[16:03] it started as a dot and now we have a progress bar going
[16:03] Maybe it's time to call it a day.
=== yofel_ is now known as yofel
[18:19] jsalisbury: you're making me do real work! :-)
[18:20] bjf, heh, sorry bout that
[19:11] Hi, I have a problem with the new firewire stack in 11.04 (but it also still exists in 11.10 beta2). Where would be the best place to ask for help? Can provide error messages and detailed information.
[19:14] Claudio9641: you should file a bug, add your error messages and detailed information and then come back here and tell us the bug #
[19:14] Problem is: I have a firewire external harddisk which is not recognized automatically by Kubuntu 11.04 when connected. Strange thing: when I boot up a live-CD of 10.04 it works perfectly (old firewire stack). When I then reboot into 11.04, the drive also works in 11.04.
[19:15] Hi bjf: file a bug - where?
[19:15] Claudio9641: from the command line you can type: ubuntu-bug
[19:19] Great, now I have a problem with ubuntu-bug. I tell it: "Problem with external storage device" and next window that pops up is asking for "Which audio device are you having a problem with?" - Hey, dude, I said 'storage', not 'audio'. Grmpf
[19:21] ... and when I press "abort", ubuntu-bug hangs .. great!
[19:23] Claudio9641: ouch, try ubuntu-bug linux
[19:26] ogasawara, bjf, Regression from 11.04 to 11.10, but does not happen with latest mainline build:
[19:26] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/870123
[19:26] Launchpad bug 870123 in linux "Twinhan USB DVB device failed to operate after upgrade to 11.10" [Undecided,Confirmed]
[19:27] jsalisbury: ack, I'll take a look and see if there's a patch from upstream we should pull in for SRU
[19:28] ogasawara, thanks.
[19:46] jsalisbury: cool, I think the patch to resolve that bug actually made it into upstream stable v3.0.5 (ie we have the patch queued for Oneiric's first SRU).
[19:46] jsalisbury: am going to build them a test kernel just to confirm
[19:46] ogasawara, that's good news
[19:46] jsalisbury: indeed, thanks for the heads up.
[19:46] Ok, I now filed a bug. Bug #870250. Hope that helps.
[19:46] Launchpad bug 870250 in linux "Problem with external firewire disk and new firewire stack in Kubuntu 11.04 and 11.10 beta2" [Undecided,Confirmed] https://launchpad.net/bugs/870250
[19:46] ogasawara, np, thanks for looking at it
[20:11] jsalisbury: ping
[20:12] bjf, pong
[20:12] jsalisbury: pm
[20:14] bjf, jsalisbury: thanks for the quick comments on my bug report. Will test the upstream kernel tomorrow. Must leave now ... thanks so far for the directions and instructions!!!
[20:14] Claudio964, np, let us know how the testing goes
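(For the "test the latest mainline/upstream kernel" step mentioned above, the usual route is the Ubuntu mainline kernel builds; a rough sketch, with the version directory and .deb names shown only as placeholders.)

    # mainline kernel builds are published under
    #   http://kernel.ubuntu.com/~kernel-ppa/mainline/
    # pick the version directory to test, download the matching linux-headers
    # and linux-image .debs for your architecture, then install and reboot:
    sudo dpkg -i linux-headers-VERSION_*.deb linux-image-VERSION_*.deb   # placeholder names
    sudo reboot
    # after rebooting, confirm the running kernel version:
    uname -r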