/srv/irclogs.ubuntu.com/2011/10/07/#ubuntu-kernel.txt

01:20 <htorque> ogasawara: hi! https://bugs.freedesktop.org/show_bug.cgi?id=41059 - there are two new patches to test in comment 31 and comment 34. can you maybe build two new kernels for soren to test (unless he wants to build them himself)?
01:20 <htorque> soren: ^^
01:20 <ubot2> Freedesktop bug 41059 in Driver/intel "XRANDR operations very slow unless (phantom) HDMI1 disabled" [Major,Assigned]
03:47 <maco> bjf: you're quick
=== jibel_ is now known as jibel
07:01 <smb> Dry morning
07:10 <abogani> morning
08:45 <brendand> where do i find tar files with the vmlinuz and initrd for different ubuntu kernels? trying to do a live CD customization
08:47 <jjohansen> brendand: tar files? You can find the packages and pull them apart, the source packages and pull them apart, or the kernel git tree
08:53 <brendand> what i'm really looking for then is the package with the proposed kernel
08:55 <brendand> for natty, that is
09:19 <jjohansen> brendand: specifically the proposed kernel package, or just the kernel source? git is the easiest way to get that
09:21 <brendand> jjohansen - i'm just trying to test a fix, so i only need the package
09:22 <jjohansen> brendand: and you only want the kernel package, not the rest of the packages in -proposed?
09:24 <jjohansen> brendand: also, i386 or x86-64?
09:24 <brendand> jjohansen - definitely just the kernel. it's a fix for an install issue with one system, so i need to do a live cd customization to include the proposed kernel. the steps recommend just copying over the original vmlinuz and initrd with the new ones
09:24 <brendand> jjohansen - it's i386
09:25 <jjohansen> brendand: oh, also which release, natty?
09:27 <brendand> jjohansen - it is, but i'm doing the customization on an oneiric box
09:28 <jjohansen> brendand: well, there is no oneiric-proposed kernel yet, so you are looking for the natty-proposed kernel
09:29 <brendand> jjohansen - absolutely. i'm just inferring that i don't (think i) have the option to just enable -proposed and install it
09:32 <jjohansen> brendand: http://us.archive.ubuntu.com/ubuntu/pool/main/l/linux/linux-image-2.6.38-9-generic_2.6.38-9.43_i386.deb
09:32 <jjohansen> brendand: you can find them by going to http://us.archive.ubuntu.com/ubuntu/dists/
09:33 <jjohansen> and clicking through to the natty-proposed package set
09:33 <brendand> jjohansen - duly bookmarked. thanks
09:33 <jjohansen> natty-proposed/main/binary-i386/Packages.gz
09:34 <jjohansen> unzip it and look for linux-image in the Packages text file. It will give you the pool address
09:34 <jjohansen> e.g. pool/main/l/linux/linux-image-2.6.38-9-generic_2.6.38-9.43_i386.deb
09:34 <jjohansen> and then you need to prepend which archive mirror, ...
09:35 <jjohansen> it's not as easy as it should be. you can use packages.ubuntu.com for updates, backports, and release packages, but not for proposed
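[Editor's note: the lookup jjohansen describes (find the Filename in the Packages index, prepend a mirror) can be sketched as below. The Packages fixture here is a tiny illustrative stand-in for the real natty-proposed/main/binary-i386/Packages file, and the mirror URL is just the one mentioned above.]

```shell
#!/bin/sh
# Given an uncompressed Packages index, find the pool path of a
# linux-image package and prepend a mirror to get the full URL.
MIRROR="http://us.archive.ubuntu.com/ubuntu"

# Tiny stand-in for natty-proposed/main/binary-i386/Packages
# (a real run would fetch and gunzip the actual Packages.gz).
cat > Packages <<'EOF'
Package: linux-image-2.6.38-9-generic
Filename: pool/main/l/linux/linux-image-2.6.38-9-generic_2.6.38-9.43_i386.deb

Package: something-else
Filename: pool/main/s/something/something-else_1.0_i386.deb
EOF

# Grab the Filename: field of the first linux-image stanza.
path=$(awk '/^Package: linux-image/{found=1}
            found && /^Filename:/{print $2; exit}' Packages)
echo "$MIRROR/$path"
```

A real run would start with `wget $MIRROR/dists/natty-proposed/main/binary-i386/Packages.gz && gunzip Packages.gz` instead of the here-doc fixture.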
=== arun_ is now known as arun
13:48 <ogasawara> apw: just so someone else knows what I've done: per the request of the release team, I'm uploading just the fix for linux-image-extra's install dependency issue
13:49 <ogasawara> apw: only they've requested I upload to -proposed. that way, if it takes too long to build, they'll just let it sit in -proposed and promote it to -updates as a day 0; otherwise they'll promote it to the actual release pocket.
13:49 <apw> ogasawara, ack, and i assume the linux-meta changes went up too
13:49 <ogasawara> apw: yep, linux-meta went up yesterday
13:49 <apw> ogasawara, sounds good, sorry for the mess
13:50 <apw> ogasawara, also, once it's in the queue, get them to score it up
13:50 <ogasawara> apw: no worries. like you said, it's an elective install. I'm actually surprised they don't just prefer to wait for the first SRU
13:51 <apw> it may be making errors appear in their builds, though why we didn't find out about it sooner from those same errors i am unsure
13:51 <ogasawara> apw: yah, I'm indeed surprised it went undetected until yesterday
13:51 <apw> also, they seem to have a lot of other problems, meaning they need a rebuild mondayish anyhow
13:51 <ogasawara> apw: yep, I think that's the plan.
13:52 <apw> we release unity updates on mondays too :)
14:26 <brendand> bjf, sconklin - you free to talk testing?
14:26 <bjf> brendand: yup, i'm here
14:30 * ogasawara back in 20
14:30 * roadmr also here for the testing thing
14:30 <brendand> bjf, sconklin - so, as i mentioned yesterday, we plan to do as much as we can next cycle to improve the SRU test suite that we run in our certification labs
14:31 <brendand> bjf, sconklin - a major area we've neglected so far has been wireless testing, which is a big oversight. but we're on it now, and we want to make sure our efforts are as worthwhile as possible
14:32 <brendand> also, audio testing has been minimal, and graphics testing is okay but not as good as it should be (admittedly graphics testing is tricky to do automatically)
14:34 <brendand> obvious points to address in wireless testing are scanning, and some sort of connectivity test
14:34 <bjf> brendand: as i mentioned yesterday, i look at QA testing as being fewer systems but deeper testing, and cert. as being shallower testing but across more systems
14:34 <bjf> brendand: and i think you agreed
14:34 <brendand> bjf - exactly. so we're not trying to cover every possible use case, just the most important ones
14:34 <bjf> brendand: i think it would be a *huge* win if tests can be shared between cert. and QA
14:35 <sconklin> brendand: I'm here, sorry I'm late
14:35 <brendand> sconklin - no problem
14:35 <bjf> brendand: for wireless, what are you asking / thinking?
14:36 <bjf> brendand: we agree it needs testing
14:36 <brendand> bjf - you said yesterday it was one of the areas where you most commonly saw regressions
14:37 <brendand> bjf - the nature of the regressions: was it mainly total breakage of the wireless?
14:37 <bjf> brendand: it's the area where regressions can have a major impact on users; people are just so dependent on wireless these days
14:38 <sconklin> one way that wireless problems show up is in low throughput or in repeated disconnect/connect.
14:38 <bjf> brendand: one area is rfkill. there occasionally seem to be changes that cause rfkill to stop working or work in unexpected ways
14:38 <sconklin> throughput can be tested automatically. not sure about the other, unless by log inspection
14:38 <sconklin> ack on rfkill
14:39 <bjf> for wireless, you need to test scanning, association, and throughput
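[Editor's note: the three checks bjf lists could be driven by a small pass/fail harness like the sketch below. The real probes (shown in comments) need an actual wlan interface, a known AP, and an iperf server -- the interface name, SSID, and server address are all assumptions -- so each probe is stubbed with `true` here to keep the harness itself runnable.]

```shell
#!/bin/sh
# Minimal pass/fail harness for the wireless checks: scan, association,
# throughput. Probes are stubbed; real commands are in the comments.
FAILED=0
run_check() {    # run_check <name> <command...>
    name=$1; shift
    if "$@" >/dev/null 2>&1; then
        echo "PASS $name"
    else
        echo "FAIL $name"
        FAILED=1
    fi
}

run_check scan        true  # real: iw dev wlan0 scan | grep -q 'SSID: test-ap'
run_check association true  # real: iw dev wlan0 link | grep -q Connected
run_check throughput  true  # real: iperf -c 192.168.1.1 -t 10
echo "failures: $FAILED"
```

Any probe that exits non-zero flips the overall result, so the same harness extends naturally to rfkill or supplicant checks.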
14:39 <brendand> sconklin, bjf - excellent. do you know if QA are already testing throughput?
14:39 <bjf> it would be nice to test supplicants
14:39 <sconklin> don't know
14:39 <hggdh> brendand: no, we are not testing wireless at all
14:39 <bjf> don't think so, but there should be some QA folks here
14:40 <hggdh> (except for ad hoc testing)
14:40 <brendand> okay, so wireless regressions are wholly the responsibility of the community?
14:40 <gema> hggdh: do we have any ethernet testing that we could leverage to automate some wireless testing?
14:40 <brendand> (not causing them, obviously. catching them)
14:40 <hggdh> not yet, no
14:41 <bjf> brendand: yes
14:41 <brendand> bjf - i was thinking of including different security tests, e.g. WPA, WEP and no security
14:41 <bjf> brendand: another area is that we get reports like "with the latest update/upgrade my wireless is 50% slower"
14:42 <brendand> bjf - we have a bandwidth test, but obviously it's only wired at the moment and could probably be improved
14:42 <roadmr> hm, trending is something we don't do at the moment, but it looks like it would be useful in this case
14:42 <bjf> brendand: wireless can be tricky because it depends on how noisy the wireless environment is and how good the AP the bug submitter is talking to is
14:43 <brendand> bjf - our testing environment would be *extremely* noisy
14:43 <roadmr> yes, I see a problem if all systems under test start slamming the AP at the same time
14:43 <gema> bjf: but in a testing lab, all those things are sort of controlled
14:43 <brendand> gema - i would say the opposite in ours
14:43 <gema> or at least, consistent
14:43 <brendand> gema - theoretically it could be, but since we are testing so many systems at once
14:43 <bjf> gema, depends on your setup
14:44 <brendand> gema - and they all have different speeds, an unknown % of them might be exercising the wireless at a given point
14:44 <bjf> gema, also some wireless nics don't work as well in very noisy environs, like conferences with hundreds of attendees
14:44 <sconklin> I think that in general, trending is something we should strive for in any metric that makes sense: boot speed, benchmarks, network, file systems. But I think that falls more under QA 'deep' testing than cert 'broad' testing
14:44 <brendand> gema - we could control it, but that would involve some clever server-side synchronisation
14:45 <bjf> gema: that's actually good testing (of a sort)
14:45 * gema is thinking faster than she can write
14:46 <brendand> sconklin - being able to detect trends would be great. again, we'd need to invest a lot of time in infrastructure and results analysis to achieve that across all our systems though
14:46 <bjf> brendand: gema: we talked to the intel wireless folks this week. they are very interested in working with us on better testing for the community as well as for upstream developers
14:46 <gema> bjf: brendand: I am thinking we should probably put some sort of test environment together in our new lab for this testing
14:46 <roadmr> bjf: do the intel folks have testing tools we could use?
14:46 <brendand> gema - the new lab is in Lexington, right?
14:46 <gema> brendand: yep
14:47 <brendand> gema - approximately half our hardware is there
14:47 <bjf> roadmr: some, yes, and we will be talking to them about those, and we'll be sure to include cert. and qa
14:47 <sconklin> brendand: simply collecting the data and presenting it graphically in a report might be cheaper, and then humans can spot anything worrisome. That might be low-hanging fruit
14:48 <bjf> sconklin: that's an interesting idea, like we are doing with boot-speed testing right now
14:48 <brendand> sconklin - the question is whether we have the bandwidth this cycle. i'll make a note of it and perhaps we can brainstorm something between ourselves and QA
14:48 <roadmr> bjf: awesome, thanks, proven tools will be quite helpful
14:48 <brendand> i'd like to interject though that our first goal needs to be to fill in the obvious gaps
14:48 <hggdh> more than proven tools, an idea of how they set up the environment will give us clues
14:48 <sconklin> +1
14:49 <gema> brendand: are you in london next week?
14:49 <sconklin> hggdh: I don't understand what you mean
14:49 <brendand> gema - no, UDS is my next destination
14:49 <bjf> roadmr: http://linuxwireless.org/en/developers/Testing/wifi-test - I have *no* idea if this is used at all or not
14:49 <gema> brendand: ok, so let's talk about it at UDS
14:49 <roadmr> hggdh: that'd be good, although given the bulk nature of what we do in cert, that might not be feasible for us (i.e. we can't put a faraday cage around the lab or isolation-test 100+ systems)
14:50 <hggdh> sconklin: I am worried about noise -- we would need a few APs, and tens of machines will be chatting at the same time. As a result, tests will potentially be contaminated unless we set it up right
14:50 <brendand> roadmr - +1
14:50 <bjf> roadmr: hggdh: this could be a good place where the testing that QA does is different from the cert. testing. cert can be in a noisy environ., QA in a "quiet" environ.?
14:51 <brendand> can we agree to discuss this topic at UDS and focus on purely functional tests for now?
14:51 <gema> brendand: +1
14:51 <brendand> we have two more topics to discuss as well
14:51 <hggdh> bjf: +1
14:51 <gema> bjf: +1
14:52 <hggdh> brendand: +1 for going ahead
14:52 <hggdh> (and fleshing it out at UDS)
14:52 <brendand> i think we have a really good base for improving wireless testing now, so i'll be adding these notes to the blueprint, and everyone is welcome to subscribe: https://blueprints.launchpad.net/certify-planning/+spec/hardware-p-cert-sru-coverage
14:52 <hggdh> just a Q -- sconklin, bjf: are there any plans for meeting with Intel soon?
14:52 <brendand> let's talk video/graphics testing
14:53 <sconklin> brendand: what's your definition of 'functional test'?
14:53 <brendand> sconklin - one where we don't depend on previous results or heuristics to determine a result. it either works or it doesn't
14:54 <sconklin> would that include a test in which we want to ensure that network bandwidth meets a certain lower bound?
14:54 <gema> sconklin: if there is a value defined for that lower bound, yes
14:54 <brendand> sconklin - if we can say for sure what that lower bound is, and it doesn't depend on environmental factors, then yes
14:55 <bjf> hggdh: we just had our twice-yearly meeting with them
14:55 <roadmr> sconklin: we have a test like that for ethernet bandwidth. it often gives false positives because throughput ends up depending on network load
14:55 <roadmr> sconklin: I think that kind of info would be more useful collected and analyzed for trends than as pass/fail tests
14:56 <brendand> roadmr, sconklin - also for disk read speed. we put an arbitrary lower limit of 30MB/s and one system achieves only 29
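[Editor's note: the failure mode brendand describes -- a fixed floor flagging a healthy-but-slightly-slow system -- looks like the sketch below. The measured value is a stub standing in for a real timed read (e.g. `hdparm -t` or a timed `dd`); the 30 MB/s floor is the arbitrary limit mentioned above.]

```shell
#!/bin/sh
# Fixed-threshold disk read check: a machine reading 29 MB/s against a
# 30 MB/s floor fails even though nothing has regressed -- exactly the
# false positive under discussion.
THRESHOLD_MB=30
measured_mb=29    # stub: pretend this came from a timed dd/hdparm read

if [ "$measured_mb" -ge "$THRESHOLD_MB" ]; then
    echo "PASS disk read ${measured_mb} MB/s (floor ${THRESHOLD_MB})"
else
    echo "FAIL disk read ${measured_mb} MB/s (floor ${THRESHOLD_MB})"
fi
```

Trending the measured values over time, as roadmr suggests, avoids the hard cutoff entirely.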
14:56 <sconklin> ok, then I think I'm clearer on the limitations of what's under discussion today. Thanks
14:57 <brendand> as i said, video/graphics
14:58 <brendand> what we do at the moment is quite simplistic: check if X is running, check if compiz *can* be run (not if it is)
14:58 <brendand> we also take a screengrab and analyse those in a batch. but the tool used is not 100% reliable
14:59 <sconklin> brendand: do you test external monitors?
14:59 <sconklin> do you test hotplugging of those?
14:59 <bjf> multi-monitors?
14:59 <brendand> sconklin - no, and we don't, unfortunately
14:59 <brendand> not in the scope of this. we'll have to achieve testing of that in a different way
15:00 <sconklin> but that's a major area where we see problems, and don't we certify that behavior?
15:00 <brendand> sconklin - we do. but certification is done over 2 weeks
15:00 <sconklin> oh, right, that's the full cert, and what we're talking about here is the quick test that's done for SRU kernels
15:00 <roadmr> sconklin: we just don't test it during the automated SRU and/or weekly tests
15:00 <brendand> sconklin - so we would have a 2-week regression-testing phase for each SRU, and the 7-person hwcert team would have no other role but testing SRUs
15:01 <sconklin> problem is, these are problems that show up as regressions in SRUs.
15:01 <brendand> sconklin - yep. really, no manual tests
15:01 <bjf> brendand: that works for us, sold!
15:01 <roadmr> heheh :)
15:01 <brendand> bjf - oh, forgot </sarcasm>
15:02 <brendand> :)
15:02 <bjf> brendand: that bit is always flipped on me
15:02 <bjf> brendand: i really don't know how to give guidance on video testing. i think you need to talk to the xorg guys
15:03 <brendand> bjf, sconklin - actually the lab engineers (like roadmr) already have enough unavoidable manual work
15:03 <bjf> brendand: our issue is that we *do* see regressions there, and they can be really painful
15:03 <brendand> e.g. powering on laptops, pressing keys to go through bios screens, resuming systems from suspend
15:04 <sconklin> I'm trying not to focus on your processes, but on which actual failures we have seen which aren't currently covered by cert testing.
15:04 <brendand> sconklin - that's good, but i need to point out when something won't be feasible
15:04 <sconklin> I shouldn't have to worry a lot about how you do your testing, only whether the results are valuable to us :-)
15:05 <sconklin> so take it or leave it, but I want you to know where we see problems
15:05 <bjf> brendand: i'd like to change that "not feasible" to "cert. is unable to perform that testing on sru kernels today"
15:05 <roadmr> yes, well, if we're in brainstorm mode anything goes. determining whether it's feasible or not can come later
15:05 <roadmr> sconklin: it's really useful to know that's a regression-prone area
15:05 <sconklin> ack +1
15:06 <bjf> brendand: that way, QA can pick up testing that cert. cannot do, or we identify testing that could be done sometime in the future
15:07 <brendand> bjf - true. 'not feasible' is always qualified with 'at this time, with the resources we have'
15:07 <sconklin> ok, we got sidetracked while talking about graphics testing . . .
15:07 <hggdh> I would rather go, then, into brainstorm mode, and later on decide what can or cannot be done right now, in 6 months, in 1 year, etc
15:07 <hggdh> but we need to know what the kernel team sees as problem areas
15:07 <sconklin> hggdh: agreed. talk about what we want to do, then figure out how much it would cost, time frames, etc later
15:08 <bjf> so ... during the dev cycle we can see a lot of churn in the upstream drivers themselves, which causes problems
15:08 <brendand> agreed. i'll put down notes on things that might be worth brainstorming, like external monitors and wireless speed trends
15:08 <bjf> once a kernel goes "stable" we see less churn; we see more 'quirking' to enable graphics for specific hw
15:09 <brendand> i shall be inviting bryce to the session, to get some input on graphics testing
15:09 <bjf> however, some of this 'quirking' can impact systems other than those it was intended to target
15:09 <brendand> outside of external monitors, what else is there?
15:10 <bjf> and there are some more general bug fixes that also impact multiple systems
15:10 <brendand> bjf - how do these bugs manifest themselves? is it usually corruption?
15:10 <bjf> often they boot to a black screen
15:11 <sconklin> brendand: failures generally occur either at boot or at login, based on my memory
15:11 <hggdh> or on resume
15:11 <brendand> and the symptom is mostly a black screen
15:11 <sconklin> yes, no graphics at boot is the most common, along with "sparkles", "lines" or other symptoms of incorrect timing.
15:11 <bjf> brendand: often, yes
15:12 <bjf> brendand: can you explain "batch" testing of screenshots?
15:13 <brendand> bjf - the imagemagick 'import' command is used to do a capture (which, as i mentioned, may not be reliable). this is stored with the test results
15:14 <brendand> bjf - and we have a script which downloads all of these into a single directory, and i just browse over them and check for blank ones or ones with graphical glitches. takes about 5 minutes
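[Editor's note: the "browse for blank ones" step could be partially automated with a mean-brightness check like the sketch below. The mean value would come from ImageMagick (e.g. `identify -format '%[fx:100*mean]' shot.png`); it is passed in as a stubbed number here, and the 2% cutoff and file names are illustrative, not part of the actual tooling.]

```shell
#!/bin/sh
# Flag screenshots whose mean brightness is near zero as suspected
# blank/black-screen captures for human review.
flag_blank() {    # flag_blank <name> <mean brightness, 0..100 percent>
    name=$1; mean=$2
    if [ "$mean" -lt 2 ]; then
        echo "SUSPECT $name (mean ${mean}%)"
    fi
}

# Stubbed values; a real run would loop over *.png in the results dir.
flag_blank good-desktop.png 47
flag_blank resume-failed.png 0
```

This only catches fully blank captures; glitches like "sparkles" and "lines" would still need the human pass.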
15:14 <bjf> brendand: thanks
15:15 <brendand> bjf - this week we found one blank one when testing the maverick SRU. turns out the system failed to resume from suspend (not a regression, it was like that since release)
15:15 <bjf> brendand: sidetracking a bit here ... do you do suspend/resume testing?
15:16 <brendand> bjf - basic suspend/resume testing, i.e. make sure the system still can
15:16 <bjf> brendand: that causes major issues with everything we've been talking about: wireless, graphics, audio
15:16 <brendand> bjf - i would like to expand that to include testing subsystems after suspend
15:16 <bjf> the system will rfkill a device on suspend but not restore it on resume
15:17 <brendand> bjf - we'll probably include some element of post-S3 testing for each area
15:17 <bjf> the graphics worked perfectly fine before suspend, but it resumes to a black screen
15:17 <brendand> audio, bluetooth, graphics, wireless
15:17 <bjf> and so forth
15:17 <brendand> etc.
15:17 <sconklin> yeah, I think just by taking what we have and also running it all post-resume, we gain a lot
15:17 <bjf> sconklin: +1
15:18 <bjf> sorry to throw yet another thing in ... do we do any hibernate / resume testing?
15:19 <brendand> bjf - no. isn't that a dodgy area? i.e. not all systems support it, and that's just one of those things?
15:19 <bjf> and we should probably move on to audio
15:19 <brendand> yep
15:19 <bjf> brendand: yes, thought i'd ask though
15:19 <brendand> before that though, just one question
15:20 <sconklin> I've heard a lot of factoids thrown around like "only server people care about it", but never any good data to back up who actually cares, or what the use cases are. I know we get bug reports for it, so some people use it.
15:20 <sconklin> hibernate, that is
15:20 <brendand> bjf, sconklin - to get an idea of the suitability of the screen capture tools we're using, it would be good to know when one of these regressions is seen (hopefully on a system we can get access to)
15:21 <brendand> i want to know if we've maybe been missing stuff
15:21 <bjf> brendand: noted
15:21 <brendand> onto audio anyway
15:21 <herton> also, about suspend/resume: do we have any stress testing? like suspend/resume 100 times. some problems are intermittent and only appear after some iterations
15:21 <sconklin> noted, we can get some historical data I think
15:22 <brendand> herton - no, we are even struggling with the problem of how to achieve that in our certification
15:23 <brendand> herton - i'll note it down though
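[Editor's note: the stress loop herton asks about is essentially the sketch below -- suspend/resume N times and re-check subsystems after each cycle, since intermittent failures only show up after many iterations. The actual suspend trigger (`rtcwake`, in a comment) needs root and real hardware, so it is stubbed with `true` here; the iteration count is illustrative.]

```shell
#!/bin/sh
# Suspend/resume stress loop with a per-cycle post-resume check slot.
ITERATIONS=5          # herton suggests ~100 on real hardware
i=1
while [ "$i" -le "$ITERATIONS" ]; do
    true              # real run: rtcwake -m mem -s 20
    # post-resume functional checks (wireless, audio, graphics, ...)
    # would go here; logging the cycle number ties any failure to
    # the iteration that first exposed it.
    echo "cycle $i: resumed"
    i=$((i + 1))
done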
15:23 <brendand> so, audio
15:23 <bjf> brendand: cking is worth talking to about suspend/resume testing (if you haven't already)
15:24 <bjf> if diwic is around, maybe he can speak to the kinds of regressions he sees
15:24 <brendand> bjf - not directly in relation to SRU testing
15:24 <bjf> and the kinds of testing that would be helpful
15:24 <cking> yep, I'm happy to discuss this with you brendand
15:24 <roadmr> we've had a look at cking's fwts testing, but it may be worth revisiting
15:24 <brendand> we have a couple of fwts tests in there
15:24 * diwic is around
15:24 <cking> especially with the improved goodness of the Oneiric features
15:24 <brendand> cpu_scaling and wake_alarm
15:25 <brendand> cking - will you subscribe to the blueprint? https://blueprints.launchpad.net/certify-planning/+spec/hardware-p-cert-sru-coverage
15:25 <bjf> brendand: for audio you want to test internal and external equally
15:25 <cking> sure
15:25 <brendand> diwic - we're discussing regressions with audio in -proposed
15:26 <bjf> usb speakers would be nice (yes, i'm completely ignoring whether these can be automated or not)
15:26 <bladernr> brendand: we could also add the other fwts test, as it's fully automatic and runs most of Colin's automated tests
15:26 <diwic> for testing audio regressions in general, I'd just say test playback and recording on various inputs and outputs; the more the better.
15:27 <brendand> diwic, bjf - so if we can cover record/playback on internal, external mic and usb, then it's all good?
15:27 <brendand> usb will be most difficult i think
15:27 <brendand> for external we can use a patch cable
15:27 <bjf> brendand: that would be pretty good, yes
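[Editor's note: the patch-cable record/playback check brendand describes might look like the sketch below. The ALSA commands (in comments) need real audio hardware, and the device names in them are assumptions; the capture result is stubbed with a placeholder file so the pass/fail logic itself runs. A real test would go further than "did we record any data?" -- e.g. checking for a tone in the capture -- but per diwic, complete breakage is the main target.]

```shell
#!/bin/sh
# Loopback audio smoke test: play a tone out the line-out, record it
# back through a patch cable into mic-in, and check something arrived.
#   aplay -D plughw:0 test-tone.wav &              # real: play known tone
#   arecord -D plughw:0 -d 5 -f cd captured.wav    # real: record via cable
printf 'RIFF fake wav data' > captured.wav         # stub capture result

if [ -s captured.wav ]; then
    echo "PASS capture produced data"
else
    echo "FAIL capture is empty"
fi
```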
15:27 <bjf> brendand: lots of bluetooth headsets in use these days
15:28 <diwic> brendand, yeah, to a reasonable degree... I mean, you could go on by testing volume controls, low-latency / high-latency scenarios
15:28 <brendand> diwic - if the audio breaks, does it usually break completely? testing audio quality might be tricky
15:28 <cking> so are we ultimately hoping to be able to write per-SRU tests to catch regressions?
15:28 <bjf> brendand: diwic is the domain expert for audio
15:29 <brendand> diwic - if you could attend the session about this at UDS, that would be great (link just a little bit up in the scrollback)
15:29 <diwic> brendand, that's a good question. in general I think we haven't had that many regressions in SRUs for audio in the past - do you agree, bjf?
15:30 <diwic> brendand, ok, added myself to the blueprint
15:30 <brendand> i think we've got a load of good ideas together now
15:30 <brendand> i'll sit down on monday and try to sift through this
15:30 <diwic> brendand, maybe testing sound after suspend/resume could make sense as well
15:31 <bjf> brendand: yah, more than an hr is just asking for trouble (no sarcasm)
15:31 <brendand> diwic - yeah. a question there though: if the mixer settings get changed after suspend, is that a proper problem?
15:31 <hggdh> actually we should test sound, video, and network on resume
15:32 <sconklin> yeah, this is a good start
15:32 <brendand> hggdh - that's the plan. perhaps bluetooth too
15:32 <hggdh> brendand: yes, bluetooth, I forgot it
15:32 <diwic> brendand, hmm, I think it is, but minor, in the sense that if it just happens to one of the machines it should be fixed, but maybe it is not enough to block an SRU with fixes for thousands of users
15:33 <brendand> actually, i don't know whether we'll be able to address it much, but at least one functional test for bluetooth would be nice (before and after suspend)
15:33 <bjf> brendand: for bluetooth, you want to actually pair with a device
15:33 <brendand> diwic - at the moment i think we have already certified a lot of systems which won't keep the mixer settings after suspend (roadmr, bladernr?)
15:34 <gema> brendand: do you do targeted testing for each SRU, or do you run all the test cases on each, or all the test cases on a mix of SRUs?
15:34 <bjf> brendand: and be able to do that after resume
15:34 <bladernr> brendand: yeah, that's a pain point, personally, but we have
15:34 <brendand> bjf - we would plan to include a file transfer. we're doing this automatically in cert, so it shouldn't be too hard
15:34 <bjf> brendand: nice
15:34 <brendand> gema - same test suite on each one
15:34 <gema> separately?
15:34 <gema> and then together?
15:35 <brendand> gema - separately what?
15:35 <diwic> brendand, interesting. I was not aware of this problem (and haven't seen loads of bugs about it either)
15:35 <gema> I am trying to figure out whether the SRUs would interfere with each other and in which order you test them
15:35 <roadmr> brendand, diwic: yep, as long as audio does work, we don't care that much about the mixer going up/down after resume
15:36 <diwic> roadmr, it feels like one of all the minor annoyances we should fix for the P cycle
15:36 <brendand> gema - i'm not quite sure what you're trying to say
15:36 <gema> brendand: no worries, I will ask offline
15:36 <brendand> gema - sorry
15:36 <roadmr> diwic: yep, it's annoying, a papercut if you will
15:37 <hggdh> cking: answering your question about per-SRU tests: ideally, we should have a collection of tests that grows as time goes by, regarding regressions
15:37 <brendand> hggdh - we can't afford to let our test suite grow unchecked
15:38 <hggdh> brendand: indeed, for certification, but not so for QA
15:38 <diwic> brendand, as long as tests don't require manual intervention I guess testing is cheap, but for manual tests we should carefully consider every one
15:38 <brendand> hggdh - yeah, you guys feel free ;)
15:38 <hggdh> heh
15:38 <gema> diwic: tests are never free, they need to be maintained
15:38 <gema> diwic: like any other code
15:39 <bjf> brendand: will you be sending out an edited version of this discussion?
15:39 <sconklin> gema +1
15:40 <brendand> bjf - yeah, it's going to be what i do on monday, probably. i'll probably attach it to the blueprint
15:40 <diwic> gema, fair point
15:40 <bjf> brendand: ok, looking forward to it
15:41 <gema> brendand: looking forward to it too. thanks for all the information :)
15:41 <brendand> regarding the mixer settings: if we got a baseline of systems which can restore them properly, then at least we could look for sudden regressions in those
15:41 <brendand> for the ones that could never restore them, we can't really hold SRUs back for that
15:41 <diwic> brendand, makes sense
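[Editor's note: brendand's baseline idea reduces to storing a known-good mixer state per machine and diffing the post-resume state against it. The two state files below are stubs; real dumps would come from something like `alsactl store` or an `amixer` listing, and the field contents here are invented for illustration.]

```shell
#!/bin/sh
# Compare a saved mixer baseline against the state seen after resume;
# any difference on a machine that used to restore cleanly is a
# candidate regression.
cat > mixer-baseline.txt <<'EOF'
Master: 80% [on]
Speaker: [on]
EOF
cat > mixer-after-resume.txt <<'EOF'
Master: 0% [on]
Speaker: [on]
EOF

if diff -q mixer-baseline.txt mixer-after-resume.txt >/dev/null; then
    echo "PASS mixer state preserved across resume"
else
    echo "REGRESSION mixer state changed after resume"
fi
```

As brendand notes, this only makes sense on systems with a known-good baseline; machines that never restored their mixer state correctly stay out of the comparison set.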
15:41 <brendand> that's my last thought
15:42 <brendand> thanks for everyone's input. i hereby return ubuntu-kernel to its normally scheduled programming
15:42 <sconklin> move to adjourn
15:42 <sconklin> :-)
15:42 <sconklin> thanks!
15:42 <hggdh> brendand: one of the things we should look for is deviation from the "standard" -- i.e. those tests that consistently failed/succeeded and are now succeeding/failing
15:43 <hggdh> sconklin: +1
15:43 <roadmr> thanks everyone, it was really useful. hope to continue this at some point / at UDS
16:01 <kamal> .
16:02 <roadmr> ..
16:02 <kamal> no. just dot. ;-)
16:02 <diwic> ....
16:03 <diwic> ........
16:03 <diwic> I'm getting tired.
16:03 <roadmr> it started as a dot and now we have a progress bar going
16:03 <diwic> maybe it's time to call it a day.
=== yofel_ is now known as yofel
18:19 <bjf> jsalisbury: you're making me do real work! :-)
18:20 <jsalisbury> bjf, heh, sorry 'bout that
19:11 <Claudio9641> Hi, I have a problem with the new firewire stack in 11.04 (it also still exists in 11.10 beta2). where would be the best place to ask for help? I can provide error messages and detailed information.
19:14 <bjf> Claudio9641: you should file a bug, add your error messages and detailed information, and then come back here and tell us the bug #
19:14 <Claudio9641> the problem is: I have a firewire external harddisk which is not recognized automatically by Kubuntu 11.04 when connected. strange thing: when I boot up a live CD of 10.04 it works perfectly (old firewire stack). when I then reboot into 11.04, the drive also works in 11.04.
19:15 <Claudio9641> hi bjf: file a bug - where?
19:15 <bjf> Claudio9641: from the command line you can type: ubuntu-bug
19:19 <Claudio9641> great, now I have a problem with ubuntu-bug. I tell it "problem with external storage device" and the next window that pops up asks "which audio device are you having a problem with?" - hey, dude, I said 'storage', not 'audio'. grmpf
19:21 <Claudio9641> ... and when I press "abort", ubuntu-bug hangs .. great!
19:23 <jjohansen> Claudio9641: ouch, try: ubuntu-bug linux
19:26 <jsalisbury> ogasawara, bjf, regression from 11.04 to 11.10, but it does not happen with the latest mainline build:
19:26 <jsalisbury> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/870123
19:26 <ubot2> Launchpad bug 870123 in linux "Twinhan USB DVB device failed to operate after upgrade to 11.10" [Undecided,Confirmed]
19:27 <ogasawara> jsalisbury: ack, I'll take a look and see if there's a patch from upstream we should pull in for SRU
19:28 <jsalisbury> ogasawara, thanks.
19:46 <ogasawara> jsalisbury: cool, I think the patch to resolve that bug actually made it into upstream stable v3.0.5 (i.e. we have the patch queued for Oneiric's first SRU).
19:46 <ogasawara> jsalisbury: am going to build them a test kernel just to confirm
19:46 <jsalisbury> ogasawara, that's good news
19:46 <ogasawara> jsalisbury: indeed, thanks for the heads up.
19:46 <Claudio964> ok, I have now filed a bug: bug #870250. hope that helps.
19:46 <ubot2> Launchpad bug 870250 in linux "Problem with external firewire disk and new firewire stack in Kubuntu 11.04 and 11.10 beta2" [Undecided,Confirmed] https://launchpad.net/bugs/870250
19:46 <jsalisbury> ogasawara, np, thanks for looking at it
20:11 <bjf> jsalisbury: ping
20:12 <jsalisbury> bjf, pong
20:12 <bjf> jsalisbury: pm
20:14 <Claudio964> bjf, jsalisbury: thanks for the quick comments on my bug report. I will test the upstream kernel tomorrow. must leave now ... thanks so far for the directions and instructions!!!
20:14 <jsalisbury> Claudio964, np, let us know how the testing goes

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!