=== Eisbrecher_xnox is now known as xnox
=== psivaa-off is now known as psivaa
=== ara is now known as Guest56630
[11:55] xnox, any progress on that "installer bug"?
[12:03] bjf: yes and no. Haven't reproduced the failure.
[12:03] bjf: and my local utah setup no longer works at all.
[12:13] xnox, has utah changed (again)?
[12:14] cking: there was a stable release in june this year, and i'm assured that's what's running in production; however it no longer provisions VMs in my local setup as it used to.
[12:14] cking: also
[12:15] bjf: cking: "Eisbrecher_xnox: looks like I can produce this problem on VM with latest trusty now that we have them"
[12:16] bjf: it's not unique to utopic (at least in terms of guest).
[12:16] xnox, ack
[12:17] xnox, i'm also confused by the "now that we have them" part of that statement and wonder "why didn't you have them before?"
[12:19] bjf: for some time we didn't have trusty dailies built (due to contention with precise dailies; it's the first time we have a .5 release), so at first this was blocked on moving CD image building from nusakan into launchpad.
[12:19] bjf: the launchpad bit is complete, and all images are built by the launchpad buildd farm now.
[12:19] xnox, ah, ok, makes sense
[12:19] bjf: however, UTAH was searching for /daily-live/trusty-desktop-amd64.iso
[12:20] bjf: whereas stable dailies are published in /trusty/daily-live/trusty-desktop-amd64.iso
[12:20] bjf: top-level /daily-live/ is for the devel series only, which is utopic now. Hence utah didn't pick up the new images when they started to be generated.
[12:21] bjf: (well, there is an extra subdir for image build number & current)
[12:24] xnox, what do you need at this point to make progress? do you need help from plars and nuclearbob for the utah side?
[12:26] bjf: i don't know if nuclearbob works on utah. I've been directed at ev, psivaa, plars, doanac.
[12:27] bjf: in the meantime, I am investigating how to provision and run tests without UTAH in between.
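[Editor's note: the publishing-path mismatch described at 12:19-12:20 can be sketched as a small helper. This is a hypothetical illustration, not UTAH's actual code; the extra build-number/`current` subdir mentioned at 12:21 is elided, and `iso_path` is an assumed name.]

```shell
#!/bin/sh
# Hypothetical sketch of the layout described above: the devel series
# (utopic at the time) publishes under the top-level /daily-live/, while a
# stable series' dailies live under /<series>/daily-live/.
iso_path() {
    series="$1"   # series to test, e.g. trusty
    devel="$2"    # current devel series, e.g. utopic
    if [ "$series" = "$devel" ]; then
        echo "/daily-live/${series}-desktop-amd64.iso"
    else
        echo "/${series}/daily-live/${series}-desktop-amd64.iso"
    fi
}

iso_path utopic utopic   # -> /daily-live/utopic-desktop-amd64.iso
iso_path trusty utopic   # -> /trusty/daily-live/trusty-desktop-amd64.iso
```

A harness that only checks the first, top-level form will miss stable-series dailies entirely, which matches the failure described: once trusty was no longer the devel series, UTAH's hard-coded path went stale.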
[12:27] xnox, if you would like utah help, i'm happy to go have a chat with evan
[12:27] bjf: e.g. using simple pxeboot provisioning and test provision/collection, which should work with VMs and bare metal alike.
[12:29] bjf: i don't care much about utah. The pressing thing is that: (a) bootspeed tests are not executed, (b) power tests are not executed, (c) utopic desktop daily tests have not run since the 20th of May (thus not migrated from pending -> current).
[12:30] xnox, the weakness does seem to be on the utah side of things
[12:33] xnox, i agree with you. i would hope that you could repro the issue outside of utah, but if you can't we should get them involved
[12:35] xnox, is there any additional information they can provide to you which would help debug this? what if they provided you with access to a system in this particular state?
[12:38] bjf: access to the utah server, from which i can trigger utah provisioning runs, would be useful, yes. Since my local utah setup is non-operational (as in it doesn't get as far as the ci.ubuntu.com utah runs get to).
[12:38] bjf: and it's not like the current utah production server does anything useful =) given all tests time out and don't run.
[12:39] it's interesting that the trusty 14.04.0 test run today shows the inability to unmount /target ( https://jenkins.qa.ubuntu.com/job/trusty-desktop-amd64-smoke-default/163/artifact/log/utah-24420-install.log/*view*/ )
[12:40] based on the artifacts there, i believe the machine did install and a reboot was requested.
[12:41] but the target machine never showed up with a working ssh server open.
[12:41] https://jenkins.qa.ubuntu.com/job/trusty-desktop-amd64-smoke-default/163/artifact/log/utah-server-ssh.log/*view*/
[12:51] xnox: just going through the backlog..
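[Editor's note: the 12:40-12:41 observation (install finished, reboot requested, but the target never came up with ssh) is the kind of check a provisioning harness makes by polling the target's ssh port. A minimal sketch of such a poll, assuming bash (for the `/dev/tcp` redirection and `SECONDS` variable) plus the coreutils `timeout` utility; `wait_for_port` is a hypothetical helper, not a UTAH function.]

```shell
#!/bin/bash
# Poll until a host accepts TCP connections on a port, or give up after a
# deadline. Lets a harness tell "installed and rebooted" apart from "never
# came back", the failure seen in the utah-server-ssh.log above.
wait_for_port() {
    local host=$1 port=$2
    local deadline=$(( SECONDS + ${3:-300} ))   # default: 5 minutes
    while (( SECONDS < deadline )); do
        # /dev/tcp/<host>/<port> is a bash redirection that only succeeds
        # if the connection is accepted; timeout guards against hangs
        if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
            return 0
        fi
        sleep 5
    done
    return 1
}

# e.g., after requesting the reboot:
# wait_for_port target-vm 22 600 || echo "target never came up with ssh"
```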
re: 'current utah production server does anything useful =) given all tests timeout' we have server iso installations running ok
[12:51] xnox: the issue we are facing only pertains to desktop isos
[12:51] xnox: the issue only started when we upgraded the production servers from precise -> trusty
[12:57] psivaa, that's a completely different story than we have been getting
[12:57] psivaa, we've been told that the problem is with the installer when installing on the test system
[12:57] psivaa, it sounds like you are saying that after you updated your infrastructure it started failing
[12:58] bjf: well, as you know, the 'failing' has been going on for quite a long time, and there were one or two installer issues in between
[12:59] bjf: when i checked last time, it wasn't the installer alone that's at fault.
[12:59] bjf: and that was a couple of weeks ago
[13:00] psivaa, how do we all come together on this and get the tests running again?
[13:00] bjf: manual installation went fine, and i tried a preseeded installation too and that worked as well, outside of utah
[13:00] psivaa, so how did the finger get pointed at the installer?
[13:01] bjf: as i said, there were installer bugs too in between, even with manual installation. but i *think* that got fixed now
[13:01] (the finger pointed earlier might not have folded :))
[13:02] psivaa, so what are the current issue(s) that need to be addressed to get the tests running again?
[13:03] bjf: the current issue was (when i checked about a week and a half ago) that the utah installation of desktop goes into a loop on the trusty production servers, and instead of rebooting to the login screen after the install, it starts the installation again.
[13:04] psivaa, and where do you believe that issue exists? is it utah?
[13:05] bjf: i'd start from utah for a fix, but i'm not ruling out changes on the installer side that could make the utah server think that the installation is complete
[13:06] s/complete/not complete/
[13:07] psivaa, ok, let's start with utah.
who works on utah these days?
[13:09] bjf: so I think utah is now basically in maintenance mode awaiting deprecation, in light of the ci-airline development. doanac, plars and myself in our team do the fixes as we see fit.
[13:09] this appears to be a complicated issue and therefore outside my capability
[13:15] * cking wonders how many infrastructure development iterations are required to get something working for at least 2 cycles w/o breaking
[14:15] psivaa, i thought we used the same framework for the SRU validation, is this going to cause issues qualifying .5?
[14:20] apw: for kernel SRU, we don't use utah. for precise iso testing we don't use utah either, so it should not affect precise .5. Not sure if anyone is using it for SRU validation of the other packages
=== fmasi is now known as Guest75332
[19:35] rsalveti (and rtg): did the apparmor signal/ptrace patches make it into phone images yet?
[19:36] jdstrand: yes
[19:36] rsalveti: I was looking at https://launchpad.net/ubuntu/utopic/+source/linux-mako, but didn't see it in the changelog. is that expected?
[19:37] oh, I see it in 3.4.0-5.30
[19:37] rsalveti: nm, thanks!
[19:37] it got hidden due to the ftbfs entry
[19:41] rsalveti: thanks for that testing btw :)
[19:43] jdstrand: np!
[22:01] sforshee, could you or someone on your team take a look at: https://bugs.launchpad.net/ubuntu/+source/linux-mako/+bug/1337753
[22:01] Ubuntu bug 1337753 in wpa (Ubuntu) "[mako] can not create an ad-hoc network" [Medium,Incomplete]
[23:15] This isn't a kernel development question, but it is about internal kernel operations: when would it be necessary to manually use the sync command-line utility? Shouldn't any pending writes be flushed when I run `losetup -d /dev/loopX` or `dmsetup remove exampledevice`? What types of operations could a user perform that the kernel wouldn't automatically sync? Can you provide examples of when I would need to run the sync command?
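[Editor's note: a hedged sketch of an answer to the closing question, not authoritative kernel documentation. Tearing down a loop device or dm mapping generally flushes the device being removed, but writes that never pass through such a teardown can still sit in the page cache; device names below are placeholders.]

```shell
#!/bin/bash
# Common cases where an explicit sync still matters:

# 1) after writing an image straight to a removable block device, before
#    unplugging it -- dd can return while data is still in the page cache:
#      dd if=ubuntu.iso of=/dev/sdX bs=4M && sync
#    (or `dd ... conv=fsync` to have dd fsync the device itself)

# 2) before an emergency reboot that skips the normal shutdown flush:
#      sync; echo b > /proc/sysrq-trigger

# 3) flushing a single file without a system-wide sync (GNU coreutils
#    >= 8.24 accepts FILE arguments and fsyncs just those files):
tmp=$(mktemp)
echo "some data" > "$tmp"   # data may still be only in the page cache here
sync "$tmp"                 # fsync this one file
rm -f "$tmp"
```

Normal unmounts, clean shutdowns, and `losetup -d`/`dmsetup remove` teardowns take care of flushing on their own; `sync` earns its keep mainly when a device is about to disappear (unplug, power loss, forced reboot) without one of those paths running.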