[00:06] That's kinda creepy. The wine and python-eventlet packages I accepted to precise-proposed both had the same version number.
[00:09] by the by, the xorg-server 2:1.11.4-0ubuntu10.6 in the unapproved proposed queue can go away, as I published it through the precise-security pocket (since we wanted the revert there as well)
[00:09] Done.
[00:09] thanks
[01:13] infinity: new sru-report if you want to pull it
[01:29] hi all, is the archive frozen or anything? Or why is a package waiting for approval?
[01:33] roaksoax: yes, test freeze https://lists.ubuntu.com/archives/ubuntu-devel/2012-July/035528.html
[01:34] jbicha: thanks!
[01:52] bdmurray: Mmkay.
[02:50] bdmurray: Hrm. That's an awful lot of yellow. (It should probably also be a bit more readable against the background, perhaps { color: #BBBB00; font-weight: bold; })
[02:50] bdmurray: I can commit the latter change, but I'm curious if we're getting too many unintended hits with this for it to be valuable.
[03:12] infinity: I'll send an email tomorrow for comments but my testing with precise made it look useful
[03:13] bdmurray: What's the current heuristic? I didn't dive into it, I just played with colours.
[03:13] infinity: the commit message had it but bug.last_message_date > date the package was published in proposed
[03:13] Ahh, kay.
[03:14] Well, of course, that will yellow up every bug with a "me too" verification message.
[03:14] +1
[03:14] But I guess it doesn't hurt to be notified to go read those. ;)
[03:15] If we view yellow as "potentially good" (I was thinking the inverse when we first discussed it, because I'm a pessimist), I guess the large number of bugs that went from blue to yellow might be a good sign.
[03:15] Given that blue bugs mean "no one appears to give a crap about this SRU".
[03:21] bdmurray: Oh, and now that you ordered the page by age, I guess that bumps up the urgency of my fixing up the kernel stuff to be a bit more special snowflake (break it out, etc).
[03:21] bdmurray: Cause, otherwise, it's pretty easy to miss the linux-* junk that goes with linux.
[08:04] jamespage, ping
[08:04] babyface_, morning
[08:04] tomcat7?
[08:04] nope - lvm I guess
[08:04] yes, please have a look at this http://10.189.74.2:8080/job/quantal-server-amd64_tomcat-server/55/artifact/55/test-results/TEST-tomcat-server.xml/*view*/
[08:05] * xnox something up with lvm?!
[08:05] jamespage, seems there is still a problem with tomcat even though 1009579 is closed
[08:07] babyface_, I can't actually see that URL ATM
[08:08] jamespage, you can not access the internal jenkins?
[08:08] babyface_, the bug that we discussed yesterday has nothing to do with the MIR
[08:09] which is bug 1009579
[08:09] Launchpad bug 1009579 in tomcat7 "[MIR] tomcat7 (replaces tomcat6)" [High,Fix released] https://launchpad.net/bugs/1009579
[08:09] and is fixed
[08:09] babyface_, I can see the internal jenkins normally - but not today it would seem
[08:09] babyface_, sorry but I'm out for the next 2 hours - I'll be back around 1130 BST
[08:10] jamespage, ok,
[08:10] jamespage, see u
[08:10] babyface_, please feel free to raise a bug about it tho - I do think it's an intermittent issue of some sort which probably points to a bug.
[08:11] jamespage, ok
[08:11] jamespage, thanks.
[08:11] babyface_, BTW I don't think jenkins is running for some reason on 10.189.74.2 - you might want to get that looked at first...
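For reference, a minimal sketch of the sru-report highlighting heuristic described above at [03:13]; the function and variable names are illustrative assumptions, not the report's actual code. The { color: #BBBB00; font-weight: bold; } rule suggested at [02:50] is simply the style applied to rows this test flags.

    from datetime import datetime, timezone

    # Sketch only: names are assumptions, not sru-report's real code.
    def needs_attention(last_message_date, proposed_published_date):
        """Return True when a bug should be highlighted ("yellow"): someone
        commented on it after the upload reached -proposed, so the comment may
        be verification feedback worth reading."""
        return last_message_date > proposed_published_date

    # Example with made-up dates: a comment arrived a day after the -proposed
    # publication, so the bug gets highlighted.
    published = datetime(2012, 7, 16, 12, 0, tzinfo=timezone.utc)
    commented = datetime(2012, 7, 17, 9, 30, tzinfo=timezone.utc)
    assert needs_attention(commented, published)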
[08:13] jamespage, yeah, I will
[08:27] jamespage, it is not running because I shut it down, the problem with tomcat is the port detection test with IPv6, and you're right, there is also something with LVM
[08:27] unexpected question partman/confirm_nooverwrite
[08:31] i generally preseeded partman/confirm_nooverwrite
[08:31] that question appears if there is already a volume group with the same name
[08:31] e.g. reinstalling a VM
[08:31] so it may or may not appear.
[08:32] similarly there is partman-lvm/confirm_nooverwrite
[08:32] also
[08:51] xnox, partman/confirm_nooverwrite is preseeded to true but a question is asked when doing an lvm installation with a fresh drive.
[08:51] xnox, https://jenkins.qa.ubuntu.com/view/Quantal/view/ISO%20Testing%20Dashboard/job/quantal-server-i386_lvm/51/artifact/51/test-results/ for example
[08:51] xnox, I'll file a bug
[08:51] xnox, the preseed didn't change and the test passed until today
[08:52] right. I have started seeing something like that as well. But my testing was different
[08:52] please file a bug, and I will dig into it.
[08:54] jibel_: where can I find the preseed file?
[08:55] xnox, https://jenkins.qa.ubuntu.com/view/Quantal/view/ISO%20Testing%20Dashboard/job/quantal-server-i386_lvm/51/artifact/
[08:55] https://jenkins.qa.ubuntu.com/view/Quantal/view/ISO%20Testing%20Dashboard/job/quantal-server-i386_lvm/51/artifact/preseed/*view*/ more precisely
[09:09] we're frozen?
[09:09] see -devel
[09:10] ah
=== popey_ is now known as popey
=== mmrazik is now known as mmrazik|lunch
[10:38] jibel_, ah - that would explain why jenkins was not running then :-)
=== jibel_ is now known as jibel
[13:05] can someone review python-cliff, python-django-compressor, python-django-appconf, and python-tablib please
[14:15] Doing queue processing through the queue script is SO much better than the web UI. Thanks cjwatson.
[14:16] Yay.
[14:19] does sru-report now work with verification-done-$SERIES?
[14:20] can someone let my precise-proposed flash-kernel upload in? i would like to do the bug tagging etc
[14:20] after spending a few minutes verifying it, I noticed that bug 1007405 was actually already tested but the tag was reset when the oneiric SRU landed in -proposed; proper support for -$SERIES in the bug tags would make that easier
[14:20] Launchpad bug 1007405 in libvirt "apport hooks installed with wrong source package name" [Undecided,Fix committed] https://launchpad.net/bugs/1007405
[14:21] hmm, according to a quick grep, sru-report doesn't recognize that syntax yet... tagging verification-done then...
[14:44] cjwatson: Not sure if you're still working on queue (aren't you on vacation today?), but I just pushed a small change.
[14:47] can somebody please approve ^^^^^^ partman-lvm
[14:47] =)))) it should unbreak the CDs
[14:47] alternate and server LVM installs
[14:49] Done
[14:49] ScottK: thanks ;-)
[14:50] You're welcome.
[15:19] ScottK: *nod*
=== jbicha is now known as Guest50245
=== shadeslayer_ is now known as evilshadeslayer
=== yofel_ is now known as yofel
[17:19] tyhicks: ^^ all accepted
[17:19] \o/
[17:30] micahg: There was no lightning for lucid. Intentional, or oops?
[17:30] infinity: non-existent
[17:30] micahg: Check.
[18:11] infinity, http://paste.ubuntu.com/1097029/
[18:11] do you have any idea what that could be? (latest ubuntu-server preinstalled attempt)
[18:12] interestingly, celbalrai works fine for all other images
[18:12] ogra_: Looks like someone ported germinate to python3 but missed something.
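For reference, the two partman questions discussed above (around [08:27]-[08:32]) are normally answered in a debian-installer preseed file roughly like this; a generic sketch, not necessarily the exact contents of the Jenkins job's preseed linked above.

    # Confirm the partitioning changes without prompting; the partman-lvm variant
    # covers the case where an existing volume group would be overwritten.
    d-i partman/confirm_nooverwrite boolean true
    d-i partman-lvm/confirm_nooverwrite boolean true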
[18:12] ogra_: Only server calls germinate.
[18:12] ah
[18:12] aha!
=== Guest50245 is now known as jbicha
=== jbicha is now known as Guest72435
=== Guest72435 is now known as jbicha_
[18:14] And it's probably the only place where we use quantal's germinate in production.
[18:14] infinity: but it doesn't affect the server daily images because germinate is run on nusakan and is python2?
[18:14] So, I suspect it's just hitting something no one noticed in the porting and testing.
[18:14] * slangasek nods
[18:14] slangasek: Right, alternates germinate on nusakan, the archive germinates on cocoplum, but the livefs preinstalled-pool madness germinates on the buildd.
[18:15] well, we will keep that for the server squashfs builds i guess
[18:15] Nah.
[18:15] don't you need to run that natively on the target subarch?
[18:15] s/sub//
[18:15] I'd assume the server builds will look more like the current desktop builds, with a pool outside the squash, built on nusakan.
[18:16] well, but you still need the squash
[18:16] but the squashfs doesn't need germinate
[18:16] ^
[18:16] regardless, the germinate bug should be fixed
[18:16] yes, i was more wondering about live-builder occupation :)
[18:16] I look forward to tearing that code out.
[18:17] But germinate should be fixed anyway, yes. ;)
[18:17] how can I reproduce that germinate invocation?
[18:17] I guess that's in live-build/auto/config?
[18:17] likely just by invoking live-build the right way
[18:18] slangasek: It is, yeah.
[18:19] slangasek: germinate --no-rdepends --no-installer -S http://people.canonical.com/~ubuntu-archive/seeds/ -m http://ports.ubuntu.com/ubuntu-ports -d quantal -s ubuntu.quantal -a armhf
[18:19] slangasek: Ish.
[18:20] * slangasek nods
[18:20] LANG=C germinate --no-rdepends --no-installer -S http://people.canonical.com/~ubuntu-archive/seeds/ -m http://ports.ubuntu.com/ubuntu-ports/ -d quantal -s ubuntu.quantal -c main -a armhf
[18:20] LANG=C is key
[18:21] I wonder if we can ever get away with just aliasing C to C.UTF-8
[18:21] Which would trade one set of annoying bugs for a whole new one, I suppose.
[18:26] infinity, another thing slangasek and i noticed is that ubuntu-server daily-preinstalled is attempted even though default-arches says it shouldn't, any idea about that?
[18:26] ogra@anubis:~/Devel/branches/nusakan/cdimage-deployment$ CDIMAGE_ROOT=. ALL_DISTS=quantal bin/default-arches ubuntu-server daily-preinstalled quantal
[18:26] amd64 amd64+mac i386 powerpc
[18:26] s/says/said/
[18:27] obviously there is no arm in the list ...
[18:27] Err, that output makes no sense.
[18:27] (unless i run it wrongly)
[18:27] and note that 'buildlive' is respecting etc/default-arches, but cron.daily-preinstalled was not
[18:27] Since we never had daily-preinstalled on !arm.
[18:27] hmm, riiight
[18:28] it just seems to return the wildcard match for ubuntu-server
[18:29] yes
[18:29] which it shouldn't
[18:30] so in fact, cron.daily-preinstalled is failing because there are no livefses built at all for any of the archs it's being called for
[18:30] right, but it should also query default-arches nonetheless
[18:30] ogra_: shouldn't why?
[18:30] it *does* query default-arches
[18:31] and default-arches was returning the wrong arch list since the 27th
[18:31] because there is a more specific match above in /etc/default-arches that matches daily-preinstalled
[18:31] no, there wasn't
[18:31] you listed maverick-precise
[18:31] quantal was excluded
[18:31] oh, crap
[18:32] (which was the whole point of your change, but I'm not sure you were meaning to apply this to server?)
[18:32] right, because it would also trigger desktop builds that way ...
[18:32] * ogra_ changes
[18:32] I already changed it
[18:32] to be honest i didn't think much about server since we planned to move away from preinstalled there too
[19:12] ogra_, skaet, infinity, elmo tells me the mandala2.0 box will be up and running tomorrow
[19:13] yay!
[19:16] pgraner: Shiny.
[19:17] pgraner: Remind me, was that one Pandas, or PandaESes?
[19:17] infinity, ES
[19:17] Hrm. Kay, then I might need to make some requests around that.
[19:17] infinity, how's the imx6 looking
[19:17] pgraner: It's looking sad in its little plastic baggie while I do other work. :(
[19:17] infinity, the RT for the mandala2.0 is RT #54490
[19:18] infinity, you are a sad sad man
[19:20] pgraner: No, no. The MX6 is sad. I'm just weird.
[19:22] infinity: finally got your imx6?
[19:23] rsalveti: Yeah. Now to find time to do something with it.
[19:23] infinity: :-)
[19:24] rsalveti: Also, you need to go put the boot to the Freescale LT guys and get them to produce me a unified mx53/mx6 image that actually works on both.
[19:24] rsalveti: For bonus points, a unified U-Boot too. Thanks.
[19:28] infinity: unified u-boot I'm not that sure about, but I know they were working on getting at least one single kernel for it
[19:28] will ping jcrigby to have a look at it
[19:30] rsalveti: See, I got the impression that they'd almost entirely given up on even caring about mx53 when they started working on mx6. Would definitely be nice to support both from the same image "for free" (from our perspective).
[19:31] infinity: for sure
[19:47] hrm, is the updates security copy job broke?
[19:48] it seems it was only partially successful
[19:48] micahg: ?
[19:48] infinity: all those security updates you approved, only some were auto-copied to -updates
[19:49] micahg: Define auto-copied? You mean some of them were done by hand?
[19:49] micahg: Or you mean some of them failed after the fact?
[19:49] infinity: no, supposedly there's a cron to copy from security to updates ~hourly
[19:50] micahg: That might be more of a security team thing. Or it's a weird bit of infrastructure I don't know about. ;)
[19:51] Oh, unless copy-report actually copies stuff too.
[19:51] is that at 58?
[19:51] Which it does.
[19:51] 28,58
[19:51] yeah
[19:51] Though I just ran it by hand now. :P
[19:52] thanks :), let's hope it works
[19:53] Well, it did something. It's not wildly verbose. What were you hoping for it to do?
[19:53] infinity: copy over the remaining stuff that wasn't copied from -security to -updates
[19:54] It did 7 somethings.
[19:54] Does that sound about right? :)
[19:54] checking
[19:55] That should probably be moved off cocoplum and into an API script before we kill our shell access.
[19:55] cjwatson: ^
[19:55] infinity: I think I only needed 2
[19:55] cjwatson: Unless you think copy-report is so foolproof that we can just let it be an infrastructure job that never needs looking at.
[19:55] micahg: Well, 7 is bigger than 2, so you obviously win.
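As an aside, the hourly security-to-updates copy being discussed here runs from cron; a crontab entry matching the "28,58" times mentioned at [19:51] might look roughly like the sketch below. The script path is an assumption for illustration, not the real cocoplum crontab.

    # Run copy-report at :28 and :58 past each hour (times taken from the log above);
    # the location of the script is illustrative only.
    28,58 * * * * /srv/archive-tools/copy-report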
[19:55] apparently it did a few others before the last publisher run
[19:56] micahg: I suspect it's working fine, and you're just dealing with publisher versus patience issues.
[19:56] micahg: But, let's look again in an hour or so. :P
[19:56] infinity: heh, probably, just impatient :)
[19:56] * micahg will have tyhicks verify after he's done with his USNs
=== jbicha_ is now known as jbicha
[20:10] infinity, ogra_: fyi, lp:~vorlon/germinate/lp.1025818
[20:10] (will wait for cjwatson to review & merge)
=== medberry is now known as med_
[20:43] micahg: Seems to have all properly copied eventually.
[20:46] infinity: thanks
[20:47] bdmurray: speaking of sru-report, how difficult would it be to get verification-done-$series support (considering it to be equivalent to verification-done if present and matching the series)?
[20:48] stgraber: not too hard I think
[20:48] since we have release tables in there
[20:48] bdmurray: my current workflow is to mark these bugs as verification-done-$series + verification-done so that it shows green on the report, then hoping that the SRU team member will reset to verification-needed after accepting (as some other series still need verification)
[20:49] stgraber: there was some talk of having sru-release modify bugs so that might help with removing verification-needed
[20:50] stgraber: but modifying the sru-report would be easier ;-)
[20:52] Holy exploding component-mismatches Batman!
[20:53] bdmurray: http://paste.ubuntu.com/1097305/ ?
[20:55] stgraber: looks good to me - you can test it locally by changing 'quantal' to 'lucid' and uncommenting that line
[20:55] stgraber: line 375 and maybe precise would make more sense
[21:11] queuebot: /win 162
[21:11] Err.
[21:11] La la la.
[21:25] Hrm, did the server team just pull half of universe into main again? :P
[21:25] bdmurray: change works fine here, most of the kernel bugs are green after that and so is my test bug (a libvirt bug)
[21:26] bdmurray: can you land that change?
[21:28] stgraber: what do you mean by land? I could update the branch but not on people
[21:28] I can pull it.
[21:29] (If someone commits this mystical change)
[21:29] I could commit it but don't see why...
[21:29] Tempted to cron the bzr pull on lillypilly, but then I'd have to get people to promise never ever to commit untested code. :P
[21:29] bdmurray: it's in ~ubuntu-archive, I can't commit there
[21:30] stgraber: ah okay! I'll do it
[21:30] thanks
[21:33] infinity: done
[21:36] infinity: FWIW I think it's best to have the report on lillypilly always match what's in the branch. That branch is pretty much never broken and if it was broken for more than an hour, I think I'd like people to notice, so having a broken report would ensure that :)
[21:38] stgraber: Done. Should auto-update before each run (10,40) now.
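For reference, a minimal sketch of the kind of check stgraber's sru-report change adds, treating a series-specific verification tag as equivalent to verification-done when the series matches; the names are illustrative assumptions, not the actual diff from the paste above.

    # Sketch only: function and argument names are assumptions, not sru-report's real code.
    def verification_done(bug_tags, series):
        """True if the bug is verified either globally or for this specific series,
        e.g. tags ['verification-done-precise'] with series 'precise'."""
        return ('verification-done' in bug_tags or
                'verification-done-%s' % series in bug_tags)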
[21:39] cool
[22:25] infinity: copy-report - it's already on my lilst
[22:25] *list
[22:26] (https://blueprints.launchpad.net/ubuntu/+spec/foundations-q-replace-archive-admin-shell-access)
[22:27] infinity: but bug 1006917, in particular, blocks it
[22:27] Launchpad bug 1006917 in launchpad "Distribution archive owners cannot necessarily copy packages" [Low,Triaged] https://launchpad.net/bugs/1006917
[22:27] well, unless I want to add ubuntu-archive-robot to ubuntu-core-dev, which I'm not very sure I do
[22:27] also bug 1006871 doesn't help matters
[22:27] Launchpad bug 1006871 in launchpad "Copying packages to -updates always goes through unapproved queue, even when copying user is privileged" [Low,Triaged] https://launchpad.net/bugs/1006871
[22:28] I'll also have to work out how to do copy-report's changelog parsing totally reliably over the API, but that's probably a minor detail
[23:00] Whoever accepted the most recent set of linux-armadaxp binaries: please use kernel-overrides to deal with kernel binaries
[23:01] * cjwatson moves stuff back to main
[23:09] stgraber: queuebot seems to have got stuck. I just accepted a bunch of stuff and it hasn't noticed, nor did it notice the corresponding unapproved entries (at least back to eog/3.5.3-0ubuntu1)
[23:11] * stgraber looks
[23:12] apparently running two queuebots on the same machine is causing some problem... respawned just the main instance, will move the dev one somewhere else
[23:12] Corrupted launchpadlib cache?
[23:13] likely. I flushed it, hopefully that'll help
[23:14] actually, let's try to flush it and spawn the new queuebot, maybe that was the cause of the weird stacktraces I couldn't reproduce...
[23:15] mute tracker
[23:15] unmute tracker
[23:15] I was considering having the u-a-t scripts each pass a different launchpadlib_dir to avoid this problem
[23:15] Though the cache bloat will suck after a while
[23:20] so far so good :)
[23:22] Ta
[23:26] linux-armadaxp> OTOH I wonder if this was a copy bug. The overrides were right in precise-updates ...
[23:26] Not sure I feel like doing the archaeology to find out.
[23:39] now moved to forcing a separate cache directory for each queue, hopefully that'll avoid some remaining corruption when both new and unapproved update at the same time...
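To close out the queuebot thread: giving each bot or script instance its own launchpadlib directory, as suggested at [23:15] and done at [23:39], could look roughly like the sketch below. launchpadlib_dir is a real parameter of Launchpad.login_with(), but the application names and paths here are assumptions, not queuebot's actual code.

    # Sketch only: per-queue cache directories so two bot instances never share
    # (and corrupt) the same launchpadlib cache; names and paths are illustrative.
    import os
    from launchpadlib.launchpad import Launchpad

    def lp_for_queue(queue_name):
        lp_dir = os.path.expanduser('~/.launchpadlib-%s' % queue_name)
        return Launchpad.login_with('queuebot-%s' % queue_name, 'production',
                                    launchpadlib_dir=lp_dir)

    new_lp = lp_for_queue('new')
    unapproved_lp = lp_for_queue('unapproved')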