[00:19] Project windmill-devel build #268: STILL FAILING in 1 hr 5 min: https://lpci.wedontsleep.org/job/windmill-devel/268/ [00:28] lifeless: Your LXC guide is vastly superior to the U1 one. [00:28] LXC looks really nice. === spm` is now known as spm [00:40] Project parallel-test build #62: STILL FAILING in 1 hr 11 min: https://lpci.wedontsleep.org/job/parallel-test/62/ === cinerama_ is now known as cinerama [01:05] Yippie, build fixed! [01:05] Project db-devel build #657: FIXED in 5 hr 53 min: https://lpci.wedontsleep.org/job/db-devel/657/ [01:10] jelmer: You seem to have removed the custom KDE layout from bzr-svn on the basis that they've moved to git, but I see new svn revs yesterday... [01:13] wallyworld_, do you have time to mumble [01:14] sinzui: yes, starting mumble now [01:19] lifeless: Hm, did you run into trouble with needing to run buildout under linux32 when using an i386 container on an amd64 kernel? [01:26] Project windmill-db-devel build #418: STILL FAILING in 1 hr 6 min: https://lpci.wedontsleep.org/job/windmill-db-devel/418/ [01:28] wgrant: I'm just fiddling with that now [01:29] wgrant: yeah the u1 seemed to ignore issues ;) [01:29] * StevenK sighs at the topic. Wasn't the critical bug count only 205 yesterday? [01:32] wgrant: feel free to tweak the page! [01:32] wgrant: hallyn has a magic lxc build coming to his ppa that will automate the bind mount [01:32] lifeless: I'm not sharing my ~, but it all seems to work OK as long as make always runs under linux32. [01:33] Otherwise buildout uses x86_64 meliae :( [01:33] interesting [01:33] Surely it doesn't try to parse uname... [01:33] Still, it is buildout. [01:33] that could be because arch reports 64bit (which is uname based I wager) [01:33] wgrant: file a bug ? [01:33] Maybe. [01:34] arch claims equivalent to uname -m [01:34] Now, automated COW-on-tmpfs LXC parallel testing... [01:35] LVM [01:35] Slow! [01:36] But maybe. [01:36] 'meh' [01:36] Well, I need a faster disk. [01:36] And/or SSD. [01:36] only deltas get written [01:36] Sure. [01:36] you may need more bandwidth available [01:36] how many cores? [01:36] But I don't care about persistence. [01:36] 6 [01:37] doit [01:37] testr --parallel [01:37] I've been thinking about adding a profile option [01:37] testr --parallel --profile=lxc [01:39] Project devel build #828: FAILURE in 5 hr 23 min: https://lpci.wedontsleep.org/job/devel/828/ [01:44] lifeless: How does --parallel divide the tests? [01:44] by time [01:44] Huh? [01:44] testr records the runtime of tests [01:44] How does it know about new tests? [01:45] they get round robined [01:45] Right, but how does it know to dispatch them? [01:45] it runs --list-tests [01:45] Ahh [01:45] I see. [01:46] if you supply a list it could skip that [01:46] wgrant: linux32 on lxc-start [01:46] Ahh, that could work. [01:46] s/could/does [01:47] its in the wiki page now [01:48] sudo lxc-start -n lucid-test-lp -d usr/bin/linux32 sbin/init [01:48] robertc@lifeless-64:~$ ssh 192.168.122.44 [01:48] $ arch [01:48] i686 [01:48] wgrant: so you're not bind mounting; nfs? [01:49] lifeless: Nothing at the moment. [01:49] Since I don't want to break my external environment. [01:49] Which rebuilding it for i386 will. [01:49] another reason to use packages for deps [01:55] wgrant: the eggs are arch specific [01:55] wgrant: so nothing should break [01:57] lifeless: It's not just eggs. [01:57] lifeless: Some of sourcecode/ has some binaries. [02:00] Oops [02:01] ? [02:02] lxc-destroy does not just shut down the container.
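For reference, the distinction about to be drawn in the next exchange, as a sketch using the container name from the wiki recipe above:

    sudo lxc-stop -n lucid-test-lp      # just halts the running container
    sudo lxc-destroy -n lucid-test-lp   # removes the container's config and rootfs for good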
[02:02] -stop is what you wanted [02:02] Yeah, I'm too used to Xen. [02:02] but its glitching with lucid + arch diff for some reason [02:03] Also, I ended up running two aufs clones of a container while still running the original container. That doesn't work very well. [02:10] heh [02:11] wgrant: the binaries in subvertpy etc are also arch guarded paths. [02:12] https://lpstats.canonical.com/graphs/LPProjectCriticalBurndown/20100624/20110624/ is upsetting. [02:12] lifeless: IIRC it's pygettextpo or pygpgme [02:12] ah, yes [02:12] bad gettextpo [02:12] urgle, we symlink gettextpo.so ?!? into lib [02:13] Of course! [02:13] we should import gettextpo.gettextpo, then we could indirect sensibly [02:14] Yes, but that wouldn't be ancient and crufty enough. [02:15] and I think I'm finally setup [02:15] with a tonne more memory available. [02:15] and a 32 bit build for leaner tests [02:24] ugh, something is generating awful spew and breaking testr -- -t codehosting :( [02:36] :( [02:43] File "/home/robertc/source/launchpad/lp-branches/working/lib/canonical/launchpad/scripts/__init__.py", line 62, in execute_zcml_for_scripts [02:43] 'Setting up Zopeless CA when Zopefull CA is already running' [02:43] AssertionError: Setting up Zopeless CA when Zopefull CA is already running [02:44] Yay [03:13] that was unexpected === lifeless_ is now known as lifeless [03:14] Oh? [03:14] I'm guessing OOM kill got gdm [03:14] Nice! [03:14] Haha [03:15] Certainly fixed the memory pressure, though [03:15] given I was watching a railsconf video I can't really be sure [03:52] StevenK, wgrant: does either of you have time to show me how to create a sync upload on dogfood? [04:15] wgrant: 15:11 < hallyn> lifeless: do'h, that was easier than i thought, lxc already does the right thing, you just need 'lxc.arch = i686' in the config file. [04:40] lifeless: Ah! [04:40] lifeless: Which channel is that? [04:40] #ubuntu-virt? [04:42] Why is my pipe to the DC so terrible :( [04:42] wgrant: #ubuntu-server [04:50] also, https://bugs.staging.launchpad.net/ubuntu/+bugtarget-portlet-bugfilters-stats [05:03] what part of our code would be doing fork in twisted ? [05:03] Upon execvpe sleep ['sleep', '2'] in environment id 280573852 [05:03] :Traceback (most recent call last): [05:03] File "/home/robertc/source/launchpad/lp-sourcedeps/eggs/Twisted-11.0.0-py2.6-linux-i686.egg/twisted/internet/process.py", line 412, in _fork [05:03] self._setupChild(**kwargs) [05:03] ... [05:03] File "/home/robertc/source/launchpad/lp-sourcedeps/eggs/Twisted-11.0.0-py2.6-linux-i686.egg/twisted/internet/posixbase.py", line 171, in wakeUp [05:03] util.untilConcludes(os.write, self.o, 'x') [05:03] File "/home/robertc/source/launchpad/lp-sourcedeps/eggs/Twisted-11.0.0-py2.6-linux-i686.egg/twisted/python/util.py", line 783, in untilConcludes [05:03] return f(*a, **kw) [05:03] OSError: [Errno 9] Bad file descriptor [05:21] Project parallel-test build #63: STILL FAILING in 1 hr 10 min: https://lpci.wedontsleep.org/job/parallel-test/63/ [05:23] anyone around that knows our mm integration ? [05:24] I know a liiiittle. [05:24] Enough to debug it mostly. [05:24] so [05:24] lib/lp/services/mailman/tests/test_mm_cfg.py [05:24] runs in FunctionalLayer [05:24] does FunctionalLayer *start* mailman ? I thought it didn't. === _mup__ is now known as _mup_ [05:24] thus, I am confused. [05:25] it wants a /var/tmp/mailman directory [05:25] this is unsafe for parallel testing [05:25] I want to make it dynamic [05:25] Doesn't buildmailman create /var/tmp/mailman?
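Stepping back to the testr --parallel mechanics from earlier in the evening: a minimal .testr.conf of the kind testrepository expects. This is a sketch using testrepository's stock hooks and the standard subunit runner, not Launchpad's actual configuration:

    [DEFAULT]
    test_command=python -m subunit.run discover . $LISTOPT $IDOPTION
    test_id_option=--load-list $IDFILE
    test_list_option=--list

With that in place, `testr run --parallel` lists the tests via $LISTOPT, partitions them across one worker per core using timings recorded in .testrepository/, and round-robins any tests it has no timing for.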
[05:25] (vomit) [05:26] Barry wants to kill buildmailman [05:26] Yeah, I think it does. [05:26] I fully support this [05:26] buildmailman configures mailman into /var/tmp/mailman [05:28] lifeless: So, I think you probably want to run them in a layer that is non-parallelisable. [05:28] Or just delete mailman. [05:29] why? [05:29] Why what? [05:30] why non-parallelisable (btw that does not exist) or delete (don't we use it?) [05:30] We use it, but it's crap. [05:30] Project windmill-db-devel build #419: STILL FAILING in 1 hr 5 min: https://lpci.wedontsleep.org/job/windmill-db-devel/419/ [05:30] And it's hard to efficiently parallelise because you'd probably need to rebuild mailman to change the path. [05:30] breaking lists.launchpad.net would probably be considered a regression [05:30] Quite possibly. [05:32] I can't quite tell if you're trolling or not [05:33] We should be able to setup a transient mailman per test runner fairly directly [05:33] Directly but slowly. [05:33] I have a fixture around that creates a python module [05:34] lib/mailman isn't even importable [05:34] it shouldn't be under lib [05:39] 23 seconds [05:45] anyone got attachments to /var/tmp/mailman ? [05:45] I'd like to shove it under the source tree [05:45] e.g. ./var/mailman [05:46] I'd like to kill it from the tree entirely and move to MM3 [05:52] we'll want a network fake that we can use to make sure we interact with it correctly [05:53] I'd love to see that done and it move out of tree [05:53] * wgrant throws rocks at aufs. [05:56] I think I need to tweak fixtures a little first [05:57] hey there wgrant! [05:57] Hi jtv. Did you work out what you needed? [05:57] Nope [05:57] StevenK possibly knows, or I can work it out. [05:57] Big J said it was easy though [05:58] Haven't heard from him either. [05:58] But it was a feature introduced by your squad, so you should know :P [05:58] did he say 'a small matter of programming' ? [05:58] Unlikely [05:58] Since he and I wrote the support [05:59] I suppose I can just run "make harness" on dogfood, but it'd be nice if I had a UI way to do it. [05:59] jtv: the DSD sync thing will create jobs over an FF-configurable threshold, won't it? [05:59] When the job runs, it will check if it's allowed. If not, it will put itself in the queue. [05:59] No, that's not the case anymore. It has to do that in all cases now. [06:00] Do we cron DSDJobs now? [06:00] On dogfood, I mean? [06:01] I'd check the crontab. [06:04] ah — distroseriesdifference_job.py, every 2 minutes [06:05] StevenK, wgrant: mind if I sync some packages on df for a while? [06:05] jtv: Go ahead. [06:05] It won't love you, but go ahead [06:06] StevenK: I can't live without love, but I probably can without yours. :) [06:06] Well, this seems to be working OK. [06:06] (Hmm the error reporting on +localpackagediffs actually works pretty well for me now… needs a bit more tweaking but it's clear there's an error and it takes up no new space) [06:07] I have three tmpfs aufs COW LXCs running the test suite. [06:08] want want want! [06:09] Now I just need to automatically provision them (mkdir, mount, mount, sed, lxc-start does it), and tie it into testr. [06:09] well that's the interesting part, isn't it? [06:10] Well, aufs is infamous for its flakiness. [06:10] I don't think I know it. [06:10] We used it at uni for a while and had to give up. [06:10] It's like unionfs, except less flaky. [06:10] But still flaky. [06:10] I believe it's still used for livecds. 
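Picking up the transient-mailman-per-runner idea from a few lines back, a minimal sketch using the fixtures library; the MAILMAN_VAR_DIR variable is hypothetical, since how the real fix would hook into buildmailman isn't settled in this conversation:

    import fixtures

    class TransientMailmanDir(fixtures.Fixture):
        """Give each test runner its own throwaway mailman directory."""

        def setUp(self):
            super(TransientMailmanDir, self).setUp()
            self.var_dir = self.useFixture(fixtures.TempDir()).path
            # Hypothetical hook: the real fix would have to point the
            # mailman build here instead of the hard-coded /var/tmp/mailman.
            self.useFixture(fixtures.EnvironmentVariable(
                'MAILMAN_VAR_DIR', self.var_dir))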
[06:10] wgrant: nice [06:11] wgrant: you'll want to bring one up for --list-tests [06:11] So you're giving each container its own union mount overlayed on the branch setup? [06:11] jtv: Right, I have the base container with postgres and LP installed. [06:11] Then I create a tmpfs for each instance. [06:12] Mount that as an RW overlay over the RO base container for each instance. [06:12] wgrant: hows the memory pressure ? [06:12] lifeless: It's i386, so it's nothing. [06:12] * jtv underlines "per-directory journaling QoS" on his filesystems wishlist [06:12] I only have 8GiB of RAM and killed Chromium recently. But I have 3GB free and a couple of GB cached. [06:12] wgrant: well, when you get to 6 instances, it may be less -nothing- :> [06:12] True. [06:12] Let's see what happens if I spin up a few more... [06:13] Wouldn't it be great if you could tell a regular filesystem, "feel free to forget the entire contents of this directory on next mount, if it's more convenient than recovery"? [06:14] tmpfs? [06:14] jtv: Indeed. [06:14] StevenK: no, that ties the policy to the mechanism. And you have to manage it, e.g. unmount it when you no longer need it. And it causes worries about how it interacts with swap. [06:15] Meh, they're just pages [06:15] What I want is a /tmp that's still a regular directory on my regular disk, but doesn't waste my time journalling. [06:15] Then create it as ext2 [06:16] *cough* [06:16] seriously [06:16] Or use tune2fs to remove the journal [06:16] *cough* *cough* [06:16] which will btw make it ext2 [06:16] lifeless: 6 running now, hovering at 64% RAM used, all 6 cores >95% [06:16] I do still want journaling _and_ per-file system on my FS by default, thank you. [06:17] wgrant: schweet [06:17] lifeless: Sort of. It can be ext3 with other ext3 features and no journal [06:17] And it's all tmpfs, so no iowait. [06:17] wgrant: and I don't suppose there's any CPU overhead to lxc? [06:17] Despite my system drive currently being a shitty WD Green. [06:17] jtv: bugger all [06:17] jtv: It's just namespacing, basically. [06:17] should i expect that we could add a new (null) column to an existing table "live"? and populate it with a script? and then mark the column as not null [06:17] jtv: All runs on the system kernel. [06:17] Sweet. [06:17] jtv: uses built in namespace facilities in the kernel, needs a new userspace library instance etc [06:18] and you have a bit of networking thunks going on [06:18] lifeless: The first two runs were on their own. DatabaseLayer took 1:21 to run. The sixth just passed DatabaseLayer, took 1:22 [06:18] Is it the trick where the kernel basically just keeps separate pointers for its data structures, so there are no cross-references? [06:18] lifeless: So it seems to scale rather well. [06:19] * jtv is so looking forward to this [06:19] jtv: https://dev.launchpad.net/Running/LXC [06:19] Thanks [06:19] It's just a bit lighter-weight than KVM. [06:19] jtv: thats now debugged to the point that it works; wgrants aufs and sed stuff will sit on top of that basic config [06:19] its a *tonne* lighter than kvm [06:19] Should be, yes. No qemu. [06:20] …Right? [06:20] i tried to get lxc working on my box but got errors with the network devices trying to start it up :-( [06:20] wallyworld_: what errors? [06:20] * wallyworld_ looks [06:20] wallyworld_: were you following my notes or the u1 notes or something else again ? 
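A sketch of the per-instance provisioning wgrant describes (mkdir, mount, mount, sed, lxc-start); the lp-test-1 name and paths are illustrative, and the copied config would also need its lxc.rootfs and networking details pointed at the new instance:

    sudo mkdir -p /srv/lp-test-1
    sudo mount -t tmpfs none /srv/lp-test-1              # the COW layer lives in RAM
    sudo mkdir /srv/lp-test-1/rw /srv/lp-test-1/rootfs
    sudo mount -t aufs -o br=/srv/lp-test-1/rw=rw:/var/lib/lxc/lucid-test-lp/rootfs=ro none /srv/lp-test-1/rootfs
    sed s/lucid-test-lp/lp-test-1/ /var/lib/lxc/lucid-test-lp/config > /tmp/lp-test-1.conf
    sudo lxc-start -n lp-test-1 -f /tmp/lp-test-1.conf -d

Writes land in the tmpfs layer, so the read-only base container is shared by every instance and the only disk I/O is whatever the tests themselves write.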
[06:20] lifeless: your notes [06:20] ian@wallyworld:~/projects/lp-branches/devel-sandbox$ sudo lxc-start -n natty-lp [06:20] lxc-start: failed to attach 'vethgHMwnY' to the bridge 'virbr0' : No such device [06:20] lxc-start: failed to create netdev [06:20] lxc-start: failed to create the network [06:20] lxc-start: failed to spawn 'natty-lp' [06:20] lxc-start: No such file or directory - failed to remove cgroup '/sys/fs/cgroup/cpu/natty-lp' [06:21] wallyworld_: do you have libvirt installed ? [06:21] * wallyworld_ checks [06:21] wallyworld_: if you've used virt-manager you will [06:21] but if you've not, even if you have used virtualbox, you probably don't. [06:22] By the way, exactly when does test_on_merge.py run? (I'm assuming this shifted a bit since the script was named) [06:22] lifeless: no libvirt installed [06:22] jtv: it doesn't [06:22] * wallyworld_ fires up apt-get [06:23] wallyworld_: you want libvirt-bin [06:23] lifeless: then we can get rid of it? [06:23] jtv: no, we may well want it when we do the parallel testing work [06:23] ISTRM getting the near-useless "one huge chunk twice an hour" output. [06:24] The reason I ask is, we have CommandSpawner now which is probably better at processing output from sub-processes. [06:24] lifeless: \o/ thanks! [06:25] I think test_on_merge just waits for a huge fixed-size chunk of output and then dumps the whole thing (even if the chunk ends in the middle of a line, hence the huge chunks) [06:25] Using MF would fix both. [06:26] jtv: IIRC test on merge did more than just that [06:26] It does do more. [06:26] jtv: I would be happy to see test on merge ported [06:26] lifeless: so on a 64 bit system (which i have), do i understand correctly that using a 32 bit lxc will use less memory running lp? or did i make that bit up? [06:26] I wouldn't be happy to have it deleted yet [06:26] wallyworld_: significantly less [06:26] so i would install postgres etc etc into the vm? [06:26] wallyworld_: we'll want to run the production tests on 64 bit to catch C bugs, but for devs it should be totally fine to run 32 bit [06:26] wallyworld_: yes [06:27] The MF would need some extensions, I'm sure, but it's basically a ready-made simple API to managing parallel sub-processes. [06:27] cool. i have 4GB and am constantly up at ~90% usage [06:27] wallyworld_: I suspect this will help [06:27] awesome. will try it. thanks [06:27] jtv: MF ? [06:28] may have to set up my ide to do "remote" debugging :-) [06:28] wgrant: I wonder if the unionfs overlay would also make a faster way to restore the test playground db between tests. :-) [06:28] lifeless: it stands for CommandSpawner. [06:28] jtv: no, because you'd need to bounce postgresql [06:28] bugger :) [06:28] Hmm. [06:29] aufs seems to sometimes interact badly with the hardlink security checks. [06:29] In particular when rm -r'ing directories on the underlay. [06:29] lifeless: CommandSpawner was originally conceived as a Mother class that produces and manages child processes, i.e. a Forker. [06:29] Still, with /var/tmp/bazaar.launchpad.dev gone from the underlay it is happy again. [06:30] jtv: does that make it a Mother Forker? since it manages child processes? [06:31] wallyworld_: shhhh [06:31] Hmm. [06:32] I guess the 6 runs don't really compete for cache. [06:32] Since they are using the same underlying files. [06:32] yes [06:32] the writes are the only IO load you should have [06:33] At least this isn't shared aufs on top of Solaris NFS. [06:33] That is *fun*. [06:33] And kernel OOPSish.
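Spelling out the fix for wallyworld_'s bridge error, as a sketch assuming libvirt's stock 'default' network, which is what owns virbr0:

    sudo apt-get install libvirt-bin   # ships the default NATed network
    virsh net-list --all               # 'default' should be listed as active
    brctl show                         # virbr0 should now exist for lxc-start to attach to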
[06:33] * jtv runs in terror [06:34] Ah, bug #729338 [06:34] <_mup_> Bug #729338: yama hardlink restriction misbehaves under aufs < https://launchpad.net/bugs/729338 > [06:34] anyone want to do 2 small reviews? [06:34] https://code.launchpad.net/~wallyworld/launchpad/picker-textfield-focus/+merge/65608 [06:34] https://code.launchpad.net/~wallyworld/launchpad/admins-can-unsubscribe-bugs/+merge/65615 [06:35] I'll take the first. [06:35] thanks [06:35] the 2nd one is only 44 lines [06:35] Damn, picked the wrong one. [06:36] why? javascript? [06:36] No, not 44 lines. [06:36] Oh, wait, this one's 40 lines. I win! [06:36] :-) [06:36] …as soon as the page will load. [06:36] i did say small [06:41] Hmm. [06:41] Some tests are pretty RAM-hungry. [06:41] Almost OOMed there, because they were in sync. [06:41] yeah [06:41] you need the full run to gauge the footprint [06:41] wgrant: you have what, 6G ? [06:42] test_process_job_sources_cronjob creates heaps of processes. [06:42] 8G [06:42] wgrant: on a hex cpu ? [06:42] Full LP processes :( [06:42] Yes. [06:42] wgrant: thats... unusual [06:42] Oh? [06:43] AIUI the general thing is to fill 3 banks with identical memory and you get twice the memory bandwidth [06:44] wgrant: http://www.intel.com/support/motherboards/desktop/sb/CS-011965.htm#triple [06:44] lifeless: This is an AMD chip. [06:44] ah [06:45] ignore me then :) [06:45] Does Intel have any hex-core chips out yet? [06:45] (I was just going to go with a cheap AMD quad-core, but the hex-core was only slightly more expensive so I thought why not...) [06:47] wallyworld_: first one's done. [06:47] jtv: thanks! [06:48] wgrant: http://ark.intel.com/Product.aspx?id=53568&processor=E7-2803&spec-codes=SLC3M [06:48] lifeless: OK, but Xeons released a month ago don't count. [06:48] wgrant: is yours 6 cores + ? threads per core ? [06:48] Haha [06:48] Why not? [06:48] jtv: i had the same question about not using 'unseen'. i should have done it as a driveby perhaps [06:49] wgrant: they've had 6 core xeons for a while, though the 10 core ones look *shiny* [06:49] wallyworld_: I was wondering whether it might be deliberate, e.g. so as not to hinder inclusion in lazr-js. [06:49] maybe. i was going to ask tomorrow [06:51] wgrant: I actually had reason to go with a 2-core not so long ago because it had better single-thread performance… that wasn't for me though: what you're doing will probably make lots of cores more attractive. :) [07:02] Project db-devel build #658: FAILURE in 5 hr 57 min: https://lpci.wedontsleep.org/job/db-devel/658/ === jtv is now known as jtv-eat [07:10] jtv-eat: 2-core 4-threads? [07:10] lifeless: probably, though AMD has realized that SMT is no longer a very good idea. [07:11] * jtv-eat logs in halfway across the world to check [07:11] wgrant: so I was asking, is yours 6(12) or 6(6) [07:11] lifeless: Oops, missed that. 6(6). [07:12] Bulldozer? [07:13] Is Bulldozer out yet? [07:13] dunno [07:13] Q3, IIRC. [07:15] * jtv-eat decides to live up to his name [07:32] Project windmill-devel build #269: STILL FAILING in 3 min 26 sec: https://lpci.wedontsleep.org/job/windmill-devel/269/ [07:34] StevenK: Could you please perma-kill the Windmill builders? [07:34] Yippie, build fixed! [07:34] Project devel build #829: FIXED in 5 hr 54 min: https://lpci.wedontsleep.org/job/devel/829/ [07:34] The jobs themselves or the slaves? [07:34] 06:59:26 < deryck> Can we please kill those windmill notices? Windmill is dead to us. [07:34] 07:08:09 < deryck> abentley, indeed.
we'll move to selenium2/webdriver at the Thunderdome. [07:34] 07:08:33 < deryck> flacoste, sinzui -- ok, thanks. StevenK can we kill the Windmill runner? Not needed now. [07:35] I missed that [07:35] wgrant: Delete or disable? [07:35] StevenK: Disable pls. [07:36] I don't trust that we'll have anything good by next week. [07:36] Both done [07:36] Thankyou sir. [07:36] Ooooh, selenium instead of windmill. Nice. [07:40] * StevenK kills a slave while looking at Jenkins [07:41] I need to work out why Jenkins is now taking nearly six hours [07:46] http://arstechnica.com/business/news/2011/06/amd-launches-second-fusion-cpu-gives-glimpse-at-future-of-cpugpu.ars === jtv-eat is now known as jtv [07:50] StevenK: how could it possibly perform well without Oracle? [07:50] lifeless: el Reg had a nice article about HP's new APU-based laptops IIRC. [07:52] jtv: Your implant is acting up. [07:52] errrhuh? [07:53] be more Enterprise, Steve! [07:53] (i.e. massively slower at three times the price) [08:33] allenap: actually I'm redoing your implementation for those bugs - see https://bugs.launchpad.net/testtools/+bug/801031 [08:33] <_mup_> Bug #801031: gather_details evaluates details < https://launchpad.net/bugs/801031 > [08:46] good morning [08:48] bigjools, good morning [09:06] lifeless: Cool. I eagerly await the outcome :) [09:13] allenap: its pushed [09:14] allenap: along with a new stock fixture and gathering details from child fixtures [09:14] lifeless: Ah, I've just seen it. Neat :) [09:18] yah, its nearly the inner loop from gather_details [09:22] and 0.3.6 pushed === allenap changed the topic of #launchpad-dev to: https://dev.launchpad.net/ | On call reviewer: allenap | Critical bugs: 211 - 0:[######=_]:256 [10:01] mpt: Hi. Did you want to talk about that UI lightning talk today? [10:03] huwshimi, yep, in an hour or so if that's good for you [10:03] mpt: Yeah that's fine with me [10:04] wgrant: still there? [10:05] jelmer: Indeed. [10:05] you're well today? [10:05] Or at least moreso? [10:05] wgrant: Yeah, feeling much better. Thanks :) [10:06] wgrant: I was wondering if perhaps the qastaging machine didn't have an existing copy of the imports, causing every import to be a full from-scratch import. [10:06] (and thus not being able to test the upgrade, no matter what the database says about the current format) [10:09] jelmer: I thought I used an import that was there. But I suspect I was wrong. When I QA'd it yesterday I made sure a LOSA copied an un-upgraded copy into place first, and it worked fine. [10:10] jelmer: After copying the staging area across, I called requestMirror() on the branch and confirmed that the externally visible copy was pack-0.92. [10:10] Then forced an import, and it became 2a. [10:10] wgrant: ah, cool [10:10] wgrant: thanks for qa'ing that, btw. I'm looking at QA'ing my other two branches atm [10:10] jelmer: Thanks. Did you see my KDE question? [10:11] lifeless: Slightly under 3.5 hours. [10:11] lifeless: For 6 concurrent runs of the full suite. [10:11] wgrant: thats awesome [10:12] About 15 minutes slower than when I ran the test suite locally, but it's probably longer now. [10:12] And that's with some accidental swapdeath. [10:12] Hi Thunderbird. [10:16] wgrant: isn't their plan to migrate everything to Git eventually, or are some branches going to stay in svn? [10:16] jelmer: I don't know. But this will break SVN now, won't it? [10:17] Or are existing branches OK?
[10:17] wgrant: the layouts aren't used for anything on the Launchpad side of things, they're used to discover branches if you import an entire repository [10:18] wgrant: there was also a bug in bzr-svn that prevented it from opening something as a branch if the hard-coded layout (like we had for Apache or KDE) said it wasn't a branch [10:18] Oh, I thought it was still somehow used to work out the root for consistent fileids and stuff. [10:18] OK. [10:20] Tests with failures: lp.services.job.tests.test_runner.TestTwistedJobRunner.test_memory_hog_job (subunit.RemotedTestCase) [10:20] So the test suite is *deliberately* memory-hungry. [10:20] Awesome! [10:21] wgrant: it's actually allocating a lot of memory to test that? [10:26] Not too much, actually. [10:26] def run(self): [10:26] self.x = '*' * (10 ** 6) [10:26] Probably just a coincidence that it blew up while I was running out of ram. [10:28] wgrant: FYI, bigjools and I decided to refactor the multi parent like this: if the destination archive is empty: use the packagecloner, if not: use the (much safer but slower) packagecopier. [10:29] rvba: You'll need to optimise intra-archive copies with the cloner too. [10:29] Or initialising oneiric+1 will take forever² [10:29] yes [10:29] did we discuss that case rvba? [10:30] I *think* we did but my memory sucks today [10:30] as does my typing [10:30] I don't think so. [10:30] This case will use the packagecopier. [10:31] wgrant: what kind of optimisation do you have in mind? [10:31] rvba: Copying within an archive (eg. natty->oneiric) should use the packagecloner. [10:31] And it should be done first. [10:31] I think. [10:32] Or it's going to be slow. [10:32] rvba: this is the special case of initialising a new distroseries [10:32] * wgrant eats. [10:32] where there's already a series [10:32] bigjools: Yes, the previous_series case. [10:32] rvba: the packagecopier only needs to be used if there's >1 parent [10:33] bigjools: my understanding was that the packagecopier should be used if the destination archive is not empty. [10:33] To avoid conflicts. [10:33] wgrant: did rabbitmq start ok in your lxc ? [10:33] its whinging in mine [10:34] bigjools: ? [10:35] rvba: there won't be any conflicts in a new distroseries [10:35] s/new/empty/ [10:35] rvba: so you can refine your case to empty distroseries [10:35] not empty archive [10:35] bigjools: Understood. [10:36] bigjools: I'll do that in the following branch then (the one that adds support for this init from [previous_series]) [10:36] bigjools: wgrant first branch ready to be reviewed: https://code.launchpad.net/~rvb/launchpad/init-series-bug-789091-devel/+merge/63676 [10:37] This branch adds *only* multi parent backend support. [10:37] 1394 lines /o\ [10:37] Sorry about that. [10:37] :) [10:37] I'll look at the last changes [10:37] Yes [10:38] Thanks. [10:39] * rvba fixes the conflicts introduced by merging this in the 4 dependent branches. [10:46] rvba: I don't think you need that duplicate source check any more [10:46] the packagecopier will take care of that [10:47] bigjools: I think I have removed any duplicate source check ... [10:47] rvba: ah ok, it was in revno 13190 [10:47] rvba: perhaps you can attach a partial diff that you want reviewed on the MP? [10:48] just email it with an attachment [10:48] or is it all in revno 13191? [10:48] bigjools yes: http://bazaar.launchpad.net/~rvb/launchpad/init-series-bug-789091-devel/revision/13191 [10:48] ok I'll look at that [10:48] * bigjools gets some caffeine in preparation [10:49] Thanks.
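The decision rule rvba and bigjools converge on, restated as code. This is a hedged sketch with stand-in names, not the real initialize_distroseries API:

    def choose_copier(series_is_empty, same_archive):
        """Return which copy machinery initialisation should use (sketch)."""
        if series_is_empty and same_archive:
            # Intra-archive copy into an empty series can't conflict, so
            # the fast bulk packagecloner path is safe.
            return 'packagecloner'
        # Everything else goes through the slower, conflict-checking copier.
        return 'packagecopier'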
[10:49] lifeless: No, that was one of the 5 tests with errors. [10:49] lifeless: Because my hostname isn't configured. [10:49] lifeless: I presume. [10:49] lifeless: (the other tests were all mmcfg stuff relating to aufs) [10:49] rvba: first comment: [10:49] def getPublishedSources(name=None, names=() .... [10:49] rvba: can you make the code accept name as a single item or a list [10:50] two params is ugly [10:50] bigjools: ok. [10:50] rvba: the external API should be ok still, you'd just do a type check in the method [10:50] bigjools: Right, I'll do that. [10:51] Python FTW [10:51] I'm afraid wgrant's concerns about duplicated files are still not addressed :( [10:51] why's that? [10:51] if you use the packagecopier for subsequent syncs it should be [10:52] but how about the first copy ... the one that will use packagecloner? [10:52] that can never conflict [10:52] the series is empty [10:53] oh ... that's right ... sorry, my mind is a mess I must say. [10:53] understandable :) [10:53] rvba: well to be more specific, intra-archive to an empty series can't conflict [10:53] Right. [10:54] inter-archive to an empty series *can* conflict but we prevent that elsewhere [10:54] brb [10:56] Right. Intra-archive copies to a new series are always fine. Inter-archive copies to a new archive are always fine. [10:58] allenap: btw the +0 hack suggests prod has something wrong with it [10:58] allenap: a root cause affecting other pages [10:59] lifeless: Oh no, another rabbit hole :) [11:00] :) [11:01] I don't think you need to dig into it [11:01] just fall down it [11:01] but I think you should note in the patch that avoiding the index is a workaround for an undiagnosed issue [11:01] lifeless: Okay, I'll add that to the follow-up branch. [11:01] it would be nice to diagnose without impacting prod [11:02] but servers with 128GB of ram and TB's of capacity are not cheap [11:02] allenap: (and that said, the +0 thing is -much- cleaner than the union) [11:02] lifeless: Agreed, emphatically. [11:02] allenap: another example of this was the sourcepackagename issue the other day [11:03] allenap: the query joined with archive is fast (when hot) on qastaging, terrible on prod [11:04] lifeless: Is there a bug about this anywhere? That (archive, status) index probably offers fairly low selectivity on SPPH. I wonder if it exists in staging/qastaging. [11:07] "securesourcepackagepublishinghistory__archive__status__idx" btree (archive, status) [11:07] thats staging [11:07] "securesourcepackagepublishinghistory__archive__status__idx" btree (archive, status) [11:07] allenap: we possibly want (status, archive) [11:07] allenap: -or- [11:08] (archive, status where status in 1,7,534) [or whatever it is] [11:08] heh, those look like IDs :) [11:08] if its a stats problem a partial index will help more than a reversed index I think [11:08] lifeless: Okay, I'll throw this discussion in a bug report. [11:09] thanks! [11:13] rvba: why are you doing: [11:13] for pocket in archive.getPockets(): [11:13] and running do_copy in a loop? [11:14] bigjools: I thought this was the only way to use do_copy because I had to specify the destination pocket in do_copy. [11:15] hmph [11:15] ok [11:15] bigjools: It's stupid ... so if you have any idea around that I'm glad to hear about it :). [11:16] rvba: I was hoping that do_copy would use the same pocket as the source [11:16] but it appears not [11:16] Me too. [11:16] rvba: so this is a bit simplistic anyway [11:16] the pocket should always be RELEASE [11:16] I think the cloner does that?
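bigjools's single-parameter suggestion for getPublishedSources, sketched as the internal normalisation step; basestring fits the Python 2.6 codebase of the time, though the real method's signature isn't shown here:

    def to_name_list(name):
        """Accept a single name or a sequence of names (sketch)."""
        if name is None:
            return []
        if isinstance(name, basestring):  # str or unicode on Python 2.6
            return [name]
        return list(name)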
[11:16] * rvba checks [11:17] Indeed. [11:17] Wait ... [11:18] pubrec.sourcepackagerelease.createBuild uses PackagePublishingPocket.RELEASE [11:19] bigjools: no, you're right, initialize_distroseries has the destination pocket hardcoded to RELEASE [11:19] rvba: that's the only available pocket in an unreleased series [11:20] so just hard-code it and remove the loop [11:20] bigjools: ok. [11:44] mpt: Available anytime. I can come find you if you like or you can come find me... I've moved further around the office [11:45] hmm, oops [11:45] I lp-landed rather than ec2. [11:45] ah well, its only a dep update, and should be sane. [11:46] buildbot will hate on me if its not. [11:46] rvba: on line 670 of packagecopier.py, why did you need to add the extra "if len(source.getBuiltBinaries()) != 0:" ? [11:47] does copyBinariesTo blow up without it? [11:47] bigjools: Yes [11:48] rvba: I think it would be better to make copyBinariesTo not blow up [11:48] That's a side effect of adding 'strict_binaries' [11:48] just return None if binaries is None or empty [11:48] right [11:48] bigjools: ok. [11:49] rvba: you need to be wary of multiple calls to getBuiltBinaries() there, it's a crazy query [11:50] making copyBinariesTo work will fix that double call [11:50] Understood. [11:50] * rvba looks at getBuiltBinaries [11:51] issuing the same query more than once is crack regardless :) [11:51] wgrant: any news on that cocoplum downtime for germinate-lubuntu? [11:52] wgrant: I'm more or less unblocked on the disk space side now ... [11:52] cjwatson: I've not seen mrevell this week, and he normally organises downtime. Perhaps I should just arrange it myself... [11:55] is there much to organise in this case? nobody except IS normally notices if I put cocoplum onto manual for a bit :-) [11:55] wgrant: He's on leave this week [12:00] cjwatson: Well, we should restart poppy or it might get a bit angry. [12:01] cjwatson: And we consider that to be downtime worthy of announcement, historically. [12:02] you need to restart poppy for a germinate change? [12:02] We could just cowboy it, but that upsets people. [12:03] And we've already had one this week :( [12:03] Plus keeping cocoplum up to date is nice. [12:05] why does poppy need a restart for a germinate change? Remember it's a separate tree now. [12:05] huwshimi: can I persuade you to join me for an early lunch? [12:05] bigjools: Not on cocoplum it's not. [12:05] Unless that changed without anybody telling me. [12:06] And I'm pretty sure it didn't. [12:06] jml: Probably, I was going to have a chat with mpt sometime this morning but I might be able to delay that. Did you want to lunch now? [12:06] huwshimi: now would be ideal for me [12:06] jml: ok [12:06] jml: One sec [12:07] mpt: Hey, can we delay until this afternoon or tomorrow sometime? [12:07] allenap: If you've got time, a branch for you to review (diff hasn't generated yet): https://code.launchpad.net/~gmb/launchpad/private-branches-bug-657004/+merge/65639 [12:08] huwshimi: if you've already got something scheduled then don't break it just because I'm hungry and I want to talk to you :) [12:08] huwshimi: we aren't barbarians [12:08] We aren't? [12:08] Damn. [12:08] gmb: I can't speak for anyone north of Islington. [12:08] mpt: I'm heading out to lunch, so let's figure out a time later [12:08] gmb: Certainly.
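The shape of the copyBinariesTo fix agreed just above, as a sketch; the publish call is a hypothetical stand-in for whatever the real method does per binary:

    def copy_binaries_to(binaries, destination):
        """No-op instead of blowing up when there is nothing to copy."""
        if not binaries:
            return None
        # 'publish' is a hypothetical stand-in for the real per-binary work.
        return [destination.publish(binary) for binary in binaries]

Making the method itself tolerate an empty input also removes the caller's duplicate getBuiltBinaries() guard, which is what eliminates the double query.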
[12:09] huwshimi, ok, sorry I was held up with this USC stuff [12:10] mpt: No totally fine, we didn't schedule a proper time anyway [12:10] mpt: I just didn't want you to be sitting around waiting for me [12:27] wgrant: still there? [12:28] bigjools: A bit. [12:28] wgrant: I am trying to work out wtf is going on in the _create_missing_builds method in the cloner [12:29] bigjools: Bad things. [12:29] wgrant: can you work out why it needs to call sourcepackagerelease.createBuild after createMissingBuilds doesn't make any builds? [12:29] the only place that passes always_create is lib/lp/soyuz/scripts/initialize_distroseries.py: [12:30] and I would have thought that forcing a build is wrong wrong wrong if there is already a build [12:32] bigjools: I would tend to agree with your assessment. [12:32] bigjools: What does annotate say? [12:32] it smells like StevenK [12:32] It was me, yes [12:33] For the case of copying only sources [12:33] cMB should create builds in that case... [12:33] but createMissingBuilds ... [12:33] cMB will always DTRT [12:34] * bigjools is tempted to rename SPR.createBuild to SPR._createBuild [12:34] So, the package cloner only calls cMB in specific circumstances ... [12:35] it looks like it always calls it [12:38] night all [12:38] Night lifeless. [12:38] nn lifeless [12:38] StevenK: so why does it call it? [12:39] I'm struggling to remember [12:40] Project db-devel build #659: STILL FAILING in 5 hr 37 min: https://lpci.wedontsleep.org/job/db-devel/659/ [12:47] rvba: given that none of us know why on earth the cloner is doing that extra call, I would just rely on createMissingBuilds in your code. [12:48] bigjools: Got it. [13:01] wgrant: you still there? [13:07] wallyworld_: Mostly. [13:08] wgrant: just wondering - lxc doesn't support fuse but rocketfuel-setup tries to install it and it fails [13:08] so i've told rocketfuel not to install launchpad-developer-dependencies and am seeing what breaks [13:09] curious if you saw the same problem [13:09] wallyworld_: I always change rocketfuel-setup to use 'apt-get install -o Apt::Install-Recommends=no' [13:09] wallyworld_: Which doesn't install fuse. [13:09] ah. will try that. thanks [13:10] if it works, i can update the wiki [13:17] Project parallel-test build #64: STILL FAILING in 1 hr 18 min: https://lpci.wedontsleep.org/job/parallel-test/64/ [13:18] jelmer: How goes the QA? [13:19] wgrant: confirmed that the classify-exceptions branch is working with a few imports [13:22] hey [13:22] what version of Python do we run on? [13:22] jml: 2.6 [13:22] sweet [13:35] if I have a list of bug ids, is there an efficient way to ask Launchpad which of them are closed? [13:35] or am I going to do N roundtrips? [13:43] hi bigjools, are you familiar with using WebServiceTestCase? i'm trying to figure out how to test an @collection_default_content [13:49] jelmer: wow [13:49] jelmer: I just saw your patch [13:50] jelmer: out of curiosity, have you had a chat w/ mwhudson about it? [13:52] jml: not recently, but we chatted about doing this earlier [13:53] jml: I'm hoping I can get him to do one of the reviews of the branch if it is deemed a good idea [13:53] jelmer: ok. it's kind of a big deal. are you planning on going all the way with this? [13:53] (otherwise it's a lot of dead code to be adding) [13:54] jml: yes, I would be happy to take this all the way [13:56] wgrant: when is the next deployment happening? [13:56] jelmer: Whenever your QA is done. 
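The conclusion of the _create_missing_builds discussion above, as rvba would apply it; the publication objects and their method name come from the conversation, the wrapper is illustrative:

    def create_builds_for(copied_publications):
        """Let createMissingBuilds do the right thing; never force createBuild."""
        for publication in copied_publications:
            publication.createMissingBuilds()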
[13:57] wgrant: oh :) [13:57] I'm still waiting for staging to come back [13:57] Well, someone (probably me) should probably QA timrc's thing first, to get another couple of revs. But once your stuff is done we can at least deploy *something*. [13:57] (waiting to see if https://code.staging.launchpad.net/~maxb/cvs2svn/svntest succeeded) [13:57] bac: not massively familiar with that. It just gives you a "browser" object doesn't it? [13:58] jelmer: You know we have an importd on qastaging now, right? [13:58] wgrant: I do now :) [13:58] rvba, jtv: When you get a chance, could you please take care of your outstanding QA items? [13:58] jelmer: We got it working earlier this week. [13:59] Because waiting for staging sucks. [13:59] wgrant: I guess that means my branches have deployed somewhere. :) [13:59] bigjools: you get a web service object you can make method calls on. that's ok if you're not familiar [13:59] wgrant: Sure. [13:59] wgrant: you familiar with WebServiceTestCase? [14:00] bac: right, I hope it's the client side of things so that lplib is exercised [14:00] bac: You should be able to just use it like launchpadlib. [14:00] wgrant: i [14:01] wgrant: The crazy multi parent branch is fixed if you want to have a look at it one more time ;) (https://code.launchpad.net/~rvb/launchpad/init-series-bug-789091-devel/+merge/63676) [14:01] bac: eg. the collection_default_content of IBuilderSet can be checked using something like list(lp.builders) [14:01] wgrant: i'm trying to exercise a method marked with @collection_default_content. in lplib the top level collection object is callable and invokes that method [14:01] wgrant: right [14:01] wgrant: \o/ [14:02] wgrant: but the ws_object version of it is not callable [14:02] bac: The collections should not be callable, AFAIK. [14:02] they are generators aren't they? [14:03] Something like that. [14:03] But they're still not callable. [14:03] wgrant: perhaps callable is the wrong term === mbarnett` is now known as mbarnett [14:03] wgrant: but with a ws_object version of builders i cannot do what you showed earlier from lplib [14:05] bac: I suspect that wsObject only works on Entries :( [14:05] But self.service.builders looks like it should work. [14:05] Maybe. [14:05] hmm [14:06] Yes, that should work. [14:06] I see that people are adding sourcecode eggs but not deleting old ones :( [14:06] wgrant: \o/ [14:06] jelmer: Still going... [14:06] wgrant: thanks! [14:06] bac: Does it work? [14:07] indeed [14:07] I'm pretty sure it will, but you never know. [14:07] yes, i just demonstrated it [14:07] Great. [14:12] bigjools: well, you have to be careful about deleting old ones. [14:12] jml: ECONTEXT [14:12] I see that people are adding sourcecode eggs but not deleting old ones :( [14:13] If you delete them before the next release, I will be very sad. [14:13] Because we will be unable to deploy. [14:14] jml: sure, but do people actually check? [14:15] bigjools: I honestly don't know how. And if I did, and cared, I'd probably make a post-commit hook or something that did it for me. [14:16] the reason I care is because a) I hate having my disk filled with crap, and b) I spent ages cleaning it up not so long ago [14:20] matsubara: how did the exploratory testing go yesterday? [14:21] bigjools, I'll put the results in the wiki and ping you [14:21] great thanks [14:21] wgrant: QA done. [14:22] rvba: Thanks! [14:22] np [14:22] flacoste, you mentioned an email about some wiki clean up, but I didn't get any. Did you send it? What's the subject?
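The pattern bac ends up with, sketched as a test; the import location and the assertion are assumptions, while self.service is the web service root that WebServiceTestCase provides, as discussed above:

    from lp.testing import WebServiceTestCase  # assumed import path


    class TestBuildersDefaultContent(WebServiceTestCase):

        def test_collection_default_content(self):
            # Iterating the top-level collection invokes the method marked
            # @collection_default_content on IBuilderSet.
            builders = list(self.service.builders)
            self.assertNotEqual([], builders)  # assumes sample data has builders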
[14:22] rvba: presumably it worked then? :) [14:23] bigjools: yes, I'm not an admin on DF anymore :) [14:23] matsubara: QA wiki clean-up [14:23] jml was cc-ed === jcsackett changed the topic of #launchpad-dev to: https://dev.launchpad.net/ | On call reviewer: allenap, jcsackett | Critical bugs: 211 - 0:[######=_]:256 [14:23] jml: did you receive your copy? [14:23] flacoste: yes, I did. I haven't had the privilege of talking w/ matsubara yet today [14:24] jml, flacoste: I didn't receive it. could you resend? [14:24] matsubara: my bad, my addressbook completed to your old async address [14:25] flacoste, hmm it should have arrived anyway unless the asyncers disabled my old address [14:26] matsubara: resent [14:26] thanks flacoste [14:27] matsubara: they did: host smtp.async.com.br [189.19.234.109]: 550 5.1.1 ... User unknown [14:27] :-( [14:27] flacoste, got the new copy you just resent. I'll take care of that clean up. [14:28] the trick is to get all the QA info grouped together, I reckon [14:28] flacoste: Millbank 2439 [14:29] jml: ok, will dial in 2 minutes [14:29] :) [14:32] wgrant: ah, staging and qastaging share their import slave? [14:33] just got a database lock on staging because of my import run on qastaging [14:33] reviewer sought: https://code.launchpad.net/~jml/launchpad/xxx-cleanup/+merge/65663 [14:33] jelmer: They do. I would have hoped they wouldn't share locks, though :( [15:09] jml: looking now. [15:09] jcsackett: thanks! [15:20] wgrant, did you run into any trouble with my commit? [15:23] timrc: It is just after midnight. I was planning to QA it tomorrow if nobody had done it by then. [15:25] wgrant, gotcha [15:25] jml: r=me. [15:26] jcsackett: thanks [15:32] lib/canonical/launchpad/pagetests/webservice/apidoc.txt fails for me in stable [15:34] anyone else have that? [15:36] is there a make target I need to build first? [15:38] jml: if you actually change API stuff the make file does not rebuild the wadl [15:38] jml: you can either [15:38] rm -rf lib/canonical/launchpad/apidoc/ [15:39] or 'make clean build' [15:39] bac: thank you [15:40] bac: lib/canonical/launchpad/apidoc doesn't exist :\ [15:45] ahh ok. I think I know what's going on. [15:45] bac: thanks for the pointer. [15:45] jml: working? [15:45] bac: *maybe*. tests take a while :) [16:04] am I supposed to be able to authenticate with staging.launchpad.net via lynx? I'm able to log in successfully but then it takes me to a page with nothing but the word 'Continue'. When I submit again, it eventually takes me to the same page :/ [16:04] I'm never presented with a page asking me the level of access I want to grant [16:05] timrc: I'm not sure. I don't think lynx is one of our supported browsers. [16:05] jml, poop. okay... off hand do you know which console browser is supported? [16:06] timrc: maybe none of them? [16:06] * timrc 's frown deepens [16:06] timrc: I don't know where our supported browsers list lives. [16:06] timrc: but surely we have to have *some* way of authorizing console apps. [16:06] maybe google knows, brb [16:08] jml, ok according to https://help.launchpad.net/API/launchpadlib lynx is supported for > 1.5.5 and I'm using 1.6.1 hm [16:09] timrc: ok. quite possibly it's a bug. can you authorize against production? [16:09] jml, I'll attempt to do so [16:10] benji: random thought: would it make sense for exported webservice references to allow 'schema' to be a string containing the fully-qualified Python name for an interface?
[16:11] jml, same thing using LPNET_SERVICE_ROOT [16:12] :( [16:12] timrc: looks like a regression then :\ [16:13] er [16:13] lynx works [16:13] timrc: how can I trick Python into opening lynx to do the auth? (so I can try to reproduce locally) [16:13] I've used it before [16:13] nigelb: well, we know it *used* to work :) [16:13] jml: yes, as of 2 weeks back when I set up tarmac [16:14] jml, I do not know, honestly... it defaults to lynx for Ubuntu server (probably uses sensible-browser to make that determination) [16:14] nigelb, did you have to setup lynx or did you use default params? [16:14] hmm. that'll slow down fixing. [16:14] er config [16:14] timrc: default [16:14] jml: sudo update-alternatives --config x-www-browser [16:14] nigelb, okay [16:15] nigelb: I'm not changing my default browser just to repro a bug [16:15] jml, hehe [16:15] hehe [16:15] You can change it and then change it back :) [16:15] jml, I use vm's to prevent having to do that [16:15] ! [16:15] when I was a lad, we had environment variables [16:15] and we used them [16:15] and were grateful for them! [16:16] haha [16:16] * timrc did not exist in the 1970's [16:16] * timrc ducks [16:16] neither did I [16:16] :P [16:16] if you have a vps, try it on that. [16:16] it should use lynx by default I think. [16:26] jml: that might help with the circular reference problems; zope.dottedname would be an obvious implementation (http://pypi.python.org/pypi/zope.dottedname) [16:29] * benji wonders how sarcasm could ever not be good. [16:29] * benji also wonders if he'll ever figure out how to use multiple irc chans [16:32] nigelb, you must have the magic touch... [16:36] timrc: did it work? :) [16:36] benji: thanks. I, of course, would have reached for the equivalent Twisted code :) [16:37] nigelb, not in my vm... I'm going to attempt natively... maybe bridged networking has something to do with it (completely unsubstantiated theory) [16:47] timrc: i've had problems with lynx before. used elinks and had no problems, though. [16:47] gary_poster: lp2kanban should sort the tags i think, it's flip-flopping [16:47] Tags: from lp-translations, timeout to timeout, lp-translations [16:47] Tags: from timeout, lp-translations to lp-translations, timeout [16:48] flacoste, heh, ok, I'll put a card up [16:48] timrc: IIRC lynx has an issue with a cookie or something that made it not work for me. but obviously, nigelb and our docs run counter to that. [16:48] thanks [16:48] jcsackett, interesting [16:48] jcsackett, I'll give elinks a try [16:49] timrc: i ended up really liking elinks and set it up as my default for console browsers. [16:53] jml, okay so you can specify BROWSER="/usr/bin/elinks" python foo.py to select the browser launchpadlib will use [16:53] timrc: ahh thanks. [16:59] jcsackett, elinks did the trick... thank you very much [16:59] jml: okay, so environment variables still work lda ;) [16:59] *lad [16:59] * jml has to go [17:00] cheerio jml [17:06] poolie: Who did you send that javascript logging email to yesterday? I never got it. [17:06] hrm [17:07] canonical-tech [17:07] are you on it? [17:07] you should join [17:07] poolie: nice reports on velocity, thanks! [17:07] thanks! [17:10] where are all the yui files listed from lazr-js to generate launchpad.js? [17:18] gary_poster: hello? 
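What benji's zope.dottedname suggestion looks like in practice, as a sketch; the interface path is a made-up example:

    from zope.dottedname.resolve import resolve

    # Resolve the schema lazily from its dotted name, sidestepping the
    # circular-import problem jml is alluding to.
    schema = resolve('lp.services.example.interfaces.IExample')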
[17:18] hey poolie [17:20] hi there [17:20] i ran one of the performance tools mentioned at velocity on some lp pages [17:20] and it points out that we have some etags on files that don't seem to actually need them [17:20] since they have the revision in the name [17:21] this is really separate from the wadl [17:21] poolie: I'm not, I'll join [17:21] is this probably a (Low) bug, or is there a reason for it? [17:21] poolie, do you mean "beta" and "1.0" and so on? [17:21] oh [17:21] you said separate from wadl :-P [17:22] yeah [17:22] so which pages do you mean, poolie? [17:22] poolie: the link you emailed errors on Launchpad [17:22] sorry to mix it in with this bug [17:22] silly launchpad [17:22] :-) [17:22] which link, huw? [17:22] gary_poster: let me see if i can find the tool again [17:22] k [17:22] poolie: https://launchpad.net/~canonical-tech [17:23] poolie: I think it's 404, but Launchpad doesn't really tell me [17:23] works for me [17:23] ah [17:23] it's private [17:23] how on earth do you join then? [17:24] poolie: heh, no idea [17:25] poolie, huwshimi: One of the list admins has to add you. [17:25] allenap: Ah [17:26] huwshimi: I'm trying to find one to ask.... [17:26] allenap: Thank you [17:26] * huwshimi wonders why we 404 when we mean permission denied [17:28] huwshimi: Done (by dragnob). [17:28] allenap: Thanks [17:29] huwshimi: It's probably to do with disclosure. Permission denied indicates that the resource exists. Of course, it could just be a bug ;) [17:29] huwshimi, generally, some "private" things are really private. We don't want people to know they exist. The name's existence is a security leak. Not everything needs that much paranoia, but starting with paranoia and relaxing is easier than the other way around. [17:29] yeah :-) [17:30] so dragnob's a good person to ask? [17:30] gary_poster: Ah I see [17:31] it might be nice if there was a "contact the admins if this thing exists" [17:31] but perhaps just mail will do [17:31] gary_poster: i can't reproduce the warning in the tool [17:31] i think it was in google page speed [17:31] but, "curl -Ov https://code.launchpad.net/+icing/rev13265/lazr/build/lazr/assets/skins/sam/negative.png [17:32] does show the response has an etag [17:32] and that's apparently a bad idea for resources that ought to be immutable [17:32] since it could be set to just never expire [17:32] i don't know if that actually has any performance impact or if it's just a warning [17:32] but since they mentioned it i was curious [17:33] rockstar: what's this about lazr-js being deprecated? has the superseding package made it to launchpad in sourcedeps perhaps? [17:34] poolie, ah! yeah, agree. It would have an effect if Apache were set to pay attention to inode for ETag generation, but would be innocuous otherwise.
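What the fix might look like on the Apache side, per gary_poster's remark: a sketch assuming mod_headers is loaded, with an illustrative location; since the revision number is baked into the URL, the assets can be cached indefinitely instead of revalidated by ETag:

    <Location /+icing>
        FileETag None
        Header unset ETag
        Header set Cache-Control "public, max-age=31536000"
    </Location>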
[17:35] btw i'm pretty sure that lazr.restful does try to use the etags [17:35] hrm [17:35] poolie, I think it is worth a bug for lazr.js icing [17:35] if i had time i would rip out its persistent object cache [17:35] (leaving the wadl cache) [17:35] we are not sure if there is a problem [17:35] it seems to cause bugs without ever helping things === allenap changed the topic of #launchpad-dev to: https://dev.launchpad.net/ | On call reviewer: jcsackett | Critical bugs: 211 - 0:[######=_]:256 [17:35] ok, i'll file [17:35] but it's worth a bit of a dig into it anyway [17:35] thank you [17:39] Project parallel-test build #65: STILL FAILING in 1 hr 10 min: https://lpci.wedontsleep.org/job/parallel-test/65/ [17:39] bug 801241 [17:39] <_mup_> Bug #801241: sends etag on immutable icing resources < https://launchpad.net/bugs/801241 > [17:48] abentley, do you have time to mumble about a solution to the yui-segfault issue? === matsubara is now known as matsubara-lunch [17:55] sinzui: Sure, how about in 15 minutes? [17:56] jcsackett: can you do a review at your convenience? https://code.launchpad.net/~bac/launchpad/bug-776437/+merge/65692 [17:56] bac: i was just looking at that actually. :-) [17:56] abentley, that is good thanks [17:56] cr3, you might want to talk to deryck in the context of launchpad itself, but I don't think the long term plan is for anyone to use lazr-js anymore. [17:56] i need to finish up one thing, then i'll take a look at it. [17:56] jcsackett: wow, you're a machine [17:56] or rather, take a longer look at it. [17:57] bac: i just keep an eye on the queue on my OCR days. helps me figure out how i'm breaking up my day. :-) [17:57] cr3, why do you think you need to use lazr-js? [17:58] rockstar: I'm working on a project that aims to use similar technologies, methodologies, etc. as launchpad: http://ec2-50-18-91-92.us-west-1.compute.amazonaws.com/ [17:59] rockstar: so, if launchpad jumps off a bridge then I'll jump too [18:09] cr3, lp:phazr is a re-implementation of most of the useful things in lazr-js [18:10] rockstar: this is probably a question for deryck but I'd be curious to know when launchpad intends to migrate from lazr-js to that [18:10] sinzui: ready [18:11] Project db-devel build #660: STILL FAILING in 5 hr 30 min: https://lpci.wedontsleep.org/job/db-devel/660/ [18:12] cr3, deryck said he's in the process now, last time I talked to him. [18:13] rockstar: awesome, I'll keep a close eye on that then [18:13] rockstar: I had another question for you but it turned out to be a bug in yui upstream already reported by sidnei, thanks for the info! [18:21] bac: r=me. [18:21] jcsackett: thanks [18:21] sinzui: Oh, that third option was: start the browser in a subprocess, but then use that browser process to visit all the pages, e.g. using IPC. === jcsackett is now known as jcsackett|lunch [18:22] sinzui: So it's a combination of the other two options. [18:22] abentley, ah right [18:23] jcsackett|lunch: i'll look into that idea === matsubara-lunch is now known as matsubara [19:22] Project devel build #831: FAILURE in 6 hr 0 min: https://lpci.wedontsleep.org/job/devel/831/ === jcsackett|lunch is now known as jcsackett [19:43] moin [20:07] wallyworld_: the lxc setup on that page supports fuse [20:07] wallyworld_: what didn't work for you? [20:36] jcsackett, do you have time to mumble? [20:36] sinzui: give me five, and sure.
:-) [20:41] sinzui: http://people.canonical.com/~jc/images/Details%20Mockups/ [21:03] hi cr3 -- do you need a hand landing https://code.edge.launchpad.net/~cr3/launchpad/hwsubmissionset_search/+merge/63768 [21:04] bac: absolutely, thanks for following up on that branch. is there anything I can do? [21:05] cr3: i haven't looked at it in detail yet. do you think it is ready for ec2 and landing? [21:06] bac: yep, it's been reviewed by lifeless and stub, all changes they suggested have been applied so I'm very comfortable... as if I were wearing a belt and suspenders [21:06] cr3: ok. could i get you to write a 'commit message' on the MP? [21:08] bac: not sure I follow, this is my first contribution to LP [21:09] cr3: np. click on the above link to go to the merge proposal. towards the top is a JS control "set commit message". that is the brief description of the branch that will be used as the PQM commit message [21:10] so just type a sentence that describes the reason for the branch [21:14] bac: done, let me know if you might word anything differently or if I might be missing special notation [21:15] cr3: no special PQM notation any more. the tools take care of it! [21:15] cr3: looks good [21:37] abentley: hi [21:37] lifeless: hi. [21:37] bug 690021 - will we get an OOPS when a job runs into the memory limit and fails ? [21:37] <_mup_> Bug #690021: scan_branches terminated for excessive memory abuse < https://launchpad.net/bugs/690021 > [21:39] lifeless: yes. Python will raise MemoryError. [21:39] thanks [21:39] lifeless: np [22:11] Project parallel-test build #66: STILL FAILING in 1 hr 12 min: https://lpci.wedontsleep.org/job/parallel-test/66/ [22:13] lifeless: Also, I've done the same for sendbranchmail: https://code.launchpad.net/~abentley/launchpad/memlimit-sendbranchmail [22:14] abentley: cool [22:14] this is good stuff [22:16] lifeless: yeah, it's good to reduce the consequences of memory-munching, and AIUI, the bzr team have reduced the memory-munching significantly. [22:16] I'd kindof like to memlimit the appservers too [22:16] with python hitting swap is never the right thing (gc languages :() [22:18] lifeless: if you do, make sure that only the process that OOMed is affected, and it's probably a good idea to treat that process as dirty. [22:19] lifeless: i.e. the environment is suspect, and it shouldn't be reused for another request. [22:20] abentley: python's MemoryError raising should be fairly reliable, no ? [22:22] lifeless: yes, I haven't seen it fail. Though even with an RLIMIT_AS of 0, I had to allocate 10 ** 6 bytes to trigger it. [22:23] I'd certainly make the current request bail [22:23] I don't think we'd need to restart the appserver [22:24] perhaps I'm wrong and it is more fragile? [22:25] lifeless: The issue with the branch scanner was that a previous OOM was causing the following scan to OOM. [22:25] abentley: I guess because of slab allocators? [22:26] lifeless: I thought it might be actual memory leaks. I don't know enough about slab allocators. [22:26] abentley: so python doesn't call malloc() per object [22:27] it allocates slabs of various sizes which objects can go into; I'd need to go check the source to be more precise than that [22:27] Oh, the 10**6 because of the slab allocators? Probably.
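The mechanism under discussion, in miniature: a sketch of capping a job's address space so a runaway allocation raises MemoryError (which the runner can turn into an OOPS) instead of dragging the machine into swap; the 512 MiB limit is illustrative:

    import resource

    def cap_address_space(max_bytes):
        """Apply RLIMIT_AS to the current process (sketch)."""
        resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))

    cap_address_space(512 * 1024 * 1024)
    try:
        hog = '*' * (10 ** 9)
    except MemoryError:
        pass  # a job runner would log an OOPS here rather than crash outright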
[22:27] no, the OOM on the next job [22:27] so a slab can only be released to the OS when all the objects in it have been freed [22:28] there may also be an implementation issue with the memlimit - if it just looks at sbrk for instance [22:28] anyhow - thats a good gotcha to know about [22:29] lifeless: the bug for this was 786804 === matsubara is now known as matsubara-afk [23:09] lifeless: got lp running in a 32 bit lxc container on my 64 bit os. one pita though - the .so files need recompiling so lp can run but then this stuffs up things outside the container requiring the .so files (like ec2 land). what's the cmd to recompile just the c source? [23:09] wallyworld_: the eggs are fine, but the things doing build_ext will give grief [23:10] wallyworld_: make clean && make build should do it [23:10] lifeless: i've done that but it takes too long. i was hoping for a cmd to just redo the so files [23:11] wallyworld_: I have a separate tree just for ec2 land [23:11] wallyworld_: make clean && make build in the sourcecode subdir might work [23:11] thanks. i suspect it will. pita though :-( [23:12] Hey, what's the best way of getting a faster launchpadlib at the moment? [23:13] michaelh1: move to London [23:14] wallyworld_: thats why I have a separate tree :> [23:14] wallyworld_: I'd like to move ec2 land etc out of the main tree; they have no business being there [23:14] yes agreed :-) [23:17] mwhudson: hi [23:17] jelmer: hello [23:17] jelmer: yes i saw your merge proposals :) [23:18] mwhudson: heh, ok.. wasn't sure if you'd see them through the floods of Launchpad-related emails :) [23:42] wallyworld_: so, fuse [23:42] wallyworld_: why do you think lxc doesn't support it ? [23:43] because of the errors and a google search where one of the devs said so :-) [23:43] what errors? [23:43] the config I provided supports fuse [23:43] Project db-devel build #661: STILL FAILING in 5 hr 32 min: https://lpci.wedontsleep.org/job/db-devel/661/ [23:44] wallyworld_: can you msg me your /var/lib/lxc/$containername/config file ? [23:44] trying to install launchpad-dependencies (from rocketfuel-setup) - the package post processing said things like "/dev/fuse cannot be accessed" etc. don't have the specifics handy [23:45] that suggests you didn't have the right devices whitelisted [23:45] the config I'm asking for will help diagnose this [23:46] lifeless: https://pastebin.canonical.com/48968/ [23:46] yeah [23:47] you must have made your container before I added fuse support to the wiki page [23:47] ah right [23:47] #fuse (workaround for Bug:800886) [23:47] lxc.cgroup.devices.allow = c 10:229 rwm [23:47] # part of the Bug:798476 workaround - [23:47] # remove if you are running a 64 bit lxc or [23:47] # 32 bit on 32-bit base os [23:48] lxc.arch = i686 [23:48] add that to your config file [23:48] lp still runs up ok for simple smoke testing. i guess codehosting stuff may be broken without fuse [23:48] * wallyworld_ fires up vi [23:49] wallyworld_: Codehosting works fine without fuse. [23:50] so what part of lp needs it? [23:50] AFAIK nothing [23:50] wallyworld_: The test suite passed for me except for a rabbit startup failure (probably because I changed the hostname without fixing /etc/hosts) and a few mailman tests that are an aufs bug. [23:50] however the stack of things for e.g. windmill and so on may well [23:50] wallyworld_: Nothing. It's just Recommended by one of the LP dependencies. [23:50] wgrant: you ran w/out fuse? [23:50] lifeless: Yes. [23:51] as am i since i couldn't install it :-) [23:51] I couldn't either. [23:51] right [23:51] we may want to disable recommends in rocketfuel-setup regardless [23:52] would be a lot quicker to run - far less downloads [23:52] Indeed. [23:52] I've wanted to do so for months, but the review overhead always made me think twice. [23:52] i also had to install aptitude by hand since rocketfuel used that by default [23:52] not sure why [23:53] whereas apt-get works just fine [23:53] It shouldn't any more... [23:53] since when? [23:53] ah, i was likely using an old copy [23:53] Possibly, if you copied across your original version. [23:53] It certainly uses apt-get now. [23:54] yeah, my fault [23:54] lifeless: So, what kind of hardware are pilinut and pigeonpea? [23:55] FIIK