[00:01] lifeless: it's running 7.04... probably no suspend :) [00:01] benji: I'm with lifeless and rockstar... you can't reboot that bad boy right now [00:02] ok, bowing to popular opinion he gets a stay of execution until next week [00:02] we'll have a small memorial service with light refreshments to follow [00:02] and whoever said peer pressure didn't work [00:24] thumper: the commit count on https://code.edge.launchpad.net/nova ... that's counting unmerged revs? cause in a branch if I run 'bzr log -n0 | grep revno | wc -l' I get 527 [00:24] thumper: so I'm just curious what the number on the site means [00:24] * thumper looks [00:24] 1171 commits by 22 people in the last month [00:24] that bit? [00:24] yup [00:25] counts all commits into branches related to that project [00:25] in the last 30 days [00:25] not just mainline commits [00:25] does it de-dup? [00:25] yep [00:25] distinct revisions [00:25] cool [00:25] thanks [00:25] np [01:29] Hrmph. [01:29] The python-debian upgrade overnight seems to have blown everything up. [01:29] Starting the test suite fails with a DeprecationWarning. [01:29] Commenting it out makes it work fine. [01:30] But I don't see why a DeprecationWarning would be fatal... [01:32] Ah, I wonder if that latest bzr-builder-related revision fixes it. [01:38] How do people manage to file Ubuntu bugs against malone using apport? [01:45] hmm [01:45] is it just malone? [01:45] could be a buggy apport script pointing at malone directly somewhere [01:53] lifeless: It's just a single accidental bug. [01:53] But it's amazing that people still manage it. [01:58] * mwhudson lunches [01:58] wgrant: theres been a few I thought [01:59] lifeless: Not normally filed with apport. [03:03] Fuuuck. [03:04] The upload processor was just really delayed. [03:04] So I now have the same package accepted twice into the same PPA. [03:05] Somehow the cocoplum and germanium upload processors managed to run at just the same time. [03:05] After not running for an extended period. [03:06] Now my PPA is going to be all broken :( [03:07] ooooohdear. [03:07] wgrant: What have we told you about breaking the world? :-P [03:07] At least it's only the staging PPA, not the production one... [03:08] The result of these builds could be, um, interesting. [03:09] https://edge.launchpad.net/~unimelb-ivle/+archive/staging/+packages [03:18] Start 2010-08-01 (2505) What's this? [03:18] Grrrrr. [03:18] Can I convince someone to give me a score bump? [03:22] https://edge.launchpad.net/~unimelb-ivle/+archive/staging/+build/1896715 is very sad and would love a bump. [03:23] Oh. [03:23] There are no builders. [03:23] Grr. [03:24] Why must the builders disappear when the queue is days long? [03:24] It really makes PPAs a whole lot less useful. [03:25] When you have to wait 48 hours for a build. [03:34] Something is seriously wrong with the upload processors. I just got a rejection for an upload that I did an hour ago. [03:37] losa ping [03:37] ^ [03:38] mwhudson: ping [03:41] thumper: pong [03:41] mwhudson: can I have a pre-impl thing with you? [03:41] although thinking it through, I think I almost have it :) [03:42] thumper: heh, ok, skype? [03:42] yeah [03:42] going online... [03:53] Blah. [03:53] Just got another three rejections. [03:53] 1.5 hours later. [03:58] wgrant: are they valid & slow, or invalid rejections ? [03:58] lifeless: They're valid, but really really slow. [03:59] Heh. [03:59] Actually, the rejection reason is broken, because the PPA is now broken. but they should have been rejected anyway. 
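The python-debian fallout mwhudson describes above ("I don't see why a DeprecationWarning would be fatal") is the classic symptom of a warnings filter that escalates that category to an error. A minimal sketch, assuming such a filter is in effect rather than showing Launchpad's actual test-runner configuration:

    import warnings

    # Hypothetical escalation: once DeprecationWarning is treated as an error,
    # any library that starts warning after an upgrade aborts the run at once.
    warnings.simplefilter("error", DeprecationWarning)

    def call_upgraded_library():
        # Stand-in for whatever the new python-debian now warns about.
        warnings.warn("this API is deprecated", DeprecationWarning)

    try:
        call_upgraded_library()
    except DeprecationWarning as err:
        print("test suite start-up dies here:", err)

Because the filter is narrowed to DeprecationWarning, other warning categories still pass through untouched.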
[04:09] * wgrant sometimes wonders how Launchpad hopes to be taken seriously. [04:16] wgrant: surprisingly, there exists software that sucks much more than launchpad [04:16] mwhudson: Yes, but other stuff generally doesn't make me regret moving to it for deployments every time I try to make a release. [04:16] And other stuff doesn't have 3 day build queues. [04:18] yeah, the build queue is pretty insane currently [04:18] And often is. [04:18] And nobody seems to care. [04:22] wgrant: we do care [04:22] wgrant: perhaps we should show you our scars where we cut ourselves each time a user complains [04:26] thumper: The build queue has reached the 2-3 day mark at least once every couple of weeks, lately. [04:26] That makes one of Launchpad's big features just about useless. [04:27] A couple of years ago, a bad PPA build wait might be a couple of hours. Now it's often 24 times that. [04:30] Does anybody even work on Loggerhead anymore? [04:33] magcius: yes, but not an active LP dev [04:33] thumper: argh. There are several UX issues with it that are so annoying. [04:34] thumper: is there a slated replacement or similar for the code browsing? [04:41] magcius: more a fixing than replacement [04:41] magcius: what is it that annoys you so much? [04:41] thumper: the lack of left margin for code, especially annoying with Python [04:42] thumper: and the blame lines are not monospaced and centered, which makes it even more annoying [04:42] magcius: can you give me a concrete example? [04:42] magcius: a link maybe? [04:42] thumper: http://bazaar.launchpad.net/~loggerhead-team/loggerhead/trunk-rich/annotate/head%3A/loggerhead/apps/config.py [04:43] thumper: (the URLs are my second complaint) [04:43] thumper: if you scroll down a bit, it's easy to lose your place in the indentation [04:44] thumper: also, if I want to go up to the directory that that is in [04:44] thumper: I can't just edit the URL to remove the "config.py", it gives an error [04:44] thumper: the 'head:' that is encoded is annoying when passing around, and the URL is needlessly long. [04:47] thumper: I logged these: #569355 and #569358 [04:47] <_mup_> Bug #569355: Left margin in code view. [04:47] <_mup_> Bug #569358: Code viewer URLs have problems [04:47] After a month or two, they're still both "New" [04:51] magcius: I'm sure loggerhead could use some keen devs :) [04:51] thumper: it looks to be a Django app. [04:51] magcius: I think it is pretty vanilla wsgi [04:52] its pastedeploy specifically, which is pretty stock wsgi [04:52] oh hey, it's broken too [04:52] http://bazaar.launchpad.net/~loggerhead-team/loggerhead/trunk-rich/annotate/head%3A/loggerhead/apps/config.py [04:53] if you click on one of the parent directories at the top, it takes you to a weird URL where none of the files from then on work [04:53] like http://bazaar.launchpad.net/~loggerhead-team/loggerhead/trunk-rich/files/428?file_id=apps-20080618010429-n87ewcyimdwsx90g-3 [05:03] mwhudson: ping; I've just done a microreview about string formatting ;) [05:03] mwhudson: if you want to discuss in realtime, let me know. [05:06] lifeless: you have? where? [05:07] https://code.edge.launchpad.net/~mwhudson/launchpad/vostok-traverse-distro/+merge/31241 [05:08] lifeless: branch_type is an enum [05:08] or value from an enum, really i guess [05:08] mwhudson: _always_, reliably, even when its gone wrong ? 
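lifeless's closing question ("_always_, reliably, even when its gone wrong?") is the usual worry about interpolating a value in error-handling code: by the time you format it, the value may not be what you expected. A small self-contained sketch of the failure mode, not the code under review:

    class Broken:
        """Stands in for a value that is not what the error path assumed."""
        def __str__(self):
            raise ValueError("no str() for you")

    value = Broken()
    # "unexpected branch_type: %s" % value would raise right here, masking the
    # error actually being reported; %r falls back to repr() and survives.
    print("unexpected branch_type: %r" % value)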
[05:08] lifeless: i think so [05:08] lifeless: however [05:08] lifeless: this isn't my change and i'm happy to revert it [05:09] It seems to me that the reflex of 'this can go wrong' is one worth making easy to apply [05:09] yeah, i agree i think [05:09] hey! [05:09] someone review this: https://code.edge.launchpad.net/~mwhudson/launchpad/kill-launchbag.site/+merge/31348 [05:09] O hai! thumper, mwhudson: Could one (or both, I don't mind) have a look at an MP that I've pushed up that uses the job infrastructure? [05:11] https://code.edge.launchpad.net/~stevenk/launchpad/db-add-ifp-job/+merge/31349 [05:11] mwhudson: can you let whomever made that change know about this discussion ? [05:12] lifeless: sure [05:12] lifeless: You too, if you wouldn't mind, but you tend to stalk new MPs anyway [05:12] StevenK: I stalk everyones [05:12] StevenK: I'm subscribed to the feed [05:12] StevenK: but I'll pass today, today is jetlad day #2. [05:12] StevenK: I will look monday [05:13] lifeless: You'll need to do a DB review in any case, but Monday is cool [05:13] StevenK: + -- The particular type of foo job [05:13] Oh, damn it [05:13] do you need the job_type anyway? [05:13] I knew I should have grep'd the diff for 'foo' [05:14] i can't really imagine InitialiseDistroSeriesJob containing other kinds of job, somehow... [05:15] mwhudson: TBH, I was following the wiki page -- so I'm not certain. I can forsee the need to change it later to add the ability to, for example, do a scorched-earth rebuild. [05:17] ok [05:18] mwhudson: So is that a use-case for job_type, or that can be fed in via the JSON metadata? [05:21] StevenK: i guess it depends on details [05:21] StevenK: the point of job_type not being in the json is that you can query it [05:21] would you have a different cron job for that kind of ifp job? [05:22] Nope, they should be handled by one cron job [05:25] then i don't think you need a job_type at this stage [05:26] StevenK: similarly, i'm a bit suspicious of the json_data [05:27] mwhudson: Well, I think there is a need (although, not right now) to feed in some more data, so leaving it in sounds fair. [05:28] mwhudson: However, I agree re: death to job_type, and have added to that my list [05:28] StevenK: json is json [05:28] why have two fields? [05:28] assuming I read the chatter here correctly [05:28] Isn't someone else working on copy archive population jobs? [05:28] Isn't that pretty similar to i-f-p? [05:29] wgrant: it was james_w [05:30] * StevenK reads through james' MP [05:31] That's for an entire archive, whereas my MP covers distroseries [05:32] StevenK: They seem very, very similar. [05:32] populate-archive just happens to not copy binaries. [05:36] * StevenK will talk to Julian [05:58] StevenK: and while you're at it, please another blood-letting for the lack of builders making wgrant's build take two days [05:59] thumper: I would, but I value my ears [06:00] * StevenK heads out to the bank [06:11] thumper: did you know that the view returned by test_traverse() is security proxied? [06:18] mwhudson: nope [06:18] or at least in this case [07:10] I am confusede. [07:10] Is that Ye Olde English confused? [07:13] It's "I'm doing too many things to state my confusion" :P [07:14] I don't understand why dailies have priority to destroy the build farm even more than it is already. [07:14] I would have thought they would be the last things, apart from rebuilds, that should be built in a situation of the worst resource starvation we've seen in a looong time. [07:15] The queue is three days long. 
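mwhudson's point above, that "the point of job_type not being in the json is that you can query it", is about keeping filterable state out of the opaque metadata blob. A rough sketch; the table and column names come from the discussion, but the schema details are guesses, not the real Launchpad tables:

    import json

    # A job_type column can be matched (and indexed) directly in SQL:
    QUERY_BY_TYPE = """
        SELECT job FROM InitialiseDistroSeriesJob
        WHERE job_type = 2   -- e.g. a hypothetical scorched-earth rebuild type
    """

    # The same flag buried in json_data forces loading and parsing every row:
    json_data = json.dumps({"scorched_earth": True, "arches": ["i386", "amd64"]})
    is_rebuild = json.loads(json_data).get("scorched_earth", False)
    print(is_rebuild)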
[07:15] It will take more than that to clear. [07:16] wgrant: So, I had already admitted I had privledges, and would bump builds. Am I supposed to reply back with "Your builds aren't important enough, sod off" ? [07:17] I think there needs to be a policy. [07:17] Because this starvation happens frequently. [07:17] And long-running new crack builds are surely not the things that need to be prioritised. [07:17] Alpha 3 is around the corner, which is where I suspect the builders have gone. [07:18] wgrant: Which still ends up with people being told "Your builds aren't important enough, sod off"; but they are, to *them* [07:18] StevenK: I don't see how dailies could possibly be important. [07:18] They are dailies. [07:18] They happen daily. [07:19] They attempt to happen daily [07:19] Nobody is going to be pissed off if one delivery of the new daily crack is delayed by 24 hours. [07:19] People *are* going to be pissed off if their first builds in weeks take five days. [07:19] Except, say, fta, who is well-documented about his feelings of the build farm [07:20] See, I have a build which will take less than a minute. [07:20] And it won't be done for five days. Because the dailies are prioritised. [07:20] When there could be hundreds of other builds done in the time a single daily takes. [07:20] But I can't bump that one [07:20] Sadly [07:22] * wgrant now has to advise the admins to use a repository elsewhere, because the PPA can't be populated with the new release within a week. [07:23] wgrant: I can bump builds if you ask, but I was under the impression recipe builds can't be bumped [07:25] StevenK: A score bump for https://edge.launchpad.net/~unimelb-ivle/+archive/staging/+build/1896715 would be most appreciated. [07:25] Otherwise, I'll make my arch-indep packages arch-dep, and just let it build on amd64 in a few hours... [07:27] wgrant: Bumped; with a score above the other builds I bumped [07:27] StevenK: Thankyou muchly. [07:28] Can you even buy beer yet? [07:28] Hopefully the other builders will reappear at some point. [07:28] * StevenK cackles [07:28] Not in the US :P [07:28] I wasn't in the US, last I checked :-) [07:29] Tarue. [07:29] Hmm. [07:29] Double form submission. [07:29] I clicked the button once, but it submitted twice. [07:29] That's a bit scary. [07:30] Browser bug? :-P [07:30] Possible... [07:32] * StevenK is also distracted, arguing with Telstra over a bill :-/ [07:32] KILL THEM. [07:32] Haha [07:33] At least with NBN that might all go away... [07:33] Telstra will become irrelevant and fade away :) [07:33] Or so we can hope... [08:30] good morning [09:09] Ni hao === stub1 is now known as stub === jelmer_ is now known as jelmer [11:54] bigjools: I wonder if current_source_publication should be latest_source_publication. [11:55] wgrant: it could be an additional method [11:55] Ah, good idea. [11:55] current_ is very valid [12:05] Morning, all. [12:36] pqm could be much better. [12:37] deryck, good morning [12:43] Thanks for the blog post deryck [12:43] mrevell, glad to do it. Now to actually get that bug fixed. :-) [12:43] :) === mrevell is now known as mrevell-lunch [13:07] bigjools: Any chance of a score bump for https://edge.launchpad.net/~unimelb-ivle/+archive/staging/+packages? === matsubara-afk is now known as matsubara === mrevell-lunch is now known as mrevell [14:22] wgrant: still around? [14:22] bigjools: Indeedily. [14:22] wgrant: seen /builders for nonvirt i386? [14:22] 149 jobs (eight minutes) [14:22] It's translations jobs. 
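The score bumps StevenK hands out above make more sense with the dispatch rule in mind: pending jobs are taken highest score first, so a bump only needs to clear whatever the daily recipe builds are scored at. A toy illustration with invented numbers, not the real build farm code:

    pending = [
        {"job": "daily recipe build", "score": 2505},
        {"job": "another daily recipe build", "score": 2505},
        {"job": "ivle staging build (arch-indep)", "score": 1000},
    ]

    def bump(jobs, name, new_score):
        """Manual score bump: lift one job above everything queued ahead of it."""
        for entry in jobs:
            if entry["job"] == name:
                entry["score"] = new_score

    bump(pending, "ivle staging build (arch-indep)", 5000)
    next_dispatched = max(pending, key=lambda entry: entry["score"])
    print(next_dispatched["job"])   # the bumped build goes out next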
[14:23] I think not - and nothing was getting dispatched earlier [14:23] They appear in the wrong queue, and their estimated duration is screwed. [14:23] still not [14:23] Oh? [14:23] ah [14:23] TTBJs? [14:23] Translations jobs. [14:23] Yes. [14:24] I filed a bug about the incorrect queue earlier this week. [14:24] And they have a low score, so they'll sit there until the i386 virt queue clears. [14:24] I don't know of a bug for the bad duration estimate. [14:33] wgrant: are they also in the PPA queue length? [14:33] bigjools: No. [14:33] ah [14:33] wow, I wonder why there;s so many [14:34] It's not that many. [14:34] it's a couple of days' worth, at least... [14:35] 150! [14:35] we need to score them up a bit [14:35] they're quick [14:35] I'm also toying with the idea of scoring people's builds -1 for each outstanding build they have [14:36] Possibly. [14:36] I think we need to rethink scheduling. [14:36] Since the current system obvious doesn't work. [14:36] At all. [14:36] +ly [14:37] well, it does work [14:37] the problem is that there;s more work than the builders can handle [14:37] we're due 7 more PPA builders soon-ish [14:38] You've been saying that since Wellington :P [14:38] But the problem also manifests itself when the daily wave appears. [14:38] * bigjools can't comment on why they're not instaled [14:38] The dailies drown out other builds for a few hours. [14:38] yes [14:38] and if we score those down, someone else complains [14:38] *shrug* [14:39] If they don't build for a while, sure. [14:39] But there should be no issue with a wait of a few hours. [14:39] Which allows a much more realtime experience for the people who actually upload things. [14:41] so the next piece of work will be to add some knobs and dials to bias scores on the fly [14:45] Hmm. I'm not sure that a more automated solution could not be devised. [14:45] But first, sleep. [14:46] Did they just steal samarium too? [14:51] Unlikely [14:52] hi StevenK [14:52] james_w: O hai! [14:53] StevenK: I'll take a look today at how much the two job types can be coalesced [14:53] I'm pretty sure we don't need two db tables at least [14:53] james_w: That sounds great. We could probably get away with 2 job types [14:54] I'm not sure what's so different that they can't share some of the same code as well [14:54] I think they can share an awful lot of code. [14:54] Populating a copy archive is just a subset of i-f-p. === matsubara is now known as matsubara-lunch [14:55] * bigjools high fives james_w, StevenK and wgrant [15:11] wgrant: samarium is back === salgado is now known as salgado-lunch [17:03] sinzui, hi === matsubara-lunch is now known as matsubara [17:03] hi jelmer [17:04] sinzui: The PPA appears to be lacking a copy of pocket-lint for hardy, is that intentional? [17:05] I mentioned this on the mailing list [17:05] * maxb finds thread [17:07] jelmer. Sort of. I think all developers are on lucid or maverick, and pocket-lint is only used by developers [17:07] Amu I wrong [17:07] ? [17:07] am I wrong? [17:08] sinzui: The ec2 image is still hardy-based so it's impossible to update at the moment. [17:08] bugger [17:08] sinzui: We should not preclude developing on the distro that production uses, nor should launchpad-developer-dependencies be uninstallable in any ppa series, IMO [17:09] well I do not know if pocket-lint works on hardy. We could copy it and see if it builds [17:09] sinzui: I'll try [17:09] has 0.5 been built yet? 
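bigjools also floats "scoring people's builds -1 for each outstanding build they have" in the exchange above. As a sketch, that is just a per-owner penalty applied before dispatch; the figures here are invented purely for illustration:

    def effective_score(base_score, outstanding_builds_for_owner):
        """Fair-share tweak: every build an owner already has queued costs a point."""
        return base_score - outstanding_builds_for_owner

    # With these made-up numbers, a daily-recipe owner with 150 builds queued
    # drops below a one-off upload that has nothing else waiting:
    print(effective_score(2505, 150))   # 2355
    print(effective_score(2400, 0))     # 2400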
[17:10] no, the PPA has 0.4 at the moment [17:12] jelmer, I think it will work since python 2.5 was backported for lp. pocket-lint includes its external deps in a contrib dir. === deryck is now known as deryck[lunch] [17:43] has anyone put any thought about what for do about LaunchpadObjectFactory.makeDistributionSourcePackage returns an unproxied object [17:44] It is simple class that differs to database classes. I cannot image how to get a proxied version of a DSP === salgado-lunch is now known as salgado [17:51] sinzui: construct it from proxied objects? [17:51] s/from/inside/ [17:51] I am look for that actually [17:52] I think distribution.getSourcePackage() is the only way to get a proxies object === deryck[lunch] is now known as deryck === Ursinha is now known as Ursinha-lunch [18:05] mars, so what's the deal with the lazr-js branches? === beuno is now known as beuno-lunch [18:12] deryck, hi [18:12] jml, hi. on call right now. [18:13] deryck, np. I'll use email to achieve my nefarious purpose [18:16] sinzui, ProxyFactory [18:17] jml: distribution.getSourcePackage() a nice one line change [18:17] sinzui, cool. [18:17] fwiw, my branch that turns the print outs into Python warnings has landed [18:18] if it sucks, revert it & send me an email. [18:41] * rockstar lunches === beuno-lunch is now known as beuno === matsubara is now known as matsubara-brb [19:27] so, we were asked to make sure that it is possible to delete all artefacts of copy archives from the librarian. Could anyone point me to any code that I could read about the cleanup process and how it is decided what to delete? [19:32] james_w: it's not easy [19:33] james_w: start at the publishing records, then cascade down through builds, librarian files and spr/bpr if they're not published elsewhere [19:34] we added some ON DELETE CASCADEs to the model but some of it's dangerous [19:36] bigjools: so it's merely that the archive is deleted, and that cascades down to delete the librarian files at the end of the chain? [19:36] james_w: in an ideal world yes, but remember that stuff was copied from other archives so a lot of the files may be shared === gary_poster_ is now known as gary_poster [19:37] your binaries won't in the case of copy archives of course, but if we apply this work to PPAs as well then it will [19:37] bigjools: right, I'm just trying to understand the process first [19:37] james_w: the db cascade only goes so far, it stops at the stuff that might be shared [19:39] moin [19:40] bigjools: so we need another process, as we can't rely on the database to do this for us [19:40] james_w: yes - we've implemented some of it in the publisher, which is the right place to do this [19:40] hello lifeless, working on the weekend [19:41] james_w: there's an archive status of DELETING and DELETED [19:41] james_w: the former is set by the webapp request and the publisher picks that up, deletes the repo, and sets status to DELETED [19:42] bigjools: the publisher deletes the published files and the librarian files too? [19:42] james_w: no, just the repo area right now [19:43] there's a separate process that deletes librarian files [19:43] we need to extend it to pick up copy archive files [19:43] ok, and where is that process? [19:44] cronscripts/expire-archive-files.py [19:45] james_w: it picks up superseded and deleted publications [19:45] james_w: you can just make a one line change to that script :) [19:45] and deleting a copy archive will delete the publications from that archive? 
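sinzui's earlier question about getting a proxied DistributionSourcePackage out of the object factory was answered with "ProxyFactory". A minimal self-contained sketch of what that helper does, assuming zope.security is importable; the class below is a placeholder, not Launchpad code:

    from zope.security.checker import NamesChecker, ProxyFactory

    class FakeDSP:
        name = "pocket-lint"
        def delete(self):
            pass

    # Wrap the bare object so attribute access goes through a security checker;
    # only the names listed in the checker are reachable through the proxy.
    proxied = ProxyFactory(FakeDSP(), NamesChecker(["name"]))
    print(proxied.name)     # allowed
    # proxied.delete()      # would raise ForbiddenAttribute through the proxy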
[19:45] yes [19:46] well [19:46] actually, if you delete the publishing records then the librarian files will be orphaned and the GC will get them [19:46] so don't worry about that expiration script [19:47] so we want to make deletion of copy archives delete the binary publishing records for that archive? [19:47] (and the associated builds presumably) [19:48] and source publications [19:48] and any binarypackagereleasefile / sourcepackagereleasefile [19:49] that is not published elsewhere [19:51] mars, ping [19:51] bigjools: so it already deletes all sources, we should just make it do the same for the other objects? [19:51] well requestDeletion() of all published sources at least [19:52] james_w: when I say delete, I mean remove the database row, not mark the publication deleted [19:52] ok [19:52] we want to completely blow any trace of this archive away [19:52] but still in Archive.delete? [19:52] no [19:52] you have to do it in the publisher === matsubara-brb is now known as matsubara [19:53] why? [19:54] just so it's only done when it goes DELETING->DELETED? [19:54] yes [19:54] it would take too long to do in a webapp request [19:55] we might even need to move that bit of code that marks all the source publications as deleted into the handler in the publisher, I suspect it'll take too long for copy archives [19:55] heh [19:56] you can't actually delete copy archives right now any way [19:56] so what is actually done right now? [19:56] all the publications are marked as deleted and the repository is removed [19:56] I guess it's because they aren't published [19:57] the repository is not removed via the standard mechanism that PPAs are, so I don't want to change something that isn't actually used. [19:57] I don't understand what you mean [19:58] if copy archives are set to DELETING then they are not set to DELETED by the publisher as it stands. So either there is some other mechanism used, or we just have a bunch of copy archives in the DELETING state. [19:58] oh - I don't know if anyone set those as deleted yet [19:59] well, they go inaccessible from the web [19:59] they get disabled [19:59] so maybe they are just disabled? [19:59] right [19:59] jelmer is fixing that so we can still see disabled copy archives :) [20:00] so we want to change it such that an admin can go to +edit and delete them? [20:00] at which point the publisher will remove them from disk if they have been published and set them to DELETED. [20:01] right, so if you see the bottom of lib/lp/soyuz/scripts/publishdistro.py [20:01] and in addition extend it such that if it is a copy archive all publications are deleted, and other things such as SPRs are deleted if they are only references by this archive? [20:01] it only considers PPAs before calling publisher.deleteArchive [20:01] right, that's what I was saying a minute ago [20:01] yes :) === Ursinha-lunch is now known as Ursinha [20:01] sorry it's late and I am slow :) [20:01] should that be the same for all archives, or just copy archives? [20:01] should be the same for copy archives too [20:02] so [20:02] I would take a look at the foreign key chain off the publishing records and you'll see quite quickly what you need to delete :) [20:02] no, I mean, should the deletion of publications and SPRs etc. happen for all DELETING archives? [20:02] or just copy archives? 
[20:02] yes, we need to blow them away entirely [20:03] we do a half-assed job at the moment so that people can rename their accounts [20:03] I have a test that does [20:03] naked_productseries.datecreated = test_date [20:03] ok, so no ArchivePurpose check. [20:03] to test a value that cannot be written via the interfaces [20:03] bigjools: thanks, I'll show you some code tomorrow so you can verify the direction [20:03] not tomorrow you won't :) [20:04] james_w: the only trick part is where packages are published in more than one archive [20:04] tricky even [20:04] once you get past that it's easy [20:04] bigjools: I'll show you tomorrow, but you won't look :-) [20:04] heh [20:04] I'm about to upgrade my hardy server to lucid, I might not be looking at anything [20:51] sinzui: ping [20:51] hi barry [20:52] sinzui: hi! say, i'm trying to set up another branch mirror but lp isn't happy. could you double check this for me? https://code.edge.launchpad.net/~python-dev/python/2.7 [20:52] sinzui: i can svn ls http://svn.python.org/projects/python/branches/release27-maint [20:53] barry: you want an import not a mirror. Mirror is for native bzr branches, import for foreign VCS. [20:53] d'oh! [20:54] ah! I was wonder why the page looks so strange [20:56] we should eliminate that difference [20:56] its implementation not intent [20:56] james_w, sinzui: thanks. i just did an approved import request [20:56] lifeless: yes, or add import to the list of options in "register a branch" and explain the difference ;) [20:57] barry: file a bug pleasE? [20:57] lifeless: already started... :) [20:57] I believe there is already several bug about "Register a branch" and a discussion to remove it [21:00] lifeless, sinzui: you are the triagemasters, please do your magic: https://bugs.edge.launchpad.net/launchpad/+bug/611837 [21:00] <_mup_> Bug #611837: Confusions between import and mirror branches [21:01] thanks guys! [21:01] I am sure that is a dup I wonder if it would have shown up as a dup if the bug was reported against launchpad-code [21:02] bigjools: still around? How is the uploadpolicy context set for uploads from users. Apparently all the process-upload calls in lp-production-configs use the buildd policy. [21:03] james_w: it uses the insecure policy for general uploads, it's set in the crontab entry [21:03] ah, crontabs, thanks [21:03] sinzui: possibly! [21:04] The +setbranch form shows a unified view of all the nonsense. It does a better job of explaining the difference [21:05] That view is for registration *add* setting it to a series, so it is not always the right link to use [21:41] oh crap, a car crash of concerns [21:54] did jtv's GenericCollection ever land? [21:54] yes [21:58] hum [22:05] sinzui: ProxyFactory lets you proxy any proxiable object. You don't need to get it from anywhere special. [22:05] wgrant, thanks. I have applied that to the objects I cared about [22:09] hi wgrant [22:10] Hi james_w. [22:10] so, deleting an archive disables it, which means that the publisher tends to ignore it [22:11] That's correct. [22:11] so we have to teach the publisher to consider disabled archives in the DELETING state [22:11] I thought that was already the case. [22:11] There is a special DELETING path in there... [22:11] however, the methods it uses for finding archives aren't very suitable for this [22:11] wgrant: I'm extending it beyond PPAs, it works fine for them [22:11] Ah, right. [22:12] Copy archives are going to be materialised soon? 
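The deletion flow bigjools walks james_w through above (publication rows removed, librarian files orphaned and later garbage-collected, anything still published from another archive left alone) can be condensed into a toy model. This is a self-contained illustration of the described behaviour, not Launchpad's publisher code:

    archives = {
        "copy-rebuild-1": {"status": "DELETING",
                           "pubs": [{"file": "foo_1.0.dsc"}, {"file": "bar_2.0.dsc"}]},
        "primary":        {"status": "ACTIVE",
                           "pubs": [{"file": "bar_2.0.dsc"}]},
    }

    def published_elsewhere(filename, skip):
        return any(pub["file"] == filename
                   for name, archive in archives.items() if name != skip
                   for pub in archive["pubs"])

    def purge(name):
        """DELETING -> DELETED: drop publication rows, keep shared files alive."""
        archive = archives[name]
        assert archive["status"] == "DELETING"
        orphaned = [pub["file"] for pub in archive["pubs"]
                    if not published_elsewhere(pub["file"], name)]
        archive["pubs"] = []           # rows gone; the librarian GC reaps orphans later
        archive["status"] = "DELETED"
        return orphaned

    print(purge("copy-rebuild-1"))     # ['foo_1.0.dsc']; bar_2.0.dsc is shared, so kept

The only tricky part, as noted in the discussion, is exactly that shared-files check: a file may still be referenced by publications in archives that are not being deleted.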
[22:12] yeah [22:12] well, it's more about librarian, but yes, that too [22:12] I've been asked to make the publisher delete artefacts in the db from deleted archives [22:13] Ah... [22:13] It would be nice if LP permissions didn't suck. [22:14] Once you do that, anyone with upload rights can completely destroy the archive and all its history irretrievably. [22:14] that applies to PRIMARY too? [22:14] that sounds undesirable [22:14] No. It's special. [22:15] But anyone in ~ubuntu-drivers could. [22:15] right [22:15] Which is far too large a group of people, but that causes other more gravely concerning issues. [22:15] still not particularly desirable in general [22:15] No. [22:15] we could limit this to COPY archives for now [22:15] That sounds good. [22:16] I got the impression that this is wanted for PPA as well so that they can be properly deleted [22:16] is that right? [22:16] I cannot support PPAs' inclusion in it until it's more restricted. [22:16] It is, yes. [22:17] what would a good interface for the general case of this look like? [22:17] a way to query deleted archives and purge them? [22:17] Possibly. [22:18] It's unobvious, though, due to the various criteria that need to be used to select the archives. [22:18] DELETE just refers to the published state right now? [22:18] Not just purpose, but privacy. [22:18] Hm? [22:18] deleting an archive just removes the published files, the archive is still in the db, disabled? [22:19] I assumed that the purging of the archives would be an admin-controlled task. [22:19] mmm [22:20] really we want the $-distro admins, not the lp-instance admins [22:20] Right. The publications are Deleted, then the archive status is set to DELETING, then the archive is purged from archive disk, then the status is set to DELETED. [22:20] lifeless: Distro doesn't care about PPAs. [22:21] wgrant: true [22:22] so with some surgery something in DELETED could be resurrected [22:22] for ppas why not immediate delete? or time-based ? [22:22] james_w: Exactly. [22:22] Or its latest contents at least downloaded. [22:22] though the publications would all be deleted too, so it would be a convoluted process [22:22] Right. The librarian files would disappear in a week, too. [22:22] right [22:23] so perhaps we have another state PURGED that has a delay of a week too, and removes the DB rows for the publications etc. as well? [22:23] Possibly. But I don't really see the pressing need to delete that stuff. [22:23] wgrant: dead data costs to keep around; its not free. [22:24] wgrant: neither in a dollars sense, nor a performance sense. [22:24] True. [22:24] if the users does not want it [22:24] and we don't have a legal obligation to keep it [22:24] why would we incur that overhead? [22:24] COPY archives is what we have been asked to implement this for, Julian said that I should do it for everything [22:25] I'm fine with putting the infrastructure there and using it for COPY archives, as it will be easy to use it for other types later if desired [22:27] james_w: It doesn't look like any filtering is done on copy archives. [22:27] wgrant: where? [22:27] james_w: lib/lp/soyuz/scripts/publishdistro.py [22:27] no, it's getArchivesForDistribution that is the problem [22:28] So you probably just have to change the archivepurpose guard around the deleteArchive call. [22:28] Oh. [22:28] it only lets you see enabled archives unless you pass in an admin [22:28] Ah. [22:28] Or you say exclude_disabled=False... [22:28] nope [22:28] you have to say that *and* pass in an admin [22:29] Oh. 
[22:29] Indeed. [22:29] How stupid. [22:29] indeed [22:29] I can't see an easy way to unthread it without checking all callers [22:29] hence my question about GenericCollection, I wondered if implementing an ArchiveCollection would make sense, but that's a bit large for me to take on right now [22:30] I think that would be a good idea. But it's probably too big for a small change like this. [22:36] http://paste.ubuntu.com/471263/ is the horrible hack that I am using for now [22:37] james_w: You could call it filter_visible or something like that. [22:37] yeah, that's a better name [22:39] james_w: But the callsites for the method number only three, so it's not too bad if you have to change it in other ways. [22:40] good to know [22:40] not sure how I would change it other than this or going for a collection [22:40] anyway, I think that's enough for this week, I'll tackle the hard bit of this on Monday [22:41] and a nice illustration of the perils of purging deleted archives just now :-) [22:41] Sounds good. I think the Collection pattern needs to be used more widely. [22:41] In that case I think purging (or just renaming) would have fixed it. [22:42] He doesn't want the content -- just to be able to upload to it again. [22:42] thanks for your help [22:43] np === matsubara is now known as matsubara-afk [23:36] * jelmer laughs reading the summary for bug 235939 [23:36] <_mup_> Bug #235939: X-Launchpad-Message-Rationale header for code import emails too human friendly
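The getArchivesForDistribution awkwardness above (disabled archives are only visible when exclude_disabled is switched off *and* an admin user is passed) ends with the suggestion to name the new opt-out filter_visible. A hedged sketch of that shape; the attribute names are made up and the real method's arguments may differ:

    def getArchivesForDistribution(distribution, purposes, user=None,
                                   filter_visible=True):
        """Return archives for a distribution, optionally including hidden ones."""
        archives = [archive for archive in distribution.all_archives
                    if archive.purpose in purposes]
        if filter_visible:
            # Old behaviour: hide disabled archives unless the caller is an admin.
            # Scripts such as the publisher need a way to opt out of this check.
            archives = [archive for archive in archives
                        if archive.enabled or (user is not None and user.is_admin)]
        return archives

Building a proper ArchiveCollection, as mooted above, would make this kind of filtering composable instead of yet another keyword argument.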