[01:09] oh the irony - bug 784854
[01:09] <_mup_> Bug #784854: Questions in Launchpad Answers should be checked for double posting. < https://launchpad.net/bugs/784854 >
[01:09] [yes, I know it's not ironic. Pedants!]
[01:19] wgrant: you remember we talked about running a script to update all lp branches to be stacked on +brand-id/$id
[01:19] wgrant: did that happen?
[01:23] mwhudson: A couple of batches were started.
[01:24] spm and thumper hopefully know more.
[01:24] One run died due to a 502 during a nodowntime, I don't know what happened after that.
[01:24] oh bugger. I completely forgot to kick that again
[01:49] Hmm.
[01:49] I think the devel failure is spurious.
[01:49] Passes locally, doesn't really make sense...
[01:49] But I've never seen it before.
[01:49] spm: so what is the status ?
[01:49] spm: is there an rt i can subscribe to or something?
[01:50] you know. I don't think there is. I shall create one - makes it less likely to be forgotten.
[01:50] woo process
[01:56] How does one subscribe to an RT, anyway? We don't have separate accounts, so it doesn't seem to be possible.
[01:59] i think you just make a comment on it via email?
[01:59] i'm not really sure though
[02:00] definitely "rt user account" != "email address to send activity on a ticket to" as concepts though
[02:01] LP's RT does SSO now... maybe the IS one should too.
[02:12] lp's is the u1 one
[02:12] is need to upgrade
[02:12] Ah.
[02:12] aiui
[02:17] storeblob should move to the librarian directly I think
[02:18] Possibly. But that requires that the librarian know about more than just LFA/LFC.
[02:18] it already does
[02:18] And TLT.
[02:18] however I am thinking about what would be needed for a standalone blob store
[02:20] it seems to me that apport should be doing: a) upload a tonne of data to a gcable non-accessible area, b) handing off the result of that to something that claims a reference to the data from a)
[02:20] Right.
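[Editor's note: the two-phase upload lifeless proposes above - (a) park the data in a GC-able, non-accessible area, (b) have a second call claim a reference that makes it permanent - can be sketched as below. All names here (`BlobStore`, `upload`, `claim`) are hypothetical illustrations, not Launchpad's real storeblob API.]

```python
import time
import uuid

class BlobStore:
    """Sketch of a two-phase blob upload: data is garbage-collectable
    until a second call claims a reference to it."""

    def __init__(self):
        self._blobs = {}       # token -> (data, created_at)
        self._claimed = set()  # tokens with at least one reference

    def upload(self, data):
        # Step (a): store the data; it is GC-able until claimed.
        token = uuid.uuid4().hex
        self._blobs[token] = (data, time.time())
        return token

    def claim(self, token):
        # Step (b): something (e.g. a bug report) claims a reference,
        # taking the blob out of the GC-able pool.
        if token not in self._blobs:
            raise KeyError("unknown or already-collected blob")
        self._claimed.add(token)
```

A GC pass would then delete only unclaimed blobs past some age threshold, which is where the conversation goes next.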
[02:21] now, we could do this today using the one temporary blob table
[02:21] with the insert from the librarian and then an update to set metadata from the appservers
[02:22] and refactor later to one thing owned by the librarian and another owned by the appservers
[02:23] if we change the gc implementation to be something like gcable = [blob for blob in store if none(map(methodcaller('references'), librarianclients))]
[02:23] for obj in gcable: if obj.age > 30 days: delete obj
[02:23] fin
[02:24] that wouldn't let us do fast in-librarian usage reports
[02:24] Yeah. I'm not entirely sure how the reference counting will work with an SOA, but it seems doable.
[02:24] but it would let us do them from within each service
[02:24] alternatively we have a 'service' concept in the librarian
[02:24] and we attach blobs to services
[02:25] and we can create a service for (e.g.) each distro
[02:25] Hmm, I guess, yes.
[02:25] for apport
[02:25] etc
[02:25] and ask for reports at the granularity of a service
[02:25] like 'how much data does this service have' -> ppa quota report
[02:25] We probably need arbitrary "tagging" for blobs, to get the groupings.
[02:25] Yeah.
[02:25] Like that.
[02:26] gcing of services becomes an issue then
[02:26] but it would be a much slower churn rate
[02:26] and other ubuntu/Canonical projects could get api access to make and consume services in the same pool
[02:27] interestingly if you look at s3
[02:27] aggregate reports are not exposed
[02:27] you can see what you used
[02:27] but not get 'whats the size of my bucket delilah'
[02:27] which suggests some possibilities about their internal model
[02:27] Hmm.
[02:27] I didn't realise that.
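[Editor's note: the GC pseudocode above (collect blobs no client service references, delete only those older than 30 days) can be made runnable roughly as follows. `FakeClient` and the `store` shape (blob id -> creation timestamp) are illustrative assumptions, not the librarian's real interfaces.]

```python
import time

THIRTY_DAYS = 30 * 24 * 3600

class FakeClient:
    """Stand-in for a librarian client service; a real client would
    answer references() over some API call."""

    def __init__(self, held):
        self.held = held

    def references(self, blob_id):
        return blob_id in self.held

def sweep(store, clients, now=None):
    """GC pass: a blob is collectable when no client service holds a
    reference to it, and is deleted only once it is older than 30 days."""
    now = time.time() if now is None else now
    gcable = [blob_id for blob_id in list(store)
              if not any(c.references(blob_id) for c in clients)]
    for blob_id in gcable:
        if now - store[blob_id] > THIRTY_DAYS:
            del store[blob_id]
```

As noted in the discussion, this delegates reference tracking to each client service, which trades away fast in-librarian usage reports.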
[02:27] well, IMBW
[02:28] I couldn't find anything in the api docs though
[02:31] http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?BucketBilling.html
[02:32] Fees for object storage and network data transfer are always billed to the owner of the bucket that contains the object unless the bucket was created as a Requester Pays bucket.
[02:32] The reporting tools available at the Amazon Web Services developer portal organize your Amazon S3 usage reports by bucket.
[02:34] so while I can see us doing blob-storage + blob-accounting (including gc) layered on top
[02:34] I suspect amazon have blob-storage-and-accounting
[02:34] with possible microservices that scale each function (storage,accounting) separately, but that are allowed to know about each other
[02:34] e.g. callbacks on storage adding to the accounting service
[02:34] ditto downloads/uploads
[02:36] I think we should aim to be able to replace germanium with the librarian when we split it out
[02:36] There's a spec for that.™
[02:36] DisklessArchives on the old wiki.
[02:37] Would make everything much simpler.
[02:37] of course there is
[02:37] and smaller.
[02:37] well, a tradeoff - we'd make the impact of downtime even greater, so want to start adding in redundancy
[02:38] Right, but a split-out librarian should be fairly easily redundant.
[02:40] something like that :>
[02:41] * wgrant constructs a bonfire to dispose of archiveuploader.
[02:41] Yippie, build fixed!
[02:41] Project devel build #731: FIXED in 5 hr 12 min: https://lpci.wedontsleep.org/job/devel/731/
[03:08] mtaylor: yo
[03:25] Project db-devel build #561: FAILURE in 5 hr 36 min: https://lpci.wedontsleep.org/job/db-devel/561/
[03:27] hmm
[03:27] it seems like they fail on design
[03:27] so no
[03:27] Oh?
[03:27] * lifeless goes to the dark side and makes negative comments
[03:27] 'The rsync daemon requires no authentication, so it should be run on a local, private network'
[03:28] Have you seen the librarian lately?
[03:28] crunchy shell security is a pretty big WTF for a multi-tenant system
[03:28] Yeah.
[03:28] wgrant: so I'm assessing this based on:
[03:28] The db-devel failure is real; I am fixing.
[03:28] Project windmill-devel build #103: STILL FAILING in 47 min: https://lpci.wedontsleep.org/job/windmill-devel/103/
[03:28] - testing story [I see no test fakes for us to use in their docs]
[03:29] - sanity [e.g. wtf count]
[03:29] - friction vs what we want [like diskless archives]
[03:29] their building on rsync is itself a wtf for me
[03:31] Hmmmmm
[03:31] This test is duplicated.
[03:31] Odd.
[03:31] double landing? db-devel then devel + merged silent-wrong?
[03:31] Doesn't look like it.
[03:32] They are adjacent, but it's possible.
[03:32] that matches the symptoms
[03:32] log will know
[03:33] Oh.
[03:33] Not duplicate.
[03:33] test_higher_radio_mentions_parent vs test_higher_radio_mentions_parents
[03:33] also 'The listings are stored as sqlite database files'
[03:34] doth NOT fill me with confidence
[03:34] ahaha
[03:34] I love sqlite
[03:34] but if you want K:V its more than you need
[03:34] and if you want SQL its less than you need for this size problem
[03:36] these are relatively small things
[03:36] But I'm finding it hard to love it
[03:38] aahhahahahhaha
[03:38] 'Currently it is recommended to use 3 (as this is the only value that has been tested). '
[03:42] lifeless: what are you talking about?
[03:42] mwhudson: I'm doing some reading on alternative blob stores to the librarian
[03:42] ah ok
[03:42] mwhudson: the librarian is starting to creak
[03:42] ceph!
[03:42] * mwhudson hides
[03:42] thats a fascinating option
[03:43] and one I think I've been talking about since ~ 2005
[03:43] but
[03:43] we need a layer on top of that
[03:43] for gc and such?
[03:44] gc, accounting, url generation
[03:44] not to mention actually adding and removing data
[03:44] huh, has "S3-compatible storage" now
[03:44] * mwhudson gets back to hiding
[03:44] wheeee
[03:44] http://ceph.newdream.net/wiki/S3-compatible_Gateway
[03:45] spm: did you make an rt about the branch-$id script thingy?
[03:45] mwhudson: not yet, no.
[03:46] http://ceph.newdream.net/wiki/RADOS_Gateway
[03:47] mwhudson: some options are: - ocfs2 + multiple librarian front-ends
[03:47] - openstack swift
[03:48] - with accounting hooks etc implemented
[03:48] - ceph + a metadata service
[03:49] - though http://ceph.newdream.net/wiki/RADOS_Gateway notes that rados is slow
[03:50] what were you looking at with the rsync/sqlite/etc. stuff?
[03:50] swift
[03:52] - we could do something like haystack
[03:52] - given our files are all immutable, regular fs overhead is uninteresting
[04:08] mwhudson: actually, istr there was some problem with the run we did last week. will need to dig out details tho.
[04:09] spm: Didn't it just die with a 502 during a nodowntime?
[04:09] don't recall exactly. it may have been as simple as that
[04:11] spm: this is why there needs to be an rt :)
[04:11] yes.
[04:12] wgrant: the cause per-se didn't matter; as it got complex from there as the scripts don't handle an outage gracefully. basically abort. which makes doing 300+k branches a little more time consuming.
[04:12] Yeah :(
[04:17] i guess it has to look at every branch for each run?
[04:17] yes
[04:45] * mwhudson goes through old tabs
[04:45] lifeless: didn't you say that you'd reword the "team participation / directory service" section of https://dev.launchpad.net/ArchitectureGuide/ServicesAnalysis a bit?
[04:45] no worries if you just decided not to bother
[05:02] lifeless: https://code.launchpad.net/~wgrant/launchpad/bug-784948/+merge/61503
[05:03] wgrant: You can also dump the import of removeSecurityProxy
[05:03] Silence.
[05:03] :-(
[05:03] Only trying to help
[05:03] (but done)
[05:03] I had noticed that, but then archiveuploader tried to stab me :(
[05:03] mwhudson: in my queue
[05:04] Nasty archiveuploader.
[05:04] wgrant: why doesn't notification.send()
[05:04] log
[05:04] if logging is important ?
[05:05] lifeless: For one thing the logger would have to be passed down another level.
[05:05] I guess it might be useful to log the addresses, though.
[05:05] that seems pretty mechanical
[05:06] anyhow, I've commented along these lines and approved
[05:06] I was going for minimal solution that will unblock deployments and let me get back to more important things.
[05:06] Thanks.
[05:07] as a variation, log the email if its available
[05:08] (of course these logs go nowhere; the script is run with -q and no output redirection)
[05:08] bbbbwah
[05:08] Yes.
[05:08] That should also be fixed
[05:08] Yes.
[05:10] Hmm, where did this heat come form?
[05:10] *from
[05:10] It's 21 degrees!
[05:26] wgrant how goes BFJ ?
[05:28] lifeless: It needs table flattening and triggers added at the next DB deploy, so BPB's data can be populated. That's still three weeks away.
[05:28] There are still ~2 methods that need fixing up.
[05:28] But that can only be done after the migration.
[05:28] Everything that can be done beforehand is done.
[05:29] cool
[05:29] and then a second db patch to tidy up?
[05:29] I was counting on being on maintenance for a couple of weeks longer, and the deploy deploy not being a week late.
[05:29] Right.
[05:29] 1) DB patch with triggers and garbo.
[05:30] spm: did you make an rt about the branch-$id script thingy?
[05:31] 2) Normal branches to migrate to flattened tables, with lots of queries being mechanically changed and two being rewritten.
[05:31] 3) DB patch to drop old tables.
[05:31] 2's branches are mostly prepared, except for the two general query rewrites.
[05:31] mwhudson: I have opened a blank RT. what more do you want of me?!?!!?
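[Editor's note: the objection above to notification.send() logging was that a logger would have to be passed down another call level. A common alternative, sketched here, is a module-level logger; the logger name and function signature are hypothetical, not Launchpad's real code. And as lifeless notes, none of this helps if the script is run with -q and no output redirection.]

```python
import logging

# Hypothetical logger name; real code would use its own module's name.
logger = logging.getLogger("lp.services.mail")

def send(notification, addresses):
    # A module-level logger means nothing extra has to be threaded
    # down through the call stack just to record the recipients.
    logger.info("Sending notification to %s", ", ".join(addresses))
    # ... actual delivery would happen here ...
```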
[05:31] And lots of that has already landed.
[05:31] spm: hey, if i can be cc:ed on updates to it, i'll shut up
[05:31] heh
[05:31] I want to believe
[05:32] well, at least a bit
[05:33] * StevenK gives up on OverridePolicy and merges in devel
[05:35] StevenK: You're encountering difficulties?
[05:35] lifeless: https://code.launchpad.net/~wgrant/launchpad/bug-784948/+merge/61503, now with added logging.
[05:39] wgrant: Yes.
[05:39] And I want the storm distinct-on thing
[05:39] Ah.
[05:41] I've stopped it caring about pockets, but it still can't locate this upload for the new test case
[05:42] :(
[05:43] Which means it gets stuffed into universe, since FromExisting didn't return anything for that SPN
[05:45] DAMN IT
[05:46] I don't specify distroseries
[05:47] Heh.
[05:48] wgrant: I'll keep my +1 for that branch with the logging changes
[05:48] StevenK: Thanks.
[05:50] grrrr
[05:50] Aha.
[05:50] rabbit fail
[05:50] archiveuploader is defeated.
[05:50] lifeless: Again?
[05:50] the layer
[05:51] tests failed when run in the whole test suite
[05:51] I suspect isolation fail
[05:51] :(
[05:51] It's still failing regularly in devel/db-devel.
[05:51] that sucks
[05:51] canonical.testing.ftests.test_layers.MemcachedTestCase.testRabbitWorking
[05:51] canonical.testing.ftests.test_layers.DatabaseTestCase.testRabbitWorking
[05:51] canonical.testing.ftests.test_layers.LibrarianTestCase.testRabbitWorking
[05:51] canonical.testing.ftests.test_layers.ZopelessTestCase.testRabbitWorking
[05:51] erence: None != 'localhost:47565'
[05:52] Lovely.
[05:59] other layers use it
[05:59] and None is the default in the schema
[06:00] mwhudson: 45879 :-)
[06:00] spm: w00t
[06:01] hey wow
[06:01] 702 OOPS-1964O247 Sprint:+temp-meeting-export
[06:01] our top count is under 1K now
[06:01] thats pretty awesome
[06:01] let me guess, that will go away next week when it's not uds?
[06:01] I think the temp is a lie
[06:01] mwhudson: thats overnight
[06:01] mwhudson: so no, its already post uds
[06:02] ah
[06:02] 23 / 20 Sprint:+temp-meeting-export
[06:02] spm: do you have any idea how many branches have been processed?
[06:02] It only started just before UDS, though.
[06:02] mwhudson: we got about 23K done
[06:02] spm: heh ok
=== almaisan-away is now known as al-maisan
[06:08] lifeless: do you know the mechanism by which a particular url advertises to the browser that it can provide an rss feed and hence enable the rss button?
[06:09] which rss button
[06:09] the one in the browser toolbar
[06:09] orange square
[06:10] (which is gone from all modern browsers, FWIW)
[06:10] though I think most browsers have stopped doing
[06:10] i've got a bug that says rss feeds need to be disabled for private bugs
[06:10] thats not what it means :)
[06:11] also the bug says it blows up for private bugs which is arguably different
[06:11] but AIUI we haven't implemented authenticated feeds yet for some bizarre reason
[06:11] yes but the suggested solution is just to disable rss feeds for private bugs
[06:11] so disabling it is probably the most reasonable expedient solution
[06:11] 2 things: 1) they shouldn't be there. 2) there should be no login link on feeds.
[06:11] feeds were designed to only be served out of squid
[06:11] mwhudson: a mistake
[06:11] :)
[06:11] is the specific bizarre reason
[06:12] mwhudson: still a mistake, vary will let you handle authentication through squid just fine
[06:12] yeah well, blame your boss's boss
[06:12] lifeless: not defending it!
[06:12] mwhudson: elliot?
[06:12] iirc
[06:12] heh
[06:12] anyhoo
[06:12] We can and will solve it
[06:12] I want to get reliable atom/rss available in the next 6-12 months, for all objects we index.
[06:13] so we can pshb it up
[06:14] omg, someone used specificationfeedback on me!
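[Editor's note: the mechanism mwhudson asks about is standard feed autodiscovery: a `<link rel="alternate" type="application/atom+xml" href="...">` tag in the page head is what lights up the browser's RSS button. A minimal stdlib sketch of how such tags are found (this is the general convention, not Launchpad-specific code):]

```python
from html.parser import HTMLParser

FEED_TYPES = {"application/atom+xml", "application/rss+xml"}

class FeedLinkFinder(HTMLParser):
    """Collect feed URLs advertised via <link rel="alternate"> tags,
    the same tags a browser's RSS button keys off."""

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in FEED_TYPES):
            self.feeds.append(a.get("href"))
```

So "disabling the feed for private bugs" would amount to omitting this tag from the page (and refusing to serve the feed URL itself).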
[06:14] KILL THEM
[06:14] wgrant: i will opt for education
[06:14] There are international laws against that sort of thing.
[06:14] lifeless: so for now, if i look for a way to disable rss feeds for private bugs as per the suggestion in the bug report that will be ok?
[06:15] Come on, soyuz-upload.txt...
[06:15] Work please.
[06:16] And don't take 140s.
=== al-maisan is now known as almaisan-away
=== almaisan-away is now known as al-maisan
=== al-maisan is now known as almaisan-away
[07:37] Project windmill-db-devel build #294: STILL FAILING in 1 hr 9 min: https://lpci.wedontsleep.org/job/windmill-db-devel/294/
[07:53] wallyworld: thats fine, of course. But please stay connected ;P
[08:38] good morning
[08:47] Project windmill-devel build #104: STILL FAILING in 1 hr 9 min: https://lpci.wedontsleep.org/job/windmill-devel/104/
[08:50] wgrant: thanks for fixing the test in test_distroseries.py earlier today.
[09:01] Do we have a good pattern for adding annotations to objects that views feed to templates, without adding them to the objects and without making them persistent? I can think of: pass a tuple of (annotation, object); pass a wrapper that delegates to the original object; or pass just a dict of relevant data to the view.
[09:04] lifeless, any thoughts?
=== almaisan-away is now known as al-maisan
[09:13] Hi
[09:13] hi
[09:15] jtv: I think lazr.delegates was intended for this sort of thing, but its terribly heavyweight
[09:16] jtv: another option you haven't enumerated is a lookup on the view - view/annotation/object (where annotation is callable)
[09:17] bigjools: hi
[09:18] bigjools: hi
[09:19] howdy
[09:19] howdy
[09:19] bigjools: https://code.launchpad.net/~lifeless/launchpad/rabbit/+merge/61057 seems to have found a test isolation issue of some sort [it has the test Layer for you]
[09:19] lifeless: that's a good one, thanks... I just managed to squeeze some work out of my lazy brain and got to "dict"
[09:20] lifeless: and delegates is definitely too heavyweight; I really didn't want to add ZCML for this!
[09:26] lifeless: what's the isolation issue?
[09:27] bigjools: looks shallow - something seems to be zapping the config back to the schema default (of None) when the full test suite is run (vs when just the new tests are run)
=== allenap changed the topic of #launchpad-dev to: https://dev.launchpad.net/ | On call reviewer: allenap | https://code.launchpad.net/launchpad-project/+activereviews | Critical bugs:210 - 0:[######=_]:256
[09:48] allenap: something I expect to have to repeat rather a lot is "here's a DSD, what would be the package specification for a PlainPackageCopyJob that would resolve it?" I'm about to write a function specify_dsd_package(dsd): return (dsd.source_package_name.name, dsd.parent_source_version) but did you perhaps have something for that already?
[09:49] bigjools: anyhow I mention that test failure
[09:49] bigjools: in the (vague) hope that allenap or yourself might want to land the branch
[09:49] heh
[09:49] you mean you want to offload it? :)
[09:50] jtv: No, I don't have anything for that. It might be nice to incorporate that into an alternative PlainPackageCopyJob.create() classmethod.
[09:50] more that I don't want you guys blocked on me doing it
[09:50] I doubt we're that close yet
[09:50] I have a few balls to juggle right now, and while its important, its not top of the list
[09:50] allenap: not worth the bother if it's a simple helper call. BTW I also filed a bug yesterday for two flaws in getActiveJobs: it doesn't sort and it doesn't filter for job status.
[09:51] jtv: I saw; sorry for creating the bugs in the first place :)
[09:51] I'm sure it felt like a worthwhile effort at the time. :)
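[Editor's note: of the annotation patterns jtv enumerates above (tuple, delegating wrapper, plain dict), the delegating wrapper can be done without lazr.delegates or ZCML. A sketch, with all names hypothetical: annotations live only on the wrapper, everything else falls through to the wrapped object, and nothing is persisted.]

```python
class Annotated:
    """Wrap a model object with extra view-only annotations,
    delegating all other attribute access to the original."""

    def __init__(self, obj, **annotations):
        self._obj = obj
        self.__dict__.update(annotations)

    def __getattr__(self, name):
        # __getattr__ is only called when normal lookup fails, so
        # annotations win and everything else reaches the original.
        return getattr(self._obj, name)
```

The view would wrap each object before handing it to the template; lifeless's alternative (a view/annotation/object lookup with a callable) keeps the objects untouched entirely.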
[09:52] (Glad I didn't tackle this earlier since I just disposed my dead-end branch)
[09:54] WTF? Why doesn't it show up as my last-reported bug?
[09:55] lifeless: I'm reviewer today, so I might have a go with that layer branch.
[09:55] allenap: I'll forward you the ec2 output
[09:55] Thanks.
[09:55] Project db-devel build #562: STILL FAILING in 5 hr 41 min: https://lpci.wedontsleep.org/job/db-devel/562/
[09:57] allenap, bigjools: bug 784515
[09:57] <_mup_> Bug #784515: PlainPackageCopyJob.getActiveJobs should check job status < https://launchpad.net/bugs/784515 >
[10:02] jtv: yeah that's not good is it :)
[10:02] possibly not
[10:17] allenap: Are you up for reviewing an oversized branch?
[10:17] allenap: It is effectively moving the code and massaging it to work
[10:18] StevenK: Yeah, okay.
[10:18] allenap: https://code.launchpad.net/~stevenk/launchpad/announcements-copies/+merge/61516
[10:18] allenap: The two methods at the top of notification that aren't exported are the end-goal
[10:19] StevenK: Okay, I assume I'll understand what that means when I read the code :)
[10:19] Hopefully it does
[10:23] Does anybody see a potential hurt in piping LP.cache['context'] through obfuscate-email, like so? http://paste.ubuntu.com/609988/
[10:24] obfuscate-email is only applied on anonymous requests
[10:24] wgrant: ^ ?
[10:26] henninge: Ugh.
[10:27] henninge: I have no idea how to solve this.
[10:27] wgrant: what do you mean?
[10:27] it is solved
[10:27] I mean, that works
[10:28] I am just wondering if I missed any effect because this is a rather simple solution
[10:28] Yeah, until it leaks somewhere we don't want and all the email addresses in the DB get erased.
[10:29] Ah
[10:30] wgrant: you said yesterday that the cache is not really needed on anonymous pages.
[10:30] It seems perilous to be sending out bad data in a machine-readable format, when it looks fine in most circumstances.
[10:30] Then someone writes a tool that uses it.
[10:30] And goodbye email addresses.
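[Editor's note: the obfuscate-email behaviour under discussion can be sketched as a regex substitution applied only on anonymous requests. The function name and placeholder text here are illustrative assumptions, not Launchpad's actual implementation; wgrant's worry is precisely that if a client writes such a placeholder back through the API, the real stored address is destroyed.]

```python
import re

# Deliberately loose pattern; real-world email matching is hairier.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

def obfuscate_emails(text, anonymous):
    """On anonymous requests, replace addresses with a placeholder
    before the page (or JSON cache) is served."""
    if not anonymous:
        return text
    return EMAIL_RE.sub("<email address hidden>", text)
```

Applying this to machine-readable output like LP.cache is where the round-trip danger arises: the placeholder looks like data to any tool that reads it back.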
[10:30] yes, I see the danger
[10:31] there are two general solutions that I see
[10:31] 1 - Not send out the data
[10:31] 2 - check all data for obfuscated email addresses
[10:32] all *received* data that is
[10:37] wgrant: yes, thinking about it the only general solution would be to reject all data on the webservice that contains