=== EdwinGrubbs is now known as Edwin-zzzz
=== lifeless_ is now known as lifeless
=== almaisan-away is now known as al-maisan
[08:11] stub, lifeless, is there any facility in launchpad at the moment for configuration that can be changed by sysadmins?
[08:11] in particular if i want a list of trusted dkim domains
[08:11] or a switch to turn dkim on/off
[08:11] yes
[08:11] the config files
[08:11] ok
[08:11] they have settings for dev, staging, edge, production
[08:39] hm
[08:39] oops
[08:53] anyone around who knows the job system and can provide some advice?
[08:54] I would like to know the preferred way to implement a new job
[08:54] Whether a fairly generic job with JSON, implying serialising some things to strings to look up as keys when the job is run, or a more specific job type that can have References to the other models, saving that serialisation step, would be preferred.
[08:55] hi james_w
[08:55] unfortunately i don't know about that
[08:56] hi poolie
[09:21] Morning
[09:25] BjornT: hi, do you have an opinion on whether a new job type should use as many References as possible, or serialise to JSON and look up the objects again based on keys in the JSON?
[09:28] james_w: hi
[09:28] james_w: I'm not sure there is a general answer to that question
[09:28] hi lifeless
[09:28] james_w: I'd consider things like:
[09:29] - are there security implications on any referenced tables
[09:29] I'm happy to give specifics
[09:29] - are there performance implications ...
[09:29] - what would be simplest for your code
[09:29] - are there race conditions - do you need to freeze parameters so they don't change under you
[09:30] james_w: sure, though I'm signing off - 4am rise tomorrow - shortly
[09:30] james_w: oh and obviously the inverse of the previous one, do you have parameters that *have* to be fresh
[09:32] g'night
[10:01] james_w: hi
[10:01] james_w: what are you looking to do?
[10:01] hi thumper
[10:02] I want to create a job type for the creation of copy archives
[10:02] these have a source archive, target archive, source distroseries, source packagesets, source component, etc.
[10:02] james_w: what we did for BranchJob was to just have the branch that it was linked to, and everything else in a json blob
[10:03] the general idea is:
[10:03] an ArchiveJob to parallel BugJob and BranchJob would be useful elsewhere, but there is far more linked data here than in any of the other jobs
[10:03] if you need to query against the value, it needs to be a column
[10:03] otherwise a blob is fine
[10:03] james_w: the question relates to what you end up querying against
[10:03] and what the job really links to
[10:04] and how it needs to be selected...
[10:04] what invalidates the job...
[10:04] how it will be deleted...
[10:04] just target archive needs to be queried against I think
[10:05] what other jobs will there be for archives?
[10:05] don't know yet
[10:06] there are a bunch of things that could be possible to have there, bulk copying packages, even publication
[10:06] but it is likely that there will be more?
[10:06] I'd suggest making a mirror of the branch job table for now
[10:06] although I hate myself for saying it...
[10:07] the duplication rubs me the wrong way
[10:07] but it is the best we have for now
[10:07] but instead of a branch reference, have the archive
[10:07] there's a hell of a lot of boilerplate to create a new job type
[10:07] I wouldn't say there is that much...
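
For context, a minimal sketch of the shape thumper suggests above: a mirror of the BranchJob table with an archive reference instead of a branch reference, plus a json blob for the remaining parameters. This is Python 2 / Storm, as Launchpad used at the time; the class, column names and metadata helper are illustrative guesses, not Launchpad's actual implementation.

    # Illustrative sketch only, not the real Launchpad ArchiveJob.  The one
    # reference that needs to be queried against (the archive) gets a real
    # column; everything else travels in a json blob, as BranchJob does.
    import simplejson

    from storm.locals import Int, Reference, Storm, Unicode


    class ArchiveJob(Storm):
        """A job operating on an archive, with its parameters in a json blob."""

        __storm_table__ = 'ArchiveJob'

        id = Int(primary=True)

        job_id = Int(name='job', allow_none=False)
        job = Reference(job_id, 'Job.id')

        archive_id = Int(name='archive', allow_none=False)
        archive = Reference(archive_id, 'Archive.id')

        # Stored as the value of a (hypothetical) ArchiveJobType enum.
        job_type = Int(name='job_type', allow_none=False)

        _json_data = Unicode('json_data')

        def __init__(self, archive, job, job_type, metadata):
            super(ArchiveJob, self).__init__()
            self.archive = archive
            self.job = job
            self.job_type = job_type
            self._json_data = unicode(simplejson.dumps(metadata))

        @property
        def metadata(self):
            """The parameters that did not earn their own columns."""
            return simplejson.loads(self._json_data)

The trade-off lifeless outlines earlier applies here: anything you need to query against, invalidate on, or keep referentially intact should be a real column, while parameters that just need to be frozen at creation time are fine in the blob.
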
[10:07] a lot has been moved into base classes
[10:07] but yes,
[10:07] there will be duplication and boilerplate
[10:08] https://dev.launchpad.net/Foundations/JobSystem
[10:08] * thumper isn't working :) just helping
[10:08] I'm in Perth having a holiday
[10:08] well, thanks :-)
[10:09] but as you can see, I have my laptop :)
[10:10] can't go on holiday without it, well, not for more than a few days
[10:10] I need my fix
[11:01] Morning, all.
[11:21] In relation to the just-gone discussion in #launchpad, maybe we should have uses_soyuz and uses_lp_distromirrors attributes on distributions?
[11:39] maxb: We already have IDistribution.full_functionality
[11:39] Currently a celebrity-based hack, but it works.
[11:41] maxb, or even IDistribution.build_system :)
[11:42] ICanHazBuilds
[11:42] But even just hiding series that don't have any PPA-capable DASes would be a good start.
[11:44] wgrant: BTW did you notice that we got the first successful recipe build done in prod yesterday?
[11:45] bigjools: I saw some bugs fly past that suggested it.
[11:45] Is this after chmoding the source dir on cesium?
[11:45] no, fixing the init script for the b-m
[11:45] Ah.
[11:45] which had an interesting "cd" in it
[11:45] so, part of this success is down to you, thanks a lot man :)
[11:46] I'm glad it's finally running.
[11:46] A few months later than I expected, but.....
[11:57] mrevell: The PPA terms of use state that anything OK for Ubuntu is OK.
[11:58] And just about anything redistributable is OK for Ubuntu.
[12:00] wgrant, Thanks for that. I'm not saying you're wrong, but that's not how we've enforced it before. For example, we nearly closed a PPA a year or two back that had the flash installer in it, but the owner reworked it.
[12:00] Right, it seems like it's wrong, but it seems to be what the terms of use say.
[12:01] Hmm.
[12:02] bigjools: The bzr-builder PPA doesn't also have a modern bzr in it?
[12:02] is that a question or a statement?
[12:02] I just tried a Hardy build, and it seems to be using real Hardy bzr.
[12:02] I didn't even know bzr-builder would work with something so old.
[12:03] the production system doesn't use a PPA for the bzr-builder dep
[12:03] Oh, bzr-builder is in -updates?
[12:03] no, there's an internal archive
[12:03] Ah, right.
[12:03] wgrant, Thanks for raising that. As ever, you're the guy who keeps me on the right track :) I'll review the terms and how we enforce them.
[12:03] s/bzr-builder PPA/internal archive/
[12:04] heh - no idea
[12:04] I'll let abentley chase that one up
[12:05] wgrant: bzr-builder doesn't do anything particularly interesting with bzrlib, just equivalents of branch and merge, which have been around for ever.
[12:06] Ah.
[12:16] james_w: http://launchpadlibrarian.net/50794525/buildlog.txt.gz <- do I need to backport python-debian too?
[12:16] wgrant: quite possibly
[12:17] we could perhaps fix it in bzr-builder
[12:31] Is the PPA publisher working? I copied a package about 12 minutes ago -- it's still Pending.
[12:32] Ah, there.
[12:32] Looks like it missed two runs.
=== matsubara-afk is now known as matsubara
=== mrevell is now known as mrevell-lunch
=== salgado-afk is now known as salgado
=== mrevell-lunch is now known as mrevell
[14:04] how do I get the key for an enum item?
[14:04] I want to store something that I can use to get that enum value back
[14:05] .value?
=== Edwin-zzzz is now known as EdwinGrubbs
[15:52] james_w, have you figured it out yet?
[15:52] no
[15:52] yes
[15:53] james_w, cool. :)
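
The one-liner james_w settles on appears just below. As a hedged illustration of the round trip he is after (store an item's value, then look the item back up through the enumeration's items container), here is a toy lazr.enum enumeration standing in for PackagePublishingPocket. That items accepts the raw value as a key is an assumption, though the log below suggests it does.

    # Hedged sketch of the enum round trip, using a made-up stand-in for
    # PackagePublishingPocket.
    from lazr.enum import DBEnumeratedType, DBItem


    class ExamplePocket(DBEnumeratedType):
        """A toy enumeration, for illustration only."""

        RELEASE = DBItem(0, "Release")
        UPDATES = DBItem(20, "Updates")


    # Store just the value, e.g. inside a job's json blob.
    stored = ExamplePocket.RELEASE.value

    # Later, recover the item from the stored value via the items container.
    restored = ExamplePocket.items[stored]
    assert restored == ExamplePocket.RELEASE
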
[15:53] sorry for not trying to answer earlier.
[15:53] PackagePublishingPocket.items[PackagePublishingPocket.RELEASE.value] I think
[15:54] just waiting for my tests to confirm
[15:54] yes
=== salgado is now known as salgado-lunch
[16:23] which layer should I use for a test class that will just be using the db (and possibly librarian), but doesn't care about the web ui and stuff?
[16:23] is LaunchpadZopelessLayer just for script tests?
=== matsubara is now known as matsubara-lunch
[17:07] bigjools: do you have a problem with PackageCloner creating build records for the packages it copies too? Currently it just copies sources and the builds are created elsewhere. I would like to consolidate for ease of testing.
[17:08] james_w: remind me where they're created at the moment?
[17:08] scripts/populate_archive.py
[17:08] urgh
[17:08] yeah, change it
=== beuno is now known as beuno-lunch
=== salgado-lunch is now known as salgado
=== beuno-lunch is now known as beuno
=== matsubara-lunch is now known as matsubara
=== al-maisan is now known as almaisan-away
[18:22] I'm getting an error trying to build gettextpo
[18:22] it's the first time I've tried to build lp in a few months, so I might have missed some obvious / crucial step
[18:22] http://paste.ubuntu.com/454033/
[18:26] help?
[18:30] anyone know why the merge proposal diffs aren't updating in launchpad, and it's not sending the mails for proposal status changes?
[18:32] dobey, no, I don't, but I saw abentley say something vaguely operational about it on another channel
[18:33] dobey, no, at this point it's just under investigation.
[18:33] jml: that's odd, if pydoc was failing it would normally say "Error" before "Leaving directory" I think
[18:34] abentley: ok, thanks
[18:34] that's probably the last step either way, so that doesn't tell you whether it failed
[18:35] if you are using parallel make then the actual error can be further back
[18:35] james_w, how would I know if I'm using parallel make?
[18:36] I don't know, apart from it doing more than one thing at once
[18:36] if there's no error before it starts printing pygettextpo stuff then you probably aren't
[18:36] I don't think it's default for LP, so unless you ran "make -j"...
[18:37] hm.
[18:37] oh, I missed a tb
[18:37] http://paste.ubuntu.com/454037/
[18:44] oh huh
[18:44] I think I have a local copy of Python
[18:51] If I want to add some tests for bugtask, should I add them to test_bugtask.py, test_bugtask_0.py or test_bugtask_1.py?
[18:54] heh
[18:55] probably make test_bugtask_jml.py?
[18:55] james_w, maybe I should use the database id for my row in Launchpad to avoid confusion
[18:56] deryck, If I want to add some tests for bugtask, should I add them to test_bugtask.py, test_bugtask_0.py or test_bugtask_1.py?
[19:01] jml, I don't know. Let me look more after the call.
[19:01] deryck, thanks.
[19:11] gah, IPackagesetSet has a get method that works differently from all the other get methods I have come across
[19:11] yeah. we need some consistency.
[19:18] are there any scripts in LP that would be run by an admin and just create a job for later processing? Would we want such a script to create the job but synchronously process it? Or just do the work and leave the job for webservice requests?
[19:24] james_w, I'm not sure what you mean.
[19:24] populate_archive.py currently creates a copy archive and sits there while everything is copied
[19:25] I'm working on making copy archive creation a job such that we can do it over the API and the like if we wish
[19:25] I'm wondering if I should:
[19:25] a) have the populate_archive.py script create the job and exit, leaving it up to a cronscript to do the work and the person running the script to poll the web ui to find when it is done
[19:26] b) have populate_archive.py create the job, but pre-claimed, and then kick off the running of the job synchronously such that it can report the results to the admin running the script
[19:27] c) have populate_archive.py ignore the job system and just run the code in pretty much the way it is now, and just have the job system as an alternative entry point
[19:27] d) other
[19:27] e) none of the above
[19:27] 3) Uzbekistan in the 1920s
[19:33] james_w, sorry for the delay, I'm on a call that got interesting :)
[19:34] np
[19:34] it's an idle question as I hack the code to actually make any of this possible
[19:35] james_w, I don't know enough about what populate_archive.py does, I think. a) seems the most consistent
[19:35] and we should have a widget on the ui for tracking in-progress jobs
[19:36] so populate_archive.py creates something like launchpad.net/~jml/+copy-archive/some-new-archive
[19:36] then it spends a few minutes copying all the sources from Ubuntu into it and creating build records so that you can testbuild a snapshot of Ubuntu
[19:37] this is currently something that is usually done by an admin for doko
[19:37] so I guess it's a question for the losas? they are the only ones that will use the populate_archive.py interface.
[19:38] james_w, yeah, maybe.
[19:38] james_w, now that you explain it though, I'm more inclined toward a)
[19:38] Perhaps it will be changed to do an API call instead, in which case we can't really do synchronous, as we don't even have a way to poll for completion of this job
[19:39] james_w, although it's something of a regression in the UI of the script, we should just fix the webservice api so that the script can behave the way it did.
[19:39] james_w, after all, this isn't the only case where someone wants to run scripts that kick off long-running jobs.
[19:39] yeah
[19:40] deryck, thanks for looking at the test_bugtask thing btw
[19:40] jml: do you have any notes from your earlier call with Julian?
[19:41] jml, yes, just back from coffee. Looking now....
[19:43] dobey, the script got stuck and we've restarted it. Please let us know if you're still experiencing problems.
[19:43] abentley: ok thanks
[19:43] * dobey waits for a flood of e-mail
[19:51] james_w, we do already have scripts that create jobs and let a cronjob handle running them. e.g. the branch scanner. Daily builds is similar, though it hands off to the build farm, not a cron script.
[19:52] jml, so I think just use test_bugtask.
[19:52] jml, the others seem like someone was thinking something that is not now clear :-)
[19:52] deryck, that had occurred to me :)
[19:52] deryck, thanks.
[19:52] abentley: they are scripts run by admins?
[19:53] james_w, they are normally run from cron scripts themselves.
[19:53] Or as cron jobs, I should say.
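
A minimal sketch of option (a) from the list above: the script only records the job and exits, leaving a cron-driven job runner to do the copying. IArchiveCopyJobSource and its create() method are invented names for illustration, not real Launchpad interfaces.

    # Hypothetical sketch of option (a).  The interface and utility are made
    # up; only the shape (create the job, then exit) is the point.
    from zope.component import getUtility
    from zope.interface import Interface


    class IArchiveCopyJobSource(Interface):
        """Hypothetical source of copy-archive jobs."""

        def create(source_archive, target_archive, distroseries):
            """Record a new copy-archive job without running it."""


    def request_copy_archive(source_archive, target_archive, distroseries):
        """What populate_archive.py would do under option (a): queue and exit."""
        job_source = getUtility(IArchiveCopyJobSource)
        return job_source.create(
            source_archive=source_archive,
            target_archive=target_archive,
            distroseries=distroseries)

The cron side would then claim and run pending jobs, much like the branch scanner pattern abentley mentions above.
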
[19:53] abentley: right, this will only ever be run by hand on request
[19:53] * deryck will make a note to confer and refactor test_bugtask* as appropriate
[20:06] 5 seconds to start the functional layer
[21:12] gary_poster, do you know by what mechanism the API documentation lists the possible choices for "hardware_bus" on https://edge.launchpad.net/+apidoc/devel.html#bug_target for searchTasks?
[21:14] jml: no, but my guess would be by introspecting a vocabulary specified in an interface. I could probably find out quickly. Should I?
[21:14] gary_poster, yes please. I'm trying to figure out how it knows to introspect as well.
[21:15] k
[21:25] jml: it almost certainly introspects the Choice field on line 95 of lp/bugs/interfaces/bugtarget.py. This in turn points to a vocabulary, which is apparently, per the comment, set to HWBus in ./canonical/launchpad/interfaces/_schema_circular_imports.py:165
=== ubuntujenkins_ is now known as ubuntujenkins
[21:27] lazr.restful.marshallers.SimpleVocabularyLookupFieldMarshaller knows how to iterate items on an enum to get the valid choices
[21:28] does that get used to generate the wadl?
[21:28] yes
[21:31] ok. thank you gary_poster & james_w
[21:31] np
[21:42] actually, it looks like it's used to make the json, not the wadl
[21:49] jml: http://paste.ubuntu.com/454134/
[21:49] james_w, thanks.
[21:58] jml: in return, any hints you could give me on how I would get a SourcePackagePublishingHistory given its id would be greatly appreciated
[21:58] there's no ISourcePackagePublishingHistorySet I can see
[21:58] hmm, there's an IPublishingSet
[21:59] james_w, you can cheat & use storm. getUtility(ISomethingOrOther).find(SourcePackagePublishingHistory, SPPH.id == your_id).
[22:00] jml: thanks, I'll resort to that if this doesn't work
=== matsubara is now known as matsubara-afk
[22:06] getUtility(IPublishingSet).getByIdAndArchive() apparently. Though for some reason that returns you a ResultSet and you have to call .one() on it yourself.
[22:06] :(
[22:08] james_w, there's only one callsite for that, and no unit tests.
[22:08] (or doctests)
[22:09] jml: are you suggesting I fix it up?
=== salgado is now known as salgado-afk
[22:09] james_w, I am. But I can completely understand if you'd rather not.
[22:10] I'm already in the middle of a monster branch
[22:10] so adding one more thing to it isn't going to hurt that much
[22:11] hmm, my ec2 instances seem to be dying without mailing me
[22:11] james_w, good news and bad, even while I momentarily gaze away
[22:12] gary_poster, james_w, fwiw, I was working on https://bugs.edge.launchpad.net/lazr.restful/+bug/590561. Now I've stopped.
[22:12] <_mup_> Bug #590561: searchTasks API documentation doesn't document acceptable status arguments
[22:12] on call
[22:12] np. it's not urgent.
[22:12] more fun: http://paste.ubuntu.com/454140/
[22:12] oh, that one's my mistake
[22:13] james_w, nyah-ah-ah
[22:13] james_w, if they are dying without mailing, I'd recommend running them attached.
[22:14] james_w, there's no other way of guaranteeing you'll get output.
[22:15] jml: just kicking that off now
[22:16] james_w, cool.
[22:17] james_w, I'm off now to find something to eat.
[22:18] jml: enjoy
[22:18] james_w, and thence to not work.
[22:18] james_w, enjoy your evening
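
For reference, a hedged sketch of the two SourcePackagePublishingHistory lookups discussed around 21:58 to 22:06: jml's "cheat and use Storm" route (ISomethingOrOther was left vague in the log; the IStore adapter is only one plausible reading of it) and the IPublishingSet.getByIdAndArchive() route james_w found. The import paths and the getByIdAndArchive() argument order are assumptions about the tree of that era, not verified.

    # Hedged sketch; import paths and the getByIdAndArchive signature are
    # assumptions, not verified against the Launchpad tree of the time.
    from zope.component import getUtility

    from canonical.launchpad.interfaces.lpstorm import IStore
    from lp.soyuz.interfaces.publishing import IPublishingSet
    from lp.soyuz.model.publishing import SourcePackagePublishingHistory


    def get_spph_by_id(spph_id):
        """The 'cheat and use Storm' route."""
        return IStore(SourcePackagePublishingHistory).find(
            SourcePackagePublishingHistory,
            SourcePackagePublishingHistory.id == spph_id).one()


    def get_spph_by_id_and_archive(spph_id, archive):
        """The IPublishingSet route; per the log it hands back a ResultSet,
        so the caller has to call .one() itself."""
        result = getUtility(IPublishingSet).getByIdAndArchive(spph_id, archive)
        return result.one()
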