[00:00] <rick_h_> wgrant: heh, yubikey
[00:00] <rick_h_> once in a while bump the dippy thing
[00:01] <StevenK> wgrant: Twitch.
[00:01] <StevenK> % grep -c 'pdb.set_trace' lib/lp/bugs/mail/bugnotificationrecipients.py
[00:01] <StevenK> 1
[00:02] <wgrant> StevenK: Bug #1024148
[00:02] <_mup_> Bug #1024148: Branch.transitionToInformationType breaks when making a subscriberless branch private <disclosure> <regression> <sharing> <trivial> <Launchpad itself:Triaged> < https://launchpad.net/bugs/1024148 >
[00:03] <wgrant> StevenK: Oh wow, that's nice.
[00:27] <rick_h_> StevenK: got a sec? I think I blew up buildbot, but my ec2 test run passed, so I wonder if you can help me figure out what the difference is and what I can do to fix it
[00:29] <rick_h_> my understanding is that this runs a make check, so if I make clean_buildout and make check locally and it works, I should have been ok?
[00:39] <StevenK> rick_h_: make check will take ~ 5 hours
[00:39] <StevenK> rick_h_: Right, the buildbot slaves don't have convoy
[00:39] <rick_h_> StevenK: right but what I mean is that the buildbot fail is about convoy, but it's in the download cache
[00:40] <rick_h_> ah bah that's right...I forgot it's packaged
[00:40] <StevenK> Assuming convoy is available everywhere is a recipe for disaster
[00:40] <rick_h_> crap...ok...looking
[00:41] <wgrant> +jsbuild: $(PY) combobuild jsbuild_widget_css $(JS_OUT)
[00:41] <wgrant> jsbuild very deliberately didn't depend on combobuild before :)
[00:41] <rick_h_> right, but combo-rootdir drops the YUI code into the build dir now. Since it worked locally with make check I thought I was clear
[00:41] <rick_h_> I'll move the meta.js to make run and should fix it
[00:42] <wgrant> rick_h_: Why did you need to move combo-rootdir?
[00:42] <StevenK> rick_h_: So local has convoy, as does ec2. buildbot does not.
[00:43] <rick_h_> StevenK: right, gotcha.
[00:43] <rick_h_> wgrant: so with the policy of 'all things come from system packages or python packages' YUI is served from a python package now. I set the combo-rootdir to extract that out when it runs
[00:44] <rick_h_> and it needs to run before the launchpad.js generation now
[00:44] <wgrant> That policy is not sensible for non-Python things like YUI
[00:44] <rick_h_> where before yui was put into place by buildout
[00:44] <wgrant> Right
[00:44] <wgrant> Which is probably correct.
[00:44] <wgrant> lifeless: opinion pls
[00:44] <lifeless> hi
[00:44] <lifeless> 'sup ?
[00:45] <wgrant> Repackaging yui tarballs as a launchpad-yui egg seems pretty insane.
[00:45] <rick_h_> my understanding from talking with sinzui, deryck, and company was that this was the preferred path.
[00:45] <wgrant> They might think they prefer it, but they don't.
[00:45] <rick_h_> that anything like the sourcedeps is to be deprecated and that all things must come from system or python packages
[00:46] <rick_h_> and since I need to be able to run multiple yui versions at once for testing, I setup a python package that dumps any contained YUI versions to the build dir
[00:46] <lifeless> sorry, whats the full story?
[00:46] <wgrant> Dumping multiple versions of yui into a custom tarball just to make it an egg is entirely crazy.
[00:46] <StevenK> rick_h_: source*code* is deprecated, sourcedeps is not
[00:46] <wgrant> lifeless: Yesterday we had a custom buildout rule which took a normal upstream YUI tarball and turned it into something Launchpadish
[00:47] <wgrant> lifeless: Today we instead have a Launchpad-specific egg with two versions of YUI unpacked within it
[00:47] <StevenK> Change utilities/sourcecode.conf == bad, change versions.cfg == okay
[00:47] <wgrant> I don't see this as an improvement.
[00:47] <StevenK> Bah, I can't call BugNotificationRecipients.update() with a resultset. :-(
[00:47] <rick_h_> wgrant: the improvement is that this package has make commands to help auto extract only the build bits from the YUI download, etc
[00:48] <rick_h_> and then it dumps all versions it contains into the build dir without needing to update any list/config for them
[00:48] <wgrant> Right, a helper like that is sensible
[00:48] <rick_h_> and the only thing that matters is that build/js/yui is symlinked to the version in versions.cfg
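The symlink arrangement rick_h_ describes can be sketched in a few lines of Python; the directory layout and function name here are illustrative, not Launchpad's actual build code:

```python
import os
import tempfile


def link_active_yui(build_js_dir, version):
    """Point build/js/yui at the unpacked YUI release named in versions.cfg.

    Multiple yui-<version> trees can coexist in the build dir; only the
    'yui' symlink decides which one is active.
    """
    target = os.path.join(build_js_dir, 'yui-%s' % version)
    link = os.path.join(build_js_dir, 'yui')
    if os.path.islink(link):
        os.remove(link)  # drop a stale link from a previous version
    os.symlink(target, link)
    return os.readlink(link)


# Example: switch the active YUI to 3.5.1 in a scratch build dir.
build_dir = tempfile.mkdtemp()
print(link_active_yui(build_dir, '3.5.1'))
```

Re-running with a different version just repoints the link, which is what makes the feature-flag-style switching between installed versions cheap.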
[00:48] <wgrant> Embedding the YUI codebase is not
[00:49] <rick_h_> well based on conversations in pre-impl it was sort of based on the lazr.js model and some other 3rd party modules I found
[00:49] <lifeless> rick_h_: so, we want to reduce the number of dependency systems in use, that is correct.
[00:49] <rick_h_> http://pypi.python.org/pypi/launchpad_yui/0.1
[00:50] <wgrant> This doesn't reduce the number of dependency systems
[00:50] <wgrant> It adds a new one: embedding large third-party codebases in a Launchpad-specific public egg
[00:50] <lifeless> rick_h_: however, with current deployment logic and packaging tech, that means we'll end up with a) debian package for firefox, rabbit, pgsql etc; b) buildout versions.cfg entries for python packages and anything else that buildout can deliver.
[00:50] <wgrant> It happens to sit behind something that looks like one of our other dependency systems
[00:50] <wgrant> But it isn't.
[00:51] <rick_h_> right, well because JS doesn't have a good dependency system in LP, so I shoe-horned my way into an existing one
[00:51] <StevenK> Which is terrible
[00:51] <lifeless> rick_h_: We've had some friction in the past using 'make YUI into an egg' approaches; can I ask what problem you were trying to solve ?
[00:51] <wgrant> We already had it fairly cleanly shoehorned, didn't we?
[00:51] <wgrant> I don't understand what was wrong with the old way.
[00:51] <lifeless> StevenK: wgrant: Can you guys chillax a little so we can understand whats going on?
[00:51] <rick_h_> lifeless: so the problem was 'how to more easily maintain/manage multiple yui versions in LP'
[00:55] <lifeless> rick_h_: can you expand on that a little? what was giving you grief, what do you want to be able to do (and how does this do it)
[00:55] <lifeless> rick_h_: I realise you've probably discussed that with deryck & sinzui already, but I can't really give an opinion till I get a handle on all the friction :)
[00:56] <rick_h_> ok, so it's partly me not getting it initially (I realize now) and partly wanting to make installing/testing newer versions of yui easier.
[00:56] <rick_h_> the idea was that we need to be able to drop several versions of YUI into the build/js dir and use the feature flag to switch which one you use
[00:57] <rick_h_> we kind of had that; there's some buildout stuff to install 3 versions of yui that are in the download-cache, which I didn't quite get until today
[00:57] <lifeless> we'll need that for production too
[00:57] <lifeless> otherwise race conditions galore happen.
[00:57] <rick_h_> and in discussions the 'idea' seems to be that things must either be a python package or a system package going forward
[00:58] <rick_h_> that seems to have been taking the idea too far from what I'm hearing now though. download-cache is still ok with non-python packages
[00:58] <rick_h_> so the yui zips in there would have been ok
[00:58] <lifeless> download-cache is just a holding place; the key thing is to get away from the 4! we have today.
[00:59] <rick_h_> so with the understanding that YUI has to come in via python package or system package, I thought a python package was easier for delivering multiple versions, and set up http://pypi.python.org/pypi/launchpad_yui/0.1
[00:59] <lifeless> while we use buildout, and buildout can do more than just eggs, we're fine with zips in download-cache, yes.
[00:59] <rick_h_> today I landed changes to extract YUI from that package vs the buildout download cache, I did that in the current combo-rootdir bin script, and that needed to run now before launchpad.js is built for non-combo loader users
[01:00] <rick_h_> so I moved around the make deps so that jsbuild required combo build, which required convoy, and the buildbot slaves don't have that, so I broke buildbot
[01:00] <lifeless> That pypi project is probably a great place for scripts to do things like download the latest zip and put it in download-cache, to live.
[01:01] <lifeless> rick_h_: thanks.
[01:01] <lifeless> so here's what I think: shovelling one package into another as a transport mechanism is necessary sometimes but usually at best somewhat unpleasant
[01:01] <rick_h_> so I'm trying to see 1) what to do to unfubar buildbot and 2) if I should be rethinking the changes all together.
[01:02] <lifeless> that strategy tends to run into several classes of headache: versioning is tricky if you want it to match at all; authentication of the transported thing is often hard or at the very least nontrivial
[01:02] <lifeless> and lastly it's usually surprising for everyone that encounters it.
[01:03] <lifeless> rick_h_: do you have any ideas about how we manage syncing and using the gallery widgets going forward ?
[01:03] <lifeless> (this may seem like a non sequitur, but it's very related :))
[01:03] <rick_h_> so that's a separate issue at the moment. This is only for the base YUI library. I've started conversations with others on the syncing, but there's not currently an acceptable answer yet
[01:04] <rick_h_> honestly, my pitch to deryck was to run our own gallery.yui.com, but as js.canonical.com, which could be https and shared modules for all teams. Since modules are versioned, they could be kept and updated in a single place so all could follow dev/what's available.
[01:04] <lifeless> It seems like the same problem to me: we have code in a different *space (language/name/dependency - everything), that we want to consume easily and reliably and upgrade from time to time, and possibly fork from time to time.
[01:04] <rick_h_> however, that's a BIG undertaking
[01:05] <rick_h_> since it requires infrastructure and someone to shepherd/manage things
[01:05] <lifeless> it also doesn't give any assistance towards offline development, offline testing and local forking
[01:05] <rick_h_> lifeless: you're right in that it is, but as I said, there's no 'great' answer yet, so I admit that this python package setup was hackish, but kind of the immediate means to the end
[01:06] <lifeless> sure, I'm not critiquing it [yet] :P :)
[01:06] <rick_h_> lifeless: well, the way the yui gallery works is that it's a single repo, anyone can checkout and serve
[01:06] <rick_h_> much as you checkout yui itself, and put it into a combo loader server
[01:07] <lifeless> by checkout, do you mean 'grab a tar' or 'git clone' ?
[01:07] <rick_h_> but yes, it's not perfectly worked through for sure. There's a balance on duplication of effort and visibility of a central location vs the distribution of things to offline/individual running
[01:07] <rick_h_> git clone
[01:07] <lifeless> ok
[01:07] <rick_h_> https://github.com/yui/yui3 you clone that and get it all
[01:07] <rick_h_> and updates are done by forking their repo, adding your new module, and requesting they merge it back
[01:07] <lifeless> so, with all that in mind
[01:08] <lifeless> do you think we should handle widgets the same as base YUI ?
[01:09] <rick_h_> ideally, it would be nice
[01:09] <lifeless> are there any reasons to handle them differently ?
[01:10] <rick_h_> just because the yui library is not apt to change as often, and is a single part vs many modules
[01:10] <rick_h_> there's a matter of scale when you consider managing a couple of versions of one dep, vs a couple of versions of many deps
[01:10] <rick_h_> and I don't have the same issues of making existing code visible to teams, updates sync'd, etc with YUI base as I do any modules
[01:11] <lifeless> I don't follow those issues; sorry to keep doing this - but can you expand on them please?
[01:12] <rick_h_> sure, so let's agree that everyone is using YUI and knows where it is. If they need JS, they know what to get and where to go get it
[01:12] <rick_h_> and it's a pull only use
[01:12] <rick_h_> if a new version comes out, they pull it down
[01:12] <lifeless> the base YUI ?
[01:12] <rick_h_> right
[01:12] <lifeless> why is it pull only? Is it bug free?
[01:13] <rick_h_> well, sure there are bugs, and you go through their system, get them updated, pull the new release. I've not seen us keep a patch against YUI anywhere so far
[01:13] <rick_h_> not saying it's not possible, but not something experienced in work so far
[01:14] <rick_h_> so for the base YUI it's "usually" a pull only of maybe 2 or 3 versions (current running, next version to test with, keeping an eye on a future preview release)
[01:15] <rick_h_> while widgets/gallery code needs to be made visible to teams. "Is there a module for this?", they need to be able to update those, push changes without breaking backward compatibility for other users, and there are many more issues as teams develop new JS modules
[01:16] <lifeless> I don't get the difference
[01:16] <lifeless> when you say visible to teams, is YUI base not visible to teams?
[01:16] <rick_h_> and then there's infrastructure, we don't test YUI downloads, but we'd have to test and document gallery items/etc
[01:16] <rick_h_> YUI base is because they all know where to look
[01:16] <rick_h_> how many canonical teams know of the various YUI modules other teams have written
[01:17] <rick_h_> ?
[01:17] <rick_h_> this is the main reason I hope to one day get together a centralized area for YUI modules done like the kitchen code on the -tech mailing list recently
[01:18] <rick_h_> landscape mentioned a nice mock tool they use in testing, but there's no place I know of to be able to share/reuse that
[01:18] <wgrant> wallyworld: Could you find some time to review https://code.launchpad.net/~wgrant/launchpad/edit-stacked-information-type/+merge/114767?
[01:18] <wallyworld> sure
[01:18] <lifeless> rick_h_: I don't understand why modules we'd handle ourselves but base we'd handle by collaborating closely with upstream
[01:18] <lifeless> rick_h_: I'm sure there is a key thing I'm missing :(
[01:19] <rick_h_> I'd think it was purely that we're more hands on with modules, while 99% of the time hands off on YUI base
[01:19] <rick_h_> 99% of the time YUI base is r/o while modules we dev are very write heavy with docs/tests/sharing
[01:20] <lifeless> so you're saying that we can block a project waiting for a fix to yui base, but we want to run a local fork for changes to a module (whether we created the module or not)
[01:20] <rick_h_> you'd use memcache all the time for very read heavy uses, but would you stick your writes in there? :)
[01:20] <lifeless> rick_h_: thats called 'redis' :P
[01:20] <StevenK> wgrant: Bah, I keep distracted by my QA
[01:21] <rick_h_> well, I think we'd look at a system to carry a temporary patch or else build a module that patched it for us
[01:21] <rick_h_> gallery-patch-bug1234 and then use that until the next YUI version came out with the fix
[01:21] <rick_h_> and then all of our teams would gain access to that patch module while we worked with upstream
[01:21] <wallyworld> wgrant: can you add a screenshot to the mp?
[01:22] <lifeless> rick_h_: so while I accept that it may happen *more often*, that seems like a reason to run the same system as yui base to me: so that there is no surprise for dealing with the time when yui base is the thing that needs fixing.
[01:23] <lifeless> rick_h_: by run the same system I mean the dep/deploy/dev story around yui modules/yui base.
[01:23] <rick_h_> lifeless: right, and that's fine. However, from my conversations, the dream of a shared JS area is so far away, I'm working around that for the time being
[01:23] <rick_h_> and trying to work within the current dev environment to get LP caught up to the latest YUI, make it easier to test new versions, and keep the gallery modules we've subsumed where they are for the moment
[01:24] <lifeless> sure
[01:24] <wgrant> wallyworld: It looks like https://code.qastaging.launchpad.net/~canonical-launchpad-branches/lp-production-configs/lalala/+edit but with fewer information type choices
[01:24] <lifeless> note that nothing you've said above speaks 'shared JS area' to me.
[01:24] <rick_h_> now, when we get to a good answer for the gallery code, I'm all for revisiting the YUI base story into that lifeless
[01:24] <wgrant> wallyworld: The UI hasn't changed. Just the set of choices
[01:24] <lifeless> so there's still at least some causal link you need to help me understand :)
[01:25] <wgrant> wallyworld: In general the only private stacked-on branches will be Proprietary in Proprietary projects, so there'll only ever be one choice.
[01:25] <rick_h_> lifeless: sure, so I don't see a good way to manage and expose modules built by different teams other than having a shared JS setup of some sort. Right now I'm told we're to try to push stuff up to the YUI gallery, but as we can't use that to pull/download from at the moment, it's a push only setup
[01:25] <wallyworld> wgrant: ah right thanks. i had just read the covering letter and didn't quite grok the change
[01:25] <lifeless> rick_h_: why can't we use that to pull or download from ?
[01:26] <rick_h_> lifeless: well currently LP uses I think 2 gallery components, and we don't want to pull down all of the gallery (size/complexity), and we need to be able to get whatever code into our download-cache, so it would be a pain to keep that up to date/in sync
[01:27] <lifeless> how big is all of the gallery (size on disk, git clone time)
[01:27] <rick_h_> lifeless: we also only want the build files, not the src/docs/etc.
[01:27] <wgrant> wallyworld: It both reduces the amount of code we have for this unlikely case, and allows users to fix conflicts if they arise (eg. because the stacked-on branch changes type)
[01:27] <rick_h_> lifeless: the git clone is approx 64MB
[01:28] <rick_h_> lifeless: the build directory of only production files is 21M
[01:28] <lifeless> rick_h_: you proposed cloning it all for a shared area, but that will also require devs to have a full copy etc, so I don't understand why you'd say we *don't want to pull...*
[01:28] <wallyworld> wgrant: i think though we decided to disallow a stacked on branch changing type to private if a public branch was stacked on it?
[01:28] <rick_h_> lifeless: right, as I said, we've not come to a satisfactory answer yet on that front
[01:29] <wgrant> wallyworld: Ah, yes, so that particular case probably can't happen.
[01:29] <rick_h_> lifeless: and while a dev can take a download hit, doing it for every build/etc seems something to avoid
[01:29] <lifeless> sure, but why would we do it on every build/etc ?
[01:29] <lifeless>  we don't bzr pull on every build/etc.
[01:29] <rick_h_> lifeless: but agreed, there's work to be done on that front which is why I'm putting it off :)
[01:29] <lifeless> I mean.
[01:29] <rick_h_> lifeless: true, I guess download-cache is updated, but not redownloaded
[01:29] <lifeless> You'd have to *write code* to pull etc on every build.
[01:30] <rick_h_> lifeless: so we'd have to have something like that for the gallery/JS code
[01:30] <wallyworld> wgrant: with the hidden types code, will the order of the radio buttons be messed up ie the (un)embargoed security choices come at the end
[01:30] <lifeless> here are the computed (vs root cause) constraints I know of that are imposed on us in this area:
[01:31] <wallyworld> wgrant: which would be inconsistent with elsewhere
[01:31] <lifeless>  - be able to build and deploy LP without internet access
[01:31] <lifeless>  - be able to do buildbot tests likewise
[01:31] <lifeless>  - be able to inspect all changes happening to code that we execute in trusted zones - e.g. js that runs in our browser windows, python on the sever etc.
[01:31] <wgrant> wallyworld: getInformationTypesToShow returns a set of allowed. setUpWidgets then iterates through the vocab, picking out items that are in the allowed set. So order is retained.
[01:32] <lifeless> I think pretty much everything else is up for grabs
[01:32] <wallyworld> wgrant: cool, thanks. just wanted to double check as i couldn't recall the exact implementation detail
[01:32] <rick_h_> lifeless: ok
[01:32] <wgrant> That implementation detail only landed three hours ago, so you're forgiven :)
[01:33] <wallyworld> wgrant: and the diff didn't really show the full picture
[01:33] <wallyworld> wgrant: are we missing a test?
[01:34] <wgrant> wallyworld: I think so. I couldn't find one for the checkbox override.
[01:34] <wgrant> ec2 will tell me for sure
[01:34] <wallyworld> to check that security is allowed branches linked to security bugs
[01:34] <wgrant> Oh
[01:34] <wgrant> There's one for that
[01:34]  * wgrant hunts
[01:34] <lifeless> rick_h_: how big is a tar of all of yui3 gallery (not base) ?
[01:34] <wallyworld> regardless of the stacked on type
[01:34] <rick_h_> lifeless: I'll pull together some notes on ideas and why don't we schedule a time to sit down and look it over.
[01:34] <wgrant>     def test_private_branch_with_security_bug(self):
[01:34] <wgrant>         # Branches on projects that allow private branches can use the
[01:34] <wgrant>         # Embargoed Security information type if they have a security
[01:34] <wgrant>         # bug linked.
[01:34] <rick_h_> lifeless: just the build or the whole thing?
[01:35] <lifeless> rick_h_: something suitable for devs to have, buildbot to have, and prod to have.
[01:35] <wallyworld> wgrant: ok, thanks. r=me
[01:36] <wgrant> wallyworld: Marvellous. Thanks.
[01:36] <lifeless> rick_h_: and is there a programmatic interface to do releases of individual yui modules? (are they separately versioned etc) ?
[01:37] <rick_h_> lifeless: yes, they use an ant based build system and do weekly releases that are served
[01:37] <rick_h_> so you can specify "load the gallery from date 2012-07-12"
[01:37] <rick_h_> in your yui config
[01:38] <lifeless> rick_h_: what granularity does that have ?
[01:38] <rick_h_> lifeless: only a weekly build
[01:38] <lifeless> not frequency, granularity :)
[01:38] <lifeless> is it all of gallery, or per-module ?
[01:38] <rick_h_> lifeless: it's all the repo
[01:38] <rick_h_> lifeless: all the gallery
[01:38] <rick_h_> lifeless: I'm not sure if you can specify it per module, you might be able to but not tried it
[01:38] <lifeless> ok
[01:39] <lifeless> so, here are some questions we might want to answer
[01:39] <lifeless> rick_h_: oh, how big was that allofgallery thing ? Are you finding out ?
[01:39] <lifeless>  - how big is allofgallery
[01:39] <rick_h_> lifeless: ah sorry, got distracted
[01:40] <lifeless>  -> would e.g. daily snapshots into the download cache be a significant burden?
[01:40] <lifeless> rick_h_: you said something earlier that each module has multiple versions, is that right? so you can ask for a specific one that is known to work ?
[01:41] <rick_h_> lifeless: so build tar/gz is 3.4M
[01:41] <rick_h_> lifeless: yes, when you write a module you specify a version string, but we've not used it so far
[01:41] <lifeless> so the weekly snapshot includes an unversioned tip of the module + all the prior still supported releases?
[01:42] <lifeless> (that yui host)
[01:42] <rick_h_> lifeless: so the weekly snapshot builds the tip of all modules, and serves it via a new url specific to that build date. Each build is a new url.
[01:42] <lifeless> ok, so you *can't* stick to an arbitrary old version trivially ?
[01:42] <rick_h_> I know and have used that to go back in time for the whole gallery, but not seen how to use that on a per-module basis
[01:43] <lifeless> righto.
[01:43] <rick_h_> lifeless: I don't know, will look into it
[01:43] <lifeless> indeed thats a question too
[01:43] <lifeless> I mean, if you only get snapshots-of-tip granularity, a shared JS area for all of Canonical would likely not be an improvement over working directly upstream
[01:44] <rick_h_> lifeless: true
[01:44] <lifeless> because you'd still have folk sitting on a known-good, and then upgrading many bits at once and dealing with the fallout.
[01:44] <lifeless> unless we engineer something special using introspection etc; but at that point we could do that upstream and then work upstream with less friction.
[01:45] <lifeless> I don't think we have quite enough knowledge yet to progress this further
[01:45] <lifeless> what other questions should we queue up
[01:45] <rick_h_> lifeless: right
[01:45] <lifeless>  - how do other YUI users deal with this?
[01:46] <rick_h_> well, the big questions are: is this feasible? will teams work together and want to do so? how will it be managed, and how do we get this set up for local dev?
[01:46] <lifeless>  - how often would we really be updating the gallery cop(ies) we have - assume we update to a) get bugfixes b) to work with a new YUI and c) to have newer versions of modules we're contributing to.
[01:46] <rick_h_> lifeless: honestly I'm not sure. While I know teams using it, I've not seen places doing a lot of cross-team work like we're looking at
[01:46] <lifeless> rick_h_: I haven't seen any need for cross-team stuff yet, in all that you've discussed.
[01:46] <lifeless> rick_h_: I've seen you propose it :)
[01:47] <lifeless> rick_h_: cross team work is super hard
[01:47] <rick_h_> lifeless: well that's the point: with all of this, we're worried about a single gallery build breaking a team using an older version that missed something that changed
[01:47] <rick_h_> which would mean another team submitted a fix/update for a module that caused it to change
[01:48]  * rick_h_ is missing how this isn't cross team 
[01:48] <lifeless> rick_h_: I think you have a hidden assumption somewhere.
[01:48] <lifeless> :)
[01:48] <rick_h_> lifeless: more than likely
[01:49] <lifeless> uhm, it's not cross team because you can (naive, may need tuning): drop yui-gallery-XYZ.zip into download cache, pull that out like we do yui-base-XYZ.zip, drop in additional single-module zips for things we fork (so they are visible)
[01:49] <lifeless> set up and document a way for folk like e.g. sinzui or wgrant or $anyone to do a new snapshot as needed.
[01:50] <lifeless> none of that implies shared cross team stuff; the place for collaboration is yui gallery upstream.
[01:50] <rick_h_> lifeless: sorry, but I really need to wrap up. It's late here and I'm hoping to unblock buildbot before bed
[01:50] <rick_h_> lifeless: ok, I see
[01:50] <lifeless> Ok, short term suggestion: revert it back to what it was before.
[01:50] <StevenK> A revert is the easiest.
[01:51] <rick_h_> lifeless: ok, I'm hoping to get the ok to push it with http://paste.mitechie.com/show/735/ as that moves the conflicting point out to where it really belongs
[01:51] <StevenK> rick_h_: I'm happy to revert your revision out if you want to crash.
[01:52] <rick_h_> StevenK: ok, if that's easiest will do then thanks. I'll revisit the buildout setup and adjust from there then and see if I can come up with a place for the helper code to assist in getting updates going
[01:52] <rick_h_> s/will do/to do
[01:52] <rick_h_> ugh
[01:53] <lifeless> rick_h_: From what I've heard so far the canonical specific shared area is entirely orthogonal to handling yui + gallery + odd forks of gallery modules efficiently for LP
[01:53] <rick_h_> thanks StevenK and wgrant for the help and lifeless for the discussion. I've got some tweaking to my long term vision to work on
[01:54] <rick_h_> lifeless: right, I'm just trying to start with the goal of updating YUI in LP and leaving the gallery/etc for a later date
[01:54] <lifeless> rick_h_: if LP can do it efficiently with a direct relationship with upstream, that provides a template for other Canonical projects to also work with it efficiently and collaborate with upstream
[01:54] <rick_h_> lifeless: agreed, and we've started that process. We got our first module into the gallery yesterday, and I've been working to facilitate discussion with the PES team and their modules and upstream
[01:54] <lifeless> I think - I'm sure - that it's easier to replicate a template of working with an established group than setting up a new group which would amongst other things have to broker with upstream.
[01:55] <rick_h_> lifeless: but as working with upstream doesn't help us at all in our deploy/serving area, I held out a vision of a single 'canonical-gallery' at some point in the future (loooong future)
[01:55] <lifeless> rick_h_: (discussion) - anytime. sorry I had to ask so many questions :)
[01:55] <lifeless> rick_h_: from the sounds of it though, a canonical gallery wouldn't help either.
[01:56] <rick_h_> lifeless: no it's good, it's going to focus me on thinking more of the tar/offline mode than I originally was
[01:56] <lifeless> rick_h_: also note that anything on a new domain name will make first-view loads for LP slower.
[01:56] <rick_h_> lifeless: right, it has weak spots as well
[01:56] <lifeless> rick_h_: about 3 seconds slower.
[01:56] <lifeless> rick_h_: (well, 3-10, but definitely 3).
[01:56] <rick_h_> ok, you know better than I on that front
[01:57] <lifeless> SSL :)
[01:57] <StevenK> rick_h_: Revert is landing.
[01:57] <rick_h_> 3s for a SSL handshake to a second domain?
[01:57] <lifeless> rick_h_: you can test - add a 300ms latency to your outbound packets, see the world the way the non-Europe folk do :)
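lifeless's back-of-envelope for a cold HTTPS fetch from a new domain can be checked with round-trip arithmetic: DNS lookup, TCP connect, the (classically two round-trip) TLS handshake, and the HTTP request itself each cost a full RTT before the first byte arrives. This is a rough model under those assumptions, ignoring slow start and certificate chain fetches, which only make high-latency links worse:

```python
def first_byte_estimate(rtt_ms, tls_round_trips=2):
    """Rough time-to-first-byte (ms) for a cold HTTPS fetch of a new domain.

    Counts DNS (1 RTT), TCP connect (1 RTT), TLS handshake
    (classically 2 RTTs), and the HTTP request/response (1 RTT).
    """
    round_trips = 1 + 1 + tls_round_trips + 1
    return round_trips * rtt_ms


# European-ish latency vs the 300 ms antipodean case lifeless suggests
# simulating with artificial outbound delay.
print(first_byte_estimate(30))   # → 150
print(first_byte_estimate(300))  # → 1500
```

Even this optimistic model puts a second domain at 1.5 s before any bytes of JS land at 300 ms RTT; real handshakes, OCSP checks, and congestion control push it toward the 3 s lifeless cites.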
[01:57] <rick_h_> StevenK: ty much sir
[01:57] <StevenK> rick_h_: Easily.
[01:57] <rick_h_> ok, I'm spoiled from my network/location :)
[01:57] <lifeless> rick_h_: http://wiki.bazaar.canonical.com/SmartPushAnalysis1.4#Network_connection
[01:58] <StevenK> rick_h_: Move to Sydney/Melbourne/Christchurch and use LP.
[01:58] <rick_h_> ok, so as I said, I need to refocus the long term goal for sure, but it is a bit different from the current issue of 3.5 on LP and making sure we don't spend a year getting to 3.6 after it's out in a month/two
[01:58] <lifeless> yah
[01:58] <lifeless> totally.
[01:59] <StevenK> BAH
[01:59]  * StevenK stabs PQM, and adds [testfix]
[01:59] <rick_h_> StevenK: heh, I asked deryck if anyone's complained on the combo loader and we decided if the aussies didn't raise a fuss it must be working out ok :)
[01:59] <lifeless> FWIW I'd use the current approach which handled the 3.4->3.5 OK AIUI, and look at rearranging as part of doing the gallery.
[01:59] <rick_h_> lifeless: rgr
[01:59] <lifeless> rick_h_: I haven't looked up your mail about combo loader stuff yet either, I've been sick this week
[01:59] <StevenK> rick_h_: I think the combo-loader needs to actually cache the files
[01:59] <lifeless> am only just on deck now.
[02:00] <rick_h_> lifeless: not a problem, I figure with it being Friday for me tomorrow I'd not turn on a FF then until next week anyway
[02:00] <lifeless> cool, I'll look for it monday
[02:00] <lifeless> rick_h_: is the combo loader oops enabled, and do we get perf information out of it ?
[02:00] <rick_h_> StevenK: oh, I thought webops copied over the way the apache cache/etc is setup from U1/Landscape. Not peeked at it tbh
[02:01] <StevenK> rick_h_: Revert landed as r15619
[02:01] <rick_h_> StevenK: ty
[02:01] <rick_h_> lifeless: no, convoy isn't oops enabled currently. I'd assume any perf info would be normal apache request/logging/etc
[02:01] <wgrant> StevenK: It's pretty aggressively cached already, I believe
[02:01] <wgrant> Yeah
[02:01] <wgrant> Cache-Control: public,max-age=5184000
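That max-age value decodes to a round number of days; a quick sanity check (pure arithmetic, nothing Launchpad-specific):

```python
# Cache-Control: public,max-age=5184000 — how long is that, really?
SECONDS_PER_DAY = 24 * 60 * 60  # 86400

max_age = 5184000
print(max_age // SECONDS_PER_DAY)  # → 60, i.e. clients may cache for 60 days

# It divides evenly, so 5184000 was clearly chosen as "60 days".
assert max_age == 60 * SECONDS_PER_DAY
```

A 60-day freshness lifetime is why the deploy-specific URL matters: changing the URL on each deploy is the only way to bust caches that aggressive.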
[02:01] <StevenK> wgrant: Oh? I thought the caching was non-existent.
[02:02] <rick_h_> the only issue is that each deploy breaks the cache
[02:02] <StevenK> Like it should.
[02:02] <rick_h_> since the url changes for deploy purposes
[02:02] <wgrant> StevenK: It's probably done by the Apache config, as rick_h_ suggests
[02:02] <rick_h_> right, just saying that might make it seen less cached with frequent deploys going on
[02:02] <rick_h_> s/seen/seem
[02:02] <lifeless> btw
[02:03] <lifeless> this is a reason to have things like yui not part of lp's build area at all
[02:03] <lifeless> version them entirely independently
[02:03] <lifeless> may require some assembly.
[02:03] <rick_h_> lifeless: agree, but definitely some assembly
[02:04] <rick_h_> but again, perfect is the enemy of good
[02:04] <rick_h_> step by step
[02:04] <StevenK> wgrant: Join added. Now how to fix the other bug.
[02:04] <lifeless> totally ;)
[02:05] <rick_h_> ok, night all, thanks again and sorry to fubar buildbot on you guys
[02:05] <lifeless> man, this is ripe: http://arstechnica.com/tech-policy/2012/07/verizon-net-neutrality-violates-our-free-speech-rights/
[02:05] <lifeless> "Broadband networks are the modern-day microphone by which their owners [e.g. Verizon] engage in First Amendment speech"
[02:06]  * StevenK peers at buildbot
[02:06] <lifeless> uhm, nothing that my ISPs have *ever* said to me has been even slightly protected speech. Or would have been had I lived in the US.
[02:06] <StevenK> bzrlib.errors.InvalidHttpResponse: Invalid http response for https://xmlrpc.launchpad.net/bazaar/: Unable to handle http code 502: Bad Gateway
[02:06] <StevenK> Hmmm, at least it's not my fault.
[02:07] <StevenK> lifeless: At least it isn't RIPE.
[02:12] <StevenK> wgrant: I can't say Branch.id IS IN () without toys being depramed, so what was your thought there?
[02:12] <lifeless> Branch.id.is_in(...)
[02:14] <StevenK> lifeless: Obviously. You're missing some context.
[02:33] <wgrant> StevenK: Hm, possibly just exclude the branch query entirely in that case, then.
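The empty-list problem here is a general one: `IN ()` is not valid SQL, so the usual pattern is to skip that clause (or, as suggested above, the whole query) when the ID list is empty. A minimal sketch with hypothetical names, not the actual Launchpad code:

```python
def build_branch_filter(branch_ids):
    """Return an SQL fragment filtering by branch ID, or None to skip.

    An empty IN () list is invalid SQL, so callers should omit the
    branch query entirely when no branch IDs were passed.
    """
    if not branch_ids:
        return None
    placeholders = ", ".join("?" for _ in branch_ids)
    return "Branch.id IN (%s)" % placeholders

# Usage: only run the branch query when there is something to filter on.
clause = build_branch_filter([1, 2, 3])
```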
[02:34] <StevenK> wgrant: But wasn't the bug query written in such a way that it would deal with no bug ids being passed, or am I on crack?
[02:35] <wgrant> StevenK: Yes.
[02:35] <wgrant> StevenK: Because in the case that no bug IDs were passed, it was correct behaviour to just operate on every bug.
[02:35] <wgrant> But that's not correct any more.
[02:35] <wgrant> Since you pass in a list of artifacts that you want to operate on.
[02:35] <wgrant> Not just a list of bugs.
[02:36] <wgrant> So if I pass in only bugs, it should only work on those bugs.
[02:36] <wgrant> If I pass in only branches, it should only work on those branches.
[02:36] <wgrant> If I pass in a few of each, it should work on those bugs and those branches.
[02:36] <wgrant> If I don't pass in a list of artifacts, it should work on all bugs and branches that satisfy the other filters.
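The four cases wgrant lays out can be sketched as a small selection helper (hypothetical function, not the actual Launchpad job code):

```python
def select_targets(bug_ids, branch_ids):
    """Decide which artifacts a removal job should operate on.

    Explicit ID lists restrict the job to exactly those artifacts;
    passing nothing at all means every bug and every branch that
    satisfies the other filters.
    """
    if not bug_ids and not branch_ids:
        return {"bugs": "all", "branches": "all"}
    targets = {}
    if bug_ids:
        targets["bugs"] = list(bug_ids)
    if branch_ids:
        targets["branches"] = list(branch_ids)
    return targets
```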
[02:39] <StevenK> wgrant: But run() had if self.bug_ids <filter on bugs> else: every other filter
[02:40] <wgrant> StevenK: Yes. Is that odd?
[02:40] <wgrant> Because it didn't support branches before this, the logic was:
[02:41] <wgrant> If I pass in only bugs, it should only work on those bugs.
[02:41] <wgrant> If I don't pass in a list of artifacts, it should work on all bugs that satisfy the other filters.
[02:41] <wgrant> fin
[02:46] <StevenK> wgrant: Then just the join is needed
[02:46] <wgrant> StevenK: Howso?
[02:47] <wgrant> StevenK: Currently if I pass in a list of bugs, it will work on those bugs and *every* branch.
[02:47] <wgrant> Won't it?
[02:53] <StevenK> wgrant: http://pastebin.ubuntu.com/1089151/ but it's a bit messy
[02:54] <wgrant> StevenK: I'd invert those flags, but something like that, yeah
[02:54] <wgrant> StevenK: Alternatively, start bug_filters and branch_filters off empty, rather than with the visibility filter
[02:55] <wgrant> StevenK: Keep the branch_ids/bug_ids/else code the same.
[02:55] <wgrant> StevenK: Then wrap the two queries at the end in an 'if bug_filters' and 'if branch_filters', and insert the visibility clause there
[02:58] <StevenK> wgrant: http://pastebin.ubuntu.com/1089155/
[02:59] <wgrant> StevenK: You should probably put *_invisible_filter inside the conditional blocks, and maybe even inline them if they fit nicely.
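The restructure suggested above might look something like this (a sketch with hypothetical names and opaque filter tuples, not the real Launchpad sharing job): filters start empty, and the visibility clause is added only inside the final conditional, so a query runs only when that artifact type is actually targeted.

```python
def run_removal(bug_ids, branch_ids, run_bug_query, run_branch_query):
    """Run the bug and/or branch removal queries as appropriate."""
    bug_filters, branch_filters = [], []
    if bug_ids:
        bug_filters.append(("bug_id_in", list(bug_ids)))
    elif not branch_ids:
        # No artifacts passed at all: fall back to the other filters,
        # for bugs and branches alike.
        bug_filters.append(("other_filters", True))
    if branch_ids:
        branch_filters.append(("branch_id_in", list(branch_ids)))
    elif not bug_ids:
        branch_filters.append(("other_filters", True))
    results = []
    # The visibility clause lives inside the conditional blocks, so an
    # untargeted artifact type produces no query at all.
    if bug_filters:
        results.append(run_bug_query(bug_filters + [("invisible", True)]))
    if branch_filters:
        results.append(run_branch_query(branch_filters + [("invisible", True)]))
    return results
```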
[04:12] <StevenK> wgrant: http://pastebin.ubuntu.com/1089206/
[04:13] <wgrant> StevenK: Looks reasonable. Does it work?
[04:15] <StevenK> That's a separate problem.
[04:15] <wgrant> Heh
[04:17] <StevenK> wgrant: Spot the glaring error in the branch query.
[04:17] <wgrant> StevenK: Hah
[04:17] <wgrant> Indeed
[04:18] <wgrant> findspec is in the wrong place
[04:18] <StevenK> And no .using()
[04:18] <wgrant> Yeah, but all the important bits are there :)
[04:18] <wgrant> Just not the syntax...
[04:22] <StevenK> wgrant: As an added bonus, http://pastebin.ubuntu.com/1089218/ passes tests.
[04:22] <wgrant> StevenK: Did you write new tests?
[04:23] <wgrant> To catch the issues?
[04:23] <StevenK> Nope
[04:23] <StevenK> Well, not yet.
[04:24] <StevenK> wgrant: A great deal of the tests just pass in bugs, so I'm not sure why they don't tickle what afflicted qas.
[04:25] <wgrant> StevenK: Because they don't check that the branch subscriptions survive
[04:25] <wgrant> Because they only care about bugs.
[04:25] <StevenK> wgrant: But the query was running for *ages*
[04:26] <StevenK> So why don't the bug tests show that ...
[04:27] <wgrant> StevenK: Because sampledata doesn't have hundreds of thousands of branches and millions of subscriptions
[04:27] <wgrant> So the query will take milliseconds.
[04:27] <StevenK> Oh. Right.
[04:28] <StevenK> wgrant: So we want to sprinkle in StormStatementRecorder or what do you want to do?
[04:29] <wgrant> StevenK: We want a test that tries to remove subscriptions from a set of bug IDs, and checks that those subscriptions are gone but other bug and branch subscriptions remain.
[04:29] <wgrant> And the same but for branch IDs.
[04:29] <wgrant> There are probably existing tests to which you can add a couple of lines to do this.
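The shape of the test wgrant asks for can be shown with a toy model (a plain Python stand-in for the job, not the real Launchpad test fixtures): removing by bug ID must delete exactly those subscriptions while other bug and branch subscriptions survive.

```python
def remove_subscriptions(subs, bug_ids=(), branch_ids=()):
    """Remove subscriptions matching the given artifact IDs.

    `subs` is a set of ("bug" | "branch", artifact_id) pairs: a toy
    model of the job's behaviour.
    """
    doomed = {s for s in subs
              if (s[0] == "bug" and s[1] in bug_ids)
              or (s[0] == "branch" and s[1] in branch_ids)}
    return subs - doomed

# Removing by bug ID leaves other bug and branch subscriptions intact.
subs = {("bug", 1), ("bug", 2), ("branch", 7)}
remaining = remove_subscriptions(subs, bug_ids=[1])
```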
[04:40] <StevenK> wgrant: Right, http://pastebin.ubuntu.com/1089239/
[04:40] <StevenK> wgrant: I get two failures if I shelve the changes in sharingjob
[04:41] <wgrant> Great
[04:41] <StevenK> wgrant: Commit, push and force wallyworld to review it since we effectively wrote it together?
[04:42] <wgrant> Plus he's OCR
[04:43] <StevenK> % bzr log | head -n 8 | tail -n 2
[04:43] <StevenK>   Fix RASJ to deal with branches correctly without spawning a query that will
[04:43] <StevenK>   take approximately three eons to complete.
[04:43] <wgrant> (and do the wrong thing)
[04:45]  * StevenK stabs Firefox for crashing
[04:46] <wgrant> StevenK: Crashing, or the DnD hang that sprung up in the last major release?
[04:47] <StevenK> That looked like a crash, the window disappeared followed by the process
[04:47] <wgrant> Ah
[05:00] <StevenK> wallyworld: O HAI. https://code.launchpad.net/~stevenk/launchpad/fix-rasj-and-branches/+merge/114779
[05:08] <StevenK> wgrant: http://pastebin.ubuntu.com/1089253/ is that other thing I'm working on.
[05:20] <wgrant> Ah, I've solved commercial subscription expiry too.
[05:35] <StevenK> wgrant: Do you think I'm missing something from my branch?
[05:37] <wgrant> StevenK: Which one?
[05:39] <StevenK> wgrant: [15:08] < StevenK> wgrant: http://pastebin.ubuntu.com/1089253/ is that other thing I'm working on.
[05:51] <wgrant> StevenK: Rather hard to say. Does it work?
[05:51] <StevenK> wgrant: test_bugnotification works at least
[05:53] <wallyworld> StevenK: hi, just got back from school pickup
[05:53] <wallyworld> bad traffic today in the rain :-(
[05:53] <StevenK> Bright, sunny and 23degC here today
[05:55] <wallyworld> StevenK: we will conflict. i also have removed those unused imports, but yours will land first since i'm just completing the covering letter for mine
[05:55] <wgrant> Dreary, dismal and 12degC or so :)
[05:55] <StevenK> wgrant: So normal Melbourne weather then.
[05:55] <wgrant> Hey, from what I've heard SE QLD has been pretty similar for the last week...
[05:56] <wgrant> Although I guess warmer :)
[05:56] <StevenK> wgrant: Did the Victorian government write a stern letter asking for Melbourne's weather back?
[05:57] <wgrant> StevenK: No, that wouldn't involve building more jails or decomissioning educational institutions :)
[05:57] <StevenK> Haha
[05:57] <StevenK> Maybe your government needs to turn the educational institutions into jails.
[05:58] <wgrant> That approximates their current policy.
[05:58] <StevenK> Haha
[05:58] <wallyworld> StevenK: looks nice, r=me
[05:59] <StevenK> wallyworld: At least you weren't sobbing saying "RBSJ looked nice until you touched it inappropriately and renamed it." :-P
[06:00] <StevenK> steven@undermined:~% uptime
[06:00] <StevenK>  16:00:06 up 16:50,  4 users,  load average: 3.71, 3.23, 2.15
[06:00] <StevenK> Sigh
[06:00] <wallyworld> StevenK: nah, all good. i started out with a generic job but at the time it was messy so just got bugs working.
[06:01] <StevenK> test_sharingjob needs a good spring clean at some point
[06:01] <wgrant> Everything we've touched does :)
[06:01] <wgrant> A lot can be cleaned up once sharing is fully deployed
[06:01] <wallyworld> and some of the other sharing tests need XXXs removed and (likely) some branch tests added
[06:01] <StevenK> Apparently, reading from USB is *hard* or something.
[06:59]  * cjwatson celebrates his 100th LP branch landing
[06:59] <wgrant> :)
[06:59] <wgrant> Pre-LP me has competition :(
[07:00] <cjwatson> Yeah.  Little bit of a way to go yet.
[07:01] <nigelb> dammit
[07:02] <nigelb> wait
[07:02] <nigelb> cjwatson gets on a different list.
[07:02] <nigelb> wgrant: nah, you don't.
[07:02] <nigelb> unless I boot up my vm and start contributing.
[07:03] <wgrant> Heh
[07:49] <adeuring> good morning
[08:09] <jml> cjwatson: congratulations!
[08:10] <jml> cjwatson: how are you celebrating?
[08:23] <cjwatson> jml: coffee
[08:23] <cjwatson> and more branches :)
[08:24] <jml> cjwatson: coffee is fantastic, wonderful and slightly bitter. What better way to celebrate landing a hundred Launchpad branches?
[08:56] <jam> anyone have familiarity with buildout? I'm trying to do a bootstrap using python-2.7, and
[08:56] <jam>  it is telling me 'could not install pyinotify-0.9.3' but there is no other error, and I can see the .tar.gz available
[08:56] <mgz> ...the windows installers use it!
[08:57] <jam> mgz: yes, and we are super happy about it there :)
[08:57]  * mgz doesn't have familiarity of it either :)
[09:29] <czajkowski> wgrant: have you ever seen an error message like this when a user tries to log in. https://answers.launchpad.net/launchpad/+question/203032
[09:31] <wgrant> czajkowski: I reassigned that to SSO a couple of minutes back. It usually means they refuse to accept cookies
[09:31] <czajkowski> bah that will teach me to open my work in tabs and come back to things
[09:31] <czajkowski> wgrant: cheers
[10:41] <Tribaal> hi all
[13:43] <jam> jelmer: ping
[13:44] <jelmer> jam: pong
[13:54] <jam> jelmer: sorry for the delay, otp, but I'd like to have you pick up some of the 'get launchpad running on python-2.7 on Lucid'
[13:55] <jelmer> jam: sure, which things in particular?
[13:56] <jam> right now python-2.7 is segfaulting, and we probably need to bring in barry
[13:58] <jelmer> jam: okay
[14:00] <jam> jelmer: I guess the other option is doku, but I think barry has done more on that particular ppa
[14:00] <jelmer> jam: still no idea on what's causing the segfault?
[14:01] <jam> jelmer: 'import ctypes' seems to be a pretty common trigger
[14:01] <jam> it is failing right now trying to build pyinotify
[14:01] <jam> but I notice that there is a 'import ctypes' in there.
[14:01] <jam> I'm pinging barry in #canonical
[14:01] <jelmer> what do we use pyinotify for anyway?
[14:01] <jam> also, jelmer, mgz, vila: You probably want to keep in touch with each other while I'm gone. I imagine at least doing the standup. (I realize vila is gone next week)
[14:02] <jam> jelmer: it doesn't really matter, ctypes gets imported at other times.
[14:02] <jam> bzrlib probably imports it, etc.
[14:04] <jam> jelmer: not sure if we have to have ctypes, but segfaulting regardless is bad :)
[14:07] <jelmer> jam: does that mean that even "python -c 'import ctypes'" segfaults?
[14:08] <jam> jelmer: yes
[14:08] <jam> I did just try that directly
[14:08] <mgz> we can do a jelmer/mgz standup on tuesday
[14:08] <mgz> perhaps at 9pm
[14:08] <jam> mgz: heh, whatever works for you :)
[14:09] <jam> you guys work all day long anyway, right?
[14:10] <mgz> at least some days I stop at 6 :)
[14:18] <mgz> I have lp on lucid up and working and the ctypes crash from the ppa,
[14:18] <mgz> so should be able to continue from where you are.
[14:21] <mgz> is there a neat way of getting gdb to look in the right place for src?
[14:21] <mgz> given it has paths like the following for ppas:
[14:21] <mgz> /build/buildd/python2.7-2.7.2/Modules/_ctypes/libffi/src/closures.c
[14:22] <cjwatson> based on the similar python3.2 failure a while back I suspect it's a pretty easy build fix.
[14:23] <mgz> sure, I'm leaving that to barry, just following along at home :)
[14:41] <czajkowski> jelmer: can you please follow up on the RT I gave you yesterday when you get a chance? I'm being poked for an update on the ppa
[15:00] <jelmer> czajkowski: ah, yes, sorry
[15:00]  * jelmer is having an unproductive day
[15:10] <czajkowski> :(
[15:11] <czajkowski> jelmer: need the link ?
[15:12] <jelmer> czajkowski: no, I still have it open here; thanks though
[15:12]  * mgz sends jelmer some cuddles
[15:20] <jelmer> thanks  mgz
[17:34] <bac> jcsackett: darn, someone beat me to it:  https://launchpad.net/jujube