[00:15] <bdmurray> I had a bzr upgrade of a branch on Launchpad go bad on me ... what should I do?
[00:35] <kiko> bdmurray, you can delete the bzr.backup dir via sftp iirc
[00:35] <beuno> actually
[00:35] <beuno> don't delete it
[00:35] <beuno> you actually want to delete .bzr
[00:35] <beuno> and rename bzr.backup  :)
[00:35] <beuno> bzr.backup -> .bzr
[00:35] <beuno> to restore the branch
[00:36] <bdmurray> so actually, it didn't go bad but the bzr upgrade process timedout
[00:36] <beuno> bdmurray, what's the link to the branch?
[00:37] <bdmurray> lp:~brian-murray/update-manager/brian
[00:38]  * beuno checks the branch
[00:39] <beuno> bdmurray, I'd delete .bzr, and rename bzr.backup -> .bzr
[00:39] <beuno> and do the upgrade again
[00:40] <bdmurray> beuno: okay, thanks I'll give that a shot
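For readers following along, the recovery beuno describes can be sketched with a local stand-in directory; on Launchpad the same delete-and-rename is done over sftp against the branch. The layout below is illustrative, not a real bzr branch:

```python
import os
import shutil
import tempfile

# Stand-in for a branch where `bzr upgrade` failed: the half-converted
# metadata sits in .bzr and the pre-upgrade copy in bzr.backup.
branch = tempfile.mkdtemp()
os.mkdir(os.path.join(branch, ".bzr"))
os.mkdir(os.path.join(branch, "bzr.backup"))

# Step 1: delete the broken .bzr
shutil.rmtree(os.path.join(branch, ".bzr"))
# Step 2: rename bzr.backup -> .bzr to restore the branch
os.rename(os.path.join(branch, "bzr.backup"), os.path.join(branch, ".bzr"))

print(sorted(os.listdir(branch)))  # ['.bzr']
```

After the rename, re-running `bzr upgrade` on the branch retries the conversion from the restored format.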
[00:40] <beuno> beuno@beuno-laptop:~$ bzr info sftp://bazaar.launchpad.net/~brian-murray/update-manager/brian
[00:40] <beuno> Standalone branch (format: unnamed)
[00:40] <beuno> unnamed isn't a very good format to have
[00:43] <spiv> Yeah, a fully upgraded branch shouldn't ever say "unnamed" over SFTP.  (bzr+ssh unfortunately reports formats as 'unnamed', and also it's possible to have combinations like an old branch format in a current repo format which would also be reported as 'unnamed')
[00:49] <bdmurray> So, just to be clear I want to sftp to bazaar.launchpad.net?
[00:49] <beuno> bdmurray, yeap, and go to your branch
[00:49] <beuno> you may want to use lftp
[00:50] <beuno> I hear it's nicer
[00:50] <bdmurray> I'm familiar with sftp but with ls nothing is returned
[00:50] <bdmurray> Couldn't stat remote file: No such file or directory
[00:52] <beuno> yeah, it's weird that way
[00:52] <beuno> mwhudson, what was the procedure in these cases?
[00:52] <mwhudson> i guess probably the easiest thing to do is for me to fix it on the server
[00:53] <mwhudson> bdmurray: you'
[00:53] <beuno> probably, but we should really find a way for people to solve this on their own. Failed upgrades happen with some frequency
[00:53] <beuno> I wonder if we should have a "restore backup" button on the UI
[00:54] <mwhudson> we should certainly have an 'upgrade' button on the ui
[00:54] <beuno> that too  :)
[00:55] <beuno> would save billions of gigawatts of bw
[00:55] <beuno> is there a bug filed for that?
[00:55] <mwhudson> bdmurray: in the next release (on thursday) you'll be able to use a regular sftp client much more easily
[00:57] <mwhudson> bdmurray: looks like the branch is in fact mostly updated
[00:57] <mwhudson> bdmurray: what are you after, just an upgrade to packs?
[00:58] <bdmurray> mwhudson: right, just a standard upgrade
[01:00] <jml> beuno: that's a good question
[01:01] <jml> beuno: bug 254135
[01:02] <mwhudson> bdmurray: done
[01:02] <beuno> jml, thanks!  I'll add a "me too" to that
[01:05] <bdmurray> mwhudson: great, thanks!
[02:18] <nathangrubb> man.. I used to hate BZR, now I love it
[02:19]  * beuno hugs bzr
[02:19] <nathangrubb> Yes :D
[02:40] <coppro> how do I get OpenID with my launchpad account
[03:16] <yannick> elmo, thank you.
[03:16] <beuno> coppro, you need to be part of the beta team
[03:16] <beuno> although, I think, it's going to be released soon-ish to everyone
[03:17] <beuno> you can request to join the beta team if you'd like:  https://launchpad.net/~bzr-beta-ppa
[03:17] <beuno> uhm, wrong url
[03:18] <beuno> https://launchpad.net/~launchpad-beta-testers
[03:18] <beuno> that's the one
[04:30] <tgm4883_laptop> Is there a way to reupload something to my PPA?  I am getting rejected emails saying "MD5 sum of uploaded file does not match existing file in archive"
[04:35] <persia> tgm4883_laptop: Don't modify orig.tar.gz, and don't use -sa unless you bump the upstream version.
[04:37] <tgm4883_laptop> persia, hmm, I didn't think I was modifying the orig.tar.gz.  I am using -sa, but I did think I was bumping the upstream version (I went from 6~svn145-0ubuntu1 to 6~svn170-0ubuntu1)
[04:38] <persia> tgm4883_laptop: That should be an upstream version bump.  In that case, I have no idea.  Sorry.
[04:39] <tgm4883_laptop> my only thought is that it somehow conflicts with the other upload that i'm doing
[04:39] <tgm4883_laptop> which is
[04:39] <tgm4883_laptop> which is the same package, but one for hardy and one for intrepid.  the hardy one has ~hardy at the end of the version
[04:39] <stgraber> hey, I don't think any admin is around but when one will can he please cancel all the vcs-imports for: https://code.launchpad.net/ltsp-cluster
[04:40] <stgraber> as we need to clean our code, I prefer to do the cleaning in SVN, then use bzr-svn and push to LP myself
[04:40] <tgm4883_laptop> it's strange, as it seems whichever release version gets there first is ok, but the other one fails
[05:22] <jamesh> tgm4883_laptop: do the hardy and intrepid uploads have identical orig.tar.gz files?
[05:22] <tgm4883_laptop> jamesh, yes
[05:22] <jamesh> for this sort of case, you should have a single orig.tar.gz and different diff.gz files
[05:23] <tgm4883_laptop> ok, I have a development dir for hardy stuff and one for intrepid.  Should I then not do get-orig-source stuff for the hardy stuff?
[05:23] <jamesh> tgm4883_laptop: the error you got suggests that they aren't identical.  Are you sure their MD5 sums match?
[05:24] <tgm4883_laptop> jamesh, it might be slightly different, as the time is different when I do the svn pull for g-o-s, but it pulls the same svn revision
[05:24] <jamesh> tgm4883_laptop: that is your problem.
[05:25] <tgm4883_laptop> ok, should I just use the same orig.tar.gz then from the intrepid build?
[05:25] <jamesh> yes.
[05:25] <tgm4883_laptop> ok
[05:25] <tgm4883_laptop> one more question then
[05:25] <tgm4883_laptop> actually, nm, I think it's the same issue
[05:25] <tgm4883_laptop> i'll do that and report back, thanks
[05:26] <jamesh> tgm4883_laptop: the files for all the distro releases are served from a single directory, so if files have the same name they need to be identical
[05:27] <tgm4883_laptop> jamesh, ok, then is there a way to re-upload the package to ppa?
[05:27] <tgm4883_laptop> cause it says it's already uploaded
[05:27] <jamesh> tgm4883_laptop: removing the .upload file will probably do the trick
[05:27] <jamesh> tgm4883_laptop: but you probably need to regenerate the .dsc and .changes files
[05:28] <jamesh> since they contain checksums for the .orig.tar.gz
[05:28] <tgm4883_laptop> will do
[05:28] <tgm4883_laptop> thanks
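The root cause jamesh identifies can be reproduced at the gzip layer alone. A sketch (content and timestamps purely illustrative) of why two builds of the same svn revision get different MD5 sums, and why reusing one orig.tar.gz fixes the rejection:

```python
import gzip
import hashlib

# Stand-in for two get-orig-source runs over the same svn revision: the
# payload is identical, but gzip stamps the build time into its header.
content = b"same svn revision, exported twice\n"
hardy_orig = gzip.compress(content, mtime=1220000000)     # built at time T1
intrepid_orig = gzip.compress(content, mtime=1220000360)  # built six minutes later

# Same payload, different MD5 sums -- exactly the PPA rejection seen above.
assert hashlib.md5(hardy_orig).hexdigest() != hashlib.md5(intrepid_orig).hexdigest()

# The fix: build (or copy) the orig.tar.gz once and reuse it for both series,
# since the archive serves every release from one pool and keys files by name.
rebuilt = gzip.compress(content, mtime=1220000000)
assert rebuilt == hardy_orig
```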
[05:40] <persia> tgm4883_laptop: If you've an automated construction of your orig.tar.gz, you may want to consider gzip -9nf to reduce the chances of md5sum skew for unmodified contents.
[05:49] <jamesh> I wonder if "bzr export" does repeatable tarballs?
[05:49] <mwhudson_> jamesh: i don't think it does, iirc there's a bug report about that somewhere
[05:50] <jamesh> it could grab file timestamps from the "last changed revision" in the inventory
[05:52] <persia> jamesh: mwhudson: No.  For two reasons: firstly, the timestamps, and secondly that most people just tar & gzip without the right arguments to not include md5sum adjustments as part of the tar & gzip process.
[05:52] <persia> Also note that it breaks some VCS workflows if you preserve repo timestamps.  Consider the case of `bzr revert foo; make`
[05:52] <jamesh> persia: "bzr export" can generate a .tar.gz from a bzr revision
[05:53] <jamesh> persia: and it should be possible to make the output only depend on the input revision
[05:53] <jamesh> rather than the time you ran the command or any other changes
[05:53] <persia> jamesh: I'm absolutely certain that it does so in a manner that doesn't call the special hooks in tar and gzip to not track either the original (temporary) tarfile name or the timestamp thereof.  Further, I suspect it doesn't force compression of everything (when it does not preserve space), so one gets different sets of compression in different environments.
[05:54]  * mwhudson__ applied the windows solution
[05:54] <persia> jamesh: Right, but for the "create tar.gz" case, I want the timestamp at which the file was last committed, and for the working directory case, I want the timestamp when the file was last modified for any reason.
[05:54] <jamesh> persia: it uses the Python tarfile module to create the tarball
[05:54] <jamesh> so on-disk timestamps don't come into the equation for that part of the process
[05:55] <persia> jamesh: OK.  Then it's certainly broken, since that's known to not preserve md5sums.
[05:55] <jamesh> how so?
[05:56] <persia> The way the compression is done by that module will generate a different md5sum for each creation, because the time the tarfile is created ends up being in the tarfile.
[05:56] <persia> Note that there are plenty of use cases where this is a good thing, just not for md5sum preservation for Debian-format packaging.
[05:57] <persia> I suspect there is a way to hint it, but haven't dug through that code to determine the right set of hints.
[05:57] <jamesh> hmm.  Looks like it writes the current timestamp into the gzip header
[06:00] <persia> jamesh: Precisely :)
[06:01] <persia> For command-line gzip, calling with -9nf both skips including the current timestamp and forces compression of everything, avoiding possible platform-based variance.
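Concretely, the timestamp persia is referring to lives in bytes 4..8 of the gzip header; `gzip -n` (or `mtime=0` with Python's gzip module) zeroes it. A quick sketch:

```python
import gzip
import struct

# gzip header layout: magic(2) method(1) flags(1) MTIME(4, little-endian) ...
stamped = gzip.compress(b"payload", mtime=1234567890)
(mtime,) = struct.unpack("<I", stamped[4:8])
assert mtime == 1234567890

# The equivalent of `gzip -n`: a zeroed MTIME field, so repeated builds of
# the same content produce identical bytes.
zeroed_a = gzip.compress(b"payload", mtime=0)
zeroed_b = gzip.compress(b"payload", mtime=0)
assert zeroed_a[4:8] == b"\x00\x00\x00\x00"
assert zeroed_a == zeroed_b
```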
[06:06] <jamesh> persia: I agree that "bzr export" as it is now won't produce repeatable archives, but that doesn't mean that it couldn't.
[06:07] <persia> jamesh: True.  I'm just not sure if it should.
[06:07] <jamesh> persia: the problems seem to be (1) the gzip header, and (2) the tarfile entries it creates use current time (with a TODO comment indicating that it'd be good to do otherwise ...)
[06:07] <persia> jamesh: Essentially, while it makes life significantly easier for those using bzr export to generate orig.tar.gz files for Debian-format packaging, I'm not convinced it doesn't break some other workflow.
[06:08] <persia> jamesh: Yep.  Those are the two places where one needs to adjust things to get repeatable md5sums.
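Addressing both of those places in one go, here is a hypothetical sketch of a repeatable export: pin every tar entry's mtime to a chosen (e.g. last-commit) time, and zero the gzip header timestamp. This is not how bzr export worked at the time; the helper name and inputs are illustrative:

```python
import gzip
import hashlib
import io
import tarfile

def repeatable_targz(files, commit_time):
    """Build a .tar.gz whose bytes depend only on content and commit_time."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in sorted(files):            # fixed member order
            data = files[name]
            info = tarfile.TarInfo(name)
            info.size = len(data)
            info.mtime = commit_time          # fix (2): not the current time
            tar.addfile(info, io.BytesIO(data))
    # fix (1): mtime=0 zeroes the gzip header timestamp, like `gzip -n`
    return gzip.compress(buf.getvalue(), compresslevel=9, mtime=0)

one = repeatable_targz({"hello.c": b"int main(void){return 0;}\n"}, 1220000000)
two = repeatable_targz({"hello.c": b"int main(void){return 0;}\n"}, 1220000000)
assert hashlib.md5(one).hexdigest() == hashlib.md5(two).hexdigest()
```

With both timestamps pinned, the output depends only on the input revision, which is exactly the property needed for md5sum-stable orig.tar.gz files.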
[06:17] <jamesh> I'm still not quite sure why this would cause problems elsewhere.
[06:33] <persia> jamesh: Sorry.  Distracted.
[06:33] <persia> Imagine the case where one wants to know *when* the orig.tar.gz was constructed.  For example, one is upstream, and one has this tarball, and one wonders if it's accurate.
[06:34] <persia> Well, that's probably more useful for foo.tar.gz (no *orig*), but still.
[06:34] <jamesh> persia: if the idea is to produce an accurate representation of a revision, why does it matter when it was created?
[06:34] <jamesh> if you know that the command to generate it produces repeatable, accurate results
[06:35] <persia> jamesh: Hrm.  Perhaps not.  Is there any possibility that you and I could have different branches with the same revision number and generate two tarballs that appear to be the same revision?
[06:35] <jamesh> persia: revision numbers are not globally unique, so that is a distinct possibility
[06:35] <jamesh> but timestamps will be the least of your worries in that case.
[06:35] <persia> Yeah :)
[06:36] <persia> I'm just trying to think of various ways that having it generate something suitable for orig.tar.gz might break the case for tar.gz.
[06:36] <persia> Mind you, I'm an avid user of orig.tar.gz, and am not upstream for anything, so perhaps I'm not the ideal person to identify possible regressions.
[06:42] <maco> is this an ok place to ask about the launchpad api?
[06:49] <persia> maco: There isn't a better one :)
[06:50] <maco> persia: ok
[06:51] <maco> if i do  bug_filter.add_option("bugnumber", int(bug.strip()))  why does it give a ValueError on that line?  i thought it was because i was using a string and it says bugnumber will be an int, but trying it as an int also gives the error
[06:52] <maco> er, hrm maybe that doesn't make sense
[06:52] <maco> um, i'm using the python-launchpad-bugs package and trying to query launchpad through the api
[06:52] <tgm4883_laptop> persia, jamesh using the same orig.tar.gz fixed it.  Thanks again
[07:28] <maco> is there a field.something to search for bug numbers?
[07:28] <maco> it turns out the python bug search package for launchpad doesn't have that as a search option, so i'm wondering if there's a way to add that field as a search parameter
[07:30] <BjornT> maco: no, there isn't.
[07:30] <jml> hello BjornT
[07:30] <BjornT> hi jml
[07:37] <maco> BjornT: i found how to get a bug by number on one of the other wiki pages
[08:01] <thekorn> hi, what's the best place to report API bugs, is there a subproject or are you using tags?
[08:18] <jamesh> thekorn: https://bugs.launchpad.net/launchpadlib might be appropriate
[08:20] <thekorn> jamesh, ok, but what's the best target for bugs I can also reproduce with 'raw' http request, without using launchpadlib
[08:21] <jamesh> thekorn: dunno.  But filing bugs against launchpadlib will probably get the attention of the right people
[08:21] <jamesh> I guess it depends on what the bug is exactly.
[08:21] <thekorn> jamesh, ok, thanks, will file it there
[08:24] <thekorn> well, I get a 503 error for any operation on huge objects, like requesting https://api.staging.launchpad.net/beta/bugs/1/messages
[09:35] <didrocks> Hi everyone!
[09:36] <didrocks> I don't understand why my upload to my PPA was rejected: "PPA uploads must be for the RELEASE pocket", when trying to upload my iptables backport for hardy
[09:36] <didrocks> I have "Distribution: hardy-backports"
[09:36] <persia> didrocks: "hardy-backports" is not a RELEASE pocket.  You need "hardy".
[09:37] <bigjools> should be just "hardy"
[09:37] <stdin> you can only have "hardy", ie: no -backports
[09:37] <persia> (And yes, this means that the result is wholly unsuitable for later upload to hardy-backports, but that's a different issue)
[09:37] <bigjools> we're thinking about adding pocket support back to PPAs, FWIW
[09:38] <didrocks> ok, so, for testing I can just put hardy. But when I have to change it for real backporting, I have to change it to ...-backports
[09:38] <didrocks> oki bigjools :)
[09:39] <didrocks> thx to everyone
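For reference, the series the PPA checks is the one named in the first line of the changelog stanza. A hypothetical entry prepared for PPA testing (package version, text, and author are illustrative):

```
iptables (1.4.0-1ubuntu1~hardy1) hardy; urgency=low

  * Backport to hardy.

 -- Some Uploader <uploader@example.com>  Mon, 08 Sep 2008 09:38:00 +0000
```

A later upload aimed at the real hardy-backports pocket would name hardy-backports there instead, which is why the PPA package isn't directly reusable for that purpose.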
[09:40] <persia> bigjools: So that one has separate PPAs for e.g. hardy vs. hardy-backports?  And one can have an e.g. hardy-proposed PPA?
[09:40] <bigjools> persia: pretty much.  But we're still at the "thinking about it" stage so don't get your hopes up too much :)
[09:41] <persia> bigjools: Aside from the use case of package review through PPAs (which is missing too many bits to be very useful), for what would this be used?
[09:41]  * wgrant is more hoping that multiple PPAs and/or flexible components reappear.
[09:41] <wgrant> Pockets aren't so useful.
[09:42] <bigjools> it uncomplicates a lot of specialised processing for us internally in addition to what you said (which is high on our priority list BTW)
[09:42] <bigjools> multiple PPAs are also on our list
[09:42] <wgrant> I guess P3As have to special-case that for the security PPA among others?
[09:43] <bigjools> cough, yes :)
[09:43] <wgrant> s/PPA/P3A/
[09:47] <persia> bigjools: Creating a review process is high on your priority list?  Are there specs available?  This has been widely discussed, and there is a known set of 25 or so requirements that might be a good input.
[09:48] <bigjools> persia: no specs yet, but I would be very pleased to get your input
[09:48] <persia> bigjools: How about a session at UDS?
[09:49] <bigjools> might be a bit late by then, although it could be discussed.  I am not going but Celso and Muharem will be there
[09:49] <persia> I know several other people would also have input, and announcing that might be a good way to get people to bring their thoughts to the room.
[09:49] <persia> bigjools: Surely you'll be attending remotely, no?
[09:50] <bigjools> we're thinking explicitly about package sync reviews right now, but adding PPAs into the mix is also good
[09:50] <bigjools> persia: the time difference would make it hard but if there's a specific session that's of interest I would listen in
[09:50] <persia> Actually, the case for syncs and updates is far more interesting to me.  Whether PPAs are part of it is an implementation detail.
[09:51] <persia> bigjools: Unless you're limited somehow, I'll recommend adjusting your sleep schedule for the week.  I've generally found it advantageous to attend remotely when I couldn't be there in person.
[09:51] <bigjools> persia: try telling my kids to adjust their sleep schedule :)
[09:52] <persia> Perhaps we could try to schedule for the early morning / late evening to better match your needs?
[09:52] <persia> bigjools: Well, that's harder :)
[09:52] <bigjools> persia: yeah, if it's early in the day in CA then it's easier for me
[09:55] <persia> bigjools: In that case, I'd recommend asking for whoever is coordinating LP-oriented sessions to have it earliest.  I'm not sure what the tracks will be this time, but having it in a MOTU-type track will probably expose the majority of the blind reviewers (who need the UI support more)
[09:55] <bigjools> btw I don't think PPAs are an implementation detail, they are an additional part of the interaction to think about.  You could consider them as little upload queues for Ubuntu (which is what is happening for security)
[09:56] <wgrant> I think that is an interesting idea for a workflow change.
[09:57] <bigjools> persia: ok, thanks, I'll try and fix that.  BTW if you could let me have any input you have on this topic ahead of time it would be very useful.
[09:57] <persia> bigjools: While it might make the use case for -security better, having been involved in two flavours working out of PPAs and trying to get into the main archive, I'll say it's *extremely* poor for an upload queue.
[09:58] <bigjools> persia: is that because the tools were bad though?
[09:58] <persia> bigjools: Sure.  I'm not sure how much time I'll have beforehand, but I'll send you some stuff if I get it together.
[09:58] <bigjools> awesome, thanks
[09:58] <persia> bigjools: The large issue is that a PPA allows interdependencies between packages that don't match the archives, which can complicate things.
[09:59] <bigjools> true
[09:59] <persia> The medium issue is that a PPA encourages multiple revisions to address issues, and often the changelog needs to be elided when merging to the primary repositories.
[09:59] <persia> The small issue is that the PPA UI breaks at about 60 packages.
[10:00] <persia> The small issue is fairly easy to fix with some thought.
[10:01] <persia> The medium issue could maybe be addressed, if it is decided that PPAs are for review, rather than user distribution, but that breaks many current PPA use cases.
[10:01] <bigjools> pocket support could help the former maybe?
[10:01] <persia> The large issue is just difficult.
[10:01] <persia> Which is "the former"?
[10:01] <bigjools> "medium issue"
[10:02] <bigjools> what is the problem with the UI at 60 packages?
[10:02] <jamesh> persia: why should changelog entries be elided/removed?
[10:02] <persia> It's not batched, so if there are > 60 packages, one cannot operate on the alphabetically large packages without specifically searching for them by name.
[10:03] <jamesh> most package changelogs mention shitloads of releases that have never been in Ubuntu archives
[10:03] <jamesh> due to Debian heritage
[10:03] <jamesh> why should PPAs be different?
[10:03] <persia> jamesh: Because 1) we don't want to jump from -3ubuntu1 to -3ubuntu18 unless we need to, and 2) because often changes get tested and reverted, and it's just embarrassing noise.
[10:03]  * wgrant wishes that debian/changelog would crawl away into a corner and die.
[10:03] <bigjools> heh
[10:03]  * persia likes debian/changelog
[10:04] <wgrant> It's being obsoleted by real VCSes.
[10:04] <jamesh> persia: so get people to use version numbers that fit between -3ubuntu1 and -3ubuntu2 for their testing packages?
[10:05] <jamesh> this seems disconnected to maintaining the change history
[10:06] <persia> jamesh: Yes, but that then means that one needs to perform special operations before pocket-copying, which takes us back to PPAs not having packages suitable for direct upload to Ubuntu (see the initiating discussion about use of hardy-backports)
[10:07] <jamesh> persia: I guess I see the problem as a social one rather than a technical one
[10:07] <jamesh> [not that my opinion matters much in this case]
[10:07] <persia> wgrant: Well, I guess.  If everything was in a real VCS, and the infrastructure was in place, and it worked, I could probably live with autogenerated debian/changelog, but I like to read it as a user.
[10:08] <mok0> persia: you mean, you like to read it as a developer
[10:08] <persia> jamesh: Well, maybe social, or maybe policy, but it's essentially that if we wish to use PPAs as an upload queue with pocket-copying into the primary archive, we either give up on the conventions for revision numbering, or we allow them to be clobbered in PPAs, and give up on using PPAs for end-user distribution
[10:09] <persia> mok0: No.  I mean I like to read it as a user.  I was reading debian/changelog for each package update *long* before I was a developer.
[10:09] <mok0> persia: Not many users care about changelog, though...
[10:10] <jamesh> persia: so, if you weren't using a PPA there would be rules that say "if you don't do X, your package will be rejected"
[10:10] <mok0> persia: If you want to know about new features in a package, it won't help you
[10:10] <bigjools> persia: if we have -proposed in PPAs O don't think that's a problem
[10:10] <persia> mok0: Well, it depends.  I venture to say that many users do, and those that do are often proxy for larger groups of users (administrators of larger deployments, etc.)
[10:10] <bigjools> s/O/I/
[10:10] <jamesh> why can't you use those same rules when deciding whether to copy a PPA package to the main distribution?
[10:10] <persia> mok0: Why won't it help me?
[10:10] <persia> jamesh: precisely.
[10:11] <mok0> persia: because those new features are not described there
[10:11] <mok0> (usually)
[10:11] <persia> bigjools: Huh?  I'm not sure I understand how the presence or absence of -proposed affects the revision nomenclature issue.
[10:11] <jamesh> you'll have some PPAs that contain crap that will never be appropriate to copy, and some that are great
[10:11] <persia> mok0: Typically either they are described there, or there is a mention of a new upstream release, and I can read /usr/share/doc/$(package)/changelog.gz
[10:12] <jamesh> the presence of crap doesn't really matter though, provided it is contained in its own unsupported PPA
[10:12] <persia> (and yes, this means that every day when I upgrade Ubuntu I need to go to extra effort downloading sources now, but I consider that a bug)
[10:12] <mok0> persia: Right, upstream changelog is where you look for features
[10:12] <bigjools> persia: I'm talking about the clash between "end-user distribution" and "revision numbering"
[10:12] <bigjools> jamesh: right
[10:12] <persia> mok0: Depends on the feature.  Not always, and *extremely* rarely for stable installations.
[10:12] <mok0> bigjools: you are right, release info does not logically belong in changelog
[10:13] <persia> bigjools: Ah, so the -proposed PPA might not enforce revision numbering rules, and the $(RELEASE) PPA would enforce them?  That sounds like a workable solution.
[10:13] <mok0> bigjools: it's not logical to put build information in the changelog
[10:13] <mok0> bigjools: and it wouldn't appear in a VCS either
[10:14] <bigjools> persia: how does this process work right now in Ubuntu?  can we better it?
[10:14] <persia> bigjools: Which process?  Package review?
[10:15] <bigjools> persia: testing and versions etc.
[10:15] <persia> bigjools: Are you still about in ~30 minutes?  I'm trying to multitask just now, and while I'd be very happy to explain, I'll do better then.
[10:16] <persia> (unless someone else wants to explain it)
[10:16] <bigjools> persia: yeah no problem
[10:16] <persia> bigjools: Thanks :)
[10:16] <bigjools> I know what it's like :)
[10:51] <persia> bigjools: Sorry about that.
[10:51] <persia> So, for reference, bug #179857 was filed a while back, because the way it's done is very awkward, but I'll try to explain.
[10:52] <persia> There are essentially 7 different sorts of workflows, which fall into a few categories.
[10:52] <persia> There's NEW packages, which go through REVU, get approved by two developers, and get uploaded into the NEW queue.
[10:53] <persia> For NEW packages, a bug is filed against "Ubuntu".  It gets assigned to someone, and that person pushes the package through REVU.
[10:53] <persia> When the package is accepted, as the bug isn't filed against a package (the package didn't exist at the time the bug was opened), the bug doesn't get closed, although the changelog specifies the bug closure.
[10:54] <persia> The packager then manually closes the bug.
[10:54] <persia> Whilst the package is on REVU, it goes through several iterations, fixing various issues pointed out by the reviewers.
[10:55] <persia> Often these changes include changes to the upstream version numbering scheme, the revision numbering scheme, whole and entire replacement of various sections of the package, sweeping changes to orig.tar.gz, inadvertent switching between native packaging and non-native packaging, etc.
[10:55] <persia> As such, it's fairly safe to say that the first revision uploaded to REVU is likely not something that would upgrade safely if distributed to end-users.
[10:56] <persia> Sometimes people put one or more REVU revisions also in a PPA, but those are typically deleted, as they are often not the final version, and subsequent changes break the current PPA model.
[10:57] <persia> There's updating packages with some new upstream code.
[10:57] <persia> (and perhaps I can't count: I'm suddenly unsure from where I got seven).
[10:57] <bigjools> :)
[10:57] <mok0> Wow
[10:58] <persia> For these, the updater pulls in the new code from upstream, and merges with the existing debian packaging.  The diff.gz file is then attached to a bug, and either ubuntu-main-sponsors or ubuntu-universe-sponsors subscribed.
[10:59] <persia> Each of these sponsoring groups has a slightly different internal workflow, but generally it involves pulling the orig.tar.gz from somewhere (to avoid sneaky people from submitting a subverted orig.tar.gz), reconstructing the package, and testing.  Comments are done in the bug, and new diff.gz files uploaded.  Eventually some candidate is deemed sufficiently good, and the developer uploads the reconstructed package, closing the bug from the changelog.
[11:00] <persia> There's processing merges with Debian (or some other external apt-source from which we sync or merge)
[11:01] <persia> For these, the merger will prepare a candidate package, create a debdiff against the merge source, and submit this in a bug to one of the sponsoring queues.
[11:01] <persia> The sponsor will pull from the source directly, apply the debdiff, and investigate the resulting package.  As with updates, discussion happens in a bug, and new revisions are prepared.
[11:02] <persia> There's syncs: generally the sync requestor only files a bug for these, and the sponsor is expected to download the current and candidate sources, compare and review, and determine if the sync is appropriate.  Discussion in sync requests is minimized, as these will be presented to the archive-admins, who prefer short bugs.
[11:03] <persia> (having a button to allow anyone who can upload to a given pocket be able to sync to that pocket would be a huge win here)
[11:03] <bigjools> that's the plan
[11:04] <bigjools> s/pocket/component or package set/
[11:04] <persia> There's bugfix updates, in which the bug fixer will prepare a candidate debdiff fixing a specific bug (often just wrapping a patch from upstream, from another distro, or from a user), and attach that to the bug that would be fixed.  This bug gets pushed to one of the sponsor queues for review, and discussion within the bug.
[11:05] <persia> There's also multiple-bugfix updates, in which someone will prepare a debdiff fixing several bugs, attach it to a new bug, reference the new bug in the old bugs, and subscribe the new bug to the sponsors queue for review and discussion.
[11:06] <persia> And I'm sure I've lost count, because those 6 are the only ones that come to mind.  There are alternate workflows in place, but not official.
[11:06] <bigjools> ok that's great info, thanks very much
[11:06] <persia> These include both people putting a candidate package somewhere a developer can pull it with dget, and people submitting bzr branches.  Some developers will accept those, but one's chances for packages not maintained by some specific person are low in either case.
[11:07] <persia> For the universe sponsors queue, there's also a bit of a bug status dance, as described at https://wiki.ubuntu.com/MOTU/Sponsorship/SponsorsQueue
[11:08] <persia> So it gets put as "In-Progress" while the candidate is being prepared, and back to "Confirmed" when submitting for review.  There's no published process for the Main sponsors queue, but many preparing candidate revisions will apply the same guidelines used for Universe to Main.
[11:08] <persia> Info dump complete.  Now accepting questions.
[11:09] <persia> And yes, "component" is better.  Sorry for the misuse of the term.
[11:10] <persia> Is there a bug number for uploader-has-access-to-sync-button?  I'd like to subscribe.
[11:10] <bigjools> I need time to digest that
[11:11] <persia> Understood.  I'm about a fair amount.  At least wgrant and siretart have been involved in various previous discussions, and may also be able to answer questions.
[11:11] <bigjools> let me find the spec
[11:13] <siretart> persia: I didn't read the beginning of the discussion, what are you currently discussing?
[11:13] <siretart> (was changing servers)
[11:15] <persia> siretart: Overview of the six different sorts of workflows (or 11 if you distinguish universe from main) used to accept updates from those without direct upload permission, and mention of the two alternate workflows (one deprecated, one experimental).  All in reference to bug #179857
[11:16] <persia> siretart: http://pastebin.ubuntu.com/47393/ has a transcript of my summary, if that's any help.
[11:17] <siretart> persia: ah, I see. thanks
[11:18] <bigjools> persia: you're already subscribed to the spec for this
[11:18] <persia> siretart: I mostly mentioned you because you're often in this channel, and I know you attended the session in Boston about that.
[11:18] <siretart> persia: feel free to hilight me even more often :)
[11:18] <bigjools> https://blueprints.edge.launchpad.net/soyuz/+spec/sync-workflows
[11:18] <persia> bigjools: heh.  I guess it just doesn't get a lot of traffic then :)
[11:18] <bigjools> it's new :)
[11:19] <bigjools> I am going to paste your comments above into it if that's ok?
[11:20] <persia> bigjools: The majority of what I've just said isn't that relevant to that spec, but you're welcome to put it there if you've nowhere else to capture it.
[11:20] <bigjools> umm right, brain fart
[11:20] <persia> bigjools: Also, any chance that a copy of that spec could go to wiki.ubuntu.com?  I suspect most of the subscribers can't currently read it.
[11:21] <bigjools> I will check on that and get back to you
[11:21] <siretart> perhaps the spec should be moved to wiki.ubuntu.com, since it seems to involve discussing this with MOTUs
[11:21] <siretart> not only copied
[11:22] <persia> That would be nifty, if it doesn't break some internal mapping of specs on the launchpad wiki.
[11:23] <bigjools> the problem is that they tend to discuss code changes, and until the day LP is OSS that's a bit sensitive :)
[11:23] <siretart> FWIW, I think that 'distributed-development' and 'native-source-syncs' will change the situation quite a bit.
[11:24] <siretart> bigjools: well, perhaps you can split the spec in two parts: the workflow discussion and the implementation details? people won't be interested that much in the latter anyways..
[11:25] <bigjools> yeah, I will bring this up in our meetings
[11:25] <wgrant> native-source-syncs would be a prereq for sync-workflows, wouldn't it?
[11:25] <wgrant> bigjools: Aha, this must be the first good reason that I've seen for keeping specs private.
[11:26] <persia> I think we ought to keep distributed-development separate from other things, where possible.  While I'd like to see it sooner, I'm not sure it's a direct dependency of anything, and don't want to either delay it or other things because of a false dependency.
[11:26] <bigjools> it's probably the only reason
[11:26] <wgrant> bigjools: All I've heard previously is "No."
[11:27] <persia> Actually, it's probably interesting for a lot of specs to be published more widely, if only the proposed workflow and use cases.  Having closed implementation for closed-source makes sense.
[11:27] <persia> (setting aside the open/closed source issue)
[11:29] <bigjools> wgrant: re. n-s-s and s-w, yeah, the lines are blurred across those two anyway
[11:29] <wgrant> One seems fairly useless without the other.
[11:30] <bigjools> n-s-s is nearly complete anyway
[11:30] <wgrant> This is good.
[11:31] <wgrant> Is gina almost in a state to make n-s-s useful?
[11:32] <bigjools> getting there!
[11:33]  * siretart still wonders if n-s-s will be able to sync packages that have not been imported into launchpad yet
[11:33] <gour> BjornT: ping
[11:33] <bigjools> no it won't
[11:34] <siretart> meh, so how are we supposed to sync packages from debian then?
[11:35] <bigjools> because debian will be in LP :)
[11:36] <gour> BjornT: i sent another test (https://bugs.staging.launchpad.net/gnumed/+bug/270704) which went through, and it looks like it is the same case as the one from yesterday - both were signed with mimegpg and that's why they are correct - the whole message is signed. however, this is the case we need: gpg-signing & forwarding automatic-reports. still having the bug fixed would be nice
[11:37]  * gour thinks it would be nice that bot could discern staging from 'real' bugs
[11:39] <siretart> bigjools: do you have any estimate or guess how big will be the delay between a DD (e.g. me) uploading a package to debian/experimental and the package becoming available in lp to sync?
[11:40] <siretart> bigjools: and do you intend to sync http://debian-multimedia.org or http://e-tobi.net (or whatever else) as well?
[11:40] <persia> actually, having an official list of sync/merge sources for oversight by the archive-admins would resolve a fair number of discussions.
[11:41] <persia> Especially if one could identify the source of a given package, rather than always assuming it to be Debian
[11:41] <bigjools> siretart: I can't say for certain on either question yet, but I can let you know
[11:41]  * wgrant tries to think of how those could be represented in the data model.
[11:41] <persia> wgrant: Wouldn't you need Origin for each orig.tar.gz?
[11:42] <wgrant> persia: I'm not seeing the relevance.
[11:42] <persia> Err, no, that wouldn't work, because Origin: can change.
[11:43] <persia> wgrant: Erm.  Just thinking about things about which I have too little information :)  I'll go be a user somewhere else for a bit, and look forward to someone knowledgeable having an idea :)
[11:43] <wgrant> Deferring to bigjools sounds good, yes.
[12:07] <gnomefreak> does PPA handle Debian packages?
[12:08] <wgrant> gnomefreak: PPAs only build for Ubuntu distroseries.
[12:08] <gnomefreak> wgrant: thanks thought so but had to ask
[12:09] <Ziroday> pulling from bazaar here in Singapore seems to be extremely slow, does launchpad have a local asian mirror?
[12:10] <mwhudson> sadly, no
[12:10]  * mwhudson is in new zealand
[12:10]  * wgrant also feels the pain in .au
[12:10] <mwhudson> Ziroday: is the branch in packs format?
[12:11] <mwhudson> packs >> knits for high latency links
[12:11] <Ziroday> mwhudson: not too sure
[12:11] <mwhudson> Ziroday: it should say on the branch page
[12:11] <Ziroday> hmm where is the closest server then?
[12:12] <Ziroday> mwhudson: yes its in knits
[12:12] <Ziroday> is there a way to see how many kB/s you're getting when pulling?
[12:13] <mwhudson> Ziroday: there are no mirrors of the launchpad branches
[12:13] <Ziroday> mwhudson: still very new to all of this https://launchpad.net/entertainer, not sure
[12:14] <Ziroday> that's the program I am trying to pull
[12:14] <mwhudson> oh heh
[12:14] <mwhudson> paul hummer == rockstar, another launchpad dev...
[12:14] <Ziroday> haha
[12:15] <mwhudson> Ziroday: this branch? https://code.edge.launchpad.net/~entertainer-releases/entertainer/trunk
[12:15] <mwhudson> it's actually packs
[12:15] <mwhudson> ("packs containing knits" is confusing, yes)
[12:15] <Ziroday> mwhudson: yeah that one
[12:15] <Ziroday> woops, sorry saw knits thought it was in knits
[12:16] <mwhudson> anyway, the way i usually try to follow progress is just with 'du -sh' in the branch dir :/
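[12:16]  * mwhudson's `du` trick can be wrapped in a tiny helper. A rough sketch (the function name and directory are illustrative, not part of bzr):

```shell
# Hypothetical helper around the `du -sh` trick: report the on-disk size of
# a branch directory, i.e. how much data a long-running `bzr branch` has
# written so far. The function name is made up for illustration.
branch_size() {
    du -sh "$1" | cut -f1
}

# Example while a pull is in progress (directory name is an example):
#   branch_size entertainer-trunk
```

Re-running it every few seconds (or wrapping it in `watch`) gives a crude progress indicator, since bzr itself prints no byte counts here.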
[12:16] <wgrant> I wonder if it would be more efficient to allow one to download a tarball of large branches.
[12:17] <Ziroday> mwhudson: that's a neat trick
[12:17] <persia> The problem with a single large file is that it gets hit by the TCP ACK window issue.  Having lots of small files in parallel can be faster for many environments.
[12:18] <Ziroday> how can you find out how large a branch is?
[12:18] <mwhudson> wgrant: not massively, i think
[12:20] <spiv> Ziroday: "bzr info -v URL" will tell you.
[12:20] <Ziroday> spiv: thanks
[12:21] <spiv> Or just in this case you could glance at http://bazaar.launchpad.net/~entertainer-releases/entertainer/trunk/.bzr/repository/packs/ and add up the file sizes ;)
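[12:21]  * spiv's add-up-the-pack-files suggestion can also be scripted; a sketch assuming the packs directory has been fetched to a local path (the helper name is invented):

```shell
# Sum the sizes, in bytes, of all .pack files under a local copy of a
# branch's .bzr/repository/packs directory. Helper name is illustrative.
total_pack_bytes() {
    # cat every pack file and count the bytes; avoids non-portable
    # find -printf, and tr strips the padding some wc variants emit
    find "$1" -name '*.pack' -exec cat {} + | wc -c | tr -d ' '
}
```

The result approximates the bulk of the download for a fresh branch, ignoring indices and metadata.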
[14:25] <uws> Who is the right person to contact about vcs-import failures on Launchpad?
[14:26] <uws> The Webkit vcs import has failed numerous times already: https://code.launchpad.net/~vcs-imports/webkit-open-source/trunk
[14:26] <uws> upstream SVN is a beast, so the initial import takes ages.
[14:26] <uws> however, I have a working branch at https://code.launchpad.net/~uws/webkit-open-source/webkit.trunk
[14:27] <uws> the vcs-import maintainer may want to branch of my branch first, and have vcs-import update from there onwards (that works fine and doesn't take ages)
[14:27] <spiv> uws: mwhudson or abentley I think
[14:27] <uws> mwhudson, abentley: ^ see my comments about webkit vcs mirror
[14:27] <spiv> uws: which tool did you use to make the import?
[14:27] <uws> spiv: bzr-svn
[14:27] <uws> took ages
[14:28] <uws> updating it is doable though
[14:28] <uws> at least with bzr-svn 0.4.12 which had a performance killer bug fixed
[14:28] <spiv> Yeah, recent bzr-svn releases have been getting much faster.
[14:28] <uws> if you branch my webkit bzr mirror, you can happily  bzr pull http://upstream-webkit-svn/.../trunk/
[14:29] <abentley> uws: looks like it's due to a timeout.  Stinky.
[14:29] <uws> abentley: Yeah, I had the same issues, but I now have a fully up to date branch you might want to use for bootstrapping
[14:29] <uws> I pasted the url above
[14:30] <abentley> uws: So you've effectively created an svn mirror?
[14:31] <uws> abentley: Yes, using bzr-svn. Took days, had to restart it several times, but is fully up-to-date now
[14:31] <uws> and pushed into lp already
[14:31] <uws> I'm offering it to you so you can reuse it, and then have vcs-mirror update it
[14:31] <uws> that would mean I wouldn't have to do that myself anymore ;)
[14:31] <abentley> uws: We can't use bazaar as a starting point, because we don't use bzr-svn.  We use a different converter.
[14:32] <abentley> But what's this http://upstream-webkit-svn/.../trunk/ URL?
[14:33] <uws> hold on
[14:33] <abentley> Oh, shorthand for the real upstream.
[14:33] <uws> abentley: http://svn.webkit.org/repository/webkit/trunk
[14:33] <uws> yeah
[14:33] <uws> that's where I "bzr pull" from
[14:34] <abentley> So, we can't start from bzr-- we need to start from svn.  If there were a subversion mirror, we could use that instead of the real upstream.  I think.
[14:34] <uws> abentley: Well, branch my bzr branch and bzr push it to your locally running svn server
[14:35] <uws> (if that works)
[14:35] <wgrant> Does bzr-svn store enough info that you could push it to a svn branch?
[14:35] <uws> yes
[14:35] <uws> that's what many gnome people do
[14:35] <wgrant> Well, I know you can do it, but I meant if it would be lossless.
[14:36] <abentley> uws: I'll have to talk to my colleagues about this.
[14:36] <spiv> uws: I don't think that does, sadly, if only because launchpad's converter isn't bzr-svn, so it wouldn't use the properties that bzr-svn sets.
[14:36] <uws> abentley: ok, no hurries. Just wanted to let you know I have an available svn->bzr conversion, so feel free to reuse it
[14:36] <spiv> uws: and the svn repo will have a different UUID, which will probably confuse things.
[14:37] <spiv> I think ideally if there's a good bzr-svn import in the wild already, it'd be best to make that the official import on Launchpad.  The trick is making sure it gets updated.
[14:37] <uws> spiv: uuid is hackable on a local svn mirror ;)
[14:37] <uws> spiv: Well, that one mirror "in the wild" is mine
[14:37] <uws> and I don't update regularly, so I'd rather hand it off to ~vcs-mirror ;)
[14:37] <spiv> Sure :)
[14:38] <abentley> uws: We are interested in supporting bzr-svn, but we're not there yet.
[14:38] <spiv> Right.  I'm not sure how close ~vcs-mirror is to handling this situation nicely.
[14:38]  * spiv shuts up and lets abentley talk, as he has more idea what's going on :)
[14:39] <abentley> spiv: are you in Australia right now?
[14:39] <spiv> Yep.
[14:40] <spiv> Which means I'll be asleep soon.
[14:40] <uws> abentley: How do the updates work currently?
[14:40] <uws> they take incremental svn revisions and stack them on top of the existing bzr branch, right?
[14:40] <abentley> uws: We use the cscvs importer, and we have a couple of machines that run periodically, to update the branches.
[14:40] <uws> if you can put my bzr branch in place of this "existing bzr branch" it might help
[14:41] <abentley> cscvs and bzr-svn use different strategies, and we don't believe they're compatible.
[14:43] <uws> abentley: That means the "official" webkit svn trunk mirror will not be available for the foreseeable future?
[14:43] <uws> too bad, I'll keep my own branch and update from webkit svn then :)
[14:45] <abentley> I don't think this particular import is likely to succeed, unless the upstream fixes their server.
[15:54] <persia> I just encountered a failure-to-upload for most architectures for https://launchpad.net/ubuntu/intrepid/+source/libmatchbox/1.9-4
[15:55] <persia> Is this something that I can work around, or does it need attention by someone who can fiddle with Soyuz?
[15:55] <bigjools> on the phone, be with you in a sec
[15:56] <persia> Thanks :)
[16:17] <geser> persia: "Duplicated ancestry". Wasn't there a bug that builds fail to upload when they just got moved to another component?
[16:18] <persia> geser: I don't know.  If that's it, then that's the problem.
[16:19] <geser> persia: sounds like bug 135610
[16:19] <persia> libmatchbox was just demoted about an hour before I pressed the "rebuild" button.  I'm just not sure what to do next, as I want both this, and matchbox-window-manager to build before the image build run.
[16:20] <persia> geser: Yep.  That's it.  Mind you, in the case of *demotion* due to FTBFS from ogre-model, the advice is hard to follow.
[16:21] <persia> So, I should just retry the builds, and this time they might work, or is there a switch that can be thrown to make it faster?
[16:22] <geser> as the publisher should have run already, you could try whether a build retry works now
[16:23] <persia> I'll do that, and wait for the queues.  The buildds are fairly bored right now, so it oughtn't be an excessive delay.
[16:31] <geser> persia: it got successfully uploaded now
[16:40] <bigjools> persia: I get off the phone and see that you fixed it already :)
[16:41] <persia> geser: Yep :)
[16:41] <persia> bigjools: Is that the best way to fix it?
[16:41] <bigjools> it's as good as any
[16:42] <persia> OK.  Is there a magic button that only retries the upload?
[16:42] <bigjools> nope :(
[16:42] <persia> OK.  Thanks.
[17:08] <persia> Is there a better way to determine when a publisher run completes than polling a Release file?
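[17:08]  * One workaround for that, sketched with no claim to being the official way: watch the `Date:` field of the archive's Release file and treat a change as "publisher run finished". The URL and interval below are examples:

```shell
# Extract the Date: header from a Release file saved to a local path.
release_date() {
    grep '^Date:' "$1" | head -n 1
}

# Hypothetical poll loop against an archive (URL is an example):
#   last=$(wget -qO- "$URL/Release" | grep '^Date:')
#   while [ "$(wget -qO- "$URL/Release" | grep '^Date:')" = "$last" ]; do
#       sleep 60
#   done
```

This is still polling, just against a single small file rather than the whole archive; Launchpad offered no push notification for publisher runs at the time.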
[19:13] <radix> is there a way to get an activity log for a person instead of for a bug?
[19:13] <radix> cause that would be hecka sweet.
[19:15] <persia> radix: Nope.
[19:16] <ephracis> radix: why would you need that anyway?
[19:19] <radix> ephracis: to know what I've done recently
[19:27] <ephracis> radix: haha, bad memory? :P
[19:27] <radix> ephracis: no, I just do such a fantastically large amount of work that any normal human mind cannot keep track of it
[19:28] <ephracis> radix: haha, great job! :)
[19:28] <radix> ;)
[20:29] <thekorn> hi, I've got one question: the API allows to change the content of an attachment,
[20:29] <thekorn> will this also be possible via the web ui, like reuploading?
[20:30] <thekorn> and also, will there be some kind of flags like: this attachment was changed by <user>
[20:42] <kiko> thekorn, not sure -- intellectronica, do you know?
[20:43] <intellectronica> thekorn: i don't think that's in the plan. in fact, i'm not sure if this is not a bug
[20:43] <intellectronica> i don't think you should be able to change an attachment
[20:44] <thekorn> I agree
[20:44] <bac> intellectronica: for download files i explicitly do not allow it
[20:44] <thekorn> changing patches which are already tested is bad
[20:44] <bac> intellectronica: just mark it read only in the interface and it happens
[20:45] <thekorn> and also changing tracebacks etc. which indicates a bug is also bad
[20:46] <intellectronica> thekorn: feel like filing a bug?
[20:46] <thekorn> sure, will do it in a bit
[20:48] <intellectronica> thekorn: cool, thanks
[21:59] <teratorn> is there any way to get email notification of new revisions in a given branch on launchpad?
[22:00] <beuno> teratorn, sure, you can subscribe to it
[22:01] <teratorn> yeah well... I *am* subscribed.. but I don't get mails for revisions that *I* push up
[22:01] <teratorn> I wanted to get the mails so I can forward them to other people
[22:02] <beuno> teratorn, you should, What's the branch's URL?
[22:02] <teratorn> but I guess I have to have them subscribe
[22:02] <teratorn> ah, I think I just have to edit options