[00:00] <slug> wgrant: this is how i created my pbuild: sudo pbuilder create --debootstrapopts --variant=buildd
[00:00] <slug> and then sudo pbuilder build deal.ii_7.0.0-1ubuntu1ppa7~natty1.dsc
[00:00] <slug> on a 32 bit machine running ubuntu 11.04
[00:01] <wgrant> Did it build the docs?
[00:01] <wgrant> I forget if pbuilder calls binary-arch or binary-indep by default these days.
[00:02] <slug> wgrant: well, my i386 build yes: http://slug.aeminium.org/software/ubuntu/ppa/deal.ii/
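wgrant's binary-arch/binary-indep question comes down to which build targets the builder invokes; a hedged sketch of the relevant dpkg-buildpackage flags (the .dsc name is taken from the conversation; older dpkg versions may lack -A):

```shell
# Architecture: all packages (typically -doc packages) come from the
# binary-indep target; a builder that only runs binary-arch never builds them.
dpkg-buildpackage -B   # binary-arch only: architecture-dependent debs
dpkg-buildpackage -A   # binary-indep only: "Architecture: all" debs (docs)
dpkg-buildpackage -b   # both
# pbuilder can forward these flags to the build, e.g.:
sudo pbuilder build --debbuildopts -B deal.ii_7.0.0-1ubuntu1ppa7~natty1.dsc
```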
[00:02] <slug> wgrant: I'm trying to figure out why the launchpad ppa build machinery doesn't work
[00:02] <wgrant> It does work.
[00:02] <wgrant> It builds all of Ubuntu.
[00:02] <wgrant> This is a subtle bug in your package.
[00:04] <slug> wgrant: hmm. does the PPA use pbuilder? how can I make a local configuration that's equivalent to the PPA?
[00:04] <RAOF> If you want a slightly closer-to-buildd-setup locally you can use sbuild rather than pbuilder.
[00:04] <wgrant> slug: Launchpad uses sbuild.
[00:04] <wgrant> As RAOF says.
[00:04] <wgrant> It's an old version of sbuild, but they have pretty similar characteristics.
[00:04] <RAOF> wgrant: Launchpad doesn't actually use the sbuild in the archive though, does it? :)
[00:06] <slug> RAOF: how different is it from pbuilder? it's https://help.ubuntu.com/8.04/serverguide/C/ubuntu-vm-builder.html ?
[00:07] <RAOF> It just has slightly different behaviour.  In rare cases that behaviour is important.
[00:08] <slug> RAOF: where can i get it? https://launchpad.net/vmbuilder here?
[00:08] <RAOF> slug: apt-get install sbuild
[00:09] <slug> ah right, sorry, it's sbuild not sbuilder :)
[00:09] <RAOF> slug: Then “mk-sbuild natty” to generate a chroot.  (man mk-sbuild contains all sorts of interesting options)
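RAOF's steps, sketched end-to-end (package names per natty-era Ubuntu; mk-sbuild ships in ubuntu-dev-tools, and the .dsc name is a placeholder):

```shell
sudo apt-get install sbuild ubuntu-dev-tools  # mk-sbuild lives in ubuntu-dev-tools
sudo adduser "$USER" sbuild                   # then log out/in, or use newgrp sbuild
mk-sbuild natty                               # generate a natty chroot
sbuild -d natty foo_1.0-1.dsc                 # build a source package inside it
```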
[00:10] <slug> before i waste more time, do you know how easy is to use ccache with sbuild ? i had to patch some stuff to make it work with pbuilder.
[00:10] <RAOF> It's not terribly hard.  However, given you want a build environment as close to launchpad's as possible, I don't recommend using it in this case.
[00:13] <slug> RAOF: alright, i'll give it a try. thanks!
[01:28] <slug> RAOF: is it possible to create a cache of apt downloaded files with sbuild ? pbuild does this, so if a build fails i don't have to redownload everything again
[01:30] <slug> RAOF: something simple, without the need of setting up a http proxy or something :)
[01:30] <RAOF> slug: It certainly is; you can either add /var/cache/apt/archives to the list of bind-mounts (you'll find this in /etc/schroot/sbuild/fstab) or do what I do and use squid-deb-proxy + an apt.conf.d snippet to send all apt stuff through it.
[01:31] <jbicha> I tried a few months ago to get sbuild working but I got stuck, is there a guide somewhere?
[01:32] <slug> RAOF: great, i added the ccache directory there already, didn't cross my mind to do the same to the archives :)
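The two caching approaches RAOF describes, sketched as config fragments (the fstab path is from his message; port 8000 is squid-deb-proxy's usual default, so adjust to your setup):

```
# /etc/schroot/sbuild/fstab — bind-mount the host apt cache into the chroot
/var/cache/apt/archives  /var/cache/apt/archives  none  rw,bind  0  0

# Alternative: an /etc/apt/apt.conf.d snippet (filename is arbitrary, e.g.
# 30squid-deb-proxy) that sends all apt traffic through squid-deb-proxy
Acquire::http::Proxy "http://localhost:8000";
```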
[01:33] <jbicha> alright, I'll try https://help.ubuntu.com/community/SbuildLVMHowto and see how far I get before I get into trouble
[01:33] <slug> jbicha: i just did apt-get install sbuild then “mk-sbuild natty” as RAOF said earlier.
[01:34] <slug> then logout or newgrp sbuild and mk-sbuild natty again
[01:34] <RAOF> jbicha: Hm.  That guide's somewhat out of date.  Also, sbuild is perfectly happy without lvm if you don't already have that set up.
[01:34] <jbicha> yeah I don't have lvm
[01:34] <RAOF> (Plus, if you don't use lvm you get to build on a tmpfs, which is hella fast)
[01:35] <slug> i'm not using lvm
[01:59] <jbicha> ok, sbuild works, I had trouble months ago with schroot not wanting to mount or something crazy
[01:59] <jbicha> how do I view the log, besides getting an email?
[02:02] <RAOF> It's also in $WHEREVER_YOU_SEND_BUILDS_TO/$PACKAGE_$VERSION_$SOMETHING.build
[02:04] <jbicha> RAOF: oh, never mind that makes sense, thanks!
[02:21] <bjf> poolie, i'm the guy from the rally that you blocked because he was beating up the app servers
[02:22] <bjf> poolie, i'm about to go pull a bunch more bug data and i want to give you a heads up
[02:24] <bjf> poolie, it's 5200 bugs. i'm using 8 parallel threads
[02:24] <StevenK> wgrant: ^
[02:24] <poolie> :)
[02:24] <poolie> hi bjf
[02:24] <bjf> hi
[02:24] <poolie> bjf, perhaps you could throttle back to just say 4?
[02:24] <bjf> i can
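The client-side throttling poolie asks for can be sketched with xargs -P; the echo here is a hypothetical stand-in for whatever per-bug API call the script makes:

```shell
# Run at most 4 fetches at a time; xargs queues the rest until a slot frees up.
printf '%s\n' 1 2 3 4 5 6 7 8 | xargs -P 4 -I{} sh -c 'echo "fetched bug {}"'
```

All 8 jobs still run; only the concurrency changes.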
[02:25] <poolie> and, maybe, make sure your email address is in the user agent so people know?
[02:25] <poolie> hm
[02:25] <wgrant> "User-Agent: Yes, that guy again!" ?
[02:25] <bjf> poolie, the launchpad client name is "kernel-team-lp-update-modified-bugs"
[02:25] <poolie> perhaps we can file a bug or something saying what the data is that you need, and in the future perhaps lp can supply it in a way that's easier on both parties
[02:25] <poolie> ok
[02:25] <wgrant> Ah, that's much better than last time :)
[02:25] <wgrant> Thanks.
[02:26] <bjf> poolie, the plan is for this to be the last big run ("the plan")
[02:26] <poolie> ok
[02:27] <StevenK> Shall we notify the LOSAs?
[02:27] <bjf> poolie, have you thought about if i'm just one person with 8 (now 4) parallel threads, what would happen if someone malicious was doing this?
[02:27] <poolie> well i presume the losas would ban them
[02:27] <wgrant> bjf: Then we'd block them without a thought.
[02:28] <bjf> poolie, we have a box in the dc with 128 effective processors, at one time, i was thinking of using that
[02:28] <bjf> poolie, that was before i knew what an impact this was having
[02:29] <poolie> so... where are we going with this?
[02:29] <bjf> poolie, wgrant, running
[02:29] <poolie> perhaps we should rate limit on the server side?
[02:31] <poolie> bjf, it's not really news that a 128cpu machine on a gige pipe can dos a web service
[02:31] <StevenK> Haha
[02:31] <poolie> so, i guess the point is
[02:32] <poolie> how can we let people do stuff without needing to talk to them one on one
[02:32] <bjf> poolie, i agree, but an 8 core system on a 20MB pipe can as well?
[02:32] <poolie> and without on the one hand blocking them or on the other hand having too much impact on other users
[02:32] <wgrant> bjf: It wasn't DOSing. It was setting off alerts that people were doing nasty things.
[02:33] <poolie> unfortunately lplib is probably not going to cope well with a 'please slow down' http code
[02:33] <bjf> wgrant, just add me to the "trusted idiot" list
[02:33] <wgrant> Heh.
[02:33] <poolie> i wonder if one of the front end proxies can do that automatically?
[02:46] <bjf> poolie, i've just sent you an email with the list of data that i'm collecting and why, feel free to share with any/all interested
[02:46] <bjf> poolie, i'm also available to talk about it anytime
[02:46] <poolie> thanks bjf
[02:46] <poolie> so it's one call per bug plus one per task plus one per attachment?
[02:47] <bjf> essentially
[02:47] <bjf> i don't know what all that turns into when it comes to round-trips
[02:52] <TheEvilPhoenix> anyone in here familiar with how `pbuilder` works for building source packages?
[02:52] <TheEvilPhoenix> #ubuntu-packaging is kinda dead right now
[02:52] <slug> TheEvilPhoenix: yes
[02:53] <slug> TheEvilPhoenix: i mean, i know how to use it, i don't know how it works :)
[02:53] <TheEvilPhoenix> slug:  if i have multiple pbuilder environments, say... one for natty, one for maverick, and one for lucid... will it figure out which to use for building the source packages automatically based on the debian changelog/control files?
[02:53] <TheEvilPhoenix> or no?
[02:55] <slug> TheEvilPhoenix: it should, afaik
[02:56] <TheEvilPhoenix> since the primary system is Natty, and i've already built the Lucid environment, and am building the maverick environment now...
[02:56] <TheEvilPhoenix> we'll see what happens... :P
[02:57] <slug> TheEvilPhoenix: do you have different base files ?
[02:58] <slug> TheEvilPhoenix: you might have to pass a different --basetgz to pbuilder
[02:58] <TheEvilPhoenix> the base .orig.tar.gz file is the same for the package in question
[02:58] <TheEvilPhoenix> but i'd like to be consistent with the build environment
[02:58] <TheEvilPhoenix> and up until the latest system repair
[02:58] <TheEvilPhoenix> i'd been using Lucid
[02:59] <TheEvilPhoenix> but now that i'm on natty
[02:59] <TheEvilPhoenix> i want to be able to use either lucid or natty environments
[02:59] <TheEvilPhoenix> maverick... just for one app i develop
[03:00] <TheEvilPhoenix> but lucid and natty depend on the same .orig.tar.gz
[03:00] <TheEvilPhoenix> so it shouldnt be an issue, but we'll see.. :P
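The per-release environments slug refers to can be kept apart with explicit --basetgz paths; a sketch (locations follow pbuilder's /var/cache/pbuilder convention; the .dsc name is a placeholder):

```shell
# One base tarball per target release, so the build chroot matches the target.
sudo pbuilder create --distribution lucid --basetgz /var/cache/pbuilder/lucid-base.tgz
sudo pbuilder create --distribution natty --basetgz /var/cache/pbuilder/natty-base.tgz
# Select the environment explicitly at build time:
sudo pbuilder build --basetgz /var/cache/pbuilder/lucid-base.tgz foo_1.0-1.dsc
```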
[03:01] <TheEvilPhoenix> wow it takes a while to build environments doesnt it...
[03:01] <TheEvilPhoenix> >.>
[03:02] <TheEvilPhoenix> is pbuilder what's used on the build machines up on the launchpad ppa systems?
[03:02] <TheEvilPhoenix> to build the .debs for the ppas
[03:02] <StevenK> TheEvilPhoenix: No. sbuild is
[03:02] <TheEvilPhoenix> whats the difference between them, if i may ask?
[03:03] <TheEvilPhoenix> of course, if yo uknow
[03:03] <TheEvilPhoenix> otherwise i'll just dig around on google for the differences
[03:04] <StevenK> pbuilder uses base tarballs, sbuild can use tarballs, chroots, or LVM snapshots, they use different methods to resolve dependencies, the list goes on
[03:04] <slug> TheEvilPhoenix: if you use pbuilder, i recommend ccache. I'm still trying to figure out how to use ccache with sbuild.
[03:04] <TheEvilPhoenix> the end result is generally the same, no?
[03:04] <StevenK> TheEvilPhoenix: It can be.
[03:04] <TheEvilPhoenix> a deb installer package is built in either way, no?
[03:04] <TheEvilPhoenix> of course...
[03:04] <TheEvilPhoenix> i'm just using pbuilder to build .debs outside a PPA for this project...
[03:04] <TheEvilPhoenix> but that's because i'm strange :P
[03:05] <StevenK> For most packages, yes, they should produce identical results
[03:06] <TheEvilPhoenix> where's the dput config help page on lp...
[03:06] <TheEvilPhoenix> because i need to reconfigure my dput config
[03:07] <slug> TheEvilPhoenix: https://help.launchpad.net/Packaging/PPA/Uploading
[03:07] <TheEvilPhoenix> thanks
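The stanza format from that help page, sketched (the section name and "your-lp-id" are placeholders):

```
# ~/.dput.cf
[my-ppa]
fqdn = ppa.launchpad.net
method = ftp
incoming = ~your-lp-id/ppa/ubuntu
login = anonymous
allow_unsigned_uploads = 0
```

Then upload with `dput my-ppa foo_1.0-1_source.changes`.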
[03:15] <TheEvilPhoenix> and where will the upload acceptance/rejection email be sent to?  the primary email on the launchpad account?
[03:20] <slug> TheEvilPhoenix: i only have one, i'm not sure.
[03:20] <slug> TheEvilPhoenix: i get it there.
[05:16] <jk-> hi all
[05:17] <jk-> anyone able to help me out with info about the automatic bug updates from debian changelogs?
[05:18] <wgrant> jk-: We don't do anything directly with Debian changelogs.
[05:19] <jk-> wgrant: ok, do you know how the automatic change to Fix Commited works?
[05:20] <wgrant> There is no such thing. However, an Ubuntu upload with (LP: #1234) in the changelog will set the relevant bug to Fix Released.
[05:20] <jk-> wgrant: though, this might be specific to kernel..
[05:20] <StevenK> It grabs that information from the .changes file, not the changelog
[05:20] <wgrant> Perhaps the kernel team have a script.
[05:20] <wgrant> StevenK: Well, yes, but I hope nobody ever writes the changes file directly.
[05:20] <jk-> StevenK: yup.
[05:21]  * StevenK has done it. Once.
[05:21] <jk-> ok, so triggered by the upload
[05:22] <jk-> so, how does it match the changeset to a specific task in the bug?
[05:22] <StevenK> Based on the source package and distroseries
[05:22] <wgrant> It looks at the distribution, package and series of the upload.
[05:22] <jk-> specifically: https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/689036 - the task is for 'alsa-driver', but the upload is for 'linux'
[05:23] <wgrant> jk-: If there is only one task, I believe it will use that.
[05:23] <jk-> ah, that was manually set to Fix Released anyway
[05:23] <jk-> wgrant: makes sense.
[05:26] <jk-> cool, thanks!
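The changelog marker StevenK and wgrant describe can be sketched with a quick extraction; the real pipeline reads Launchpad-Bugs-Fixed from the .changes file, which dpkg-genchanges derives from "(LP: #nnnn)" markers like this (the changelog entry below is made up):

```shell
# Pull bug numbers out of an "(LP: #nnn, #nnn)" changelog reference.
entry='  * Fix suspend regression (LP: #689036, #700001)'
printf '%s\n' "$entry" | grep -oE '#[0-9]+' | tr -d '#'
```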
[11:16] <diwic> Is it just me, or is launchpad terribly slow today?
[11:16] <diwic> both the website and the bzr interface
[12:38] <adeuring> henninge: ^^^
[12:39] <henninge> adeuring: thanks. ^^^
[12:46] <diwic> adeuring, uploads to bzr as well?
[12:47] <adeuring> diwic: I did not notice any problems
[12:47] <adeuring> diwic: can you give me an example of a slow page?
[12:48] <diwic> adeuring, ok, might be something on my side then...just upgraded to oneiric so it could be some hw/sw problem maybe
[12:48] <wgrant> We were having some bzr issues several hours ago, but they've been resolved for around 4 hours.
[12:48] <diwic> let me check
[12:50] <diwic> adeuring, hmm, the slow pages seem to have corrected themselves but "bzr push" is still extremely slow here
[12:50] <diwic> and this time I even got a "could not resolve bazaar.launchpad.net: Name or service not known" error
[12:51] <adeuring> diwic: `time nslookup bazaar.launchpad.net` takes 0.1sec for me
[12:52] <diwic> adeuring, yeah, and pinging it works fine as well
[12:52] <adeuring> diwic: so... a sluggish DNS server as part of the problem?
[12:52] <adeuring> diwic: I install bind9 by default, because I had too many problems with DNS servers from DSL providers...
[12:53] <diwic> I don't know?
[12:54] <diwic> But still, if it gets stuck on "Fetching revisions 5/7220" which it has done a few times then that shouldn't be the problem, or...?
[12:54] <adeuring> right
[12:54] <diwic> I should probably try transferring today's work to natty and try pushing the tree from there and see if it works better
[12:57] <diwic> Argh, three minutes later, when I've already ctrl-C'ed bzr, the "Enter passphrase for key" message comes up...
[12:59] <diwic> aha, under Natty I can push just fine
[12:59] <diwic> \o/
[12:59] <diwic> the problem is with Oneiric
[13:00] <diwic> question is where...
[13:00] <adeuring> diwic: might be worth a bug report?
[13:00] <diwic> adeuring, what package would you suggest?
[13:00] <adeuring> diwic: bzr, I think
[13:00] <diwic> ok, thanks!
[13:01] <wgrant> diwic: If you were having DNS issues in oneiric and not natty, sounds more like a kernel issue.
[13:01] <diwic> wgrant, ok
[15:53] <slug> hey guys, the sbuild at the launchpad ppa for amd64 builds correctly but the i386 fails. the strange thing is that both sbuild and pbuilder work fine on my own machine (both archs).
[15:54] <slug> failed build log for i386: https://launchpad.net/~slug-debian/+archive/ppa/+build/2661543
[15:55] <tsarev> Hi guys
[15:55] <slug> my own sbuild log for i386: http://slug.aeminium.org/software/ubuntu/ppa/deal.ii/sbuild-i386/
[15:55] <tsarev> https://code.launchpad.net/~tsarev/percona-server/5.5-processlist_rows_stats-sporadic_fails-fix/+merge/70307
[15:55] <tsarev> "Updateing diff" about two hours
[15:56] <bigjools> slug: make[2]: *** No rule to make target `/build/buildd/deal.ii-7.0.0/common/Make.global_options'.  Stop.
[15:56] <bigjools> slug: your package has a bug
[15:57] <bigjools> tsarev: see the topic.  There are problems at the moment.
[15:57] <adeuring> tsarev: yes, sorry, we have currently problems with merge proposals, see the channel topic
[15:57] <slug> bigjools: i saw that error. but why does my local sbuild work?
[15:57] <tsarev> adeuring: Ok :) Thank you
[15:57] <bigjools> slug: you probably have a dependency installed locally that is not mentioned in the package
[15:57] <bigjools> other than that, I have no idea
[15:57] <slug> bigjools: i installed sbuild from scratch.
[15:58] <bigjools> I'd ask on #ubuntu-motu
[15:58] <bigjools> slug: try using pbuilder, BTW.
[15:59] <slug> bigjools: i used pbuilder initially and built the packages correctly too. the reason i tried with sbuild is that ubuntu launchpad uses sbuild and i don't understand why it fails there but not here.
[16:00] <bigjools> are you building on natty?
[16:00] <bigjools> i386 natty to be precise
[16:10] <oojah> Is there any way (or point) to move a source package bug from bugs.l.n/ubuntu/+source/package to bugs.l.n/ubuntu/oneiric/+source/package ?
[16:12] <slug> bigjools: yes
[16:12] <slug> bigjools: i'm building both amd64 and i386 on natty
[16:12] <bigjools> slug: your package almost certainly has a bug, you should talk to a packaging expert
[16:13] <slug> bigjools: will i find one at that motu channel?
[16:13] <bigjools> yes :)
[16:13] <slug> ok, let me give it a try, thanks!
[16:13] <bigjools> good luck
[19:09] <cr3> why does launchpadlib now discard the consumer_name in launchpadlib.credentials.AuthorizeRequestTokenWithBrowser?
[19:12] <wgrant> cr3: New launchpadlibs have a system-wide authentication token.
[19:12] <cr3> wgrant: hm, I've also noticed something about DESKTOP_INTEGRATION permissions and so forth
[19:13] <stewart> hi! is there any way to change the short name of project? i.e. "launchpad.net/foo" to "launchpad.net/bar" ?
[19:13] <wgrant> stewart: Ask at https://answers.launchpad.net/launchpad/+addquestion and someone will sort it out for you.
[19:14] <cr3> in respect to what stewart was asking, what if I have a system that refers to launchpad.net/foo, is there a way to be notified that the system should now refer to launchpad.net/bar?
[19:16] <wgrant> cr3: No, but we can add a redirect.
[19:17] <cr3> wgrant: that would be a good start until there's a nice pub-sub story in launchpad
[19:27] <CarlFK1> https://wiki.ubuntu.com/CarlKarsten?action=login  if I am logged in to lp, shouldn't that work here?
[19:27] <CarlFK1> or at least give me an openid login button
[19:30] <CarlFK1> oh swell - 500 "Please contact the server administrator,  webmaster@ubuntu.com"
[19:31] <CarlFK1> reload = "OpenID error: Nonce already used or out of range."
[19:31] <CarlFK1> Nonce?
[19:32] <wgrant> CarlFK1: You should possibly talk to #canonical-sysadmin.
[19:32] <wgrant> The Ubuntu wiki is not a Launchpad service.
[19:32] <CarlFK1> k - thanks.  /j ing..
[19:32] <CarlFK1> um...
[19:33] <CarlFK1>  /j #canonical-sysadmin.  oh,  the dot.
[20:08] <jonrafkind> is the i386 builder not functional? im only seeing an amd64 build going on for my project
[20:10] <jonrafkind> oh hm, the i386 page says 'start in 1 hour', so i guess its just queued up
[20:29] <bjf> not sure who on the LP team is around ... we (the kernel team) are running into an issue we've had before and continue to have, lack of a decent testing server/service for LP api script development
[20:30] <bjf> we've tried staging and that goes away for days/weeks at a time
[20:30] <bjf> we've been trying qastaging and that suffers from lots of timeouts and just seems flaky
[20:31] <bjf> in our opinion, it seems reasonable for the LP team to support a server against which we can do script development without impacting the production database
[20:32] <bjf> thoughts? comments?
[20:51] <lifeless> bjf: It's not actually reasonable
[20:52] <lifeless> bjf: here's why: you can use LP development environment locally yourself; thats easy and trivial
[20:52] <bjf> lifeless, easy and trivial to a LP developer
[20:52] <lifeless> bjf: something with real world data in it, to be fast, requires -very large- dollar investment
[20:52] <lifeless> bjf: its well documented
[20:53] <lifeless> bjf: https://dev.launchpad.net/Running/VirtualMachine
[20:53] <bjf> lifeless, we need real-world data in it
[20:53] <lifeless> bjf: so, to not timeout, or at least no more than production, we need a server with 128GB of ram.
[20:53] <lifeless> bjf: + additional servers for appserver, codehosting, soyuz etc.
[20:54] <lifeless> bjf: this is extremely nontrivial.
[20:54] <lifeless> bjf: staging and qastaging are *intended* to be mostly-available
[20:54] <lifeless> bjf: the timeouts there impact LP developers as well, and we are trying to scrounge hardware to fix them
[20:55] <bjf> lifeless, we outside the LP development team have no visibility into why staging or qastaging are having the issues that they are having
[20:55] <lifeless> bjf: well, come and talk to us, like you are now :)
[20:56] <lifeless> bjf: their primary role though, is for qa of new code and schemas being deployed to LP
[20:56] <bjf> lifeless, it might be possible that, if this were simply a hw issue, we could raise it so that the necessary hw gets applied
[20:56] <lifeless> if qastaging didn't time out as much, would it be sufficient for you ?
[20:57] <lifeless> We're very happy for other folk to play with staging/qastaging; it gets more from the hardware than if we only used it for qa
[20:58] <bjf> lifeless, probably, though we've never (until now) really been told what they are intended for
[20:58] <lifeless> staging is where we test 'next schema' versions of LP - database changes
[20:58] <lifeless> qastaging is where we test code we are going to deploy.
[20:58] <lifeless> there are two environments because of the different pipeline lengths
[20:59] <lifeless> if we get the pipelines harmonised (an ongoing effort), we may drop back to just one environment for testing on [they share hardware, so this would slightly mitigate the cold cache timeout problems you experience]
[21:00] <bjf> lifeless, if we just threw hw at this would that help or is that too simplistic an answer?
[21:00] <lifeless> bjf: sufficient hardware would remove some of the known constraints
[21:01] <lifeless> others, like it taking a while to restore 300GB of DB data (not to mention codehosting terabytes...) would not be helped by hardware alone.
[21:01] <lifeless> however, I think hardware differences between prod and staging are the primary thing giving you timeout grief - that and lp bugs having known inefficiencies that we're working on anyhow
[21:03] <bjf> lifeless, so, if multiple kernel folks want to develop lp scripts, and they want to test them on real data prior to hitting production data, would you recommend that each of them run their own instance of LP ?
[21:04] <lifeless> bjf: running their own LP instance, and cloning bug data from prod into that, would be the most robust thing; it will insulate you from other devs testing scripts, for instance ;)
[21:04] <lifeless> bjf: and would let you see notifications and so on which are disabled on [qa]staging
[21:06] <bjf> lifeless, it needs to run under Lucid (the last LTS)? (this makes sense, i'm just confirming)
[21:06] <lifeless> bjf: it /can/ run on natty, but lucid is the deployment target, so there is a bias to work on that
[21:07] <bjf> lifeless, ok
[21:07] <lifeless> I think we run on O too, as of a couple weeks back
[21:07] <lifeless> but I would really recommend a VM, LP wants custom apache rules, postgresql rules - its easier not to messup your host OS :)
[21:09] <bjf> lifeless, when you talk of cloning bug data, that is the 300GB you are talking about as well, so we'd each clone that?
[21:09] <lifeless> bjf: hell no :)
[21:09] <lifeless> bjf:  you only need kernel bugs right ?
[21:10] <bjf> lifeless, yes, there are probably less than 30 packages
[21:10] <lifeless> if you skip comments thats probably a few hundred MB tops.
[21:10] <lifeless> even for all releases of Ubuntu
[21:14] <bjf> lifeless, thanks
[21:35] <Laney> can we have .orig.tar.xz yet?
[21:38] <lifeless> not yet, need to finish validation in staging before production deployment of the dpkg etc changes, and then finally the code landing
[22:09] <maxb> https://launchpad.net/~bzr/+archive/proposed/+build/2659653 https://launchpad.net/~bzr/+archive/proposed/+build/2659651 - any buildd admins around and willing to rescore those, so I can upload a dependent package before I sleep?
[22:11] <erichammond> Got timeout errors from launchpad when I attempted to copy 2 binaries from natty to oneiric: https://launchpad.net/~alestic/+archive/ppa/+copy-packages
[22:12] <erichammond> Worked when I copied one package at a time.
[22:13] <lifeless> yes
[22:13] <lifeless> well known scale limitation
[22:14] <lifeless> being worked on but its not an easy thing to fix
[22:18] <Laney> lifeless: will it be long? Just uploaded something with xz to Debian that we'd like to sync
[22:29] <maxb> Never mind on that rescore, I'll do something different
[22:40] <lifeless> Laney: minimum of mid next week
[22:56] <erichammond> lifeless: Perhaps change the UI to radio buttons for now, limiting it to the copying of one package at a time.
[23:19] <bludude> is there any way to import a git branch other than HEAD yet?
[23:21] <maxb> No, although I believe that's now progressed from a "Wouldn't it be nice if" to a short term goal
[23:23] <bludude> alright, thanks