[01:38] is there a way to tell how many times a file has been downloaded? or how many hits a ppa gets?
[01:39] wrinkliez: You can see a download count for files you've added to a project.
[01:40] And I've implemented PPA download stats, but I'm not sure if the script is running yet.
[01:40] wgrant: i seee...
[01:41] spm: Do you know if any progress has been made on that front?
[01:41] not offhand, no
[01:44] i think im having trouble finding this download count
[01:44] :S
[01:44] wrinkliez: https://launchpad.net/PROJECT/+download
=== maco is now known as maco2
[01:52] do you have to create that page or something? i got a page not found
[01:53] wrinkliez: Which project?
[01:54] not a project, i dont think, but https://launchpad.net/~wrinkliez/+archive/ppasearch
[01:54] just a script
[01:55] wrinkliez: That's a PPA. As I said, it's implemented, but not in operation yet.
[01:55] oh thats true you said that lol
[01:55] cool
[01:56] clearly im a nub at this whole schpeal ^.^
[01:57] lamont: Around?
[05:43] This life long wait to build a package is getting really irritating
[05:43] what's going on that it's so plugged up - and why is there never any change in priority?
[05:45] I wish somebody could answer that. :(
[05:51] MTecknology: Why would there be a change in priority?
[05:51] What good would that do?
[05:53] wgrant: the last I knew (unless it changed) the longer it sits, the higher the priority gets, the more likely it is to get built next - certain pockets and things got higher priorities, so they were more likely to be built first, but if a package sat too long it would gain more priority over the other packagebuilds.
[05:53] the other builds*
[05:54] over newer ones, yes. not over ones that are of a similar age.
[05:54] MTecknology: The time-based increments have been turned off, since they didn't affect anything.
[05:54] Because all packages had them applied.
[05:54] oh
[05:54] So it didn't make any different.
[05:54] *difference
[05:54] makes sense
[05:54] what about the massive congestion?
[05:55] I think we may have a Python stack rebuild in progress.
[05:55] We should deprioritise that.
[05:56] When I left work 6 hours ago this package was estimated to be built in 3 hours, it's still estimated for 3 hours - it's been sitting there about 15 hours now.
[05:56] Yes, build farm master overhead is probably well over 100% at the moment.
[05:57] So the estimates will be too low.
[05:58] So ... super-duper-command stop python-stack --purpose=yay
[05:58] :D
[05:58] Pretty much.
[05:59] amd64 queue is 22 hours - that was 32 hours earlier
[05:59] Right. Lots of the python stuff is arch-all, so only builds on i386.
[05:59] hm?
[06:00] The amd64 queue is shrinking.
[06:00] my i386 build was done about 4 hours ago
[06:00] Right.
[06:00] The amd64 queue is currently long because of some bad situations of the last couple of days.
[06:00] The i386 one is long mostly because of the Python rebuild.
[06:00] oh
[06:00] what kinda bad situations?
[06:01] 1) Bad split of builders between i386 and amd64.
[06:01] 2) Most of the buildds were gone for several hours last night.
[06:01] what about before that - it was congested even before that, wasn't it?
[06:02] Mostly because of the bad split.
[06:03] so that split happened twice?
[06:03] At the moment there are 22 i386 builders, but only 13 amd64.
[06:04] ooh.. there should be about 22 amd64 builders then?
[06:04] +/-
[06:04] No.
[06:05] I'd probably give it 17ish.
[06:05] But the queues are so bad at the moment that that wouldn't help too much.
[06:05] 4 makes that big of a difference?
[06:05] oh
[06:05] Over time it does, yes.
[06:05] that's an extra 30%.
[06:06] So once the congestion dies down (since it's catching up) and a few more builders come back - things should be smooth yet again?
[06:07] Probably.
[06:07] I think you guys should buy about 200 more powerful servers for building :D - then no queue time. :P
[06:08] wgrant: thanks very dearly for that information - it's very nice to know
=== nhandler_ is now known as nhandler
=== Ursinha-afk is now known as Ursinha
[08:25] oops 0656o530
[08:25] oops-0656o530
[08:25] https://lp-oops.canonical.com/oops.py/?oopsid=0656o530
[08:26] oops-0656O530
[08:26] https://lp-oops.canonical.com/oops.py/?oopsid=0656O530
[09:01] hi
[09:02] My lp account has been suspended and I have no clue why or how I can activate it again :(
[09:06] BlindFreakazoid: what's the account name? the ~ bit
[09:06] konradgraefe
[09:08] BlindFreakazoid: apparently there were reports of your account being used to send spam
[09:08] ouch
[09:09] I can re-enable, but pls be careful :-)
[09:09] thank you
[09:09] :)
[09:27] hey guys. im packaging my application now, and im on the changelog. im at the part where I'm supposed to put a distribution, but what if it doesnt matter which distribution?
[09:27] do i put all? any? do i not put anything?
[09:37] wrinkliez: Put the distribution series (eg. 'lucid') that you want the package to build for.
[09:41] wgrant: what if it will work on every distribution? its just a script...
[09:57] Hi there!
[10:03] I have a question about launchpad's PPA: what do I need to change in order for my packages to be compiled in both i386 and amd64?
[10:03] .=?
[10:03] Also, why is it taking such a long time these days?
[10:04] It used to take minutes.
[10:04] sluimers: some of the machines that usually serve as builders have been borrowed for other purposes
[10:04] ahhh...
[10:04] well
[10:05] to be more fair, some of the machines that are usually on loan as builders have been taken back :-)
[10:05] They're back now, but the queue is immense.
[10:05] ah
[10:05] And the Python stack rebuild isn't helping (roughly 2.5k builds)
[10:05] And the i386 thing? It used to be that my packages were being compiled in lpr, amd64 and i386
[10:06] which I didn't understand in the first place.
[10:07] Is that because of the borrowing as well?
[10:07] 'Architecture: all' packages are built on only i386, because you've stated that the binaries from one architecture will work on the rest too.
[10:07] You probably want 'Architecture: any' instead, which will build separately on each.
[10:08] Are there any other architectures launchpad builds on?
[10:08] Or is it just those three?
[10:08] ?
[10:08] ppas just build on x86 and amd64
[10:08] Just those three.
[10:08] and lpia for old releases?
[10:08] Although lpia is gone from Ubuntu 10.04 LTS.
[10:08] builds for the ubuntu archive can build on more archs
[10:50] I'm changing my nickname from 'qense' to 'sense' and I would like to do the same on Launchpad. However, the nickname 'sense' there is already registered by someone who created his account on 2006-02-20 but never did anything with it. Is it possible to request that nickname?
[10:53] Is it possible to upload one package for multiple series?
=== Meths_ is now known as Meths
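To illustrate the two packaging answers above (the target series in debian/changelog, and 'Architecture: all' versus 'any'), here is a minimal sketch of the usual PPA upload workflow. The package name 'ppasearch', the version string and the series 'lucid' are only illustrative placeholders; debuild and dput are the standard Debian tools for the last two steps.

    $ head -1 debian/changelog
    ppasearch (0.1-0ubuntu1~ppa1) lucid; urgency=low

    $ grep '^Architecture' debian/control
    Architecture: any

    $ debuild -S -sa          # build and sign the source package
    $ dput ppa:wrinkliez/ppasearch ppasearch_0.1-0ubuntu1~ppa1_source.changes

A script that really is architecture-independent can keep 'Architecture: all'; it will then be built only on i386, as wgrant explains above.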
[11:04] Is PPA uploading broken for anybody else?
[11:04] what is your problem with it?
[11:05] I’ve attempted to upload to my PPA several times today (dput ppa:jeremy-visser/python-iview) and got no response from Launchpad. Not even an ‘accepted’ or ‘rejected’ e-mail.
[11:06] Not gone into spam either.
[11:06] ok, are you signing the upload with a key registered to your account in LP?
[11:06] Yep.
[11:06] ok let me check the logs
[11:06] CF13C41A
[11:07] I also accidentally uploaded to the Ubuntu archive the first time, and normally if I accidentally do that I get a big “REJECTED” e-mail, but didn’t get a response from that either.
[11:07] I hope your server didn’t ban me because of that. ;)
[11:07] ok I see the upload processor blowing up with your upload, which means one of 2 things: 1) your package has a problem, 2) we have a bug :)
[11:07] let me check the details
[11:08] heh
[11:08] Well it’s building fine in pbuilder for me.
[11:08] But I guess that’s not relevant.
[11:09] hmmm it says verification of your key failed
[11:09] is it on the keyserver?
[11:09] so why didn’t it e-mail me and tell me that?
[11:09] because there's a bug
[11:09] yeah it’s been on the keyserver for years
[11:09] many signatures
[11:09] ok - keyserver might be flaky again :(
[11:10] which keyserver are you referring to?
[11:10] I think it’s on all of them anyway. :)
[11:10] keyserver.ubuntu.com
[11:10] well I just synced my key again then
[11:10] dunno if it’ll make any difference
[11:11] so should I try uploading again?
[11:11] Is it actually a key not found error, or something more sinister?
[11:11] I'm trying to trace that
[11:11] the error from GPG is "No public key"
[11:11] yeah my key is actually `;rm -rf /#CF13C41A
[11:14] uploading a new one now
[11:14] uploaded.
[11:15] The key looks fine, and other uploads are being accepted fine...
[11:15] if the error was that the key was missing, this should have rejiggled it
[11:15] i.e. me syncing the key
[11:16] wgrant: getVerifiedSignature() is failing
[11:17] wonder if going to ~jeremy-visser/+editpgpkeys and reimporting the key would help
[11:17] It wouldn't.
[11:18] The fingerprint associated with your account is correct.
[11:18] And the key is on keyserver.u.c, and isn't expired there.
[11:18] gpgme is failing it, I've no idea why
[11:18] ha, Karl Goetz has signed my key
[11:19] I don't remember meeting him at the keysigning
[11:19] although i've met him other times
[11:19] with that error "No public key"
[11:19] bigjools: Does it give a fingerprint or key ID in the message?
[11:19] nope, that is all
[11:19] It's /possible/ it's complaining about the dsc instead.
[11:19] no, it's the changes
[11:20] Damn.
[11:20] "GPG verification of python-iview_0.2-1~bzr57~ppa1~karmic_source.changes failed"
[11:20] And it really is CF13C41A?
[11:20] jayvee: we don't generally email about gpg failures, otherwise it's a spam vector
[11:20] If it is, all I can suspect is the internal keyserver being flaky...
[11:20] I suspect that also
[11:21] $ debsign -kCF13C41A python-iview_0.2-1~bzr59~ppa1_source.changes
[11:21] trying a second upload after running that command
[11:21] :(
[11:22] just in case it's picking the wrong key
[11:22] like my expired apt repo key
[11:22] but the terminal output looks okay
[11:22] actually, hang on
[11:22] gpg: Signature made Wed 14 Jul 2010 20:20:59 EST using RSA key ID 480A7272
[11:22] what the hell is 480A7272
[11:22] Aha.
[11:22] That would do it.
[11:22] :)
[11:23] The Internet has never heard of it.
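The confusion above comes down to which key actually made the signature on the .changes file. A sketch of the checks being done at this point, using the filenames and key IDs quoted in the conversation (the elided output is only indicative):

    # Which key signed the .changes? (gpg prints the key ID it was made with)
    $ gpg --verify python-iview_0.2-1~bzr59~ppa1_source.changes
    gpg: Signature made Wed 14 Jul 2010 20:20:59 EST using RSA key ID 480A7272
    ...

    # Is that ID the primary key or a subkey? ('pub' vs 'sub' in the listing)
    $ gpg --list-keys CF13C41A

    # Push the full key, subkeys included, to the keyserver Launchpad uses
    $ gpg --keyserver keyserver.ubuntu.com --send-keys CF13C41A

Here the verify step shows the signature was made by 480A7272; the key listing is what tells you whether that ID belongs to the primary key or to one of its subkeys.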
=== Ursinha is now known as Ursinha-afk
[11:23] I tend to use -k with the e-mail, not the key ID
[11:23] http://pastebin.com/0YsbdPyh
[11:23] so when you answered "yeah it’s been on the keyserver for years" you didn't actually check
[11:23] I did check
[11:24] http://keyserver.ubuntu.com:11371/pks/lookup?op=vindex&search=0xBECE90AACF13C41A
[11:24] but not the key that you were signing with, I mean
[11:24] but funny thing is, that's resulting after I ran debsign -kCF13C41A
[11:24] hang a sec
[11:24] 480A7272 is a subkey of CF13C41A
[11:24] Oh.
[11:24] Yes.
[11:24] Damn.
[11:25] Hm.
[11:25] That subkey is brand new.
[11:25] ah wow, I forgot I created that
[11:25] It's on the keyserver, though.
[11:25] so looks like it's been signing everything with that subkey
[11:25] I can --recv-keys it fine.
[11:25] but given that it's a subkey, shouldn't it be trusted anyway
[11:25] But I wonder if LP is getting confused.
[11:26] yeah possibly a grep like error
[11:26] It's meant to work with subkeys.
[11:26] There's code to do it.
[11:26] grep 'RSA key ID $KEY_ID' `gpg-verify-stuff`
[11:27] and I'm not sure I can force it not to sign with the subkey
[11:28] so now I remember why I created that subkey. it's cause I read that 1024-bit keys are getting quite weak.
[11:28] so adding a 4096-bit subkey should make it stronger, no?
[11:29] bigjools: Where in getVerifiedSignature is throwing the exception?
[11:29] impossible to tell
[11:29] jayvee: uh..... strong subkey, weak root key, are you not seeing a flaw here?
[11:30] bigjools: Ah, that kind of exception handling... excellent.
[11:30] maxb: don't know enough about crypto to tell, sorry
[11:31] wgrant: GPGVerificationError is thrown in a few places, it's one of the first 2
[11:31] given that the other places hard-code a different error
[11:31] Ah, true.
[11:34] jayvee: If you're worried that 1024 isn't strong enough, create a new key
[11:34] thanks for the advice — I’ll consider doing that.
[11:34] I’m not worried. Just thought it might be a good idea. :)
[11:38] So do any of you know gpg or debsign well enough to recommend me a workaround?
[11:39] I'm not entirely sure of how having multiple signing keys works.
[11:39] The ABC have changed their server backend thus breaking Python-iView, so I need to get this update out to quell the e-mail flood I’m getting. :)
[11:39] Ahhh, I thought I knew your name from somewhere.
[11:40] wgrant: you’re an Aussie too?
[11:40] Yep, Melbourne.
[11:40] IE, "Mostly, but only until we can kick Victoria out."
[11:40] Heh.
[11:51] Huh, a 504 Gateway timeout view a merge proposal.
[11:51] s/view/when I tried to view/
[11:55] hey, one of my packages went through 28 minutes ago
[11:55] did somebody manually approve it?
[11:56] We can't do that for PPAs
[11:56] puzzling
[12:04] well to work around this issue, I expired the 4096-bit key, so now gpg defaults to the primary key again
[12:04] I guess if I want to do it right, I should make a new 4096-bit key.
[12:05] or, rather, create a new 8192-bit primary key, then have a 4096-bit subkey.
[12:05] and then save the 8192-bit key offline on a flash drive buried somewhere where I won't be able to find it by the time I expire the subkey. :)
[12:05] Heh.
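jayvee's workaround (expiring the new 4096-bit subkey so that gpg falls back to the primary key) and the "force a specific key" alternative roughly correspond to the commands below. This is a sketch rather than a record of what was actually run: whether debsign passes the trailing '!' through to gpg unchanged is an assumption, and gpg's key-editor prompts are abbreviated.

    # Option 1: force the exact key. gpg treats a key ID ending in '!' as
    # "use precisely this key" rather than picking the newest signing subkey.
    $ debsign -k'CF13C41A!' python-iview_0.2-1~bzr59~ppa1_source.changes

    # Option 2: retire the subkey in gpg's key editor, then publish the change.
    $ gpg --edit-key CF13C41A
    gpg> key 1        # select the subkey (480A7272 here, assuming it is listed first)
    gpg> expire       # give it a short validity period
    gpg> save
    $ gpg --keyserver keyserver.ubuntu.com --send-keys CF13C41A

For the longer-term fix maxb suggests, a fresh 4096-bit RSA primary key generated with 'gpg --gen-key' avoids hanging a strong subkey off a weak 1024-bit primary key.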
[12:06] during which time there will have been an EMP shockwave from North Korea and it will have been wiped anyway
[12:06] or that solar storm like during the 1800s
[12:06] Does EMP kill flash?
[12:07] http://en.wikipedia.org/wiki/Solar_storm_of_1859
[12:07] we're due for another one in 2012
[12:07] http://socioecohistory.wordpress.com/2009/05/09/nasa-warns-of-super-solar-storm-2012/
[12:07] Hmm, inconvenient.
[12:08] as for me, I'm looking forward to being able to see an aurora from where I live
[12:08] should buy a decent camera before then.
[12:10] ha, did canonical just die
[12:10] The LP spring has vanished for lunch.
[12:10] Er.
[12:10] sprint.
[12:10] "193.85.232.176 - 193.85.232.183" registered to Gestin Holding a.s. in the Czech Republic
[12:12] Yep. They're in Prague for the week.
[12:12] coincidence? I think not. :)
[12:12] hey here's a suggestion: maybe you should enable your amd64 or lpia builders to build 'any' arch packages
[12:13] because your i386 builders are way slower than your amd64 ones
[12:16] jayvee: The i386 queue is long because there's a rebuild of the Python stack going on.
[12:16] That's 1700 i386 builds, but only a few hundred amd64 builds.
[12:17] However, I did write a branch yesterday which lets i386, amd64 and lpia builders build packages from any of those architectures.
[12:17] Which would solve that problem.
[12:18] ouch
[12:30] Is it possible to upload one package for multiple series?
[12:30] For PPA
[12:32] no
=== Meths_ is now known as Meths
=== Ursinha-afk is now known as Ursinha
[13:14] hello there. One of our branches is stuck saying "Updating" for days now. https://code.launchpad.net/~mixxxdevelopers/mixxx/features_HSS1394
[13:14] What can I do to give it a kick?
[13:15] I think the order of events was 1) I committed, 2) said updating, 3) clicked upgrade branch before Updating went away, 4) the branch format didn't change, but now it's stuck
[13:23] Anybody heard of a bug in ubuntu one where it thinks that your paid account is a free one?
[13:24] #ubuntuone
[13:24] err, sorry
[13:24] try asking in #ubuntuone
[13:24] ta. will do
[13:30] bzr operations work fine
[13:30] on this stuck branch
[13:35] Pegasus_RPGAMD64: Hi, best thing to do is to file a question at https://answers.launchpad.net/launchpad-code/+addquestion
[13:35] ok then, thank you
[13:36] Unfortunately response time may not be great, because the relevant people are all in Prague having an annual week-long meeting
[13:38] oh fun
[13:38] well at least I can commit and stuff
[13:38] thanks for your time
[13:41] maxb: you're not there?
[13:42] heh, no, I'm not a Canonical employee, I just loiter here a lot
[13:48] ah, whew, committing a change broke it free
[13:56] wgrant: around now... still here?
=== Ursinha is now known as Ursinha-afk
[14:29] wgrant: amd64 queue is 3 days now..
[14:31] How do I push a build to a private ppa?
[14:32] you mean upload a source?
=== Ursinha-afk is now known as Ursinha
[14:33] bigjools: ya
[14:34] same as a normal PPA
[14:34] bigjools: yay, it took a while to respond so I thought I screwed up - thanks
[14:38] bigjools: the priority is a lot higher now - is that because it's a private ppa or because of paying for the project?
[14:41] private ppas get a hefty score bonus
[14:41] maxb: oh, spiffy
[14:43] they do indeed
[14:44] You won't hear me arguing :P
[15:07] hi how do I unsuspend a code import
[15:13] hmm, he didn't stick around for long
[15:56] my AMD64 builds are constantly being pushed back
[15:56] is the build farm overloaded again?
[15:57] There are currently 4 amd64 buildds, so the others must have been taken out
=== fta__ is now known as fta
=== doko_ is now known as doko
=== fta_ is now known as fta
[17:00] StevenK: "taken out"?
[17:00] https://launchpad.net/~pmjdebruijn/+archive/ppa/+build/1867418
[17:00] it seems to build in just a few minutes :)
=== beuno is now known as beuno-lunch
[17:42] Aloha, just wondering if someone could help with https://answers.launchpad.net/ubuntu/+question/117738
[17:42] Please
=== fta_ is now known as fta
[18:06] checking
[18:09] hexmode: thanks
[18:09] hexmode: it's been assigned to launchpad itself by someone
[18:13] czajkowski: sorry, couldn't help. for some reason, I thought this was -ngo
[18:13] hexmode: no problem it's a LoCo issue
[18:14] thanks though
[18:25] spm: the account is still not working :( It now says "Sorry, something just went wrong in Launchpad. We’ve recorded what happened, and we’ll fix it as soon as possible. Apologies for the inconvenience. (Error ID: OOPS-1656K1205)" so I thought I'd have to wait a few minutes after the reactivation
[18:25] https://lp-oops.canonical.com/oops.py/?oopsid=1656K1205
=== beuno-lunch is now known as beuno
=== fta_ is now known as fta
[19:21] Hey; trying to understand why gdb fails to import (since June) in ~vcs-imports: http://launchpadlibrarian.net/51937356/vcs-imports-gdb-trunk.log is the latest attempt from today
[19:21] These look like a patched cvs sending progress messages so as to be able to limit the amount of data downloaded:
[19:21] 2010-07-14 18:16:12 WARNING checking out: 37252 kB
[19:21] 2010-07-14 18:16:26 WARNING checking out: 46784 kB
[19:22] these I don't get locally:
[19:22] cvs checkout: CVS password file /home/importd/.cvspass does not exist - creating a new file
[19:22] cvs checkout: existing repository /cvs/src/src/gdb does not match /cvs/src/src/texinfo
[19:22] cvs checkout: ignoring module src/texinfo
[19:22] hmm actually I do
[19:22] at the very end
[19:22] and locally cvs exits with error code 1 as well
=== fta_ is now known as fta
[19:28] I've poked a gdb committer to ask about the server side setup
[21:25] I tried to file a bug, OOPS-1656B1429
[21:25] https://lp-oops.canonical.com/oops.py/?oopsid=1656B1429
[21:51] sinzui: thanks
=== flacoste is now known as flacoste_afk