[00:19] <skunk> whats the Ubuntu software centre written in??
[00:23] <skunk> sorry was that a bad question??
[01:41] <TheMuso> Whoo! Chroot tarball corruption on armel raring. :(
[02:18] <infinity> TheMuso: It's not the tarball, it's the buildd.  Let me go abuse it.
[05:36] <pitti> Good morning
[05:40] <jk-> hi pitti
[07:22] <dholbach> good morning
[08:44] <dholbach> doko_, maybe you can help me? with the new libxml2 in experimental we should be able to get back in sync again, but it FTBFS in raring (http://paste.ubuntu.com/1334247/). it seems to build in debian, also using binutils-gold - I'm not quite sure where to go from here - is this something we should fix in debian/ubuntu or something which should be forwarded upstream? Do you know which compiler version or flag might be responsible for the error so they know how to reproduce it?
[08:54] <dholbach> ^ or anyone else really :)
[09:30] <doko_> dholbach, a verbose build log maybe would already help ...
[09:30] <dholbach> doko_, the normal raring build log?
[09:30] <mitya57> hi cjwatson
[09:31] <mitya57> ScottK thinks we should re-add essential flag to python-minimal
[09:31] <mitya57> (which was reverted by you): https://code.launchpad.net/~mitya57/ubuntu/raring/python-defaults/resync/+merge/131193
[09:31] <pitti> mitya57: why that? I thought it was spelt "python3" now
[09:31] <mitya57> pitti: "You can't remove essential until it's been verified nothing in the archive assumes it's present."
[09:32] <doko_> dholbach, no, that silent too ...
[09:32] <mitya57> I hope cjwatson will now say he has verified that somehow :)
[09:33] <dholbach> doko, can you help me understand what you'd need?
[09:35] <doko> dholbach, compiler flags, linker flags, ...
[09:35] <mitya57> hi dholbach, doko
[09:35] <dholbach> hi mitya57
[09:35] <mitya57> maybe one of you can sponsor the new sphinx?
[09:36] <mitya57> (I mean my debdiff at bug 1070336)
[09:41] <cjwatson> mitya57: I'd be happy to argue with ScottK about this directly; it seems a bit daft to do so by proxy
[09:42] <cjwatson> mitya57: I'll follow up in the MP
[09:42] <mitya57> cjwatson: thanks
[09:59]  * cjwatson finally moves auto-sync from cron.cjwatson to an actual crontab entry
[09:59] <cjwatson> only about eight years late
[10:07] <jamespage> cjwatson: is syncpackage use into raring-proposed safe? I see the minor thread on the ubuntu-devel ML but it was unclear to me whether the long changelog was OK or not?
[10:08] <cjwatson> jamespage: there's some problem on the LP side; if you have ubuntu-dev-tools << 0.143ubuntu0.1 then it'll produce wrong output in your terminal, but that doesn't really matter
[10:09] <cjwatson> bug 1073492 is the LP bug
[10:09] <cjwatson> I wouldn't block on that for now if I were you; go ahead and sync stuff where appropriate
[10:09] <jamespage> cjwatson, great - thanks
[10:10] <xnox> dholbach: doko is after a build with VERBOSE=1 set, such that the build-log is about 3-4 times longer than the current one. The information as to why it is failing, is currently hidden.
[10:13] <cjwatson> isn't it V=1?
[10:13] <cjwatson> looks like automake at least
[10:14] <cjwatson> (cf. Debian #680686)
[10:18] <xnox> cjwatson: yes, it's V=1 for autotools & VERBOSE=1 in CMake.....
[10:29] <dholbach> xnox, in http://paste.ubuntu.com/1334398/ I used --disable-silent-rules now
[10:31] <xnox> dholbach: I like how make is in Deutsch and gcc in English in that build log =)
[10:31] <dholbach> haha
[10:32] <xnox> doko: here is a verbose log for you http://paste.ubuntu.com/1334398/ I don't see anything suspicious and -fPIC is used.....
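As a self-contained illustration of the silent-rules mechanics discussed above (the Makefile below is a hand-written toy stand-in, not libxml2's build system):

```shell
# Demo of automake-style silent rules and the V=1 override doko needed.
# The Makefile reimplements the mechanism in miniature.
dir=$(mktemp -d)
printf 'V ?= 0\nquiet_0 = @echo "  CC      hello.o";\nquiet_1 =\nquiet = $(quiet_$(V))\nall:\n\t$(quiet)touch hello.o\n' > "$dir/Makefile"
make -C "$dir"          # silent: output contains "  CC      hello.o"
make -C "$dir" V=1      # verbose: output contains the real command, "touch hello.o"
# CMake-generated Makefiles use VERBOSE=1 instead; passing
# --disable-silent-rules at configure time de-silences the whole build.
```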
[10:44] <rbasak> xnox: sorry I didn't manage to make the bibisect session. Have you considered the apt by-hash stuff? Then you'd just need to keep the hash of InRelease (when we have it) and alter expiry of old files. Perhaps with a (possibly internal) redirect to take you from a "dated" URL to the correct InRelease by-hash, and then apt should Just Work with it
[10:44] <rbasak> (and debootstrap too)
[10:44] <rbasak> although that would add a dependency on the by-hash stuff, so perhaps that's a reason not to use it
[10:44] <xnox> rbasak: I was not at the bibisect, nor at the apt-hash session. I was at the "keep the snapshot" session only.
[10:45] <rbasak> (on me completing the by-hash stuff)
[10:46] <xnox> rbasak: I am not sure what the by-hash stuff is =) the current "implementation" I have for the snapshots is to simply rsync-snapshot the dists/ & proxy redirect pool/ to launchpadlibrarian.
[10:46] <xnox> rbasak: is there anything concise I can read about the by-hash stuff?
[10:46] <rbasak> xnox: https://wiki.ubuntu.com/AptByHash
[10:47] <rbasak> xnox: I'm hoping to have it done by Raring FF
[10:47] <rbasak> xnox: progress so far is a PoC that works with a patched apt in a PPA
[10:49] <xnox> rbasak: interesting stuff. If the hashes are kept for the past month, with a proxy to launchpad librarian beyond that horizon, we are golden for a full snapshot archive.
[10:50] <rbasak> xnox: exactly - snapshot capability just falls out of the design. I'm not sure about the proxy to launchpad librarian for > one month though. What would we be doing about the index files?
[10:50] <xnox> rbasak: are you running a PoC mirror anywhere with this stuff already on continuous basis? or just locally.
[10:50] <rbasak> xnox: I was running one but I turned it off when it got postponed last cycle
[10:50]  * rbasak looks
[10:51] <rbasak> Argh
[10:51]  * rbasak reinstalled his phone today, and just realised that he's lost all his Google Authenticator setups
[10:51] <xnox> rbasak: well we need to keep dists/ snapshots in by-hash/ or by-date/ dirs. But the files in pool/ are expired and redirected to launchpadlibrarian via smart proxy upon request.
[10:51] <rbasak> No AWS for me today then :-/
[10:51] <xnox> =(
[10:52] <rbasak> There was a mirror in S3. Let's assume it's not there right now, but I'll get it going again soon
[10:52] <rbasak> Ah OK. Yeah - as long as we keep the indexes around that'll work. I like the idea of deferring to launchpadlibrarian instead of keeping the packages around forever
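The scheme on the AptByHash wiki page can be sketched roughly as follows; this is an illustrative reimplementation of the path mapping, not code from the proposal, and the paths are examples:

```python
# Sketch of apt's by-hash layout as proposed on AptByHash: each index
# file is also published under a name derived from its own hash, so a
# saved InRelease from any date keeps resolving to the exact indexes
# it signed -- which is why snapshots "fall out" of the design.
import hashlib

def by_hash_path(index_path: str, contents: bytes) -> str:
    """Map an index like dists/.../Packages.xz to its content-addressed alias."""
    directory, _, _ = index_path.rpartition("/")
    digest = hashlib.sha256(contents).hexdigest()
    return f"{directory}/by-hash/SHA256/{digest}"

print(by_hash_path("dists/raring/main/binary-amd64/Packages.xz", b"example"))
```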
[10:53] <Laney> launchpadlibrarian isn't guaranteed to keep all files around forever
[10:53] <cjwatson> no, but it does keep them for the lifetime of the release
[10:53] <cjwatson> not guaranteed by code but guaranteed by process
[10:58] <xnox> rbasak: dists/ will keep on growing forever, but we can implement round-robin expiry, e.g. similar to how pnp4nagios does it. Keep dists every 0.5 for the past 2 weeks, keep dailies after that for 3 months, then weeklies... etc. And redirect expired dists to the closest newer one.
[10:58] <xnox> where "0.5" is "0.5 hour"
[11:00] <xnox> although we might not want to play with time that much.
[11:03] <rbasak> Sure. Just decide what the expiry algorithm is and we can implement it :)
[11:04] <xnox> rbasak: I like the "wait until we are running out of disk space and see how can we expire the least amount of stuff while maximizing usefulness"
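A tiered expiry policy like the one xnox sketches (half-hourlies for two weeks, then dailies, then weeklies) could look like this; the tier boundaries and the "keep the midnight / Monday snapshot" rules are invented for illustration, not an agreed policy:

```python
# Sketch of tiered ("round-robin") snapshot expiry. All thresholds here
# are hypothetical examples.
from datetime import datetime, timedelta

def keep_snapshot(taken: datetime, now: datetime) -> bool:
    """True if a dists/ snapshot taken at `taken` should be kept."""
    age = now - taken
    if age <= timedelta(weeks=2):
        return True                           # keep every half-hourly snapshot
    is_daily = taken.hour == 0 and taken.minute < 30
    if age <= timedelta(days=90):
        return is_daily                       # dailies for ~3 months
    return is_daily and taken.weekday() == 0  # weeklies (Mondays) after that
```

Requests for an expired snapshot would then be redirected to the closest surviving one, as suggested above.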
[11:05] <cjwatson> you might also consider proxying large objects in dists/ such as installer uploads somehow
[11:05] <cjwatson> the custom tarballs that are unpacked into that should be somewhere in the librarian
[11:05] <cjwatson> that would require a somewhat more intelligent proxy though
[11:13] <rbasak> xnox: my PoC is in S3. I'm not sure if we can detect when S3 is running out of disk space :-P
[11:14] <xnox> rbasak: hmm... yeah expiring a bucket may be drastic if we are not careful what we put in it.
[13:14] <cjwatson> nobuto: Would you mind merging mozc?  uim is stuck in raring-proposed because it makes versions of uim-mozc earlier than 1.5.1090.102-4 uninstallable.
[13:15] <cjwatson> nobuto: (Or let me know if you don't have time and I can do it)
[13:18] <nobuto> cjwatson: if syncing from debian experimental, no merge will be needed. the patch has been upstreamed. but I will contact the maintainer in Debian to confirm it's OK.
[13:18] <ScottK> cjwatson: No need to argue about python-minimal.  You dropping it with intent is different than it being dropped in an upload that I was considering sponsoring.
[13:26] <cjwatson> nobuto: OK, either way is fine by me, though I know nothing about the safety of mozc in experimental
[13:26] <cjwatson> ScottK: cool
[13:27]  * cjwatson grumbles in the general direction of bison
[13:27] <cjwatson> DFSG-cleaning that breaks autoreconf considered annoying
[13:41] <nobuto> cjwatson: OK, I will double check that.
[14:12] <pitti> doko: hm, I'm slightly puzzled by https://launchpadlibrarian.net/121262133/buildlog_ubuntu-raring-i386.ubuntu-drivers-common_1%3A0.2.71build1_FAILEDTOBUILD.txt.gz
[14:12] <pitti> doko: "/bin/sh: 1: python3.2: not found", how can that be? it depends on python3-all, and fails with that on i386 and armhf only (works on other arches)
[14:13] <pitti> doko: is/was that something transient?
[14:13] <xnox> pitti: I don't see it depending on python3-all
[14:13] <pitti> oh sorry, looked at the wrong version
[14:14] <pitti> doko did add the -all dep in https://launchpad.net/ubuntu/+source/ubuntu-drivers-common/1:0.2.71ubuntu1
[14:14] <pitti> and that built fine
[14:14] <doko> pitti, I do love these shell script style makefiles :-/
[14:14] <xnox> ack.
[14:15] <pitti> doko: yeah; if only debhelper supported py3 properly..
[14:16] <pitti> so, that leaves the question of what failed in ubuntu1
[14:16] <doko> it doesn't try to install python2
[14:17] <pitti> doko: as I said, I was looking at the wrong log; you already fixed this error in ubuntu1
[14:17] <doko> ahh, ok
[14:37] <pitti> ev: btw, I was talking to seb128 about sending crashes to daisy only -- he says that he depends a lot on having direct submitter contact/feedback, so I think we shouldn't do this just yet
[14:38] <kumadasu1> Hi. I want to apply this patch to chromium-browser in local build.  http://code.google.com/p/webrtc/issues/detail?id=512
[14:38] <kumadasu1> But I don't know how to apply patch.
[14:39] <ev> pitti: okay, at what point would you like to re-evaluate that? When we have the server-side hooks implemented?
[14:39] <stgraber> @pilot in
[14:39] <ev> Or perhaps taken from a different angle, why do we want direct submitter contact?
[14:39] <kumadasu1> chromium-browser's source code is compressed and
[14:39] <ev> and can we eliminate the need for that over the medium term
[14:39] <pitti> ev: yes, I think having the extra scripts feature is a minimum requirement for that
[14:40] <ev> okay, cool
[14:40] <pitti> ev: which might be sufficient in a lot of cases, but due to the nature of GUI bugs even those might not be sufficient in a lot of cases
[14:40] <pitti> seb128: ^ was there anything else?
[14:40] <pitti> I think so, but I might have forgotten (post-party morning..)
[14:41] <seb128> ev, pitti, mpt: was there any consideration to have a text entry on the dialog? to let the user optionally put a description of what they were doing when the issue happened
[14:41] <seb128> that's what firefox is doing
[14:41] <ev> yes
[14:41] <ev> seb128: it's on the list of work items for 13.04
[14:42] <seb128> ok
[14:42] <seb128> well I guess we should reconsider once that happens
[14:42] <ev> if you hit "show details" you'll have an optional box to add some details
[14:42] <seb128> without description it's just too hard to figure how to reproduce issues
[14:42] <seb128> hum, hidden in "show details"? I'm not sure anyone will go find it there :-(
[14:43]  * didrocks doubts about it too
[14:43] <ev> well, we've only briefly talked about the UI for this. But the belief is that technical people, the kind who you'd want to provide a description, would be hitting show details anyway
[14:43] <ev> at least the one time it would take them to realise it was there
[14:44] <ev> and do recall we're operating at scale
[14:44] <ev> out of 800 instances of a crash, someone is bound to enter something there
[14:44] <ev> speculation, of course
[14:44] <ev> we can always see how it goes
[14:44] <seb128> well, firefox has it in the main ui
[14:44] <seb128> looking through some of their bugs, 90% of descriptions are useless but you get info from the other 10%
[14:44] <ev> but I'm worried about putting it in the main UI as I think anything that makes the dialog look more complex is going to scare away some people from hitting submit
[14:45] <seb128> well, people don't need to hit submit
[14:45] <ev> mpt: you might be interested in this conversation :)
[14:45] <ev> seb128: why don't they?
[14:45] <seb128> the way it's currently designed it sends the report even if you close the dialog
[14:45] <seb128> you would need to uncheck the checkbox to not send it
[14:45] <seb128> and I guess users who get scared by the dialog just close
[14:45] <ev> oh I see your point
[14:46] <ev> by the way, the fact that the dialog has a close button at all is a bug
[14:46] <ev> it's supposed to just be minimise
[14:46] <seb128> well, any of the buttons at the bottom would send the report
[14:46] <seb128> so even if users don't read the content and click on any of those it would send the report
[14:46] <ev> indeed, that was just an aside
[14:47] <seb128> to go back to the topic, I think we need to keep launchpad submit by default until we get useful descriptions from e.u.c
[14:47] <ev> yes - I don't want to rip out anything that you find useful
[14:47] <seb128> I will let mpt and you figure the ui that leads us there ;-)
[14:48] <ev> but hopefully we'll get those useful descriptions soon
[14:48] <ev> seb128: cheers :)
[14:48] <pitti> I guess the server and foundation guys will/should be happier with the "run extra scripts" possibility
[14:48] <seb128> pitti, the extra script thing will be useful without doubt
[14:48] <seb128> it's a bit orthogonal to the description though
[14:49] <pitti> can these be interactive?
[14:49] <pitti> so for desktop we could use a dialog which asks for more information, or some messages like "now reproduce the problem and click ok", or something such?
[14:49] <pitti> ev: ^ I think I already asked you that
[14:49] <pitti> but it might have been rejected in design already
[14:50] <ev> no
[14:50] <ev> as in no I really don't want them to be interactive
[14:50] <ev> for a number of reasons which I'm happy to enumerate
[14:50] <ev> but lets see where we get without them being interactive first
[14:50] <seb128> it's a bit tricky, I doubt you will be able to get some info without those being interactive
[14:51] <seb128> like you can't really go and fetch private info without asking the user
[14:51] <ev> the interactivity we currently have is confusing at best for non-technical people
[14:51] <jbicha> ev: can we not kill all the close buttons? some users get really annoyed when we do that (like in Software Updater)
[14:51] <seb128> count me in the "hate that I can't close the "you need to reboot""
[14:51] <ev> seb128: fetching private information will be covered by the text in the initial dialog, though admittedly we don't have a mock up for that yet
[14:51] <seb128> I right click the unity launcher icon to do it but it's a workaround
[14:52] <ev> but we're definitely going to have implied consent there
[14:52] <ev> jbicha, seb128: what if we mapped escape to "dismiss this dialog with the default action (send report)" ?
[14:53] <ev> I also think that a lot of these changes layer on top of each other. When we fix the system errors so they get collected into a single dialog, hopefully the frequency you'll see this will go down and your need for a quick dismiss action will decrease
[14:53] <ev> equally so when we add the "don't bother me again with this - just send the damn reports" ;) checkbox
[14:56] <stokachu> mterry: does pcreate recognize raring yet?
[14:56] <seb128> ev: not sure about that, I would expect esc to "undo" the action, e.g. exit without submitting anything
[14:57] <ev> hmm, yeah
[14:57] <mterry> stokachu, it relies on underlying tools, it doesn't know about code names itself
[14:58] <stokachu> mterry: ah, so debootstrap needs some modification
[14:59] <pitti> ev: during the devel cycle we can/have to assume some degree of "technical people", though
[14:59] <mterry> stokachu, yeah, I don't think quantal debootstrap is updated yet...
[14:59] <ev> pitti: sure - how does that fit into the current conversation? Sorry, I'm just not making a clear connection there.
[14:59] <stokachu> mterry: yea i dont even see it in devel branch
[15:00] <jbicha> what about if we had different modes: 1. send crash reports automatically without getting asked every time (but with the ability to click a button to view the reports you've sent); 2. Ask me first so I can include additional info or decline to send the report 3. Don't send crash reports & don't notify me of crashes
[15:00] <Laney> it's in the q-proposed queue
[15:00] <pitti> ev: that was re "ev | the interactivity we currently have is confusing at best for non-technical people"
[15:00] <Laney> (debootstrap)
[15:00] <stokachu> Laney: does this require a sru for precise as well?
[15:01] <Laney> if it's to be updated there, yes
[15:01] <ev> pitti: oh, I see. Well, the intent is to have these server-side hooks on post-release.
[15:01] <ev> and I think we end up with a lot of complexity if we're trying to maintain two sets - one for before release and one for after
[15:01] <Laney> stokachu: it's trivial if you want to do it; just adding a symlink raring→gutsy
[15:01] <stokachu> Laney: ok, would it be a good idea to just update all scripts across the releases?
[15:01] <stokachu> Laney: ok i can do that
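The one-line change Laney describes looks like this (the directory is a scratch stand-in for debootstrap's real scripts/ tree; the actual upload went through bug #1068707):

```shell
# debootstrap picks its suite script by release codename; new Ubuntu
# releases just symlink to the shared gutsy script.
scripts=$(mktemp -d)
touch "$scripts/gutsy"           # stand-in for the real shared script
ln -s gutsy "$scripts/raring"    # the whole fix: raring -> gutsy
readlink "$scripts/raring"       # prints: gutsy
```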
[15:02] <ev> I'd really like to get them implemented without interactive UI and see what the remaining cases are where we need some human interaction, then address those specifically
[15:02] <jbicha> ev: people expect pre-release to be "buggy"; I hear regularly about people that think precise is more buggy than oneiric because of the popups
[15:02] <Laney> use bug #1068707
[15:02] <stokachu> Laney: awesome thanks ill get that updated
[15:02] <pitti> ev: *nod*
[15:02] <ev> jbicha: I don't think people should ever expect our software to be of poor quality. Our whole push at the moment is towards a rolling release
[15:03] <ev> something that's stable throughout
[15:03] <ev> sorry, I don't think we should ever settle for people expecting it to be buggy
[15:03] <ev> I also think we make things harder for ourselves if we let things slide towards instability with the promise that we'll clean up the mess eventually
[15:04] <jbicha> ev: people interpret those popups as "buggy"; I think defaulting from 2 to 1 closer to release is a good thing
[15:04] <ev> jbicha: I did not understand the second half of your sentence. Can you elaborate, please?
[15:04] <jbicha> and we can't make promises about Ubuntu being stable during the Alpha phase
[15:05] <pitti> ev: yeah, that was the "old" mode indeed, and we haven't been good at it given the constant feature pressure
[15:05] <ev> why can't we make promises about it being stable during development (there are no more alphas)
[15:05] <jbicha> ev: see my previous 3-mode comment
[15:05] <ev> pitti: *nods*
[15:05] <pitti> but now that pressure has shifted around, and we have a lot more machinery in place to avoid regressions; I'm hopeful we can improve that enough to really make the dev release "stable"
[15:05] <ev> yeah, me too
[15:05] <ev> really excited as well
[15:05] <ev> there's so much potential and good ideas here
[15:06] <jbicha> ev: because then we land features at the last possible minute because we are too paranoid about regressions which makes the releases *worse* because they didn't get time to be tested in the actual repositories
[15:06] <pitti> well, they land two weeks after the last possible minute right now
[15:06] <jbicha> there definitely is an Alpha stage still as long as we have autosyncs from Debian
[15:06] <ev> jbicha: so the dialogs serve two purposes
[15:06] <pitti> that can't possibly get any worse
[15:06] <ev> one is to let the user report the issue to us, and that's great
[15:06] <ev> but the other and equally important need is that they actually explain what just happened
[15:07] <ev> there are a lot of non-technical people out there who don't know what an application disappearing means
[15:07] <ev> they need some further explanation
[15:07] <ev> and that's what the dialog does
[15:07] <cjwatson> jbicha: putting those through -proposed is already making a rather big difference
[15:07] <ev> plus turning on automatic reporting requires some sort of initial consent anyway
[15:08] <jbicha> ev: like the Amazon integration requires initial consent? ;)
[15:08] <stokachu> zing!
[15:08] <ev> jbicha: pretty sure you at least get a link of some legal text to read there
[15:09] <jbicha> checkbox in the installer "Help us make your Ubuntu experience better" :)
[15:09] <ev> and for the millions who are already using Ubuntu? :)
[15:09] <ev> believe me
[15:09] <jbicha> checkbox in the upgrader :)
[15:09] <ev> I'd love to see lots of machines automatically sending us reports
[15:10] <ev> but I think there is a real need here for some explanation
[15:10] <ev> and for the technical people who don't want to see these dialogs, we'll have a very easy way to turn on automatic reporting in 13.04
[15:10] <stokachu> it doesn't automatically send kernel dumps though right
[15:10] <ev> it doesn't automatically send anything right now
[15:10] <ev> everything puts up a dialog that carries with it your implied consent to send us the information
[15:11] <ev> we're reducing the number of dialogs as well, for those of you who weren't in the plenary or were watching at home as I crashed the live stream
[15:11] <ev> sorry, wearing that like a badge of honour, but I'm most amused
[15:12] <stokachu> ev, is there something that detects if it is a kernel dump and explains possible security implications?
[15:12] <jbicha> maybe we should keep track of how many turn on automatic crash reporting so that we can get an actual sample size for our bugs/install ratio
[15:12] <ev> stokachu: https://wiki.ubuntu.com/ErrorTracker#kernel-crash
[15:12] <ev> jbicha: we will
[15:13] <stokachu> Laney: is debootstrap out of sync with bzr and whats being released?
[15:13] <ev> I'm all about measuring, and you can rest assured that we'll have a percentage of systems that report crashes which have done so automatically
[15:13] <Laney> stokachu: it would not surprise me; I didn't use bzr myself
[15:13] <cjwatson> debootstrap has never been maintained in bzr
[15:14] <Laney> UDD bzr presumably
[15:14] <cjwatson> any resemblance there is probably coincidental
[15:14] <stokachu> so when im doing patch work i was under the impression i always pull from latest bzr in launchpad
[15:14] <cjwatson> it often doesn't work desperately well for SRUs ...
[15:14] <cjwatson> you should always double-check versions against rmadison output or LP
[15:15] <cjwatson> if UDD bzr is in sync, great, otherwise do something else
[15:15] <stokachu> cjwatson: thats kind of confusing dont you think?
[15:16] <cjwatson> Yes
[15:16] <cjwatson> I'm stating current reality
[15:16] <stokachu> lol ok
[15:16] <cjwatson> If we had any resource to polish UDD properly, it'd be different
[15:16] <stokachu> no problem ill just adjust my workflow
[15:18] <stokachu> ev: this dialog just says it experienced an internal error, but doesn't say anything about if its a kernel dump that could be 50 gigs and that you are sending what was currently dumped in memory
[15:18] <stokachu> i dont think financial institutions would want that whether it be desktop or server
[15:18] <stokachu> or governments.. or ninjas
[15:19] <mfisch> How long should it take for UDS-R tasks to show up on status.ubuntu.com/ubuntu-raring/people.html  ?
[15:19] <stgraber> mfisch: status.u.c should be mostly up to date after FeatureDefinitionFreeze
[15:19] <mfisch> stgraber: thanks
[15:21] <ev> stokachu: you may not be sending the dump if we already have one for that signature. Institutions can completely disable error reporting if they don't want sensitive information leaking out (Google and Ericsson do this).
[15:24] <stokachu> ev: but that disables for everything, couldnt we have a blacklist?
[15:25] <stokachu> and maybe have kernel set as a default
[15:26] <ev> stokachu: we want kernel crashes.
[15:26] <stokachu> ev: oh is it just stack traces sent and not actual core dumps
[15:26] <ev> no - a subset of the systems are sending us core dumps.
[15:26] <stokachu> ev: but not kernel?
[15:27] <ev> sensitive information is possible in an application crash too
[15:27] <ev> disabling crashes system wide is the right thing to do if you're a company that cares about these sorts of things
[15:28] <ev> we don't have kernel crashes fully wired up yet. When they are, a subset of the systems out there will be sending the full dumps, yes.
[15:28] <ev> do note that we don't even get a lot of those
[15:29] <ev> the conditions for actually being able to write a kernel crash to disk mean that they're not going to ever approach the frequency of regular application crashes
[15:30] <stokachu> ok
[15:31] <doko> bah, hard disk shows I/O errors again ...
[15:31] <diwic_> ev, I'm looking at a stack trace @ errors.ubuntu.com. The error is present in several versions of pulseaudio - from which version is the actual stack trace? line numbers differ between releases
[15:32] <ev> diwic_: oh interesting
[15:33] <ev> so we don't currently track that
[15:33] <ev> give me one moment to just see if we can easily grab it given the information we have
[15:33] <diwic_> ok
[15:43] <stokachu> Laney: ok i uploaded a debdiff to that debootstrap bug
[15:43] <Laney> cool cheers
[15:43] <Laney> can you subscribe the sponsors?
[15:43] <Laney> i'll try to look in a bit anyway
[15:44] <stokachu> Laney: yea i got sponsors and sru subscribed so it should be in the queue
[15:44] <_val_> Hey guys. When are you going to release a spice-xpi package? Compiling the source of spice-xpi gives me a headache. Someone?
[15:44] <stokachu> Laney: thanks :D
[15:46] <_val_> the xulrun-embedder.. not found... etc.. is not solvable.
[15:54] <stokachu> _val_: did you see this https://launchpad.net/spice-xpi
[15:55] <jpds> _val_: spice-client in quantal?
[15:59] <_val_> stokachu: looking at it
[16:00] <seb128> what's the status of autosyncs for r?
[16:00] <_val_> jpds: I meant the precise
[16:00] <cjwatson> seb128: enabled
[16:00] <_val_> I need the precise build
[16:01] <cjwatson> seb128: in fact they're now fully automatic rather than in cron.cjwatson
[16:02] <cjwatson> seb128: are you looking for something in particular?
[16:02] <seb128> cjwatson, no, just not being fully awake, I was looking at http://people.canonical.com/~platform/desktop/desktop.html but we didn't change the series so it's listing quantal versions ...
[16:03] <cjwatson> heh, ok
[16:03] <seb128> sorry for the noise ;-)
[16:03] <seb128> cjwatson, good to read that syncs are fully automatic now though ;-)
[16:03] <pitti> indeed
[16:03] <pitti> cjwatson: btw, my anxiety is gone: http://people.canonical.com/~ubuntu-archive/testing/raring-proposed_probs.html actually is non-empty :)
[16:04] <cjwatson> heh, yeah, it is occasionally
[16:04] <cjwatson> that one should clear shortly
[16:05] <pitti> my hands are itching to graphviz-ify http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html
[16:05] <pitti> but that's not straightforward, one needs more info than on that page to get the dependencies apparently
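As a toy illustration of what pitti is after, one could emit Graphviz DOT from a map of blocked packages; the data below is invented, and as pitti notes, update_excuses.html alone doesn't expose the full dependency information:

```python
# Toy DOT emitter for visualising migration blockages. The blocked_by
# mapping is a made-up example, not parsed from the real page.
def to_dot(blocked_by: dict) -> str:
    lines = ["digraph excuses {"]
    for pkg, blockers in sorted(blocked_by.items()):
        for blocker in blockers:
            lines.append(f'    "{pkg}" -> "{blocker}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot({"uim": ["mozc"], "mozc": []}))  # feed to `dot -Tsvg` to render
```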
[16:05] <Laney> oh, I should proposedify ben really
[16:05]  * Laney slots that into the todo
[16:06] <pitti> or there just are not any excuses due to uninstallability
[16:08] <stokachu> _val_: you can get the source from launchpad and rebuild for precise and fix any errors there
[16:09] <pitti> Laney: qu'est-ce que c'est "ben"?
[16:09] <pitti> another magic archive status tool?
[16:09] <Laney> transition tracker
[16:27] <stokachu> stgraber: could i get bug 423252 put under the libgcrypt11 package? it is currently set to sudo
[16:28] <stgraber> stokachu: hmm, I see it already has a libgcrypt11 task, what exactly do you mean with "put under"?
[16:30] <stgraber> stokachu: when only accessing by bug number instead of the full URL LP seems to be taking some random task (or maybe the first?), but accessing the bug with https://bugs.launchpad.net/ubuntu/+source/libgcrypt11/+bug/423252 should give you the right view
[16:31] <stokachu> stgraber: ok i just didnt know b/c it says in "sudo"
[16:31] <stokachu> stgraber: i guess it doesn't matter?
[16:31] <stgraber> stokachu: apparently somebody added a sudo task for Debian to that bug and ubottu is using that one when you mention the bug on IRC, that's all
[16:32] <stokachu> stgraber: ok no worries then
[16:37] <achiang> are auto-syncs from testing, unstable, or experimental?
[16:37] <carif> #ubuntu-devel, I submitted a merge proposal for lp 967229, just wanted to confirm that someone can review it
[16:38] <cjwatson> achiang: unstable
[16:40] <achiang> ta
[16:41] <cjwatson> achiang: we may go back to testing for the next LTS cycle (14.04); OTOH we may not if our own stability protections are working out well enough by then
[16:41] <cjwatson> achiang: experimental is right out :)
[16:42] <achiang> cjwatson: heh, ok. i'll ask motu to sync a small package of mine from experimental. :)
[16:42] <cjwatson> sure - with syncs being self-service there isn't much reason to put lots of effort into selective auto-syncing from experimental or what-have-you
[16:43] <ogra_> cjwatson, did you ever try that raring update on your nexus you asked about ?
[16:43] <achiang> ah, that makes sense.
[16:44]  * ogra_ is curious if it survived
[16:44] <cjwatson> 14:32 <cjwatson> ogra_: I upgraded my nexus7 to raring (after pinning the ubuntu-nexus7 PPA) and found that Unity failed to start because the runtime linker failed to find libnvwsi.so for
[16:44] <cjwatson>                  /usr/lib/nux/unity_support_test, even though /usr/lib/nvidia-tegra still seems to be in ld.conf.d and that library is there.  I worked around it for now by symlinking that library into
[16:44] <cjwatson>                  /usr/lib/arm-linux-gnueabihf/, but I assume that's wrong.  Is this known?
[16:44] <cjwatson> 14:32 <cjwatson> (There was also an onboard regression with python3.3, and I just uploaded a backport of the upstream fix for that)
[16:45] <cjwatson> ogra_: ^- results
[16:45] <ogra_> oh, thx !
[16:45] <cjwatson> so the verdict is "almost"
[16:45] <ogra_> yeah, thats a bug in the nvidia package - the libs have no SONAMEs
[16:46] <ogra_> i guess the linker gets confused by that
[16:46] <cjwatson> um, right, but I didn't notice those packages changing from quantal to raring
[16:46] <achiang> ogra_: i thought there was a fix for that?
[16:46] <ogra_> achiang, only for tegra2
[16:46] <cjwatson> unless glibc 2.16 is stricter
[16:46] <cjwatson> I didn't try to narrow it down much further
[16:46] <achiang> ogra_: do you have a bug for tegra3? i can escalate to nvidia
[16:47] <ogra_> achiang, nope, i'll create one but ebroder is fully aware of the issue
[16:47] <ogra_> he was the one who got me the tegra2 rebuild, and promised me tegra3 would be fixed in their next release
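The missing-SONAME problem ogra_ mentions is easy to demonstrate with a scratch library (gcc and binutils assumed; the library name is a placeholder, not a rebuild of the real nvidia blob):

```shell
# Build the same trivial library with and without an SONAME entry;
# ldconfig only creates linker symlinks for libraries that carry one.
work=$(mktemp -d); cd "$work"
echo 'int f(void) { return 42; }' > f.c
gcc -shared -fPIC f.c -o libnosoname.so
gcc -shared -fPIC -Wl,-soname,libdemo.so.1 f.c -o libdemo.so
objdump -p libdemo.so | grep SONAME                        # shows libdemo.so.1
objdump -p libnosoname.so | grep SONAME || echo "no SONAME entry"
```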
[16:48] <ogra_> cjwatson, i'm pretty sure you simply got the tegra3 package from raring, the PPA one has a .links file to work around the issue ... linking all libs to /usr/lib, i was a bit reluctant to push that into the official archive
[16:48] <achiang> ogra_: got it, thanks
[16:49] <achiang> ogra_: i think you meant e. brower, not ebroder :)
[16:49] <ogra_> err, yes
[16:49] <ogra_> heh
[16:49] <cjwatson> ogra_: let me unpack my nexus7 and check
[16:50] <ogra_> downgrading to the PPA package would fix it in that case
[16:50] <achiang> cjwatson: you mean it's not your primary dev machine yet?
[16:50] <Laney> what's the full package name?
[16:50] <ogra_> i should probably just upload the hack to raring ... as ugly as it is it helps atm
[16:50] <Laney> nvidia-graphics-drivers-tegra3?
[16:50] <ogra_> Laney, yep
[16:50] <cjwatson> ogra_: I pinned the entire PPA, so I'm sceptical about that answer
[16:50] <Laney> I don't have that installed at all
[16:51] <ogra_> binary is nvidia-tegra3
[16:51] <Laney> ah
[16:51] <cjwatson> achiang: :-P
[16:51] <Laney> ubuntu@nexus7-bisquicks:/usr/lib/arm-linux-gnueabihf$ dpkg -L nvidia-tegra3 | grep libnvwsi
[16:51] <Laney> /usr/lib/nvidia-tegra/libnvwsi.so
[16:51] <Laney> /usr/lib/1libnvwsi.so
[16:51] <ogra_> EEEK !
[16:51]  * Laney giggles
[16:52] <ogra_> oh dyslexia !
[16:52] <achiang> lol "bisquicks"
[16:52] <ogra_> haha, yeah
[16:52]  * Laney is quite pleased with that
[16:54] <achiang> sadly, we are going to roll the image again to change that due to http://goo.gl/826tPall and http://pad.lv/1072086
[16:55] <ogra_> well, we probably want a fix for that typo above as well :)
[16:55] <Laney> achiang: first one doesn't work
[16:55] <ogra_> though i dont get why that would affect release upgrades
[16:56]  * Laney reboots hopefully into R ...
[16:56] <achiang> Laney: someone out there got nexus7-m*****f*****
[16:59] <achiang> oh, the correct link was http://goo.gl/826tP
[16:59] <Laney> aha
[17:00] <Laney> I was going to suggest a wind-up, but the phrasing there makes me think it is not
[17:03] <stokachu> achiang: lol
[17:06] <cjwatson> ogra_: for the record, "tegra" appears nowhere in dpkg.log for the day I did the upgrade
[17:06] <ogra_> ok
[17:06] <cjwatson> ogra_: also, I wonder if there's a general robustness problem here; it seems kind of odd for a failure in unity_support_test on a system with no non-unity fallback to result in a blank screen rather than some attempt at a usable desktop
[17:06] <ogra_> cjwatson, well, libnvwsi.so is fixed now, just uploaded to the PPA
[17:07] <cjwatson> great, thanks
[17:07] <ogra_> oh, there definitely is
[17:07] <ogra_> and i think that was also discussed in some desktop session
[17:08] <ogra_> so i think i should upload the same "fix" to raring for now
[17:19] <stokachu> stgraber: could i get bug 1013798 nomination approval?
[17:21] <stgraber> stokachu: done
[17:21] <stgraber> stokachu: no oneiric?
[17:22] <stokachu> stgraber: sure i can do oneiric too
[17:22] <stokachu> stgraber: ok nominated oneiric
[17:23] <stgraber> stokachu: and approved. thanks!
[17:23] <stokachu> thank you :D
[17:33] <stokachu> pitti: is there any other information you require to have bug 1036834 approved for precise?
[18:04] <cjwatson> Laney: haskell-persistent now requires haskell-devscripts from experimental; should haskell-persistent be reverted, or should we upgrade haskell-devscripts?
[18:05] <Laney> cjwatson: Those version constraints are artificial to get packages built against the experimental ghc
[18:05] <cjwatson> Which doesn't work in Debian either AFAICS, judging from buildd.debian.org ...
[18:05] <Laney> someone accidentally uploaded to unstable.
[18:06] <cjwatson> OK, so that should be reverted?
[18:06] <Laney> which is about as fun as you can imagine
[18:06] <Laney> did anything build against it?
[18:06] <cjwatson> I doubt it since it failed to build
[18:07] <Laney> oh, yeah, that works. So you can revert for now if you like, but I plan on (harassing iulian to) uploading the new stack soonish.
[18:07] <cjwatson> New stack or not I'm trying to keep britney update_output as clear as possible
[18:07] <cjwatson> Even if that involves more uploads than long-term necessary
[18:07] <Laney> hmm, how did nomeata do the binNMUs?
[18:08] <Laney> some of the packages must have been uploaded without the bumped BD
[18:08] <Laney> so I suspect there's some uninstallability in raring if that stuff got autosynced
[18:09] <cjwatson> in raring-proposed
[18:09] <Laney> yeah
[18:09] <cjwatson> So haskell-persistent 1.0.1.3-1+really0.9.0.4-2ubuntu1?
[18:09] <cjwatson> or is it just a matter of reverting the haskell-devscripts bit?
[18:09] <Laney> I'd do a test build and see if you get back to the same ABI hash we had before
[18:09] <Laney> if not then you might as well push on
[18:10] <cjwatson> We won't because of haskell-unordered-containers
[18:10] <mlankhorst> guess it's too soon to attempt an upload just to test if it works?
[18:10] <cjwatson> Which I've been rebuilding for today
[18:10] <Laney> so the only concern is if it breaks API and so requires changes to other packages
[18:11] <Laney> did you find any other sourceful changes required?
[18:12] <cjwatson> Not so far
[18:12] <Laney> good
[18:12] <cjwatson> haskell-persistent looks like API changes to me although I don't really know the difference
[18:12] <cjwatson> Some changes to function signatures and the like; no idea what counts as API
[18:13] <cjwatson> Though the diff is only 1250 lines
[18:13] <cjwatson> But then, it fails for other reasons; no libghc-monad-logger-doc
[18:13] <Laney> I expect the ' functions are internal, but don't know for sure
[18:14] <Laney> maybe it's best to go back :-)
[18:14] <cjwatson> I think this is just buggered and we should revert and then rebuild rdeps
[18:14] <Laney> hmm
[18:14] <Laney> will have to make sure that we don't miss it next time then
[18:15] <Laney> I suppose any breakage will make itself known
[18:15] <cjwatson> 1.0.1.3-1+really0.9.0.4-2build1 perhaps?
[18:15] <cjwatson> To preserve autosynciness
[18:15] <Laney> yeah
[18:15] <Laney> perhaps I'll upload it to exp anyway
[18:16] <Laney> sigh
[18:16] <Laney> nobody made any real attempt to clean up after the mistaken uploads: http://packages.qa.debian.org/h/haskell-persistent.html
[18:16] <cjwatson> Actually, that would be a different upstream tarball wouldn't it
[18:17] <cjwatson> And not actually << 1.0.1.3-2
[18:18] <mlankhorst> Laney: when does everything get set up? :-)
[18:18] <Laney> everything?
[18:18] <mlankhorst> erm for ubuntu membership i mean
[18:18] <cjwatson> YM upload permissions?
[18:18] <Laney> oh well I can add you to ubuntu-dev now
[18:18] <Laney> someone on the TB will need to push buttons to give you the PPU though
[18:18] <mlankhorst> oh sure
[18:19] <stgraber> mlankhorst, Laney: I'll try to take care of creating the packageset today. micahg said he'd create the team, once that's done I can grant that team upload rights to the packageset and you'll be good to go
[18:19] <Laney> oh, yeah, we did a team didn't we
[18:20]  * Laney deactivates the direct membership again :P
[18:25] <Laney> stgraber: I made ubuntu-xorg-dev
[18:25] <mlankhorst> yay
[18:25] <stgraber> Laney: thanks. I'll take care of the rest after lunch
[18:26] <Laney> mlankhorst: added you
[18:26] <Laney> it will give you indirect ubuntu membership too
[18:29] <mlankhorst> ah k
[18:29] <cjwatson> Laney: is http://paste.ubuntu.com/1335410/ too terrible?  I don't actually see a way to preserve no-new-upstream-release-autosynciness without breaking the spec for the Debian revision part
[18:30] <cjwatson> or having a terribly misleading version
[18:30] <Laney> nah, that's what I'd do
[18:30] <Laney> this is what you get when you revert
[18:30] <cjwatson> yeah
[18:30] <cjwatson> worst case we end up with 1.0.1.3+really1.0.1.3-... for a while :-)
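For readers unfamiliar with the "+really" convention discussed above: the old upstream version is folded into the upstream-version field, so the revert sorts above the broken upload while actually shipping the older code. A quick check of the semantics, assuming dpkg is available (the version strings are the ones from the discussion):

```shell
# Debian version semantics behind the "+really" revert trick.  dpkg splits
# a version string at the LAST hyphen, so in "1.0.1.3-1+really0.9.0.4-2build1"
# the upstream part is "1.0.1.3-1+really0.9.0.4" and the revision is "2build1".
dpkg --compare-versions "1.0.1.3-1+really0.9.0.4-2build1" gt "1.0.1.3-1" &&
    echo "sorts above the broken 1.0.1.3-1"
dpkg --compare-versions "1.0.1.3-1+really0.9.0.4-2build1" lt "1.0.1.3-2" ||
    echo "but not actually << 1.0.1.3-2"
```

The second comparison is cjwatson's point at 16:17: because everything before the last hyphen counts as upstream version, the +really string is not below a hypothetical Debian 1.0.1.3-2.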
[18:42] <stokachu> cjwatson: so my shell-fu is probably wrong but could you look at http://paste.ubuntu.com/1335451/ and see if im missing something here?
[18:43] <cjwatson> stokachu: That looks OK to me if it works
[18:44] <stokachu> cjwatson: the second half is whats actually output to the file and it fails to load
[18:45] <cjwatson> Put 'set -x' at the top and look at the logs
[18:45] <stokachu> ok ill re-test
[18:45] <cjwatson> (You can just hack this in locally, no need to rebuild the package)
[18:46] <cjwatson> Should hopefully end up in .xsession-errors
[18:46] <stokachu> ok trying that now
[18:48] <stokachu> bah nothing showing up in .xsession-errors
[18:49] <stokachu> ah i see whats going on
[18:50] <infinity> It's just a shell script, you could just manually run it with "sh -x /path/to/script" and see if it does what you think it should.
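cjwatson's "set -x" and infinity's "sh -x" are the same mechanism; a minimal illustration with a throwaway script (the path and contents here are made up, not the real 80appmenu):

```shell
# Write a tiny stand-in for a generated Xsession.d script, then trace it.
# "sh -x" echoes each command to stderr before running it, so you can see
# exactly which values were substituted and which branch was taken.
cat > /tmp/80example <<'EOF'
moduledir=/usr/lib/x86_64-linux-gnu/gtk-2.0/2.10.0/menuproxies
if [ -d "$moduledir" ]; then
    echo "module dir present"
else
    echo "module dir missing"
fi
EOF
sh -x /tmp/80example
```

When the script is run by Xsession instead of by hand, the trace output lands in ~/.xsession-errors rather than the terminal, as noted above.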
[18:50] <stokachu> yea
[18:50] <stokachu> http://paste.ubuntu.com/1335471/
[18:51] <stokachu> i need a /usr/lib/*/$moduledir_suffix...
[18:51] <infinity> (I assume the same thing is being done to appmenu-gtk3 as well?)
[18:51] <stokachu> infinity: yep
[18:52] <stokachu> infinity: wouldnt this be easier in perl? :P
[18:52] <infinity> You said it, not me.
[18:52] <stokachu> hah
[18:53] <infinity> Wait...
[18:53] <infinity> moduledir=/usr/lib/x86_64-linux-gnu/gtk-3.0/3.0.0/menuproxies
[18:53] <infinity> Wasn't the whole point of this to ship a conffile that didn't have the arch bits in it?
[18:53] <infinity> Since this is part of an MA:same package?
[18:55]  * infinity wonders if this wouldn't be a trillion times easier if you just shipped 80appmenu_${triplet} for each build, so they don't conflict, and can continue having hardcoded paths...
[18:57] <stokachu> wouldn't that be adding more complexity unnecessarily though?
[18:57] <infinity> Then you could go back to the same 4-line 80appmenu file as before.
[18:58] <infinity> I don't see how it's added complexity, really.  You're shipping one conffile per arch lib, not unheard of.
[18:58] <infinity> See, eg: dpkg -S /etc/ld.so.conf.d/{i686,x86_64}-linux-gnu.conf
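A sketch of what one such per-triplet snippet could look like — the file name, variable name, and exact contents are hypothetical, since the real 80appmenu contents don't appear in this log:

```shell
# Hypothetical /etc/X11/Xsession.d/80appmenu-gtk_x86_64-linux-gnu as the
# amd64 build might ship it.  The i386 build ships a sibling file named
# for its own triplet, so both packages can hardcode their paths and
# never collide on a single conffile.
moduledir=/usr/lib/x86_64-linux-gnu/gtk-2.0/2.10.0/menuproxies
if [ -d "$moduledir" ]; then
    APPMENU_MODULE_DIR="$moduledir"   # variable name is made up
    export APPMENU_MODULE_DIR
fi
```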
[18:59] <stgraber> Laney: shouldn't that team be called -graphics-dev or similar considering wayland is included in the set?
[18:59] <mlankhorst> details, it's the xorg team responsible for it ;)
[19:09] <stgraber> mlankhorst: package set created and populated. I'm generating a new report now to confirm that the list matches what we discussed
[19:09] <mlankhorst> sure
[19:50] <stgraber> mlankhorst: http://people.canonical.com/~stgraber/package_sets/raring/xorg
[19:50] <mlankhorst> yeah noticed :)
[19:51] <mlankhorst> thanks
[20:14] <stokachu> infinity: i think ill be forced to do 80appmenu_{triplet} otherwise package conflicts happen on the config when both architectures attempt to install
[20:14] <stokachu> or i could separate out into an appmenu-data package
[20:15] <infinity> stokachu: A -data package for a single conffile seems a bit silly.
[20:17] <stokachu> yea, however, i think /etc/X11/Xsession.d/80appmenu-gtk_{i386,amd64} 80appmenu-gtk3_{i386,amd64} seems excessive
[20:17] <stokachu> but i dont see a way around it
[20:18] <stokachu> the wiki seems to think that shipping configs with data files in a shared library is wrong
[20:20] <stokachu> s/with/and/
[20:20] <kumadasu1> Hi. I want to apply this patch to chromium-browser in local build.  http://code.google.com/p/webrtc/issues/detail?id=512
[20:20] <kumadasu1> But I don't know how to apply the patch.
[20:20] <kumadasu1> Is this kind of question appropriate here?
[20:22] <kumadasu1> I'm using Ubuntu 12.04 precise.
[20:23] <kumadasu1> Probably I should use quilt.  I tried "quilt push -a" and got this:
[20:23] <kumadasu1> Patch chromium_useragent.patch does not exist
[20:23] <kumadasu1> Applying patch chromium_useragent.patch
[20:24] <kumadasu1> I looked in the directory and found "debian/patches/chromium_useragent.patch.in" instead of ".patch".
[20:25] <kumadasu1> What is .patch.in, and how do I apply the patch to chromium-browser?
[20:28] <ScottK> kumadasu1: Something .in usually means it's used to generate that file in some way.  I don't know the way the chromium-browser package is set up beyond "really complicated".
[20:28] <ScottK> It's probably in the top handful or two of packages for complexity to deal with.
[20:32] <kumadasu1> Hmmm, so now I understand that chromium is complicated.
[20:32] <stokachu> if the policy for multi-arch states we should not include arch-independent files in the shared library package, is it still wrong to assume a -data package is necessary even though it's for one file? we already create -bin packages for executables
[20:33] <stokachu> appmenu-gtk is kind of a special case, a shared library relying on a configuration file in /etc/X11/Xsession.d/
[20:34] <kumadasu1> chromium has its source compressed as tar.lzma.  That bothered me too.
[20:35] <kumadasu1> I could build chromium-browser with pbuilder.
[20:35] <ScottK> stokachu: Can you just generate it in the postinst if it's not already there and make it not a conffile or are there arch specific differences in the file?
[20:36] <stokachu> ScottK: http://paste.ubuntu.com/1335739/ this is an example of what gets generated during install
[20:36] <slangasek> stokachu: where do you see it said that arch-independent files may not be included in the shared library package?
[20:36] <stokachu> we could do something where we just ignore moduledir and test for /usr/lib/*/gtk-2.0/2.10.0/menuproxies
[20:37] <slangasek> also, appmenu-gtk isn't a shared library package, it's a plugin package
[20:37] <stokachu> slangasek: https://wiki.ubuntu.com/MultiarchSpec#Architecture-independent_files_in_multiarch_packages i was reading this
[20:37] <ScottK> stokachu: I think that would be 'wrong'.
[20:38] <slangasek> stokachu: right - that's specifically addressing shared library packages, not DSOs
[20:38] <stokachu> slangasek: gotcha
[20:38] <slangasek> there's no soname here, so there's no reason to worry about that section
[20:38] <kumadasu1> so if I know what quilt does, I can handwrite the diff file.
[20:40] <kumadasu1> the quilt command generates patches/a.patch and modifies patches/series; are there other files?
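To answer kumadasu1's question directly: besides debian/patches/<name>.patch and debian/patches/series, quilt keeps its applied-patch state under .pc/. The application step itself is simple; a toy sketch of what "quilt push -a" does conceptually (file names made up, .pc/ bookkeeping omitted):

```shell
# Build a toy source tree with one quilt-style patch, then apply every
# patch listed in debian/patches/series in order with -p1 from the tree
# root -- which is, conceptually, what "quilt push -a" does.
set -e
cd "$(mktemp -d)"
mkdir -p debian/patches src
echo "old line" > src/file.c
cat > debian/patches/a.patch <<'EOF'
--- a/src/file.c
+++ b/src/file.c
@@ -1 +1 @@
-old line
+new line
EOF
echo "a.patch" > debian/patches/series
while read -r p; do
    patch -p1 < "debian/patches/$p"
done < debian/patches/series
grep new src/file.c   # the patch has been applied
```

quilt adds bookkeeping on top of this (push/pop, refresh, per-patch state in .pc/), but the series file and -p1 application are the core mechanics.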
[20:40] <stokachu> slangasek: with that said im running into an issue when coinstalling amd64/i386 wrt 80appmenu and 80appmenu-gtk
[20:40] <slangasek> stokachu: well, the file does need to be identical across all architectures
[20:41] <ScottK> Which it's not.
[20:41] <stokachu> slangasek: so does it make sense to have it in a -data package?
[20:41] <slangasek> no
[20:42] <slangasek> you need to solve whatever's causing it to be different across architectures
[20:42] <slangasek> otherwise, your -data package would probably only work on one architecture
[20:42] <stokachu> ah i wonder if its because im storing moduledir=@moduledir@
[20:42] <slangasek> that would do it, yeah
[20:43] <stokachu> ive been staring at this too long :\
[20:43] <stokachu> ok that clears everything up, thanks for the fresh set of eyes
[20:43] <slangasek> sure :)
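The diagnosis in miniature: substituting @moduledir@ with an architecture-qualified path at build time yields byte-different files on each architecture, which is what breaks co-installation of a Multi-Arch: same pair. A sketch with the triplets hardcoded for illustration (a real debian/rules would query dpkg-architecture -qDEB_HOST_MULTIARCH):

```shell
# Render the template once per architecture and compare the results;
# differing output is exactly the co-installability problem above.
cd "$(mktemp -d)"
echo 'moduledir=@moduledir@' > 80appmenu.in
for triplet in x86_64-linux-gnu i386-linux-gnu; do
    sed "s|@moduledir@|/usr/lib/$triplet/gtk-2.0/2.10.0/menuproxies|" \
        80appmenu.in > "80appmenu.$triplet"
done
cmp -s 80appmenu.x86_64-linux-gnu 80appmenu.i386-linux-gnu ||
    echo "files differ between architectures"
```

Conversely, stokachu's glob idea above (testing for /usr/lib/*/gtk-2.0/2.10.0/menuproxies at runtime instead of baking in a path) would let one identical file serve every architecture.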
[20:44] <kumadasu1> I once tried handwriting the diff and modifying series, and used pbuilder (precise armhf) to package chromium-browser.
[20:46] <achiang> slangasek: hi, this blueprint seems to be associated to me, but perhaps you should be the approver? https://blueprints.launchpad.net/ubuntu/+spec/foundations-r-arm-boot-resume-speedup
[20:47] <kumadasu1> The package built and runs as a browser, but mediastream (the patch's purpose) doesn't work, and webgl doesn't work properly either.
[20:49] <kumadasu1> I don't know whether my procedure is correct or not.  Any suggestions?
[20:50] <slangasek> achiang: seems reasonable, marking.  are you and mfisch still the right assignee/drafter? (and is the drafting done?  If so, please set definition to 'pending approval')
[20:51] <achiang> slangasek: i don't think we should be the assignee/drafters
[20:51] <kumadasu1> Ah, the kernel version may differ between pbuilder's environment and the target.
[20:52] <slangasek> achiang: ah, well then - ok, will get that sorted out on our side
[20:52] <slangasek> achiang: btw, I missed this session... did you happen to notice my question in the whiteboard about what "fixed version" of bootchart refers to?
[20:53] <achiang> slangasek: yes, i believe in our build, bootchart was looking in the wrong spot for the logs and we wrote a small simple patch to fix it
[20:53] <mfisch> slangasek: we put a fixed version of bootchart in the public PPA and we have an open bug upstream
[20:53] <kumadasu1> because the target environment uses a PPA from TI OMAP.  I run chromium-browser on precise on a pandaboard.
[20:53] <slangasek> achiang, mfisch: could you please file a bug against the Ubuntu package with patch, and link that bug to the blueprint?
[20:54] <achiang> slangasek: yep, will do
[20:54] <slangasek> ta
[21:05] <stgraber> @pilot out
[21:13] <mfisch> slangasek: I linked the bug # into the Blueprint and cwayne is adding the patch
[22:12] <robert_ancell> StevenK, can you reject the latest gnome-games update to quantal-proposed? I gave it the wrong version number
[22:22] <StevenK> robert_ancell: I could, except I can't see it in -proposed
[22:23] <robert_ancell> StevenK, I see it here https://launchpad.net/ubuntu/quantal/+queue?queue_state=1&queue_text=
[22:23] <StevenK> Oh, I've been looking in NEW, sigh
[22:24] <StevenK> robert_ancell: Rejecting gnome-games/1:3.6.0.2-0ubuntu2
[22:24] <robert_ancell> ta
[22:28]  * infinity had completely forgotten StevenK was in ~ubuntu-archive until he just read backscroll.
[23:15]  * xnox thinks it's time to upgrade to raring =)
[23:19] <infinity> xnox: Why so late?  I've been running it since it opened!
[23:19] <xnox> infinity: i didn't trust britney at first.
[23:19] <infinity> She'll be crushed to hear that.
[23:20] <xnox> infinity: now that she is handling haskell back, I can see it's working.
[23:22] <mwhudson> britney is a ridiculous name inherited from debian?
[23:22] <mwhudson> i certainly hope we've stopped inventing names like that
[23:23] <xnox> mwhudson: http://pad.lv/~katie
[23:23] <mwhudson> xnox: well yes, and gina and fiera and other things
[23:23] <mwhudson> but they're all old
[23:24] <infinity> Most of the ones in Debian have since been renamed.
[23:24] <infinity> britney is an odd hold-out.
[23:24] <mwhudson> oh good
[23:43] <xnox> well ben is still called ben and it's fairly newish.