[01:26] <twb> So this "weyland" thing that Mark was waffling on about... does it mean ssh -X won't Just Work in upcoming releases of Ubuntu?
[01:26] <twb> (Or: is there a more appropriate channel to ask that?)
[01:37] <kklimonda> twb: it's simply way too early to ask that
[01:37] <twb> okey dokey
[01:38] <kklimonda> also, X is not the only way of running applications remotely.
[01:39] <twb> I realize that, but I'd prefer not to replace ssh -X with some raster-based nightmare like RFB or RDP
[01:39] <kklimonda> but it's still too early to ponder about it.
[01:39] <twb> Nod.
[01:42] <Chipaca> is there an easy and magic way to split a library package into libfoo and libfoodev?
[01:42] <lifeless> .install files
[01:45] <Chipaca> lifeless: thanks
[01:45]  * Chipaca reads
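(For context: lifeless is pointing at debhelper's debian/<package>.install files. A minimal sketch of the usual runtime/dev split; the library name and paths here are hypothetical, adjust to what the upstream build actually installs.)

```
# debian/libfoo1.install (runtime package: only the versioned shared object)
usr/lib/libfoo.so.*

# debian/libfoo-dev.install (dev package: headers and the unversioned .so)
usr/include/*
usr/lib/libfoo.so
```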
[01:55] <electrofreak> I've realized a problem with powernowd and the Phenom II's dynamic clocking. For example, if I download mprime and run 4 stresses from the program, my CPU usage goes to 100%, but the clocks of all the cores are 800MHz except for one, which is at 3.2GHz. This was also a problem when doing multithreaded converting with ffmpeg. But if I use 'stress' with 4+ cpu stresses, it will clock up all the cores properly. So there seems to be a problem with the threading and powernowd clocking each of the cores independently. Any ideas on a proper fix? (killing powernowd is a workaround)
[01:58] <psusi> electrofreak, uninstall the powernowd package... it's obsolete and I have requested that it be removed from the archive... the functionality it provides is now provided by the kernel itself
[01:59] <twb> Doesn't it just set the CPU scaling governor to performance on AC and conservative on battery?
[01:59] <psusi> I think that's laptop mode tools...
[02:00] <psusi> though it is interesting that your cores report different clocks... usually all the cores in a multi core cpu must be using the same clock
[02:01] <twb> Last time I looked, all those throttling apps basically just modprobed the driver and then set the governor on ACPI events
[02:01] <twb> i.e. they were all pretty much useless
[02:04] <electrofreak> psusi, well... I don't think that the kernel's version works right either
[02:05] <electrofreak> psusi, I could be wrong maybe
[02:05] <electrofreak> psusi, well, in older CPUs... the clocks were different per core...
[02:06] <electrofreak> but this is a brand new Phenom II... each core clocks differently
[02:06] <psusi> electrofreak, given that it works correctly with one program and not the other, I'd guess that the one is either not actually running multiple threads and keeping all cores busy, or they are running at low priority so are ignored by the governor.. but first thing I suggest is that you remove the powernowd package...
[02:06] <electrofreak> psusi, err... I messed up that sentence... in older CPUs, the clocks on each core were the same. But on new CPUs... each core can clock independently
[02:08] <electrofreak> psusi, well... that program is definitely doing multiple threads... because in top it reports nearly 400% CPU (4 cores) and when I stop powernowd... the temperature jumps up very quickly.
[02:08] <electrofreak> psusi, 'stress' actually spawns separate processes.
[02:08] <electrofreak> psusi, I'll try removing powernowd
[02:09] <electrofreak> psusi, although, I might need to reboot now... because it just stopped it and the kernel itself isn't handling it on its own right now (probably because powernowd took over before)
[02:10] <electrofreak> psusi, or is there something that I can set in /sys to make the kernel take over again?
[02:17] <electrofreak> psusi, ok. I echo'd "ondemand" into /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor, that seems to work
[02:17] <electrofreak> but now, mprime for some reason doesn't cause any of the cores to clock higher.
[02:19] <psusi> electrofreak, it's probably nicing itself and iirc, ondemand by default ignores processes using the maximum niceness
[02:20] <electrofreak> nice value is 10
[02:20] <electrofreak> should I try 0 nice?
[02:34] <electrofreak> psusi, ok. I figured this out. have to set ignore_nice_load=0... then everything works fine.
[02:34] <electrofreak> psusi, adding it to sysfs.conf to persist through reboots, too.
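(For reference, the workaround electrofreak describes above amounts to the following. A sketch only: the cpufreq sysfs paths are per the kernel's cpufreq ABI of that era, the ignore_nice_load location varies between kernel versions, and the writes require root, so this is not meant to be run blindly.)

```shell
# switch every core to the kernel's ondemand governor
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo ondemand > "$g"
done

# make ondemand count niced processes (mprime renices itself) as load
echo 0 > /sys/devices/system/cpu/cpu0/cpufreq/ondemand/ignore_nice_load
```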
[06:50] <pitti> Good morning
[06:51] <pitti> slangasek: nice, thanks
[07:35] <dholbach> good morning!
[09:24] <neeraj> Hi, while uploading my package to a PPA, I am getting this error msg again and again for two of the six packages which I am packaging. Rejected:
[09:24] <neeraj> Unable to find sugar-presence-service-0.90_0.90.1.orig.tar.bz2 in upload or distribution.
[09:24] <neeraj> Files specified in DSC are broken or missing, skipping package unpack verification.
[09:26] <neeraj> any help? I tried Google but was not able to find a solution
[09:36] <akheron> neeraj: try building with the -sa option
[09:38] <neeraj> akheron: thanks. I forgot to use "-sa" and kept trying with -s flag :(
[12:48] <pitti> sconklin: I assigned bug 672964 to you, it's a regression in the lucid proposed kernel
[12:48] <pitti> sconklin: diwic reported it, I guess he can supply additional debug info if needed
[13:53] <Riddell> doko: what should we do about -mimplicit-it=thumb ?
[13:54] <Riddell> doko: so far we added it to qt and kde4libs to work around failing to build
[13:54] <Riddell> doko: should we add it to other packages as needed or should it go back in the gcc defaults?
[13:54] <Riddell> currently the kde stack is stuck on http://launchpadlibrarian.net/58639702/buildlog_ubuntu-natty-armel.kdepimlibs_4:4.5.3-0ubuntu1_FAILEDTOBUILD.txt.gz
[14:10] <mdeslaur> cjwatson, ev: bug 673028
[14:35] <Riddell> \sh: could you verify bug 533369 ?
[14:45] <Riddell> oubiwann: hi, what was the outcome of the touch sessions with the Qt people?
[14:52] <\sh> Riddell: I trashed my squeeze stuff, but can redo it tomorrow... not right now...
[14:54] <oubiwann> Riddell: they liked the architecture, have used v1 of the GEIS API (complete with a Qt demo), and are even more excited about GEIS v2
[14:55] <oubiwann> they provided a great deal of feedback on all aspects of utouch, and really improved the quality of the UDS sessions as a result
[14:55] <Ng> 9
[14:55] <Ng> doh
[14:55] <ogra_ac> 10
[14:55] <ScottK> 11
[14:59] <Riddell> oubiwann: so presumably we can look forward to touch on linux being in Qt 4.8?
[14:59] <oubiwann> Riddell: well, it's more complicated than that... full MT support on Linux with Qt depends on XInput 2.1 landing
[15:00] <oubiwann> we're hoping that 2.1 makes it into the X 1.11 release
[15:01] <oubiwann> 1.10 is probably going to come out around Apr 2011, and that will still be too soon for XI2.1 to be fully tested
[15:01] <oubiwann> 1.11 is likely to come out around August of 2011, so that's the target for XI2.1 getting released
[15:02] <oubiwann> toolkits aren't interested in supporting touch officially for linux until 2.1 lands
[15:03] <oubiwann> but the Qt guys are really staying closely involved with the 2.1 dev process and plans, and I would expect that they'll be ready to go as soon as X is
[15:03] <oubiwann> (in other words, I wouldn't expect a 6 month lag from them after August 2011)
[15:07] <Riddell> oubiwann: right, thanks for the update
[15:07] <oubiwann> Riddell: no prob
[15:11] <bilalakhtar> Where can I find a listing of people with upload rights, which packagesets they cover, and other details?
[15:12] <geser> bilalakhtar: the data is in LP and to some part also distributed in the heads of some people
[15:13] <bilalakhtar> geser: I mean, what is the URI to the LP page?
[15:13] <bilalakhtar> since I know there is one
[15:13] <geser> I don't know a single page
[15:13] <bilalakhtar> geser: okay
[15:13] <bilalakhtar> fine then
[15:13] <geser> especially that I don't know of any page in LP that shows upload permissions at all
[15:14] <geser> you have to use the LP API to get the data
[15:14] <bilalakhtar> ok
[15:14] <geser> PPU and similar (e.g. package sets) are only managed through the LP API
[15:17] <geser> are you looking for something special? perhaps I can tell you the right spots to look at
[15:28] <cjwatson> bilalakhtar: might be easiest to use edit_acl.py from lp:ubuntu-archive-tools - it has a query feature
[15:29] <bilalakhtar> cjwatson: thanks
[15:30] <SpamapS> So.. I'm wondering why there was no discussion of btrfs at uds-n .. don't we want it as a default FS for 12.04? is that being relaxed now?
[15:30]  * SpamapS has started using btrfs for all cache/temporary data.
[15:31] <cjwatson> mostly forgetfulness, but I thought there were still serious issues with reliability and things like being able to use a halfway reasonable percentage of the block device
[15:33] <SpamapS> since UDS-M I've been using it for my local mirror and now I'm working on getting schroots set up on it
[15:34] <SpamapS> have had no issues, but have never gotten near capacity on a drive. does it just fragment badly?
[15:34] <pitti> I have used it as / (not /home yet) for about half a year without data loss
[15:34] <pitti> however, resuming from RAM often hangs, and I heard that this was particular to using btrfs
[15:36] <jdong> I've used it as / and /home and /srv for a year...
[15:36] <jdong> no data loss but two reformats due to metadata occupying tens of gigabytes of space
[15:36] <cjwatson> SpamapS: I don't know the details.  comments I've heard are that you're lucky if you get it to use even half the disk space.  I think that's unacceptable for Ubuntu
[15:36] <jdong> and performance, I've found, rapidly degrades with time
[15:36] <jdong> cjwatson: it's lately more like 80%, but still not great.
[15:37] <cjwatson> (I haven't looked in a while though, and I recognise that we're not contributing much to the kernel side)
[15:37] <jdong> it's much more irritating when it mysteriously goes slow though...
[15:38] <jdong> particularly with qemu/kvm type workloads (random IO on gigantic files), btrfs can generate a lot of disk traffic
[15:39] <jdong> it's still missing important features like a consistency check / repair mechanism too
[15:41]  * cjwatson grumbles at having to do http://paste.ubuntu.com/528747/ to busybox
[15:42] <SpamapS> jdong: wow.. ok .. so not in 12.04 ;)
[15:43] <jdong> SpamapS: well I don't know. I'm not great at predicting the future, and Ubuntu doesn't have a stake in doing this kernel development right now
[15:43] <jdong> so it's just speculation on our end
[15:43] <jdong> this looks like a case of proverbial "it's ready when it's ready"
[15:44] <\sh> cjwatson: why?
[15:44] <SpamapS> jdong: IMO, if it's not ready by 11.04, it's not ready for 12.04 given the long-term ramifications of a default filesystem choice on an LTS
[15:44] <cjwatson> \sh: see build logs.  irritating header incompatibilities
[15:45] <jdong> SpamapS: well the filesystem is certainly not something to risk mass-exposure to. But who knows, maybe by 12.04 ext4 will be deprecated in favor of btrfs :)
[15:45] <jdong> SpamapS: never know what'll happen in a year+ of Linux dev
[15:48] <jdong> SpamapS: but I think the bottom line is, we'll decide around UDS time for 12.04 with the information then. Right now it's hard to speculate one way or the other
[15:48] <jdong> but users are already welcome to do btrfs installs and play with the technology
[15:48] <SpamapS> jdong: if there is one thing to be conservative about.. default filesystem on an LTS is probably it. ;)
[15:48] <jdong> yep :)
[15:48] <\sh> cjwatson: hmm...wasn't there some discussion about not using <linux/*> includes when avoidable?
[15:48] <SpamapS> jdong: if it's not already default for 11.10 .. it would be hard to convince *me* that it's ready enough for 12.04. :-P
[15:50] <jdong> SpamapS: your opinion might be different if all the other popular OSes by 12.04 have features afforded by btrfs-type filesystems.
[15:50] <SpamapS> Seems to me like there's another thing going on, which is changing the way people think about files in general. There's the traditional static folder/tree view, and then there's something else.. I like the idea of a file manager that is entirely time-oriented.
[15:51] <SpamapS> jdong: If they have something btrfs-type that is tested and agreed upon, then let's use that. ;)
[15:51] <jdong> SpamapS: indeed if OS X, Windows, RedHat, and friends ALL had time machine type snapshotting features, live checksumming / recovery, etc... we might get impatient
[15:51] <ebroder> i'm kind of skeptical that's a compelling enough use case to be worth getting impatient over
[15:52] <SpamapS> ebroder: people don't think twice about saving a file if they know it is version controlled. "forever undo" is quite freeing. :)
[15:53] <jdong> ebroder: deep integration with the rest of the desktop of COW snapshotting would make me very impatient ;-)
[15:53] <cjwatson> \sh: yes.  hence "grumbles"
[15:58] <doko> Riddell: are there bug reports open about this?
[16:06] <Riddell> doko: no, just build logs
[16:17] <doko> Riddell: I'll have a look tomorrow with the linaro people
[16:19] <doko> Riddell: so this is kdepimlibs, qt and kde4libs ?  a bug report would be nice
[16:24] <TeTeT> tkamppeter: hi, I'm encountering a black/white vs color printing problem in eog, I filed a bug against eog, but not sure if it's not actually cups: https://bugs.launchpad.net/ubuntu/+source/eog/+bug/673082
[16:39] <m4t> hey, i've got a .config that fails to boot when compiled with 10.10 32bit toolchain
[16:39] <m4t> i verified this on another maverick install also
[16:40] <m4t> CONFIG_MPENTIUMM and CONFIG_M686 both have the issue
[16:41] <m4t> should i file this under linaro-gcc?
[17:35] <pitti> cjwatson: any idea how this could have happened? https://launchpad.net/ubuntu/jaunty/+source/xserver-xorg-video-geode/2.11.10-1~jaunty1
[17:35] <pitti> cjwatson: Q-FUNK just accidentally uploaded it (was meant for PPA)
[17:36] <pitti> cjwatson: shall I reject the binary?
[17:36] <psusi> jdong: I've been playing with lvm snapshots to test upgrading to natty and then reversing it... very nifty
[17:36] <pitti> it should just have been rejected (upload to stable), and on top of that jaunty is EOL
[17:36] <RoAkSoAx> kirkland: ping?
[17:36] <pitti> wgrant: ^ any idea about this jaunty upload?
[17:37] <cjwatson> pitti: madison-lite on cocoplum doesn't list it
[17:37] <pitti> cjwatson: it was just uploaded a few mins ago
[17:38] <cjwatson> absolutely should have been uploaded, do what you can to reject it
[17:38] <cjwatson> er, shouldn't
[17:39] <pitti> cjwatson: I rejected the binary, but of course the source is there now
[17:40] <RoAkSoAx> pitti: howdy! I have a question for you! I'm gonna ship the scripts (to reduce power consumption) with powernap. So I was wondering if it would be better to install the scripts in /usr/lib/powernap/actions and then symlink to /etc/pm/power.d, or should I install them directly to /etc/pm/power.d ?
[17:40] <pitti> RoAkSoAx: install directly, please; that's much less confusing
[17:41] <jdong> psusi: yeah and btrfs's potential to extend that to the per-file/per-directory level, once integrated with applications, will be incredible too
[17:41] <RoAkSoAx> pitti: indeed. though I'm also writing a tool to enable/disable scripts easily and also list scripts with their description. So I guess that the tool will just remove the executable bit
[17:42] <pitti> RoAkSoAx: or rename them to .disabled or so
[17:42] <pitti> RoAkSoAx: but chmod -x sounds okay, too
[17:42] <SpamapS> https://launchpad.net/~clint-fewbar/+archive/fixes/+build/2037553/+files/buildlog_ubuntu-lucid-amd64.mongodb_1%3A1.2.2-1ubuntu1.1~ppa1_FAILEDTOBUILD.txt.gz
[17:42] <SpamapS> can anybody explain why this symlink produces this error:
[17:42] <TerminX> anyone know why rsyslog hasn't been synced from debian in quite a long time?
[17:43] <SpamapS> dpkg-deb: building package `mongodb' in `../mongodb_1.2.2-1ubuntu1.1~ppa1_amd64.deb'.
[17:43] <SpamapS> tar: ./usr/bin/mongo: file changed as we read it
[17:43] <SpamapS> /usr/bin/mongo is a symlink
[17:43] <RoAkSoAx> pitti: alright, will evaluate which one is more convenient. Thanks for the Input :)
[17:43] <SpamapS> to ../lib/mongodb/xulwrapper
[17:45] <cjwatson> TerminX: needs merge, somebody needs to look at it by hand
[17:45] <cjwatson> (like all merges)
[17:45] <RoAkSoAx> SpamapS: you doing the symlink manually?
[17:45] <SpamapS> no, with dh_link
[17:46] <SpamapS> it works fine when building locally w/ sbuild
[17:46] <TerminX> cjwatson: ah, I guess there's no specific maintainer for the package then?  I was under the impression that ubuntu had switched to rsyslog by default
[17:47] <TerminX> I don't think anybody looked at it during the maverick dev cycle either
[17:48] <SpamapS> some things just get lost in the scuffle. ;)
[17:48] <cjwatson> TerminX: merges are assigned to the person who touched the package last by default; but not everyone gets around to all their merges
[17:48] <cjwatson> the standard rubric is that if you want to do it, you contact the person who touched it last to avoid duplicating work
[17:49] <RoAkSoAx> SpamapS: idk then... maybe the symlink gets overridden by something else during the creation of the package (i'm just guessing :) )
[17:49] <SpamapS> RoAkSoAx: at this point all I can do is guess. :-/
[17:49] <ebroder> looking at the diff from debian, there's probably a bunch of stuff you could drop if you merged rsyslog now - stuff like sysklogd transitional code
[17:49] <SpamapS> RoAkSoAx: I suppose I can ls -lR debian/mongodb in the rules file so we see what it looks like before that point
[17:51] <RoAkSoAx> SpamapS: or see if soemthing is installing in /usr/bin/mongo instead of just symlinking
[17:52] <SpamapS> RoAkSoAx: ls -lR will show that I hope
[17:52] <SpamapS> RoAkSoAx: doesn't really make sense that it would "change while reading" though
[17:52] <SpamapS> and I'm a little confused why it does it only on the buildds
[17:52] <juk_> bug-buddy just caught stardict with: ** GLib **: /build/buildd/glib2.0-2.26.0/glib/gmem.c:239: failed to allocate 512 bytes
[17:55] <SpamapS> hmm
[17:55] <RoAkSoAx> SpamapS: try building with a pbuilder instead ..
[17:55] <ebroder> pitti: by the way, i'd very much like to help with the perl-sectomy, or the footprint work more generally. please feel free to throw tasks at me if you'd like
[17:55] <SpamapS> I just updated my lucid schroot and one of the things it pulled down was this:
[17:55] <SpamapS> Get:14 http://archive.ubuntu.com/ubuntu/ lucid-updates/main tar 1.22-2ubuntu1 [390kB]
[17:55] <pitti> ebroder: ooh, much appreciated
[17:55] <RoAkSoAx> SpamapS: ah! that might be it then
[17:55] <SpamapS> If it breaks now, there may be a regression
[17:55] <RoAkSoAx> SpamapS: indeed
[17:56] <RoAkSoAx> SpamapS: btw.. is there a meeting today?
[17:56] <SpamapS> RoAkSoAx: yes, 18:00UTC, though I think that will be moved to 16:00UTC going forward
[17:56] <RoAkSoAx> SpamapS: ok cool... will be back later ... my battery is almost drained :)
[17:57] <SpamapS> looks like zul forgot to update the header
[18:09] <pitti> ebroder: do you want to start with some small ones, like cups-driver-gutenprint and libfile-copy-recursive-perl?
[18:09] <pitti> ebroder: not sure whether you followed the changes I already uploaded today?
[18:10] <pitti> ebroder: in the ideal case, packages already don't need anything from perl-modules, so a simple dh_perl -d is enough
[18:10] <pitti> ebroder: for some packages I needed to replace functionality from perl-modules with external programs (like rm -r) or other small replacements
[18:10] <pitti> ebroder: I sent them to Debian, too, one patch is already accepted :)
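(The "simple dh_perl -d" case pitti mentions is, in a dh(1)-style debian/rules, just an override target. A hypothetical sketch assuming debhelper 7 override support; the rest of the rules file is elided to the catch-all target.)

```make
#!/usr/bin/make -f
%:
	dh $@

# generate a perl-base dependency instead of a full perl dependency
override_dh_perl:
	dh_perl -d
```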
[18:10] <ogra_ac> pitti, we need to talk about the WI tracker some time this week ... (mobile got renamed to arm etc)
[18:11] <pitti> ebroder: the biggest thing left is apparmor-utils
[18:11] <ogra_ac> (not today anymore though, just a heads up that I will ping you some time)
[18:13] <pitti> ogra_ac: sure; please feel free to adapt natty.cfg yourself (~platform on lillypilly)
[18:13] <ogra_ac> ok
[18:14] <ebroder> pitti: sure, the small ones sound fine. i can look at apparmor-utils too if i get a chance, but what is our solution if a package does need something from perl-modules?
[18:14] <pitti> ebroder: for easy cases I rewrote the code to just use perl-base
[18:14] <jdstrand> pitti, ebroder: you might check with kees on the apparmor one-- he may already be working on it
[18:14] <pitti> ebroder: like "dirname"
[18:15] <pitti> ebroder: for doc-base the only place that needed it was --dump-db, which is a debug option; I changed the code to say "Please install perl for this blabla" if you use it
[18:15] <ebroder> pitti: ok, sounds good. i'll try to be sufficiently creative
[18:15] <pitti> ebroder: I haven't yet worked on a package where perl-modules is necessary for core functionality
[18:16] <pitti> ebroder: in that case we still have the option to split out that module from perl-modules, so that we can install it separately
[18:16] <pitti> ebroder: we should make a list of those in the blueprint, so that we can do that in a larger step
[18:16] <pitti> ebroder: so if you stumble over one of those, please make a note there
[18:16]  * ebroder nods
[18:16] <cjwatson> I wouldn't do that casually though, the Replaces forest can get pretty insane
[18:16] <pitti> ebroder: thanks! please let me know about the progress, so that we can coordinate
[18:16] <pitti> cjwatson: no, we should try to avoid that
[18:17] <cjwatson> you'll probably run into File::Path at some point
[18:17] <pitti> fortunately so far it wasn't needed
[18:17] <pitti> you can do surprisingly much with perl-base
[18:18] <kees> pitti, ebroder: so apparmor-utils uses libterm-readkey-perl, librpc-xml-perl in the Depends
[18:18] <cjwatson> libfile-copy-recursive-perl uses File::Copy which is in perl-modules
[18:18] <ebroder> pitti: in the cases where we can just do dh_perl -d, why don't i just make a note on the blueprint? it'll probably be faster for you to just do those than you dealing with a branch from me
[18:18] <kees> actually, ${perl:Depends}
[18:18] <pitti> kees: right, those are fine (as soon as we fix those to not pull in perl); we need to fix apparmor-utils' "perl" dependency itself, though
[18:18] <cjwatson> sometimes you can use the module optionally and implement some other fallback
[18:19] <kees> pitti: it's coming from ${perl:Depends} I assume
[18:19] <kees> pitti: shouldn't we fix dpkg's idea of that instead?
[18:19] <pitti> kees: right, dh_perl adds a perl dependency by default
[18:19] <cjwatson> kees: that's what dh_perl -d does
[18:19] <kees> pitti: so what's the right solution?
[18:19] <pitti> kees: I usually go through "grep -wr use" and check where all imported modules come from
[18:20] <kees> ah, s/dh_perl/dh_perl -d/ in rules
[18:20] <pitti> kees: and once nothing is left, change dh_perl to -d
[18:20] <cjwatson> be sure to check require as well as use
[18:20] <pitti> right
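(pitti's and cjwatson's checklist above, grep for `use` and also `require`, can be done in one pass. A small self-contained illustration; the directory, file name, and modules are just examples.)

```shell
# create a throwaway source tree with one perl file to scan
mkdir -p /tmp/perl-audit-demo
cat > /tmp/perl-audit-demo/example.pl <<'EOF'
use File::Basename;
require Term::ReadLine;
EOF

# find both compile-time (use) and runtime (require) module loads at once
grep -wrE 'use|require' /tmp/perl-audit-demo
```

Each reported module then needs checking against perl-base's contents before switching the package to dh_perl -d.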
[18:21]  * pitti will still do some maverick SRU catchup and then call it a day; enough breakage for one day :)
[18:21] <pitti> so, good night everyone!
[18:27] <kees> okay, I remain confused. what is the process to make sure I can switch to perl-base?
[18:27] <kees> because it seems like a lot of stuff lives in perl-modules, which requires perl currently
[18:28] <pitti> kees: grep the source for "use" and "require" and check whether all of those are in perl-base, or standalone perl packages (which are seprate dependencies)
[18:28] <kees> pitti: right, I have the list now. I guess doing the perl-name to package is the trick
[18:29] <pitti> kees: for "use foo" I usually do "dpkg -S foo.pm"
[18:29] <pitti> not sure whether there's something better
[18:29] <kees> yeah, that's where I am too. Okay, cool.
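(The "perl-name to package" step kees mentions is mechanical: turn `Foo::Bar` into `Foo/Bar.pm` and feed that to `dpkg -S`. A sketch of the name conversion; the dpkg call itself is left commented since it needs an installed Debian/Ubuntu system.)

```shell
mod='File::Basename'
# Foo::Bar -> Foo/Bar.pm, the path form that dpkg -S can search for
pm="$(printf '%s' "$mod" | sed 's|::|/|g').pm"
echo "$pm"    # -> File/Basename.pm
# dpkg -S "$pm"    # would then report the owning package
```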
[18:29] <pitti> kees: thanks for looking into it!
[18:29] <kees> you bet.
[18:30] <kees> pitti: btw, with the loss of changelog.Debian.gz, people will not be able to examine their local filesystem to determine why a security update to a package was performed...
[18:30] <kees> what happens if I encounter something that requires perl-modules ?
[18:30] <pitti> kees: they just came back :) (in reduced form), see u-d-a@ and planet
[18:31] <pitti> kees: but apt-changelog should pick those up as well
[18:31] <pitti> kees: for simple cases it's often possible to rewrite
[18:31] <kees> require Term::ReadLine;
[18:32] <kees> pitti: ah, top 10 changes, nice.
[18:32] <pitti> e. g. rmtree($dir) -> system('rm', '-r', $dir)
[18:32] <kees> use Data::Dumper
[18:32] <pitti> kees: that's more tricky; perhaps we could use libterm-readline-gnu-perl for that, and un-"perl"-ize (and MIR) that
[18:33] <pitti> kees: Dumper was used in doc-base, but it was only a debugging tool; I did http://launchpadlibrarian.net/58890660/doc-base_0.9.5_0.9.5ubuntu1.diff.gz for that
[18:33] <kees> File::Basename, the list goes on and on
[18:33] <pitti> kees: as I said, this one will be hard :)
[18:33] <pitti> Basename is trivial to express with core Perl, though
[18:33] <kees> hrmpf.
[18:33] <kees> yeah
[18:34] <pitti> kees: I don't (much) expect to actually be able to drop Perl completely in Natty; but working towards it
[18:34] <kees> -    mkpath ($d, 0, 0755);
[18:34] <kees> -    rmtree ($d.$suffix, 0, 0);
[18:34] <kees> +    system ('mkdir', '-m', '-p', $d);
[18:34] <pitti> at least it will be much easier to drop it for custom projects, etc
[18:34] <kees> that's wrong. -m requires an argument
[18:34] <pitti> kees: oops, thanks; didn't come up in testing
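(The corrected form of the snippet kees quotes would look like the following sketch. Note kees's later point that File::Path::mkpath's mode is filtered by umask while mkdir -m is not, so the two are close but not exactly equivalent; the directory name here is just for illustration.)

```shell
# replacement for File::Path::mkpath($d, 0, 0755): -m takes the mode as its argument
d=/tmp/mkpath-demo/sub
mkdir -p -m 0755 "$d"

# replacement for File::Path::rmtree($d)
rm -r /tmp/mkpath-demo
```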
[18:35] <kees> I think we'd be better off having a perl module that lives in -base that implements all these common uses
[18:36] <pitti> kees: as I said above, for the more complex ones we might actually just split them out from perl-modules
[18:36] <pitti> so that we can install them separately
[18:36] <kees> okay.
[18:37] <pitti> kees: fortunately most packages just look like http://launchpadlibrarian.net/58895007/foomatic-filters_4.0.5-0ubuntu3_4.0.5-0ubuntu4.diff.gz
[18:39] <kees> pitti: heh, good. apparmor-utils seems to make up for it. :) 6 things from -modules or perl itself. :)
[18:40] <pitti> I'll keep that for later then :)
[18:40] <kees> heh
[18:41] <pitti> kees: I'll start with the simple libraries, they should be easy, and can collect necessary stuff along the way
[18:41] <kees> have you replaced File::Temp anywhere yet?
[18:41] <pitti> no
[18:41] <pitti> I touched some 6 packages so far, 4 were trivial (just add -d), one required a simple code change, and doc-base was the one nontrivial one so far
[18:42] <pitti> kees: (doc-base fixed, FTR, thanks for spotting)
[18:42] <kees> pitti: cool
[18:43] <SpamapS> RoAkSoAx: darn, no failure with the updated tar
[18:44] <kees> pitti: oh, actually, sorry, it's still wrong. File::Path::mkpath's $mode is filtered by umask whereas mkdir -m is not.
[18:44] <pitti> kees: but it's just a cache from public doc data..
[18:45] <apw> pitti, who owns the launchpad-janitor scripting ?
[18:45] <pitti> I think having that directory 700 would be wrong
[18:45] <pitti> apw: launchpad?
[18:45] <pitti> apw: you mean the thing that autocloses bugs and so on?
[18:51] <apw> pitti, yeah indeed ... my real question is, is there a way to say 'really don't close this bug' so that we can have a proposed kernel with two uploads, the first one closes a bug and the second reopens it ...
[18:51] <apw> and if there isn't do we think that they would accept a new syntax to allow that
[18:51] <pitti> apw: accepting into proposed doesn't close bugs
[18:51] <pitti> just when they get moved to -updates
[18:57] <apw> pitti, right ... exactly, but when it does go into -updates everything in the changelog (which is all proposed versions in one block via -v<old updates version>) gets processed and closed
[18:57] <pitti> apw: correct
[18:58] <pitti> apw: so the followup upload has to change the changelog accordingly to not mention the dropped patches any more
[18:58] <apw> pitti, and I want to be able to close a bug in the first -proposed upload, but undo it in the second -proposed upload ... so that the bug is not closed when moving to -updates
[18:58] <pitti> we can't close bugs in the first upload
[18:58] <pitti> they aren't fixed if they are only in -proposed
[18:58] <apw> pitti, so it is acceptable to hand edit the changelogs to drop the LP: bits from the first upload to proposed to ensure they don't close on move to updates ?
[18:58] <pitti> apw: there is no actual need to start a new changelog record, though; you could just bump the existing one and drop stuff
[18:58] <apw> pitti, yep ... get you on the proposed not doing anything bit
[18:59] <apw> pitti, that is acceptable ?
[18:59] <pitti> apw: that works for me as well
[18:59] <pitti> whatever is more convenient
[18:59] <pitti> i. e. update the first changelog record, or add a new one for the new version and just retro-fix the previous one
[18:59] <pitti> the former would look less confusing to users, though
[19:00] <apw> pitti, doesn't that mean that our changelog sort of lies ?
[19:00] <pitti> apw: not really IMHO; the bits that get released to -updates never had the dropped changes after all
[19:00] <pitti> apw: but as I said, I'm also fine with your method
[19:00] <apw> pitti, ok thanks
[19:02]  * pitti waves good night for real now
[19:04] <RoAkSoAx> SpamapS: weird then
[19:06] <cjwatson> kees: BTW, best not to do any of this (perl-base -> perl) unless you have to, IMO - I don't think we should be aggressively doing it for stuff not on the CDs
[19:08] <cjwatson> apw: why list them in the -proposed upload in the first place, then?  Why not just say "see LP #nnnnnn" (or some similar syntax which carefully doesn't match the close regex) if you just want to refer to it?
[19:09] <bilalakhtar> pitti: So this means that there won't be a debian/changelog in ubuntu packages from now on?
[19:10] <apw> cjwatson, well we are uploading a -proposed with them in where they should close, then if they fail testing we revert them and want them not to close any more... seems the appropriate thing is to as you say change that first reference from LP: xxx to SEE: xxx and we'll be golden
[19:11] <bilalakhtar> BTW, why is -proposed frozen?
[19:11] <cjwatson> apw: so you're not unconditionally leaving them open in -updates, only if -proposed fails?
[19:11] <cjwatson> bilalakhtar: be careful about the distinction between debian/changelog (source packages) and changelog.Debian.gz (binary packages).  pitti's work only affects the latter (and there's open TB discussion about it)
[19:11] <cjwatson> bilalakhtar: -proposed is frozen to assist the Linaro 10.11 release
[19:12] <bilalakhtar> cjwatson: okay, thanks
[19:12] <cjwatson> or was frozen, I think I saw slangasek saying he was OK with it being unfrozen nowish
[19:13] <apw> cjwatson, we are marking them for closure yes in the first upload to -proposed, then we re-upload to -proposed undoing the commits which didn't work (or could not be verified), so we have a changelog block which has them closing and one which reverts that change.  when that moved to -updates it would erroneously close (via the janitor) so we were checking how to best avoid the bad close.  it seems effectively removing the bug numbers from the first changelog section is the right approach
[19:13] <cjwatson> or you could just reopen after the janitor closes them
[19:13] <cjwatson> consider how much technical work is worth it :)
[19:14] <cjwatson> technically, a reupload ought to require some degree of reverification, which is expensive
[19:14] <apw> cjwatson, heh indeed, I think updating the references in the old is easier than touching all the bugs as we can script the former
[19:14] <apw> cjwatson, yes, and the regression testing is meant to occur on the second upload to give us more confidence
[19:17] <lool> Hmm any idea why postgresql-9.0 isn't in natty?  I see it has been in unstable for ~20 days, perhaps we only sync packages which made it to testing?
[19:17] <cjwatson> you can script the latter too ... :-)
[19:18] <cjwatson> lool: no, it's because it delivers at least one binary package (libpq-dev I think) which is also in natty built by another source package.  I asked pitti about it and he said he wasn't sure if he wanted to switch to 9.0 yet
[19:18] <cjwatson> lool: testing isn't related
[19:19] <cjwatson> apw: sounds like a lot more work to me, but as you like ...
[19:19] <lool> cjwatson: Ok; thanks for the explanation, I remember you also mentioned that in your email, just didn't think of this explanation
[19:36] <robbiew> cjwatson: so http://cdimage.ubuntu.com/ubuntu-server/daily/current/ has server images for Mac/PPC32/PS3...was this done on purpose?
[19:37] <robbiew> consolidation?
[19:37] <cjwatson> robbiew: yes, https://blueprints.launchpad.net/ubuntu/+spec/ubuntutheproject-foundations-n-cdimage-ports-consolidation
[19:37] <cjwatson> killing off the ports directories in general
[19:37] <robbiew> ah ha!  I knew there was a blueprint
[19:37] <robbiew> couldn't find it
[19:37] <robbiew> thanks!
[19:38] <robbiew> cjwatson: so from a QA perspective, did we agree to test these as well?
[19:38] <cjwatson> (though the exact set of images being built may well change ...)
[19:38] <cjwatson> certainly not, no
[19:38] <robbiew> cool
[19:38] <robbiew> hggdh: ^^^
[19:38] <cjwatson> there's a separate blueprint somewhere for what we should be testing
[19:38] <cjwatson> this was just reorganisation of what we were already building
[19:38] <cjwatson> mind you, *somebody* should be on the hook to test anything we build
[19:38] <cjwatson> but it shouldn't necessarily be Canonical QA
[19:38] <hggdh> cjwatson, robbiew: thanks
[19:39] <robbiew> cjwatson: ack
[19:39] <robbiew> agreed
[19:39] <robbiew> cjwatson: so much for me requesting PS3s for the Server team
[19:39] <robbiew> lol
[19:44] <ajmitch> robbiew: surely you could get a few PS3s donated to willing community members to test? :)
[19:45] <robbiew> heh...they don't even support Linux as another OS anymore ;)
[19:48] <cjwatson> robbiew: :-)
[19:48] <cjwatson> yeah, I think ps3 was slated to be EOLed, I'm just waiting for somebody who purports to be authoritative for that port to tell me - don't like doing it by fiat
[19:49] <robbiew> cjwatson: so who's authoritative for PS3?
[19:59] <cjwatson> robbiew: good question.  I passed it on to somebody called IIRC Tiago Faria a while back, but I don't know who took it from there
[20:36] <wgrant> pitti: Hm, that's unfortunate.
[20:50] <jdstrand> sbeattie: fyi, apparmor hit maverick-proposed
[20:51] <sbeattie> jdstrand: cool, thanks
[21:40] <lifeless> sladen: hi
[21:43] <sladen> lifeless: hello
[21:48] <highvoltage> hi lifeless and sladen!
[22:04] <mathiaz> cr3: http://people.canonical.com/~chucks/SRUTracker/sru-tracker-bugs.html
[22:04] <mathiaz> cr3: ^^ this is the SRU tracker I mentioned
[22:06] <cr3> mathiaz: thanks, zul actually answered me. email sent to victorp for his information
[23:51] <bdrung> kirkland: thanks for the merge. can you move the errno tool from u-d-t to bikeshed?