[13:28] *python script
[13:28] What sort of python script? One that packages for you?
[13:28] persia: when I needed sponsorship, I saw packages on mentors.d.o took very long to be reviewed, while those reviewed by a team were taken into account a lot faster
[13:29] persia, it's for my tweekers project ( https://launchpad.net/tweekers )
[13:29] teams have a well-defined workflow, so they are usually quicker in welcoming new members
[13:30] DktrKranz: Makes sense. I tend to work closely with some individual for non-team work (but I haven't done any Debian work in quite some time, and need to get back to it)
[13:33] persia: yeah, being in touch with one or more DDs helps a lot, but in cases where this is not possible, my experience says teams are quicker to review stuff, and overall quality is way better
[13:33] DktrKranz: I generally prefer collaborative work myself (which is why I tend to be more active in Ubuntu than Debian).
[13:33] maybe we should try that way, as a suggestion
[13:33] Makes sense. Feel like updating the wiki page on New Packages?
[13:36] persia: I will. I'll mention some Debian teams which fit the most REVU packages (perl, python, GNOME, KDE, and some others)
[13:40] persia: DktrKranz: whether team-maintained or individually-maintained, packages need to go through the NEW queue; I don't see any difference there
[13:41] freeflying: Nice thing about teams is that one can often get extended peer review prior to NEW, which can speed NEW.
[13:41] Also, team-maintained packages tend to get better attention for transitions, etc.
[13:42] Of course, having a standard sponsor (or being a DD) obviates some of that :)
[13:42] And this is drifting fairly far off-topic :)
[13:51] porthose: I see you've put some work into bug 389654, do you feel like finishing it? I can offer you some help if you need it.
[13:51] Launchpad bug 389654 in lyricue "Please upgrade lyricue" [Wishlist,Confirmed] https://launchpad.net/bugs/389654
[13:53] persia: btw, do we still have an i18n channel?
[13:54] freeflying: I've never been involved in an i18n channel, so I can't say with any authority. Sorry.
[13:54] persia: thx dude
[13:55] randomaction, hehe actually I forgot about it, please feel free :)
=== korn_ is now known as c_korn
[14:21] freeflying: well, the Debian NEW queue is shorter these days than what we historically remember
[14:22] DktrKranz: yes, I've experienced that; before it might need 3 weeks, recently maybe 3 days
[14:25] things will even improve, stay tuned :)
[14:26] DktrKranz: more developments?
[14:26] DktrKranz for ftp-assistant?
[14:27] help is always appreciated
[14:28] things move quicker because ftpmaster+ftpteam are 8 members
[14:28] but more won't harm
[14:28] just wondered what "things will even improve" meant :)
[14:30] more features dak side
[14:33] I fixed a little bug in lintian autorejects, now it works better (with more REJECTs), for your happiness :)
[14:36] zlib being rejected for embedded zlib? :)
[14:36] that's lintian's fault :)
[14:36] heh heh
[14:37] fix was that packages weren't checked properly because the orig tarball was not available for unpack
[14:37] s/fix/bug/
[15:03] Hello, how can I clean up pbuilder's aptcache of only the *OLD* packages?
[15:46] AnAnt: try logging in with --save-after-login and running `apt-get autoclean`.
[15:51] AFAIK pbuilder doesn't store the cache in the chroot
[15:51] maybe it bind-mounts it though
[15:53] but if not then you can pass --aptcache to --login and then it should work as persia says
[15:54] oh, he left already...
[15:55] * persia has used pbuilder about twice over a larger number of years, and defers to anyone with actual experience with the tool
[15:55] When I get around to setting up LVM again then I will go back to using sbuild
[15:56] I find it friendlier
[15:58] You find sbuild friendlier?
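[editor's note] The [15:46]/[15:53] suggestions for AnAnt combine into one sketch. This assumes a stock pbuilder layout (base tarball and apt cache under /var/cache/pbuilder) and requires root; it is a sketch of the advice given above, not something AnAnt confirmed working:

```shell
# Enter the chroot, keep any changes on exit, and point apt at pbuilder's
# shared package cache (both options come from the discussion above):
sudo pbuilder login --save-after-login --aptcache /var/cache/pbuilder/aptcache/

# ...then, inside the chroot, remove only .debs that can no longer be
# downloaded (i.e. the *OLD* superseded packages, keeping current ones):
apt-get autoclean
exit
```

A later message in this log also points out `pbuilder --autocleanaptcache`, which prunes the cache without an interactive login.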
[15:58] yep
[15:58] mainly with the build products it generates
[15:59] For the first few cycles I used sbuild, I only had 20G under LVM, and the rest done differently.
[15:59] I know some people who use LVM on an external drive for sbuild, and non-LVM internally.
[15:59] the initial cost to set up the apt caches is not insignificant though
[15:59] I have a spare drive that I could use for it and my Windows partition :)
[15:59] Laney: pbuilder copies the cache by default, but you can make it use hard links.
[16:00] copies where?
[16:00] The initial cost to set up the apt caches? How do you mean?
[16:00] Laney: from /var/cache/pbuilder/aptcache?
[16:00] persia: sbuild doesn't do any caching (or is that wrong?)
[16:00] Laney: anyway there's pbuilder --autocleanaptcache
[16:01] sbuild doesn't do any caching, from what i've heard
[16:01] tidy
[16:01] I only use it with pbuilder-dist
[16:01] pbuilder-dist probably has support for that as well
[16:01] sbuild can do caching, if one populates the apt cache for the source.
[16:01] i don't usually run --autocleanaptcache for my ubuntu pbuilders
[16:01] But most people seem to use some sort of proxy configuration or local mirror with sbuild.
[16:02] that's right
[16:02] I set up apt-cacher-ng or whatever it's called
[16:02] i only use it for debian, because i only have one chroot for it (sid).
[16:02] if i use --autocleanaptcache, it's going to clean out the debs for my older ubuntu versions.
[16:03] Does pbuilder have an interface that lets you set up a cron job like http://pastebin.ubuntu.com/356116/ ?
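[editor's note] The "proxy configuration" route mentioned at [16:01]–[16:02] is usually apt-cacher-ng plus a single apt.conf line inside the chroot. A minimal sketch, assuming apt-cacher-ng runs on localhost with its default port 3142 (both the host and port here are assumptions, not taken from the log):

```shell
# Point apt (inside the sbuild/pbuilder chroot, or via a hook) at the
# local apt-cacher-ng instance, so repeated builds hit the cache:
echo 'Acquire::http::Proxy "http://localhost:3142";' \
    | sudo tee /etc/apt/apt.conf.d/01proxy
```

With this in place, neither sbuild nor pbuilder needs its own .deb cache; the proxy deduplicates downloads across all chroots.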
[16:03] (note that one wouldn't use "clean" if one wanted to cache in the chroot)
[16:03] pbuilder --update
[16:04] * hyperair generally uses cowbuilder these days
[16:05] it takes up more disk space, but at least i no longer have to have my comp hang for a few minutes while pbuilder extracts the tgz
[16:05] i had some problems with hardlinks and different filesystems when I used cowbuilder
[16:05] use a proper filesystem that supports hardlinks then >_>
[16:05] didn't investigate; the <1s that pbuilder takes to untar doesn't bother me
[16:06] huh?! <1s?!
[16:06] what kind of hard disk are you using?
[16:07] a tasty raptor
[16:07] 275M /home/pbuilder/base-lucid-amd64.cow/
[16:07] that's writing out 275M in less than one second.
[16:07] i refuse to believe you can hit that kind of speed with anything short of a solid-state
[16:08] * Laney shrugs
[16:08] It's caching.
[16:08] there's only so much you can cache >_>
[16:08] The disk is probably still writing, but the filesystem is available to userspace.
[16:09] * persia is currently caching about 670M
[16:09] caching reads or caching writes?
[16:09] * hyperair gets a laggy mouse for a good 5 minutes when using pbuilder
=== Lutin_ is now known as Lutin
=== \vish is now known as vish
[16:15] hyperair: Sorry, cached in general. I only have about 300M in dirty pages loaded (which should write out at some point, but I set the timeout fairly high to avoid waking the drive unnecessarily)
[16:15] hmm
[16:16] i think i should set my timeout higher too
[16:16] there's a pm-utils-power-save or something in lucid right?
[16:16] that does this kind of magic, doesn't it?
[16:16] * persia was fiddling with powertop but probably
[16:16] how do you check the dirty pages anyway?
[16:17] the amount of dirty pages i mean
[16:17] But the point was that 270M wasn't unthinkable with 2G+ RAM available.
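[editor's note] hyperair's question about checking dirty pages goes unanswered in the log. On Linux the figure comes from /proc/meminfo, and the "timeout" persia mentions corresponds to the vm.dirty_* writeback knobs; a quick way to inspect both:

```shell
# Amount of dirty (not-yet-written-back) page cache, reported in kB:
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Writeback timing knobs, both in hundredths of a second:
cat /proc/sys/vm/dirty_writeback_centisecs   # how often the flusher thread wakes
cat /proc/sys/vm/dirty_expire_centisecs      # how old dirty data may get before it must be flushed
```

Raising those two values is the usual way to "set the timeout fairly high" so a sleeping drive isn't woken for every small write.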
[16:18] hmm true
[16:18] you could even make pbuilder extract onto a tmpfs, come to think of it
[16:19] I guess, but I don't really see the point of that.
[16:19] tmpfs is usually limited to RAM/2, which can be tight for some packages (unless you have lots more RAM than I do)
[16:20] And if there is RAM available, the system should do cache-on-write, so you shouldn't notice too much performance difference
[16:20] (or if you do, something needs tuning)
[16:22] heh right
[16:44] Hi! Anyone up to review qt-shutdown-p? It is a small simple program to shutdown the computer. http://revu.ubuntuwire.com/p/qt-shutdown-p
=== yofel_ is now known as yofel
[18:09] hey all
[18:14] hi sivang
[18:26] Heya gang
[18:28] hoia bddebian :)
[18:29] Heya sebner
[18:32] Hi sebner and bddebian
[18:33] hihu geser :)
[18:33] Heya geser
[18:52] Hi! Anyone up to review qt-shutdown-p? It is a small program to shutdown the computer. It uses qdbus to send a shutdown request to the session manager or to hal, or alternatively uses sudo shutdown -P now. http://revu.ubuntuwire.com/p/qt-shutdown-p
=== RainCT_ is now known as RainCT
=== micahg1 is now known as micahg
[19:29] Could someone please review package ipwatchd - http://revu.ubuntuwire.com/p/ipwatchd - and ipwatchd-gnotify - http://revu.ubuntuwire.com/p/ipwatchd-gnotify
[19:34] when I'm using pbuilder to rebuild a package from universe, can I do a 'one-time' build with enabled universe repos? so it doesn't change the base tgz, but still uses universe?
=== asac_ is now known as asac
[19:41] How can I specify debian/control Build-Depends for a situation where the package requires a version of libfoo-dev (>= 1.2.3) but (<< 2.0)?
[19:43] libfoo-dev (>= 1.2.3), libfoo-dev (<< 2.0)
[19:43] geser: okay, I thought I tried exactly that and had no joy. I'll try it again!
[19:53] geser: yup, that did the trick. Thank you.
[20:58] MOTU please...
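[editor's note] geser's [19:43] answer works because listing the same package twice in a relationship field ANDs the two version constraints. A minimal debian/control sketch for context — `libfoo-dev` and the version numbers are the hypothetical ones from the question, and the rest of the stanza is filler:

```
Source: foo
Build-Depends: debhelper (>= 7),
 libfoo-dev (>= 1.2.3),
 libfoo-dev (<< 2.0)
```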
Package fraqtive needs retry builds due to the -lGL mesa problem: https://launchpad.net/ubuntu/+source/fraqtive/0.4.5-1
[21:00] Doing.
[21:00] kamalmostafa: I didn't forget about the TI stuff, just busy with $WORK right now.
[21:00] ScottK: no problem -- it isn't going anywhere ;-)
[21:01] kamalmostafa: Done (building now)
[21:02] ScottK: I'll hang on to the trees and you just let me know whenever is convenient for you to review them. tnx for the rebuilds.
[21:02] OK. No problem. Thanks for noticing
[21:09] who can approve my ubuntu-dev again?
[21:09] I am going to be hacking on stuff for Maemo
[21:10] that is for ubuntu, as in, allow folks to develop for Maemo on Ubuntu
[21:10] so, can I get my upload rights back?
[21:13] sivang: You need to apply to the MOTU Council.
[21:18] ScottK: how do I do that?
[21:18] ScottK: is there a document to read about it?
[21:18] It's on the Ubuntu wiki. I don't recall exactly where.
[21:20] tsimpson, are you using the patched version of xchat you submitted in bug 507072 ?
[21:20] Launchpad bug 507072 in xchat "Add support for freenode ircd-seven CAP command" [Undecided,New] https://launchpad.net/bugs/507072
=== cyphermo1 is now known as cyphermox
=== emma_ is now known as emma
=== cyphermox_ is now known as cyphermox