[12:02] <raphink> you mean the latest source did not produce any bin ?
[12:03] <raphink> so the bins are from the latest but one source
[12:03] <raphink> ;)
[12:03] <psusi> the source code in the source package for pktsetup is an older version...only has two help lines in usage() that it prints when you give --help... if I run pktsetup --help, it gives more help and reports a newer version
[12:03] <raphink> ?
[12:03] <psusi> no, I mean the bin from the binary package was NOT compiled from this source ;)
[12:03] <raphink> yes
[12:03] <raphink> but the bin could have been built from an older version and the latest src could have not produced any bin
[12:03] <raphink> this doesn't seem to be the case though
[12:03] <psusi> no... the bin is built from a newer version
[12:04] <raphink> ok
[12:04] <raphink> that's weird
[12:04] <psusi> yea
[12:04] <raphink> what does p.u.c say about it?
[12:04] <psusi> p.u.c?
[12:04] <raphink> packages.ubuntu.com
[12:05] <psusi> wait...
[12:05] <psusi> ohh, there's some patches in debian... they must not be applied
[12:05] <psusi> they aren't dpatch though... hand rolled, looks like
[12:05] <psusi> hrm...
[12:07] <selinium> If you wanted to change the colour of a single word in a paragraph, what tag should I use? acronym? or is there a more suitable one?
[12:08] <selinium> Duh, sorry guys, wrong channel
[12:08] <psusi> ok... THAT looks like the right source ;)
[12:09] <sistpoty> gn8 everyon
[12:09] <sistpoty> +e
[12:09] <raphink> \sh: debdiff uploaded
[12:10] <\sh> raphink: number?
[12:10] <seth> selinium, <span> probably
[12:10] <raphink> #6558
[12:10] <selinium> cheers seth! :) lol
[12:11] <raphink> \sh: right?
[12:12] <\sh> raphink: fetching package and debdiff now :)
[12:12] <raphink> ok
[12:14] <\sh> raphink: it complains again about the changelog...debuild fails
[12:15] <raphink> fails?
[12:15] <raphink> well I know about the changelog stuff, but it doesn't make debuild fail here
[12:15] <\sh> parsechangelog/debian: error: found change data where expected next heading or eof, at changelog line 105
[12:15] <\sh> dpkg-genchanges: error: syntax error in parsed version of changelog at line 0: empty file
[12:15] <\sh> debuild: fatal error at line 768:
[12:15] <\sh> dpkg-buildpackage failed!
[12:15] <raphink> what makes it fail?
[12:15] <\sh> raphink: the missing version entry
[12:16] <raphink> what if you use debuild directly
[12:16] <raphink> instead of dpkg-buildpackage ?
[12:16] <raphink> actually I tested it with debuild
[12:16] <\sh> raphink: i'm using debuild :)
[12:16] <raphink> and with pbuilder
[12:16] <raphink> and it worked
[12:16] <\sh> raphink: I'm using it on dapper :)
[12:16] <raphink> same here \sh I have dapper here
[12:16] <\sh> raphink: it creates a source changes file yes, but it's not correct :)
[12:17] <raphink> let's see again
[12:17] <lifeless> hub: opensync is stuck in NEW at the moment
[12:17] <lifeless> hub: the gui and plugins are ready to go after that
[12:17] <raphink> it seems to build here
[12:17] <raphink> just launched debuild
[12:18] <raphink> does it fail at the end of the build?
[12:18] <hub> lifeless: okay. there are directly uploaded?
[12:19] <\sh> raphink: try this: debuild -S -v1:0.16.7.2-1ubuntu2 -ksh@sourcecode.de
[12:19] <hub> lifeless: s/there/they/
[12:19] <lifeless> EPARSE
[12:19] <ajmitch> hub: to debian, yes
[12:19] <\sh> raphink: I'm not building in chroot...I'm using this debuild call and pbuild it
[12:19] <hub> ajmitch: ah you mean they do to debian and we resync....
[12:19] <hub> ajmitch: ok cool
[12:19] <raphink> wait it's building right now
[12:19] <raphink> the build is finishing
[12:21] <raphink> hmm right I get the same error as you with your command \sh
[12:22] <raphink> but I don't get it when I run `debuild' simply
[12:22] <raphink> \sh: I know why you get this error
[12:22] <raphink> ;)
[12:22] <raphink> there's no such version as 1:0.16.7.2-1ubuntu2
[12:23] <raphink> ;)
[12:23] <raphink> in this changelog at least
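What debuild's -v flag actually does, and a quick way to check which versions a changelog knows about, can be sketched like this. The mini-changelog below is fabricated for illustration (maintainer name and email are placeholders); only the two version strings come from the conversation:

```shell
# The -v<version> passed to debuild is forwarded to dpkg-genchanges, which
# includes every changelog entry *newer than* <version> in the .changes
# file -- so the named version must actually exist in debian/changelog,
# which is why 1:0.16.7.2-1ubuntu2 made the build blow up here.
mkdir -p /tmp/chlogdemo
cat > /tmp/chlogdemo/changelog <<'EOF'
enlightenment (1:0.16.7.2-3ubuntu1) dapper; urgency=low

  * Merge with Debian unstable.

 -- A. Maintainer <someone@example.com>  Mon,  9 Jan 2006 12:00:00 +0100

enlightenment (1:0.16.7.2-2) unstable; urgency=low

  * Placeholder entry.

 -- A. Maintainer <someone@example.com>  Sun,  8 Jan 2006 12:00:00 +0100
EOF
# List the version string from each entry heading:
sed -n 's/^[a-z0-9.+-]* (\(.*\)).*/\1/p' /tmp/chlogdemo/changelog
```

Any version you hand to -v has to appear in that list, or dpkg-genchanges bails out exactly as quoted above.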
[12:23] <\sh> raphink: didn't you merge the changelog?
[12:23] <raphink> my version is 1:0.16.7.2-3ubuntu1
[12:23] <raphink> \sh: I merged the changelog with the latest available source on ubuntu
[12:23] <raphink> which was 1:0.16.7.2-2
[12:23] <raphink> since it had been synced
[12:24] <raphink> and thus didn't contain any ubuntu changelog ;)
[12:25] <raphink> see http://archive.ubuntu.com/ubuntu/pool/universe/e/enlightenment/ \sh
[12:25] <\sh> Previous Ubuntu Version: 1:0.16.7.2-1ubuntu2
[12:25] <\sh> Current Debian Version:  1:0.16.7.2-2
[12:25] <raphink> nope
[12:25] <\sh> argl...
[12:25] <\sh> the report is old
[12:25] <raphink> previous source in ubuntu is 1:0.16.7.2-2
[12:25] <raphink> and current debian is -3
[12:25] <\sh> well..running version is 1ubuntu2
[12:25] <raphink> which is the one I'm merging
[12:25] <raphink> -2 never produced bins in Ubuntu
[12:25] <raphink> yes \sh
[12:26] <raphink> but then crimsun synced -2
[12:26] <raphink> but it never built
[12:26] <raphink> because it lacked the xbitmaps deps
[12:26] <raphink> so now I'm basing my merge on -2, merging -3
[12:26] <raphink> or should I base it on -1ubuntu2 although it's an old source?
[12:26] <lucas> gnight
[12:27] <\sh> raphink: forget it...I'll adjust it, but it complains about line 105...
[12:27] <raphink> \sh: i'm ok to base it on -1ubuntu2 if you prefer it that way
[12:27] <\sh> raphink: and line 105 is the missing entry :)
[12:27] <raphink> line 105 must be the missing entry
[12:27] <\sh> raphink: no...since -2 was synced
[12:27] <raphink> yes
[12:27] <raphink> it doesn't prevent it from building though
[12:28] <\sh> raphink: so we have all the changelogs we need..
[12:28] <raphink> merges should not be based on synced versions?
[12:28] <raphink> hmm?
[12:28] <\sh> raphink: nono..it's ok...:)
[12:28] <raphink> "raphink: so we have all the changelogs we need.."  <--- in which version?
[12:28] <raphink> \sh: no I'd like to understand really :)
[12:29] <\sh> raphink: from the last ubuntu version to the actual version...-2 synced and -3ubuntu1 will be merged
[12:29] <raphink> ok
[12:29] <raphink> I should just have ignored -2, right?
[12:29] <\sh> raphink: no you can't :)
[12:29] <raphink> then what?
[12:29] <raphink> :s
[12:30] <raphink> use the changelog from -1ubuntu2, add -2 and -3 changelogs and dch ?
[12:30] <\sh> raphink: nothing to be done for debuild...debuild -S -ksh@sourcecode.de is enough for here..
[12:30] <\sh> but I wonder if we shouldn't fix up the changelog at all..
[12:31] <\sh> ajmitch: what do you think?
[12:31] <raphink> ajmitch said it was ok
[12:31] <raphink> that it would maybe not even be considered in Debian
[12:31] <raphink> since so small a bug
[12:31] <\sh> ok..i'll build it now :)
[12:31] <raphink> just as I'm not sure whether to report to Debian that I bumped Standards-Version to 3.6.2
[12:32] <raphink> and reporting the xbitmaps stuff would be nonsense since they don't have this problem in Debian
[12:32] <\sh> updating the standards version is normally a job for the maintainer
[12:32] <raphink> yes
[12:32] <raphink> I did it because it was an old one really
[12:32] <raphink> and that's not a big diff ;)
[12:32] <\sh> raphink: well..they don't have it now :) wait until modular xorg hits debian :)
[12:33] <raphink> yes exactly
[12:33] <raphink> as long as they don't have modular xorg
[12:33] <raphink> this patch will have to be applied on all merges of enlightenment
[12:33] <\sh> well..you can file a bug with a debdiff attached and tell them that it is for the modular xorg...but I wouldn't do it, they know exactly where to look :)
[12:34] <raphink> yes
[12:34] <raphink> ;)
[12:34] <raphink> and filing bugs for future work is not always appreciated imo
[12:35] <raphink> does it build well now?
[12:35] <\sh> yes no probs
[12:35] <raphink> ok :)
[12:36] <\sh> uploaded
[12:36] <raphink> :D
[12:36] <raphink> time for me to get uploaded to my bed
[12:37] <raphink> ;)
[12:37] <\sh> rebooting with new kernel...brb
[12:37] <raphink> hehe ok
[12:39] <psusi> why does this source package have man pages in section 8?  there is no section 8?  and when it is installed, it ends up in section 1
[12:42] <\sh> re
[12:58] <\sh> k...going to bed
[01:10] <psusi> by gosh I'm making all kinds of patches for udftools... by the time I'm done with this thing it will be proper plug and play
[01:45] <raphink> ajmitch: shall I email some guys about their package when they haven't uploaded a new version for a month and a half ?
[01:45] <raphink> or even more ;)
[01:45] <tseng> no?
[01:45] <raphink> :p
[01:45] <tseng> emailing people directly is a good way to annoy people
[01:46] <tseng> thats what bug trackers are for
[01:46] <raphink> well then we need to use a bug tracker on REVU
[01:46] <ajmitch> raphink: new version on revu?
[01:46] <raphink> lots of packages are lying there with comments
[01:46] <raphink> and no modif has been done on them for months
[01:46] <raphink> e.g. http://revu.tauware.de/details.py?upid=816
[01:47] <raphink> or http://revu.tauware.de/details.py?upid=866
[01:47] <raphink> or http://revu.tauware.de/details.py?upid=797
[01:47] <raphink> just very old packages
[01:47] <raphink> whose packagers haven't worked on them for ages
[01:47] <raphink> and that are preventing (imo) reviewers from working efficiently
[01:48] <raphink> hmm wait a min
[01:48] <raphink> chmsee has been archived actually
[01:50] <raphink> should very old packages be archived ajmitch ?
[01:50] <ajmitch> why should they?
[01:50] <ajmitch> are they any less valid?
[01:50] <raphink> no
[01:51] <raphink> but it seems their packagers have left with no interest in having them in dapper anymore
[01:51] <raphink> when it's been almost 3 months
[01:51] <ajmitch> that may be the case
[01:51] <raphink> that's why I asked if maybe these guys could be emailed
[01:51] <raphink> to get to know if they still want to work on these packages
[01:52] <raphink> some people are waiting for up-to-date packages to be reviewed
[01:52] <raphink> and old packages staying around with no modif doesn't help
[01:53] <ajmitch>  http://revu.tauware.de/details.py?upid=866
[01:53] <ajmitch> should be archived/nuked
[01:53] <ajmitch> because a newer version is in dapper
[01:53] <raphink> ok
[01:53] <raphink> I'll archive it
[01:54] <ajmitch> I was about to, but if you want, go ahead
[01:54] <raphink> already done ;)
[01:55] <raphink> http://revu.tauware.de/details.py?upid=738 hasn't been worked on in 3 months
[01:56] <ajmitch> so?
[01:56] <raphink> so nothing ...
[01:56] <ajmitch> hub is around here at least every couple of days
[01:56] <raphink> I guess
[01:56] <raphink> that's right
[01:57] <raphink> maybe we could have an automatic tool to check whether the version in dapper has made packages in REVU outdated
[01:57] <ajmitch> sure, that'd only take a couple of minutes to do
[01:57] <raphink> ok :)
[01:58] <raphink> this way, packages who are obsolete could be nuked
[01:58] <ajmitch> maybe a few more, anyway
[01:58] <raphink> s/who/which/
[01:58] <ajmitch> *most* packages are new packages anyway
[01:58] <raphink> yes most
[01:59] <hub> raphink: because there is an argument about a line in the license, and I have forgotten to write upstream for clarification
[02:00] <raphink> oh ok
[02:00] <hub> raphink: once I become MOTU, I'll give a hand...
[02:00] <raphink> :)
[02:00] <raphink> same here hub
[02:00] <raphink> I wish to do more once I'm a MOTU :)
[02:02] <hub> raphink: I'll do what I can
[02:02] <raphink> sure :)
[02:03] <raphink> hub: I'm a bit frustrated about REVU right now, cause it's not easy to know which packages are to be reviewed and which ones are not
[02:03] <raphink> without checking all pages
[02:03] <raphink> :s
[02:03] <raphink> but I guess I should go to sleep and stop complaining ;)
[02:03] <psusi> is there a way to ask cut for only the LAST field on the line?
[02:04] <ajmitch> use awk :)
[02:05] <psusi> heh
[02:05] <hub> raphink: yeah it is a bit late on your side of the pond
[02:05] <raphink> yes
[02:06] <hub> 2AM?
[02:06] <ajmitch> awk '{print $NF}'
[02:06] <ajmitch> will tend to print the last field
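The last-field tricks being traded here, spelled out: awk's $NF, plus a common workaround since cut itself has no "last field" syntax:

```shell
# awk counts whitespace-separated fields per line; $NF is the last one.
echo "one two three" | awk '{print $NF}'              # prints: three

# cut cannot address the last field directly, but reversing the line
# (util-linux `rev`) turns "last" into "first":
echo "one two three" | rev | cut -d' ' -f1 | rev      # prints: three
```

The awk form also copes with lines whose field counts vary, which fixed-index cut cannot.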
[02:06] <psusi> hrm...
[02:06] <raphink> yes hub
[02:07] <ajmitch> psusi: what do you need it for?
[02:07] <raphink> k well I'll go
[02:07] <raphink> later
[02:07] <psusi> I have modified pktsetup to auto-assign the first available virtual device to the given cdrom device, and print its dev number... I'm now writing a hal fdi policy callout that will run pktsetup and grab the device number and store it in a hal property
[02:13] <psusi> that way all you have to do is install the udftools package, and hal will configure the pktcdvd devices for each cdrw drive it detects
[02:59] <Mez> anyone interested in giving me a crash course on how to write man pages?
[03:01] <LaserJock> Mez: I did one using Docbook and converting it
[03:01] <seth> http://jr.falleri.free.fr/files/kubuntu/sample.1.docbook
[03:01] <seth> then in the build rule, docbook2x-man debian/blah.1.docbook
[03:02] <crimsun> Mez: take the sample that dh_make gives you
[03:02] <seth> and in the clean rule, rm -f blah.1
[03:02] <crimsun> it's a pretty good starter.
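The dh_make sample crimsun mentions is plain troff using the man macro package; a minimal skeleton looks roughly like this (program name, date, and options are all placeholders):

```groff
.TH BLAH 1 "January 2006" "blah 0.1" "User Commands"
.SH NAME
blah \- one-line description of what blah does
.SH SYNOPSIS
.B blah
.RI [ options ] " file ..."
.SH DESCRIPTION
.B blah
is a placeholder program; describe its behaviour here.
.SH OPTIONS
.TP
.B \-v
Print the version and exit.
.SH SEE ALSO
.BR man (7)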
[03:02] <ajmitch> or if you're really lazy
[03:02] <ajmitch> use help2man
[03:02] <Mez> ajmitch, it doesnt have a --help or --version
[03:02] <Mez> though it does have man pages
[03:03] <Mez> which I didnt spot :D
[03:03] <ajmitch> the program sucks then :)
[03:03] <LaserJock> yeahhh!!!! my first package just hit dapper-changes!
[03:03] <ajmitch> LaserJock: well done
[03:03] <Mez> seeing as it has man pages and installs them automatically with make install - I don't need to run anything else for it, do I?
[03:07] <Riddell> no
[03:08] <Burgundavia> Riddell, do you feel dirty after touching a GNOME package? ;)
[03:09] <Riddell> I always believe in being open to new cultures and experiences, even Gnome
[03:09] <Burgundavia> sorry, had to ask that
[03:10] <Riddell> Burgundavia: now go and revu the KDE packages!
[03:11] <Burgundavia> Riddell, not a MOTU
[03:11] <Riddell> write some KDE docs!
[03:11] <LaserJock> lol
[03:12] <Burgundavia> might be soon
[03:12] <Riddell> in fact, to get you learning about new cultures and experiences, please rewrite the whole of KDE docs and put them under a Debian-happy licence, that would make things much easier for us
[03:13] <spacey_ki> anyone have experience with figuring out the ipp URI from a network printer? :S
[03:13] <Burgundavia> Riddell, the doc team is unlikely to move from GFDL/cc-by-sa 2.0. We (including Mako) made that decision at Mataro
[03:15] <hub> Burgundavia: to be incompatible with debian? :-/
[03:16] <Burgundavia> hub, no
[03:16] <Burgundavia> hub, to be more compatible with the vast majority of docs out there
[03:17] <hub> Burgundavia: I was sort of kidding.
[03:17] <hub> Burgundavia: I have nothing against this license
[03:17] <Burgundavia> From a documentation perspective, GNOME/KDE are the more important upstreams
[03:18] <crimsun> I don't agree with the decision, but I didn't take part, and I don't care to debate it.
[03:18] <hub> I was not about to debate either
[03:18] <hub> maybe I should just shush
[03:19] <jsgotangco> :D
[03:19] <Burgundavia> I get ask that question a lot and since I was there for the decision, I usually tell people what happened
[03:19] <Burgundavia> s/ask/asked
[04:07] <thierry_> any cdbs guru around?
[04:08] <ajmitch> define 'guru'
[04:08] <ajmitch> and don't ask to ask
[04:09] <thierry_> I need to pass the option --enable-shared to the configure script but I'm using cdbs, is there any way to do this or do I have to switch to debhelper?
[04:10] <thierry_> ajmitch : what do you think?
[04:10] <ajmitch> of course you can
[04:11] <thierry_> how?
[04:12] <ajmitch> DEB_CONFIGURE_EXTRA_FLAGS = --enable-shared
[04:12] <ajmitch> is the most common way
[04:12] <ajmitch> assuming you're using autotools.mk
[04:13] <thierry_> ajmitch : ok thanks : last thing, I need to add the changes made in a file by the upstream author in the source of the library I package, how do I do that?
[04:13] <ajmitch> and put that line below the include lines
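Putting ajmitch's advice together, a minimal cdbs debian/rules might look like this (the --enable-shared flag is the one from the question; everything else is the boilerplate the includes provide):

```make
#!/usr/bin/make -f
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/rules/simple-patchsys.mk
include /usr/share/cdbs/1/class/autotools.mk

# autotools.mk appends this to its ./configure invocation;
# per ajmitch, the assignment goes below the include lines.
DEB_CONFIGURE_EXTRA_FLAGS := --enable-shared
```

simple-patchsys.mk is optional and only relevant if, as discussed just below, a patch system is wanted for carrying the upstream change.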
[04:13] <thierry_> I add the change and that's all?
[04:13] <ajmitch> yeah, unless you feel like using a patch system
[04:14] <thierry_> ajmitch : and by below you mean under? (my english is not so good, I speak mainly french)
[04:14] <ajmitch> yes
[04:15] <ajmitch> for example http://lists.debian.org/debian-devel/2005/03/msg00083.html
[04:16] <thierry_> k thanks
[04:18] <thierry_> ajmitch : do I also need to change the file in the orig.tar.gz archive?
[04:18] <ajmitch> no
[04:18] <ajmitch> don't do that
[04:18] <thierry_> k
[04:26] <thierry_> ajmitch : would you mind checking my package? I already had an advocate before, but we found this shared library problem that I just solved so I think he's alright
[04:26] <ajmitch> I'm very surprised it got advocated
[04:26] <ajmitch> since it was an empty package
[04:27] <thierry_> it was zakame
[04:28] <ajmitch> I'll have to talk to him :)
[04:28] <thierry_> don't poke him too bad :)
[04:28] <thierry_> too hard*
[04:29] <thierry_> anyway, do you have time to check my package?
[04:29] <ajmitch> building it on tiber now
[04:29] <thierry_> :D thanks!
[04:30] <thierry_> it's a dependency for another package I want to do for dapper which is in the universe candidates
[04:41] <jsgotangco> wow Riddell thats a lot of CDs
[04:45] <psusi> is there a gui tool that shows all the configurable settings in all the packages in the system?  rather than having to dpkg-reconfigure?
[04:54] <thierry_> ajmitch : is it looking good
[04:54] <ajmitch> -rw-r--r-- root/root    776012 2006-01-08 22:47:30 ./usr/lib/libfxscintilla.so.17.0.0
[04:54] <ajmitch> lrwxrwxrwx root/root         0 2006-01-08 22:47:27 ./usr/lib/libfxscintilla.so.17 -> libfxscintilla.so.17.0.0
[04:57] <thierry_> so? is it ok?
[04:58] <thierry_> I knew I had my .so files... but is the whole thing alright to get an advocate? :)
[04:59] <ajmitch> oh, I haven't done an indepth look :)
[05:00] <thierry_> oh!... then I'll go sleep :) but if you want to leave a comment when you have the time to finish this, I would be grateful
[05:02] <ajmitch> ok
[05:02] <thierry_> oh crap, it's the only thing I'm really not sure about ;) anyway good night
[05:02] <ajmitch> night
[05:15] <ajmitch> why do people persist in using the mailing list to report bugs?
[05:15] <ajmitch> or worse, the forums
[05:16] <desrt> for a lot of people it's hard to tell the difference between a bug report and a support request
[05:16] <Hobbsee> scared of bugzilla/malone?
[05:16] <desrt> most computer users are used to "it's not working" being their own fault
[05:17] <desrt> which makes a forum an appropriate place to seek help
[05:17] <LaserJock> desrt: I agree
[05:17] <LaserJock> I am still that way
[05:18] <psusi> anyone else notice the firefox in dapper is HORRIBLY slow at redrawing its window when previewing edits to the ubuntu wiki?  tab to a terminal and back, or worse, drag the terminal window over the dapper window, and holy moley it lags!
[05:18] <LaserJock> I at least like to confirm that it's not me before I do a bug report
[05:18] <desrt> psusi; sounds like you're not using accelerated X drivers
[05:18] <desrt> psusi; fglrx is completely fixed now, fwiw
[05:18] <psusi> desrt, nope... it's accelerated... not using fglrx, but have dri and mesa working
[05:19] <psusi> firefox is nice and fast otherwise, it is only when redrawing uncovered areas on the wiki preview page
[05:19] <desrt> psusi; that has very little to do with 2D accel
[05:19] <psusi> scrolling the preview page is plenty fast... doesn't even make my cpu bump up speed
[05:19] <psusi> desrt, dri has everything to do with 2d accel
[05:19] <desrt> no.  it seriously does not :)
[05:20] <psusi> ummm.... I'm pretty darn sure it does
[05:20] <desrt> dri is when libGL connects directly to the kernel
[05:20] <desrt> (ie: bypasses the X server hence "direct")
[05:20] <psusi> it's also for 2d apps to do the same, is it not?
[05:20] <desrt> no.
[05:20] <desrt> 2d apps always render through X
[05:20] <psusi> well it sure as hell makes 2d go faster when you turn on dri ;)
[05:21] <desrt> possibly because you're also enabling XRENDER acceleration at the same time
[05:23] <psusi> well, got any 2d speed tests to check it?  scrolling works nice and smooth... might be that firefox scrolls sanely, but whenever it has part of the window uncovered, it repaints the ENTIRE page, and this is a rather complex page... so it may be making a crap-ton of calls to the X server
[05:23] <psusi> cause it does seem to be the X server that is actually getting bogged down
[05:23] <desrt> uhm
[05:24] <desrt> there is something
[05:24] <desrt> xperftest or something
[05:24] <desrt> i don't know the exact name
[05:28] <Mez> anyone fancy reviewing
[05:28] <psusi> hrm... yea... I think it is actually the X server that is bogging down... because if I tab to firefox, then immediately tab out again, the tab lags
[05:29] <desrt> try out fglrx and see if the situation improves
[05:29] <psusi> don't want to run proprietary code
[05:29] <desrt> if it does, consider filing a bug against the dri driver
[05:29] <desrt> arf
[05:30] <desrt> intuition knocked again:
[05:30] <psusi> getting rid of fglrx so mesa worked right was a pain
[05:30] <desrt> i remember :)
[05:30] <desrt> but that was your own fault :)
[05:30] <psusi> hehe....
[05:30] <desrt> the ubuntu fglrx packages install cleanly
[05:30] <psusi> do they now?  they didn't for a while
[05:30] <desrt> as of today they do
[05:30] <desrt> i said so ^^ up there :p
[05:31] <desrt> dapper is actually working great right now
[05:31] <desrt> i'm pretty impressed with how rapidly it's coming together
[05:32] <desrt> a lot of really nice things have happened in this release
[05:32] <psusi> ok... nevermind... it seems dri broke somewhere ;)
[05:32] <psusi> turn your back for 5 seconds and it breaks... sheehs... heh... ;)
[05:32] <desrt> there's your problem :)
[05:33] <psusi> desrt, I would be really pleased if dmraid could make it into dapper so I can cleanly install it
[05:33] <desrt> file an RFE on bugzilla?
[05:33] <desrt> if it's small, easy, and has a reasonable use it will probably get in
[05:33] <psusi> I filed a bug when I first installed ubuntu breezy preview... it is currently assigned to infinity I believe
[05:34] <psusi> fabione also had a hand in it, but seems to have more important things to work on
[05:34] <psusi> he built the package from source and put it in universe before breezy was released
[05:34] <psusi> I ended up making some initramfs scripts to go with it and wrote a howto on the wiki
[05:35] <psusi> works fine for me and a few others, but the integration has not moved anywhere in months
[05:35] <desrt> hmm
[05:35] <psusi> https://wiki.ubuntu.com/FakeRaidHowto if you are interested
[05:35] <desrt> nothx :)
[05:35] <psusi> ;)
[05:35] <desrt> my ubuntu install is not valuable
[05:36] <desrt> and my home directory is backed up
[05:36] <desrt> which reminds me
[05:36] <psusi> hehe
[05:36] <psusi> backing up will be nicer once I finish this: https://wiki.ubuntu.com/PacketCD
[05:37] <desrt> i can't imagine backing up to cd
[05:37] <desrt> that would be pain
[05:37] <psusi> and why the hell does revu always say access is forbidden when I try to look at the source.changes?
[05:38] <psusi> desrt, it would? drag and drop your documents is a pain?
[05:39] <psusi> full system backups are nice, but your average user just likes to drag and drop a copy of their important files to a disc
[05:39] <lifeless> I find that 'average user' argument really annoying
[05:40] <lifeless> I've known lots of users, and my assessment of the average is they want to not do backups manually, just to set it up once, and then have a button to press that does it and /tells/ them it did it
[05:41] <desrt> ya
[05:41] <psusi> some people like that... some people would rather have a disc they can hold in their hot little hands and know their data is on it and they can open it in another computer ;)
[05:41] <desrt> if i were an average user i'd have to agree with lifeless :p
[05:41] <desrt> i like the idea of an external firewire drive
[05:42] <desrt> it's large and i can unplug it
[05:42] <psusi> for full system backups?  yea... that's great
[05:42] <desrt> unplugging is important
[05:42] <psusi> aye
[05:42] <desrt> it means that if my computer explodes in the worst way possible it still can't take the backup with it
[05:43] <psusi> well, as long as the backup is off site... the computer could burn the building down and the firewire drive with it ;)
[05:43] <psusi> rsync snapshot backups look really nice
[05:43] <lifeless> psusi: they get a disk
[05:43] <lifeless> tand they can
[05:43] <lifeless> *and they can*
[05:43] <psusi> if you have a removable drive with more space than your main disk
[05:43] <lifeless> psusi: no, you are making assumptions that were not present in what I said
[05:44] <psusi> or it can be on a remote server... handy for those peskey fires
[05:44] <psusi> huh?
[05:44] <psusi> I was still talking about rsync snapshot backups... I have no idea what you said ;)
[05:44] <desrt> psusi; ya.  rsync is what i use
[05:44] <desrt> it's bloody quick
[05:44] <desrt> it takes about 10 seconds if nothing has changed
[05:44] <desrt> (and obviously longer if things have)
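The snapshot-style rsync backups being praised here hinge on hard links: unchanged files are linked, not copied, so keeping many snapshots costs little space. A minimal sketch of the idea using cp -al under throwaway /tmp paths (rsync's --link-dest option achieves the same effect while also syncing changes):

```shell
# Throwaway demo tree.
rm -rf /tmp/snapdemo
mkdir -p /tmp/snapdemo/src /tmp/snapdemo/backups
echo "hello" > /tmp/snapdemo/src/file.txt

# Snapshot 0: a full copy.
cp -a /tmp/snapdemo/src /tmp/snapdemo/backups/snap.0

# Snapshot 1: hard-link everything from snapshot 0 instead of copying it.
# rsync does this in one step while also picking up changed files, e.g.:
#   rsync -a --delete --link-dest=../snap.0 src/ backups/snap.1/
cp -al /tmp/snapdemo/backups/snap.0 /tmp/snapdemo/backups/snap.1

# The file appears in both snapshots but is stored on disk only once,
# which shows up as a hard-link count of 2:
stat -c %h /tmp/snapdemo/backups/snap.1/file.txt     # prints: 2
```

When a file changes before the next snapshot, rsync replaces the link with a fresh copy, so each snapshot remains an independent, browsable view of the tree.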
[05:45] <psusi> desrt, I'd still like to have a chattr attribute or something for copy on write... then you could do rsync snapshot type backups on one disk, without wasting half the space
[05:45] <desrt> psusi; but then you lose the advantage of the backup in the case your computer goes bonkers
[05:45] <desrt> i want my backup offline, k thx
[05:45] <psusi> desrt, yes... but you get the history without having to buy more hardware ;)
[05:46] <desrt> i also want to protect against media failure
[05:46] <desrt> and that implies redundancy
[05:46] <psusi> ideally, I'd like to have copy on write on all the time... then make periodic backups to removable media
[05:46] <lifeless> psusi: you talked about drag and drop as what the average user *wants*
[05:46] <lifeless> psusi: I dont think thats accurate
[05:47] <desrt> gmailfs is an amusing concept
[05:47] <psusi> lifeless, why not?  from what I have seen, the _average_ person ( not the average linux user ) basically (only) understands copying things to removable media
[05:47] <psusi> Yagisan, hey... welcome back
[05:48] <Yagisan> desrt: I have so many accounts, I thought why not. I encrypt my backups anyway ..
[05:48] <desrt> heh
[05:48] <psusi> Yagisan, you ever build my defrag package?  I was playing with it last night... kept giving it different lists of files to store in order hehe...
[05:48] <desrt> that's an amusing concept
[05:48] <desrt> use a cluster of gmail accounts as a large disk
[05:48] <Yagisan> psusi: not for long - I just stepped in, saw my name lit up, and have to leave in a few minutes again
[05:48] <psusi> Yagisan, btw... make sure you use the -p option... it defaults to only use 2 MB of ram to move data with... goes MUCH faster if you give it more
[05:50] <Yagisan> psusi: I have only /boot and / on ext3, the rest are jfs - so I don't have much to defrag. btw on ext2/3 it will *never* completely defrag, due to the way ext2/3 stores files
[05:51] <psusi> Yagisan, well, assuming it is a very large file, then yea... it can't be stored without being broken up to fit around the inode and bitmap blocks
[05:51] <LaserJock> so do you guys use tar when you are backing up?
[05:51] <psusi> it's fun seeing fsck report 0.0% fragmentation though ;)
[05:52] <Yagisan> desrt: I was wondering if I could raid my gmailfs systems, but gmailfs stopped working before I could try - I have 3 active + 150 invites, so it could be a fun thing to try
[05:52] <psusi> LaserJock, I do... but I see uses for rsync snapshots too...
[05:52] <psusi> Yagisan, rofl
[05:52] <Yagisan> LaserJock: I use sbackup in universe - but only because I converted my network to ltsp based, so I only need to admin 1 box :)
[05:53] <LaserJock> I have only a few linux boxes and ~ 4 iMac OSX boxes that I would like to backup but I really don't know what the best way to back them up is
[05:54] <LaserJock> right now it is pretty scattered, I just tar /home and try to have at least two copies on two different machines
[05:54] <lifeless> psusi: I think you are insulting 'average' and substituting 'new'
[05:54] <psusi> lifeless, I don't... I think you are thinking of linux users, which are not average ;)
[05:55] <Yagisan> LaserJock: as you get bigger, bacula is good - even does non-linux systems
[05:55] <psusi> LaserJock, I'd periodically backup the system with tar, and rsync snapshot /home back and forth between machines
[05:56] <lifeless> psusi: no, I'm no
[05:56] <lifeless> *t*
[05:56] <psusi> lifeless, do you know a single average user who actually even makes backups?
[05:56] <Yagisan> I rather have cron do my backup for me - every day at 5:30 it should be done, and every 21 days that needs to be a full backup
[05:56] <Yagisan> psusi: All my clients do
[05:57] <psusi> average windows user who couldn't explain to you the difference between sdram and rambus mind you
[05:57] <lifeless> psusi: just think about this for a few minutes - how many people start using computers and in (say) 5 years still dont understand the idea of 'backup software'
[05:57] <psusi> Yagisan, personal or corporate?
[05:57] <Yagisan> psusi: they are far from technical - it's a hard sell for me to get them as clients considering what I actually do
[05:57] <Yagisan> psusi: mixed, mainly small business though
[05:57] <psusi> Yagisan, incremental backups for 21 days?  with tar?
[05:58] <lifeless> psusi: now, if that amount is < 50%, AND even a small fraction of users that start using computers dont stop using them, then the average user MUST understand backup software, by definition
[05:58] <Yagisan> psusi: it's like 100mb a day on incremental
[05:58] <psusi> lifeless, they might understand it, but I know virtually nobody who backs up their home computer regularly
[05:58] <lifeless> psusi: the difference between sdram and rambus is irrelevant to everyone except when they are upgrading, and then they will ask around. backups however matter all the time, so I dont see why you bring that up
[05:58] <psusi> Yagisan, then you're dealing with their IT guys aren't you?  not average users
[05:59] <psusi> Yagisan, incremental backups with tar for 21 days is insane... if you lose one of those daily backups in the middle, you can't restore to current
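A sketch of how incremental backups like Yagisan's are typically done with GNU tar's --listed-incremental (-g) option, under throwaway /tmp paths; it also illustrates psusi's objection, since a restore must replay the full archive plus every incremental in order:

```shell
rm -rf /tmp/tardemo
mkdir -p /tmp/tardemo/data
echo one > /tmp/tardemo/data/a.txt

# Level 0 (full) backup; snap.file records what has been backed up so far.
tar -czf /tmp/tardemo/full.tar.gz -g /tmp/tardemo/snap.file -C /tmp/tardemo data

# Something changes between runs...
echo two > /tmp/tardemo/data/b.txt

# Incremental backup: only b.txt is new, so only it gets archived.
tar -czf /tmp/tardemo/incr.tar.gz -g /tmp/tardemo/snap.file -C /tmp/tardemo data

# Restoring to the current state means extracting full.tar.gz and then
# each incremental in sequence -- lose one in the middle of a 21-day
# chain and everything after the gap is unrecoverable.
tar -tzf /tmp/tardemo/incr.tar.gz
```

The listing of the incremental archive shows only the directory entry and b.txt; a.txt lives solely in the full backup, which is exactly why the whole chain matters.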
[05:59] <ajmitch> psusi: small businesses like that don't necessarily have an IT guy
[06:00] <Yagisan> psusi: I am the it guy
[06:00] <Yagisan> psusi: It is also backed up to 3 locations
[06:00] <lifeless> psusi: short story, there are several guys here disagreeing with your defn of average.
[06:00] <lifeless> psusi:  if you want to say 'new users that dont have a clue - fine, just dont claim they are 'average''.
[06:00] <psusi> yea... your average business person understands they need an IT guy to do backups... now how many of them backup their home computers?
[06:00] <Yagisan> psusi: 1 is off site, 1 is onsite on dvd, 1 is on a network box
[06:01] <Yagisan> psusi: bigger workflow == shorter backup period
[06:01] <psusi> how many of these average business users who understand the need to backup actually do backup their home computers?
[06:02] <Yagisan> psusi: most of them - especially after I tell them what data recovery costs
[06:02] <Yagisan> psusi: Of course, other non-ubuntu systems don't ship with a decent default backup system ...
[06:02] <psusi> I don't know a single person who makes regular backups of their home computer
[06:02] <LaserJock> Yagisan: your doing a lot better than me, I just backup when I'm feeling bored (every couple of months I would guess), and I don't do any incremental ;-)
[06:03] <lifeless> I love the way you are moving the goalposts
[06:03] <psusi> not one
[06:03] <lifeless> you started out talking about 'wanting drag and drop'
[06:03] <lifeless> now you are talking about who actually *does* backup, and these are pretty much unrelated.
[06:03] <psusi> yea.... because it would help users backup their data ;)
[06:04] <psusi> it is quite relevant to the discussion at hand... average users don't backup
[06:04] <ajmitch> psusi: and I do, so your point is?
[06:04] <psusi> ajmitch, my point is that your average home windows user does not backup their system... but they do understand how to copy important files to a disc to put in a safe place, and if doing that were easy, they would be more inclined to do so
[06:05] <psusi> hence https://wiki.ubuntu.com/PacketCD
[06:05] <LaserJock> psusi: maybe I'm showing my ignorance here but can't you do that with Nautilus
[06:05] <ajmitch> https://wiki.ubuntu.com/HomeUserBackup
[06:06] <lifeless> psusi: meh, backup on windows is incredibly easy, there is a wizard to do it for just their own files
[06:06] <lifeless> psusi: ease is -not- the problem!
[06:06] <psusi> LaserJock, sort of...
[06:06] <psusi> LaserJock, nautilus appends new sessions to the disc... can't remove stuff without reformatting
[06:07] <LaserJock> ah, I see
[06:07] <psusi> plus you can't write to it normally from the command line... i.e. you can't tar -czlf /media/cdrecorder/backup.tar.gz /
[06:07] <Yagisan> LaserJock: well, not so long ago I felt the pain of complete system meltdown, with 1 backup failing to restore.
[06:07] <psusi> with packet writing, you can
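A sketch of the difference psusi is pointing at: with a packet-written disc mounted read-write, tar can write an archive straight onto it like any filesystem. Temporary directories stand in for the real mount point here (/media/cdrecorder is illustrative):

```shell
set -e
src=$(mktemp -d)    # stands in for the data to back up
disc=$(mktemp -d)   # stands in for a UDF packet-written disc mounted rw

echo "important work" > "$src/notes.txt"

# No mastering or session-append step: just write the file to the mount.
tar -cz -C "$src" -f "$disc/backup.tar.gz" .

# And read it back like any archive on any filesystem.
tar -tzf "$disc/backup.tar.gz"
```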
[06:08] <psusi> lifeless, if it is so easy then why don't people do it?  because they have to find the wizard... they already understand how to copy files around
[06:09] <Yagisan> ajmitch: I think sbackup does what is listed on your wiki page
[06:09] <ajmitch> Yagisan: probably, but it's not my spec :)
[06:10] <lifeless> psusi: your logic is now inconsistent
[06:11] <lifeless> psusi: as windows comes with packetcd support built in and AFAICT this has not changed things
[06:11] <desrt> oh
[06:11] <desrt> inconsistent logics?
[06:11] <desrt> i'm totally in on this convo
[06:11] <Yagisan> psusi: 1) does your packetcd thing work with dvds, 2) will windows/mac/netware read those discs ?
[06:11] <desrt> are we assuming the axiom of choice?
[06:11] <psusi> lifeless, no, windows does not
[06:11] <Yagisan> brb
[06:11] <lifeless> desrt: hell no, predicate logic need not apply
[06:11] <psusi> you need incd or something to get packet mode in windows
[06:12] <desrt> lifeless; ?
[06:12] <desrt> lifeless; you do not need the axiom of choice to have a meaningful predicate logic
[06:12] <psusi> Yagisan, I believe yes on both
[06:12] <desrt> lifeless; hell.. you don't even need ZF
[06:12] <lifeless> meh, I'm too tired and grumpy for this. psusi I support packetcd as something to implement and make easy, but if you invoke 'average user' be VERY VERY VERY careful to get your facts straight
[06:12] <zakame> hi all
[06:13] <Yagisan> re
[06:13] <psusi> I just remember several years ago it seemed like everyone would copy their work to floppies at the end of the day for safe keeping
[06:13] <psusi> they don't do that anymore
[06:14] <desrt> floppies?  for safe keeping?
[06:15] <desrt> my sister had a college class in effective web communication
[06:15] <desrt> the teacher mandated the use of floppy disks because she didn't trust usb keys
[06:15] <desrt> i told my sister to expect the worst
[06:15] <psusi> rofl
[06:15] <desrt> the worst, of course, occurred.
[06:15] <psusi> that's funny
[06:15] <psusi> well, no... it's sad
[06:16] <psusi> usb sticks are way more reliable than floppies ever were
[06:16] <desrt> this is why i find funny
[06:16] <desrt> the amount of memory i've casually amassed on my person
[06:16] <psusi> I hate floppies... wrote a disk driver for them once... that hardware is hell
[06:16] <desrt> 1gb on my keychain
[06:16] <Yagisan> desrt: Oh no, you couldn't find replacement 8" disks ;)
[06:16] <desrt> 1gb in my vorbis player
[06:16] <desrt> 1gb in my camera
[06:16] <desrt> i routinely have 3gb of memory on my body without even thinking about it
[06:16] <Yagisan> psusi: me too - I did mine in x86 asm. I thought it was rather fun
[06:17] <desrt> can you even imagine how ridiculous it would be to carry around 3gb on floppy?
[06:17] <psusi> desrt, and it wasn't that long ago that a 40 MB hard drive was gigantic ;)
[06:17] <desrt> you'd definitely be aware that you were doing it :)
[06:17] <desrt> my camera takes pictures that don't even fit on a floppy :p
[06:17] <psusi> Yagisan, I nearly blew a gasket after I wrote it to detect the disc using the query command I found in the specs... then found out that NO disks on the market support it
[06:18] <psusi> which is why you have to manually tell the bios what kind of floppy there is, if any... and the OS believes it... even if it's a lie
[06:19] <desrt> yet
[06:19] <desrt> linux can detect them
[06:19] <Yagisan> psusi: oh - I never had specs - I reverse engineered the bios so I could add support for 3.5" drives on a 286 that didn't support them
[06:19] <Yagisan> psusi: does your packetcd stuff work on a multi-user system ?
[06:20] <psusi> desrt, no, it can't... it uses the info provided in the bios
[06:20] <desrt> oh
[06:20] <psusi> believe me... I pulled my hair out for days over that
[06:20] <desrt> how... unlinuxy
[06:21] <psusi> no disks support the required commands to detect... and both linux and windows grab the bios info during boot and run with it
[06:21] <desrt> special.
[06:22] <psusi> writing the code to control the dma in a 32bit os was especially fun... heh...
[06:22] <desrt> dma used to be funny :)
[06:22] <Yagisan> desrt: It's "fun" to play with floppies. Adjusting track and sector size to squeeze more data on is interesting too
[06:23] <psusi> yea.... 1920 kb with mixed sector sizes
[06:23] <Yagisan> desrt: 1.64 is the most you can reliably fit on a disk, and still have other systems read it
[06:23] <psusi> but couldn't boot from them because the bios can only read the standard 18 512 byte sectors per track
[06:24] <Yagisan> psusi: does your packetcd stuff work on a multi-user system ?
[06:24] <psusi> but you could format track0 normally and the other 79 with MSS... and have the boot code in track0 talk directly to the FDC... heh
[06:24] <psusi> Yagisan, yes... had to fix a few bugs in the kernel and the format utilities to make that work
[06:24] <psusi> Yagisan, in a sane manner that is
[06:25] <psusi> described it all on the spec page
[06:25] <Yagisan> psusi: so multiple users can write to those disks at the same time ?
[06:25] <psusi> Yagisan, ohh... well... yea... the way I got it set up now is a regular desktop user is going to have it auto mount with the uid= and gid= params set to their id
[06:26] <psusi> which will cause the owner on disk for files they create/own to be set to -1
[06:26] <psusi> which causes them to be owned by whoever is specified by the uid=/gid= params when mounted, so it can be used in another computer sanely, or by another user
[06:27] <psusi> but if you really want to, you can do away with the mount options and multiple users can access it at once, and the real ids will be saved
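For reference, the two setups psusi contrasts roughly map onto udf mount options like this; device, mount point, and ids are illustrative, and this needs root plus a packet-formatted disc:

```shell
# Desktop case: pin ownership to the logged-in user. Files they create are
# stored on disc with uid -1, so whoever supplies uid=/gid= at the next
# mount owns them -- portable between machines and users.
mount -t udf -o rw,uid=1000,gid=1000 /dev/sr0 /media/cdrecorder

# Multi-user case: drop uid=/gid= and the real on-disc ids are used as-is.
mount -t udf -o rw /dev/sr0 /media/cdrecorder
```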
[06:27] <Yagisan> psusi: brb - but I'd like to talk about it more when I get back
[06:32] <Yagisan> re
[06:33] <jonshea> Does anyone know if there is a particular reason that mpeg2vidcodec isn't included as a package anywhere? Are the licensing issues?
[06:34] <Yagisan> psusi: I wonder about the automount feature. eg on my ltsp system, when you put a cd/dvd in, it appears on everyones desktop
[06:34] <Yagisan> jonshea: does a package exist ?
[06:34] <psusi> ltsp?
[06:35] <Yagisan> psusi: linux terminal server project. it's in breezy, and edubuntu uses it extensively
[06:35] <psusi> Yagisan, currently the automounter sets the uid/gid options when it auto mounts external media like cds and usb sticks, so only one user will have access to it, assuming it is fat, iso9660, or udf
[06:35] <jonshea> Yagisan: Not as far as I can tell, which is about 30 minutes worth of looking. I'd love to have mpeg2vidcodec, because I use it with imagemagick convert.
[06:35] <psusi> hrm... sounds very cool
[06:35] <ajmitch> sigh
[06:36] <Yagisan> psusi: I use it to make all those old p2/k6 boxes useful
[06:36] <psusi> nifty
[06:36] <jonshea> I can install from source just fine, but obviously packages are nicer. I believe that fink has a package for it.
[06:36] <Yagisan> jonshea: you'll need to package it yourself then
[06:37] <psusi> Yagisan, how the heck does that work?  does each login get its own gnome-volume-manager?
[06:37] <Yagisan> psusi: yep - I have one overpowerful amd64 box - and all the pc's people would chuck away hanging off it - rather cool, and easy to manage
[06:38] <psusi> Yagisan, normally g-v-m sees the disc and mounts it... in a multi user environment like that you'd want it mounted only once... and either set to be accessible to everyone, or set to actually use the permissions on the disc..
[06:38] <jonshea> I suspect I can handle that. Thanks.
[06:38] <Yagisan> psusi: each login gets their own everything - but most of the code and data can be shared in memory
[06:39] <ajmitch> sigh
[06:39] <ajmitch> hammer, please
[06:39] <psusi> Yagisan, if multiple g-v-m's are running, then won't they race to be the first to mount the media?
[06:39] <Yagisan> ajmitch: there there, it will be ok
[06:39] <ajmitch> Yagisan: #ubuntu drones...
[06:39] <Yagisan> ajmitch: I know - I avoid it, and the forums, now. I learnt my lesson
[06:40] <ajmitch> yeah
[06:40] <zakame> gaah
[06:40] <Yagisan> ajmitch: I got sick of being flamed for trying to stop them breaking the boxes
[06:40] <ajmitch> someone who's convinced that a fully 64-bit system without 32-bit exe support in the kernel will be faster
[06:41] <Yagisan> psusi: I don't know - I've never noticed a problem so far
[06:41] <Yagisan> ajmitch: tell them racing stripes will make their pc faster too
[06:41] <psusi> Yagisan, which uid owns the files on the filesystem?
[06:41] <ajmitch> hehe
[06:42] <zakame> i suppose having both types of systems at hand would be essential to understanding the prob
[06:42] <psusi> they usually respond by saying it improves the drag coefficient... then they get all confused when I ask them what a drag coefficient is...
[06:42] <psusi> Yagisan, so... which uid owns the files on the cd?
[06:43] <Yagisan> psusi: I'm looking for a cd - just a second
[06:44] <psusi> Yagisan, and I guess what you were interested in is what happens if you were to insert a packet mode cd?  if you want to you can have each user be able to have a home directory on it they own ;)
[06:44] <Yagisan> psusi: I just tossed a cd in, and apparently root owns it
[06:44] <seth> aw
[06:44] <seth> there there honey
[06:44] <ajmitch> I think I should leave #ubuntu soon, I don't have the temperament for it :)
[06:45] <ajmitch> thanks Yagisan
[06:45] <Yagisan> ajmitch: are you a mod there - ban everyone for a few hours ;) that will make you feel better
[06:45] <seth> I need to leave #ubuntu b/c there's a guy with the nick sethk, and people ALWAYS ping me instead of him :P
[06:46] <Yagisan> ajmitch: childish yes - but still good fun
[06:46] <ajmitch> haha
[06:46] <ajmitch> it's a very tempting thought
[06:46] <psusi> Yagisan, did that answer your question?  I'm sleepy ;)
[06:47] <Yagisan> psusi: I think so. Of course, I'd have to try it at some stage
[06:48] <Yagisan> psusi: how can you be sleepy - I'm usually here until 4-5am my time. you can do it, go on
[06:48] <psusi> ok.... the kernel patch is on the wiki, and udftools is on revu any time you want to play with it ;)
[06:49] <psusi> Yagisan, I was here untill 4-5 am the last two nights... in the morning I have work ;)
[06:50] <Yagisan> psusi: I have to work too - why can't customers beat a path to my door for little or no effort
[06:55] <LaserJock> is there any web hosting for MOTU related material?
[06:55] <Yagisan> ouch they are eating /sh alive on d-d
[06:55] <lifeless> yes
[06:55] <lifeless> comes of pushing rather than leading
[06:55] <zakame> heya LaserJock , jamessan , whiprush :)
[06:56] <LaserJock> I have some lists and things for the MOTUScience team that aren't very wiki friendly but I don't know where I can host them
[06:57] <LaserJock> hi zakame
[07:03] <jsgotangco> oh well
[07:04] <zakame> LaserJock: how's packagingguide going?
[07:04] <LaserJock> oh, its getting there
[07:04] <jsgotangco> ouchh...
[07:04] <LaserJock> I have been on vacation so it hasn't gotten far for a few weeks but I'm going to be working on it again
[07:06] <ajmitch> Yagisan: yeah? I haven't read the carnage for a few hours
[07:07] <ajmitch> oh not much new since I last looked
[07:07] <Yagisan> ajmitch: I just get a digest. btw their digest is much better than what ubuntu uses - at least there I can easily reply to a message, which I have done on occasion
[07:07] <minghua> hi zakame, I am going to pester you again about the octave2.1 sync
[07:08] <minghua> zakame: did you request it?  it still hasn't shown up in build logs yet
[07:08] <Yagisan> ajmitch: but as I don't want to be spit-roasted I refrain from posting if possible
[07:08] <zakame> minghua: it hasn't, I don't have the ACCEPTED mail yet, though I've requested it already
[07:08] <Yagisan> ajmitch: debian-mentors is a much safer place to post
[07:09] <ajmitch> yep
[07:09] <zakame> ajmitch: what carnage?
[07:09] <minghua> zakame: ok I see.  I'll keep waiting, then, thanks
[07:09] <zakame> ah, the d-devel...
[07:10] <zakame> no prob :)
[07:10] <jsgotangco> ouchhhh
[07:10] <jsgotangco> the more i read it, the more i go ouchhh
[07:11] <zakame> :(
[07:13] <Yagisan> yep, I see \sh's point, but that won't fit debian culture - it's not the way they work. BTW that list should have a disclaimer: beware of the asuffield
[07:14] <jsgotangco> heh
[07:14] <Yagisan> I do prefer if lp was oss though, but that's just my opinion
[07:14] <zakame> Yagisan: haha, well asuffield bites, but he's a good guy :)
[07:15] <Yagisan> zakame: I didn't say he was bad - just beware - it's like a guard dog, he's good to you, but he sees everyone else as lunch
[07:18] <zakame> hehe
[07:22] <minghua> nice metaphor :-)
[07:24] <zakame> hehe
[07:24] <zakame> wb tritium :)
[07:24] <tritium> thank zakame :)
[07:24] <tritium> thanks, even
[07:42] <zakame> back
[07:43] <Burgundavia> all those who blog, lets push us up the google rankings for "linux desktop", as we currently have no juice there
[07:44] <zakame> Burgundavia: blogging then :)
[07:44] <jsgotangco> RedHat! SuSE!
[08:33] <zakame> StevenK: linkchecker is in, the buildLogs says ftbfs on ia64, wait a while perhaps :)
[08:35] <zakame> heya dholbach :)
[08:35] <dholbach> good morning
[08:35] <dholbach> hey zakame
[08:49] <StevenK> zakame: It fails due to not being able to install stuff, which is just silly.
[08:51] <lifeless> dholbach: opensync is in new
[08:51] <lifeless> dholbach: its hurry up and wait time
[08:54] <dholbach> lifeless: then upload it to ubuntu's NEW queue too
[08:54] <lifeless> dholbach: I cant
[08:54] <dholbach> why is that?
[08:55] <lifeless> dholbach: I dont have upload rights to ubuntu
[08:55] <dholbach> then tell me where the source packages are and i'll sponsor the uploads :)
[08:55] <lifeless> in debian NEW ;0
[08:55] <dholbach> right
[08:56] <lifeless> back in a bit
[08:58] <dholbach> lifeless: i couldn't find a link to the source packages there
[09:03] <Burgundavia> dholbach, did you see that itp for the galago stuff?
[09:04] <zakame> seen that
[09:05] <dholbach> Burgundavia: yes i did - i talked to giskard about all the stuff, but found some problems in the packages, else they'd be in dapper already :)
[09:05] <Burgundavia> dholbach, ah, figured as much
[09:05] <Burgundavia> dholbach, that would be cool stuff to push for dapper+1
[09:06] <dholbach> i'm confident in dapper universe :)
[09:06] <Burgundavia> I mean by default for dapper+1
[09:07] <dholbach> :)
[09:07] <Burgundavia> I just pinged the xchat-gnome guys about doing a backend for it
[09:15] <dholbach> cool
[09:52] <lucas> hi all
[10:02] <Tonio_> hi all
[10:02] <Tonio_> hi dholbach
[10:02] <dholbach> hellas Tonio_
[10:03] <Tonio_> dholbach: I'll be there tomorrow for the CC
[10:03] <dholbach> cool :)
[10:03] <Tonio_> finally I'll get in ;)
[10:21] <dholbach> Bug Day on Friday!
[10:35] <freeflying> looking for reviewers for this http://revu.tauware.de/details.py?upid=1434
[10:35] <raphink> hi dholbach
[10:35] <freeflying> raphink: hi
[10:35] <raphink> hi freeflying
[10:36] <freeflying> raphink: looking for review
[10:36] <raphink> sorry can't review right now
[10:36] <raphink> I have to get going ;)
[10:36] <freeflying> raphink:  :)
[10:36] <dholbach> hellas raphink
[10:37] <ajmitch> freeflying: I'd clean up debian/rules, remove all those unneeded dh_* calls
[10:37] <ajmitch> you've also got the patch/unpatch rules commented out there
[10:38] <freeflying> ajmitch: got it ,thx
[10:45] <\sh> ogra: ping
[10:46] <ajmitch> morning \sh ;)
[10:46] <\sh> hey ajmitch
[10:46] <ajmitch> survived the night of d-d mails?
[10:46] <\sh> ajmitch: as I said, I'm not posting anymore to d-d
[10:48] <ajmitch> not a surprise
[10:49] <Amaranth> ooh, another ubuntu flamewar on d-d?
[10:49] <\sh> ajmitch: actually it is...I could send them a lot of nice greetings as well..but I don't do it...it's a time-wasting effort.
[10:50] <\sh> ajmitch: and I have other problems right now, as to argue with those people...
[10:50] <freeflying> \sh: hi
[10:51] <Amaranth> ooh, a launchpad flame
[10:51] <Amaranth> this should be interesting
[10:51] <ajmitch> not really
[10:52] <Amaranth> flamewars are always fun to read/watch
[10:52] <dholbach> Amaranth: it's as annoying and as demotivating as every other flamefest too
[10:52] <Amaranth> because it's so damn funny
[10:52] <\sh> Amaranth: it isn't ... the conclusion is: i'm an asshole..and fighting against windmills...and smoking shit
[10:52] <Amaranth> \sh: You took the bait.
[10:53] <\sh> Amaranth: was a mistake...I thought the people in debian were grown ups..
[10:54] <Amaranth> some/most are
[10:54] <zakame> hey a :0
[10:54] <\sh> sometimes I have to make up my mind, and have to tell myself: well, nothing is changing, it's just like debian in the '90s, and they will behave like this in the 21st century..if debian still exists then
[10:55] <zakame> hi \sh, yes, 'twas smoking :)
[10:55] <Tonio_> some debian users are really fabulous assholes....
[10:55] <Tonio_> that's amazing....
[10:56] <Treenaks> s/debian// ;)
[10:56] <\sh> well anyways I have other problems
[10:56] <Tonio_> Treenaks: lol
[10:56] <zakame> I just regard them to have some serious communication gaps
[10:56] <Tonio_> hum, they are proud of their geek power
[10:57] <Tonio_> and afraid of seeing complicated things accessible to standard users
[10:57] <ajmitch> Tonio_: please refrain from inciting any further flames
[10:57] <Tonio_> how is it possible to criticize launchpad when you have seen that debian mess !!!
[10:57] <Tonio_> ajmitch: okay
[10:58] <zakame> err, wait
[10:58] <Tonio_> ajmitch: note that I said "some" ;)
[10:59] <raphink> not to give names ;)
[10:59] <raphink> the list archives are public anyway
[10:59] <raphink> anyone can get an opinion of `sectarian` behaviours ;)
[10:59] <zakame> well, the thread lost its real point anyway
[10:59] <raphink> yes
[11:00] <ajmitch> Tonio_: I don't care if you said some, just what you said was highly inflammatory
[11:00] <ajmitch> we do not need to carry it into here
[11:01] <zakame> let's just respect what each side wants to do, after all, we are all for free software, right? :)
[11:01] <jsgotangco> no!
[11:01] <ajmitch> zakame: that's some of what the issue is about
[11:01] <\sh> zakame: free software != opensource software
[11:01] <raphink> yes zakame somehow
[11:02] <zakame> \sh I am well aware of that distinction
[11:02] <\sh> zakame: and that's what the discussion was about..they would use launchpad, if it were OSS
[11:03] <\sh> zakame: but because of not being OSS, it's bad, it's evil, and it's a webapp
[11:03] <raphink> warf
[11:03] <raphink> not sure they would use it if it were OSS
[11:03] <raphink> actually
[11:03] <raphink> they may find another excuse
[11:03] <zakame> \sh : which almost got lost into ad hominem arguments
[11:03] <\sh> raphink: ok..s/would/could/
[11:03] <ajmitch> good bye, I'm sick of this argument
[11:03] <raphink> yes
[11:04] <\sh> ajmitch: there is a stop now...
[11:04] <zakame> ajmitch : sick, indeed
[11:04] <\sh> I'm wondering who would like to take over njam?
[11:05] <zakame> njam?
[11:05] <jsgotangco> awesome game
[11:06] <\sh> Ok, some serious stuff now: It looks like that I have to give up my dsl line and some other stuff the next month, because I can't pay it anymore...so I need some volunteers to take over my favorite packages...which is python-sip4-qt3 python-sip4 python-kde3 njam (gajim will be taken over by motu im anyways)
[11:06] <\sh> zakame: network enabled pacman clone with level editor etc.
[11:07] <zakame> shawarma: checking packages.u.c/src:njam
[11:07] <zakame> gaah \sh I mean
[11:07] <\sh> zakame: I uploaded the last time a new upstream version, which has now autotools support...
[11:07] <\sh> so there is nothing complicated anymore :)
[11:08] <zakame> w00t
[11:08] <\sh> zakame: also to mention, it's not in debian, and someone could find his/her way into debian with this package
[11:08] <raphink> :)
[11:09] <zakame> shawarma: well I'm looking for RFPs anyway, would it be ok if I proceed so?
[11:09] <Burgundavia> \sh, how is the job hunt going?
[11:09] <zakame> again, I mean \sh
[11:09] <zakame> this bitchx is killing me
[11:10] <\sh> Burgundavia: waiting for the 2nd interview with the big internet search company in 50 minutes
[11:10] <\sh> zakame: you are welcome :)
[11:10] <jsgotangco> big internet search company...
[11:10] <jsgotangco> ahhh!!!
[11:10] <zakame> w00t!
[11:10] <jsgotangco> Teoma?
[11:10] <\sh> lol
[11:11] <Treenaks> msn?
[11:11] <StevenK> No, it's alta vista.
[11:11] <\sh> but I don't know if this works out as expected...and most probably I won't be able to hold to my normal standard of living....meaning, the worst thing that will happen is that I resign from all my ubuntu duties
[11:11] <zakame> awww :(
[11:15] <\sh> so I need to know if somebody is interested in taking over my favorite packages, so that kubuntu can work with the python stuff and somebody has to take the duty to repackage wine from wine-hq which is sometimes a bit hard :) you can't use the debian wine-hq packages..because of their stupid numbering
[11:16] <zakame> hm I thought StevenK was doing something with wine?
[11:17] <\sh> we don't even use the debian packages :)
[11:19] <zakame> which is understandable :)
[11:20] <Yagisan> re
[11:21] <Yagisan> \sh_away: I wanted to talk to you about wine later, esp re wine on amd64
[11:26] <\sh_away> Yagisan: is there a solution for wine on amd64?
[11:27] <Yagisan> \sh_away: well I have some ideas on how to get win64 + win32 apps going, but I think I'll need some help with it
[11:28] <Yagisan> \sh: wine should build on amd64, but when it does, it runs win64 apps only
[11:28] <\sh> Yagisan: what I meant is, is there a solution from upstream for this? if there is, we could put it into the package
[11:28] <Yagisan> \sh: upstream says it can't be done, but I think it can, as it is a packaging issue
[11:29] <\sh> well make it :) and put some test packages on a website :)
[11:29] <Yagisan> \sh: I built a 32bit libs package, its on revu - that should run 32bit wine
[11:30] <Yagisan> \sh: what I need to do, is make a wrapper that can tell the difference between a win64,win32 and win16 app
[11:30] <Yagisan> \sh: Would you be able to help test, or at least double check that I don't break anything too bad ?
[11:31] <\sh> Yagisan: well..the problem with win64, win32 and win16 is, that win32 can run win16, and win64 normally can run win32 with a compatibility layer in windows...
[11:31] <Yagisan> \sh: nope, on windows it is separate libs
[11:32] <Yagisan> \sh: wine has gone the same way, 64bit wine only runs 64bit apps
[11:32] <Yagisan> \sh: so I need wine + wine64
[11:32] <\sh> Yagisan: i think that's the word at ms for "separate libs" ;)
[11:33] <Yagisan> \sh: Oh, I thought you meant something like the WoW thunker
[11:33] <\sh> Yagisan: well it works the same way as "linux32"
[11:34] <\sh> the problem is, to get it built on amd64 as a 32bit app
[11:34] <\sh> without changing anything on the buildd
[11:34] <Yagisan> \sh: nope, better idea
[11:34] <Yagisan> \sh: build as 32bit on i386, repack for amd64
[11:34] <Yagisan> \sh: see zsnes in revu for proof of concept
[11:35] <\sh> Yagisan: argl..
[11:36] <Yagisan> \sh: no one has a better solution, until then, I'd like to go with that
[11:36] <\sh> Yagisan: are you sure it's in line with infinity and lamont?
[11:37] <Yagisan> \sh: it's from the same source, it just needs to successfully be built on i386, either in pbuilder, or by a buildd first
[11:37] <\sh> and that is something we can't see somehow..
[11:38] <Yagisan> \sh: it should be fine, but if you have a better idea, I'd love to hear it.
[11:39] <Yagisan> \sh: we need multi-arch, but it does not yet exist, and we are a pure64 distro. cross compiles are failing for every test case I try
[11:40] <Yagisan> \sh: next best thing is uuencoded i386 debs, repacked with support libs
[11:40] <\sh> well...I just have a look on ubuntu-fetch-*
[11:41] <Yagisan> \sh: ??
[11:41] <\sh> you assume that the zsnes package in the archive is already the i386 binary package of the latest source
[11:41] <\sh> Yagisan: I just read your zsnes debian dir :)
[11:41] <Yagisan> \sh: I can't have network access at build time :( so I "assume" you built it ok first yes
[11:42] <Yagisan> \sh: you can pbuilder it first, and stick the deb in /local/pkgs and it won't download from a mirror
[11:43] <Yagisan> \sh: but I don't think that directory is in the diff, as it was empty when I made the package
[11:44] <\sh> Yagisan: yes...but how to do it on the buildd? 1. there is no network 2. how do you get the i386 packages into the debian/pkgs dir, without manual intervention?
[11:45] <Yagisan> \sh: there is a little issue of scheduling it, a bit like with OOo
[11:46] <Yagisan> \sh: It was fully automatic, but Mithrandir said that was a bad idea(tm) - see the comments on ia32libs-universe
[11:49] <Yagisan> \sh: :( file can't tell the difference between win64 and win32 apps
[11:50] <\sh> well...back to the zsnes package...I read that you want to have local packages already in the source, means you have to compile first for i386 and then moving the binary packages into the source tree and compile again for amd64
[11:50] <\sh> or you imagine you have network access during build time...which is sometimes not true
[11:51] <Yagisan> \sh: yes, the multiple build. Intended as a stop-gap until multi-arch. There is no network access during build.
[11:52] <Yagisan> \sh: If done in a pbuilder, the first build won't affect the buildds, and only requires one source upload
[11:52] <\sh> Yagisan: ok..so we are stuck with the multiple build..how can we achieve this with only one source package...and thinking about building i386 packages first, then downloading those packages, putting them again into the same source package and reupload the source
[11:53] <Mithrandir> Yagisan: what are you trying to achieve?
[11:53] <\sh> Mithrandir: wine for 64bit with 32bit support
[11:53] <\sh> Mithrandir: compiled out of one source package
[11:53] <Yagisan> G'day Mithrandir - want to run some i386 only apps on amd64, while waiting for multi-arch
[11:54] <Mithrandir> so you want a 32 bit wine repackaged for amd64?
[11:55] <Yagisan> \sh: I build the i386 package in pbuilder first, copy the resulting deb to /local/pkg, run the ubuntu-fetch-and-build script, repack source, and send to buildd
[11:55] <Mithrandir> Yagisan: _no_.
[11:55] <\sh> Yagisan: which is not secure :)
[11:55] <Yagisan> Mithrandir: yes, but there is also a wine64 that only runs win64 apps
[11:55] <\sh> Yagisan: i don't trust compiles from random hosts :)
[11:55] <Mithrandir> Yagisan: you are to take the binaries which comes from the buildd if you do repackaging like that.
[11:56] <Mithrandir> Yagisan: just build the _amd64.debs when doing the 32 bit build, then
[11:57] <Yagisan> Mithrandir: how ?
[11:57] <\sh> well, I think then about a wine-compat32 package
[11:57] <Yagisan> \sh: already done
[11:57] <Yagisan> Mithrandir: if I can build amd64 debs from i386, I'm very happy
[11:58] <Yagisan> \sh: you are talking about the 32bit support libs for wine right ?
[12:00] <Yagisan> Mithrandir: would it work if I swapped the control file during build ?
[12:00] <Mithrandir> Yagisan: possibly
[12:00] <\sh> ok....time for the interview :)
[12:01] <Yagisan> Mithrandir: I tried cross compile first, but that failed. I'll try with zsnes again, see if I can make amd64 and i386 out of it
[12:01] <\sh> wish me good luck :)
[12:01] <Yagisan> \sh: good luck
[12:05] <raphink> good luck \sh
[12:05] <raphink> :)
[12:05] <Tonio_> good luck
[12:08] <segfault> hi
[12:13] <raphink> hi segfault
[12:22] <raphink> ouch 180 merges to go :s
[12:23] <raphink> and LP down
[12:23] <lucas> life sucks ;)
[12:34] <raphink> haha
[12:43] <Gloubiboulga> hi
[12:58] <segfault> 180!?
[12:58] <segfault> some time ago there were just 5
[01:22] <Yagisan> w00t: dpkg-gencontrol: error: current build architecture i386 does not appear in package's list (amd64)
[01:23] <Yagisan> guess switching doesn't work that easily
[01:43] <juliux> hi, anyone here who wants to make a ubuntu-dev talk at the linuxtag 2006 in wiesbaden/germany ?
[01:50] <Yagisan> G'day ogra_ibook
[01:51] <ogra_ibook> hey Yagisan
[01:52] <Yagisan> ogra_ibook: ever tried to force arch a out of pbuilder b ?
[01:52] <ogra_ibook> you mean i386 on ppc or the opposite ?
[01:53] <Yagisan> ogra_ibook: amd64 out of i386
[01:53] <ogra_ibook> nope, but will work fine
[01:53] <pappan> dholbach: hi
[01:53] <Yagisan> ogra_ibook: I was told I'm not allowed to uuencode i386 debs :(
[01:54] <dholbach> pappan: hellas! :)
[01:54] <Yagisan> ogra_ibook: got a nice error -> dpkg-gencontrol: error: current build architecture i386 does not appear in package's list (amd64)
[01:54] <pappan> dholbach: how are you
[01:54] <pappan> query dholbach
[01:54] <dholbach> pappan: thanks a lot, i'm fine - how are YOU?
[01:56] <abelcheung> Hi, if a main inclusion report is ready, and all problems blocking upload of package are solved, what should be done next to ask for universe->main move?
[02:01] <abelcheung> Oh, I asked the question here because the fixed package is in REVU, and waiting for review
[02:02] <abelcheung> http://revu.tauware.de/details.py?upid=1436
[02:14] <Yagisan> good -> dpkg-deb: building package `zsnes' in `../zsnes_1.420-0.1ubuntu2_i386.deb'.
[02:14] <Yagisan> good -> dpkg-deb: building package `zsnes-32' in `../zsnes-32_1.420-0.1ubuntu2_amd64.deb'.
[02:15] <Yagisan> bad -> dpkg-genchanges: failure: cannot open upload file ../zsnes-32_1.420-0.1ubuntu2_i386.deb for reading: No such file or directory
[02:15] <Yagisan> :(
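Both dpkg errors above come down to the Architecture field in debian/control: dpkg-gencontrol refuses to build on i386 a binary package whose stanza names only amd64, and dpkg-genchanges then looks for .deb filenames matching the build architecture. A hypothetical stanza for the repack trick might read (package name and Depends are assumptions, not from the log):

```
Package: zsnes-32
Architecture: amd64
Depends: ia32-libs
Description: 32-bit zsnes build repacked for amd64 (illustrative stanza)
```

Swapping this field mid-build, as Yagisan tries, leaves the generated .changes file and the actual .deb names out of sync, hence the "cannot open upload file" failure.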
[02:28] <Yagisan> ogra_ibook: wb
[02:38] <zakame> evening MOTUs
[02:39] <dholbach> hellas zakame
[02:40] <Yagisan> G'day zakame
[02:41] <Yagisan> why does dpkg-genchanges hate me ?
[02:42] <dholbach> Yagisan: you could    strace -e open,stat   it
[02:42] <Yagisan> zakame: not too close - I can't breathe
[02:43] <Yagisan> dholbach: it's in pbuilder when it chokes - it's looking for a non-existent file
[02:43] <dholbach> permissions and disk space are cool?
[02:44] <Yagisan> dholbach: yep - the error is a few lines up /|\
[02:44] <dholbach> yeah, i read that
[02:44] <zakame> Yagisan: buwahaha
[02:54] <Yagisan> zakame: adopting debian packages ?
[02:54] <zakame> yep
[02:56] <zakame> I'm currently looking at njam though, it looks quite nice and good to have in debian :)
[02:56] <Yagisan> zakame: I find it rather funny esp as I saw a few ubuntu doesn't contribute back to debian posts today - good luck, and have fun
[02:57] <zakame> Yagisan: indeed, that's why I've made it a personal goal to debunk that claim ;)
[02:57] <Yagisan> zakame: what is njam ?
[02:58] <zakame> Yagisan: it's from \sh , pacman-like game
[02:58] <zakame> network-friendly too :)
[02:59] <Yagisan> zakame: cool - I like nice games, esp when waiting for pbuilder to finish
[02:59] <\sh> zakame: remove my name completly from the package
[02:59] <zakame> \sh: even from changelog?
[03:00] <\sh> yes please
[03:00] <Yagisan> \sh: I don't think they will keep flaming you. Sorry you had to meet it first hand
[03:00] <zakame> \sh: awww, but I'll respect your wish, ok
[03:01] <\sh> Yagisan: it's not the first time...so don't worry...no offense taken from that :)
[03:01] <\sh> zakame: actually you decide..it's your package now :)
[03:02] <zakame> hm seems I shouldn't have filed that ITP, there's already a previous debian #316706
[03:02] <Ubugtu> Debian bug 316706: "njam -- Njam is a full-featured cross-platform pacman-like game written in C++ using SDL library" Package: ITP, Severity: wishlist, Maintainer: wnpp@debian.org http://bugs.debian.org/316706
[03:03] <gypsymauro> hello
[03:03] <zakame> heya gypsymauro
[03:03] <gypsymauro> hello, I've downloaded some packages and then created a cd using dpkg-scanpackages; now I can add my cd with apt-cdrom add, but when I try to install a package from the CD, apt-get says "WARNING: The following packages cannot be authenticated!", lists the packages, and asks "Install these packages without verification [y/N]". How can I make them authenticated? I suppose it's because I missed the Release file, but I dunno how to create it
[03:04] <Mithrandir> gypsymauro: apt-ftparchive release can create Releases files.
[03:04] <\sh> zakame: merge both ITP bugs then :)
[03:04] <Yagisan> zakame: 1) it's old, 2) it's not in debian. 3) you could tag your itp with aready-in-ubuntu
[03:04] <\sh> zakame: or mark the last one as duplicate
[03:05] <ogra_ibook> gypsymauro, also make sure to use apt-key add to add the key the packages are signed with to the CD
[03:05] <gypsymauro> Mithrandir: it works for cd too?
[03:05] <zakame> Yagisan, \sh: indeed :) how can I not work without my wonderful MOTUs :)
[03:05] <Mithrandir> gypsymauro: the CD is just a dump of what's available over HTTP ordinarily, so yes.
[03:06] <gypsymauro> Mithrandir: it's something done with wget and so on, then I do a dpkg-scanpackages so I can do an apt-cdrom add, but it seems this is not enough to authenticate the packages
[03:07] <gypsymauro> I'm missing something but i dunno what
[03:07] <Mithrandir> gypsymauro: apt-ftparchive release, as I said.
[03:07] <Mithrandir> you also need to sign the Releases file using gpg and have that key in the apt keyring
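Mithrandir's recipe, spelled out as a rough command sketch for reference — directory paths and the key ID are placeholders, not details from the log:

```sh
# On the machine that builds the CD tree:
cd /path/to/cd-tree
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
# apt-ftparchive writes the Release file to stdout; generate it to a
# temp file first so the redirect doesn't create a file mid-scan
apt-ftparchive release . > /tmp/Release && mv /tmp/Release Release
gpg --default-key "PLACEHOLDER-KEY-ID" --armor --detach-sign --output Release.gpg Release

# On the client, before apt-cdrom add (per ogra_ibook's note above):
apt-key add placeholder-public-key.asc
```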
[03:27] <gypsymauro> thanks Mithrandir :) now I see the light
[03:51] <azeem> quicker for what?
[03:52] <Yagisan> azeem: It's taking a long time to tar and untar for every build
[03:52] <azeem> the base system?
[03:52] <azeem> sbuild doesn't do that, FWIW
[03:53] <Yagisan> azeem: yep - i'm irritated, 17 ftbfs in a row
[03:53] <Yagisan> 18 + new error message now
[04:02] <jsgotangco> byzanz? wow dholbach
[04:02] <hub> what
[04:02] <jsgotangco> that's like only  yesterday
[04:03] <hub> I did it last night too
[04:03] <jsgotangco> ahhh
[04:03] <hub> but didn't upload it to REVU :-/
[04:03] <hub> :_/
[04:03] <dholbach> jsgotangco: i only had to wait for the upstream author to tell me that it only worked on composite enabled x :)
[04:03] <hub> I have libgopersist, but it FTBFS
[04:03] <hub> dholbach: libdamage
[04:03] <dholbach> hub: i added that
[04:04] <hub> dholbach: my package has it :-)
[04:04] <dholbach> my package has it too (if you mean the build-dep)
[04:04] <hub> yep
[04:04] <hub> whatever I'll just rm -fr it
[04:05] <Yagisan> 19 :(
[04:05] <dholbach> hub: gdk_screen_get_rgba_colormap (gdk_screen_get_default ()) returns NULL and makes the applet unusable on boxes that have no composite
[04:09] <thierry_> ajmitch : thanks for sending a comment on my package! There are only two points I don't know how to fix: no pkg-config installed, and no headers installed into -dev
[04:12] <Mithrandir> thierry_: build-depend on pkg-config?
[04:13] <thierry_> k thanks
[04:17] <thierry_> Mithrandir : and how do I bump debhelper compatibility to 5
[04:17] <thierry_> ?
[04:17] <Gloubiboulga> thierry_, modify the compat file (4 -> 5)
[04:18] <thierry_> only 5?
[04:18] <thierry_> like Standards-Version: 5 ?
[04:18] <Gloubiboulga> and in debian/control: debhelper (>=5.0.7)
[04:18] <thierry_> ho ok
[04:21] <Mithrandir> thierry_: echo 5 > debian/compat, usually.
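Mithrandir's one-liner in context — a minimal sketch of the whole compat bump (the directory layout is the standard one; the debhelper version bound repeats Gloubiboulga's advice above):

```shell
# create the file that declares debhelper compatibility level 5
mkdir -p debian
echo 5 > debian/compat
cat debian/compat
# debian/control should then build-depend on: debhelper (>= 5.0.7)
```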
 I have the power!
[04:35] <zakame> w00t
[04:37] <zakame> gn8 all
[04:38] <lotusleaf> ;-p
[04:42] <thierry_> while updating my dapper chroot I get this : dpkg: error processing /var/cache/apt/archives/coreutils_5.93-5ubuntu1_i386.deb (--unpack):
[04:42] <thierry_>  subprocess pre-installation script returned error exit status 2
[04:42] <thierry_> Errors were encountered while processing:
[04:42] <thierry_>  /var/cache/apt/archives/coreutils_5.93-5ubuntu1_i386.deb
[04:42] <thierry_> E: Sub-process /usr/bin/dpkg returned an error code (1)
[04:42] <thierry_> I think there's a problem with coreutils
[05:02] <lotusleaf> lol
[05:29] <siretart> hi
[05:29] <Yagisan> G'day siretart
[05:31] <Gloubiboulga> hello siretart
[05:31] <siretart> huhu Yagisan, hi Gloubiboulga
[05:34] <\sh> I hope I will manage today to sync my sleeping time to european standards again
[05:34] <thierry_> siretart : are you in the ruby-pkg-extras debian team? Someone told me this team would review my package very quickly: http://revu.tauware.de/details.py?upid=1414
[05:34] <Yagisan> \sh: I gave up syncing my sleep time to Australian standards
[05:35] <siretart> thierry_: Sorry, I didn't touch ruby yet
[05:35] <\sh> Yagisan: but it's not good for me, to stay awake from 17pm to 10am and sleeping from 10:01am to 16:59pm
[05:35] <\sh> why do I mention "pm" now, when I'm using 24h format
[05:36] <\sh> I'm tired already :)
[05:36] <thierry_> k
[05:36] <\sh> which means, I can adjust my times ...
[05:37] <Yagisan> \sh: I understand - I'm a night person myself. daylight, whats that ;)
[05:37] <lotusleaf> Yagisan, I got a wallpaper for the sun and the moon to remind me what they both look like
[05:39] <Yagisan> lotusleaf: I don't use wallpapers, they use up valuable ram that could be used as disk cache
[05:40] <lotusleaf> Yagisan, gotta have a little fun on at least one box ;)
[05:40] <\sh> bbl...
[05:41] <Yagisan> lotusleaf: well, I have been known to play doomsday on my server
[05:42] <lotusleaf> Yagisan, Dungeon Crawl (aka 'crawl') forever! :P
[05:44] <Yagisan> lotusleaf: I just like to shoot at things when I'm frustrated. I wish psdoom would work http://psdoom.sourceforge.net/
[05:44] <Yagisan> lotusleaf: maybe one day, I'll get around to seeing why it didn't work last time
[05:46] <lotusleaf> Yagisan, heh, I haven't played any of the Doom games in years. I was more of a Duke Nukem 3d fan, still waiting for the second coming of Duke Nukem Forever. :P
[05:47] <Yagisan> lotusleaf: never going to happen. You like your doom games with eye candy ?
[05:50] <lotusleaf> Yagisan, Well I haven't tried any doom games for nix.. only the doom3 demo and the only good thing about that was the super turkey fighter in-game game.
[06:10] <Yagisan> night all.
[06:11] <jsgotangco> night
[06:11] <jsgotangco> me too
[06:11] <jsgotangco> i gotta sleep
[06:11] <lotusleaf> Yagisan, nn
[06:23] <Gloubiboulga> could a MOTU have a look at my libtranslate package ? http://revu.tauware.de/details.py?upid=1442
[06:46] <LaserJock> Is there any particular place other than the wiki where we can put MOTU material?
[06:47] <seth> Gloubiboulga, your diff includes config.{guess,sub}, you should remove those in the clean: rule. Also maybe you could describe what the 3 upstream patches do, briefly :)
[06:47] <Gloubiboulga> seth, thanks, but upstream provides config.{guess,sub}
[06:48] <Gloubiboulga> and I don't understand why they appear in the debdiff
[06:48] <seth> because your system changed them during compile
[06:48] <Gloubiboulga> s/debdiff/diff.gz
[06:48] <seth> that's why you remove them in the clean rule :) so they don't appear in the diff.gz
[06:48] <seth> it's okay to leave them in the .orig
[06:49] <seth> your system will re-create them when compiling
[06:49] <Gloubiboulga> seth, ok
[06:49] <seth> so the order is: remove config.{guess,sub} => .orig and .diff are created => compilation (and recreation of config.{guess,sub})
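seth's ordering advice, as an illustrative debian/rules excerpt — the surrounding targets are assumed, and many packages instead copy fresh scripts in from the autotools-dev package at build time:

```makefile
clean:
	dh_testdir
	dh_testroot
	# delete files the build regenerates, so they never end up in the .diff.gz
	rm -f config.guess config.sub build-stamp
	[ ! -f Makefile ] || $(MAKE) distclean
	dh_clean
```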
[06:50] <Gloubiboulga> ok !
[06:50] <seth> Gloubiboulga, also in the description: "allows to implement" => "allows the implementation of"
[06:50] <seth> :)
[06:50] <seth> I'm not a MOTU but it looks good
[06:50] <Gloubiboulga> seth, french guys like me have some troubles with english :p
[06:51] <seth> eh, I assure you that I have a lot of problems with my French
[06:51] <seth> (a lot MORE) ;)
[06:51] <Gloubiboulga> :)
[06:51] <hub> Gloubiboulga: what is the issue with French and english?
[06:51] <hub> I thought that english and french made peace
[06:52] <seth> the language, not the people :)
[06:52] <Gloubiboulga> yep
[06:52] <hub> :-)
[06:53] <seth> have a good day MOTUs and MOTU hopefuls, I am off to job training
[06:53] <LaserJock> cya seth
[06:53] <Gloubiboulga> bye seth, and thanks
[06:55] <lucas> hello all
[06:55] <Yagisan> woo hoo!! amd64 +i386 debs built out of the i386 pbuilder !!!!!
[06:56] <Yagisan> ah shit - they aren't listed in the changes file
[07:07] <thierry_> ajmitch : I resent my package at REVU (http://revu.tauware.de/details.py?upid=1439) and could it be possible that you delete the old one? (libfxscintilla1.6)
[07:11] <Gloubiboulga> I've also reuploaded my package... http://revu.tauware.de/details.py?upid=1444
[07:29] <seth> thierry_away, do you mean for libfxscintilla to be a native package?
[07:52] <Yagisan> I did it. amd64 packages building out of an i386 arch pbuilder
[08:06] <Yagisan> Mithrandir: siretart: Please revu this package. It builds BOTH amd64 and i386 in an i386 pbuilder. Is this method acceptable ? http://revu.tauware.de/details.py?upid=1445
[08:06] <Yagisan> I'll be out most of the day, so comments on the page please
[08:06] <Yagisan> bye
[08:20] <sistpoty> hi folks
[08:20] <dholbach> hellas sistpoty
[08:21] <sistpoty> hey dholbach
[08:21] <sistpoty> dholbach: read my proposal for motu-meeting?
[08:21] <LaserJock> hi sistpoty and dholbach
[08:21] <sistpoty> hi LaserJock
[08:21] <dholbach> sistpoty: yeah, we should do it
[08:21] <dholbach> sistpoty: we have a couple of things to plan
[08:21] <sistpoty> for sure
[08:21] <dholbach> uvf coming on, NEW packages until ff, and we need universe action on the bug days
[08:21] <sistpoty> dholbach: what about date/time of meeting?
[08:22] <LaserJock> hey, you guys might be able to help me. I want to have some lists (similar to lucas's) for the MOTUScience team but I am hesitant to host them on my uni account; is there a place where things like that could be hosted?
[08:22] <dholbach> sistpoty: if we don't overlap with other meetings any time is fine for me, just propose one on the mailing list, wait 1-2 days, do it :)
[08:22] <dholbach> LaserJock: the wiki? :)
[08:23] <LaserJock> well, I don't think they are terribly wiki friendly but I might just have to do that
[08:23] <sistpoty> dholbach: I did propose a time (doesn't overlap as said on fridge) ;)
[08:23] <dholbach> sistpoty: *whistle innocently* *has a look*
[08:23] <sistpoty> hehe
[08:23] <dholbach> is fine for me
[08:23] <sistpoty> :)
[08:24] <lucas> LaserJock: how did you generate them ?
[08:24] <sistpoty> hi lucas
[08:24] <lucas> hi siretart
[08:24] <lucas> sistpoty:
[08:25] <LaserJock> lucas: hopefully a combination of some python stuff of mine with quite a bit of use of your scripts
[08:25] <sistpoty> lucas: take a look at /home/sistpoty/merge_offline/mergebase_yappy.py ;)
[08:26] <sistpoty> hm... why did I call this yappy? should be yaml
[08:26] <lucas> LaserJock: what do your python scripts do ?
[08:27] <sistpoty> LaserJock: we might as well put them on tiber
[08:28] <lucas> sistpoty: excellent
[08:28] <lucas> LaserJock: the easiest thing to do would probably be to integrate your scripts into mdt, and then run it on my account on tiber until you are a member
[08:28] <lucas> (if you aren't already)
[08:29] <LaserJock> lucas: well, I'm not completely sure of all I want but I was thinking of linking to http://people.ubuntu.com/~scott/patches/ for science packages
[08:29] <LaserJock> as well as having the Debian/Ubuntu versions and bts links
[08:30] <lucas> mmh
[08:30] <lucas> will you be online later tonight ?
[08:30] <LaserJock> probably
[08:30] <LaserJock> what is tonight?
[08:30] <lucas> I'm trying to configure gnomemeeting with my girlfriend
[08:30] <Amaranth> i somehow have 18 launchpad karma
[08:30] <Amaranth> woo?
[08:30] <lucas> 1 or 2 hours from now
[08:30] <LaserJock> oh, yes. that would be just after lunch here ;-)
[08:30] <dholbach> Amaranth: congratulations
[08:31] <LaserJock> btw, what does it take to get an account on tiber? MOTUness?
[08:32] <dholbach> LaserJock: i don't have one :)
[08:32] <dholbach> So I can't be blamed when tiber explodes or REVU auto-approves packages. :-)
[08:32] <desrt> dholbach; not what he asked :)
[08:32] <sistpoty> LaserJock: you should be trustworthy ;)
[08:32] <LaserJock> dholbach: true
[08:33] <LaserJock> dholbach: ignorance is bliss?
[08:33] <eruin> Amaranth, I have 1502. that makes me your master, and you must follow my bidding ;-)
[08:33] <sistpoty> LaserJock: just write a signed mail to admin@tiber.tauware.de requesting an account ;)
[08:33] <Amaranth> eruin: i got mine by doing nothing :P
[08:33] <eruin> bah
[08:34] <dholbach> LaserJock: something like that, yes :)
[08:34] <LaserJock> sistpoty: oh, ok. Am I considered trustworthy?
[08:34] <dholbach> LaserJock: I have to fix *my* boxes all the time, so that's enough for me. :-)
[08:34] <sistpoty> LaserJock: based on the fact that you hang around in -motu, and we can flame you if tiber is down :P
[08:35] <dholbach> sistpoty: err, you'd better blame the solar radiation or something
[08:35] <LaserJock> sistpoty: well, I think I am pretty good about just sticking to what I know and not killing other people's stuff ;-)
[08:36] <sistpoty> dholbach: or as bofh ;)
[08:36] <dholbach> yeah :)
[08:36] <sistpoty> ask even
[08:36] <sistpoty> LaserJock: but we will still blame you even if it's an admin's fault :P
[08:37] <LaserJock> sistpoty: sure I can understand that ;-)
[08:37] <sistpoty> hehe
[08:39] <LaserJock> ok, email sent. I think I signed it properly and everything (I haven't had to sign anything since I got enigmail working in thunderbird).
[08:41] <LaserJock> man, \sh has got >3000 karma
[08:42] <Gloubiboulga> his next life will be really great :p
[08:43] <sistpoty> LaserJock: what username do you want? laserjock?
[08:43] <LaserJock> sistpoty: sure
[08:44] <LaserJock> sistpoty: or you could us my LP id mantha if that is more convenient
[08:44] <sistpoty> LaserJock: I guess laserjock is convenient as well, since it's your nick
[08:45] <LaserJock> right, I just didn't know if it would cause any probs if it wasn't the same as my LP id. I guess they are independent of each other
[08:47] <sistpoty> LaserJock: yep, tiber is a server on its own ;)
[08:49] <sistpoty> LaserJock: mail with password sent
[08:50] <LaserJock> got it, thanks so much
[08:50] <sistpoty> np
[08:54] <Tonio_> re
[09:45] <LaserJock> is it relatively easy to switch an svn repo to bzr?
[09:53] <Amaranth> probably, if you want the full histroy
[09:53] <Amaranth> err, history
[09:56] <LaserJock> is bzr generally considered better for offline (local) repos? I would like to start using an rcs for my packaging but I don't know what to use
[09:57] <lifeless> lamont__: bzr ;)
[09:57] <lifeless> Amaranth: svn2bzr
[09:58] <Amaranth> lifeless: i thought it was kinda of...incomplete
[09:58] <lamont__> lifeless: baz2bzr yet?  cvs2bzr yet?
[09:58] <lifeless> Amaranth: the author seems to figure its done
[09:58] <lifeless> lamont__: baz2bzr is usable but not the final output format yet
[09:58] <lifeless> cvs2bzr haahhaahaha
[09:59] <lamont__> lifeless: I don't care if it has particular patch history, I just want to be able to check out old CVS-tags from my new bzr repo...
[09:59] <lamont__> you know: cvs get; bzr commit; loop until done
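lamont__'s "cvs get; bzr commit; loop until done" idea, as a rough shell sketch — the repository address, module, and tag names are all placeholders, and error handling is omitted:

```sh
bzr init imported && cd imported
for tag in RELEASE_1_0 RELEASE_1_1 RELEASE_2_0; do
    # export the tagged tree over the working directory, then snapshot it
    cvs -d :pserver:anonymous@cvs.example.org:/cvsroot export -r "$tag" -d . module
    bzr add .
    bzr commit -m "import CVS tag $tag"
done
```

As lamont__ says, this keeps old tagged states checkable-out without preserving per-commit history.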
[10:00] <lifeless> help out the guys working on ForeignBranch support
[10:04] <thierry_> ajmitch : thanks again for the comment on my package, for the name of the package what do I put in the end??
[10:05] <thierry_> is it the library name?
[10:05] <sistpoty> thierry_: you mean libfxscintilla or a different one?
[10:05] <thierry_> libfxscintilla
[10:05] <sistpoty> thierry_: oh, daemon@poleboy.de is me ;)
[10:06] <thierry_> ho ok!
[10:06] <thierry_> by the way the libtool thing is needed, it's a change the upstream author sent me
[10:06] <sistpoty> thierry_: ah, k
[10:06] <thierry_> sistpoty : so what should be the name of the package? libfxscintilla17 ?
[10:07] <sistpoty> thierry_: I suggest you name the _source_ package libfxscintilla (w.o. soname and w.o. version)
[10:07] <sistpoty> thierry_: thus you won't end up getting new sourcepackage names with every new upstream version
[10:07] <thierry_> ok, are you a MOTU? If yes, I'd like you to delete the bad names from the REVU list when I upload the new one
[10:08] <sistpoty> thierry_: I am... will do that
[10:09] <sistpoty> thierry_: if you would need to have more than one version of the same (source)package in the archives (e.g. many packages building against it and ftbfs with new version) you could append a version number to the sourcepackage
[10:09] <thierry_> :) thanks, in about 10 minutes I'll upload everything, and should be able to advocate it
[10:09] <Ubugtu> An error has occurred.
[10:09] <sistpoty> thierry_: the generated library package should match the soname however...
[10:10] <sistpoty> thierry_: maybe you want to take a look at the library packaging guide, if s.th. is not clear: http://www.netfort.gr.jp/~dancer/column/libpkg-guide/libpkg-guide.html
[10:10] <thierry_> so in control I put Package: libfxscintilla17 ?
[10:11] <sistpoty> thierry_: yes
[10:16] <thierry_> and the changelog is also libfxscintilla17?
[10:17] <sistpoty> no, the changelog refers to the sourcepackage
[10:17] <thierry_> k
[10:17] <thierry_> last thing : in the install file I have "scintilla/include/*.h  /usr/include/libfxscintilla" and "include/*.h  /usr/include/libfxscintilla"
[10:17] <thierry_> do I put libfxscintilla or libfxscintilla17 ?
[10:18] <tseng> er
[10:18] <tseng> libfxscintilla-dev ?
[10:19] <tseng> http://www.netfort.gr.jp/~dancer/column/libpkg-guide/libpkg-guide.html
[10:20] <sistpoty> thierry_: you mean where to put the include files inside the -dev package?
[10:22] <sistpoty> thierry_: name the directory like upstream does (but it should be in /usr/include/)
[10:23] <sistpoty> thierry_: actually like upstream installs it ;)
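sistpoty's naming advice, condensed into a hypothetical debian/control skeleton — the soname 17 comes from the discussion above, everything else is illustrative:

```
Source: libfxscintilla

Package: libfxscintilla17
Architecture: any
Description: runtime library (binary package name tracks the soname)

Package: libfxscintilla-dev
Architecture: any
Depends: libfxscintilla17 (= ${binary:Version})
Description: development files (headers under /usr/include/, laid out as upstream installs them)
```

The source package stays unversioned, so a new upstream release only renames the runtime binary package when the soname actually bumps.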
[10:24] <LaserJock> lucas: ping?
[10:25] <sistpoty> Gloubiboulga: are you gauvain pocentek?
[10:25] <Gloubiboulga> sistpoty, yep
[10:25] <Gloubiboulga> thanks for the review :)
[10:25] <sistpoty> hehe
[10:26] <sistpoty> Gloubiboulga: there is no need to change the orig-tarball if it contains a wrong directory name... dpkg-source is quite flexible in this regard ;)
[10:27] <Gloubiboulga> sistpoty, ok, I will take care of this tomorrow, and include the fdl
[10:27] <sistpoty> Gloubiboulga: cool
[10:35] <sistpoty> thierry_: I will review your package tomorrow ;()
[10:35] <sistpoty> ;)
[10:35] <sistpoty> cya
[10:35] <thierry_> goodbye
[10:40] <seth> Seveas, sorry to bother, but $freenode_staffer won't approve the linking of my cloak to this new nick (I changed master nicks so I think it broke the cloak). He said I needed to go through you. (If you could make it just ubuntu/member/seth since I now own the nick 'seth', thanks)
[10:41] <Seveas> seth, will do as soon as I speak lilo
[10:51] <Gloubiboulga> time to sleep
[10:51] <Gloubiboulga> bye
[11:03] <lucas> LaserJock: pong
[11:07] <LaserJock> lucas: I was trying to figure out where to get the latest mdt but I found it on https://wiki.ubuntu.com/MultiDistroTools
[11:07] <lucas> oh ok
[11:08] <LaserJock> I got an account on tiber so I was going to try to figure out what all I wanted and what you already have, etc.
[11:08] <lucas> ok
[11:08] <lucas> the best way to go is probably to work together on improving mdt
[11:08] <lucas> since it already does quite a lot
[11:09] <lucas> I don't mind rewriting some ruby scripts if you don't understand them
[11:09] <LaserJock> right, I just need to think about what exactly I want to do ;-)
[11:09] <lucas> ok
[11:10] <lucas> my next steps were to make the HTML lists more configurable
[11:10] <lucas> I'd like to be able to display columns on demand with javascript
[11:10] <lucas> and add "comments/notes" columns
[11:10] <lucas> so it's easily to keep some notes on some packages
[11:10] <lucas> like "don't merge this, debian's is broken!"
[11:11] <LaserJock> cool
[11:34] <LaserJock> ok, this might be a silly question: if you make a bzr branch out of a directory, can you move that directory somewhere else and have the branch be OK?
[11:35] <lucas> yes
[11:56] <LaserJock> hi crimsun
[11:58] <dholbach> good night guys
[11:58] <LaserJock> cya dholbach
[11:58] <dholbach> bye LaserJock
[12:02] <LaserJock> lucas: still around?