[00:09] <cousteau> hi, I've made a small program that fixes the numeric keypad functionality on laptops. I think it could be of interest for someone else so I made a .deb package, is there anything useful I can do with it?
[02:10] <psusi> something seems really wrong with the way the kernel is handling pipes... tar -cf - > /dev/null takes 15 seconds... tar -cf - | cat > /dev/null takes 3 minutes, 15 seconds
[02:21] <xnox> do the .desktop files for mozilla (e.g. Thunderbird) get translated in Rosetta? or Suspended in Rosetta? or is it manual?
[02:22] <micahg> xnox: manual right now
[02:22] <xnox> ok so I'll get the patch in then =)
[02:23] <xnox> micahg, manual as in the usual mozillateam branches, right?
[02:24] <micahg> xnox: yes, I need to know though if we need a UIFe or FFe for it
[02:24] <nigelb> micahg: my bet is neither
[02:24] <micahg> we have some requests pending and I'd like to commit them if an exception isn't needed
[02:24] <xnox> it's translation of a desktop file and it's not translation freeze yet AFAIK
[02:26] <nigelb> robert_ancell: can you review the patch in bug 501054 and if not going to be integrated add your comments so I can reject the patch?
[02:41] <persia> micahg: If you're just adding translations, but not changing strings, you don't need a UIFe.  If you're changing strings, you need approval from the docteam.
[02:45] <robert_ancell> nigelb, looking now
[02:47] <nigelb> robert_ancell: thank you :)
[02:51] <micahg> persia: so new ones are ok even though it displays something new in a foreign language?
[02:52] <persia> micahg: Yes, because the *meaning* of the string hasn't changed.
[02:52] <micahg> persia: ah, ok, great
[02:52] <xnox> micahg, https://code.edge.launchpad.net/~dmitrij.ledkov/thunderbird/desktop.es/+merge/21824
[02:52] <persia> For example, if a program is based in French, and someone adds a new English translation, that might change the screenshots for English documentation, but few people will complain, because they can now read it in their preferred locale.
[02:53] <persia> *but* if the original French changed and broke all the translations, this annoys the docteam (who has to change) and annoys all the translators (who have to update).
[02:53] <xnox> and annoys all people where docs talk about one thing and it's actually called something else in the software =)
[02:54] <micahg> persia: what about adding to categories in Software Center?
[02:54] <persia> I suspect that's enough of a change that you'd want to contact the docteam.
[02:55] <persia> translations are a special case, and the langpacks get continually updated post-release, which leads me to believe that even post-release translations are acceptable (but take great care, as rebuilding code can change behaviour in unexpected ways).
[03:27] <ebroder> Anybody from ubuntu-release that could look at bug #543888?
[03:54] <lamont> so I burn a CD, and it ejects, and keeps on trying to eject until it says "oh, sorry.  I couldn't eject it - you'll need to do it manually" shortly after I finish labeling the disk that I removed when it initially ejected it... known bug?  and against what do I file it?
[03:56]  * lamont guesses wodim
[04:01] <RAOF> I recall a discussion in #u-kernel (?) about not locking the CD drive during normal use, and that cd writers would need to explicitly lock the drive.
[04:02] <superm1> lamalex, https://lists.ubuntu.com/archives/kernel-team/2010-March/009315.html
[04:02] <superm1> er lamont
[04:02] <RAOF> Maybe wodim or whatever burning software you're using is confused?
[04:04] <lamont> superm1: well, the burn finished fine, it says it's ejecting, ejects, and keeps on trying until it eventually decides that it has failed.  closing the tray during that time results in a re-eject, and still eventually gets to failure
[04:05] <lamont> not particularly happy that the laptop locked up during the do-release-upgrade, either
[04:06] <lamont> RAOF: almost certainly
[04:08] <RAOF> It might be... interesting... to try ejecting the CD while it's being written.  No responsibility for superpowers gained by being hit by CD writing lasers is accepted.
[04:12] <lamont> heh
[04:25] <lamont> I wonder if I can get it to spit out the tray 4 times
[04:39] <TheMuso> Isn't not locking the drive when a disk is mounted unsafe, in terms of copying data from the disk?
[04:40] <TheMuso> although on the other hand, it does make sense.
[04:41] <RAOF> The read will just fail, surely?
[04:43] <persia> I think it's only unsafe for hardware that makes assumptions.
[04:44] <persia> So for hardware that is either open or locked, there will be bugginess.  For everything else, it's probably better.
[04:44] <persia> And I think there's only a small number of raised-lid readers without hardware eject buttons that might act like that.
[06:43] <pitti> Good morning
[06:43] <pitti> StevenK, persia: terribly sorry for the cdbs blunder, and thanks for handling that!
[06:43] <persia> pitti: wgrant deserves most of the credit, really.
[06:43] <StevenK> pitti: You're welcome ;-)
[06:44] <pitti> superm1: checking for dkms sounds good; would that suit your use case, too?
[06:44]  * StevenK wasn't expecting to get dragged into uploading a bunch of stuff on Saturday night
[06:44] <pitti> wgrant: thanks a lot for handling the cdbs breakage!
[06:51] <wgrant> persia orchestrated most of it.
[06:51] <wgrant> Thanks StevenK -- sorry for dragging you away from whatever you were doing.
[07:01] <pitti> lamont: hm, it seems jackfruit's http server hangs forever (noticed from hanging ddeb-retriever, but also reproducible with w3m)
[07:17] <pitti> doko_: hm, does that directory have strange permissions?
[07:18] <pitti> didrocks: so, Q-FUNK just pointed out that gthumb 2.11 is an unstable series, and not fit for release
[07:19] <pitti> directhex: was there a pressing reason to upgrade to it? or should we downgrade to 3:2.10.11-3build1?
[07:19] <pitti> sorry, didrocks ^
[07:19] <directhex> moo?
[07:19] <pitti> directhex: sorry, tab completion error
[07:19] <Q-FUNK> pitti: actually, others that responded to my blog post pointed it out and, now that I'm taking pictures again, I cannot help but notice that it indeed is unstable, as in constant random crashes and features no longer working.
[07:20] <Q-FUNK> pitti: it also seems to be used as a sandbox for testing new UI paradigms.
[07:21] <Q-FUNK> pitti: by the time something as simple as re-saving a JPEG as a PNG makes the application crash, I think that we can safely conclude that this is not fit for an LTS release.
[07:21] <pitti> I agree
[07:24] <Q-FUNK> pitti: just to be sure, I pulled a slightly more recent upstream from debian that supposedly fixes a lot of bugs, but it still doesn't.
[07:49] <pitti> amitk: did you happen to see bug 427925? it seems that pm-powersave-policy's sata link power management doesn't actually work in a lot of cases?
[07:52] <amitk> pitti: I didn't. Thanks for pointing it out. I've passed on my work to Chase Douglas who will be carrying on further work on pm-powersave-policy. I'll point him to it.
[07:53] <pitti> amitk: thanks
[08:05] <didrocks> Q-FUNK: pitti: the older one didn't work at all with the new udev/devicekit layer we have for camera detection. So, it was better than nothing. I saw that debian was going to package a new one (didn't check yet which version) that's a little bit better in terms of stability. We should sync that one
[08:06] <pitti> didrocks: hm, the old one had a patch to gvfs-umount the camera before talking to it through libgphoto
[08:08] <didrocks> pitti: it wasn't working IIRC (I tried it with 3 devices before asking for sync)
[08:10] <pitti> didrocks: hm, then perhaps we should restore that old patch; nothing changed wrt. gvfs/libgphoto since hardy
[08:10] <pitti> udisks etc. is just underlying magic
[08:11] <didrocks> pitti: can do that if you want. I have many crashy devices to test it :)
[08:35] <Q-FUNK> didrocks: import works fine for me.
[08:35] <didrocks> Q-FUNK: tried with 3 different devices, but I can revert the sync (or you can, if you want :))
[08:36] <didrocks> Q-FUNK: just that it was crashing on 2 laptops with 3 devices at home
[08:36] <Q-FUNK> didrocks: it's a bit more complicated than it appears.  we'll need to either introduce an epoch or end up with an ugly version number like pulseaudio.
[08:37] <Q-FUNK> and the minute we introduce an epoch is the day we can no longer sync from debian.
[08:37] <didrocks> Q-FUNK: avoid epoch, use newer-version.is.older-version
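The versioning trick didrocks mentions can be sketched concretely; the version strings below are illustrative (not gthumb's real version numbers), and "+really" is the more common spelling of the same idea:

```shell
# Instead of introducing an epoch (which would block future syncs from
# debian), re-upload the old upstream version under a version string
# that sorts *higher* than the unstable one, e.g. the "+really"
# convention. dpkg can verify the ordering directly:
old_unstable="2.11.2"
downgrade="2.11.2+really2.10.11"

if dpkg --compare-versions "$downgrade" gt "$old_unstable"; then
    echo "downgrade version sorts higher: apt will install it"
fi
# and the next real upstream release (e.g. 2.12) still sorts above it,
# so no epoch is ever needed
```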
[08:37] <Q-FUNK> here, the latest 2.10 works fine on all my hardware.
[08:38] <didrocks> Q-FUNK: maybe I had bad luck with mine but I tried with a camera, a video recorder and a mobile phone…
[08:39] <didrocks> Q-FUNK: I could see them in nautilus, but gthumb didn't import anything
[08:39] <Q-FUNK> didrocks: that's a different bug altogether.   nautilus has incorrect import apps defaults.
[08:39] <Q-FUNK> that was already reported... back in jaunty, IIRC
[08:40] <didrocks> Q-FUNK: hum? I just said I can see the devices in nautilus, but not import using them in gthumb
[08:40] <Q-FUNK> didrocks: yes, because nautilus prevents you from importing, with its default media application settings.
[08:41] <Q-FUNK> its instance of libgphoto2 locks the camera
[08:41] <didrocks> Q-FUNK: oh, so that was the issue; and how do you work around it? do you have to close nautilus (even when daemonized) to import with gthumb?
[08:42] <Q-FUNK> here, getting this to work required four things: 1) pitti's unmount script that shipped with 2.10, 2) changing the media application defaults in nautilus, 3) purging gnome-mount, 4) purging hal.
[08:43] <Q-FUNK> actually, I cannot recall if purging hal was needed or not, but disabling anything else that could access libgphoto2, yes.
[08:44] <didrocks> Q-FUNK: ok, understanding better now. Well, step 0 is to revert gthumb, I'm fine with this if we can workaround the importing from devices. Do you want to do it or that I do it?
[08:45] <Q-FUNK> didrocks: can we work on this later this afternoon?  then we can compare notes about testing this and making possible changes to nautilus gconf keys as needed?
[08:45] <didrocks> Q-FUNK: sure, ping me when you are ready to work on that with me :)
[08:45] <Q-FUNK> didrocks: cheers! :)
[08:45] <didrocks> Q-FUNK: see you ;)
[08:46] <seb128> didrocks, was that discussion about downgrading gthumb?
[08:46] <didrocks> seb128: right
[08:46] <seb128> didrocks, why
[08:46] <seb128> ?
[08:48] <Q-FUNK> seb128: because it's totally unstable
[08:48] <Q-FUNK> and because it's a development version that won't mature into a stable version in time for Lucid.
[08:48] <didrocks> seb128: it's an unstable version, still a little bit crashy but no issue for me in one day. Apparently some people have crashes with it regularly. When I decided to upload the new version, upstream told me that the new stable one would be available really soon (before beta1); unfortunately, they are late
[08:48] <seb128> did you try syncing from debian?
[08:49] <Q-FUNK> seb128: I filed a bug about the sync. needs to be acted upon.
[08:49] <seb128> the crashes will likely go down once synced
[08:49] <seb128> I would stay on the new series
[08:49] <seb128> it will be easier to maintain going forward than an outdated version
[08:51] <Q-FUNK> the outdated version is rock-solid.  much better suited to an LTS.
[08:51] <seb128> it also uses old technologies and has issues
[08:52] <Q-FUNK> it did not have any issues.  it simply never was ported to GIO.
[08:53] <seb128> software with no issues, we don't have any of that
[09:14] <pitti> Q-FUNK, didrocks: hal doesn't access libgphoto, that's fine
[09:15] <pitti> gthumb is supposed to try and gvfs-unmount the libgphoto mount
[09:15] <didrocks> pitti: thanks for the info :)
[09:22] <geser> pitti: Hi, do you have a minute for a question about the doc symlinking code in cdbs? (it's for the gdb FTBFS)
[09:22] <pitti> geser: sure
[09:23] <pitti> geser: that build log looked like the directory wouldn't be accessible or wouldn't be a directory in the first place
[09:23] <pitti> like a broken symlink
[09:23] <geser> pitti: yes, gdb symlinks the whole /usr/share/doc directory
[09:23] <pitti> ah, that'd be it
[09:24] <geser> the doc symlinking code has some guards but they only check debian/$(cdbs_curpkg)/usr/share/doc
[09:24] <geser> isn't there a "/$(cdbs_curpkg)" missing at the end? or did I miss something these tests check?
[09:25] <pitti> geser: "symlinks the whole /usr/share/doc" -> apparently that was done only recently?
[09:25] <pitti> this will lead to trouble either way, though
[09:25] <pitti> dpkg does not replace directories with symlinks during upgrade
[09:25] <pitti> doko_: ^
[09:25] <geser> pitti: yes, these changes got introduced with the recent gdb upload (merge)
[09:25] <pitti> that's why cdbs does not symlink directories, just files
[09:29] <geser> pitti: is there a reason to only check if debian/$(cdbs_curpkg)/usr/share/doc is not a symlink and a real directory and not the directory below it (the one with the doc files for the package)? in what case could usr/share/doc not be a directory?
[09:30] <persia> Never underestimate what is possible as a result of an upstream build.
[09:30] <pitti> geser: you mean the if [ -d debian/$$dep/usr/share/doc ], right?
[09:31] <geser> pitti: yes
[09:31] <pitti> geser: I don't think it was deliberately written that way
[09:31] <pitti> i. e. it could be /$(cdbs_curpkg) too
[09:31] <geser> pitti: no, the one 4 lines above: [ -h debian/$(cdbs_curpkg)/usr/share/doc ]
[09:31] <geser> and the next one: [ ! -d debian/$(cdbs_curpkg)/usr/share/doc ]
[09:32] <pitti> geser: ah, that was introduced for some reason
[09:32] <pitti> geser: when we introduced this, we already stumbled over packages which do such crazy things
[09:32] <pitti> geser: but we could add an additional check for /$(curpkg)
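A hedged sketch of the guard logic geser and pitti are discussing; this is not the actual cdbs makefile code (pitti's real patch is in the paste linked below), just the shape of the existing checks with the extra per-package test added:

```shell
# Before setting up doc symlinks, verify that both usr/share/doc and
# the per-package directory under it are real directories rather than
# symlinks that the upstream build may have left behind (as gdb did by
# symlinking the whole /usr/share/doc directory).
pkg="gdb"   # hypothetical package name
docdir="debian/$pkg/usr/share/doc"

if [ ! -h "$docdir" ] && [ -d "$docdir" ] \
   && [ ! -h "$docdir/$pkg" ] && [ -d "$docdir/$pkg" ]; then
    echo "safe to set up doc symlinks for $pkg"
else
    echo "skipping doc symlinking for $pkg"
fi
```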
[09:33] <tseliot> pitti: do you know why the xorg-driver-fglrx package is still available in the archive? Shall I make it a transitional package?
[09:33] <tseliot> (the package is named "fglrx" now)
[09:34] <pitti> tseliot: hm, wasn't that the official name until now? yes, it'd need a transitional package then, for lucid
[09:34] <tseliot> pitti: ok, I'll take care of it then. Thanks
[09:35] <pitti> geser: you don't happen to have a built gdb tree around, do you?
[09:35] <geser> pitti: no, but could have it in around 10 min
[09:35] <pitti> geser: how does http://paste.ubuntu.com/399192/ look to you?
[09:35] <pitti> geser: oh, I can test-build it myself, too
[09:36] <pitti> geser: I'll reproduce the ftbfs and test this patch then
[09:36] <pitti> thanks for pointing out!
[09:37] <geser> pitti: looks good, I did a similar change to test my idea (but I changed the original one instead of adding a new one)
[09:38] <geser> pitti: and you could add an "i" to my surname in your changelog entry: Bienia
[09:38] <pitti> geser: whoops, sorry; fixed
[09:54] <tseliot> pitti: do you know why dpkg-trigger complains when I call it from the postinst?
[09:54] <tseliot> dpkg-trigger: dpkg-trigger must be called from a maintainer script (or with a --by-package option)
[09:54] <tseliot> I call it with dpkg-trigger gmenucache
[09:55] <pitti> hm, it should be able to figure that out
[09:55] <pitti> apparmor.postinst:        /usr/bin/dpkg-trigger update-initramfs
[09:55] <pitti> that looks very similar, and apparently works
[10:07] <pitti> tseliot: does adding an explicit --by-package fglrx help?
[10:07] <pitti> (or nvidia*, whereever you call it from)
[10:17] <tseliot> pitti: I'll try it, thanks
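A minimal postinst sketch of pitti's suggestion; the trigger name gmenucache and package name fglrx are taken from the conversation, but the script itself is illustrative, not the actual packaging:

```shell
#!/bin/sh
# Hypothetical postinst fragment. When dpkg-trigger cannot infer which
# package is activating the trigger (the error tseliot saw), name it
# explicitly with --by-package.
set -e

case "$1" in
    configure)
        dpkg-trigger --by-package=fglrx gmenucache
        ;;
esac
```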
[10:23] <pitti> geser: confirmed to work, I'll upload that
[10:24] <doko_> pitti: thanks; not a change I introduced myself, hence the question about fixing it in cdbs ...
[10:24] <pitti> doko_: fixed cdbs uploaded
[10:24] <pitti> doko_: right, it should, I was just pointing out that symlinking dirs is brittle
[10:28] <doko_> replacing a dir with a symlink isn't a problem if you know that the package is the only one placing files in this dir
[10:29] <cjwatson> doko_: you do still need a preinst hack
[10:30] <cjwatson> i.e. rm -rf old-directory in the preinst, probably with a version guard, assuming that there are no configuration files in there
[10:33] <geser> there should be no configuration files in /usr/share/doc/$pkg
[10:35] <doko_> cjwatson: really? we'll see with gdb-dev, gdbserver or gdb64, but isn't a package first removed before the new one is unpacked?
[10:35] <doko_> coffee first ...
[10:37] <pitti> I definitely know that dpkg will never replace a symlink from the old version with a directory from the new one; I'm not sure about the other direction (replace dir with symlink)
[10:38] <directhex> nafaik
[10:39] <persia> I believe dpkg unpacks *over* the existing package, rather than removing the package first, preserving any symlinks, etc.
[10:43] <cjwatson> doko_: no, as persia says.  removing the package first would cause other problems
[10:43] <cjwatson> pitti: definitely in neither direction
[10:43] <cjwatson> it's quite deliberate
[10:44] <pitti> cjwatson: yes, it'd make sense, but I haven't tested it myself in that direction
[10:44] <cjwatson> too easy to make mistakes, so you have to explicitly force it by removing the old object in the preinst
[10:44] <pitti> thanks for the heads-up
[10:44] <cjwatson> geser: indeed - I was just trying to make my statement general
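The preinst hack cjwatson describes can be sketched like this; the package name, path, and guard version are illustrative, not taken from the real gdb packaging:

```shell
#!/bin/sh
# Hypothetical preinst fragment: dpkg deliberately refuses to replace a
# directory with a symlink on upgrade (and vice versa), so the old
# directory must be removed explicitly before unpack, guarded by the
# last version that still shipped the real directory.
set -e

case "$1" in
    install|upgrade)
        # $2 is the old version; only act when upgrading from versions
        # that shipped /usr/share/doc/gdb64 as a real directory
        if [ -n "$2" ] && dpkg --compare-versions "$2" lt "7.1-1"; then
            if [ -d /usr/share/doc/gdb64 ] && [ ! -h /usr/share/doc/gdb64 ]; then
                rm -rf /usr/share/doc/gdb64
            fi
        fi
        ;;
esac
```

As geser notes, this is only safe because /usr/share/doc/$pkg never contains configuration files.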
[10:52] <pitti> apw: 2.6.32-17.26 binNEWed, FYI
[10:56] <tseliot> pitti: any ideas as to why I've just received 5 messages about my uploads of fglrx-installer being rejected?
[10:56] <pitti> tseliot: yes, I do
[10:56] <pitti> tseliot: those were the binaries for older uploads
[10:57] <pitti> tseliot: I just kept the ones for the current version, makes review a bit easier
[10:57] <pitti> tseliot: please ignore
[10:57] <tseliot> pitti: ah, ok, I was starting to think that I did something wrong ;)
[11:15] <lamont> pitti: jackfruit is, um, interesting
[11:22] <highvoltage> lamont: hi! If I understood you correctly last week, the ltsp squashfs should appear on the disc around any day now right? or was there something we still needed to do?
[11:23] <lamont> the CD image building side of things needs to actually deliver the file that it now has available
[11:23] <lamont> and that bit is in the "not me" category
[11:25] <siretart> asac: did you have a chance to take a look at bug #542506? - or can you name someone else who can comment on this issue?
[11:28] <lamont> highvoltage: sadly, it would appear that they're not building 'edubuntu', but rather 'edubuntu-dvd' :(
[11:29] <highvoltage> lamont: would you be able to fix it?
[11:30] <lamont> yeah - I'll add that to the dvd target as well, and upload a new livecd-rootfs today
[11:31] <highvoltage> thanks!
[12:14] <sistpoty|work> asac: I don't exactly parse your rationale for bug #544085, can you explain why we really want to have it in for lucid?
[13:24] <asac> sistpoty|work: simply to get upstreams easier adopt this
[13:48] <sistpoty|work> asac: ah, k. you've got a "not" that misled me in the description
[13:48] <sistpoty|work> asac: would a backport be ok as well?
[13:48] <sistpoty|work> (we're past beta1 so I'm very hesitant wrt. new packages)
[13:50] <asac> sistpoty|work: backport wouldn't work as it's not considered in-lucid
[13:52] <ogra> is anyone deeply familiar with dh_make here? according to the manpage -p should override the directory check but apparently it doesn't
[13:53] <sistpoty|work> asac: hm... I guess I'm ok with it but you'll need to find and bribe an archive admin willing to denew it ;)
[13:53] <asac> sistpoty|work: yeah. thats understood
[13:53] <sistpoty|work> :)
[14:08] <lamont> bug 542955
[14:13] <Keybuk> argh
[14:14] <Keybuk> my git-fu is not strong today
[14:14] <Keybuk> lamont: every time I try and do any merge in util-linux, I get conflicts all over the place
[14:14] <lamont> Keybuk: ew.  I'm afk for a few hours, can we chat after that?
[14:15] <lamont> or toss me what you're trying to merge from->to and I'll see about bludgeoning my way through
[14:15] <lamont> I have to rebuild my laptop, because lucid ate it.
[14:15] <lamont> well, to be fair, I fed it
[14:15] <Keybuk> I'm trying to merge ubuntu/master into your stable/v2.17
[14:15] <lamont> in your git tree, and you've pushed the current state?
[14:16] <Keybuk> no, can't push cause just lots of conflicts
[14:17] <lamont> let me rephrase that - if I pull your published tree, will I have what you started with?
[14:17] <Keybuk> yes
[14:17] <lamont> ok
[14:17] <lamont> and afk
[14:17] <Keybuk> I guess it's just a "commit on both sides" issue?
[14:17] <Keybuk> GIT seems to be quite bad at those
[14:17] <Keybuk> (worse than bzr, and that's saying it's shorter than Ronnie Corbett)
[14:18] <Keybuk> aha!
[14:19] <Keybuk> it is just that
[14:19] <amitk> Keybuk: commit on both sides?
[14:22] <Keybuk> amitk: right
[14:22] <Keybuk> the same commit has been applied to both sides
[14:22] <Keybuk> ie. to master and to stable/v2.17
[14:22] <Keybuk> so is in the history of master
[14:22] <Keybuk> and in the history of where I'm merging from
[14:23] <Keybuk> so git is helpfully conflicting it
[14:23] <c_korn> don't you have to rebase in this case ? (just guessing)
[14:24] <amitk> Keybuk: because they applied as different sha ids, I guess?
[14:24] <amitk> c_korn: rebase would still conflict
[14:25] <Keybuk> right
[14:26] <nigelb> pitti: apport seems to not like forms like "GST_DEBUG=*cheese*:3 cheese -v" for the command_output function.  any thoughts on this?
[14:26] <pitti> nigelb: That's not a program invocation, but shell syntax :)
[14:26] <nigelb> pitti: which means I can't do that?
[14:27] <pitti> nigelb: simplest is probably to use ['env', 'GST_DEBUG=*cheese*:3', 'cheese', '-v']
[14:27] <pitti> i. e. call env explicitly
[14:27] <nigelb> pitti: ah, and one more question :) how do I do 'killall cheese' this way?
[14:27] <pitti> nigelb: the more general method is to call ['/bin/sh', '-c', command]
[14:28] <pitti> nigelb: you don't want killall :)
[14:28] <pitti> nigelb: but that doesn't work with the simple command_output any more
[14:28] <nigelb> pitti: the idea is to start cheese in debug mode
[14:28] <nigelb> so if its already running, I want to kill it
[14:28] <pitti> nigelb: use subprocess.Popen(), wait ten seconds (or similar), and then call .kill()
[14:28] <pitti> nigelb: hm, that sounds a bit ... unexpected; it might be "data loss" in some cases?
[14:29] <Keybuk> pitti, nigelb: even Upstart calls /bin/sh -c "command"
[14:29] <pitti> nigelb: you could ask for it interactively
[14:29] <nigelb> pitti: oh, wait I'm not clear
[14:29] <pitti> nigelb: i. e. ui.yesno("Blabla rationale need to restart cheese in debug mode")
[14:29] <nigelb> The idea is, if the user calls Help > Report bug and I want to start in debug mode, cheese needs to restart
[14:29] <pitti> nigelb: sure
[14:30] <pitti> nigelb: but you shouldn't "just do" this without warning, I think
[14:30] <nigelb> the warning is there
[14:30] <nigelb> but the code to do it isn't working yet ;)
[14:30] <pitti> nigelb: well, unless there's no way to have unsaved changes in cheese (I'm not that familiar with it TBH)
[14:30] <nigelb> so ['/bin/sh', '-c', 'killall cheese'] would work?
[14:30] <pitti> no
[14:30] <pitti> nigelb: just ['killall', 'cheese']
[14:30] <pitti> that's a proper command
[14:30] <nigelb> that didn't work
[14:31] <pitti> nigelb: but again, that's not what command_output is meant to be used for. Use subprocess.call() for that
[14:31] <nigelb> pitti: ah, I will try that :)
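Putting pitti's suggestions together, a hedged sketch of what the cheese apport hook might look like; add_info() follows the usual apport hook shape, but the report key, timeout, and prompt text are illustrative, not the actual hook:

```python
# Hypothetical apport hook sketch for cheese.
# - command_output() takes an argv list, not shell syntax, so the
#   environment variable goes through env(1) instead
# - stopping/restarting the app uses subprocess, not command_output
import subprocess
import time


def add_info(report, ui=None):
    # ask before restarting the app, as pitti advises
    if ui and not ui.yesno("Cheese needs to be restarted in debug mode "
                           "to collect GStreamer logs. Continue?"):
        return

    # stop any running instance (subprocess.call, not command_output)
    subprocess.call(['killall', 'cheese'])

    # relaunch with GST_DEBUG set via env(1), capture output for a while,
    # then kill it (pitti's Popen/sleep/kill suggestion)
    proc = subprocess.Popen(
        ['env', 'GST_DEBUG=*cheese*:3', 'cheese', '-v'],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    time.sleep(10)
    proc.kill()
    report['CheeseDebugLog'] = proc.communicate()[0]
```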
[14:32] <pitti> superm1: http://bazaar.launchpad.net/%7Eubuntu-core-dev/jockey/ubuntu/revision/400 -> ok for you?
[14:32] <pitti> superm1: oh, nice round commit :)
[14:32] <nigelb> Keybuk: thank you too :)
[14:45] <pitti> fabbione: Padre! how are you?
[14:45] <fabbione> pitti: wrong question today... just came back with a 28-hour delay on my flight
[14:46] <fabbione> need to settle down the kids
[14:46] <pitti> urgh, that sucks
[14:46] <fabbione> pitti: sorry, I'd love to talk but I am sort of busy :)
[14:46] <pitti> fabbione: np, good luck with settling in!
[14:47] <ccheney`> how do you tell dpkg to unapply patches in an extracted fmt 3.0 source?
[14:48] <pitti> quilt pop -a ?
[14:48] <directhe`> RAOF, ping. can you look at kamujin's super ghetto gnome-keyring-sharp replacement & see if it provides what f-spot needs?
[14:48] <ccheney`> pitti: ah thanks
[14:48] <pitti> ccheney`: although it's not immediately visible, it should still be quilt underneath
[14:48] <ccheney`> pitti: hmm quilt isn't installed does dpkg have an internal version too?
[14:48] <pitti> ccheney`: yes, it does
[14:49] <ccheney`> pitti: ah ok so just install quilt to work with it then, thanks
[14:49] <pitti> ccheney`: if you haven't worked with it, you might want to set QUILT_PATCHES=debian/patches
[14:49] <pitti> ccheney`: (it defaults to ./patches)
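A minimal sketch of the quilt workflow pitti describes for an unpacked 3.0 (quilt) tree:

```shell
# Run from the top of an extracted 3.0 (quilt) source package.
# Debian packages keep their patches in debian/patches, while quilt
# defaults to ./patches, hence the environment variable.
export QUILT_PATCHES=debian/patches

quilt pop -a      # unapply all patches
quilt push -a     # reapply them all
quilt applied     # list currently applied patches
```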
[14:53]  * persia` notes that not all Format: 3.0 packages are Format: 3.0 (quilt)
[14:53] <maco> persia`: well thats confusing
[14:53] <persia`> maco: Why?
[14:54] <pitti> persia`: there's also (native), but also somethign else?
[14:54] <maco> i thought all v3 packages used quilt by default
[14:54] <persia`> No.
[14:54] <persia`> Lots of packages use patch systems just like Format: 1.0
[14:54] <maco> i thought quilt was a patch system
[14:54] <pitti> AFAIR there was some discussion about "3.0 (bzr)" and "3.0 (git)", but I don't think either works today
[14:54] <persia`> The base means of Format: 3.0 is essentially just a tarball diff.
[14:54] <maco> or do you mean *other* patch systems
[14:55] <maco> pitti: so you cant use quilt and bzr together? i guess that makes sense...
[14:55] <pitti> maco: you can, but it's really ugly
[14:55] <pitti> so far I avoided converting bzr or git maintained packages
[14:56] <pitti> it's confusing
[14:56] <persia`> quilt is 1) a tool for working with sets of patches, 2) a system that can be used as a substrate for a primitive VCS, 3) a patch system for debian format packages, 4) a format specification for patch application in one flavour for debian format 3.0 packages, ...
[14:56] <pitti> git-buildpackage actually DTRT and applies the patches in the build-area
[15:02] <muelli> which packages does ubuntu automatically pull? suggests? Recommends?
[15:02] <cjwatson> pitti: I actually find it works really well for me
[15:03] <pitti> cjwatson: I find it confusing to both apply patches and have them in debian/patches; that will still take a while to get used to for me, I think
[15:03] <cjwatson> pitti: in 3.0 (quilt), the tree generally stays in patched state except when you're actively working on the patches, so I find that tools like 'bzr blame' work much better than they do with old-style patch systems
[15:03] <cjwatson> pitti: definitely a feature as far as I'm concerned :)
[15:04] <cjwatson> pitti: http://paste.ubuntu.com/399335/ for example
[15:05] <cjwatson> it just means that when you commit a change you effectively see it twice in 'bzr diff', but *shrug* doesn't really bother me
[15:05] <cjwatson> maco: openssh is an example of using quilt and bzr together
[15:06] <maco> cjwatson: ok, ill have a look
[15:06] <maco> muelli: depends and recommends. if using apt on command line (dunno about guis) itll mention the existence of suggests
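maco's answer can be checked and overridden from the command line; "somepackage" is a placeholder:

```shell
# Ubuntu's apt installs Depends and Recommends by default and only
# lists Suggests. The default is visible in the apt configuration:
apt-config dump | grep Install-Recommends    # "true" on a default system

# and can be overridden per invocation:
apt-get install --no-install-recommends somepackage   # skip Recommends
apt-get install --install-suggests somepackage        # also pull Suggests
```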
[15:07] <cjwatson> the only real glitch I found was the one I filed as http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=572204
[15:09] <cjwatson> persia`: IME you have to be fairly perverse to use format 3.0 with some other patch system, and the last (and only) time I encountered this I filed a bug and the maintainer said "er, yeah, sorry"
[15:10] <cjwatson> part of the point of format 3.0 was to unify all the patch system mess, so I think using it with an old-style patch system is very definitely Missing The Point
[15:10] <persia`> I'll agree with that, excepting when it's useful to add extra tarballs.
[15:11] <cjwatson> right, didn't say it was the whole point :)
[15:11] <pitti> yay for unifying patch systems indeed
[15:11] <pitti> I tried to package with bzr loom some months ago
[15:12] <cjwatson> bzr loom is still definitely in the "nice idea but experimental" bucket for me, I think
[15:12] <pitti> while that worked pretty well, everyone else will just ruin your tree, though, so doing it for some packages isn't worth the trouble
[15:12] <pitti> but it still feels weird to have to deal with debian/patches/ stuff if we have all that VCS magic already
[15:13] <pitti> especially now that you have to do _both_ (in-tree and separate patch)
[15:13] <cjwatson> I like the principle that patches should be an export format, but I'm not willing to deal with lots of scary experimental VCS technology just for that; having to commit to the original files and debian/patches/ together is IMO a small price to pay for the benefits
[15:13] <directhe`> someone wake me when bzr and multi-tar work nicely together. things like pristine-tar need to deal with multi-tar
[15:15] <ccheney`> gah
[15:23] <superm1> pitti, yes i believe it should help me.  i'll double check today and let you know if it doesnt
[15:32] <Keybuk> Should not be doing an Octopus.
[15:32] <Keybuk> ...
[15:32] <Keybuk> thanks git.
[15:33] <aboSamoor> Keybuk, I read your post on ubuntu forums and I checked the GSoC regarding the boot performance. I talked to Riddle and he forwarded me to talk to you. I am asking for any advices regarding making a proposal for improving boot performance that can satisfy for GSoC proposal.
[15:34] <Keybuk> aboSamoor: what kind of thing did you have in mind?
[15:38] <aboSamoor> Keybuk, on a personal level, I am just annoyed by the frequent re-profiling mechanism. As a power user using pre-release software all the time, I am updating the software all the time. So according to the re-profiling policy I lose much of the ureadahead advantage. Apart from that, I was following the efforts of the boot performance team, and I am interested in going faster than what I have now. My laptop boots in 22 seconds.
[15:39] <siretart> asac: around?
[15:40] <Keybuk> aboSamoor: in terms of reprofiling, what we've been missing is the kernel side of tracking which blocks in the page cache were actually used -- that's a kernel patch we've written but haven't tested yet
[15:41] <Keybuk> in terms of ureadahead performance in of itself, what we need here is the ability to load blocks from disk into the page cache without needing to open a file first -- since the open()s take up a number of seconds
[15:41] <Keybuk> I suspect that's way beyond GSoC level
[15:41] <Keybuk> boot performance > what ways of booting faster can you think of?
[15:42] <cjwatson> oh, come on, lists.gnu.org, archive things faster so that I can have a URL for my patch file
[15:44] <ccheney`> ext4 defrag would be nice for booting faster :)
[15:44] <aboSamoor> Keybuk, I read the post yesterday, and it made my mind busy. I am not a computer engineer, but my work does not deal with PCs usually, more with embedded systems. If you do not mind I would like to ask questions regarding the mechanisms used now, so it helps me to understand better.
[15:44] <ccheney`> hopefully that doesn't eat your data in the process
[15:44] <Keybuk> aboSamoor: of course
[15:45] <Keybuk> ccheney`: would you trust an ext4 defrag that wasn't written by ext4 upstream?
[15:45] <Keybuk> (then again, would you trust an ext4 defrag that *was* :p)
[15:49] <psusi> ARGH!@#  fucking gnu.... I've been starting to think I was going insane because of all of these bizarre performance results I've been seeing with tar while trying to compare it with dump... turns out gnu tar cheats and doesn't bother actually reading the input files and writing them to the output if it detects that the output is /dev/null... it just stats the input files
[15:50] <Keybuk> that seems like a sensible optimisation to me
[15:50] <psusi> no, that's like the old compiler "optimization" that explicitly detected if it was compiling a dhrystone benchmark and "fixed" it to not bother with all of the computations
[15:51] <Keybuk> no, that's "tar is not a benchmark"
[15:51] <psusi> and simply not naming the input file dhrystone.c would defeat the "optimization"
[15:51] <cjwatson> and the reason why tar does that is in ChangeLog.1
[15:51] <cjwatson> "Corrections to speed-up the sizeing pass in Amanda:"
[15:51] <ccheney`> Keybuk: perhaps, ext4 upstream has shown to be somewhat odd in the past :)
[15:52] <psusi> no, but if I want to see how long it takes to read the files, without having the output compete with the read IO, it should do that... not run me around in circles by not doing what I expect when I redirect the output to null; I mean I want the output to go there
[15:52] <Keybuk> ccheney`: "ext4 defrag corrupted my data!" ... "no, you can't have your pony back"
[15:52] <ccheney`> Keybuk: definitely would want it well tested before use, i remember using xfs defrag and it eating all my data in the past
[15:52] <Keybuk> psusi: could you take this to #beingwrongontheinternet plz
[15:52] <Keybuk> psusi: it's not really relevant
[15:52] <psusi> hehe
[15:53] <Keybuk> and we're not going to patch tar ;)
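The behaviour psusi ran into is easy to demonstrate: GNU tar special-cases an archive destination of /dev/null and only stats the inputs (the Amanda sizing shortcut cjwatson quotes from ChangeLog.1). A minimal sketch, assuming GNU tar; the paths are illustrative. Writing the archive to stdout defeats the shortcut even when stdout is later redirected to /dev/null:

```shell
# Set up a small input tree (paths are illustrative).
mkdir -p /tmp/tarbench
head -c 1048576 /dev/zero > /tmp/tarbench/blob

# Shortcut path: tar detects the /dev/null destination and only stats inputs.
tar -cf /dev/null /tmp/tarbench 2>/dev/null

# Honest path: emitting the archive on stdout forces tar to actually read
# the file contents, so the stream is at least as large as the input.
bytes=$(tar -cf - /tmp/tarbench 2>/dev/null | wc -c)
echo "$bytes"
```

On a GNU tar where the shortcut applies, the first command finishes without reading blob's data at all, which is why the timings above differed so wildly.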
[15:54] <psusi> now maybe tonight I can get around to testing defrag packing ureadahead boot files at the start of the disk in order...
[15:55] <asac> siretart: yes on a call, but whats up?
[15:56] <siretart> asac: I wanted to hear your opinion on the libmozjs.so issue. it breaks applications like gxine
[15:56] <siretart> asac: see bug #544085 for details
[15:56] <siretart> err
[15:57] <siretart> make that bug #542506
[16:01] <Keybuk> #	modified:   disk-utils/fsck.minix.c
[16:01] <Keybuk> #	modified:   disk-utils/mkfs.c
[16:01] <Keybuk> ... no I didn't, *you* did
[16:01] <cjwatson> patch system?
[16:02] <Keybuk> cjwatson: just GIT being unhelpful
[16:02] <aboSamoor> sorry, I am a computer engineer; that was a typo. I was reading the latest updates to the post, to avoid redundant questions. Now, you cache all the files that will be read during boot in pack files, storing their names and some attributes. But what about making a binary image of the files that have been read, so we are sure we are reading contiguous data from the hard disk? That would consume 200-300 MB of hard disk space, but that is not a problem on many HDDs.
[16:02] <asac> siretart: what does gxine use js for?
[16:03] <Keybuk> cjwatson: it's having fun by duplicating blocks in all the files
[16:03] <Keybuk> aboSamoor: because there's no way to tell the kernel that the binary image you're loading into memory is really the contents of blocks of other files on the filesystem
[16:03] <Keybuk> aboSamoor: as soon as the real files were needed, the kernel would just re-read them all over again
[16:04] <siretart> asac: you can script it with js
[16:04] <siretart> asac: on the command line with parameter '-c'
[16:05] <cjwatson> Keybuk: hah
[16:05] <Keybuk> cjwatson: when lamont made the 2.17 release, he merged "ahead" of the release on master and made that the release
[16:06] <Keybuk> cjwatson: unfortunately the release was the branch point for 2.17.1 - so now I have the "commits with different sha1s on two different branches that are otherwise the same" problem
[16:06] <cjwatson> Keybuk: this reminds me of doing the parted 2.2-1 release; it took ages to glue all the git history together, and I had to do something like three separate merges to clue git in to what was going on
[16:07] <cjwatson> I did eventually get to the point where it wasn't giving me spurious conflicts, but it wasn't straightforward
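The problem Keybuk and cjwatson describe (the same change committed with different sha1s on two branches) can be tied back together without replaying the duplicated commits, for example with an "ours"-strategy merge. A toy reproduction under assumed paths; this is one common approach, not necessarily what either of them did:

```shell
set -e
rm -rf /tmp/gitdemo && mkdir -p /tmp/gitdemo && cd /tmp/gitdemo
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > f && git add f && git commit -qm base
git branch -M master

# The release branch gets a fix...
git checkout -qb release
echo fix >> f && git commit -qam 'fix (release copy)'

# ...and master gets the same diff as a separate commit (different sha1).
git checkout -q master
echo fix >> f && git commit -qam 'fix (master copy)'

# Tie the histories together; "-s ours" keeps master's tree while
# recording the relationship, so later merges stop producing spurious
# conflicts from the duplicated commits.
git merge -q -s ours release -m 'tie release history back in'
```

After this, `git log --merges` shows the glue commit, and the two lines of history share a merge base going forward.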
[16:10] <aboSamoor> Keybuk, if I am not wrong, there is something called a ramdisk that creates a filesystem in RAM. If we build the filesystem out of the binary image and forward all the files known to be cached so they are read from that ramdisk, I think it will help. Sorry if this looks unrealistic or naive.
[16:11] <Keybuk> aboSamoor: it's the "forward" bit that's missing
[16:11] <Keybuk> all copying the files into a ramdisk will give you is a ramdisk with a copy of the files in it
[16:11] <Keybuk> which is a second copy of the file
[16:11] <Keybuk> so it'll mean you read the file twice into memory
[16:12] <Keybuk> instead of once
[16:14] <aboSamoor> Keybuk, you mean the kernel sees the ramdisk as another hard disk? By "forward" I meant that if we need a file such as /bla/foo.conf, we can use a table that gives us the version stored in memory: /ramdisk/bla/foo.conf
[16:15] <Keybuk> aboSamoor: yes, the kernel literally just sees it as another hard disk
[16:15] <Keybuk> albeit one that is stored in RAM and vanishes when you turn off the power
[16:15] <Keybuk> the kind of "table" you're talking about sounds a *lot* like a union filesystem
[16:16] <Keybuk> those things are very problematic, and there isn't a good one available
[16:16] <Keybuk> they also tend to go in the wrong direction
[16:16] <Keybuk> you'd need things set up in such a way that updates to the file went to the hard drive underneath, and removed the entry from the ram disk
[16:17] <Keybuk> not to mention the difficulty of generating that ram disk
[16:17] <Keybuk> that's quite deep kernel-level engineering
[16:19] <Keybuk> aboSamoor: the thing as well is that the kernel already has a mechanism for storing files from a hard disk in memory
[16:19] <Keybuk> it's used every time you open any file: the kernel fetches bits of it into memory, and it's actually from memory that read() returns
[16:19] <Keybuk> that way if you, or any other process, read that same bit of the file - it's already in memory
[16:19] <Keybuk> this is called the page cache
[16:20] <Keybuk> ideally any solution should simply pre-populate the page cache
[16:20] <Keybuk> rather than add another layer on top
[16:22] <Keybuk> plus I've seen a lot of ideas like "put all the content of the files we need into one big file and read that"
[16:22] <Keybuk> without fixing the obvious problem that the one big file can get fragmented just as much as the individual files can
[16:22] <Keybuk> so probably doesn't help ;-)
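Keybuk's point is that any warming scheme should target the page cache directly rather than keep a second copy of the data. A crude sketch of pre-population: read each file once and discard the bytes, which leaves the pages cached for the real open()+read() later. The file list here is a stand-in; ureadahead itself works at the block level precisely to avoid these per-file open()s:

```shell
# Build a tiny sample file list; on a real system this would be the
# set of files observed being read during boot.
mkdir -p /tmp/warm
printf 'hello\n' > /tmp/warm/a
printf 'world\n' > /tmp/warm/b
printf '/tmp/warm/a\n/tmp/warm/b\n' > /tmp/warm/files.list

# Reading a file and discarding the data still populates the page cache,
# so a later read of the same blocks is served from memory.
while read -r f; do
    dd if="$f" of=/dev/null bs=1M 2>/dev/null
done < /tmp/warm/files.list
```

A real implementation would use readahead(2) or posix_fadvise() with POSIX_FADV_WILLNEED to queue the reads asynchronously instead of blocking on each file in turn.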
[16:28] <aboSamoor> Keybuk, what if that big file were stored in a special filesystem? I think some databases perform better on filesystems that fit their needs. Maybe the restrictions a filesystem imposes are what led swap to be a separate partition. I think Windows 7 now uses the same approach.
[16:30] <Keybuk> aboSamoor: then you'd not only have to write the kernel VFS patches to redirect lookups to the "cache file", but you'd also have to write an entirely new filesystem
[16:30] <Keybuk> this is going a little over-scope for GSoC
[16:36] <aboSamoor> Keybuk, even if it is within the scope of GSoC, I believe it is more than I can learn and do in the time provided. However, hypothetical scenarios help me understand the problem better. In the boot process, what takes the largest percentage: the boot loader, reading the pack file lists, processing the read files, or hardware initialization?
[16:37] <Keybuk> aboSamoor: reading the files from disk
[16:42] <Keybuk> aboSamoor: well, strictly speaking, most of the time in the boot process is the desktop
[16:42] <Keybuk> aboSamoor: reading files from disk takes about 7s
[16:42] <Keybuk> aboSamoor: core OS boot takes about 6s
[16:43] <Keybuk> aboSamoor: initialising the desktop, drawing it, etc. takes 12s
[16:48] <aboSamoor> Keybuk, it is strange, then, that the people behind Chrome OS plan a boot of less than five seconds if drawing the desktop takes all that time. They can tweak the other factors, since they do not plan to run on general hardware; instead they will optimize the hardware.
[16:48] <Keybuk> aboSamoor: Chrome OS doesn't have a desktop environment!
[16:49] <Keybuk> there are no panels, no file managers, no little services to notify you of interesting things, etc.
[16:49] <Keybuk> it's just a big fullscreen web browser
[16:52] <Keybuk> aboSamoor: having control over the hardware means you can specify a solid-state disk and a specific disk controller
[16:53] <matumba> and kill the slow BIOS crap :P
[16:53] <Keybuk> right, exactly
[17:02] <psusi> will be nice when bios is dead and we're using EFI to boot
[17:03] <psusi> seems that the video drivers though still rely on the video bios for mode setting, which is a problem for EFI
[17:03] <Keybuk> sadly EFI does not appear to be making any difference
[17:03] <psusi> what do you mean?
[17:03] <Keybuk> machines' EFI implementations are just as bad as their BIOS implementations
[17:14] <Keybuk> pitti: what is the package that mounts things under /media nowadays?
[17:15] <Keybuk> I've lost track of where that code has gone :p
[17:25] <ccheney`> NCommander: uploaded the new OOo with your patch :)
[17:25] <ccheney`> rickspencer3: uploaded new OOo that should hopefully fix the bug silbs had
[17:25] <rickspencer3> thanks ccheney`
[17:36] <YokoZar> Is the text highlight color a property of the theme?
[17:36] <YokoZar> (eg orange for checkboxes)
[17:41] <superm1> pitti, yes i can confirm that works properly
[17:47] <pitti> Keybuk: udisks
[17:47] <pitti> superm1: great
[17:52] <Keybuk> pitti: btw, your e2fsprogs upload wasn't committed to the repo
[17:53] <pitti> Keybuk: wouldn't the auto-importer take care of it?
[17:53] <pitti> you mean lp:ubuntu/e2fsprogs?
[17:53] <Keybuk> pitti: the auto-importer can't commit to GIT
[17:53] <Keybuk> no, http://kernel.ubuntu.com/git?p=scott/e2fsprogs.git;a=summary
[17:53] <pitti> hm, could that be added as Vcs-Git: ?
[17:54] <pitti> Keybuk: can I commit to that one?
[17:54] <Keybuk> yeah will do
[17:54] <Keybuk> but it missed this upload
[17:54] <Keybuk> pitti: no
[17:55] <Keybuk> pitti: git doesn't really do shared repos, after all
[17:55] <pitti> ok, sorry about that
[17:55] <Keybuk> and tbh, I prefer not to patch e2fsprogs - it's better to get the patches into upstream and just update
[17:55] <pitti> I didn't even know it was in git
[17:55] <Keybuk> patches in e2fsprogs add pain from Ted Ts'o
[17:59] <Keybuk> pitti: of course, it gets even more confusing with util-linux ;)
[17:59] <Keybuk> since the Vcs-Git header for that is correct
[17:59] <Keybuk> even though it's a Debian one :p
[18:40] <pa> hi
[18:40] <pa> if i dist-upgrade my lucid alpha, will it become lucid beta now?
[18:43] <Pici> !final | pa
[18:43] <Pici> pa: Also, further Lucid support is in #ubuntu+1
[18:44] <pa> ah ok
[18:44] <pa> sorry :)
[18:44] <pa> but thanks
[18:44] <sbeattie> need to do s/Karmic/Lucid/ for ubottu
[18:45] <Pici> sbeattie: darn, I just fixed it, but missed the second one in there... /me fixes now
[18:45] <Pici> Thanks for pointing it out.
[19:09] <nosse1> Hi. What do I need to put up my own private apt repo?
[19:11] <nosse1> We have a handful of applications/packages private to our organization and I want to create an apt repository for our users. Are there any tools/scripts available for creating the necessary files and structure?
[19:12] <cody-somerville> nosse1, This isn't the place to ask about that. However, you might look at http://mirrorer.alioth.debian.org/
[19:13] <nosse1> cody-somerville: Thanks
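For reference, the simplest form of what nosse1 is after is a "trivial" flat repository: drop the .debs in a directory and generate a Packages index with dpkg-scanpackages (from the dpkg-dev package). A sketch with hypothetical paths; reprepro, the tool cody-somerville linked, manages full pool-style repositories with signing:

```shell
# Hypothetical layout; copy your organisation's .deb files into pool/.
mkdir -p /tmp/myrepo/pool

# Generate the flat index (guarded: needs dpkg-dev installed).
# /dev/null is the classic "no override file" argument.
if command -v dpkg-scanpackages >/dev/null 2>&1; then
    ( cd /tmp/myrepo && dpkg-scanpackages pool /dev/null 2>/dev/null \
        | gzip -9 > Packages.gz )
fi

# Clients would then add a sources.list line such as:
#   deb http://apt.example.internal/myrepo ./
```

Serving the directory over HTTP (or even a file:// URL on a shared mount) is enough for apt to use it.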
[19:31] <lamont> cjwatson: out of curiosity, when I wound up bailing out of the install because "install grub" failed totally, what other bits of "finish the install" do I want to have emulated?
[19:32] <lamont> and is it just me, or did we completely eliminate the option of using a partition for a filesystem without reformatting it (in the installer)
[19:32] <lamont> because, frankly, I'd like my old /home back
[19:36] <benkong2> I have the strangest problem but am not sure of the info needed to file a report.
[19:37] <benkong2> When I do sudo apt-get install build-essential I get http://pastebin.com/Wiitjz59
[19:37] <benkong2> The above is the terminal output
[19:38] <bdrung> james_w: regarding sync request bug #539990: can you sync version 1.1.1-2 instead of 1.1.1-1?
[19:38] <bdrung> i updated the title, but forgot the description
[19:39] <geser> benkong2: what error do you get when you also try to install g++?
[19:39] <benkong2> geser, g++: Depends: g++-4.4 (>= 4.4.3-1) but it is not going to be installed
[19:39] <benkong2> Broken Packages
[19:39] <james_w> bdrung: please file a new request
[19:39] <benkong2> The following packages have unmet dependencies:
[19:40] <benkong2> bdrung, is that for me?
[19:40] <benkong2> oops
[19:40] <bdrung> benkong2: no, it was for james_w
[19:40] <benkong2> ok sorry
[19:41] <benkong2> geser, this is a fresh install
[19:42] <sebner> james_w: thx for syncing gpsd but wondering about LP (maintenance ?) as I can't see the update there
[19:43] <james_w> sebner: it's still in progress
[19:43] <sebner> james_w: ah, ic. just wondering because you closed the sync bug an hour ago :)
[19:43] <sebner> ubottu: hahah!
[19:44] <cjwatson> lamont: I'm not sure I'm comfortable with advising on that, can I find out why it broke instead?
[19:45] <cjwatson> lamont: and I certainly wasn't aware that we'd eliminated that option.  did you file a bug?
[19:45] <lamont> cjwatson: I'm still in the process of recovering access to the world
[19:45] <lamont> cjwatson: I was kind of off-script when I got dead.
[19:46] <cjwatson> grub-installer is the last thing before finish-install, but finish-install does various things
[19:46] <james_w> sebner: there were a lot of syncs today, and it's not the quickest process in the world
[19:46] <bdrung> james_w: ok, filed bug #544440
[19:47] <lamont> laptop is encrypted-disk-with-LVM, where I split /home onto its own partition and created a few alternate root partitions.  There isn't an option in d-i to say "just mount the ^%&*(%&%( disk with LUKS, thanks", at least not that I could see - it would happily create a _NEW_ encrypted partition, stomping on the entire thing, but not give me access.
[19:47] <sebner> james_w: kk, np :)
[19:47] <lamont> so... cryptsetup luksOpen, vgscan; vgchange -a y; and back to the partitioner
[19:47] <lamont> not sure if it was because I failed to pick the same name, or because I failed to tell the system how we got the encrypted partition and LVs there, but it was sadly confused and very pissed when grub-installer went to do its thing
[19:48] <lamont> mind you, I got to _THIS_ point because I was trying to save the state of the root partition when the upgrade locked up hard.
[19:48] <lamont> and the first thing the new install did was happily format the /home partition.
[19:48] <lamont> needless to say, this did not leave me too happy with it
[19:49] <lamont> cjwatson: so far, I know that update-initramfs and actually creating the initial user are on the list of finish-install tasks...  just wondering what else is waiting to bite me
[19:50] <lamont> once I finish syncing my firefox profile over, I'll go file a bug or 9
[19:59] <Pretto> is there a way to check if a repository is ok? or if it is failing?
[20:04] <slangasek> Pretto: a repository is ok when you're able to install from it, and not when you aren't? :)
[20:05] <Pretto> slangasek: i mean this, that information is not that useful http://paste.ubuntu.com/399499/
[20:05] <zyga> Pretto: that's someone's ppa
[20:05] <Pretto> suppose i have set up 10 ppa's
[20:06] <slangasek> Pretto: so you're trying to figure out which one that you've set up is broken?
[20:06] <Pretto> how can i know if that ppa is for emesene for example?
[20:07] <Pretto> slangasek: yes, not easy to do because apt will show only the launchpad main address
[20:07] <slangasek> Pretto: you can comment them out one-by-one in /etc/apt/sources.list or /etc/apt/sources.list.d and re-run 'apt-get update' to see if the error goes away; or you can pull the URLs out of the config and browse to them in your browser
[20:08] <cjwatson> IMO that's an apt bug
[20:08] <Pretto> slangasek: i know, but this way i will be guessing
[20:08]  * slangasek nods to cjwatson 
[20:08] <cjwatson> I never thought it was a helpful truncation even before PPAs
[20:08] <Pretto> cjohnston: bingo
[20:08] <slangasek> Pretto: not guessing, just an exhaustive search
[20:08] <cjohnston> :-(
[20:08] <cjwatson> Pretto: your tab completion is buggy
[20:08] <Pretto> cjohnston: sorry :D
[20:09] <cjwatson> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=317576
[20:09] <Pretto> if there's a broken one, apt should say so in a helpful way
[20:10] <Pretto> that's not hard to do
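Until apt stops truncating the hostname, finding the broken PPA by hand comes down to mapping each sources entry to the Release URL that is failing, which is a less blind version of slangasek's exhaustive search. A sketch run against a sample directory; point SRCDIR at /etc/apt/sources.list.d on a real system (the parsing is simplified and ignores option fields):

```shell
SRCDIR=/tmp/aptsrc.d     # stand-in for /etc/apt/sources.list.d
mkdir -p "$SRCDIR"
printf 'deb http://ppa.launchpad.net/foo/ubuntu lucid main\n' \
    > "$SRCDIR/foo-ppa.list"

# Print "list-file -> Release URL" pairs; fetch each URL (e.g. with
# wget --spider) to see which repository is the one returning 404.
for list in "$SRCDIR"/*.list; do
    while read -r type url dist _; do
        case "$type" in deb|deb-src) ;; *) continue ;; esac
        printf '%s -> %s/dists/%s/Release\n' "$list" "$url" "$dist"
    done < "$list"
done
```

This also answers the "which PPA is for emesene?" question: the list file name and the URL path identify the owner and purpose of each archive.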
[20:43] <lamont> so how the heck do I tell compiz to give windows more than 1 pixel borders?
[21:02] <cnd> anyone know of an interface like packages.ubuntu.com for the -proposed archives?
[21:18] <wgrant> cnd: For what purpose? Launchpad is the canonical source of information on the Ubuntu archives, and provides most of the information that p.u.c does.
[21:19] <cnd> wgrant: you're right, I didn't remember that the info I wanted was in lp
[21:19] <cnd> I just wanted the latest changelog
[21:20] <wgrant> We don't currently expose the raw changelog (but that will change in a few weeks). You can approximately see the new entries in each version, though.
[23:03] <NCommander> ccheney: woo, win! We need to test on Debian to make sure the bug is fully fixed there, though
[23:36] <psusi> can someone with a little more knowledge of udev than me take a look at bug #534743 and see if the more complex solution would be appropriate or not, and maybe see about applying one or the other in time for the next lucid beta?  this is a showstopper for many lucid users.
[23:38] <Keybuk> psusi: any particular reason that dmraid opens the block device for writing?
[23:38] <Keybuk> if it opens it read-only, it won't trigger a change surely?
[23:39] <psusi> Keybuk, it doesn't.... what it does is delete the kernel partition table on the activated devices, since they are invalid
[23:39] <psusi> I think the proper way to fix this is to have udev delete those tables as soon as blkid decides the disk is a dmraid member, rather than have dmraid do it on activation; but just removing the |change from the rule worked around it for me
[23:40] <Keybuk> removing change would have other side-effects
[23:40] <Keybuk> and certainly this late in a beta we wouldn't want to make that change
[23:40] <psusi> I figured it likely would, but I couldn't think of any...
[23:40] <psusi> I mean what else generates a change event on a raw disk other than updating the partition table?
[23:40] <Keybuk> various disk controllers do
[23:40] <Keybuk> if you unplug and plug the disk in
[23:40] <Keybuk> for example
[23:41] <psusi> wouldn't that do a remove then add?
[23:41] <Keybuk> no
[23:41] <psusi> and I don't think it handles a surprise removal anyhow ;)
[23:41] <Keybuk> change can mean medium change
[23:41] <Keybuk> change can mean a device that was previously disabled has been activated
[23:42] <psusi> fixed disks.. no medium to change
[23:42] <Keybuk> (ie. at the add point, the device probe returns error codes, and only after the change can it be read)
[23:42] <Keybuk> it might be only fixed disks in your setup
[23:42] <psusi> no, I mean only fixed disks are possible... dmraid is bios raid support.. the bios only deals with fixed disks ;)
[23:43] <Keybuk> ah, well, that's different then
[23:43] <psusi> in an ideal world, you're right, which is why I think the proper solution is to have udev remove the partitions when blkid says it's a raid member rather than having dmraid do it on activation; then it can continue to activate on change or add
[23:44] <psusi> but practically speaking, dmraid disks don't have any circumstances where they emit a change event other than this, AFAICS
[23:44] <psusi> another possibility is to use watershed maybe
[23:45] <psusi> like lvm does
[23:50] <Keybuk> that would work too
[23:51] <psusi> watershed I think would stop the infinite loop, but would still have more activations run than needed
[23:51] <Keybuk> yeah you'd get two for each one