[02:13] <bkerensa> bilal: are you about?
[02:25] <psusi> RAOF, you know lots about X et al., right?  how about glib and main loops?  I'm trying to figure out whether emitting signals goes through the main loop, and whether a background thread can emit a signal and have the main thread actually process it and invoke the callback?
[02:26] <RAOF> What do you mean by "signal" in this case?
[02:26] <RAOF> A gobject signal?
[02:26] <psusi> RAOF, a glib signal? yea
[02:27] <RAOF> IIRC, signal dispatch is handled in the calling thread.
[02:27] <psusi> hrm... darn... I was reading something that made it sound like it got queued and processed in the main loop
[02:27] <RAOF> If you want that to occur on the main thread, I believe the idiom is to g_timeout_add() a callback which raises the signal.
[02:27] <psusi> but I couldn't see how emit figures out what maincontext to queue it to
[02:27] <psusi> or idle_add right?
[02:27] <RAOF> Or idle_add, yeah.
[02:28] <psusi> and I saw a convenience function earlier that does that if the calling thread doesn't already own the context
[02:28] <RAOF> But you probably want timeout_add, with zero timeout.
[02:28] <RAOF> Yeah, you could probably whip that up.
[02:28] <RAOF> You're after the default maincontext / mainloop.
[02:29] <psusi> right... I want to just queue it up to have the main context call it... I was hoping you could just ->emit() and it would do that, but... ;)
[02:29] <RAOF> Well, unless you want to run on a specific mainloop.
[02:29] <RAOF> I don't *think* ->emit has those smarts.
[02:29] <RAOF> But it might be worth a check.
[02:29] <psusi> yea, so theoretically the background thread can make its own context and run its own main loop right?
[02:30] <psusi> or it can acquire the main main loop and service events in it for a while... I have patched gparted to do that so the background thread can pop up a modal dialog box
[02:31] <RAOF> Yes.
[02:31] <psusi> but I'm now trying to figure out how to signal the main thread that it is done and it should perform post processing there and the background thread can exit
[02:32] <psusi> jesus.. bzr branch of glib is up to 320mb and still going?!
[02:32] <RAOF> Hm.  Why does the modal dialog need to be in a thread again?
[02:32] <psusi> RAOF, it uses a background thread to make long running libparted calls so they don't block the main loop... but those calls can have exceptions and when they do, they call an exception handler to ask what to do... gparted used to ignore them so the libparted calls would take default action, i.e. fail...
[02:33] <psusi> I patched it so that it actually handles the exception by popping up a message box asking the user what to do
[02:33] <psusi> i.e. ignore, retry, cancel
[02:34] <RAOF> Hm.  I think I'd do that by making the background thread raise a signal, wait for the mainloop to handle it, then return.
[02:34] <RAOF> Whee, 11.4K commits
[02:35] <psusi> I considered that, but a modal dialog seemed to fit better just calling gdk_threads_enter() and taking over the main loop to run the nested dialog main loop there
[02:35] <psusi> needed far less modification to that existing code too
[02:36] <psusi> but now there's some other code that, right now, runs a nested loop of usleep(), Gtk::Main::events_pending(), Gtk::Main::iteration() until the background thread signals that it is done
[02:37] <psusi> and I'm trying to refactor it to instead spawn the background thread, and return to the main loop, then do the rest in a completion callback when the background thread is done
[02:38] <psusi> and I was hoping I could just refactor the code following the loop into a signal handler and emit the signal in the background thread before it terminates
[02:39] <psusi> jesus, git cloning the linux kernel doesn't download as much as branching glib...
[02:42] <psusi> what the hell?  bzr said it had downloaded like 450mb, but the .bzr directory is only 236mb... the whole branch dir including all checked out sources is only 305m
[02:42] <RAOF> Huh.  What were you branching?  My branch downloaded 40Mb
[02:43] <RAOF> (Of lp:glib)
[02:46] <RAOF> Anyway.  Lunch/errands time.
[02:48] <psusi> RAOF, lp:ubuntu/glib2.0
[02:48] <RAOF> That'll be bigger because it contains all the original tarballs for every release in Ubuntu
[02:48] <RAOF> Or, rather, it contains enough information to generate them.
[05:21] <lifeless> RAOF: also the protocol; if psusi wasn't logged in to LP, it would be http, which has more overhead
[05:22] <RAOF> lifeless: Yeah; I checked lp:ubuntu/glib2.0.  It took at least 200MB before I forgot about it.
[05:22] <RAOF> There are some *serious* savings available for package-branches.
[05:23] <RAOF> A bit more than an order of magnitude, in fact.
[05:24] <RAOF> I wonder why the package-branch is so much bigger, though.
[08:19] <dholbach> good morning
[09:39] <Laney> howdy
[13:34] <Q-FUNK> https://launchpadlibrarian.net/90354168/buildlog_ubuntu-precise-powerpc.smartcardpp_0.3.0-0ubuntu3_FAILEDTOBUILD.txt.gz
[13:35] <Q-FUNK> this seems to be caused by a new cmake (on which I build-depend) which left cmake and cmake-data out of sync.
[13:35] <tumbleweed> yes. Occasionally we have some installability problems
[13:35] <Q-FUNK> how do I re-launch the build later?
[13:36] <tumbleweed> click the retry button
[13:36] <tumbleweed> (on the build record page)
[13:37] <tumbleweed> it looks like the cmake is already published
[13:40] <Q-FUNK> tumbleweed: hopefully.  relaunched.
[13:41] <Q-FUNK> interesting. it seems that my two previous uploads haven't propagated to the archive yet.
[13:41] <Q-FUNK> they show on the package's source page, but apt-get comes back empty handed.
[13:41] <tumbleweed> hanging out in bin-NEW?
[13:42] <Q-FUNK> could be, but they show as published.
[13:42] <tumbleweed> then they should be...
[14:08] <Q-FUNK> how long does it usually take before binaries are pushed in?  relaunching the powerpc build failed again, again because cmake and cmake-data are out of sync.
[14:17] <geser> Q-FUNK: the publisher runs now every 30 minutes
[14:18] <geser> if you want to be sure, you can also check the publishing status for a binary package in LP
[14:21] <geser> cmake-data got published on 2012-01-18 12:33:57 UTC, cmake (powerpc) got published on 2012-01-18 13:33:38 UTC
[14:27] <micahg> udienz: FYI, for stuff like https://launchpad.net/ubuntu/+source/seqan/1.3-1ubuntu1, anything that switches to xz compressed binaries, for precise, you need a Pre-Depends on dpkg (>= 1.15.6~)
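micahg's note corresponds to a field like the following in debian/control (the binary package name here is hypothetical; the versioned Pre-Depends ensures the dpkg doing the unpacking already understands xz-compressed data members):

```
Package: libseqan-dev
Pre-Depends: dpkg (>= 1.15.6~)
```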
[14:35] <Q-FUNK> geser: ok, good to know. thanks for the info.
[14:35] <porthose> so is dirspec in universe or main? https://launchpad.net/ubuntu/+source/dirspec says universe however the rejection mail I just got says "Signer is not permitted to upload to the component 'main'" ???
[14:36] <Laney> https://launchpad.net/ubuntu/+source/dirspec/+publishinghistory
[14:36] <Laney> main
[14:36] <Q-FUNK> what about pushing new binaries?  smartcardpp is marked as published, but its binaries are marked as new.
[14:36] <Q-FUNK> how often does that take place?
[14:36] <Laney> it is a manual process
[14:36] <Laney> (the reviewing)
[14:37] <Laney> sticking stuff in the queue is an automatic part of publishing
[14:38] <Q-FUNK> right, I figured that it was a manual process, but how often does it take place?
[14:38] <porthose> Laney, thx
[14:38] <Laney> whenever someone does it, but not usually more than a couple of days
[14:39] <Q-FUNK> ok
[14:40] <tumbleweed> Q-FUNK: IIRC you have a bunch of interdependent packages. Just upload them all, and let them depwait on each other?
[14:41] <Q-FUNK> tumbleweed: can I do that?  I thought that I would get the dependencies uploaded and approved in order.
[14:41] <tumbleweed> if you're pretty confident that they will work, go for it
[14:41] <Q-FUNK> that's how it goes at debian, so I assumed the same here.
[14:42] <Q-FUNK> ok. good to know. in that case, I might as well upload them all now so that they all get the initial review and upload at the same time.
[14:43] <Laney> you could do the same at debian (and actually better, because bd-uninstallable makes wanna-build a bit smarter than launchpad), just that you'd have to first locally build the stack
[14:44] <Q-FUNK> afaik at debian, one must get the dependencies approved via NEW and uploaded to unstable, in order, before more packages can be built using them.
[14:46] <tumbleweed> in debian, the maintainer uploads binaries
[14:46] <tumbleweed> so the maintainer can upload locally bootstrapped binaries for all the packages in the set
[14:47] <Q-FUNK> right, but won't the buildd fail to build for some architectures because of missing items in the build-dep chain?
[14:49] <tumbleweed> they won't even build, because they are bd-uninstallable
[14:53] <Q-FUNK> ok. good to know.
[15:16] <psusi> is there a way to get from a sigc functor to a GSourceFunc/gpointer pair so you can use it as a glib callback?
[15:48] <udienz> micahg: Right, I'm on it now
[17:37] <l3on> hey guys... I found packages with unmet deps in oneiric.. there are a lot of them → http://paste.ubuntu.com/808815/
[17:37] <l3on> what should we do?
[17:38] <ScottK> l3on: First fix them for precise and then see if it's worth fixing in oneiric.
[17:39] <l3on> ScottK, ok... do I have to open a bug for each of them?
[17:39] <ScottK> l3on: There's no real need to open a bug unless you also have a fix.
[17:39] <l3on> ok :)
[17:40] <ScottK> Things like that come and go for various reasons so maintaining bugs isn't really worth the trouble.
[17:45] <l3on> if I'm not wrong, unmet deps are fixed by appending a "build1" version if there's no Ubuntu change, right?
[17:45] <jtaylor> only if the dependency is fixed by a rebuild
[17:46] <l3on> ah of course, yes... otherwise "ubuntu0.1"
[17:47] <l3on> (if it's not in precise I mean)
[17:48] <ScottK> You need to understand why the dependency is missing.
[17:48] <ScottK> It's not as simple as that.
[17:48] <l3on> ok, let me make sure I understand :)
[17:49] <l3on> Case 0: just a rebuild → "buildX+1"
[17:49] <l3on> Case 1: precise (dev) → "ubuntuX+1"
[17:50] <l3on> Case 2: SRU → "ubuntuX.Y+1" (if it has changes, otherwise "ubuntu0.X+1")
[17:51] <l3on> and if it's a simple rebuild, but it's an SRU?
[17:51] <ScottK> Yes.
[17:57] <micahg> l3on: dch -R for rebuilds, SRUs should have a X.Y version, i.e. build0.1 for an SRU rebuild for a package with no Ubuntu changes
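Putting the cases above together — on my reading of the conventions being discussed, for a hypothetical package at Debian version 1.2-3 — the Ubuntu version strings come out as:

```
1.2-3build1       no-change rebuild in the devel release (dch -R)
1.2-3ubuntu1      first Ubuntu change in the devel release
1.2-3ubuntu1.1    SRU for a package that already has an Ubuntu delta
1.2-3ubuntu0.1    SRU change for a package with no prior Ubuntu delta
1.2-3build0.1     SRU no-change rebuild for a package with no Ubuntu delta
```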
[17:58] <l3on> ah ok, thanks micahg ScottK :)
[18:53] <rubiojr> Good evening Gentlemen
[18:54] <rubiojr> Quick question, to contribute some packages and become a MOTU at some point
[18:54] <rubiojr> do I need to follow the bazaar route?
[18:54] <rubiojr> or is it ok to keep the packages in let's say GH
[18:55] <jtaylor> it's up to you where you hold your package vcs
[18:55] <jtaylor> you don't even need to use a vcs at all, but it's recommended
[18:55] <rubiojr> ah
[18:55] <rubiojr> good to know
[18:55] <rubiojr> thanks jtaylor
[18:56] <jtaylor> also MOTU does not necessarily mean you have to maintain your own packages (I think)
[18:56] <rubiojr> alright
[18:56] <rubiojr> I'll be happy to do that anyways
[18:56] <jtaylor> packages should be contributed to debian if possible, it benefits more users that way
[18:57] <rubiojr> yeah, I got that bit
[18:57] <rubiojr> but if I want the package to be in precise (if possible)
[18:57] <rubiojr> the Debian route doesn't make sense right now
[18:58] <rubiojr> correct?
[18:58] <jtaylor> depends, finding a sponsor in ubuntu can take a long time too
[18:58] <rubiojr> yeah, trying to talk to highvoltage
[18:58] <rubiojr> submitted an app the other day to the Store
[18:58] <rubiojr> and he told me it was probably better to try to push it to universe
[19:00] <rubiojr> so how do you guys build the relation with the sponsor?
[19:00] <rubiojr> following the procedure in the wiki and then waiting for someone to pick up the sponsorship?
[19:01] <jtaylor> pretty much, but that can take forever
[19:01] <jtaylor> finding a fitting packaging team in debian is often a good route
[19:01] <rubiojr> yeah, I know that :)
[19:01] <rubiojr> Ok I see
[19:03] <rubiojr> thanks jtaylor
[19:03] <rubiojr> let's practice some mail bombing
[19:07] <highvoltage> hey runasand
[19:08] <highvoltage> rubiojr: I'm willing to sponsor the package for you, it didn't need a lot of fixing to get into universe
[19:09] <rubiojr> awe cool, that was unexpected :)
[20:01] <psusi> so normally apps ship .po files with translations in them... but when debianized, they are stripped out of the package and stored in giant langpacks, one per language, each containing the translations for all packages, which you install if you want to use that language?
[20:03] <micahg> psusi: that's only for main and select universe packages
[20:03] <micahg> *most of main
[20:04] <psusi> micahg: so if I wanted to run parted in French, I'd have to install... what package?  and that would get me ALL French translations for all packages in main?
[20:05] <micahg> psusi: go to language settings and add french
[20:05] <psusi> I'm on a server atm, so.. what package would that actually install?
[20:06] <micahg> well, language-pack-fr, language-pack-fr-base, and a few others
[20:08] <psusi> so when the build daemons build another package, they strip out the language files and automatically update those language-pack packages?
[20:08] <micahg> pkgbinarymangler does that I think, the files get uploaded to launchpad to be included in the langpacks
[20:12] <psusi> it's a shame that dpkg can't do conditional recommends... then each package could have its own set of translation packages that would automatically be installed if you have both that package and the language metapackage installed... instead of having to get the translations for all packages in bulk...
[20:13] <micahg> psusi: it's done this way so translations can be updated independently of the packages themselves
[20:14] <psusi> micahg: that's why the translations are in a separate package, yes... but there's not a good reason to glob the translations for all packages into one giant langpack
[20:15] <psusi> other than dpkg lacking the ability to figure out that if you have the french metapackage installed, and parted installed, then you probably want the parted-french package
[20:15] <micahg> yes, it is, managing the binaries otherwise would be a nightmare
[20:15] <psusi> what binaries?
[20:16] <micahg> the translation binaries
[20:16] <psusi> why would it be a nightmare?
[20:16] <micahg> you would have 3k binaries built out of each langpack source
[20:17] <psusi> what's wrong with 3k binaries?
[20:18] <micahg> first, that's a lot of extra overhead, you would end up with ~300k more binary packages in the archive
[20:20] <micahg> it just sounds very troublesome
[20:24] <blair> so hdf5 1.8.8 made it to debian unstable, will this make it into 12.04?
[20:26] <Laney> isn't that a transition?
[20:26] <geser> only if someone requests a sync (and has a reason to sync from unstable or waits till it hits testing)
[20:27] <micahg> blair: well, there, there's still a transition that needs to happen: http://release.debian.org/transitions/html/hdf5.html
[20:28] <micahg> if the archs we need are green or almost all green (close to Feature freeze), we can do the sync
[20:28] <Laney> if someone volunteers to shepherd it
[20:29] <micahg> that too
[21:05] <Rhonda> I think I need a FFe soonish. :)
[21:06] <Laney> not until Feb 16th
[21:06] <Rhonda> Just tried to fire up the wiki for looking up the dates, but it isn't loading  …  *grmpf*
[21:06] <Rhonda> local issue though
[23:20] <Rhonda> ah, wait  …
[23:21] <Rhonda> Laney: wesnoth-1.10 will be a new source package, so it is affected by the import freeze, right?  Or would a regular sync request still be possible, for a completely new package?
[23:21] <Rhonda> … "sorta" completely new, actually.
[23:24] <Laney> Rhonda: Everything is affected by the import freeze, but syncs of new packages aren't a particular problem
[23:26] <Rhonda> Wasn't a problem last time for wesnoth-1.8 I seem to remember :)
[23:45] <blair> geser, micahg: I'm pushing for our next OS for 500 desktops to be 12.04, and we have projects that use hdf5; the current Ubuntu version has bugs (and it's older than what we have currently deployed)
[23:45] <jtaylor> how can one add a bug link to http://qa.ubuntuwire.com/ftbfs/?
[23:45] <blair> so i should wait a bit and then could request a sync?
[23:45] <blair> does shepherding the transition need somebody who is a debian committer?
[23:46] <Laney> needs someone who is going to see all of the needed work through
[23:46] <Laney> rebuilds and whatever else needs doing
[23:46] <blair> i mean, ubuntu committer
[23:46] <Laney> no
[23:46] <blair> ok
[23:46] <Laney> you just need to get sponsorship when you need it
[23:47] <Laney> keep an eye and make sure everything gets done
[23:47] <Laney> might be best to watch for a bit and see how it goes in debian first though
[23:51] <Laney> goodnight!
[23:53] <tumbleweed> jtaylor: tag the bug ftbfs
[23:53] <jtaylor> too obvious :)
[23:53] <jtaylor> thx
[23:54] <tumbleweed> submit a merge proposal with a hint? :)