[07:37] <ali1234> does console-kit-daemon really need to use all spare cpu time or is it a bug?
[08:49] <lool> ali1234: it's a bug
[08:49] <lool> ali1234: What environment do you see this in?
[08:50] <ali1234> lool: i'm running on omap850 after using the "root from scratch" script with --seed=lxde,gdm
[08:51] <lool> ali1234: Could it be an option missing from your kernel?
[08:51] <ali1234> yes, it certainly could. but which one?
[08:51] <lool> ali1234: I would strace consolekit to see whether it's failing a syscall or something like that
[08:52] <lool> ogra: Did you ever see 100% used with CK?
[08:52] <lool> I didnt ever see this
[08:52] <ogra> nope
[08:53] <ogra> but i remember the desktop team had a bug about it early in jaunty ... though that was fixed for release
[08:53] <ogra> CK did spawn threads endlessly iirc
[08:53] <ali1234> i saw that bug on launchpad, it seems to be a memory leak issue with lots of ssh connections. i only have one
[08:55] <ali1234> strace just says: restart_syscall(<... resuming interrupted call ...>
[08:55] <ali1234> then nothing
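A lone restart_syscall line just means strace attached while the process was blocked in a syscall; following threads with -f and summarizing the trace usually exposes a busy loop. A hedged sketch — the daemon name, file paths, and the sample trace below are assumptions for illustration, not output from ali1234's machine:

```shell
# Attach for a few seconds, following threads (run on the affected box):
#   strace -f -tt -o /tmp/ck.trace -p "$(pidof console-kit-daemon)"
# Then count which syscall dominates.  An invented trace stands in here so the
# summary step is runnable:
cat > /tmp/ck.trace <<'EOF'
poll([{fd=4, events=POLLIN}], 1, 0) = 0 (Timeout)
poll([{fd=4, events=POLLIN}], 1, 0) = 0 (Timeout)
poll([{fd=4, events=POLLIN}], 1, 0) = 0 (Timeout)
read(5, "", 16)                     = 0
EOF
sed 's/(.*//' /tmp/ck.trace | sort | uniq -c | sort -rn
```

A tight poll loop with a zero timeout like this points at a descriptor that is always "ready" — often the symptom of a kernel feature the daemon expects but the kernel config lacks.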
[08:57] <ogra> hmm, why dont we have a dove image today
[08:58] <ali1234> it almost certainly is a kernel issue since i'm using 2.6.25 and no particular config
[09:02] <ogra> lool, Couldn't open .flint/dbw/postlist.baseA: Too many open files
[09:02] <ogra> Couldn't open .flint/dbw/postlist.baseB: Too many open files
[09:02] <ogra> do you think it's worth trying to give that back ?
[09:03] <ogra> (thats xapian-core)
[09:03] <ogra> seems like a buildd issue
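"Too many open files" is EMFILE, the per-process file-descriptor limit, so before giving the build back it's worth checking what limit the buildd chroot runs under (the 1024 below is just an example value):

```shell
ulimit -n           # current soft limit on open file descriptors
ulimit -S -n 1024   # set the soft limit for this shell (example value)
ulimit -n           # confirm the new limit
```

The soft limit can be raised per-shell up to the hard limit (`ulimit -H -n`); raising the hard limit itself needs root or a limits.conf change.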
[09:13] <lool> hmm
[09:13] <lool> Any other build like this?
[09:13] <ogra> nope
[09:14] <ogra> xapian seems to hammer the disk during its test phase
[09:14] <lool> I guess you could give it back
[09:14] <ogra> we have two idling buildds, i'll just try
[09:14] <lool> ogra: livefs failed to build
[09:14] <lool> http://people.canonical.com/~ubuntu-archive/livefs-build-logs/karmic/ubuntu-dove/latest/livecd-20090901-armel.out
[09:15] <lool>   empathy: Depends: libmissioncontrol-client0 (>= 4.37) but it is not installable
[09:15] <ogra> ah, yeah, empathy
[09:15] <lool> evolution and others
[09:15] <lool>   evolution: Depends: evolution-common (= 2.27.90-0ubuntu2) but it is not going to be installed
[09:15] <ogra> i just gave back evo, it always needs handholding
[09:15] <ogra> seb never waits until the "all" packages are published
[09:16] <ogra> hmm, xapian has the same issue on ia64
[09:16] <ogra> and on sparc
[09:18] <ogra> lool, i dont get why dove wouldnt use an older livefs as all other arches do though
[09:19]  * ogra pbuilds xapian-core locally
[09:24] <lool> Oh that's unrelated, I actually broke the build but I fixed it now
[09:24] <lool> Well I think, I'm test building
[09:28] <lool> poumpoudoum
[09:30] <ogra> poumpoudoum ?
[09:31] <lool> lalala if you wish
[09:31] <lool> ok it built
[09:31] <lool> hmm that's bad
[09:31] <lool> /home/lool/cdimage/debian-cd/tools/pi-makelist-vfat: Couldn't list any drive
[09:31] <ogra> oh
[09:32] <ogra> i thought you were talking about xapian and wondered when you secretly pushed that to the buildds :)
[09:32] <ogra> ETOOMANYTOPICSATONCE
[09:35] <lool> Oh
[09:35] <lool> We never actually cared to support partition=1 in pi-makelist-vfat
[09:35] <lool> only no partition or part 2
[09:35] <ogra> which didnt fail since we never used partition 1 yet
[09:36] <lool> Hmm oddly it doesnt work with partition=1 either
[09:37] <lool> Ok found another bug in that script which went unnoticed
[09:39] <lool> Oh no it was just local to my edit, some typo during copy-paste caused a lowercase or something
[09:40] <lool> and it worked, cool
[09:40] <lool> all deployed, just waiting to confirm I can kick a rebuild
[09:42] <lool> Sure enough http://cdimage.ubuntu.com/ports/daily-live/20090831/karmic-desktop-armel+dove.list was empty
[09:42] <rabeeh> lool: which machine are you using for that build?
[09:43] <rabeeh> lool: are dove machines already part of the buildd cloud?
[09:48] <ogra> rabeeh, nope
[09:48] <lool> rabeeh: No, imx51 based
[09:48] <ogra> we hope to add dove soon though
[09:48] <lool> rabeeh: I dont think we can get z0 stable enough and we dont have enough yX
[09:49] <rabeeh> isn't that incremental work? meaning if you have 1 yx then you can add to buildd, and if you have 2 then you can add 2?
[09:50] <lool> My main worry is silent breakage: it might be ok if a buildd goes completely missing and has to be remote-rebooted, but it's not ok if it breaks in the middle of a build: that could let subtle breakage go unnoticed, and if it actually fails the build we cant distinguish hardware errors from the software ones we need to fix  :-/
[09:50] <rabeeh> i mean you can start with what you have, and get more soon.
[09:50] <ogra> but we dont have a single one we can spare yet
[09:50] <lool> rabeeh: I think we could start like that, but we have no facility to schedule builds on this or that buildd
[09:53] <rabeeh> so, what is the yx board allocation between buildd and developer use?
[09:54] <rabeeh> btw - is it possible to set a board remotely part of a buildd?
[09:55] <rabeeh> for example i have my own y0 board; that is free at nights and weekends; can i set it (behind a firewall) to be part of the buildd cloud?
[09:55] <rabeeh> (i know; this is intermediate solution until you get the amount of boards you have requested)
[09:56] <ogra> we have 7 armel buildds atm ... https://launchpad.net/builders
[09:56] <rabeeh> for example, you can never ssh to the board; but the board can poll and pull build requests from the web
[09:56] <ogra> and as you can see currently two are idle, one is in maintenance
[09:57] <ogra> having more doesnt really help, having faster ones will ...
[09:57] <ogra> which is why we look forward to adding doves
[09:57] <rabeeh> hmm. why?
[09:57] <ogra> because one package is built on one buildd
[09:58] <ogra> if you take openoffice it takes about 36h to build
[09:58] <rabeeh> but they are idle :)
[09:58] <rabeeh> don't they have queues for karmic builds?
[09:58] <ogra> having it build in 24h would be an improvement ... adding one more buildd wouldnt help speed it up
[09:59] <ogra> its not a cluster where adding more power would improve anything for the single build
[10:00] <ogra> adding more machines just lets the queue get empty faster
[10:00] <ogra> but the queue as you can see is empty most of the time ... it might indeed help at peak times when you have tons and tons of uploads
[10:02] <ogra> 7 machines is quite ok ... replacing them incrementally with doves will help a lot, but we currently need the few doves we have for development and QA of the dove image we build
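ogra's point — one package builds on one buildd — can be sketched with a toy model (all numbers invented; the 36 stands in for the openoffice build): extra builders only drain the queue faster, while the longest single build sets a hard floor on latency.

```shell
jobs="36 4 3 2 1 1"   # invented per-package build times in hours
for n in 1 7; do
  echo "$jobs" | tr ' ' '\n' | sort -rn | awk -v n="$n" '
    { t[(NR-1)%n] += $1 }          # longest-first round-robin over n builders
    END { m=0; for (i in t) if (t[i]>m) m=t[i]
          print n " builder(s) finish everything in " m "h" }'
done
```

With one builder everything serializes (47h here); with seven, the wall clock is exactly the 36h job — more builders past that point change nothing for it.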
[10:02] <rabeeh> which of the machines you have here are running on the discovery 78100?
[10:03] <rabeeh> i recall canonical had few of those machines for jaunty armv5 armel distro
[10:03] <rabeeh> my point is can we use armv5+vfp machines for armv6+vfp?
[10:04] <ogra> right, but after the alpha5 release (thursday) we will default to only build for v6
[10:04] <ogra> no, you cant use v5 machines for that
[10:04] <rabeeh> (implementing software instruction abort handlers for the armv6 part)
[10:04] <ogra> thats why we currently are stuck with imx51 machines
[10:05] <ogra> how stable would such a software layer be ... how much slowdown would you get through such an additional layer
[10:06] <rabeeh> well. it's hard to guess; but armv6 has a big SIMD addition, which is never used in integer/floating-point based applications
[10:06] <lool> sorry had an UPS delivery
[10:06] <rabeeh> it is used in packages like ffmpeg; and might only break if the debian/rules runs some test that uses it
[10:06] <lool> and it's broken so I need to deal with it
[10:06] <rabeeh> lool: hope it's not Dove :)
[10:06] <lool> It's not  :)
[10:07] <ogra> my assumption is that if you develop such a layer to run on v5 buildds you invest a lot of time to make sure its stable enough, do QA on it etc
[10:07] <ogra> and i bet once you are done the amount of y0 we want is available :)
[10:08] <ogra> so using imx51 until then serves us just as well without risking stability
[10:09] <rabeeh> right. and i'm looking at the instruction set addition; and seems lots of instructions being added
[10:09] <ogra> right
[10:10] <rabeeh> will take lots of time to implement those exception handlers
[10:10] <rabeeh> ogra: what i can promise is that y1 boards will come with 1GB of DDR :)
[10:10] <ogra> effectively you would end up with something like qemu-arm inbetween the CPU and userspace
[10:10] <ogra> WOW !
[10:10] <rabeeh> right
[10:10] <ogra> that will make a huge improvement !
[10:11] <ogra> the diskspeed already is a huge improvement on the dove vs imx
[10:11] <ogra> adding more ram will make the builders fly :)
[10:12] <rabeeh> personally i think we should have buildd version of dove
[10:12] <rabeeh> overclocked and even more DDR
[10:14] <rabeeh> lunch time :) ttysw
[10:14] <ogra> :)
[10:37] <lool> rabeeh: It's common for packages to call binaries from others; not frequent but common enough that it needs to work well; for instance all GStreamer apps call the equivalent of gst-inspect on startup which will cause all plugins to be loaded
[10:37] <lool> Including ffmpeg
[10:37] <lool> (Now Gst is a special case because it actually traps sigill but still :)
[11:08] <rabeeh> ogra: back to the original question; is it possible to make my board part of the buildd cloud? (inside a firewall)
[11:08] <rabeeh> lool: got it.
[11:15] <ogra> rabeeh, nope, our IS team wouldnt allow that i guess
[17:38] <tlee> Has karmic's arm build system moved to vfp yet?
[17:54] <ogra> after alpha5 we'll move immediately
[17:58] <tlee> Thanks ogra.  How many days (weeks) is that from now?  Also, how long do you think it would take to rebuild all the packages?
[17:59] <ogra> alpha5 images will be released on thursday
[17:59] <ogra> (including a dove live image btw)
[17:59] <ogra> testing appreciated :)
[18:00] <ogra> no idea how long it will take to rebuild the whole archive ... i'd guess a week or two
[18:00] <ogra> lool, any idea ?
[18:04] <lool> ogra: we moved already
[18:04] <lool> to vfp
[18:04] <lool> tlee: we're using v6+vfp at the moment
[18:04] <ogra> oh !
[18:04] <lool> We wont rebuild the archive
[18:05] <ogra> dont we do a complete rebuild anyway during the process ?
[18:05] <lool> What do you mean?
[18:05] <tlee> wow.
[18:06] <ogra> lool, on the std arches we usually do a complete archive rebuild before final
[18:06] <ogra> https://wiki.ubuntu.com/KarmicReleaseSchedule sept 24th
[18:06] <ogra> and oct 15th
[18:09] <lool> ogra: That's out of archive
[18:09] <lool> We dont keep the results
[18:09] <ogra> oh, i thought we did
[18:09] <lool> ogra: So did you get like thousands of packages to download on the week before release?   :)
[18:10] <ogra> why would i if the versions dont change
[18:14] <lool> ogra: Because the binaries arent identical
[18:14] <lool> ogra: It would be a terrible idea to have two .debs with different md5 and contents named the same thing and using the same version; we never do that
[18:15] <ogra> oh, indeed
[18:15] <lool> And it's also why we wont rebuild the whole archive just for armel
[18:15] <ogra> yeah, understood
[18:15] <ogra> thats a bit sad though
[18:16] <lool> Not too much
[18:16] <lool> But it's why the lack of buildd was such a big deal
[18:16] <ogra> indeed
[18:16] <lool> But given the number of uploads before release it's not that bad
[18:16] <ogra> true
[18:16] <ogra> the problem is that we wont see fallout from the new setting for packages that werent uploaded
[18:17] <ogra> we'll likely hit the ftbfs'es for these in karmic+1 only
[18:55] <tlee> Have any of you seen this "BUG: soft lockup - CPU#0 stuck for 61s!"?
[18:56] <ogra> nope, not on armel yet
[18:56] <tlee> Google it and it seems to happen a lot of times in kernel.
[18:56] <ogra> i have seen it on via eden
[18:56] <ogra> and other thin clients i worked with (geode etc)
[18:56] <tlee> ogra: turn on nullmailer and see if you can see what I've seen on karmic.
[18:57] <ogra> oh, wait, looking at my dmesg ...
[18:57] <ogra> http://paste.ubuntu.com/263307/
[18:57] <tlee> I believe I have a 400+ email queue to send out and nullmailer is trying to run every few minutes.  That seems to trigger those errors for me.
[18:57] <ogra> it's currently doing a heavy duty build
[18:58] <tlee> on Karmic.  Quite often.
[18:58] <ogra> NCommander, http://paste.ubuntu.com/263307/
[18:58] <tlee> On Jaunty, I see bash has the issue.
[18:58] <ogra> NCommander, watch your dmesg on the dove under load, appears to be a kernel bug
[18:58] <tlee> On Karmic, seems only the nullmailer.
[18:59] <ogra> well, i'm building a package here
[18:59] <ogra> and it seems to have been triggered by bash
[19:01] <tlee> Do you know any method to help track this bug down?
[19:01] <ogra> i pinged bjf in #ubuntu-kernel
[19:02] <bjf> ogra, I can *sigh* here just as easily
[19:03] <ogra> heh
[19:04] <ogra> note that i use the -201 build still
[19:04] <bjf> ogra, oh! well that's obviously your problem :-)
[19:04] <ogra> not sure if 202 or 203 fix it, but i guess tlee runs something newer
[19:05] <ogra> i cant replace the kernel before webkit is finished, i need the log
[19:05] <ogra> and that might go on for the rest of the night ...
[19:05] <bjf> ogra, I'll boot mine up and see what I see
[19:05] <ogra> i didnt see it before
[19:05] <ogra> seems to be load related
[19:14] <tlee> ogra: what do you mean by 201, 202, 203?
[19:15] <ogra> the kernel builds
[19:15] <ogra> its the ABI version of the packaged kernels we use
[19:16] <bjf> tlee, you will see the version as 2.6.31-203.8, the 203 is the ABI version
[19:16] <ogra> http://ports.ubuntu.com/pool/main/l/linux-mvl-dove/
[19:16] <tlee> I am still back on 2.6.30.  I built 2.6.31 last week, have not tried it yet.
[19:17] <ogra> linux-image-2.6.31-203-dove_2.6.31-203.7_armel.deb is the current package
[19:17] <ogra> (i think there is one pending in the queue though)
[19:18] <bjf> ogra, yes there is one "in the queue", it has some of the latest mvl changes and we are building zImages instead of uImages
[19:18] <ogra> yeah, i saw the thread on the ML
[19:18] <tlee> Ok, I will try to sync it up tomorrow. Have some demo to do today. Don't want to mess with the demo god for now.
[19:19] <ogra> heh
[19:19] <bjf> tlee, I hear that!
[19:20] <tlee> I am in trouble now.  Lightning is striking from above..... run ..... :-)
[19:21] <ogra> haha
[19:30] <NCommander> ogra, yeah, I've seen that. Thought it was an issue with the old kernel I was using
[19:30] <NCommander> It hasn't reappeared since I updated to a 202 kernel, although I haven't been using that as extensively
[19:31] <NCommander> actually
[19:31] <ogra> ok, i'll see what 203 does for me then
[19:31] <NCommander> er
[19:31] <NCommander> 203
[19:31] <NCommander> er, 202 or 203
[19:31] <NCommander> I used to get it trying to spin livecd-rootfs's
[19:31] <NCommander> then it went away when I updated the kernel
[22:45] <tlee> There is no uImage in the linux-image-2.6.31-203-dove_2.6.31-203.7_armel.deb.  How do I make the uboot to load vmlinuz?
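On the uImage question: u-boot of that era only boots uImages, so the usual route is to wrap the shipped vmlinuz yourself with mkimage. A hedged sketch — the package name and the 0x00008000 load/entry addresses are the conventional ARM values, not verified for Dove, and a stand-in file replaces the real kernel so the sketch runs anywhere:

```shell
KVER=2.6.31-203-dove                   # kernel version from the .deb above
printf 'stand-in' > "vmlinuz-$KVER"    # fake kernel for illustration;
                                       # on the board use /boot/vmlinuz-$KVER
if command -v mkimage >/dev/null 2>&1; then
    # -a/-e: load and entry addresses -- board-specific, check your u-boot env
    mkimage -A arm -O linux -T kernel -C none \
            -a 0x00008000 -e 0x00008000 -n "Linux-$KVER" \
            -d "vmlinuz-$KVER" "uImage-$KVER"
else
    echo "mkimage not found; install the u-boot tools package (name varies)"
fi
```

Copy the resulting uImage wherever your u-boot environment loads kernels from; much newer u-boots can also boot a zImage directly with the bootz command.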