=== cbrake is now known as cbrake_away
[07:37] does console-kit-daemon really need to use all spare cpu time or is it a bug?
=== ericm is now known as ericm-afk
[08:49] ali1234: it's a bug
[08:49] ali1234: What environment do you see this in?
[08:50] lool: i'm running on omap850 after using the "root from scratch" script with --seed=lxde,gdm
[08:51] ali1234: Could it be an option missing from your kernel?
[08:51] yes, it certainly could. but which one?
[08:51] ali1234: I would strace consolekit to see whether it's failing a syscall or something like that
[08:52] ogra: Did you ever see 100% used with CK?
[08:52] I didnt ever see this
[08:52] nope
[08:53] but i remember the desktop team had a bug about it early in jaunty ... though that was fixed for release
[08:53] CK did spawn threads endlessly iirc
[08:53] i saw that bug on launchpad, it seems to be a memory leak issue with lots of ssh connections. i only have one
[08:55] strace just says: restart_syscall(<... resuming interrupted call ...>
[08:55] then nothing
[08:57] hmm, why dont we have a dove image today
[08:58] it almost is a kernel issue since i'm using 2.6.25 and no particular config
[08:58] *almost certainly
[09:02] lool, Couldn't open .flint/dbw/postlist.baseA: Too many open files
[09:02] Couldn't open .flint/dbw/postlist.baseB: Too many open files
[09:02] )
[09:02] do you think it's worth trying to give that back ?
[09:03] (thats xapian-core)
[09:03] seems like a buildd issue
[09:13] hmm
[09:13] Any other build like this?
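[Editor's note: a minimal sketch of the strace approach suggested above, assuming a Linux host with strace installed and console-kit-daemon running; the "restart_syscall" output quoted in the log means the traced process is parked in an interrupted slow syscall, not spinning in userspace, so per-thread CPU time is the next thing to check.]

```shell
# Attach to the running daemon and all of its threads, timestamping syscalls.
# (Assumes strace is installed and console-kit-daemon is running.)
pid=$(pidof console-kit-daemon)
strace -f -tt -p "$pid" -o /tmp/ck.strace

# "restart_syscall(<... resuming interrupted call ...>" followed by silence
# means the process is blocked inside the kernel rather than busy-looping;
# in that case look at which thread is actually burning CPU:
top -H -p "$pid"
```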
[09:13] nope
[09:14] xapian seems to hammer the disk during its test phase
[09:14] I guess you could give it back
[09:14] we have two idling buildds, i'll just try
[09:14] ogra: livefs failed to build
[09:14] http://people.canonical.com/~ubuntu-archive/livefs-build-logs/karmic/ubuntu-dove/latest/livecd-20090901-armel.out
[09:15] empathy: Depends: libmissioncontrol-client0 (>= 4.37) but it is not installable
[09:15] ah, yeah, empathy
[09:15] evolution and others
[09:15] evolution: Depends: evolution-common (= 2.27.90-0ubuntu2) but it is not going to be installed
[09:15] i just gave back evo, it always needs handholding
[09:15] seb never waits until the "all" packages are published
[09:16] hmm, xapian has the same issue on ia64
[09:16] and on sparc
[09:18] lool, i dont get why dove wouldnt use an older livefs as all other arches do though
[09:19] * ogra pbuilds xapian-core locally
[09:24] Oh that's unrelated, I actually broke the build but I fixed it now
[09:24] Well I think, I'm test building
[09:28] poumpoudoum
[09:30] poumpoudoum ?
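[Editor's note: the xapian-core test failure quoted above ("Too many open files") is the per-process file-descriptor limit; a quick sketch of inspecting and lifting it for a build session.]

```shell
# Show the file-descriptor limits the current shell (and its children) run under.
ulimit -n     # soft limit -- this is what "Too many open files" is hitting
ulimit -Hn    # hard limit -- the ceiling an unprivileged process may raise to

# Lift the soft limit up to the hard ceiling for this session:
ulimit -n "$(ulimit -Hn)"
```

A buildd would normally set this in its chroot or service configuration rather than interactively, but the same two limits are what matter.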
[09:31] lalala if you wish
[09:31] ok it built
[09:31] hmm that's bad
[09:31] /home/lool/cdimage/debian-cd/tools/pi-makelist-vfat: Couldn't list any drive
[09:31] oh
[09:32] i thought you were talking about xapian and wondered when you secretly pushed that to the buildds :)
[09:32] ETOOMANYTOPICSATONCE
[09:35] Oh
[09:35] We never actually cared to support partition=1 in pi-makelist-vfat
[09:35] only no partition or part 2
[09:35] which didnt fail since we never used partition 1 yet
[09:36] Hmm oddly it doesnt work with partition=1 either
[09:37] Ok found another bug in that script which went unnoticed
[09:39] Oh no it was just local to my edit, some typo during copy-paste caused a lowercase or something
[09:40] and it worked, cool
[09:40] all deployed, just waiting to confirm I can kick a rebuild
[09:42] Sure enough http://cdimage.ubuntu.com/ports/daily-live/20090831/karmic-desktop-armel+dove.list was empty
[09:42] lool: which machine are you using for that build?
[09:43] lool: are dove machines already part of the buildd cloud?
[09:48] rabeeh, nope
[09:48] rabeeh: No, imx51 based
[09:48] we hope to add dove soon though
[09:48] rabeeh: I dont think we can get z0 stable enough and we dont have enough yX
[09:49] isn't that incremental work? meaning if you have 1 yx then you can add it to the buildd, and if you have 2 then you can add 2?
[09:50] My main worry is silent breakage: it might be ok if a buildd goes completely missing and has to be remote rebooted, but it's not ok if it breaks in the middle of a build: it could either cause a subtle breakage to go unnoticed, and if it actually fails the build we cant distinguish hardware errors from software ones which we need to fix :-/
[09:50] i mean you can start with what you have, and get more soon.
[09:50] but we dont have a single one we can spare yet
[09:50] rabeeh: I think we could start like that, but we have no facility to schedule builds on this or that buildd
[09:53] so, what is the yx board allocation of buildd/developer?
[09:54] btw - is it possible to set a board remotely part of a buildd?
[09:55] for example i have my own y0 board; that is free at nights and weekends; can i set it (behind a firewall) to be part of the buildd cloud?
[09:55] (i know; this is an intermediate solution until you get the amount of boards you have requested)
[09:56] we have 7 armel buildds atm ... https://launchpad.net/builders
[09:56] for example, you can never ssh to the board; but the board can poll and pull build requests from the web
[09:56] and as you can see currently two are idle, one is in maintenance
[09:57] having more doesnt really help, having faster ones will ...
[09:57] which is why we look forward to adding doves
[09:57] hmm. why?
[09:57] because one package is built on one buildd
[09:58] if you take openoffice it takes about 36h to build
[09:58] but they are idle :)
[09:58] don't they have queues for karmic builds?
[09:58] having it building in 24h would improve things ... having one more buildd wouldnt help speeding it up
[09:59] its not a cluster where adding more power would improve anything for a single build
[10:00] adding more machines just lets the queue get empty faster
[10:00] but the queue as you can see is empty most of the time ... it might indeed help at high times where you have tons and tons of uploads
[10:02] the amount of 7 machines is quite ok ... replacing them incrementally with doves will help a lot but we currently need the few doves we have for development and QA of the dove image we build
[10:02] which machines you have here are running on discovery 78100?
[10:03] i recall canonical had a few of those machines for the jaunty armv5 armel distro
[10:03] my point is can we use armv5+vfp machines for armv6+vfp?
[10:04] right, but after the alpha5 release (thursday) we will default to only build for v6
[10:04] no, you cant use v5 machines for that
[10:04] (implementing software instruction abort handlers for the armv6 part)
[10:04] thats why we are currently stuck with imx51 machines
[10:05] how stable would such a software layer be ... how much slowdown would you get through such an additional layer
[10:06] well. it's hard to guess; but armv6 has a big SIMD addition; which is never used in integer/floating point based applications
[10:06] sorry had a UPS delivery
[10:06] it is used in packages like ffmpeg; and might only break if the debian/rules calls some test to use that
[10:06] and it's broken so I need to deal with it
[10:06] lool: hope it's not Dove :)
[10:06] It's not :)
[10:07] my assumption is that if you develop such a layer to run on v5 buildds you invest a lot of time to make sure its stable enough, do QA on it etc
[10:07] and i bet once you are done the amount of y0 we want is available :)
[10:08] so using imx51 until that point serves as well without risking stability
[10:09] right. and i'm looking at the instruction set additions; and it seems lots of instructions are being added
[10:09] right
[10:10] it will take lots of time to implement those exception handlers
[10:10] ogra: what i can promise is that y1 boards will come with 1GB of DDR :)
[10:10] effectively you would end up with something like qemu-arm in between the CPU and userspace
[10:10] WOW !
[10:10] right
[10:10] that will make a huge improvement !
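[Editor's note: the "qemu-arm in between the CPU and userspace" idea above is QEMU's user-mode emulation; a hedged sketch of what running a v6 binary on an older host would look like. The binary name and sysroot path are illustrative, not from the log.]

```shell
# User-mode emulation: the whole process runs under qemu-arm, which
# interprets every instruction, including ones the host CPU lacks.
# (Illustrative paths; sysroot location varies by distro/toolchain.)
qemu-arm -cpu arm1176 -L /usr/arm-linux-gnueabi ./hello-v6

# binfmt_misc can register qemu-arm as the interpreter for ARM ELF
# binaries so this happens transparently -- roughly what a v5 buildd
# would need, at a large per-instruction slowdown, which is the
# stability/speed trade-off discussed above.
```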
[10:11] the disk speed already is a huge improvement on the dove vs imx
[10:11] adding more ram will make the builders fly :)
[10:12] personally i think we should have a buildd version of dove
[10:12] overclocked and even more DDR
[10:14] lunch time :) ttysw
[10:14] :)
[10:37] rabeeh: It's common for packages to call binaries from others; not frequent but common enough that it needs to work well; for instance all GStreamer apps call the equivalent of gst-inspect on startup which will cause all plugins to be loaded
[10:37] Including ffmpeg
[10:37] (Now Gst is a special case because it actually traps sigill but still :)
[11:08] ogra: back to the original question; is it possible to put my board as part of the buildd cloud? (inside a firewall)
[11:08] lool: got it.
[11:15] rabeeh, nope, our IS team wouldnt allow that i guess
=== cbrake_away is now known as cbrake
=== bjf-afk is now known as bjf
=== ericm-afk is now known as ericm-Zzz
=== bjf is now known as bjf-afk
[17:38] Has karmic's arm build system moved to vfp yet?
[17:54] after alpha5 we'll move immediately
[17:58] Thanks Ogra. How many days (weeks) is that from now? Also how long do you think it would take to rebuild all the packages?
[17:59] alpha5 images will be released on thursday
[17:59] (including a dove live image btw)
[17:59] testing appreciated :)
[18:00] no idea how long it will take to rebuild the whole archive ... i'd guess a week or two
[18:00] lool, any idea ?
[18:04] ogra: we moved already
[18:04] to vfp
[18:04] tlee: we're using v6+vfp at the moment
[18:04] oh !
[18:04] We wont rebuild the archive
[18:05] dont we do a complete rebuild anyway during the process ?
[18:05] What do you mean?
[18:05] wow.
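[Editor's note: lool's point about every GStreamer app loading all plugins can be seen directly with the era's inspection tool; a sketch assuming GStreamer 0.10 tools are installed. A plugin built with instructions the CPU lacks would fault during exactly this scan.]

```shell
# Enumerating everything forces each plugin .so to be dlopen()ed once,
# so e.g. ffmpeg codec plugins built for a newer ISA would fault here.
gst-inspect-0.10 2>/dev/null | tail -n 3   # summary: plugin/feature counts

# GStreamer's registry scan is the "special case" mentioned above: it
# catches a plugin that crashes while loading and blacklists it instead
# of taking the whole application down.
```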
[18:06] lool, on the std arches we usually do a complete archive rebuild before final
[18:06] https://wiki.ubuntu.com/KarmicReleaseSchedule spt24th
[18:06] *sept
[18:06] and oct 15th
[18:09] ogra: That's out of archive
[18:09] We dont keep the results
[18:09] oh, i thought we did
[18:09] ogra: So did you get like thousands of packages to download on the week before release? :)
[18:10] why would i if the versions dont change
[18:14] ogra: Because the binaries arent identical
[18:14] ogra: It would be a terrible idea to have two .debs with different md5 and contents named the same thing and using the same version; we never do that
[18:15] oh, indeed
[18:15] And it's also why we wont rebuild the whole archive just for armel
[18:15] yeah, understood
[18:15] thats a bit sad though
[18:16] Not too much
[18:16] But it's why the lack of buildds was such a big deal
[18:16] indeed
[18:16] But given the number of uploads before release it's not that bad
[18:16] true
[18:16] the problematic part is that we wont see fallout caused by the setting for packages that werent uploaded
[18:17] we'll likely hit the ftbfs'es for these in karmic+1 only
=== bjf-afk is now known as bjf
[18:55] Have any of you seen this "BUG: soft lockup - CPU#0 stuck for 61s!"?
[18:56] nope, not on armel yet
[18:56] Google it and it seems to happen a lot in the kernel.
[18:56] i have seen it on via eden
[18:56] and other thin clients i worked with (geode etc)
[18:56] ogra: turn on nullmailer and see if you can see what I saw on karmic.
[18:57] oh, wait, looking at my dmesg ...
[18:57] http://paste.ubuntu.com/263307/
[18:57] I believe I have a 400+ email queue to send out and nullmailer is trying to run every few minutes. Seems to trigger those errors for me.
[18:57] it's currently doing a heavy duty build
[18:58] on Karmic. Quite often.
[18:58] NCommander, http://paste.ubuntu.com/263307/
[18:58] On Jaunty, I see that bash has the issue.
[18:58] NCommander, watch your dmesg on the dove under load, appears to be a kernel bug
[18:58] On Karmic, it seems to be only the nullmailer.
[18:59] well, i'm building a package here
[18:59] and it seems to have been triggered by bash
[19:01] Do you know any method to help track this bug down?
[19:01] i pinged bjf in #ubuntu-kernel
[19:02] ogra, I can *sigh* here just as easily
[19:03] heh
[19:04] note that i still use the -201 build
[19:04] ogra, oh! well that's obviously your problem :-)
[19:04] not sure if 202 or 203 fix it, but i guess tlee runs something newer
[19:05] i cant replace the kernel until webkit is finished, i need the log
[19:05] and that might go on for the rest of the nigh ...
[19:05] +t
[19:05] ogra, I'll boot mine up and see what I see
[19:05] i didnt see it before
[19:05] seems to be load related
[19:14] ogra: what do you mean by 201, 202, 203?
[19:15] the kernel builds
[19:15] its the ABI version of the packaged kernels we use
[19:16] tlee, you will see the version as 2.6.31-203.8, the 203 is the ABI version
[19:16] http://ports.ubuntu.com/pool/main/l/linux-mvl-dove/
[19:16] I am still back on 2.6.30. I built 2.6.31 last week, but have not tried it yet.
[19:17] linux-image-2.6.31-203-dove_2.6.31-203.7_armel.deb is the current package
[19:17] (i think there is one pending in the queue though)
[19:18] ogra, yes there is one "in the queue", it has some of the latest mvl changes and we are building zImages instead of uImages
[19:18] yeah, i saw the thread on the ML
[19:18] Ok, I will try to sync it up tomorrow. Have some demo to do today. Don't want to mess with the demo god for now.
[19:19] heh
[19:19] tlee, I hear that!
[19:20] I am in trouble now. Lightning is striking from above..... run ..... :-)
[19:21] haha
[19:30] ogra, yeah, I've seen that.
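[Editor's note: a quick way to check which kernel ABI is installed and running, as discussed above; assumes an Ubuntu/Debian-style system where the ABI number sits between the upstream version and the flavour.]

```shell
# The running kernel's version string; for Ubuntu kernels the number
# after the upstream version is the ABI, e.g. "2.6.31-203-dove".
uname -r

# Compare against installed kernel packages (Debian/Ubuntu only; the
# guard lets this sketch run harmlessly elsewhere):
command -v dpkg >/dev/null &&
    dpkg -l 'linux-image-*' | awk '/^ii/ { print $2, $3 }' || true
```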
Thought it was an issue with the old kernel I was using
[19:30] It hasn't reappeared since I updated to a 202 kernel, although I haven't been using that as extensively
[19:31] actually
[19:31] ok, i'll see what 203 does for me then
[19:31] er
[19:31] 203
[19:31] er, 202 or 203
[19:31] I used to get it trying to spin livecd-rootfs's
[19:31] then it went away when I updated the kernel
[22:45] There is no uImage in linux-image-2.6.31-203-dove_2.6.31-203.7_armel.deb. How do I make u-boot load vmlinuz?
=== bjf is now known as bjf-afk
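[Editor's note: the closing question goes unanswered in the log. The usual approach, hedged: u-boot without zImage support cannot boot a raw vmlinuz directly, so it is wrapped into a uImage with mkimage from u-boot-tools. The load/entry addresses below are common ARM examples, not the actual Dove values; check the board documentation.]

```shell
# Wrap the packaged zImage (vmlinuz) in the legacy uImage header u-boot
# expects. 0x00008000 is a common ARM load/entry address -- illustrative
# only; the right values are board-specific.
mkimage -A arm -O linux -T kernel -C none \
        -a 0x00008000 -e 0x00008000 \
        -n "Ubuntu 2.6.31-203-dove" \
        -d /boot/vmlinuz-2.6.31-203-dove /boot/uImage
```

Alternatively, a recent enough u-boot can boot a zImage directly, which is presumably why the packaging moved from uImages to zImages as mentioned at 19:18.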