[02:16] qemu-arm-static seems to be acting weird all of a sudden
=== zyga-afk is now known as zyga
[08:15] Morning mythripk
=== hrw|gone is now known as hrw
[08:17] morgen
[08:19] morgen
[08:21] Mornin' all
[08:42] morning lag
=== XorA|gone is now known as XorA
[10:32] lool, btw, my ext2 image probs were caused by the FS running out of inodes
[10:32] formatting with a higher default value should solve it
[10:34] ogra: Any luck?
[10:35] archive is out of sync
[10:35] i'm waiting for it to test the fix i have
[10:35] How do you mean 'out of sync'?
[10:35] Out of sync with what?
[10:35] computer-janitor: Depends: python-fstab (>= 1.2) but it is not installable
[10:35] libmailtools-perl: Depends: libtimedate-perl but it is not installable
[10:35] ogra: it's quite surprising TBH; do you resize the fs at some point?
[10:36] (I saw the chat and agree it's a number-of-inodes problem)
[10:36] lool, nope only later on first boot
[10:36] i create an empty file with dd and format it, then loop mount it and cp -ax
[10:36] that's all i do
[10:37] it might be the way dd creates the file if you use count=0 and seek=<$imagesize>
[10:38] ?
[10:38] i know that creates a file with holes (which i.e. swapon complains about)
=== JamieBen1ett is now known as JamieBennett
=== bjf[afk] is now known as bjf
[14:39] Anybody at GUADEC?
[14:50] lool: nope, blew it off to get work done...
[15:18] Check my logic: the qemu vm issues exist in the -static builds as well; -static + chroot works better for at least partially unknown reasons
[15:19] cwillu_at_work: you mean, with rootstock?
[15:19] rsalveti, yes
[15:20] yep, had a huge debugging day yesterday
[15:20] oh, really?
[15:20] with full vm things are slower, seg faults and hangs
[15:20] too bad I missed it, because the -static is really hurting me right now :)
[15:20] user emulation sucks with programs that request info from /proc
[15:20] like the stupid mono package
[15:20] I'm getting mysterious "method http died" messages, and if I shuffle things around, the mysterious deaths move to pip
[15:21] I've been running fine for months, and then yesterday my images just started failing
[15:21] it's almost as if some update to a package I was installing broke things
[15:22] cwillu_at_work: hm, what package failed?
[15:22] rsalveti, as far as I can tell, no package failed
[15:22] hm
[15:22] it's just that some process will randomly die after aptitude finishes
[15:23] cwillu_at_work: oh, ok
[15:23] (installing packages in the chroot)
[15:23] cwillu_at_work: what distro version are you trying to bootstrap?
[15:23] lucid
[15:23] debootstrap finishes fine
[15:24] cwillu_at_work: I'm planning to change to user mode emulation when running with root, like what you're doing, and also add native arm support
[15:24] I could push things out to first boot, but I'd really prefer not to
[15:24] today, I mean
[15:24] which, rootstock?
[15:24] yep, that sucks
[15:24] cwillu_at_work: yep
[15:24] sec
[15:25] full vm doesn't work, lots of bugs, and user mode emulation works fine for most of the cases
[15:25] then if you still can't create the rootfs, do it on arm
[15:27] so, yesterday, did you figure anything out re: triggering it?
[15:29] How much of the vm issues can be attributed to issues with the kernel targets? Might we just be seeing something odd there, especially with -updates, where things could be loosely tested for VM targets.
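The image-creation sequence ogra describes above (a sparse file made with dd, formatted, loop-mounted, then filled with cp -ax), combined with lool's suggestion of a higher inode count, might look roughly like the sketch below. The sizes, paths, and -N value are illustrative and not taken from the log.

    # Create a sparse image file, as described (count=0 plus seek makes a file with holes).
    dd if=/dev/zero of=rootfs.img bs=1M count=0 seek=1024
    # Format as ext2 with a larger inode count so a file-heavy rootfs doesn't run out of inodes.
    mkfs.ext2 -F -N 131072 rootfs.img
    # Loop-mount and copy the rootfs in.
    mkdir -p /mnt/img
    mount -o loop rootfs.img /mnt/img
    cp -ax /path/to/rootfs/. /mnt/img/
    umount /mnt/img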
[15:29] chroot isn't using the arm kernel though
[15:29] yep, just full vm
[15:29] persia: with full vm I'm getting the same behavior with a different kernel
[15:29] but that's something I haven't checked: whether I'm using -updates as my source
[15:30] cwillu_at_work: first, if you use maverick's qemu, you'll get the unsupported syscall for pselect again
[15:30] and a huge backlog
[15:30] I don't follow
[15:30] then if you install anything related to mono, it'll hang
[15:31] cwillu_at_work: this was fixed for lucid, but we have a regression for maverick
[15:31] I'm not targeting maverick :p
[15:31] apt-get uses pselect, and this syscall is implemented in lucid
[15:31] happens if you're using maverick as the host :-)
[15:31] not doing that either
[15:31] cwillu_at_work: also, I get a seg fault while installing humanity-icon-theme
[15:32] * cwillu_at_work repeats himself:
[15:32] same package set worked fine a week ago :)
[15:32] The lucid case should be fairly different from the maverick case, but the lucid VM kernels come from the linux source package, which doesn't see much careful testing on updates except for i386 and amd64, usually.
[15:33] persia, we're not using the vm kernels at all though
[15:33] persia, qemu-arm-static doesn't require one
[15:33] cwillu_at_work: but for me rootstock is working fine for most of the basic cases
[15:33] cwillu_at_work: what packages are you requesting rootstock to install?
[15:33] rsalveti, it was for me too, up until a week ago :)
[15:33] sec
[15:34] I'm just going to pastebin my version
[15:34] ok
[15:34] cwillu_at_work: Sorry then: I thought the issue was comparison of -static to the VM case. Ignore me :)
[15:34] persia, well, it kinda is, but I'm focusing on the parts where -static fails in the same way :)
[15:35] http://pastebin.com/1iynGS4j
[15:35] line 620
[15:36] you should be able to run that locally if you remove the git calls
[15:36] ok, will try
[15:36] just a sec
[15:36] if I don't download the packages first, aptitude will die while installing firefox (i.e., second invocation)
[15:36] as written, it makes it through to the pip calls
[15:37] ugh, which are commented out in this version :p
[15:37] you might need an empty modules.d directory in the working dir
[15:38] given locally cached downloads, it takes about 20-25 minutes to run
[15:48] rsalveti, it seems like anything which touches the network dies after that point
[15:49] if I remove the pip calls and the aptitude update call at the end, it finishes
[15:55] rsalveti, actually, there's an odd thing:
[15:56] I split up the installer into multiple files as you noticed, which are each called in the same chroot, but in different invocations of it
[15:56] (that was done to try to isolate things after they went weird yesterday)
[16:00] ....
[16:00] hmm, could it be that the other arm chroot I had open to build packages was breaking things?
[16:03] That should have no effect. I've routinely had multiples open (via schroot) without any apparent effect. Mind you, that sample may not be large enough to prove a negative.
[16:03] persia, arm chroots?
[16:04] arm-on-amd64 foreign schroots
[16:04] okay
[16:04] Err, armel-on-amd64
[16:05] yep
[16:06] well, I'm rerunning without the other chroots open
[16:06] I'd run that test a few times, as there's supposed to be some separation.
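For readers following along: the "-static + chroot" approach being compared here (as opposed to booting a full ARM VM) is user-mode emulation via qemu-arm-static and binfmt_misc. A minimal sketch, assuming the host already has qemu-arm-static and binfmt support installed; the suite, architecture, and paths are illustrative:

    # Bootstrap an armel rootfs in two stages, since the first stage runs on the host.
    debootstrap --foreign --arch=armel lucid ./rootfs http://ports.ubuntu.com/ubuntu-ports
    # Copy the static qemu binary into the chroot so binfmt_misc finds the interpreter there.
    cp /usr/bin/qemu-arm-static ./rootfs/usr/bin/
    # Finish the bootstrap and run package operations under user-mode emulation.
    chroot ./rootfs /debootstrap/debootstrap --second-stage
    chroot ./rootfs apt-get update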
If you can demonstrate a convincing effect, then we clearly need to do something more advanced with LXC
[16:07] cwillu_at_work: sorry, will look at it now, was doing other things
[16:07] np
[16:08] persia, I've run 40-50 builds in the last day, and hundreds over the last few months
[16:08] I haven't established that the extra chroot was the cause though, if that's what you're asking
[16:08] but whatever changed is consistent
[16:09] cwillu_at_work: I figured, but I doubt you have data on which of them were run with a simultaneous build chroot active (although I'd be happy to know otherwise).
[16:09] I could figure it out (both have start and end timelogs)
[16:09] Are you targeting -updates? I really think that's the most likely source of regression.
[16:09] I checked, I'm not
[16:10] -security ?
[16:10] take a look at the pastebin I posted
[16:10] How about on the host?
[16:10] I haven't applied updates this week yet
[16:10] at it worked on friday
[16:10] s/at/and/
[16:10] Ugh. phase-of-the-moon problem :(
[16:10] :)
[16:11] MIRROR="http://repository:3142/ports.ubuntu.com/ubuntu-ports"
[16:11] REAL_MIRROR="http://ports.ubuntu.com/ubuntu-ports"
[16:11] COMPONENTS="main universe"
=== asac_ is now known as asac
[16:11] Right. That should be the same as it was at release.
[16:12] the reason I mention the other chroot isn't so much that I had builds running at the same time (that was one of the earlier things I checked), but rather that I never closed the chroot itself
[16:12] that's the test I'm running right now
[16:12] That really shouldn't have an effect
[16:13] you've said this :)
[16:13] I'll know in ten minutes
=== fta_ is now known as fta
[16:15] Well, a simultaneously active chroot could have an effect if the chroot boundaries are insufficient, leading to a need to do more with LXC, but an inactive chroot is about the same whether chroot() has been called on it or not.
[16:18] it had /proc, /sys, /dev and so forth mounted inside it
[16:19] But no files open, right?
[16:19] whatever a shell would have, yes
[16:19] here's the thing though:
[16:19] (have you looked at the pastebin yet? :p)
[16:19] I wouldn't expect a shell to have enough open to make a difference, but maybe
[16:19] I split the script that runs in the chroot into four pieces
[16:19] yes
[16:19] to debug this
[16:19] each script is run in a separate chroot, sequentially
[16:20] Now, I can rerun rootstock, and it'll get up to the same point each time (i.e., first download works, next thing to touch the network dies)
[16:20] but... the next thing to touch the network is in a different chroot
[16:20] and it still dies
[16:20] hm, weird
[16:20] ...even though re-running rootstock doesn't
[16:21] I'm pretty sure this will reduce to a config change that I forgot I made or something silly like that, but even so, I don't think I'm doing anything that _should_ be broken :)
[16:21] Very odd.
[16:22] At least it's isolated enough that it can be debugged, so once it's known, it can be made to never happen.
[16:23] I'm secretly hoping that this is the same trigger as the grief in qemu, but I'm not sure how that plays into what you said earlier, rsalveti, about lucid patches
[16:24] moments away from knowing
[16:24] nope, that wasn't it :p
[16:24] damn
[16:25] * persia is glad the LXC integration isn't actually required, as that has looked painful the last few investigations
[16:25] LXC?
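The staged setup cwillu_at_work describes (four scripts, each run in its own chroot invocation, with /proc, /sys and /dev mounted) might look roughly like this; the script names and paths are hypothetical:

    # Run each installer stage in a fresh chroot invocation, mounting the
    # pseudo-filesystems first and unmounting them afterwards.
    for stage in 01-base.sh 02-desktop.sh 03-extras.sh 04-cleanup.sh; do
        mount -t proc proc ./rootfs/proc
        mount -t sysfs sys ./rootfs/sys
        mount --bind /dev ./rootfs/dev
        cp "$stage" ./rootfs/tmp/
        chroot ./rootfs /bin/sh "/tmp/$stage"
        umount ./rootfs/dev ./rootfs/sys ./rootfs/proc
    done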
[16:25] cwillu_at_work: I'm running it here, will let you know if it worked or not
[16:25] http://lxc.sourceforge.net/
[16:25] k
[16:26] rsalveti, I don't think there's too many hardcoded dependencies on my environment
[16:26] Basically, one can create even more segregation than with a regular chroot, which we haven't (quite) needed for anything yet, but I keep expecting it when people start talking about issues with multiple simultaneous chroots.
[16:26] cwillu_at_work: I removed most of the stuff I could easily identify
[16:26] ah, k
[16:26] mirror, rsync, etc
[16:27] the rsync might be of interest
[16:29] (of modules.d, at least)
[16:35] pulling up a shell inside the chroot after it starts dying
[16:35] I'd like to confirm that it's network related activity
=== fta_ is now known as fta
[16:46] cwillu_at_work: yep, worked fine
[16:54] okay, bash prompt up
[16:54] yep, definitely something weird on this box
[16:58] ifconfig eth0 shows information, ifconfig dies with ": error fetching interface information: Device not found"
[16:58] with no indication of which device it's looking for
[17:00] host repository
[17:00] qemu: Unsupported syscall: 250
[17:00] errno2result.c:111: unable to convert errno to isc_result: 38: Function not implemented
[17:00] socket.c:3851: epoll_create failed: Function not implemented
[17:00] /usr/bin/host: isc_socketmgr_create: unexpected error
=== amitk is now known as amitk-afk
[17:03] That's exceedingly annoying. Is that unique to one install, or replicable?
=== hrw is now known as hrw|gone
[17:05] not sure yet; copying the files over to my desktop to try it there
=== zyga is now known as zyga-afk
[17:10] strace doesn't work :D
[17:11] strace-in-chroot or strace-on-qemu?
[17:11] chroot
[17:11] qemu can't emulate ptrace right now
[17:11] and it would probably be hard
[17:11] which is the package for binfmt?
[17:11] Well, the issue is in qemu, if you're getting "Function not implemented". Try stracing that.
[17:12] binfmt-support?
[17:12] that's the base, but it's pluggable.
[17:12] You want to edit the binfmt entry for armel binaries to call strace, or run host strace attaching to a PID.
=== mturquet1e is now known as mturquette
[17:17] attached
[17:19] will be a moment, hit ctrl-c in the prompt once too many times
=== fta_ is now known as fta
[18:05] * cwillu_at_work cries
=== fta_ is now known as fta
[18:29] persia, did you want an strace of qemu when a "host google.ca" fails?
[18:32] persia, rsalveti, http://pastebin.com/8agEskqT
[18:32] http://pastebin.com/6EfBPFLu is the shell output
[18:50] there's another thing I didn't notice before:
[18:51] my build environment is an unpacked copy of the output of my rootstock
[18:53] which I haven't regenerated in a few weeks
[18:53] same problems in it though
[19:01] er, not quite
[19:01] host dies in the same way, pip and apt seem to work fine :/
[19:16] and, success
[19:16] bouncing everything through a local proxy on 127.0.0.1 works
=== fta_ is now known as fta
=== andy_ is now known as andyj
=== jkridner_ is now known as jkridner
=== fta_ is now known as fta
=== bjf is now known as bjf[food]
=== bjf[food] is now known as bjf
=== fta_ is now known as fta
=== bjf is now known as bjf[afk]
=== fta_ is now known as fta
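persia's suggestion of stracing qemu from the host rather than inside the chroot (where qemu cannot emulate ptrace) would amount to something like the following; the process-name match and output file are illustrative:

    # Find the emulated process and attach strace from the host, following children.
    pgrep -f qemu-arm-static
    strace -f -p <PID> -o qemu.strace   # <PID> taken from the pgrep output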
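The workaround cwillu_at_work lands on is routing traffic through a local proxy on 127.0.0.1. The log does not say how this was wired up; one common way to point apt inside a chroot at such a proxy (the port here is purely illustrative) is:

    # Hypothetical apt proxy configuration inside the chroot.
    echo 'Acquire::http::Proxy "http://127.0.0.1:3142";' > ./rootfs/etc/apt/apt.conf.d/01proxy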