[07:54] ogra: So this smells like the overflow if going from 15 -> 16 blocks doesn't work
[09:27] It's not, it's APEX being clobbered by the kernel! :-)
[09:27] ogra: Quick, try Marc's suggestion!
[09:45] lool, if my eyes are open enough :)
[09:45] * ogra yawns and goes to look for coffee
[09:45] ogra: Dude why did you mail debian-arm@?
[09:45] I told you I would mail him and I Cc:ed you
[09:46] whats wrong with asking in multiple places ...
[09:46] It creates duplicate work?
[09:47] and brings more ideas in, in case the solution isnt clear
[09:47] I absolutely hate IRC users coming to multiple channels I'm in and copy-pasting the same question
[09:47] Sorry but it's resource abuse
[09:48] I value the high quality of the responses I've got so far, I don't think Marc will give us much attention if we keep asking at multiple places
[09:48] It's exactly like researching before asking
[09:48] well, to me it wasnt clear it was apex
[09:50] apart from that the next guy having this prob will find it in the ML archive
[09:51] That's an issue I created, but your fix is worse than the disease
[09:52] It's like these cross-posts spanning multiple lists, but it's worse in that these are separate threads
[09:58] well, sorry for that if you feel like that, but i still think it was okayish to have it on a public ML; my question was different and contained other information than yours, but i'll keep away from such things in the future now that i know you dont like it
[09:59] I basically don't want to take any chance to piss off valuable people we rely on (how would we move forward if we lose help from these folks?)
[09:59] And here Marc had to reply to both and said he did
[09:59] Probably he didn't mind much, I'd hate it if we did that again
[10:00] right, i'll be more cautious about that in the future ...
[10:02] Thanks, I appreciate your effort
[10:35] lool, hmm changing CONFIG_KERNEL_LMA=0x00008000 to 0x01000000 means that it will have the identical address CONFIG_RAMDISK_LMA has ...
[10:37] i suspect we need to do some more math with that ... but will test first
[10:37] * ogra goes to rebuild apex
[10:48] lool, well, it doesnt boot ...
[10:48] ... but it gets kernel and ramdisk loaded
[10:49] lool, http://paste.ubuntu.com/116755/
[10:56] * ogra starts shuffling the ramdisk address around
[11:09] hmm, so moving the ramdisk to 0x011FFFE0 lets it panic but the kernel unpacks ...
[11:19] bah, and our kernel stops after uncompressing
[11:19] (using it unswapped, else it doesnt work at all)
[11:46] hrm
[11:48] lool, any idea ? the kernel is 0x00200000 big, so i add that to CONFIG_RAMDISK_LMA (making it 0x01200000); that makes the kernel load but apparently it doesnt find the initramfs and panics now
[11:50] interestingly it seems to find the start of the ramdisk ... "[42949378.630000] RAMDISK: Compressed image found at block 0"
[12:00] ogra: Sorry, didn't follow; do you have an idea of the memory map from when the board powers up down to apex calling into the kernel?
[12:00] I would start by writing that down
[12:01] well, when i move the kernel from 8000 to 1000000 i need to move the ramdisk up by the size of the kernel
[12:01] since the ramdisk used to live at 1000000
[12:03] so the kernel occupies 1000000 to 1200000 (as its 200000 big)
[12:03] so naturally the ramdisk load address needs to be 1200000
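(For readers following along: the address arithmetic ogra works through above boils down to placing the ramdisk right behind the kernel's flash window. The sketch below is purely illustrative, assuming the 2 MiB kernel partition and the addresses quoted in the log; it is not part of the actual apex build.)

```python
# Illustrative sketch of the load-address math from the log above.
# Assumed values: kernel moved from 0x00008000 to 0x01000000, kernel flash
# partition 2 MiB; none of this is the real apex configuration machinery.

KERNEL_LMA = 0x01000000        # new CONFIG_KERNEL_LMA (was 0x00008000)
KERNEL_PART_SIZE = 0x00200000  # "the kernel is 0x00200000 big"

# The ramdisk used to live at 0x01000000; with the kernel now loaded there,
# it has to start right after the kernel's 2 MiB window.
RAMDISK_LMA = KERNEL_LMA + KERNEL_PART_SIZE

assert RAMDISK_LMA == 0x01200000
print(f"CONFIG_KERNEL_LMA=0x{KERNEL_LMA:08X}")
print(f"CONFIG_RAMDISK_LMA=0x{RAMDISK_LMA:08X}")
```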
[12:15] HA ! lool !! got it :) indeed we also need to adjust the max size of the ramdisk, it boots with CONFIG_RAMDISK_SIZE=0x00400000
[12:16] i'll try to bump that up block by block now to find the possible max limit, then we should be done ... only a fixed eb kernel is missing
[12:17] Right, kernel is being loaded at 0x00008000, ramdisk at 0x01000000
[12:18] right, now kernel is 0x01000000, ramdisk is at 0x01200000 and ramdisk size is 0x00400000 ... with these values it boots just fine
[12:18] ogra: That's only 2M for kernel
[12:19] ogra: Where is the kernel unpacked?
[12:19] * lool lunch &
[12:20] erm, its unpacked in ram, these values are flash
[12:21] its only about the flash partitions apex uses
[12:27] err, strike that
[12:28] apex assigns 8M in ram it seems and then copies itself and the flash partitions to that area
[12:30] why would you want more than 2M for the kernel ?
[12:39] all former debian kernels are below 1.5M and i expect us to still drop some bloat from the 1.9M we have now
[13:14] ogra: ??
[13:14] ogra: isn't 0x00008000 the address to copy to?
[13:15] ogra: The flash read address is fis://kernel, the RAM write address is 0x00008000
[13:15] yeah, ignore me
[13:15] ogra: "why would you want more than 2M for the kernel" well we just hit the case where the preceding assumptions were not enough
[13:15] i thought we had the 8M limit in ram as well, but that doesnt seem to be the case at all
[13:15] Why not make sure we cover all cases and solve the problem once and for all?
[13:15] slugimage works no matter what sizes you pass to it for instance
[13:16] yes, i just booted with a ramdisk at 0x01400000
[13:16] So if the contents are discarded, I'd recommend loading apex, kernel, and initrd at 0, 8M, and 16M
[13:16] that way we shouldn't ever have to worry about them
[13:16] i'm just trying with 6M for the kernel
[13:16] The flash is 8M right?
[13:16] i'll do 8 next
[13:16] yes
[13:17] so the bigger the kernel gets the more we limit ourselves on the ramdisk wrt flash ...
[13:22] lool, ok, seems the max limit for the ramdisk size is actually 5636080 bytes (our flash ramdisk partition size ...), if i go above it it corrupts the initrd ...
[13:23] and it seems it doesnt matter where i load it to, so 0 and 8M wont be an issue, but 16M wont be possible due to the 8M flash constraint
[13:24] ogra: I want us to plan the apex script to accept any theoretical initrd or kernel sizes
[13:25] And apex' builtin defaults
[13:25] not for jaunty though
[13:25] Why not?
[13:25] that really requires deep apex tinkering and a proper spec imho
[13:25] feature freeze is in 8 days
[13:26] So you prefer learning it all, implementing a workaround, then having to learn it all again to fix it properly, or simply not bothering to fix it properly?
[13:26] no, i prefer to have sane defaults for now and to add additional features if we have time for that
[13:27] i havent touched the touchscreen issues at all yet, we dont even have a working kernel for the slug etc etc ...
[13:27] Why would a ramdisk size larger than 5636080 corrupt the initrd?
[13:28] I hate leaving things half fixed when we can fix them properly
[13:28] This is seriously not fun for me to invest so much time on an issue and then look back and think "I didn't really fix it"
[13:28] i guess thats a question for marc ... using any bigger size than the one we padded it to simply results in a panicking kernel
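(The size constraints discussed above can be cross-checked with a small throwaway script. The 8M flash, the 2M kernel partition and the 5636080-byte ramdisk partition are the figures from the log; the helper function itself is hypothetical and only meant to make the constraint explicit.)

```python
# Throwaway sanity check for the flash budget discussed above. The sizes are
# the ones quoted in the log; the helper itself is hypothetical.

FLASH_SIZE = 8 * 1024 * 1024   # "The flash is 8M right?" -- "yes"
KERNEL_PART_SIZE = 0x00200000  # 2M kernel partition
RAMDISK_PART_SIZE = 5636080    # padded flash ramdisk partition ("max limit")

def check_ramdisk_size(config_ramdisk_size: int) -> None:
    """Reject CONFIG_RAMDISK_SIZE values that exceed the flash partition.

    apex copies that many bytes out of flash, so anything beyond the padded
    partition drags garbage in after initrd.gz and corrupts it, which is
    exactly what was observed above 5636080 bytes.
    """
    if config_ramdisk_size > RAMDISK_PART_SIZE:
        raise ValueError(
            f"CONFIG_RAMDISK_SIZE=0x{config_ramdisk_size:08X} exceeds the "
            f"{RAMDISK_PART_SIZE}-byte ramdisk partition")

# Bigger kernels eat into the room left for the ramdisk on the 8M flash
# (ignoring apex, RedBoot and the other partitions for simplicity).
assert KERNEL_PART_SIZE + RAMDISK_PART_SIZE <= FLASH_SIZE

check_ramdisk_size(0x00400000)    # boots fine per the log
# check_ramdisk_size(0x00600000)  # would raise: bigger than the partition
```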
[13:29] I think I shouldn't have looked at this issue at all, it's too frustrating, we don't have the same goals
[13:29] right, so do you want me to drop all my specs for supporting an arch nobody uses ?
[13:30] I guess it depends how much time you think you need to fix it properly versus implementing a workaround
[13:30] the only commitment i have atm is to get a working d-i image for that thing
[13:30] which i have now, modulo a decision on the proper default values for apex
[13:31] and a working kernel ... which isnt in my hands
[15:07] lool, moving the apex VMA address to 3MiB seems to work as well as shuffling kernel and ramdisk addresses; given that we differ in only one value from debian here (and will likely be able to convince them to accept that as default) i'll go with that ... the ramdisk size problem persists though and i think its likely due to the fact that apex tries to assign a memory area but the ramdisk image itself doesnt fill that up with zeroes as soon as the apex value is bigger than the actual partition ... values between the actual initrd.gz size up to the padded partition size do work well, i'll ask on debian-arm to get a confirmation from marc about the theory
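(Assuming ogra's theory above is correct, one common way to make any CONFIG_RAMDISK_SIZE up to the partition size safe is to zero-pad the initrd image to that size before flashing. The sketch below only illustrates that idea; the file name and the padding step are hypothetical and not the actual image build.)

```python
# Zero-pad an initrd image up to the flash partition size, assuming the
# theory above (apex copies CONFIG_RAMDISK_SIZE bytes, but the image is not
# zero-filled up to that size) is correct. File name and invocation are
# hypothetical; this is not the actual image build script.
import os

RAMDISK_PART_SIZE = 5636080  # padded ramdisk partition size from the log

def pad_initrd(path: str, target_size: int = RAMDISK_PART_SIZE) -> None:
    with open(path, "r+b") as f:
        f.seek(0, os.SEEK_END)
        current = f.tell()
        if current > target_size:
            raise ValueError(f"{path} is already larger than the partition")
        # Fill the gap with zeroes so reads past the end of initrd.gz stay
        # harmless no matter which CONFIG_RAMDISK_SIZE apex was built with.
        f.write(b"\x00" * (target_size - current))

# pad_initrd("initrd.gz")  # hypothetical usage
```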