[00:00] persia: mahmoh: http://paste.ubuntu.com/642981/ [00:00] using a usb-sata disk with a powered usb hub [00:01] persia: I know, I'm not scared, just don't want to get distracted [00:01] ah, 'cause hubs are supported! that's cheating [00:02] Ok, I just got to a point where I could reboot with usb-sata attached, and u-boot detects it. [00:02] u-boot can't tell if the hubs are in the SoC, on the board, or external. [00:02] you shouldn't connect a usb-sata disk directly, board doesn't have all the power to power it [00:02] GrueMaster: you're using a hub? [00:02] GrueMaster, Excellent. thanks for confirming this is a problem with mahmoh's device. [00:02] no [00:02] it's working on my board [00:03] GrueMaster: what's the model/make? [00:03] but don't know if u-boot has the support to give more power to the device [00:03] - Interfaces: 1 Self Powered 100mA [00:03] ouch [00:03] directly it'd consume 500mA [00:03] let me try without a hub [00:03] ok, powered hub it is [00:03] Hornettek Rhino. [00:04] Using external power supply (panda doesn't have enough power to support otherwise). [00:04] gazoontite [00:04] mahmoh: yeah, that's the problem [00:04] ok [00:05] http://www.hornettek.com/pcaccessory/index.php/25q-hdd-enclosure/usb-20/rhino [00:05] I don't see anything in the u-boot USB code that would indicate it does power level negotiation, but I'm not that familiar with the codebase. [00:05] rsalveti: but wait, it works under linux though? [00:05] probably need to improve the usb protocol to allow devices to request more power than usual [00:05] I bought 4 at Fry's. [00:05] mahmoh: yeah, but because on linux the kernel allows it to take 500mA [00:05] ok [00:05] looks bulletproof [00:07] so, it works, but you need to power it with a powered usb hub [00:07] That would be the Defender model. http://www.youtube.com/watch?v=uOCVEVNU-PA [00:08] It has a 5vdc plug.
[00:08] persia: I don't think it does [00:08] rsalveti, http://www.usb.org/developers/devclass_docs/batt_charging_1_1.zip has the info on the new shiny for granting up to 1.8A [00:09] persia: USB2.0 or 3.0? Makes a difference. [00:10] GrueMaster, Battery Charging was introduced as part of 2.0, and I believe it was grandfathered into 3.0, although I'll admit that I haven't been paying as much attention to USB as I did 5 years ago. [00:11] Reason I say that is that the 2.0 plug can only handle x amps electrically, and that may be pushing the envelope. [00:11] There's been something like 5 revisions of the plugs. [00:12] But if the host isn't implemented with a plug that can handle that amperage, it isn't likely to be able to negotiate the higher amperage. [00:14] Mind you, some idiot might have wired up a port to appear to handle charging without sufficient support, but the legion of iPod owners complaining about burned devices would likely be a sufficient source of complaint that we oughtn't care. [00:15] Section 3.5 says the standard A connector is capable of 1500mA. [00:15] (not to pick on the iPod specifically, but most of the USB power hardware I see in shops that supports higher amperages is labeled "supports iPod", rather than "supports Battery Charging Detection") [00:15] A charger can push more if it has a port that can handle the load. [00:16] Right. [00:16] But that ought be limited at a level u-boot can't fiddle. [00:16] My nook color pulls 1500 if it has the custom microusb cable. [00:16] u-boot would fiddle the device registers to increase the level, but isn't likely to be able to reach higher than the device supports. [00:17] At any rate, I don't know what current the panda can push out the usb ports. [00:17] Supposedly 2A of the input power is expected to be used by the USB ports. Mind you, that's hearsay: I've seen no documentation about this. [00:17] But it doesn't appear to be enough to handle these drives even with a USB Y cable. 
The drives are 160GB rated at 0.56A [00:17] I suspect that the ports are likely rated at 1.5A each. [00:18] Remember, if the negotiation doesn't happen properly, even a Y cable will only provide 200mA [00:19] Ah. So definitely a u-boot issue. [00:19] (kernel as well). [00:19] Looks that way. Looks like u-boot doesn't support current negotiation, so if the device needs more than 100mA, then it just won't work. [00:20] Explains my earlier headache. [00:20] linux does have some current negotiation, at least up to the 500mA level. That said, it may be that the specific USB driver used here isn't implementing it correctly, or something. [00:20] check what is behind the ports [00:20] I have no idea if linux can support battery charging: anecdotally folks don't seem to get fast-charge of their media players / phones / etc. from plugging them into Ubuntu, so I suspect it doesn't. [00:21] usually you have something like an ISL6185XXC there which will be rated for a particular current and for a particular overcurrent protection [00:21] NekoXP, Is that something software detectable? [00:21] absolutely not [00:21] but the pandaboard schematics are available right? [00:21] That's what I thought. [00:21] persia: I have not seen an issue charging my cell from my desktop, so I think the kernel has *some* logic in it. [00:22] GrueMaster, Does it charge as fast as if you attach it to a 1.5A 5V supply? [00:23] Not sure. never really measured it, but it appears to be fairly decent. And I don't think my phone can pull that much. Need to look at the spec on micro usb. [00:23] NekoXP, So, while board designers might do all sorts of things, do you think there's any risk in attempting to comply with the current negotiation specs all the way up to 1.8A in u-boot/linux? Would it be safe to rely on the hardware to simply not do what was asked if it hit its limits? [00:23] Something to test with my nook though.
[00:24] I've pushed 2.0A over microUSB, but that required special HW on both ends. [00:24] absolutely do not :D [00:25] 1.8A is way, way over the spec [00:25] Which spec? [00:26] According to Battery Charging Specification, it looks like a device can expect 5.25V at 1.8A assuming all negotiation is successful. [00:26] ONLY if the USB controller and the stuff behind it complies with the battery charging specification [00:26] which is a very complicated and kinda ass backwards spec [00:27] Wouldn't a non-compliant implementation not respond to bit-banging to tell it to provide more than 1.5A (or, in practice, really 500mA)? [00:27] (and yes, the spec is incredibly complicated) [00:27] I would assume that the panda pmic is supplying port power [00:27] so check if the pmic supports the BCS [00:28] and check that it supports it from a *we are the host* POV, instead of the "we are the device and you are charging your pandaboard's battery via the OTG port" which I am absolutely sure it would support [00:28] Oh, panda almost certainly doesn't support BCS. [00:29] But in terms of adding support for current negotiation to upstream u-boot, I'm unsure how much we care about a specific implementation. [00:32] it is entirely device specific [00:32] but consider this [00:33] if your system loaded and did not get past bootloader, why would you be plugging your iPhone into it [00:33] Sure, but if the system loaded, and started the bootloader, I might want to load my OS off my USB optical drive.. Those usually require ~700mA to spin up. [00:34] but the maximum in the USB spec is 500mA [00:34] Unless one implements BCS. [00:34] your CDROM drive will not comply with the BCS [00:34] It will with a usb Y cable. [00:35] 500ma per port. [00:35] Hrm. I've seen a number of one-plug optical drives floating around, which only work when attached to newer platforms. I had presumed they were doing something like that. 
[00:35] GrueMaster, But that doesn't require BCS compliance: just regular current negotiation (which u-boot also doesn't have) [00:35] a USB Y cable is not the battery charging specification [00:38] Charging spec or not, the question remains of why I can't power a usb-sata drive with a usb Y cable. Charging my cell is very low on the list, but having working usb-sata is very high. [00:38] (and I lost track of this thread when it diverged from that). [00:38] GrueMaster, quite possibly because at the end of the day the port power provided will never reach the levels you want [00:39] GrueMaster, There's a bug in u-boot. It doesn't do *any* current negotiation. As a result, your Y-cable is only providing 200mA to your HD. [00:39] 506mA? [00:39] I'm curious if we want to implement BCS in u-boot. [00:39] for instance if you're using Freescale's MC13892 PMIC on a device, using the VUSB regulator to provide port power, it will supply 100mA and that's it [00:39] I think NekoXP is suggesting that this probably isn't a good idea. [00:40] Is it a u-boot or platform issue? If u-boot, it is fixable. If the platform can't handle the load due to design, that's a different issue. [00:40] NekoXP, Even if the software supports current negotiation? The port can never do 500mA? [00:40] So, simple answer - hardware limitation. [00:40] GrueMaster, Both, but in the specific instance of the panda, there is support for 500mA. [00:41] in the specific case of the board you're trying to make it work on you shouldn't need to NEGOTIATE for current [00:41] persia: if the hw will support 500ma but u-boot doesn't support autoneg, that's a software issue. If the panda power regulator can't handle it, then it is hardware. [00:41] What? [00:41] you simply have to configure the host controller, configure the port, turn VBUS on and initialize the device [00:42] NekoXP, But USB 2.0 specifies that the device is supposed to *ASK* if it wants more than 100mA.
[00:42] before you actually kick the device it cannot draw more than 100mA per spec anyway [00:42] GrueMaster, For those definitions of "software" and "hardware", it's a software problem. [00:42] the descriptors will say whether it wants more [00:42] * GrueMaster will check back for an answer later. [00:42] and if so the host can grant that by turning on the regulator [00:42] there's not really any protocol for it [00:43] NekoXP, Right, but what we've discovered is that u-boot isn't lifting the power even when the descriptors request more. [00:44] On a more immediate note, I just discovered panda (u-boot, kernel, whatever) doesn't like the SD to be removed, even when rootfs is on usb. [00:44] Very odd. [00:45] GrueMaster, probably kernel [00:45] GrueMaster, Did you boot kernel from SD, or from somewhere else? [00:45] there is an option to make the device a little persistent [00:45] i.e. assume it's there and if not, puke [00:45] persia: Should be irrelevant as it isn't a mounted filesystem. [00:46] persia, USB "current negotiation" is down to physics, mostly [00:46] GrueMaster, I think linux still uses the boot source as a backing store, if it can. I might be mistaken. [00:46] Works fine on babbage3. [00:46] NekoXP, Except the bit where the OS reads the descriptors and tweaks the regulator to push to 500mA, unless I misunderstood something (in which case, please tell me what I should have read) [00:46] GrueMaster, Very odd. [00:48] a device will not draw more current than you can provide it, and it will not draw current unless you turn it on.. you plug in a USB stick, the USB host controller is providing capability of 500mA (or whatever) to the device 99.9% of the time. if this is controlled by some kind of regulator this voltage can be changed based on power management policy. The USB spec states that an unconfigured device may only draw 20mA suspended (which goes away the moment you [00:48] talk to it) 100mA otherwise. 
If you read a descriptor that says 500mA then you may turn your regulator up to 500mA, but in actual fact, most systems just leave it at 500mA and you have no choice really [00:49] if a device wants 1.8A then it will draw it if you supply it [00:49] the real problem with magnetic media, cdrom drives and so on is that they have a ridiculous need for spinup current, which spikes and is probably way way over the maximum most USB regulators provide. [00:49] as the device gets older, it actually needs more to spin up [00:50] from experience, most board designers don't bother to actually have a configurable regulator [00:50] it's on or it's off [00:51] Aha. I think I understand then. So the panda apparently defaulting to 100mA unless told to act differently by software is a specific quirk of the implementation, and in general the practice is to pump 500mA while expecting 100mA draw unless something odd happens? [00:51] it supports 500mA or not. [00:51] yeah [00:51] and the reason it doesn't work is your disk is about 6 months old, it has some wear and tear, and the switcher behind the usb port can't handle a load spike as fast as the disk needs it [00:52] I see. So, if my understanding of the issue is correct, we'd need panda-specific code in u-boot that checked stuff and turned up the regulators if appropriate. [00:52] possibly [00:52] Mind you, u-boot needs to completely turn off USB before booting the OS, so just randomly cranking them probably isn't ideal. [00:53] Do you happen to know that this sort of thing is not required for common i.MX51 and i.MX53 boards? [00:54] on the efika, vbus controls whether power is on or off [00:54] it's configured through the phy which some people consider weird [00:54] we just turn the damn thing on [00:54] and this always goes through a hub, which is always powered [00:55] OK, so u-boot just scans USB, does its thing, turns "off" the devices (but the hub is powered), and loads the OS?
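The "read the descriptor, then turn the regulator up" step discussed above can be sketched. Per the USB 2.0 spec, bit 6 of the configuration descriptor's bmAttributes marks a self-powered device, and bMaxPower is expressed in units of 2 mA. The function name and example invocation are mine; the sample values are the Rhino enclosure's reported "Self Powered 100mA" and a typical 500 mA bus-powered request:

```shell
#!/bin/sh
# Hypothetical sketch: decode the power-related fields of a USB
# configuration descriptor, as a host-side negotiator would before
# deciding how far to raise the port regulator.
# bmAttributes bit 6 (0x40) = self-powered; bMaxPower is in 2 mA units.
decode_power() {
    bmattributes=$1
    bmaxpower=$2
    if [ $(( bmattributes & 0x40 )) -ne 0 ]; then
        echo "self-powered, up to $(( bmaxpower * 2 )) mA from the bus"
    else
        echo "bus-powered, requests $(( bmaxpower * 2 )) mA"
    fi
}

# The Rhino reports "Self Powered 100mA": bmAttributes 0xC0, bMaxPower 0x32
decode_power 0xC0 0x32
# A typical bus-powered disk asks for the full 500 mA: bMaxPower 0xFA
decode_power 0x80 0xFA
```

If u-boot grew current negotiation, this is the decision point: a device asking for more than 100 mA via bMaxPower would need the host to raise the port limit before SET_CONFIGURATION.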
[00:55] if you don't twiddle vbus, the hub gets 20mA and it fucking likes it. that's actually not enough to bring the hub up. [00:58] Right. I think I understand. Thanks. [01:03] oh, yeah [01:03] the other problem with disks is they actually need 12V supplies [01:03] and USB is 5V [01:03] Depends on the disk. Lots of 2.5" and 1.8" disks work fine with 5V. [01:03] so there's some stuff on the board to get a 12V supply from a 5V one and they kind of take time to work, and it reduces the current, and of course as THOSE components age.. they suck :D [01:05] depends on the disk, really [01:06] Oh, indeed. [01:24] I have yet to see a laptop sata drive that needs 12v. [02:14] * jburkholder got his pandaboard [03:19] hey anyone have probs with getting the build-essential packages? [03:21] anyone here? [03:28] armelTest, Lots of folk. [03:28] For which release are you having issues getting build-essential packages? [03:29] 9.10 karmic [03:30] any thoughts? [03:30] Are you pointing at ubuntu-ports.ubuntu.com in your sources.list? [03:31] idk brb [03:33] persia: By which you mean posts.ubuntu.com? [03:33] s/posts/ports/ [03:34] yeah im waiting on this device to restart [03:34] Well, considering your hostname and my hostname work equally well for karmic, I'm not sure it matters. [03:34] persia: Eh? ubuntu-ports.ubuntu.com doesn't exist. :P [03:34] ports.ubuntu.com/ubuntu-ports sure does, though. [03:35] But it's after 29th April. [03:35] * persia gets confused [03:36] ummm can i just echo "deb http://ports.ubuntu.com/ubuntu-ports/" > /etc/apt/sources.list [03:36] You could. [03:37] But that this works is an accident. [03:37] Except that's wrong. [03:37] huh? [03:37] y? [03:37] You want "deb http://old-releases.ubuntu.com/ubuntu/ karmic main restricted universe multiverse" [03:37] ahhh [03:37] echo "deb http://ports.ubuntu.com/ubuntu-ports karmic main" > /etc/apt/sources.list [03:38] Oh, or old-releases, if karmic's been moved. Right.
[03:38] Except karmic will disappear from ports.ubuntu.com any day now [03:38] I just meant "wrong" in that it was missing the dist and component(s). :) [03:38] It hasn't, but it's EOL, so there's no reason for it not to have moved. [03:38] Hence my earlier confusion. [03:39] armelTest, What is your hardware platform? [03:39] arm7 [03:39] Be a little more specific. ;) [03:39] Supports the ARMv7-A ISA? [03:39] htc incredible [03:39] android phone [03:39] brb [03:40] idk is there a way i could find out [03:40] Well, EOL for the incredible and EOL for karmic match, but given that platform, I'd want to run natty. [03:41] call me a dumb ass but what is natty? [03:41] Ubuntu 11.04 [03:41] armelTest, Find out what? That you have a QSD8650? I used wikipedia. [03:41] I used htc.com, similar result. :P [03:42] yes that is it :-) [03:42] And then I saw the word "Snapdragon" and died a little inside. ;) [03:42] lol [03:42] armelTest, So, karmic is EOL, and there's no support for it at all, plus it's slow and buggy. [03:42] bad exp with a snappy [03:43] Nope, bad experiences with Qualcomm. I'll recover some day. [03:43] From the command line, if you run `do-release-upgrade` you should get upgraded to something newer, shinier, and supported. [03:43] persia: (Which would just be lucid, going from karmic) [03:44] Though lucid runs fine on all my armel hardware here, with the possible exception of kernels. [03:44] And I assume he's running an Android kernel. [03:44] infinity, Would it? The do-release-upgrade manpage is kinda unspecific about precisely where you end up from older systems. [03:44] Perhaps a bad assumption, but... [03:44] yeah cyanogen 2.6.37.6 [03:44] persia: Only LTSes provide the option to jump to another LTS, everything else will upgrade to the next release. [03:45] persia: (In this case, his next release IS an LTS, but whatever) [03:45] That's a bug in the manpage then, because it says "upgrade to the latest release", rather than the "next newer release". 
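Since karmic was on its way from ports.ubuntu.com to old-releases, the sources.list fix-up being discussed amounts to a one-line substitution. A hypothetical sketch (the function name is mine; the two URLs are the ones quoted above):

```shell
#!/bin/sh
# Hypothetical helper: rewrite ports.ubuntu.com deb lines to point at
# old-releases.ubuntu.com, where EOL releases like karmic end up.
repoint_eol() {
    sed -e 's|http://ports.ubuntu.com/ubuntu-ports|http://old-releases.ubuntu.com/ubuntu|g'
}

# Would typically be run as:
#   repoint_eol < /etc/apt/sources.list > /etc/apt/sources.list.new
echo "deb http://ports.ubuntu.com/ubuntu-ports karmic main restricted universe multiverse" | repoint_eol
```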
[03:45] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/lucid/universe/binary-armel/Packages.gz 404 Not Found [IP: 91.189.88.46 80] [03:45] Well, if you run it over and over again, you'll get to the latest! ;) [03:45] But yeah, file a bug. [03:46] ahhhhh!!!! [03:46] armelTest: No armel on archive. [03:46] Oh. [03:46] do-release-upgrade is confused. [03:46] Hrm? I thought I fixed that all the way back to hardy! [03:46] I bet it would be unconfused if your sources.list said ports. [03:46] persia: Did you fix it for the old-releases case? [03:47] (Since there's special-case code for s/old-releases/archive/) [03:47] infinity, I summarily rewrote deb lines in sources.list with what I thought was correct based on other conditions, excepting a whitelist (which didn't include old-releases) [03:47] Whacky. [03:47] Aha, which special case might happen after my summary execution of random mirrors :( [03:49] armelTest: echo "deb http://ports.ubuntu.com/ubuntu-ports karmic main restricted universe multiverse" > /etc/apt/sources.list && do-release-upgrade [03:49] armelTest: 20 to 1 odds that doesn't trip the broken codepath. :) [03:49] ok brb [03:51] it may be working!!! [03:52] Such excitement over a "maybe". I like it. :) [03:53] nope [03:53] brb imma try something [03:53] infinity, Any idea where that special-case lives? I don't see it in the update-manager or python-apt code, although I see lots of tests in update-manager to make sure it works. [03:54] persia: I didn't write it, so not entirely sure, I just know it's there somewhere. [03:54] persia: And might sometimes work. [03:54] I'm certain it's there, or the tests would very clearly fail.
[03:56] root@localhost:/# do-release-upgrade
Checking for a new ubuntu release
Done Upgrade tool signature
Done Upgrade tool
Done downloading
extracting 'lucid.tar.gz'
authenticate 'lucid.tar.gz' against 'lucid.tar.gz.gpg'
tar: Removing leading `/' from member names
pcilib: Cannot open /proc/bus/pci
lspci: Cannot find any working access method.
Reading cache
Checking package manager
Preparing the upgrade failed
Preparing the system for [03:56] blah [03:57] Heh. Seems nobody ever tested do-release-upgrade from karmic. [03:58] And, yep, it's marked in LP so that we can't update it. [03:58] OK. Next method. [03:59] Try "deb http://ports.ubuntu.com/ubuntu-ports/ lucid main restricted universe multiverse" as your sources.list [03:59] Then try `apt-get update && apt-get dist-upgrade` [04:00] This is more likely to break than do-release-upgrade, but also less prone to failures from the various bits trying to blunt the sharp edges. [04:08] i can do the apt-get dist-upgrade on a separate line [04:09] i may run outta room [04:09] That works too :) [04:09] lol i only have an 8g sdcard and ummm its 82% full [04:12] Hrm. That *might* work, but it might not. [04:12] The packages will be upgraded in place, so they shouldn't take up that much room. [04:12] That said, they all need to be downloaded first, so /var/cache/apt gets kinda full. [04:13] Running `apt-get clean` will remove any cached package files you have, which may improve things. [04:13] kewl i am learning so much! [04:14] this is awesome. the only reason i wanna do this is so i can develop android apps while driving down the road... [04:14] I usually cheat and put /var/cache/apt on a NFS or SMB share when I'm upgrading devices with low space. [04:14] that is really smart [04:15] Android still doesn't have a self-hosted development environment? Oh my. [04:15] it is upgrading! [04:15] persia: It's not meant to.
[04:15] well they have the ndk but i like using eclipse tho [04:15] persia: Though, I imagine at some point it will anyway. [04:15] * persia grumbles about silly people using embedded paradigms on perfectly reasonable general purpose computers [04:15] along with the jdk [04:15] (Maybe that point is now, noting the mention of an ndk) [04:15] lol [04:16] persia: And preaching to the choir on that one. You have no idea how long I cursed Maemo failing at being perfectly self-hosting and for NO GOOD REASON, except that scratchbox broke it in subtle ways. [04:16] what sux is you have to use vncviewer to load x [04:17] (Most things still built fine on Maemo itself, but sometimes things went pear-shaped due to SB breakage) [04:17] infinity, even the Maemo that shipped with the n810 was self-hosting, if you recompiled everything in a sane environment once: it just needed a bootstrap. [04:17] persia: I think my N900 still has a Debian chroot with a full scratchbox environment IN THAT... Stop and think about that insanity for a second. [04:18] persia: "Self-hosting, if you re-bootstrap" doesn't quite qualify. ;) [04:18] infinity, Consider a reinstall: http://wiki.ubuntu.com/ARM/N900 [04:18] that is how linux runs on android chroot [04:18] Beats failing to self-host because of design decisions in my book. [04:18] persia: That would be a better sell, if the URL worked. [04:18] through busybox [04:19] Anyhow, off to dinner. [04:19] https://wiki.ubuntu.com/ARM/n900 [04:19] armelTest: Good luck with your fiddling. [04:19] armelTest, Oh, you don't run Ubuntu native? [04:19] no its chroot [04:20] i cant run native and still have my phone functionality [04:20] which sux [04:21] Lack of drivers? [04:22] idk never really looked in to it really brb [04:24] No worries. There's only drivers for a few baseband chipsets, and not so many dialers. [04:24] https://wiki.ubuntu.com/Specs/AndroidExecutionEnvironment check this out [04:25] Yeah, that got squashed.
The mechanism by which it was supposed to work ended up being unmaintained by the Android folk. [04:25] see what you can make outta it. im just a simple programmer just spreading my wings into other platforms [04:25] NCommander and I should clean that up, and make it clear why it will/won't happen, but we keep putting it off because there is some good idea about how to do it which we don't get around to implementing. [04:26] As it turns out, lots of other people are trying to do similar things, which may mean that we can eventually implement it just by adding a couple upstream packages and filing some minor bugs. [04:30] xda has some good stuff on it but its greek to me right now [04:34] can i just manually delete the stuff from /var/cache/apt? [04:35] Better to use `apt-get clean` [04:35] Otherwise apt can get confused and complain. [04:35] oic [04:35] apt-get clean will delete everything safe to delete in there. [04:35] well ummm apt-get clean wont work because the dir is locked because its full [04:37] Right. [04:37] So, let's step back. Since this is only a chroot, we aren't quite as afraid of bricking the device. [04:37] How did you construct the chroot? [04:37] nope aint afraid of bricking [04:38] hold on ill send you the script [04:39] !paste [04:39] For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic. [04:39] kewl [04:46] sorry son was trippin bout something [04:46] lol [04:47] http://paste.ubuntu.com/643036/ [04:49] OK. From where did you get /sdcard/ubuntu/ubuntu.img ? [04:50] I'm thinking it's probably easier to just replace that than try to upgrade, given the space. [04:52] from xda-developers.com [04:54] i could mount any image and chroot as long as i have the dirs right and the arch is arm [04:54] OK. [04:54] is that correct?
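The mount-an-image-and-chroot recipe being confirmed here, extended with debootstrap to build the image from scratch on an x86 host, can be sketched. Everything below is illustrative: the paths, size, "natty" suite, and helper name are my assumptions, and the function only assembles the commands for inspection (most of them need root to actually run). The qemu-arm-static copy matches the fix mentioned later in this log for second-stage emulation:

```shell
#!/bin/sh
# Hypothetical sketch: the command sequence for building an armel rootfs
# in a loop-mounted image file from an x86 host, using two-stage
# debootstrap with qemu-arm-static for the in-chroot second stage.
make_image_cmds() {
    img=$1; mnt=$2; suite=$3
    cat <<EOF
dd if=/dev/zero of=$img bs=1M count=0 seek=2048
mkfs.ext3 -F $img
mount -o loop $img $mnt
debootstrap --arch=armel --foreign $suite $mnt http://ports.ubuntu.com/ubuntu-ports
cp /usr/bin/qemu-arm-static $mnt/usr/bin/
chroot $mnt /debootstrap/debootstrap --second-stage
EOF
}

# Print the plan for the image path discussed above (illustrative only)
make_image_cmds /sdcard/ubuntu/ubuntu.img /mnt/ubuntu natty
```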
[04:54] So, I won't recommend any image that isn't hosted by Ubuntu, because I can't trust that it's really Ubuntu [04:54] * persia fails to say anything about http://forum.xda-developers.com/showthread.php?t=987740 [04:55] So, there's a few ways you can get a mountable image file using Ubuntu. [04:55] 1) You can download one of the images, and then make it do what you want. [04:56] 2) You can run rootstock to generate a local image (for development only) [04:56] 3) You can run debootstrap yourself, and construct an image [04:56] I'd be happy to explain any of those if you like :) [04:57] interesting... i have heard of rootstock. i dont have another arm dev to make an image myself tho [04:58] Most folk run rootstock on an i386 or amd64 machine, but you will want to have Ubuntu installed there. [04:59] i do have an external 1tb [05:01] u suppose i could install it on that the live/persistent boot and make an image [05:02] Base images are usually less than 1GB. [05:03] i am about to free up some space on my sd card... got vids of my son i can move over [05:03] But they often need 2-3GB uncompressed space, so as long as you have more than 4G on that drive, you should be fine. [05:03] Heh, that might let the upgrade work :) [05:04] lol yeah if i didnt mess up this image by rm *.* in var/cache/apt [05:04] (and running `apt-get clean` post-upgrade will regain most of the space the upgrade needs) [05:04] only 1 way to find out... yeah the apt-clean didnt work [05:04] apt-get clean [05:04] Supposedly nothing in /var/cache is important. That said, while I expect apt to be fairly robust, I've not tried that. [05:05] Does `apt-get update` work? [05:05] brb gonna restart my phone [05:06] You may need to repopulate apt's local cache of the available software sources before you can clean up. [05:14] shit this is gonna take a min [05:16] well my wife wants me to come be a husband... thanx for your help!!! [05:58] janimo`: can you trigger a rebuild of wacomtablet?
this will fix the ftbfs for it [05:59] rsalveti: Given back. [06:00] StevenK: thanks [07:52] StevenK: libtuxcap also needs a rebuild [08:00] crtmpserver too [09:40] rsalveti, given back, now that LP is writable again [10:38] is there a way of configuring the lid open sensor information? I am inverted [10:39] if I open the lid, it blanks the screen as per my power management settings [11:45] i'm quite unable to read information in launchpad.net, but is this package https://launchpad.net/ubuntu-omap4-extras-multimedia supposed to be available to natty ? [11:49] Hi All [11:50] Anyone tried XBMC on ubuntu arm ? [12:24] hello [12:24] My rootstock build is returning errors like [12:24] I: Switching to Virtual Machine for second stage processing [12:24] Segmentation fault [12:24] W: Bad Bad Qemu, trying second stage one more time (LP #604872) [12:24] Launchpad bug 604872 in qemu-linaro "qemu-system-arm segfaults emulating versatile machine after running debootstrap --second-stage inside vm" [Medium,Fix released] https://launchpad.net/bugs/604872 [12:25] How to fix this [12:32] ogra_, any idea ? [13:19] friends, my issue solved [13:19] added qemu-arm-static [13:33] my io tests are still running on both boards, now init's behaving but the threaded io is really impacting performance (terminal interaction is slow at best) [13:34] cpu load is minimal, 4-30%, I guess that makes sense - I should check the scheduler though [13:35] note that we default to no-op on all our images === zyga is now known as zyga-afk [13:56] Preparation material for tomorrow's Ubuntu developer week, ARM FTBFS session. Feedback welcome :) https://wiki.ubuntu.com/ARM/FTBFS [13:59] well mine are set to cfq, oops, I need to check to see which kernel is getting loaded again [13:59] but the server kernel should be deadline scheduler ogra_ [14:00] ^ right Daviey?
[14:00] mahmoh, thats something the image build scripts should do, if they dont on your images, file a bug [14:00] ack [14:00] for preinstalled we can be sure we are running on SD [14:00] for that no-op is the fastest [14:01] thats why we set it to no-op there by default, for server netinstall where you might install to rotary disks, the installer should select the scheduler based on the root device [14:01] agreed but netinstall should install the correct kernel after tasksel and the server kernel should be deadline, which I need to verify [14:02] so if I'm able to check - during - netinstall it'll [14:02] change the scheduler? [14:02] i doubt there is code for that yet [14:03] talk to NCommander [14:04] iirc on the netinst images the flash-kernel-installer udeb creates the kernel cmdline, it should get code to detect the root device and add the proper elevator option to the cmdline [14:04] makes sense, if it doesn't do that I'll file a bug [14:05] thx [14:39] hello, I would like to know if I can find a Ubuntu for OMAP35x EVM [14:39] more exactly for AM3517 [14:41] * ogra_ isnt sure we have a kernel supporting that, but userspace will definitely be fine [14:44] oh ogra, I've seen that you have built a Ubuntu for OMAP35x EVM [14:44] but the link is dead [14:48] that was loooong ago [14:49] haha okay [14:49] you should be able to just use the omap3 images but replace the uImage with yours [14:49] (and MLO/U-boot.bin) [14:49] yeah I have my uImage [14:50] see wiki.ubuntu.com/ARM/OMAP [14:50] also mlo and u-boot and x-loader [14:50] but do you know how to create the SD card for an update of the system? [14:50] ??
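The elevator-selection logic proposed above for the flash-kernel-installer udeb could look roughly like this. The function name and the device-name-prefix heuristic are my assumptions (a real implementation would be better off checking /sys/block/<dev>/queue/rotational than matching names); the scheduler choices mirror the discussion — no-op for SD/flash, deadline for rotary server disks:

```shell
#!/bin/sh
# Hypothetical sketch: pick an elevator= kernel cmdline option based on
# the root block device, as the installer could do before writing the
# boot arguments.
pick_elevator() {
    case "$1" in
        mmcblk*|mtdblock*) echo noop ;;      # SD/flash: no seek penalty
        sd*|hd*)           echo deadline ;;  # rotary disks: server default
        *)                 echo cfq ;;       # fallback: desktop default
    esac
}

pick_elevator mmcblk0p2   # preinstalled SD image -> noop
pick_elevator sda1        # netinstall to a SATA disk -> deadline
```

The result would then be appended to the cmdline, e.g. `elevator=$(pick_elevator mmcblk0p2)`.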
[14:52] I've followed this [14:52] http://processors.wiki.ti.com/index.php/MMC_Boot_Format [14:53] ah, no, just create the SD like described on the ubuntu wiki [14:53] but I don't understand how to place my kernel and filesystem into the nand flash then (update) [14:53] ah okay [14:53] then replace MLO, u-boot.bin and uImage on the first partition of the SD [14:53] oh okay [14:54] the EVM only has 128M, right ? [14:54] you will most likely want the headless image with that [14:54] 256 [14:54] yeah, still too low for an ubuntu desktop [14:54] yeah I was thinking about :) [14:55] hum [14:55] unity-2d with idling desktop eats 148M here [14:56] okay [14:56] gnome and xubuntu will be similar [14:56] KDE will use even more [14:56] lubuntu could work [14:58] maybe Matchbox [14:58] but don't know if Ubuntu support that [14:59] its in the archive [14:59] in universe [15:01] I can't find a version of ubuntu without X11 [15:04] the headless image is what you want, just follow the wiki [15:06] okay I'm trying [15:28] janimo`: thanks [15:42] LPhas: not yet, TI didn't release all multimedia components still [15:42] I know robclark is working on trying to get it all working, so soon we should have something :-) [15:43] once we get it integrated with gst and such, then normal applications will be able to decode using hw acceleration [15:49] rsalveti, display is in bad shape on 2.6.38.. I'm actually starting to work on 3.0.. [15:49] robclark: and how it's going with 3.0? [15:50] so we'll end up skipping natty, but luckily it'll work for oneiric :-) [15:50] and if we get sound working out-of-box it'd be a huge step forward ;-) [15:51] I had some issues w/ i2c initially, making DVI not work.. but I seem to have that working now [15:51] HDMI is a bit flakey... at least when you have kernel w/ PM enabled.. [15:51] robclark: hm, interesting [15:52] robclark: are you also porting the drm driver?
[15:52] yes, of course ;-) [15:52] awesome :-) [15:52] robclark: and where are you publishing your work this time? [15:52] robclark: would be nice to integrate it into the linaro tree once you think it's good enough [15:55] rsalveti, well, I'm trying to get the display team to push the DSS patches.. and then once those are upstream I can push the drm driver [15:56] well.. soon it will have a TILER dependency too (for GEM), but I'll try and keep that separate.. [15:56] robclark: cool, even better :-) [16:06] is oneiric working with the pandaboard yet anyone? [16:12] sure === zyga-afk is now known as zyga [16:42] brendand: Alpha 2 server images work well, desktop is hit or miss (due to SD card variants). Current netinstall works as well. [17:46] rsalveti: It looks like u-boot is generating two different mac addresses currently. 00:02:03:04:05:06 for bootp and then it looks for panda/pxelinux.cfg/C0A80045 for the pxe config. pxe should be looking for the file with the mac address. [17:47] It may be doing a reciprocal of the mac. Haven't checked. [17:47] GrueMaster: how is it getting to C0A80045? [17:47] probably [17:47] that's why this will probably be fixed once jcrigby fixes the mac address at the u-boot level [17:47] ok [17:49] Grrr. Local mirror isn't syncing with ports.u.c properly. Can't do netinstall with local mirror. [17:50] GrueMaster, i've had good luck with apt-cacher-ng .... [17:51] * ogra_ is an approx fan [17:51] I'm using a combo of apt-mirror & ubumirror since karmic. Worked quite well until Tuesday. [17:52] ah, it's the tuesday bug [17:52] Fill me in. [17:53] a bug that shows up on tuesdays :) [17:53] This is not Windows! [17:53] * ogra_ was just kidding ... no bug to point to [17:53] heh [17:53] we had one where printing didn't work on tuesdays [17:54] it's not impossible :) [17:54] I heard that there was an issue with debootstrap earlier this week.
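For context on C0A80045 above: pxelinux-style config lookup falls back from the MAC-based file name (`01-<mac>`) to the client IP address rendered as eight uppercase hex digits, so C0A80045 is simply 192.168.0.69 — not a mangled MAC. A small sketch of the encoding (the helper name is made up):

```shell
# Encode a dotted-quad IP as the pxelinux-style hex config file name
# (192.168.0.69 -> C0A80045). Illustrative helper, not u-boot code.
ip_to_pxe_name() {
    echo "$1" | awk -F. '{ printf "%02X%02X%02X%02X\n", $1, $2, $3, $4 }'
}
ip_to_pxe_name 192.168.0.69   # prints C0A80045
```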
[17:54] well, with udev and the new /run directory i think [17:54] but yeah, it breaks debootstrap [17:55] I'm having to reinstall one system. Somehow the partitions did a role reversal. 1G rootfs, 156G swap. oops. [17:55] Doing a netinstall from ports.u.c works. So I need to figure out what isn't getting mirrored. [17:57] apt-mirror only copies debs & source, ubumirror is setup with a large list of excludes to skip old releases & arches I don't care about. Somewhere in there something is getting lost in translation. [18:00] https://bugs.launchpad.net/ubuntu/+source/cupsys/+bug/255161/comments/28 [18:01] Ubuntu bug 255161 in cupsys "I am unable to print from open office, I tried reinstalling open office but it did not work. I use a brother mfc240c printer and I am running Hardy. Printing from other apps has not been an issue. (dup-of: 248619)" [Undecided,Invalid] [18:01] btw :) [18:01] Ubuntu bug 248619 in file "file incorrectly labeled as Erlang JAM file" [High,Fix released] [19:49] rsalveti, pvr Xorg driver is up and running on 3.0.. i2c still hitting "timeout waiting for bus ready" a lot, which is a bit inconvenient.. [20:31] robclark: cool, great progress [20:47] pmcgowan: ping [20:48] jeremiah, He doesn't seem to be around just now. Is there a general question with which we can help? [20:55] persia, linux-ac100 is in NEW fyi === arun__ is now known as arun_ [21:56] it looks like all ocaml stuff started segfaulting recently on Ubuntu's armel... (whereas everything works fine in Debian) [22:00] sgnb, Do you have a trace (or a bug with a trace)? [22:03] persia: no, but have a look at https://launchpadlibrarian.net/75113357/buildlog_ubuntu-oneiric-armel.lwt_2.3.0-3_FAILEDTOBUILD.txt.gz [22:03] (I don't have an Ubuntu armel box at hand) [22:04] sgnb, Hrm. Indeed, that looks unpleasant. [22:05] (same for postgresql-ocaml, ocaml-sqlite3, oasis...) [22:05] Yeah, with that level of segfaulting, I'd expect it for everything.
[22:06] Even works for Debian armhf, so unlikely to be an ISA issue. [22:06] ( http://buildd.debian-ports.org/status/fetch.php?pkg=postgresql-ocaml&arch=armhf&ver=1.16.0-1&stamp=1309697444 for example) [22:08] bah, sigh, how many more languages are there [22:09] * ogra_ waits for apachelogger's spam on -changes to stop :P [22:09] I seem to recall that the answer was somewhere above 370 last time I tried to answer that question. That was a few years ago, so the number can only have increased. [22:09] persia: but armhf is not native in Debian [22:09] sgnb, Hrm? Does that affect how the buildd works? [22:10] persia: I mean, ocaml doesn't generate native code for armhf (whereas it does for armel) [22:10] ogra_, also, thanks for the wiki edit: I now have a happy glow. [22:10] so the generic bytecode compiler/interpreter is used there [22:10] sgnb, What sort of code does ocaml generate for armhf? Why isn't it native? [22:10] persia, heh, welcome, let's just hope an archive admin is bored enough to review the kernel now [22:11] it generates a portable bytecode, which runs with an interpreter written in C [22:11] Without the ability to debootstrap, I'm unsure how much I care about the timing there. [22:11] lol [22:11] * ogra_ will land the flash-kernel bits and -meta tomorrow [22:11] ogra_: couple more to come [22:11] sgnb, Do you happen to know why it isn't native for armhf? [22:11] <50 ^^ [22:11] * ogra_ hugs apachelogger :) [22:11] persia: because it hasn't been activated [22:11] * apachelogger hugs ogra_ right back [22:12] (so never tried) [22:12] sgnb, Hrm. Then it *could* be an ISA issue :( [22:12] native code generated by the ocaml compiler on arm always uses soft floats... would that be a problem? [22:13] No, Ubuntu armel uses a softfloat ABI, but it's ARMv7 vs. ARMv4t for Debian armel, which means that some assembly isn't compatible.
[22:13] but that would have been a prob since lucid [22:14] Ubuntu armel is also compiled to thumb2 by default, rather than ARM, which has other knock-on effects in terms of allowable ISA and ABI interactions. [22:14] ogra_, Depends what instructions are being exercised. A new upstream version might have some improved optimisation, or changes to call semantics that could trigger something. Needs investigation. [22:14] * ogra_ blames toolchain or libc [22:15] note that the old ocaml segfaults here [22:15] while building new ocaml stuff [22:15] "old"? "here"? [22:16] in the build log you linked [22:16] That's for lwt. [22:16] it calls /usr/bin/ocamlfoo [22:16] which segfaults during execution [22:16] Sure, but I'm not sure why that is necessarily "old". [22:17] it's not a new upstream, it is what is in the archive right now [22:17] Do you happen to know that no new upstream for ocaml-nox has entered the archive in oneiric? [22:17] no, i don't, do you happen to know it did? [22:18] ;) [22:18] I do. oneiric has 3.12.0, whereas natty only had 3.11.2 [22:19] but stuff has already been compiled successfully with 3.12.0 in oneiric in the past [22:19] That's extra annoying. [22:19] Because it means that some library changed ABI without updating SONAME underneath ocaml. [22:19] sgnb, Do you happen to know the *first* package that FTBFS due to ocaml segfaulting wildly?
[22:25] persia: my guess would be nurpawiki [22:26] or ocaml-data-notation, maybe [22:26] (based on what happened in Debian) [22:26] * persia goes to investigate that, with hope, wondering how one could possibly find this [22:27] Ah, yeah, given the essentially random timing of autosync, I have no idea how reliable that might be (autosync is a manual process, which happens whenever an archive admin has ~10 minutes to kick it off, but not less than twice a week) [22:27] nurpawiki built 2011-05-20 [22:28] ocaml-data-notation failed 2011-06-28 with no segfaults (unrelated failure). [22:29] 2011-06-28 to 2011-06-29 is close enough I'll dig through the versions of build-depends, hoping to find a useful discrepancy [22:32] libmpfr4 3.0.1-3 -> 3.0.1-4 [22:33] libudev 171-0ubuntu3 -> 171-0ubuntu4 [22:33] apt 0.8.14.1ubuntu7 -> 0.8.15ubuntu1 [22:33] udev matching libudev [22:34] apt-transport-https matching apt [22:36] mpfr4 looks like a no-op failed attempt to clean up how patches are applied in a format: 3.0 (quilt) package. [22:37] and the others don't look like they could make binaries segfault [22:37] (that worked before) [22:38] udev is a cleanup on stopping udevd processes, which shouldn't even be used in a buildd chroot. [22:38] No, but if I don't investigate, I'm not going to be confident :) [22:39] i think it's some transient issue from something completely ocaml unrelated [22:39] i would have said libc but that doesn't match the timing [22:39] I'm not convinced it's transient, as it has persisted for two weeks. [22:39] Still, checking build-essential happens first, then specific build-depends (in part because I have to clean up my script to generate readable diffs for build-deps) [22:40] binutils was upgraded on the 18th ... doesn't fit either [22:40] the first segfaulting command is "/usr/bin/ocamlfind query -format %v findlib"... maybe it's easier to test it in past snapshots?
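The version bumps persia lists can be found mechanically; a hedged sketch of that comparison (persia's actual script isn't shown — the helper and file names here are hypothetical), given sorted "package version" listings captured from the two build environments:

```shell
# Print only the packages whose installed version differs between two
# builds. Inputs are sorted "package version" files, one package per line;
# the versions in the usage example are the ones quoted in the channel.
diff_builddeps() {
    join "$1" "$2" | awk '$2 != $3 { print $1, $2, "->", $3 }'
}
# e.g.
#   diff_builddeps deps-2011-06-28.txt deps-2011-06-29.txt
# would report lines like:
#   apt 0.8.14.1ubuntu7 -> 0.8.15ubuntu1
```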
(à la git-bisect) [22:41] apt change is large, but I can't see anything that should affect other binaries at runtime. [22:41] oh! but ocamlfind itself changed meanwhile [22:41] (1.2.6 -> 1.2.7) [22:42] Indeed. 1.2.7+debian-1 failed, but 1.2.6+debian-1build1 succeeded. [22:42] aha, i thought it didn't [22:42] I cannot imagine how this could have an influence, though [22:42] Maybe there was a misbuild. [22:43] "misbuild"? [22:43] Could "Make Camlp4 depend on Dynlink on every arch" affect the autodependency generators? [22:43] - add armel to native architectures; note that the Dynlink module is [22:43] not available in native code there: software using it should take care [22:43] of this new possibility [22:44] Aha! That would be the issue then. [22:44] from the ocaml changelog [22:44] this was way before [22:44] not sure that's the issue but it could be [22:44] Debian bug #630490 [22:44] Debian bug 630490 in libfindlib-ocaml "camlp4 depends on Dynlink on all architectures" [Important,Fixed] http://bugs.debian.org/630490 [22:44] debian bug #347270 [22:44] Debian bug 347270 in ocaml "ocamlopt produces buggy arm programs" [Important,Fixed] http://bugs.debian.org/347270 [22:45] oh, another interesting entry [22:45] add binutils-dev to Build-Depends [22:45] That's *OLD* [22:45] yes, way before nurpawiki [22:45] Oh, but not Fixed old :) [22:46] But yeah, ocaml isn't the issue. [22:46] Both the working and failed builds use ocaml 3.12.0-7 [22:46] that's what came into ubuntu on may 5th [22:46] afterwards the ocaml transition started [22:47] Don't care. It's not different between the oasis and ocaml-data-notation builds, so it is unlikely to be the cause of one failing and the other succeeding. [22:47] the ocaml transition finished before stuff started segfaulting [22:47] yes [22:47] (it finished with nurpawiki) [22:48] Well, ocaml-data-notation was built in Ubuntu over a month after nurpawiki, but kinda :) [22:48] it was not part of the "ocaml 3.12.0 transition" [22:48] Ah.
[22:49] by "ocaml 3.12.0 transition", I mean "recompile everything because the ABI changed" [22:49] Right. [22:49] after that, we continued updating things in Debian as usual [22:49] Still, I'm suspecting findlib for now. [22:50] But I don't see anything in the patch that is especially exciting. [22:50] https://launchpadlibrarian.net/74281241/findlib_1.2.6+debian-1build1_1.2.7+debian-1.diff.gz [22:51] the changes also look innocuous to me [22:51] it might be that the generated assembly for the new version is somehow buggy [22:51] There's a number of "failed to remove" and "cannot stat" messages in the build log. [22:52] OCaml code shouldn't segfault by itself [22:54] Shouldn't or can't? It is native-compiled, right? [22:54] I don't see the difference [22:54] it is impossible in pure OCaml to cause a segfault, it's a property of the language [22:55] and ocamlfind looks like pure OCaml [22:55] (I mean, except basic I/O) [22:56] segfaults in ocaml programs usually come from C code called from OCaml code [22:57] That'd be can't :) [22:57] Lesser languages aren't defined in a way that prevents such behaviours. [22:58] But some *shouldn't*, because of how they work, for example python or java. [22:58] Whereas something like C has neither sort of protection, and segfaults reliably. [22:59] Aha! So, the new findlib build is whining about not being able to parse ${ocaml:Depends}. [23:00] persia: you mean dpkg-gencontrol? [23:00] Yes [23:00] it doesn't matter [23:00] it is a common warning [23:01] and only related to build tools [23:01] (I mean dpkg-dev) [23:01] I thought that warning happened whenever one didn't handle the substvar stuff properly.
[23:02] it happens when the variable is not defined [23:03] But the package relationships are very similar between 1.2.6+debian-1build1 and 1.2.7+debian-1 : the latter adds a recommendation on libfindlib-ocaml-dev [23:03] it does that even for shlibs:Depends [23:03] I'm of the opinion that one should only use variables in control that one defines, but I'm known for being kinda particular about things :) [23:04] maybe, but that's not the point here [23:04] if I had access to a faulty box, I would start running gdb on "ocamlfind query -format %v findlib" to see where exactly the segfault occurs [23:04] Unfortunately, it seems findlib doesn't have a test suite that runs at build-time, so we can't know whether the build worked (or didn't). [23:06] * persia tries to run debootstrap again, in vain hope, given the continued discussion in #ubuntu-devel [23:07] ogra_, In unrelated news, did you see my 0.0.0.0.0.0.0.0.1 package? Any suggestions to improve the README or change the package split? [23:08] well, i would at least add another 0 to the version [23:08] i saw your ping in the other channel, but haven't looked yet [23:08] Please don't. I tried to add enough that when upstream (you) decides to actually release code, there would be no change of conflict. [23:09] s/change/chance/ [23:09] well, i would just take yours :) [23:09] the code surely needs cleanup, i didn't write it with a generic installer in mind [23:09] Considering that the README isn't complete and talks about killing kittens, that -tools is missing one of the scripts, and all the other issues, I'd hope not directly. [23:10] and i can't imagine many ways to package it [23:10] you split it ? [23:10] what did you split into tools ? [23:10] Surely. [23:10] it should be a single package with three initrd files [23:10] There's the stuff that belongs in the image, and the stuff that belongs on the machine used to create the image. [23:10] Go look.
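Returning to the dpkg-gencontrol warning discussed above: dpkg-gencontrol substitutes ${name} tokens in debian/control from "name=value" lines in a substvars file, and warns when a token has no definition — which is why an unpopulated ${ocaml:Depends} is noisy but harmless. A toy re-implementation of that expansion (purely illustrative, not dpkg's actual code):

```shell
# Expand ${name} tokens in a control-style file from a substvars file;
# warn (as dpkg-gencontrol does) when a variable is undefined.
expand_substvars() {
    awk -v sv="$2" '
        BEGIN {
            # Load "name=value" definitions from the substvars file.
            while ((getline line < sv) > 0) {
                i = index(line, "=")
                if (i) vars[substr(line, 1, i - 1)] = substr(line, i + 1)
            }
        }
        {
            while (match($0, /\$\{[^}]*\}/)) {
                name = substr($0, RSTART + 2, RLENGTH - 3)
                if (name in vars) repl = vars[name]
                else {
                    repl = ""
                    print "warning: unknown substitution variable ${" name "}" > "/dev/stderr"
                }
                $0 = substr($0, 1, RSTART - 1) repl substr($0, RSTART + RLENGTH)
            }
            print
        }' "$1"
}
```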
[23:11] one in conf.d one in hooks and one in scripts [23:11] I think I have four scripts. [23:11] Maybe five. [23:11] the one additional script isn't for the public ... it will be worked into debian-cd [23:11] debian-cd is public :) [23:11] (the one that adds the md5sum file) [23:11] well, i meant as extra code [23:11] But yeah, if you're sticking the control code into debian-cd, then -tools isn't useful. [23:12] sure it should be public [23:12] Oh well. libc6 still doesn't install in the chroot :( [23:12] build natty and upgrade ;) [23:13] Hrm. I wonder if my wrapper scripts can handle that. [23:13] Maybe I'll just do a one-off upgrade of natty to test this one thing. [23:50] * persia grumbles at dpkg: error: failed to open '/var/lib/dpkg/status' for writing status database: Invalid argument on a natty->natty upgrade [23:51] sgnb, I get an immediate segfault return from `ocamlfind query -format %v findlib` on current oneiric/armel [23:51] Any hints on what to check, or do you recommend digging through core? [23:57] persia: run through gdb, and pinpoint where the segfault comes from [23:58] (is it C code? is it assembler? is it a library?)
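sgnb's suggestion, spelled out as commands (a sketch: the gdb session is the standard run/backtrace workflow, and the exit-status helper is hypothetical):

```shell
# Shells report a child killed by SIGSEGV as exit status 128 + 11 = 139,
# which is the "immediate segfault return" persia describes. A tiny check:
segfaulted() {
    "$@" >/dev/null 2>&1
    [ $? -eq 139 ]
}
# If it did segfault, pinpoint where, per sgnb's advice:
#   gdb --args ocamlfind query -format %v findlib
#   (gdb) run
#   (gdb) bt                # backtrace: C code? assembler? a library?
#   (gdb) info sharedlibrary
```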