[00:55] kees: while trying to merge ipsec-tools I got this error - cc1: warnings being treated as errors [00:55] policy_token.c: In function '__libipseclex': [00:56] kees: http://paste.ubuntu.com/11039/ [00:56] kees: is this related to the new hardening flags ? [00:57] mathiaz: https://wiki.ubuntu.com/CompilerFlags <- -D_FORTIFY_SOURCE=2 seems to be relevant here. [00:59] james_w: thanks :) [01:05] Who is it that actually decides what goes onto the Ubuntu CD, and how do us mortals go about understanding (and potentially altering) their reasoning processes? [01:05] As opposed to universe etc. [01:08] <_MMA_> The Tech Board and or Desktop team I believe. [01:08] <_MMA_> And nothing from Universe goes on to the disks. [01:09] not true [01:09] _MMA_: well - it depends which iso you're refering to [01:09] xubuntu is in universe [01:09] <_MMA_> Oh jesus. Nit-pick. He said Ubuntu. ;) [01:10] hehe [01:10] <_MMA_> So :P [01:10] * Amaranth pokes _MMA_ [01:10] _MMA_: it was true that nothing from universe could be on the cd. But this has been changed during the last release cycle so that xubuntu, ubuntustudio and other derivatives can be built [01:11] ubuntustudio is in main, no? [01:11] No. [01:11] Its the primary reason that universe packages can be on disks. [01:11] most of it is :P [01:11] <_MMA_> mathiaz: I'm sorry. You obviously don't know who you're talking to. [01:11] The reason I'm asking is that to some extent the remote recovery issue is about defaults and policies rather than optional components, and I'd like to understand a little about that side of the process. [01:12] <_MMA_> andrew___: I think people like Colin are the best resource for that. [01:12] <_MMA_> A post to ubuntu-devel-discuss might be in order. [01:13] There've been a few on the issue already :) [01:13] what, specifically, do you want changed? [01:14] Well, I'm still going through the understanding stage... 
[01:14] I've been bugging anyone that will listen for the past few days about ways to do better tech support for friends. [01:15] andrew___: technically, a ubuntu-core-dev has to make a change to the seeds - so you'd have to convince a core-dev to make that change. [01:16] core-dev has access to the seeds? [01:16] How do they tend to feel about new packages getting accepted into main? [01:16] <_MMA_> No. It's not that simple. It is a team decision. [01:16] Amaranth: yes [01:16] andrew___: To get a package into main you have to file a main inclusion report [01:16] Do they prefer packages that are small, or well established, or well-maintained...? [01:16] andrew___: then you need to talk to the relevant team [01:17] if it's in main it should be well maintained and have no crazy security issues [01:18] andrew___: https://wiki.ubuntu.com/MainInclusionProcess [01:19] Thanks - I think that's what I needed to know. [01:19] that page is no longer valid [01:19] Or at least, it'll shut me up for another few hours :) [01:19] not everything in main is supported by canonical [01:20] Amaranth: huh - that page is still valid IIRC [01:20] the steps are right, the reasons are not [01:20] Amaranth: what do you refer to by 'supported by canonical' ? [01:23] commercially supported by commercial support people, commercially :) === foka_ is now known as foka [01:31] <_MMA_> mathiaz: ie: There are things in Main you can't get paid support from Canonical on. [01:34] _MMA_: im sure if you name the right price you can? [01:36] hmm, compiz runs a LOT slower with DDR-533 unmatched sticks of RAM [01:36] on the GMA950. [01:36] time to revert this setup [01:37] gma950 uses RAM? [01:37] pwnguin: GMA950 uses only RAM [01:37] pwnguin: it has no dedicated graphics RAM [01:37] <_MMA_> pwnguin: Who knows. I haven't talked to the support guys enough to know. 
[01:37] im sure handing off hard stuff to the CPU doesn't help [01:38] pwnguin: so I guess having dual-channel interleaved RAM is important [01:38] double throughput is important -- 3d stuff is very bad at caching [01:39] oddly seems like Firefox is the only major offender [01:39] hmm [01:40] * TheMuso has never liked shared RAM. [01:40] lots of pixmaps in ff [01:41] anyone here willing to help me debug a kernel-related suspend/resume problem on 2.6.24-17 on Hardy? It appears to be scheduler-related, but debugging it beyond what I can see in the log is a bit tricky. https://bugs.launchpad.net/ubuntu/+source/linux/+bug/212660 has details -- the short version is that suspend/resume last worked properly on 2.6.24-12 for me. [01:41] Launchpad bug 212660 in linux "kernel 2.6.24-16 fails suspending" [Undecided,Confirmed] [01:41] vlowther: You may get more help in #ubuntu-kernel. [01:42] heh, did not know that channel existed. [01:42] I will ask there. [01:43] TheMuso: that assertion is tenuous at best [01:44] vlowther: you may get better help in the ubuntu-kernel mailing list. [01:45] pwnguin: indeed. [01:46] as far as i can tell, the ubuntu kernel team uses IRC to communicate with other ubuntu kernel team members, and anyone else's best hope is getting ogarasawa's attention =( [01:47] pwnguin: ? [01:47] TheMuso: i'm just suggesting that i see plenty of unanswered calls for attention in that channel. [01:48] in that case, why bother with a public channel at all? [01:48] pwnguin: Right. [01:49] well, this channel is the same way, really [01:50] except nobody says "go to ubuntu-devel for bug help", that I know of [01:57] well, I come here because people only want to look at easy things on #ubuntu [02:20] _MMA_: do you have example of such packages ? [02:20] _MMA_: got an example? [02:20] hahahahahahahaha! [02:20] * mneptok bops mathiaz on the nose [02:24] I did before hardy [02:24] maybe that's changed now, i dunno [02:24] Amaranth: examples of unsupported packages in Main? 
[02:25] yeah, xubuntu stuff :P [02:25] uh .. [02:25] fark you, too. ;) [02:25] *smewch* [02:25] * mneptok beams brightly at Amaranth [02:25] * Amaranth gets confused [02:26] do i have someone on ignore? cuz this isnt making sense =/ [02:26] Amaranth: you had me wracking my brain for Main packages i didn;t think of. [02:26] and then ... "Xubuntu." [02:26] pwnguin: same here ;-) [02:26] there is no support for Xubuntu. [02:26] any pacakges. at any time. [02:27] so the slap was for making me start to actually think ;) [02:27] I believe there are other packages in Main Canonical doesn't support [02:27] and Xubuntu is in Universe now right? [02:27] LaserJock: do you have an example? [02:27] LaserJock: 'cos i don't [02:27] well [02:27] i'm not saying i couldn;t have overlooked something. [02:28] do you have a list of things Canonical *does* support? :-) [02:28] packages in Main. [02:28] a short list of vital Universe stuff. [02:28] that's it. [02:28] (slmodemdaemon, when that was in universe) [02:29] Main/Universe is not how Canonical support is defined [02:29] it is here. [02:29] how interesting [02:29] mdz said it wasn't [02:30] and I would think he'd know [02:30] LaserJock: like i said, it doesn;t break out exactly along repo lines [02:30] sure [02:30] LaserJock: but we still use repo names to define things [02:30] like "a few critical packages from Universe" [02:31] mneptok: and some Main packages that aren't supported ;-) [02:31] i know of none [02:31] i await an example. [02:31] (not to make that sound like a stupid challenge) [02:31] (I presume a support contract w/ Canonical overrides component lines anyhow.) [02:32] well, is abiword and gnumeric supported? [02:32] yes. [02:32] you sure? [02:32] *blink* [02:32] well, stupid question [02:32] but I wasn't aware that it was [02:33] all fourteen users of both apps are entitled to support. 
[02:33] * mneptok runs [02:33] overall I'm not sure that Canonical knows what's supporte [02:34] +d [02:34] we do not fully support anything with a non-Free license [02:34] as we cannot guarantee resolution [02:34] sure [02:35] *with* a free license, it basically breaks out along repo lines, but with some exceptions. hence mdz's comment. [02:36] personally, i'd like to stop supporting GUI environments and server daemons. when i mentioned it, sabdfl asked me to wait in the lobby for the adults to finish their conversation. [02:37] If I allow untrusted users SSH access to an account on my computer, where the account is in a chroot jail, they can only run a specific command I specify (not a shell), and remote and X11 forwarding are disabled (but local forwarding is enabled), have I compromised the security of my computer in any way? [02:38] mneptok: well, there have historically been examples like Xubuntu [02:38] Xubuntu != Ubuntu [02:38] so? [02:38] Canonical Support offers commercial support for Ubuntu, Kubuntu, and Edubuntu [02:39] For bandwith challanged users, it'd make sense to just download the compressed differences between an old package in the apt cache and a new version of a package. Could an rsync method driver to apt be a solution, for instance? [02:39] <_MMA_> mneptok: re: "Example packages" when Xubuntu went to Universe there was a big chat in here and people like Matt Z. Colin and Scott R. mentioned a good few I believe. Besides the previously mentioned Xubuntu packages. Making a list of unsupported packages was mentioned but I don't know if anything came of it. [02:39] right, but Xubuntu was in main and dragged in quite a bit [02:39] _MMA_: Xubuntu is not supported. therefore, its packages, regardless of repos, are not supported. [02:39] right [02:40] _MMA_: it's not that "stuff in Main is not supported." it's that "Canonical only supports certain variants" [02:40] <_MMA_> Sure. But that was under contention at the time. 
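The locked-down SSH account described at [02:37] can be sketched in sshd_config roughly as below. This is a hedged sketch, not a verified configuration: the path and command names are placeholders, the `local`-only value for AllowTcpForwarding only exists in newer OpenSSH releases (older servers accept only yes/no), and ChrootDirectory requires the jail path and all its parents to be root-owned and not writable by anyone else.

```
# Hypothetical sshd_config fragment for a restricted account
Match User restricted
    ChrootDirectory /var/jail/restricted    # must be root-owned, non-writable
    ForceCommand /usr/bin/approved-command  # runs instead of any requested shell/command
    X11Forwarding no
    AllowAgentForwarding no
    PermitTunnel no
    AllowTcpForwarding yes                  # 'local' (newer OpenSSH) would be tighter
```

The residual exposure is exactly the forwarding that was left enabled: a local port forward connects *from the server* to any host the server can reach, so even with no shell and a forced command the account can be used as a network pivot.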
[02:40] but you asked for packages in Main that aren't supported by Canonical [02:40] <_MMA_> No. I don't think it's that cut and dry either. [02:40] or did I get the question wrong? [02:40] LaserJock: Main/Xubuntu is not the same as Main/Ubuntu [02:40] <_MMA_> As I said others had examples. [02:41] mneptok: so? [02:41] so, when you say "Canonical supported stuff in Main" it assumes a variant that actually has support. [02:41] <_MMA_> I want to get jackd back into Main to be able to build PulseAudio packages that can work with JACK. I doubt that will get support from Canonical. [02:42] mneptok: when people say "Canonical provides support for Main" it implies that *all* of Main is supported, right? [02:42] hi5: Isn't that what jigdo does? [02:42] andrew__: I don't think so, but i might be wrong [02:42] LaserJock: Canonical provides support Main packages on supported platforms. [02:42] LaserJock: therefore, if you install a Main package on Sid, no dice. [02:42] that's more for dividing iso files into .debs and recompiling afaik [02:43] mneptok: Canonical provides support for most of Main on supported platforms [02:43] it doesn't provide 32/64K diff from one .deb to another [02:43] and it's not for apt [02:43] however, I again don't know that it matters much [02:45] <_MMA_> LaserJock: At one point I thought on ubuntu.com it wasn't as clear as that. Hence all the confusion about Xubuntu. [02:45] <_MMA_> I think a bug was even filed. [02:47] _MMA_: the website says "The main distribution component contains applications that are free software, can freely be redistributed and are fully supported by the Ubuntu team" [02:47] <_MMA_> bug 172672 [02:47] Launchpad bug 172672 in canonical-website "http://www.canonical.com/projects claims that Xubuntu is supported by Canonical" [Undecided,New] https://launchpad.net/bugs/172672 [02:48] <_MMA_> Looks like it's been fixed. SHould close the bug. 
[02:52] one should note that the Ubuntu team and canonical are not identical sets [02:52] Re: rsync method driver for apt. The context of this can be anything from people over sat connections, 3rd world countries (like iraq where there is no fiber, or copper backbones/lines since the ground is so hard it's almost impossible to run it so everything is expensive sat. based), or saving bandwidth / speeding up updates 1000x for all users / saving money to run repos. Are there no thoughts? [02:52] pwnguin: exactly [02:53] what goes in main is independent of Canonical [02:53] hi5: Depends how much work you're planning to put in. [02:53] though it certainly works well for Canonical to use Main to define support [02:53] andrew__: Well, I've spent days getting enough info to formulate that question properly. [02:54] hi5: Yeah, I know that feeling :) [02:54] hi5: so this means you're just investing a bit of time, or does it mean you're in it for the long haul? [02:54] Annoyingly, people's response of "bandwidth is cheap" is highly beside the point, and inaccurate for most of the non-western world. [02:55] hi5: What's the average size of the binary diff between two versions of the same package? How does it differ between packages? How does that compare to the diff between source packages (a la Gentoo)? [02:55] if you're merely looking for counterpoints, CPU time isn't cheap either. [02:56] andrew might have a point -- .debs are compressed and may not make good rdiff candidates [02:56] So I'm pretty invested in this. I should note that I've received many responses to contact the various devel lists however I'm not a developer. Hence my apprehension about contacting a dev list which a.) may not care b.) would be annoyed I'm just a (l)user [02:56] However, after much research it appears no such solution exists... 
which is very surprising to me considering how obvious this seems [02:56] (even to a new user) [02:57] it may seem obvious, but the solution is definitely non-trivial to implement. [02:57] As to (b), I'm not much more than a (l)user myself. If they start picking on us, we should unionise ;) [02:57] awaiton: Why? nobody has been able to tell me why? [02:57] im not familiar with how rdiff works, but it may require a non trivial amount of disk space [02:57] hi5, better idea: try it. realize just how hard it really is. [02:58] doesn't it take quite a bit of CPU on the server as well? [02:58] hi5: I'm not sure of the technology details, but here's a thought experiment based on how I would expect it to work: [02:58] LaserJock: indeed [02:58] oh dear [02:58] Imagine a 10MB binary split into 1024 10KB chunks... [02:58] Well, real-time decompression of binaries on the server end, and similar on the client end mean you dont't need 20x the server space i'd think [02:58] but that was a theoretical concern i also had [02:59] though, hdd space is still cheaper than bw for a repo's purpose [02:59] I'm fairly sure the current servers basically out of space [02:59] if there's fifty versions you might upgrade from [02:59] Version n+1 of the file inserts a single byte at the front of the first chunk.... [02:59] this causes every single chunk to appear different, and need to be re-downloaded. [02:59] ideally, packages would be distributed over something like bittorrent, where you don't care so much about compression and on-disk space, but focus on the perfect reproduction of the file and various locations of retrieval. at least, IMO. [03:00] if you imagine having to house diffs for all packages for all versions it can be pretty significant [03:01] LaserJock: wikipedia is enlightening [03:01] awaiton__:no offense, did you even read my post? 
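The chunking thought experiment above ([02:58]-[02:59]) can be made concrete. The sketch below is illustrative toy code, not rsync's actual algorithm: the window size, mask, and additive checksum are arbitrary choices. It contrasts fixed-offset chunking, where a one-byte insertion shifts and therefore invalidates every chunk, with content-defined chunking, where a boundary depends only on a small sliding window of bytes and so realigns after the change:

```python
import hashlib
import random

def fixed_chunks(data, size=1024):
    # boundaries at fixed byte offsets: an insertion shifts every later chunk
    return {hashlib.sha1(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)}

def content_chunks(data, window=16, mask=0x1F):
    # boundary wherever a checksum of the previous `window` bytes hits `mask`;
    # the decision depends only on local content, so boundaries survive shifts
    hashes, start = set(), 0
    for i in range(window, len(data)):
        if (sum(data[i - window:i]) & mask) == mask:
            hashes.add(hashlib.sha1(data[start:i]).hexdigest())
            start = i
    hashes.add(hashlib.sha1(data[start:]).hexdigest())
    return hashes

random.seed(0)
old = bytes(random.randrange(256) for _ in range(40000))
new = b"\x00" + old  # the single-byte insertion from the thought experiment

fixed_shared = len(fixed_chunks(old) & fixed_chunks(new))
cdc_old, cdc_new = content_chunks(old), content_chunks(new)
cdc_shared = len(cdc_old & cdc_new) / len(cdc_old)
```

With fixed chunks the two versions share essentially no chunk hashes, so a naive block-transfer scheme re-sends everything; with content-defined chunks only the chunk containing the insertion changes. This boundary-realignment trick is the core of the rsync/rdiff family of tools.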
[03:01] basically, it's compute intensive because it network computes the diff [03:02] Seriously, empathise with someone outside of your country for a moment.. bit torrent doesn't solve a bandwidth issue for a single node. It merely distributes it among many similarly bandwidth endowed nodes [03:02] hi5: wouldn't a CD be just as / more effective? [03:02] pwnguin: yes, i can see that being a minimal issue [03:02] "minimal" [03:02] pwnguin: a cd would not be effective [03:03] hi5, the best way I can sympathize with you is to say, I don't want you wasting any bandwidth on bad downloads [03:03] a cd as a compromise would be a tad bizarre to me in fact [03:03] that "minimal" issue of CPU use very well may kill servers on upgrade day [03:04] take one server and add a couple hundred / thousand users and bam, instant problem [03:04] pwnguin: almost for sure [03:04] bittorrent is designed so files don't 'fail' during download. time sensitivity can come from using the absolute best compression available (something like lzma), but using something like binary diffs is almost never a good idea due to just how tricky they are to get to work... [03:04] so, this avoids the topic at hand doesn't it? [03:04] I think those are administrative issues, and ones that aren't impossible to deal with [03:04] what? [03:04] cpu load vs bw load [03:04] CPU is an administrative problem? [03:04] etc [03:04] hi5: but you wondered why it hadn't been implemented [03:05] that's one very good reason [03:05] First, less defensively folks, is there consensus that this isn't implemented somewhere? [03:05] that's for you to answer. but I doubt it. [03:05] It seems many people say "use jigo" or "debian already does that" and such [03:05] I'm not aware of anything like that for binaries. 
[03:05] ive seen about a billion attepts at alternative apt transports [03:05] i think i saw an apt-rdiff [03:06] http://www.tjansen.de/debiff/ comes to mind [03:06] Source downloads are a different matter, because that's a much easier problem to solve. [03:06] i know ive seen an apt torrent [03:06] pwnguin: same [03:06] someone tried apt over bittorrent before, but it was pretty poor :/ [03:06] awaiton__: there's better alternatives to that too. it works fine for me ;) [03:06] the idea wasn't bad though, it's just hard to make work. [03:06] agreed [03:07] binary diffs on the other hand, are almost impossibly hard to make work on a widescale. [03:07] the BT problem is one of availability -- you need to be able to trade chunks across packages [03:07] hi5: If you don't mind trading a lot of CPU for less bandwidth, how do you feel about downloading source and compiling it? [03:07] pwnguin, that's not such a hard problem once you realize you have a captive set of mirror servers across the globe to act as seeds... [03:07] ah, a good point about client end cpu [03:08] seems then patching would be possible via using deb-src? [03:08] I still think it's not apple-apples though. Compiling an OS, vs cpu use for making diffs of binaries is not comparable [03:09] hi5: thats not at all the point though. the point is you're asking ubuntu and mirrors to spend massive CPU on behalf of users, and he countered with a scenario where the user spends instead ;) [03:09] well, bandwidth bills are already an issue [03:10] kinda [03:10] a lot of university institutions donate to the cause ;) [03:10] so, i don't see why a bandwidth -> cpu tradeoff would be so unheard of [03:10] you might be better off with the postal system if bandwidth is that big of a problem... [03:10] hi5: more fundamentally, I'm not sold on rdiff substantially improving the bandwidth problem [03:10] hi5: Usually since it's CPU hit on both ends of the pipe. 
[03:10] right, but donate != free [03:10] it's still costly [03:11] just not costly to someone on a wester DSL line [03:11] (again, a tradeoff does exist) [03:11] there is no free lunch [03:11] hi5: And server administrators tend to not like process that constantly chew CPU. [03:11] canonical offers something close with the CDs though ;) [03:11] The claim being made is that binary diffs are a hard problem. You can accept that, or explain what you're not understanding about the problem, or go and prove everyone wrong, but (without wanting to be rude), just dismissing the claim doesn't work. [03:12] hi, anyone here who knows about that broken ubuntu alsa configuration? [03:12] andrew__: the claim being made is unsubstantiated [03:12] andrew___: i think that IF you managed to undo the ar compression on packages, you'd probably see some great gains in bandwidth [03:12] hi5, google it. do some research into the issue. [03:12] so without being rude (which you are) it's not helping me. besides, this isn't my personal pet peeve. i'm trying to help [03:12] i don't think many ppl understand this [03:13] hi5: as a challenger to the status quo, and a champion of the under championed, you'd be well off to substantiate the opposite [03:13] i've seen this topic brought up many times before, and it's odd how there's a peculiarly hostile reaction to it. 
sorry for being new, i just don't understand why this issue sticks out like a sore thumb [03:13] hi5: I believe the Canonical sysadmins have said that it'd probably be too CPU intensive with what we have now [03:14] if all rdiff/rsync does is transfer modified blocks, compression is likely to nuke that idea in hurry [03:14] laserjock: I appreciate the response, and I've been trying to figure out if that's the case [03:14] hi5: people *have* looked into this issue and it's not that we're saying "it can't be done" but just that "it's probably not feasible at this time" [03:14] does anyone know if the alsa drivers got recompiled for ubuntu hardy or are they just taken from debian? [03:15] although there are significant other concerns about some techniques [03:15] mrec: whats the version of alsa? [03:15] hi5: In fairness to you, there is an issue that this doesn't scratch the itch of many developers, because they tend to be behind fat pipes (or they wouldn't have been able to learn enough to become developers). [03:15] laserjock: interesting [03:15] andrew___: I honestly don't know that that's totally true [03:15] andrew___: yes, this is part of my aggravation. if i had dev skills, i'd work on this. I'm actually reading "Computer Programming for Dummies" right now sadly [03:15] I know of devs on dialup [03:15] if i get the skills, i'll work on this [03:16] pwnguin: that's a good question... [03:16] LaserJock: I'm in Australia, which is worse. [03:16] hi5, that's probably why you don't appreciate just how difficult it is, no offense. 
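pwnguin's point at [03:14] -- that compression nukes naive block transfer -- is easy to demonstrate. In the sketch below (illustrative only; the sizes and data are arbitrary), a one-bit change in the middle of a file leaves the raw bytes almost entirely shared between the two versions, while the zlib-compressed forms diverge. This is why rsyncing .debs directly gains little, and why delta schemes generally have to work on the uncompressed payload (or use tricks like rsync-friendly gzip output):

```python
import random
import zlib

def shared_fraction(a, b):
    # longest common prefix plus longest common suffix, relative to len(a)
    n = min(len(a), len(b))
    p = 0
    while p < n and a[p] == b[p]:
        p += 1
    s = 0
    while s < n - p and a[-1 - s] == b[-1 - s]:
        s += 1
    return (p + s) / len(a)

random.seed(1)
payload = bytes(random.randrange(64) for _ in range(50000))  # mildly compressible
# flip one bit of one byte in the middle of the "package"
changed = payload[:25000] + bytes([payload[25000] ^ 1]) + payload[25001:]

raw_shared = shared_fraction(payload, changed)
packed_shared = shared_fraction(zlib.compress(payload), zlib.compress(changed))
```

The raw versions share well over 99.9% of their bytes; the compressed versions share only the stream prefix emitted before the encoder reaches the change (and not even the trailing checksum), so a block-matching transfer of the compressed files saves far less than the one-byte edit suggests.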
[03:16] hehhaha [03:16] it annoys me how selfish and unempathetic ppl can be (no offense to anyone here, it's a general sentiment i get though sometimes) [03:16] I cannot compile external modules against the ubuntu hardy kernel because the alsa headers seem to be out of sync with the binary files [03:16] awaiton__: well, i don't know if you appreciate how easy it can be [03:16] hi5: on the other hand, you'd be surprised I think at how unselfish and empathetic people can be [03:17] hi5, I've done binary patching before, I can tell you just how hard it is. [03:17] whoever build that kernel didn't know what he did [03:17] calling people selfish and unempathetic strikes me as arrogant, on the other hand ;) [03:17] hi5, it's especially not trivial when you realize just how many users there are, how many different versions are in the field, and how hard quilting binary diffs is.. [03:17] built* [03:18] hi5, if it were trivial, it would have been done years ago, and thusly not an issue. [03:18] hi5: I think you need to be reading a book on statistics rather than programming, actually. If you can show that debiff will reduce bandwidth by X% and increase CPU usage by Y%, we can have a proper debate about whether it's worth it. Until then, we're all just hand-waving. [03:18] *not be [03:18] awaiton__: andrew__: well, i can appreciate the science and bit swapping of binary comparisons a lot [03:19] hi5: andrew is right. if you can demonstrate that rdiff is a win or at least a substantial tradeoff, you'd have far fewer opponents. and possibly a few volunteers [03:19] however, tools for doing this already exist it seems [03:19] so it's not like the wheel needs to be reinvented. so i consider some of those points red herrings [03:19] well, without implementation, statistical theory is pulling numbers out of my arse [03:19] i need a test bed [03:19] and i'm trying to build one! [03:19] hi5, just because a peg exists, doesn't mean that it necessarily fits the whole... 
[03:20] *hole [03:20] right, that's a platitude [03:20] not a response to the problem at hand [03:20] think about it this way: ever had to downgrade a package? [03:20] lol [03:20] yes [03:20] it's a PITA [03:20] pwnguin: just confirmed, switching from 1x2GiB + 1x1GB @533MHzz --> 2x1GB @667MHz made a HUGE difference in GUI performance [03:20] hi5, think about doing a reverse binary patch. [03:20] :) [03:20] jdong: is anyone surprised? ;) [03:20] pwnguin: I sure am [03:21] pwnguin: I didn't expect noninterleaving RAM to have such a big performance impact as to make compiz UNUSABLE [03:21] jdong, pwnguin: As am I. [03:21] I mean, it scrolled at 3 lines a sec in Firefox [03:21] you upped the MHz, and dual channeled it [03:21] hi5: this is a very rough methodology, but try this: check the version history for all the packages in Ubuntu, and see how often they're updated... [03:21] pwnguin: it was dual channeled [03:21] pwnguin: just not interleaved (striped) [03:21] what? [03:21] andrew__: i'm listening [03:21] pwnguin: popular benchmarks "cited" a 5% difference in performance [03:22] pwnguin: what I saw was more like a 75% difference [03:22] jdong: popular benchmarks use gma950 now? [03:22] Then download the current and last-but-one version, compute the diff, and divide by the number of days between updates... [03:22] pwnguin: popular benchmarks OF the GMA950 [03:22] jdong: that sounds seriously broken [03:22] ajmitch: idn if it's an EXA bug or what [03:22] That should put you on the road to a very rough number for bandwidth saved/day. [03:22] ajmitch: OS X felt the same speed [03:22] just Ubuntu crumbled without matched RAM [03:23] Then you publish that, it gets ripped to shreds for being unscientific, you come up with a better methodology, and after half a dozen iterations, people start agreeing with you. [03:23] then calculate the average frustrated user who got smited by a bad binary patch vs. 
the average user's anger of having to wait a few extra minutes for a package... [03:24] awalton__: wait for the numbers. The conclusion might be something that nobody expects. Like, 90% of bandwidth goes on OpenOffice, and splitting that into yet more packages would cut bandwidth in half. [03:24] (keeping in mind, of course, that the user who got bitten by a bad binary patch now has to download the whole package over again, or hope their file system does automatic revisioning.. or hope for a miracle of some other kind) [03:25] jdong: how do you dual channel sticks of differing size? [03:26] pwnguin: the access to RAM is distributed across to both channels independently, but they are not interleaved (i.e. striped) [03:26] pwnguin: from what I understand on Intel dual channel and interleaving are not the same [03:26] hmm [03:26] ive only got AMD, this may explain things [03:26] pwnguin: the AMD K8 memory controller might treat the situation much differently [03:27] heh, in my experience, i cant boot incompatible ram [03:27] if they wont interleave, it wont boot =/ [03:27] the hidden horrors of on-core memory controllers -_- [03:27] hi5: I already have that methodology licked: it doesn't account for the popularity of each package. Multiplying by the results from popularity-contest will give you a (decidedly biased) way to deal with that. [03:27] awalton__: its' a fairly good win too in a lot of cases [03:27] hmm [03:27] pwnguin, oh of course. [03:28] pwnguin, just not the panacea that everyone makes it out to be... [03:28] as far as compiz performance difference goes, can more than one process render at once? [03:30] if not, then the idea that dual channel non interleaved should be fast might block right there =| [03:31] pwnguin: I'm not sure. 
It could also be Compiz or EXA doing something underneath that is bandwidth intensive on RAM [03:31] pwnguin: because clearly OS X coped with it fine, just Ubuntu whenever Firefox is visible slows down to like 15fps [03:31] you mean like rendering to a texture and then rendering again? [03:32] pwnguin: I'm not sure, I don't nearly understand enough about the backend to make that judgement [03:32] pwnguin: but at any rate, my 3GiB setup finds happy home in a mobility radeon x1400 system :) [03:32] pwnguin: at the same time I had both systems open, I also transplanted an ipw3945ABG into the macbook [03:32] which shall be interesting :D [03:33] how do you transplant a wifi chip? [03:33] pwnguin: both are mini-PCIe [03:34] pwnguin: it was a straightforward swap minus fidgeting with antenna connectors [03:34] wait, do any macbooks ever come with ipw? [03:35] pwnguin nope. [03:35] heh [03:35] pwnguin: so I've created a transvestite mac. I think. [03:35] "osx sucks, my wireless card works fine in linux!!" [03:35] jdong: a frankentosh [03:36] pwnguin: lol currently, wifi support in OS X is a TODO [03:36] (wow, how many times do you get to say *THAT*?) [03:36] I had a friend who built a mac from refurb parts [03:36] sit in the osx86 rooms, you'll see it a lot.. ;) [03:36] awalton__: true [03:36] he mentioned that "it mounted into an ATX case just fine after drilling a few holes in the mobo" [03:40] pwnguin: Surely other changes would have had to be made to the back of the case for connectors etc? [03:41] i have no idea, i didnt question him too hard after that [03:41] heh right. [03:41] even though it was on the floor and apart, i had seen it working previously [03:41] we built and debugged nachOS on it [03:42] heh, surprisingly Macs have been pretty standard PCs for years. [03:42] the newer ones are even close to ATX inside.. [03:42] awalton__: The mac pros? I woulln't think they were. [03:42] wouldn't. 
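Returning to the bandwidth thread: andrew___'s back-of-envelope methodology above (diff consecutive package versions, divide by days between updates, then weight by popularity-contest installs) is just a few lines of arithmetic. Every figure below is an invented placeholder chosen only to show the shape of the calculation -- these are not real archive, delta, or popcon numbers:

```python
# All numbers are invented placeholders, NOT real archive/popcon data.
PACKAGES = {
    "openoffice.org-core": dict(deb=160_000_000, delta=30_000_000, days=14, installs=800_000),
    "firefox":             dict(deb=45_000_000,  delta=9_000_000,  days=7,  installs=950_000),
    "linux-image-generic": dict(deb=28_000_000,  delta=12_000_000, days=10, installs=990_000),
}

def daily_bytes(pkgs, field):
    # bytes/day the mirrors would serve if every installed copy fetched
    # `field` (full .deb or binary delta) once per update cycle
    return sum(p[field] * p["installs"] / p["days"] for p in pkgs.values())

full = daily_bytes(PACKAGES, "deb")
delta = daily_bytes(PACKAGES, "delta")
savings = 1 - delta / full  # fraction of mirror bandwidth a delta scheme would save
```

Replacing the placeholders with real data from package version histories and popularity-contest is what would turn the hand-waving into the "reduce bandwidth by X% for Y% more CPU" numbers the debate needs.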
[03:43] TheMuso: they are pretty similar to server xeons [03:43] "pretty standard" would not be the phrase that would come to mind [03:43] TheMuso: at least the one that I peeked inside [03:43] they're close.. the mountings are a bit off, but I managed to put a normal PC board in mine [03:43] I had to put it back together when IS&T gave me a stare for disassembling a public terminal [03:43] they have an x86 chip in them, yes. But so does your washing machine and the space-shuttle booster rockets [03:43] sheesh! [03:43] i recently saw a similar minded, though less capable, person who had affixed an apple logo sticker to an acer laptop [03:44] people [03:44] then again I love apple's hardware, so I probably stick out... [03:45] I'd use them exclusively if I weren't kentucky poor... [03:45] you know, ive had to help look after some macs at a community college, and they don't hold up well [03:46] but perhaps im comparing university lab use to my own standards [03:46] mine have.. well, with the exception of the stupid frat kid who thought it would be a good idea to shatter my iBook's LCD... [03:46] the powerbook I had before it held up to some pretty severe torture though [03:47] pwnguin: we have a lab that's all kerberized Macs and they hold up very well from both a software and hardware point of view [03:47] it's okay [03:47] it's decent hardware, not the best, not the only decent ones [03:48] * jdong has a 1 year old macbook that's holding very well to constant abuse [03:48] well outside the halls of MIT we have to deal with the evils of preferred vendors [03:48] my condolences. [03:48] pwnguin: lol we to. 50% HP 40% Dell 10% mac [03:48] ever heard of atipa? [03:48] gesundheit? [03:49] exactly [03:49] :) [03:49] hi5: I don't suppose you could have a community of people that lack bandwidth who collectively keep a memory stick current? 
[03:49] i dont think its coincindence that atipa spelled backwards is "a pita" [03:49] So, I've just helped someone unbreak their 3d in #ubuntu-bugs because they installed the fglrx-control package, thinking that it would be awesome. [03:50] andrew__: sorry, i've been reading stuff to implement the idea i think is more rational [03:50] What it *actually* did was install xorg-driver-fglrx, not set it up, and break their 3d. [03:50] if this topic is still being discussed, i'll need to read back in a few to see if any useful ideas have emerged [03:50] hi5: As in, Alice has the stick Mondays, Wednesdays and Fridays, Bob has it Tuesdays, Thursdays and Saturdays; when Alice has the stick, she keeps [A-M]* current. [03:50] else, it appears i'm still on my own so [03:51] right, i understand andrew__ [03:51] that's insane [03:51] Not discussed, just me :) [03:51] How so? [03:51] *oops, well.. no offense, that's insane [03:51] so is downloading a gigabyte of updates over dialup :P [03:51] large scale... 100,000 people in iraq won't do that [03:51] i have a question [03:51] how do you know that? [03:51] thanks for the idea, i just don't think you understand the scale [03:51] It would be nice to be able to not break people's systems, or at least flag some sort of warning. [03:52] Yeah, you'd have to do it on the LUG level or something. [03:52] sorry if that sounded a tad asscorbic [03:52] your dns says minnesota [03:52] yeah, well or setup a local server for repos [03:52] a fork of current debian / ubuntu etc used locally [03:52] hi5: have you actually consulted with the people you're presumablly attempting to help? 
[03:52] that's still what i'm after [03:52] for fuck's sake, screw it [03:53] get beyond the idea of help small group of ppl in africa, *i'm* someone that would be helped by this [03:53] so are MANY other's i've consulted in places like iraq [03:53] this is the solution they want [03:53] i'm sorry nobody in this chat can see why [03:53] i can see why [03:54] hi5: I think we all understand the problem [03:54] sure we can see why. it's the how we take issue with. [03:54] saying "this would be cool" is awesome. doing that cool thing is often a lot harder. [03:54] well, no.. andrew said passing around a memory stick would be a solution. with all due respect, that's represents an extremely misunderstood notion of the problem [03:55] hi5: it's not that bad of an idea though [03:55] so is the problem rural minnesota or highly populated but underserved baghdad? [03:55] I know a lot of people do similar things in Latin America [03:55] ive read the USB stick story on planet somewhere [03:55] somebody sends a CD then it travels from town to town [03:55] physical media being passed around? i disagree.. there's invisible costs that are very high to that method [03:56] i'll admit it seems only advanced users in these circumstances understands the need [03:56] I am assuming is that there are several people in a similar situation within walking distance of each other, but if that assumption holds, making it easy to use isn't a particularly hard problem. [03:56] which might mean that advanced programmer types see why this is a dumb idea (which is fine if it really is) [03:57] it's not that it's dumb. it's just difficult. [03:57] weighing the ideas, cost/benefit, it doesn't seem to work out to me. feel free to disagree and prove me wrong though. [03:57] oh i remember now. it was a story about distributing information in cuba by USB stick [03:58] guerllia sneakernet ;) [03:59] pwnguin: guerilla is right - you're basically talking about a cell structure. 
The IRA proved how well that scaled :s [03:59] so did al queda? [04:00] (and my dorm wing...) [04:01] awaiton__: appreciate the response, as I'm sure you hope I (or someone else) will also, I'll try and prove you wrong about that for the betterment and greater dispersion of gnu/linux. [04:01] hopefully it is as scalable as it still seems :) [04:01] pwnguin: let's not get into an argument about whose terrorists are better ;) [04:01] i wasnt sure if you were making a postive or negative claim there [04:02] Who, me? [04:02] i mean, the IRA claimed to give up / stop / disband / whatever [04:03] so anyways as an ignorant american i have no idea how well that scales [04:03] Yeah, I'm just pointing out that cell structures are scalable. The fact that the experts also tend to be bad guys isn't really relevant. [04:04] i just think the latency sucks [04:04] ;) [04:04] well thats a tradeoff ;) [04:07] pwnguin: For the record, you're right. The IRA stopped a while back, and so far as I'm aware, the place has been much better lately. [04:29] pwnguin: "so is the problem rural minnesota or highly populated but underserved baghdad?" just saw that. minnesota... you saw my IP? idk why I'm bouncing off a mn server atm, never have before. short answer: shouldn't matter. [04:30] well they're two different problems [04:31] and yes, irc broadcasts dns unless you request a cloak [04:34] well, my pt is they shouldn't be held to two different standards. and i've never been to mn, just using the dns for various reasons [04:35] but it's water under the bridge unless you're driven by this issue as much as i am. otherwise, i've only got a few ppl interested so far i'm talking with and that's fine [04:35] why not? a dense and close knit community might accept different solutions than people living in rural MN or rural KS [04:36] recall the seattle wifi project [04:37] hi5: would it be fair to say that you've settled on keeping the current infrastructure but reducing the bandwidth? 
[04:37] As opposed to weird-and-wonderful, highly location-specific solutions. [04:38] you mean as opposed to swapping flash drives to millions on sat. connections? yeah... [04:39] Well that wasn't what I was proposing, but fair enough. [04:39] why is the target millions? [04:39] anyway, i fear the conversation here isn't constructive any longer except with those that have PMd me so if you really don't care about the issue, feel free to ignore what I've asked earlier and thanks for the info. [04:40] i just wonder why this particular rdiff solution is important to the classes you presented. [04:41] Yeah, I've said my piece w.r.t. reducing bandwidth. I do wish you good luck, though. [04:41] either way, I'd love to read some concrete results [06:02] Good morning [06:02] morning pitti [06:03] hello pitti [06:22] hi === Arby_ is now known as Arby [06:32] Guten Morgen pitti === asac_ is now known as asac [07:07] so i found a hilarious lecture about rsync [07:08] Orly? [07:08] by the author [07:08] http://ftp.gnumonks.org/pub/congress-talks/ols2000/high/cd2/2000-07-21_15-02-49_C_64.mp3 [07:13] http://olstrans.sourceforge.net/release/OLS2000-rsync/OLS2000-rsync.html <-- transcription [07:43] on ubuntu ftp there is a patch fixing somethings in 2.6.22 kernel - linux-source-2.6.22_2.6.22-14.52.diff.gz. I'm trying to find out where this patch is developed? Some repository I assume (where it's splitted into smaller logical chunks). Does anyone know where such repository can be? [07:43] arekm: That'd be in the Ubuntu kernel git repository. I'll dig up a link for you. [07:44] arekm: https://wiki.ubuntu.com/KernelGitGuide?highlight=%28kernel%29 [07:44] thanks! [08:23] morning folks! [08:31] good morning [08:31] thekorn: argh, seems that p-lp-bugs fails all over the place :/ [08:31] hi dholbach [08:31] hiya pitti [08:32] hey dholbach [08:32] hey mvo [08:51] good morning, pitti and dholbach [08:52] hey thekorn [08:52] pitti, do you have any hint on how to reproduce it? 
[08:52] heya thekorn [08:52] thekorn: I filed bug 228565 with an analysis and a patch [08:52] Launchpad bug 228565 in python-launchpad-bugs "AssertionError: Wrong XPath-Expr in Secrecy.parse() '__xml'" [Undecided,New] https://launchpad.net/bugs/228565 [08:52] thekorn: (and a reproducer) [08:52] thekorn: ever-changing LP *sigh* [08:53] thekorn: I have more problems, but one after the other [08:54] pitti, this is already fixed in the .main branch by kees and bdmurray [08:54] ah, great [08:54] and I think it was already uploaded to hardy, but I'm not sure [08:55] maybe only intrepid [08:55] yes [08:55] gosh, the duplicate check pool is huge [08:55] the retracers will have a fun time with catching up :) [08:56] thekorn: hm, maybe I should stop using the hardy packages and just keep a checkout of main in the retracers [08:56] thekorn: is there anything in the ubuntu branches except the packaging? [08:57] (I don't use the packaging anyway, I just copy the module directory [08:58] Grm. And the PPA buildds are langpack'd again. [08:59] pitti, using the .main branch is a good decision [09:02] thekorn: that will work for the 'outside' retracer, but of course not for the ones in the chroots [09:04] pitti, maybe we should think about an always up-to-date PPA [09:05] thekorn: if we can (reasonably) count on API backwards compatibility, I could also do some dirty tricks as symlinking the outside branch checkout to the retracer chroots [09:08] pitti, I do not plan the change the API in the intrepid cycle [09:09] thekorn: I might try that then [09:15] thekorn: hm, attachment parsing is broken in hardy final as well [09:16] $ python -c 'import launchpadbugs.connector as Connector; cb = Connector.ConnectBug(); cb.authentication=".lpcookie"; b=cb(218113); print b.attachments' [09:16] [] [09:16] thekorn: is that also fixed in .main? 
[09:18] pitti, it works in .main [09:19] the reason for this was yet another string change in launchpad [09:19] gah === Tweenaks is now known as Treenaks [09:32] thekorn: hah, that 'symlink to p-lp-bugs checkout outside of the fakechroot' seems to work === gnomefre1k is now known as gnomefreak [09:34] thekorn: do you already have a test script which exercises the usual stuff? [09:34] thekorn: which we could run after a new lp release (also of edge)? [09:35] mvo: my System Monitor window is being shy [09:35] for no readily apparent reason, it's transparent [09:35] p-lp-bugs should be a lot nicer in a couple of months, particularly with the lack of breakage from changes. [09:35] wgrant: do you know something about a stable XML-RPC LP interface? :-) [09:36] pitti: With Python library included. [09:36] Keybuk: i saw same thing in blam i upgraded to a PPA package of xulrunner-1.9 and it fixed it [09:37] but mozilla should be releasing RC1 in next few days/week [09:37] System Monitor != XUL [09:37] wgrant: blam isnt xul either [09:38] least i dont think it is [09:39] william@irranat:~/Development/ivle/trunk$ apt-cache show blam | grep gecko > /dev/null && echo "Isn't it?" [09:39] Isn't it? [09:39] pitti, I started some test here: https://code.edge.launchpad.net/~bughelper-dev/python-launchpad-bugs/better.testing.errors [09:39] nope its not from what show says [09:39] wgrant: grep -q [09:39] StevenK: That works too. [09:39] thekorn: ah, cool [09:39] gnomefreak: gecko! [09:39] pitti, I plan to add this in the intrepid cycle [09:41] wgrant: http://gnomefreak.pastebin.ca/1012531 [09:41] pitti, sorry I'm off for a weekend at the nordsee now, but I think it should be not hard to understand how this works, it's basically python testing/run_tests --all [09:41] it looks mono to me [09:42] I smell a libgecko2.0-cil. [09:42] wgrant: ok i missed that [09:43] grep doesn't lie. [09:43] thekorn: right; thanks a lot, and enjoy the Nordsee!
[09:43] thekorn: with that much sun it should be fun [09:44] yes, of course === Martinp24 is now known as Martinp23 [10:26] hmm [10:26] preliminary testing suggests apt-rsync cannot work [10:28] at least, not without huge effort somewhere [10:51] http://jldugger.livejournal.com/6115.html for anyone interested in hi5's rsync stuff from earlier. [10:53] I haven't been following apt-rsync [10:55] * lucent reads [10:56] pwnguin: oh geeze, they're talking about rdiff on compressed data? [10:57] to be fair, all you have to do is patch gzip to make it reset every so often [10:58] i ran across a project to try that, with suggested losses of 3 percent compression [10:58] or you could decompress the whole mirror ;) [10:58] (and .debs I guess) [10:58] that sounded silly when I thought to suggest it [10:58] but yeah [11:00] forget Deb package system for a moment, and let's take Gentoo system of source code downloads as an example [11:00] someone already suggested that [11:00] it's still mind-boggingly difficult to track diffs between source versions [11:01] uuh? [11:01] say you have a source-based dist [11:02] foo user wants an efficient and minimal update to the data they already have [11:02] how do you make a package management system which only grabs the diffs between one version of code and the next? [11:02] it's okay for one or two packages, but the storage and management of so many packages, it is a lot of CPU overhead [11:03] lets ask linus torvalds [11:03] i hear git does this [11:03] heh [11:03] yes scm systems are brought in [11:03] have you heard about making apt torrent'able? 
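[editor's note] The "patch gzip to make it reset every so often" idea discussed above is the trick that shipped as gzip's `--rsyncable` option (a Debian patch at the time, with roughly the quoted few-percent compression loss). A minimal sketch of the underlying problem, i.e. why rsync/rdiff transfer nearly the whole file when the payload is compressed; the filenames and temp directory are illustrative:

```shell
# A one-line change near the start of the input perturbs essentially the
# whole gzip stream: deflate's LZ77 matches and Huffman tables diverge at
# the change and never resynchronize.  (--rsyncable mitigates this by
# periodically resetting the compressor on content-defined boundaries.)
set -e
work=$(mktemp -d)
seq 1 20000 > "$work/v1"                       # compressible input, ~100 KB
sed '5s/.*/CHANGED/' "$work/v1" > "$work/v2"   # alter one early line
gzip -c "$work/v1" > "$work/v1.gz"
gzip -c "$work/v2" > "$work/v2.gz"
total=$(wc -c < "$work/v1.gz")
diff=$(cmp -l "$work/v1.gz" "$work/v2.gz" 2>/dev/null | wc -l)
echo "input changed: 1 line; compressed bytes differing: $diff of $total"
rm -rf "$work"
```

Running the same comparison on the *uncompressed* files shows only the changed region differing, which is exactly the case rsync's rolling checksum handles well.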
[11:04] like p2p style [11:04] in your journal entry, I read that the problem is about bandwidth being expensive, so I'm going on a tangent here [11:04] this was also brought in [11:05] when Ubuntu makes a release, I saw an unnacceptable slowdown in the mirror system to fetch updates [11:05] i think the guy who suggested it was called selfish and unempathetic ; [11:05] ;) [11:05] :( [11:05] like i said, too much investment in the solution instead of the problem [11:06] <\sh> who deals with NEWing before the weekend? :) [11:08] \sh: my archive day today [11:09] \sh: I just finished kicking fakechroot and the retracers, I'll start dealing with archive stuff now [11:14] Keybuk: your system monitor window is transparent? out of the blue? you opened it and it was transparent? [11:15] mvo: yeah [11:15] argh, len(NEW) == 584 [11:15] some kind of weird interaction with murrine [11:16] I guess that the system monitor app is rgba-aware [11:16] but dunno why it ended up semi-transparent [11:17] Keybuk: I switched to human-murrine to test it - is that sufficient? [11:17] probably, yeah [11:17] doko: can we remove gcc-4.0 from intrepid? it was removed in Debian long ago [11:19] "long ago" = this year, but yes, we can do that now [11:20] doko: I meant 'not just last week' or so [11:20] mvo: does for me, yes [11:20] doko: ok, thanks [11:20] * mvo tries it on a intel system [11:20] Keybuk: it seems to be ok on a nvidia [11:24] It's fine on hardy/i915. [11:28] pitti: when are the next language packs update planned? I think spanish users really would appreciate, we keep receiving bugs about nautilus crashing or displaying weird strings [11:28] seb128: can they test the PPA ones? 
I was told to hold off until LP translations fixes a serious bug with teh Firefox translatiosn [11:30] pitti: well, the strings have been fixing in rosetta sor ppa should be correct [11:31] s/fixing/fixed [11:31] right; as I said, I'm waiting for asac/jtv to give me thumbs up [11:31] ok [11:32] what is the ppa line to add again? I should not it [11:32] s/not/note [11:32] can't type today ;-) [11:32] deb http://ppa.launchpad.net/ubuntu-langpack/ hardy main [11:32] danke [11:32] de rien === Shely_ is now known as Shely [11:39] argh [11:39] mvo, why did u-m's icon change to such a scary thing ? [11:40] (i had the nice star/sun a second ago, now there is a huge red arrow) [11:40] isn't that for security updates? [11:41] oh, we have different icons for different purposes now ? [11:41] ah, I just wondered about the same [11:51] james_w, right, thanks for the hint, installing only the security updates changes the icon back [11:51] no problem [11:52] pitti: seb128: current state, waiting on decision from kiko for the cherry-pick [11:56] Why are there so many pending builds for fakechroot in the intrepid build queue? [11:57] because pitti plays around :) [11:57] pitti: please give back strigi/0.5.9-1 [11:57] hunger: many as in 3 (ok), or many as in 1000 (bug) [11:57] ? [11:57] hunger: yes, took me three uploads to get it really right, sorry [11:57] pitti: 3 in the top 4 pending requests. [11:58] hm, it shouldn't build the old versions... [11:58] Riddell: done [11:58] thanks [11:58] pitti: Oh, sorry. I missed the diff in the last digit of the version:-) [11:58] * ogra wonders why he still didnt get the new vbox modules yet [11:58] they were uploaded days ago [12:02] ogra, you were working with loopfiles some time ago', correct? [12:02] do you have any link to your project? [12:02] xivulon_, yes and i resorted to use vfat and syslinux to not waste more time on weird grub hacks [12:03] what was your project about, if I may ask? 
[12:03] xivulon_, i didnt push the code up anywhere yet, its a specially custom image for the Classmate PC [12:04] but its very dedicated to the HW in its design [12:05] ogra, ok, was looking into bootable usb key/hd devices, and was wondering if there was any overlap [12:05] http://people.ubuntu.com/~ogra/classmate/images/hardy/ [12:05] xivulon_, there wil be overlap for sure if we go towards intrepid [12:06] i want to look into an easy USB image builder for this release we all can use [12:06] ogra: yes, what james_w said [12:06] i was looking for grub4dos + liveCDiso + overlaid file via uninonfs to make a bootable r/w liveCD like environ [12:06] its a general topic on the platform team spec list :) [12:07] ogra if I am around when that is discussed at next uds I will certainly attend [12:08] only there last 2 days :( [12:10] ogra I would think though that grub4dos should also work well for you [12:10] and possibly even grub2 [12:10] xivulon_, you mean you will be in prague for the last two days ? 
[12:11] grub2 would be the way to go imho, but its not clear that will be ready in time for intrepid yet i think [12:11] ogra yes [12:12] well, lets see that we get the schedule adapted for this :) and at least have a session about it in the last two days then :) [12:13] that would be nice, thanks [12:14] since we'll run into probs with archive size by adding a full usb image my idea was to have a script on the liveCD that builds you one so you can easily generate a liveimage with ubiqiity on the fly fom the iso [12:15] d-i already supports USB keys with some easy fiddling that just needs some improvement [12:19] ogra, in early wubi days, I had this approach of using a LiveCD as is (squashfs) but replacing some files therein at runtime [12:19] the hooks are still there and it is well possible to do so [12:20] well, if you have unionfs on top, there is no prob to use three directories ;) [12:21] yes, I was using unionfs, to override default files, this is how I added loopfile support to the installer in 7.04 [12:21] what i do for the installer is simply using the squashfs as is in any case, have an ext3.img where i do my installer script specific adjustments and then in the end have both of these redonly merged with a tmpfs [12:22] I did something similar squashfs (ro) + tmpfs (rw) merged via unionfs, and then copying over the files from a folder (this spares me the trouble of having to create an ext3.img). 
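[editor's note] The squashfs-read-only plus tmpfs-writable union described above can be sketched as below. This is illustrative only: it requires root and a unionfs-capable kernel of that era, and all paths (including the overrides directory) are hypothetical.

```shell
# Read-only base image + writable scratch layer, merged with unionfs.
mkdir -p /mnt/ro /mnt/rw /mnt/union
mount -t squashfs -o loop,ro filesystem.squashfs /mnt/ro   # ro base
mount -t tmpfs tmpfs /mnt/rw                               # rw overlay
mount -t unionfs -o dirs=/mnt/rw=rw:/mnt/ro=ro unionfs /mnt/union
# All writes land in the tmpfs branch; default files can be overridden
# simply by copying replacements into the union:
cp -a /path/to/overrides/. /mnt/union/
```

This is why no ext3.img is needed for the rw side: the tmpfs branch absorbs the changes, at the cost of losing them on reboot.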
[12:22] the installer dumps the squashfs in place and makes adjustments with mounted /cow in the target (the classmate keeps the readonly image, something you likely dont want for normal installs) [12:24] I think eee has a similar approach internally, ro fs + rw fs via unionfs [12:24] likely [12:24] even the eee has 4G [12:25] that would fit a scaled down normal install [12:25] (classmate has 2G in the smallest setup) [12:25] (and i have to squeeze 3G in there :) ) [12:29] heh [12:42] well in fact I have to rectify my previous statements, in 7.04 even if support for livecd/squashfs + unionfs was in the code, I ended up using the alternate ISO for the actual installation [12:51] pitti: thanks for you work :) [12:52] emgent: you're welcome :) [13:00] oh, fun, now i get why i dont have gotten any update to the vbox modules, apprently the last kernel update even removed the ones for -16 ... weird === xivulon_ is now known as xivulon [13:07] seb128, mvo: which was the good one? ccsm or compizconfig-settings-manager? the former is still in Debian [13:08] pitti: ccsm [13:08] pitti: and simple-ccsm [13:08] hm, so shall I remove [13:08] compizconfig-settings-manager | 0.7.4-0ubuntu2 | intrepid/universe | source, all [13:08] and sync ccsm? [13:08] that's a question for mvo I guess [13:08] ccsm has no ubuntu change? [13:08] they are not the same thing, they are different software [13:09] ccsm isn't in Ubuntu [13:09] pitti: please don't yet - debian seems to have choosen a different name for the same thing [13:09] E: ccsm is trying to override compizconfig-settings-manager_0.7.4-0ubuntu2 without -f/--force. [13:09] (oh well) [13:09] pitti: what version does debian have? 
[13:09] http://packages.qa.debian.org/c/ccsm.html [13:09] 0.6.1~gitsomething [13:09] pitti: please blacklist it for now, that is a ancient version [13:10] (given that both broke my setup i'D vote to remove them all :P ) [13:10] mvo: ok [13:10] there is a effort in debian with compiz, but they decided to go a different route, e.g. not use cdbs [13:16] * pitti spots bzr-dbus from syncing [13:16] WTH? :-) [13:17] works nicely with bzr-avahi [13:32] . o O { libtheschwartz-perl ??? } [13:32] people don't stop inventing crazy names [13:32] heh [13:32] that's a good one. thank the livejournal people === BenC__ is now known as BenC === ryu2 is now known as ryu [14:12] pitti, did the glib fix for not showing inaccessible mounts work for you in your ltsp tests ? even though i got confirmation from users i still see the issue in virtualbox === jw2328_ is now known as james_w [14:18] ogra: yes, it worked for me (didn't I write so in the bug?) [14:19] pitti, yes, you did [14:19] i wonder why it doesnt work for me :( [14:20] ogra: hm, I remember testing the one with the unmount menu [14:20] ah [14:20] ogra: I don't actually remember testing the 'hide inaccessible' one [14:20] what's the bug#? 
[14:21] 210379 [14:21] bug 210379 [14:21] Launchpad bug 210379 in glib2.0 "should not list mounts that the user doesn't have permission to use" [Low,Fix committed] https://launchpad.net/bugs/210379 [14:22] you said you can verify [14:22] ogra: right, I didn't test that yet [14:22] * pitti does now [14:22] but not that you did [14:22] bu i know at least two users from #ltsp for which it fixed the issue [14:22] that's actually easy, login as a second user, plugin an USB drive, it shouldn't appear for the first user [14:22] one commented [14:22] right [14:22] I do know that this was the case before, and you got a nautilus window and an error [14:23] well, in vobox i have a floppy icon on the desktop for every user and mounting an iso as CD shows up for everyone as well [14:24] erk [14:24] I did that, and now my primary user has a completely white window here [14:24] (nautilus) [14:24] sounds not right [14:24] and another completely white dialog box [14:24] what about desktop icons ? [14:25] got it as well [14:25] :( [14:25] i wonder why the two guys saw it fixed then [14:26] i guess we need to look into that agin [14:26] *again [14:26] thanks for testing [14:26] * ogra gets glib and takes a look at the patch [14:26] ogra: well, CDs might be a different case -- are they really mounted as 700? [14:26] yes [14:27] but probably not if you have it in fstab [14:27] ? [14:27] and all in subdirs owned by the user in /media [14:27] fstab is different [14:27] i have /media/ogar/cdrom and /media/test/cdrom [14:28] *ogra [14:28] and i have a hardy cdrom mounted on the server which is actually supposed to show on all desktops [14:30] is there anyone here who knows about the latest kernel in hardy and how alsa is wired with it? 
[14:31] I'm getting a general protection fault when trying to load a module built against the sources of the latest kernel [14:31] so I'm actually just looking for sources which are in sync with the running kernel which comes with ubuntu hardy [14:32] I've got quite alot bugrequests because of that [14:32] the solution for me would be to wipe out that kernel and set up my own one but I'd like to avoid that if possible [14:32] mrec: apt-get source linux linux-ubuntu-modules-2.6.24 should give you the two relevant source packages [14:33] mrec: linux is more or less teh upstream kernel, and l-u-m are third-party, and backported modules [14:33] mrec: I believe that l-u-m has newer ALSA drivers [14:33] pitti: alsa is the problem yes [14:33] it's not in sync with the kernel sources which modules are normally built against [14:33] dpkg -L linux-ubuntu-modules-2.6.24-17-generic [14:34] mrec: ^ check this output whether the affected module/driver is in l-u-m or the kernel [14:34] mrec: (that's the kernel from -proposed, BTW; you might have -16, which is hardy final) [14:34] yes I have 16 [14:34] mrec: btw, if you don't use hardy-proposed, it might be worth a try; -17 has several fixes which might also solve your problem [14:35] it's about empiatech hybrid analog / digital TV drivers [14:35] is there a chance to get them included even in final now? [14:35] pitti, oh, its 755, my bad [14:35] wtf [14:36] mrec: see https://launchpad.net/ubuntu/+source/linux-ubuntu-modules-2.6.24, the second-latest changelog [14:36] who changed that? grmbl [14:36] mrec: that pretty much sounds like your problem [14:36] ogra: 'changed'? [14:36] pitti, it was 700 [14:36] checking upstream bzr [14:37] pitti: ya I read through that one already, do you know when this will be in upstream? 
[14:37] urgency should be high actually [14:38] mvo: it's not an upstream problem, it was an Ubuntu packaging bug [14:38] I could immediatelly add around 60 devices to ubuntu if this would work [14:38] mrec: if you mean '-updates', not 'upstream', a week or two [14:40] pitti, do i need a separate SRU for that one digit change in ltspfs or can that just go under coverage of the existing bug (i.e. can i just upload a fixed package to proposed or do i need extra paperwork ?) [14:40] pitti: ah well it doesn't seem to matter it's just a source change no binary one. [14:40] So I guess to fix my problem I just need those sources [14:51] pitti, http://paste.ubuntu.com/11110/ [14:55] pitti, confirmed, that fixes it [15:14] does anyone know the name of the application which is launched in gnome when pressing ALT + F2 ? [15:16] You mean the "run application" dialogue? [15:16] andrew___: yep [15:17] After a quick play with `ps`, I don't think it's an application at all. [15:17] andrew___: i would like to find the sources of this dialog [15:17] andrew___: yeah, it must depend on the panel or something like that [15:18] Makes sense. [15:30] ogra: it should become a separate task; anyway, I just added a comment; this is all very confusing [15:30] andrew___: I think it's produced by the gnome panel [15:31] pitti, the ltspfs fix is mentioned in the gnome bug [15:32] andrew___: just checked, it's indeed included in the gnome panel code [15:33] pitti, /media/$USER isnt a mointpoint ;) [15:33] ogra: oh? [15:33] /media/$USER/$client_device [15:34] thats the actual mountpoint [15:34] ogra: ah, I see [15:34] but shouldn't $client_device itself be 700? [15:34] so so get indeed E_ACCESS :) [15:34] (by virtue of mounting with umask=700) [15:34] hmm [15:34] ogra: either way, I wonder why this change works for you, but not here [15:35] did you patch lbmount ? [15:35] sbalneav, ! [15:35] Morning! 
[15:35] youre alive :) [15:36] :) [15:36] hey sbalneav [15:36] Hey pitti [15:38] pitti, lbmount creates the /media/$UID dir (if nonexeisting) on plug and removes it (if empty), i doubt just changing permissions will be a proper way of reproducing [15:39] the devices are all mounted 755 [15:39] with user=$USER [15:42] ogra: ok, if you actually *want* the devices to be umask=022, then I see why you need to chmod 700 the parent dir [15:43] ogra: (in Ubuntu proper we mount devices with umask 077 by default) [15:43] ogra: I am just saying that currently nautilus still tries to open a window for inaccessible devices for me [15:43] pitti, i wonder why they appear as 022 :/ [15:44] it will only not open them if they are in a subdir [15:44] davidz refused to do it for all devices [15:47] ogra: you mean it only checks /media/foo/bar, not /media/bar? [15:47] why on earth?? [15:47] right [15:47] read the upstream bug [15:47] its silly but no way to fix it if its true [15:48] * pitti sends a rant to the upstream bug [15:48] seems gnome-vfs was to hacked up to fail on hanging mounts gvfs is shiny and beautiful and the glossy shoeshine but will hang on E_ACCESS on stale nfsmount [15:49] thats what the current tenor is apparently [15:53] ogra: replied upstream and in the ubuntu bug [15:55] pitti, http://bazaar.launchpad.net/~ltsp-upstream/ltspfs/ltspfs-trunk/revision?start_revid=wtogami%40redhat.com-20080428222323-en6gyfai5fzwdz8k&filter_file_id=lbmount.c-20060916234153-8xltobgv2a2xtqy1-3 [15:55] pitti, explanation about the choice of 750 [15:55] even though they didnt seem to take 0700 into account at all [15:56] ogra: what's the group of those directories? [15:56] lets see [15:57] ah, thanks for asking ... 
that was the missing bit, the dirs are root.$USER and 750 [15:57] ogra: weird [15:57] well, if they are managed by a root process, it's ok [15:57] no, bmount is suid root, remeber [15:57] all fine that way [15:57] *lbmount [15:57] yeah, I said 'weird', not 'wrong' :) [15:57] * pitti hugs ogra [15:58] :) [15:58] ogra: so, I don't mind that ltspfs change, but I'd still like to see gvfs be fixed properly [15:58] the current behaviour sucks for multiple users [15:59] pitti, your suggestion on the bug wont help ltspfs [15:59] they are no local devices [15:59] ogra: right [16:00] so that still needs special casing [16:00] ogra: if you want the devices to be umask=022, it won't help either [16:00] ogra: right, I agree [16:00] anybody know why the story of why xmms was dropped ? [16:00] hwilde: it's dead [16:00] And it was starting to stink. [16:00] and it had long-standing security issues nobody cared about [16:01] imho gio should have a comparison list for filesystems and their capabilities anyway though [16:01] and act accordingly === jwendell is now known as jwendell|lunch [16:01] so is there a lightweight way to play shoutcast streams (without totem and the visualization) ? [16:02] * cody-somerville wonders why synergyc performs so horribly in Hardy but not Gutsy. :/ [16:02] * ion_ typically uses mplayer-nogui, assuming shoutcast streams are just streaming MP3s. [16:03] ion_, it's a url stream playlist http://www.shoutcast.com/sbin/shoutcast-playlist.pls?rn=2916&file=filename.pls [16:04] I just like the winamp look and feel of xmms :/ [16:04] ogra: processed (please upload to intrepid, too) [16:04] pitti, debians ltspfs will have the fixes [16:05] i'm starting t switch to syncs with the ltsp stuff where possible [16:05] i just havent decided on ldm yet, thats ahy ltspfs still sits on mom [16:05] *why [16:25] * davidm is back (gone 17:37:19) [16:26] davidm: Thanks for the info! [16:26] !away > davidm [16:37] Keybuk, iftab replaced by udev now?? 
:/ [16:37] hwilde: no [16:37] iftab replaced by udev A LONG TIME AGO ;-) [16:37] heh [16:37] :-) [16:38] are there a set of tools I could be using to make an image that can be ported to multiple machines? [16:38] !away > ion_ [16:38] now I have to change UUIDs for the harddrives and MAC addresses in udev [16:39] how do oem people build images to clone? [16:43] hwilde: Have you tried asking in #ubuntu? [16:43] My understanding is that they're better with support type stuff. === danielm_ is now known as danielm [16:45] hwilde: maybe http://bethesignal.org/blog/2008/04/16/this-is-progress-iftab-vs-udev/ helps [16:48] #ubuntu is just a bunch of noobs asking each other noobish questions... I don't even get responses there [16:48] Fair enough, shows what I know :) [16:49] hehe [16:50] I was not so much asking for support, but more asking how I might develop an image that could be cloned to multiple machines... so I thought maybe devel could help :) [16:50] isn't the correct way not to use images, but to use pre-seed installs, so that information is generated correctly? [16:50] * hwilde writes down new vocabulary word "pre-seed installs" [16:51] Ng, any link or resource about this? [16:52] dholbach: typical jdub sillyness [16:53] Keybuk, can't I just delete that file and trigger whatever builds that during the initial install and have it generate with the correct mac ? [16:54] yes [16:55] can you replace "whaver" in that sentence with what I should be looking for :) [17:00] Is it save to remove the gcc4.2 stuff when upgrading from hardy to intrepid? [17:00] What about the perl-holdback? I guess I need to wait for all the perl stuff in the build queue to get done? [17:01] Keybuk, udevinfo -a -n eth0 doesn't work... 
how do I get it [17:02] interfaces don't have devices [17:02] udevinfo -a -p /class/net/eth0 [17:03] hunger: if you aren't able to work out the answers to those questions, please don't run intrepid yet [17:07] Keybuk, do you think its safe to take out the MACs and use DRIVERS=="e100" for eth0 and DRIVERS=="ath_pci" for ath0 http://pastebin.com/m770185db [17:08] cjwatson: Add updates to hardy then so that I don't need to suffer from my package addiction ;-) [17:08] !amaranth [17:08] Stabbity stab [17:09] cjwatson: Withdrawal sympthoms have set in;-) === Shely_ is now known as Shely [17:10] hunger: I'm sorry, but this is still not the place to ask basic questions about how to deal with routine package upgrades in a development release, even if you think our update standards are wrong. [17:10] hunger: if you upgrade through intrepid and don't have the ability to deal with this sort of thing, then your system is almost guaranteed to break beyond your ability to fix it [17:11] cjwatson: I'll manage, no worries:-) [17:11] cjwatson: I've been doing this since before breezy. I just don't know whether gcc 4.2 or 4.3 is the default nowadays. [17:12] hwilde: I would just delete the file and let it be generated automatically [17:12] there's zero point writing that by hand [17:12] Keybuk, I don't know how to do that [17:12] Keybuk, I image the disks using ghost [17:12] cjwatson: So far I have not seen that documented... but that probably is my own problem since I hate to use LP:-) [17:12] hwilde: you don't know how to delete files? [17:12] Keybuk, funny... after I delete it, what would regenerate it with the correct macs ? [17:13] automatic stuff [17:13] !find gcc intrepid | hunger [17:13] hunger: Found: gcc, gcc-4.1, gcc-4.1-base, gcc-4.1-doc, gcc-4.1-multilib (and 35 others) [17:13] hwilde: so, do you think that was helpful ...? [17:13] maybe if it displayed the other 35 [17:13] hwilde: Thanks, but I do have the package list. 
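[editor's note] The pastebin in the udev thread above comes down to a choice between two matching styles in `/etc/udev/rules.d/70-persistent-net.rules`. A hypothetical fragment (the MAC address and exact rule layout are illustrative, modeled on the hardy-era persistent-net generator):

```
# MAC-based matching -- what the generator writes; pins names to one machine,
# which is why a cloned image carries the wrong addresses:
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"

# Driver-based matching -- survives cloning, but becomes ambiguous as soon
# as two interfaces share a driver:
SUBSYSTEM=="net", DRIVERS=="e100", NAME="eth0"
SUBSYSTEM=="net", DRIVERS=="ath_pci", NAME="ath0"
```

As noted in the conversation, the simplest route for cloned images is to delete the file before imaging and let udev regenerate it on the clone's first boot.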
[17:13] hunger: consider for example 'apt-cache show gcc'
[17:14] Keybuk, seriously? the udev rules will just rebuild themselves on the next reboot ?
[17:14] yes
[17:15] wow
[17:16] cjwatson: Thanks. So not having cpp-4.2 deinstalled is an oversight and will be fixed at some point.
[17:16] hunger: why would we require the old compiler to be deinstalled?
[17:16] that would be inconvenient for many people
[17:16] the different compiler versions don't conflict
[17:17] that statement would be funny out of context
[17:17] if you've been doing this since breezy, you'll have seen this before
[17:17] cjwatson: I know. But in this system cpp is dragged in as a dependency, so it should get deinstalled.
[17:19] cjwatson: Ah, found it. libqt4-dev still depends on it indirectly :-) So I can indeed remove it. Thanks.
[17:19] Keybuk, can that rebuild be triggered without a reboot ?
[17:20] hwilde: Remove the card or its driver.
[17:20] hwilde: yes, but if you don't know how, you don't want to do it
[17:20] Keybuk, heh, that's what I thought
[17:20] very cool tho
=== gnomefre2k is now known as gnomefreak
[17:20] it'll generate it exactly the same as the one you just deleted, after all
[17:20] now if grub can just do the same I can delete the menu.lst :)
[17:21] Keybuk, no I can delete /etc/udev/rules.d/70-persistent-net.rules from the image, and first reboot on the cloned machine it will regenerate with the correct MACs
[17:22] right
[17:22] that is freaking awesome
=== jwendell|lunch is now known as jwendell
[17:24] only thing left is the UUIDs in fstab and grub. am I losing anything by replacing the UUID with generic /dev/sda1 so it can be cloned to another machine?
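The image-preparation step hwilde and Keybuk settle on above is a one-liner. A minimal, self-contained sketch — a throwaway directory stands in for the mounted ghost image so the example can run anywhere; on the real image you would target `/etc/udev/rules.d` directly:

```shell
# Stand-in for the mounted image root (an illustrative assumption,
# not a path from the discussion).
IMG_ROOT=$(mktemp -d)
mkdir -p "$IMG_ROOT/etc/udev/rules.d"
touch "$IMG_ROOT/etc/udev/rules.d/70-persistent-net.rules"

# The actual step: remove the generated rules before taking the image.
# udev recreates the file with the clone's real MAC addresses on the
# clone's first boot -- nothing else is needed.
rm -f "$IMG_ROOT/etc/udev/rules.d/70-persistent-net.rules"

[ ! -e "$IMG_ROOT/etc/udev/rules.d/70-persistent-net.rules" ] && echo "rules removed"
```
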
[17:24] /dev/sda1 isn't generic
[17:24] there are still systems on which it will be /dev/hda1
[17:24] if you don't care about that, then you aren't losing anything
[17:24] not on mine, they're all the same
[17:25] the older one is hda1 tho you're right
[17:26] if there's no benefit then why complicate things with UUIDs?
[17:30] there is a benefit
[17:30] 17:24 there are still systems on which it will be /dev/hda1
[17:31] I didn't mean purely older systems
[17:31] and also it made the transition vaguely sane
[17:32] ahh I see
[17:33] at install we can see that hda1 is / and hda6 is /home but if at upgrade time those change to sda1 and sda6 we have no way of knowing
[17:33] yeah
[17:33] they might have changed to sdb because maybe you already had an sda, etc
[17:33] that makes sense
[17:34] but I can't clone UUIDs between systems, and /dev/sda1 works fine, so i'll just go with that
[17:34] sure, in your case you know what your hardware is and can take such shortcuts
[17:35] hwilde: I think the UU part of UUID makes that a bad idea.
[17:35] :D
[17:35] we know UUIDs are a bit unwieldy, but unfortunately they remain (AFAIK) the best solution to the problem at hand
[17:35] oh nvm
[17:35] misread your sentence
[17:35] sweeheh
[17:35] thought you said you were going to clone the UUID from another system
[17:35] I wish
[17:37] if you insert a second drive, it may become sda1
[17:37] with your original sda1 now sdb1
[17:37] and *boom* she vill not boot
[17:37] seriously?
[17:38] seriously
[17:38] why wouldn't the new one have the new name
[17:38] better question
[17:38] why _would_ it?
[17:38] hwilde: because there's no guaranteed order of initialization
[17:38] hwilde: even on the same system drives could come up in somewhat nondeterministic order
[17:38] I had one system with 5 drives where 2 of them would consistently swap block device names every boot
[17:39] so how does Dell do it? They type in the unique UUID on every system??
[17:39] I would imagine that every Dell computer has the same root filesystem UUID
[17:39] since it's a cloned image
[17:39] (but maybe not, since that would lead to other problems)
[17:40] Keybuk: I'd expect them to have a smarter way of generating them?
[17:40] (I hope)
[17:40] indeed
[17:40] there must be oem tools to handle this... I want them
[17:40] hwilde: oem-config
[17:40] the installer has its own oem mode
[17:40] it's not hard to programmatically get the UUID of a disk and replace placeholders in fstab/menu.lst with it
[17:40] oem-config doesn't handle UUIDs, though it does have hooks into which you can drop your own scripts to do that kind of thing
[17:41] jdong, it is if the system won't boot.
[17:41] why wouldn't it boot?
[17:41] hwilde: presumably it's done by the installer
[17:41] as the last step of installation
[17:41] jdong: he's ghosting an image
[17:41] cjwatson: ah.
[17:41] ghosting an image copies the filesystem exactly, right?
[17:42] yeah if I was going to go through the installer every time this wouldn't be an issue
[17:42] yes it copies bit by bit the entire flashcard
[17:42] file-by-file, or at the filesystem level?
[17:42] I'm guessing binary?
[17:42] (at the file system level)
[17:42] hwilde: so what's the problem with UUIDs?
[17:43] hwilde: you may be missing the information that UUIDs are associated with a filesystem, not with the hardware
[17:43] they are not unique ?
[17:43] hwilde: not if you clone them
[17:43] they are unique to the filesystem
[17:43] hwilde: if you've bit-by-bit duplicated the filesystem
[17:43] hwilde: the UUID is a field in the superblock
[17:43] you would have duplicated the UUID too
[17:43] hwilde: if you do a binary copy it duplicates the UUID
[17:43] hwilde: Didn't we have this conversation last week?
[17:45] I am struggling to remember said conversation, or the reason why UUIDs allegedly did not work... i'll give it a shot
[17:46] are you sure you'll be able to support the systems you're cloning?
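jdong's "replace placeholders in fstab/menu.lst" idea above can be sketched in a few lines of shell. Everything below is illustrative: the `@ROOT_UUID@` token, the demo file, and the hard-coded UUID are assumptions, not anything from the log. On a real hardy system the UUID would come from `vol_id --uuid /dev/sda1` (later tools use `blkid -o value -s UUID /dev/sda1`):

```shell
# Build a demo fstab containing a placeholder token, the way a cloned
# image might ship it (the token is a made-up convention).
fstab_demo=$(mktemp)
printf 'UUID=@ROOT_UUID@ / ext3 defaults,errors=remount-ro 0 1\n' > "$fstab_demo"

# In practice: uuid=$(vol_id --uuid /dev/sda1)
uuid="0a1b2c3d-1111-2222-3333-444455556666"

# Substitute the real UUID in place -- the whole "oem tool" in one sed.
sed -i "s|@ROOT_UUID@|$uuid|" "$fstab_demo"

cat "$fstab_demo"
```

A first-boot hook dropped into oem-config (which, per cjwatson above, supports exactly this kind of script) could do the same against the real `/etc/fstab` and `menu.lst`.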
[17:46] nah we have a support department for that
[17:46] :)
[17:47] ...
[17:47] I'm way out of my depth here, but is this something that LVM could help with?
[17:47] I don't think LVMs are any easier to boot ;-)
[17:48] andrew___: same problem
[17:48] If UUIDs on normal drives are somehow hardware-specific, LVM UUIDs can't be.
[17:48] the filesystem in the LVM has a UUID
[17:48] the LVM PV has a UUID
[17:48] it sounds like everyone says to just use the UUIDs, and I can't remember why we aren't, so i'll try it
[17:48] and the drive the PV is on would still change names
[17:48] were they never unique?
[17:48] like back in 6.06 ?
[17:48] hwilde: *wah*wah*scary*UUID*
[17:48] hwilde: they are unique for each invocation of mkfs
[17:48] if you copy the filesystem image you make, the UUID is also copied
[17:49] the UUID of every Ubuntu Live CD is identical, since we don't mkfs on each boot ;)
[17:49] great then I can just clone them
[17:49] but... how do I find the UUID now to restore fstab and grub :)
[17:49] UUIDFSVOUU
[17:50] hwilde: vol_id
[17:53] cjwatson: mind if I pick your brains a bit again?
[17:53] Keybuk, what kind of device path is it expecting? it doesn't like sda /dev/sda sda1 /dev/sda1
[17:54] Keybuk, nvrmind
=== Shely_ is now known as Shely
[18:14] andrew___: now is probably not the best time, I'm afraid
[18:16] kirkland: i hear you're trying to get a hold of me?
[18:17] djwon1: hey there, i just msg'd djwong
[18:17] djwon1: yeah, deneen pointed me to you about a power management driver you're working on....
[18:18] which one?
[18:18] there's two kernel drivers so far
[18:18] djwon1: :-) you tell me....
[18:18] ibmpex = old power meter hardware interface driver
[18:18] ibmaem = new (2006 onward) power meter interface driver
[18:19] djwon1: ibmaem is targeted for 2.6.26?
[18:19] maybe.
it's in akpm's tree right now; don't know if he'll push to linus before .26
[18:20] alternately the hwmon maintainer might come back to life and push it to linus late in the rc cycle (he did for adt7473 in 2.6.25-rc2)
[18:21] djwon1: gotcha
[18:21] djwon1: is there some documentation overview you can point me to for ibmaem ?
[18:21] the 2.6.26 merge window is over
[18:23] Amaranth: yes it is, but linus took new drivers after the 2.6.25 merge window closed, on the grounds that there weren't any regression possibilities
[18:24] Hey guys is there a way to manage PolicyKit for more than one computer remotely
[18:24] kirkland: yes there is, but the doc file fell off the patch :(
[18:24] djwon1: ;-) could you send something or a url my way? kirkland@canonical.com
[18:24] sure
[18:25] djwon1: deneen also mentioned a userspace component, written in python?
[18:25] djwon1: still undergoing OSSC?
[18:25] who knows
[18:25] still wrangling with "IP concerns" or some nonsense like that
[18:25] djwon1: fun, fun
[18:25] a year of meetings for 5 months of coding work
[18:26] djwon1: is the userspace stuff necessary for the driver to do any good?
[18:26] tis not required
[18:27] djwon1: can you tell me briefly what ibmaem does by itself then?
[18:27] reads air temperature/energy use registers from the BMC
[18:28] djwon1: and logs them to /proc or /sys?
[18:28] no logic involved, just simple ipmi commands
[18:28] and exports them via sysfs
[18:28] djwon1: and the userspace code does something smart with cpu freq scaling based on that data in sysfs?
[18:28] the userspace program figures out correlations between cpufreq steps and power consumption so that you can set a power budget and constrain the system
[18:29] if you only care about _energy_ then it's usually best to run the system at full speed and then fall asleep, of course :)
[18:32] djwon1: okay, cool. well let me know when it makes it into Linus' kernel, and when the userspace stuff comes available.
i can help with the packaging, and getting the module build flag turned on.
[18:33] ok
[18:33] infinity: hi are you around?
[18:38] kirkland: the "old" ibmpex module seems to be turned on already in hardy
[18:39] djwon1: yeah, I saw that
[18:39] old is a bit of a misnomer since we only stopped shipping systems with that interface a month or two ago
[18:39] djwon1: when i saw that, i figured Deneen must have been talking about something newer
[18:40] djwon1: is there a user space app necessary to make that information useful?
[18:40] djwon1: or is that said code in the OSSC black hole?
[18:40] ibmpex is monitor-only, so lm-sensors 3.0 can pretty-print the readings
[18:41] er.. 3.0.2
[18:41] djwon1: gotcha. i'm familiar with lm-sensors
[18:42] djwon1: will lm-sensors be able to read ibmaem data, or will it need to be enhanced to do so?
[18:42] no enhancements to lm-sensors needed
[18:43] except for the parts that make it read power/energy sensors, which is part of the 3.0.2 release
[18:43] (if they've even released that yet)
[18:43] djwon1: gotcha. hardy ships lm-sensors-3.0.0-4ubuntu1
[18:44] djwon1: your new userspace code, are you seeking to contribute it to lm-sensors, or somewhere else?
[18:45] wishing i'd just contributed it to lm-sensors
[18:46] djwon1: always easier to push to an existing project :-P
[18:46] someone got wind of it and said "This would be great to release on IBM's web site"
[18:46] *bam* legal approval hell
[18:47] djwon1: my experience was a) write whitepapers and put those on ibm's website (developerWorks), b) write code and put it where it belongs, in properly managed/packaged open source projects ;-)
[18:48] hmm
[18:48] i might have an old tarball on kernel.org still
[18:51] nope, gone.
[18:52] sadly, the blueprint route is the fastest approach to getting it out
[18:52] though allegedly it's shipping in the idataplex
[18:54] * ogra giggles
[18:54] * ogra giggles even loder
[18:54] *louder
[18:55] Ok, ogra finally had a mental breakdown.
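djwong describes ibmaem as exporting BMC power/temperature registers via sysfs, which lm-sensors then pretty-prints. A sketch of reading such values by hand, the way hwmon drivers conventionally expose them — the directory tree below is a mock built for illustration (on a real system the readings live under `/sys/class/hwmon`, and the exact attribute names depend on the driver):

```shell
# Mock hwmon tree (stand-in for /sys/class/hwmon); the attribute names
# power1_average and name follow the hwmon sysfs convention.
HWMON_ROOT=$(mktemp -d)
mkdir -p "$HWMON_ROOT/hwmon0"
echo ibmaem    > "$HWMON_ROOT/hwmon0/name"
echo 123450000 > "$HWMON_ROOT/hwmon0/power1_average"   # microwatts

# Walk every hwmon device and print any power reading it exposes.
for dev in "$HWMON_ROOT"/hwmon*; do
    [ -r "$dev/power1_average" ] || continue
    printf '%s: %s uW\n' "$(cat "$dev/name")" "$(cat "$dev/power1_average")"
done
```

This is the "no logic involved" side djwong mentions; the budgeting logic (correlating cpufreq steps with consumption) lives in the userspace program stuck in legal review.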
=== Shely_ is now known as Shely
[18:55] opensuse uses my configure-X.sh script from ltsp as default way to configure X on their liveCD
[18:56] hrm
[18:56] (configure-x.sh is a very hackish sed script that just replaces values in the dump of X --configure output, nothing I'd use outside of ltsp)
[18:56] yeah
[18:56] that's....wow
[18:56] yeah
[18:58] "it just seems faster than our sax2 tool, so for trial Beta3 live CDs are now using it"
[19:04] Anyone know the magic incantation in debian/control for a package to depend on the libc6 used to build it?
[19:10] compbrain: ${shlibs:Depends}
[19:10] unless you actually *mean* the version of libc6 used to build it, rather than the usual answer of the version of libc6 that will satisfy its symbol requirements
[19:14] cjwatson: That was the one, thanks!
=== asac_ is now known as asac
=== nand_ is now known as nand
[20:33] I can't find a PolicyKit mailing list, do they just use the hal one?
[20:35] alternatively, anyone here know how to debug why PolicyKit isn't allowing an action? It's telling the app to acquire an authorisation to do it, when I already have said authorisation granted.
=== bimberi_ is now known as bimberi
=== asac_ is now known as asac
[23:11] how does the brainstorm site decide who is a developer?
[23:12] funny, you are the second one to ask that today :)
[23:12] :-)
[23:13] so, basically we (someone from the QA team) set you as Developer
[23:13] Who is the QA team? :)
[23:13] You, I guess
[23:13] Canonical's QA Team + nand + me
[23:14] alright then, can you set me as a developer?
[23:14] do you need my #?
[23:15] done
[23:15] Oh, thanks
[23:15] np
=== asac_ is now known as asac
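For context on cjwatson's `${shlibs:Depends}` answer: it is a substitution variable that `dpkg-shlibdeps` fills in at build time with the versioned shared-library dependencies (libc6 among them) that the compiled binaries actually require. A hypothetical `debian/control` binary stanza using it — the package name and fields here are illustrative, not from the log:

```
Source: example
Section: utils
Priority: optional
Build-Depends: debhelper (>= 5)

Package: example
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: illustrative stanza showing shlibs substitution
 At build time dpkg-shlibdeps expands ${shlibs:Depends} into
 something like "libc6 (>= 2.7)", matching the symbols the
 binary links against.
```

This yields the "version of libc6 that will satisfy its symbol requirements", which, as cjwatson notes, is almost always what you actually want rather than the exact build-time version.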