[00:44] Looks like zirconium has wedged on sysvbanner translation stripping just like molybdenum did for the previous upload [00:46] i have chatzilla and can not connect to ubuntu. I get another window#ubuntu-read-topic [00:51] billisnice: Well, perhaps you should, uh, read the topic? [00:51] i do not understand how to set another port with chatzilla [00:53] billisnice: google for it or ask in #ubuntu [00:53] billisnice: I think its "/server irc.example.org 6667" [00:53] i can get to ubuntu to ask [00:53] billisnice: just type "/join #ubuntu" [00:54] when i type that i get === #ubuntu #ubuntu-read-topic Forwarding to another channel [00:54] i am not very computer literate for sure [00:54] lol [00:56] billisnice: ok sry you have some weird router problem.. im not sure why its not letting you join.. [00:56] billisnice: the topic in that channel says: [00:56] Your router is buggy 1) Please follow these instructions: https://help.ubuntu.com/community/FixDCCExploit to FIX it (yes, it can be fixed) 2) after carrying out those instructions please type « test me » and wait few minutes | if this fails, type « /join #ubuntu-ops » to be tested manually [00:56] i was in for yrs until a few days ago [00:58] billisnice: maybe they added a check for this recently or something.. not sure [00:59] billisnice: anyway, what you need to do is to start connecting to freenode using port 8001 instead of 6667 [00:59] billisnice: the command to do that is: "/server irc.ubuntu.com 8001" [00:59] let me try [01:01] this is what i get NickServ * This nickname is registered. Please choose a different nickname, or identify via /msg NickServ identify . " =-= User mode for Administrator_ is now +i [01:02] billisnice: just choose another nickname, type: "/nick billisnice2" or whatever [01:03] billisnice: anyway, you should probably talk to the folks in the #ubuntu-ops channel, they seem to have added this router check thingie [01:03] it says =-= YOU are now known as billisnice2 but there is no one in the room buy me lol === StooJ|Away is now known as StooJ === DaIRC34444 is now known as randomnic === randomnic is now known as DaIRC82633 === DaIRC82633 is now known as sea4ever [02:11] anyone know why wacoms create so many XInput devices? [02:11] in jaunty [02:16] wow the buildds are way behind [02:19] ianm_: how many? [02:20] pwnguin: I think 4 per physical device [02:20] sounds about right [02:21] stylus, cursor, erasor and? [02:21] pwnguin: and one without any suffix [02:22] I don't get it, they all claim to have the same axes and buttons [02:23] it's "Wacom Bamboo", "Wacom Bamboo eraser", "Wacom Bamboo cursor", and "Wacom Bamboo pad" [02:23] ah [02:23] the pads sometimes have buttons on themp [02:23] them [02:24] but why don't those just show up in "Wacom Bamboo" ? [02:24] probably the better question is what "Wacom Bamboo" is about [02:24] ? [02:25] Wacom Bamboo = a pad and a pen, and both have buttons on them [02:25] whats the pen have on it? just an erasor and right click button? [02:27] the pen has two side buttons [02:28] plus cursor tip + eraser tip [02:28] i only have 1 wacom device, so the other stuff is foriegn to me [02:29] pwnguin: in Jaunty? [02:29] i mean [02:29] physical device [02:30] one laptop [02:38] Right, so wacoms are special. [02:38] ? [02:38] Different ways of using them are supposed to have different results. [02:39] A common case is that where someone has multiple pens, say a pen, a pencil, and an airbrush. 
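For reference, the per-tablet device split being described here can be seen with xinput itself, which is the tool used later in this log; a minimal sketch, with the device names taken from this conversation and everything else (IDs, ordering) illustrative only:

    xinput list
    # then watch raw X events from one of the four devices, e.g.:
    xinput test "Wacom Bamboo eraser"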
[02:40] So, to support using lots of different devices with a wacom, the input layer has to be able to differentiate different pens. [02:40] (it doesn't really matter what comes in the box with a given set: the facilities are in the hardware, and expansion is supported). [02:40] so i can buy a super duper pen? [02:40] Right. [02:41] or steal one from the graphic design program [02:41] And you can have the different pens serve different purposes, or act differently. [02:42] persia: ok. do you know if this current model (4 XInput devices) is going to change soon? (the way wacoms work seem to be changing a lot) [02:42] Of course, this requires application support, and will be able to support even cooler effects with full MPX. [02:43] of course it's going to change? [02:43] ianm_, Well, there's actually that many devices. You have a pen that contains both a nib and eraser, and a pad that also has buttons. [02:44] If you add another pad, and no more pens, you'd get six. If you added an airbrush, you'd get seven. [02:44] thats only three [02:45] nib, eraser and pad is three by my count [02:45] persia: so why 4? I mean why both "Wacom Bamboo" and "Wacom Bamboo cursor" ? [02:45] Erm, I'd have to look at the events generated by them to answer that. Send me one? [02:46] heh [02:46] (or fiddle with input-utils yourself to achieve faster results) [02:46] i think the bamboo's are ~100 dollars [02:46] persia: a wacom? but I only have 5 :( [02:46] shows nothing: xinput test "Wacom Bamboo cursor" [02:46] ianm_, I'm mostly joking. Inspect the input-events streams from each device. [02:47] how? [02:47] maybe cursor is for the mouse that some come with? [02:48] cursor is for the nib [02:48] nib? the normal tip? [02:48] yes [02:48] hm? but it has no input events [02:49] ==> shows nothing: xinput test "Wacom Bamboo cursor" [02:49] and you shouldn't get a device showing for a physical device you don't own. [02:50] you mean, if I steal a usb stick it should not show up :) [02:51] ianm, my question would be was there any difference with previous versions when you are specific about jaunty? [02:53] it's not clear from this discussion if you ask 'should this really have 4 input devices?' or 'it used to have only n now it has 4, what changed?' [02:54] my #1 question is, is this right/normal and likely to remain this way in the next ubuntu version [02:54] I guess this is not a decision for ubuntu devels, it comes from the kernel drivers [02:55] hm OK [02:55] ianm_, It's at least explicable, and not entirely unexpected from discussions on tablet support during the jaunty cycle. I would expect future solutions to also identify individual devices (although perhaps not with that level of specificity: I would only have expected 2 guessing entirely based on the packing description for the device) [02:56] of course I don't know anything about wacoms, but it might simply show in hardware level as four devices in same usb cable to system [02:56] hile, No, that's not the issue. wacoms are special. [02:56] ah, ok [03:00] ianm, my guess is based on this discussion (remember, not really familiar with these devices) the driver shows all things which this tablet might support, if you had them, and maybe should be hiding things which are not connected [03:01] I guess it really _is_ good idea to have multiple devices for different parts, of course it's not fun if the names and IDs change between releases though [03:01] No. It *doesn't* show the set of possible devices. 
It shows the set of devices required to process the set of input events the hardware generates. [03:03] I was just wondering about the 'extra' device myself here, but maybe I'll shut up if I don't know tablets anyway :) [03:12] persia: when did this happen? [03:13] last i knew the HAL detection gave you one device and you just got to deal [03:13] pwnguin: HAL was fixed fairly late in Jaunty to provide lots of devices. [03:13] this is what i get for not paying attention [03:14] and for working around the whole autodetection with xorg.conf [03:17] persia: do you know if the device IDs show up in the same order for all wacoms? "Wacom Bamboo eraser", "Wacom Bamboo cursor", "Wacom Bamboo pad", "Wacom Bamboo" as 9,10,11,12 respectively in this case (NOT asking if they'll be the same numbers, of course) [03:18] ianm_, I don't,. [03:18] presumably, you could write a script to resolve the deviceID from the string [03:19] pwnguin: yes, but I use multiple wacoms at once :D [03:19] pwnguin: so several have the name "Wacom Bamboo" [03:19] heh [03:20] i would imagine usb connected pads are not detected deterministically [03:20] ianm_: I'm fairly sure that's not guaranteed. [03:20] I'm certain it's guaranteed *not* to be deterministic. [03:21] so how do I (as programmer) know which eraser goes with which cursor and which pad? [03:21] The X server could quite reasonably one day start to randomly assign numbers. [03:21] ianm_: You check the properties on the input device, which probably don't exist yet. [03:21] ianm_, It doesn't work that way: you can use any eraser on any pad. [03:21] Ideally something like the device's serial number would be exposed in the device. [03:21] every time i think wacom is nearly solved [03:21] And beause of the identity of the eraser, it will be the same device. [03:21] Device *properties*, that is. [03:22] someone comes up with a crazier "but, but.." scheme [03:22] persia: Oh, it works like that, does it? [03:22] wgrant, That's actually sensible. [03:22] wgrant, That's my understanding, to support different (physically identical) devices. [03:22] ianm_: what exactly is the purpose of having multiple pads? [03:22] But including the serial numbers, or some other UUID would be a way to resolve it. [03:22] persia: I was under the impression that the tablet just thought "ooh look, an eraser! I'm going to report events to my eraser device now" [03:23] competitive crayon physics? [03:23] pwnguin: to have multiple people drawing [03:23] ianm_: You could check the HAL device name, although that might not be visible to normal apps. [03:24] pwnguin: or just two super high res x/y/pressure inputs. I'm a VJ [03:24] wgrant, From a which search, I believe that it's not per-class, but per-device. [03:24] s/which/quick/ [03:25] a VJ? [03:25] do you do parties? [03:25] persia: That doesn't seem likely, as that would mean it would have to grow a new XI device when it saw a new pen. That's doable, but doesn't seem likely. [03:25] yeah [03:26] so in an app that just knows the id of a wacom device (the one called "Wacom Bamboo") which reports the pen tip x/y/pressure and the pen buttons, how do I connect to the eraser? [03:26] i thought video jockies were an MTV invention [03:27] pwnguin: don't get caught up on the name, it's fun interactive light shows [03:27] wgrant, Hrm. I don't know how else to do the "automatic per-pen profile selection" I see in the data sheets, and without the ability to grow new devices, how does one support an airbrush, mouse, etc. 
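persia's "inspect the input-events streams" suggestion above is never spelled out in the log; a minimal sketch of doing it with the input-utils package (the device number is illustrative, and evtest on the matching /dev/input/eventN path is an equivalent alternative):

    sudo apt-get install input-utils
    sudo lsinput          # list the /dev/input/event* devices with their names
    sudo input-events 6   # dump raw kernel events from event device 6 (number as reported by lsinput)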
[03:27] persia: It sounds like you might be right, then, although it seems a little strange. [03:28] But I suspect I should stop participating in this conversation before I end up adding a collection of wacom stuff to my shopping list (when I don't really have a use case for owning them beyond comprehension) [03:28] Heh. [03:28] wgrant, Right. wacoms are special. (which is where I came in) [03:28] persia: do you have a projector? [03:29] ianm_, No. [03:29] persia: I knew they had lots of heads, but I didn't know they grew new ones at will. [03:30] wgrant, At least the shops sell a variety of heads for use. I'm not sure about the *any pen* bit: there seem to be several different color-coded options for pens. [03:30] So, rather than pen+UUUD it might just be pen+color [03:30] (limiting one to 5-6 devices per-pad) [03:30] How confusing. [03:31] Indeed. It's a really cool solution for the intended purpose, but not a close fit to our model of handling input events. [03:33] hm [03:33] maybe I should remodel this [03:33] More complication: according to wacom(4), wacom protocol IV doesn't support the pen-profile bit, and wacom protocol V does, and both are used in currently-shipping hardware. [03:34] http://www.penny-arcade.com/comic/2009/1/9/ [03:34] (and also according to that it is based on serial number, so exposing that as a UUID to HAL would make sense) [03:34] wgrant: so are you saying there's no way to determine which eraser is on which pen? [03:35] ianm_, If you set your X debug level to 6, the driver will show the serial number, but it's apparently not used by HAL. [03:35] For your use case, I would have recommended a couple of USB synaptics touchpads. [03:36] HAL doesn't really need to know about it - the driver or evdev needs to expose it as an XI property. [03:36] persia: touchpads are pretty awkward for drawing though (I also have a driver for touchpads as X/Y pads) [03:37] wgrant, So, it's just a driver issue? [03:37] ianm_, Hrm. That's certainly true. [03:38] persia: If the driver is able to spew it into the log, it is also able to put it into a property. [03:38] persia: So it looks like it's just down to the driver now, yes. [03:38] Right. So that's the bug. Someone with a wacom should track that down and fix it. [03:38] persia: http://photos-d.ak.fbcdn.net/hphotos-ak-snc1/hs004.snc1/4155_72565298156_531288156_1844763_3590172_n.jpg [03:38] And then we have eraser-7a8d8c9e9f9b9a [03:39] Which can then be used sensibly in userspace. For extra points, the UUID should probably also encode which pad is being used, as well as which device. [03:40] for the highest difficulty, convince Ping to do this before the heat death of the universe [03:40] persia: You could either put it in the device name, or hide it away in a property so users don't see the ugliness. [03:40] wgrant, hiding it in a property is better. I just don't know how to express that as a noun effectively. [03:41] Actually two properties: one for the pointer device UUID and one for the tablet UUID. [03:54] wgrant: so currently there's no way to know which eraser goes with which pen? that's rough... :) [03:56] ianm_, As an accident of device enumeration, it's likely that adjacent devices belong to the same physical device. [03:56] persia: yes that does seem to be the case, that's why I asked if that's consistent [03:56] ianm_: It's a lot less rough than having only one device show up, like Intrepid and earlier. [03:56] It's not guaranteed. [03:57] wgrant: yeah! 
and I would imagine no one uses more wacoms simultaneously than me, so I appreciate that ;) [03:58] so what magic am i missing [03:58] that my tabletPC only has one apparent device [03:58] ianm_: No, no, previously only one of the heads of each device would show up. [03:58] pwnguin: Do you have wacom-tools? [03:58] i do [03:58] wgrant: you mean multiple wacoms worked before, even though you had to set it up manually in xorg.conf? [03:59] ive also moved xorg.conf out of the way to see this in action [03:59] ianm_: I'm not sure. [04:00] if so, I did a lot of upgrading, including running into new intel graphics bugs, for nothing, but oh well :D [04:01] actually, it seems to be finding my cursor and erasor [04:01] but i have no right click [04:01] so maybe I'll hardcode it to use ID-1, ID-2, and ID-3, and see how far that gets me [04:04] i guess its time to go read those bugs ive been ignoring about wacom [04:17] wgrant, I can't seem to find the bit that sets the properties. Do you have any suggestions for function names I should be seeking? [04:20] hm the buttons on the Wacom Bamboo pad don't seem to show up anywhere [04:21] ianm_: i have a similar problem with my right click [04:25] I'm now very confused. There's a changelog entry in xf86Wacom "2005-11-17 47-pc0.7.1-1 - Report tool serial number and ID to Xinput". [04:25] pwnguin, Can you double check the xinput properties for your device? There should be useful information available. [04:26] persia: any tips on what commands do that? [04:26] persia: http://pastebin.com/d6cf59844 ? [04:28] pwnguin, xinput list? [04:28] ianm_, Thanks. [04:28] ianm_, Could I have list-props for the other three as well? [04:28] persia: all identical [04:29] http://pastebin.com/f72f6d546 [04:29] * persia is all sorts of confused. [04:29] probably [04:29] i grepped too much out [04:29] You guys have xserver-xorg-input-wacom loaded? [04:31] http://pastebin.com/m4016b702 [04:31] there's the full entry [04:31] persia: installed yes [04:32] Does Xorg.log report it being used? [04:32] (II) Loading /usr/lib/xorg/modules/input//wacom_drv.so [04:33] I'd expect list-props to provide a serial number or something. There's a bug in here. [04:34] mmhmm [04:34] (II) LoadModule: "wacom" [04:34] (II) Loading /usr/lib/xorg/modules/input//wacom_drv.so [04:41] anyone know why openjdk-6-jre is not installable on hardy? [04:41] openjdk-6-jdk: Depends: openjdk-6-jre (= 6b09-0ubuntu2) but it is not going to be installed [04:42] calc, what happens when you try to install openjdk-6-jre? [04:42] hmm i think the problem may be with my chroot not havint the security/updates uncommented [04:43] hmm actually they are [04:43] openjdk-6-jre: Depends: openjdk-6-jre-headless (= 6b09-0ubuntu2) but it is not going to be installed [04:43] openjdk-6-jre-headless: Depends: tzdata-java but it is not going to be installed [04:43] tzdata-java: Depends: tzdata (= 2008b-1ubuntu1) but 2009f-0ubuntu0.8.04 is to be installed [04:44] pwnguin, ianm_ My apologies, but I think I've gotten as far as I can without a device, and want to delay getting one by at least a few more months. I'd recommend looking at the VCS for the driver, specifically xf86Wacom.c, and review the changes made to "add hotplug support". I believe the passing of the tool and pad serial numbers to Xinput was dropped at that point. 
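Two quick checks that recur in the exchange above, for anyone replaying it: confirm the wacom X driver really got loaded, and dump whatever XInput properties a device exposes (on Jaunty the property list comes back essentially empty, which is the bug persia is pointing at):

    grep -i wacom /var/log/Xorg.0.log
    xinput list-props "Wacom Bamboo eraser"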
[04:44] persia: well, im not interested so much in crazy multi pen / pad maddness [04:45] i'm confused these versions seem to be old versions even though i have security/updates available [04:45] 4 pads is the minimum for any reasonable person [04:45] doh [04:45] cody-somerville: nm i figured out what i did wrong [04:45] calc: no universe -updates? [04:45] cody-somerville: i forgot to add universe to those [04:45] ianm_: thats far too many to carry around [04:45] ajmitch: yea, doh ;-( [04:46] * cody-somerville goes to watch the newest episode of house! :) [04:48] pwnguin, re: multiple pads, I agree. re: multiple tools, I think it's a nice use case to be able to physcially switch between pen and airbrush without worrying about fiddling with software. [04:57] persia: Input properties only appeared in 2008; that 2005 thing must be talkin about some other interface. [04:58] Aha! [04:59] persia: multiple pads sure is useful when you have multiple humans :) [05:01] ianm_, So are multiple computers :) But yes. [05:02] persia: it's quite fun to draw on the same canvas! [05:04] how does that even work without MPX? [05:05] pwnguin, rapid switching on absoute coordinates. === j_ack_ is now known as j_ack [05:28] pwnguin: was that Q for me? [05:29] if so, it works by reading from the xinput devices directly [05:29] not using them as mice === sea4ever is now known as wumpus_2 === wumpus_2 is now known as sea4ever === sea4ever is now known as _4ever [07:07] good morning [07:33] Good morning [07:33] tkamppeter__: hi === tkamppeter__ is now known as tkamppeter [07:45] pitti, hi [07:48] tkamppeter: good morning === StooJ is now known as StooJ|Away [08:04] pitti, I have re-uploaded s-c-p yesterday night, with the missing bits in the debian/changelog missing. [08:05] pitti, now I see you have passed it through. Thanks. [08:05] tkamppeter: no problem, sorry for the hassle === azeem_ is now known as azeem [08:35] pitti: There's a sysvbanner build stuck in dbgsym processing on i386 and lpia again [08:36] wasn't me, i didn't break it === mkorn is now known as thekorn_ [08:44] maxb: weird [08:44] maxb: I'll try to reproduce it locally [08:49] pitti, in the mood to sponsor an upload? (I'm just finishing off an FTBFS fix for apr) [08:51] NCommander: sure [08:52] Let me just finish the bug. [08:53] Oops, hold on, forgot to fill out the dpatch header [08:55] pitti, https://bugs.edge.launchpad.net/ubuntu/+source/apr/+bug/372068 [08:55] Launchpad bug 372068 in apr "FTBFS fix for APR" [Undecided,New] [08:56] kirkland: section 7 of RFC 4252 describes the messages used in SSH publickey auth [08:56] NCommander: please send this to Debian as well, it should hit them as well? [08:57] pitti, I'll test to see if APR if it still breaks [08:58] NCommander: if it breaks for us, why doesn't it break for Debian? [08:58] pitti, I stopped trying to make sense of libtool breakage along time ago [08:58] * NCommander is just waiting for his sid chroot to update [08:59] it built just fine in Debian [08:59] NCommander: anyway, upload prepared, tell me when to fire [09:00] pitti, fire when ready [09:00] NCommander: *boom* [09:01] * NCommander watches karmic die [09:01] * NCommander is somewhat amazed at the level of breakage this cycle [09:02] NCommander: really? except for the new intel driver, karmic didn't break at all yet [09:02] pitti, are we looking at the same FTBFS counts :-) [09:02] oh, those [09:03] NCommander: just talking about stuff that actually reaches my machine [09:03] pitti, you already upgraded to karmic? 
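For the chroot dependency failure calc hits at [04:41]-[04:45], the fix he lands on is simply adding universe to the -security/-updates lines as well; a hardy sources.list sketch (mirror hostnames illustrative):

    deb http://archive.ubuntu.com/ubuntu hardy main universe
    deb http://archive.ubuntu.com/ubuntu hardy-updates main universe
    deb http://security.ubuntu.com/ubuntu hardy-security main universe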
[09:03] sure [09:03] and speaking about autof*** breakage, /me goes back to that gthumb merge [09:03] pitti, which merge? [09:03] (didn't I just say? gthumb) [09:04] pitti, er, I mean whats broken about the merge :-P [09:04] pitti: we still have some changes ubuntu specific in this one? ideally it should go to universe and be a direct sync [09:04] seb128: yes, lpi and a gvfs fix to unmount when using libgphoto [09:04] ah ok [09:04] the lpi one is sticky [09:04] I guess lpi could be dropped [09:05] but if we have diff anyway we can as well keep it [09:05] I sent the gvfs one upstream months ago, it wasn't applied yet :( [09:05] pitti, Debian already has a fix pending (slightly different from the one they used), and its fixed upstream [09:05] so once Debian has an upload its just a sync [09:05] NCommander: in apr? nice [09:05] is it now some magic for packages with python modules to compile nicely with python2.6 without modification? [09:05] pitti, yeah, someone backported the SVN fix for APR [09:06] seb128: if we drop it to universe, we could drop two changes (libopenrawgnome-dev build dep and POT building) [09:06] but it's one of the packages which call autoconf at runtime, and it just falls apart [09:06] * pitti renews his deep hate towards autotools [09:06] bigon: if you have the site-packages directory coded in rules on install I don't think so [09:07] on -> or [09:07] pitti: so basically once the gvfs fix is upstream or in debian we could move to universe and sync [09:07] yes [09:07] if we don't care about lpi any more [09:08] * NCommander takes out a machine gun and shoots gstreamer0.10 [09:09] seb128: http://pastebin.com/d1b39b68 << because I get that and the module is correctly installed [09:09] NCommander: what is the issue? [09:09] seb128, libtool breakage [09:09] NCommander: you don't try to fix the karmic build do you? [09:10] seb128, huh? [09:10] pitti, do I need to files bugs to get NBS packages with no rdepends removed? (file size 0) [09:10] NCommander: no [09:10] those are autocleaned when somebody have a go at this list [09:10] seb128, which list? (I'm trying to fix outstanding FTBFSes in main and universe) [09:11] NBS [09:11] oh [09:11] NCommander: and gstreamer0.10 is already fix commited [09:11] Oh [09:11] Handy :_) [09:11] it's early to fight karmic ftbfs issues [09:12] NCommander: http://paste.ubuntu.com/164757/ [09:12] I have NFC what libtool wants to tell me [09:13] Oh that fun. [09:13] yeah, I had that issue [09:13] Looks like the package is setup in a way that there are multiple libtool instances [09:13] * NCommander grabs the source [09:13] pitti: run autoreconf [09:14] pitti, autoreconf -if [09:14] pitti: you are using different versions than the one used for the tarball and don't run all the auto* or not in the right order [09:14] pitti: running "autoreconf" should fix it [09:15] pitti: that's a mismatched ltmain.sh/aclocal.m4 [09:15] in fact, the Makefiles themselves run autoconf [09:15] so apparently they don't run libtool (neither the external nor the internal copy) [09:16] seb128, if its too early to start fighting FTBFS, when do you recommend doing so? Alpha 1 is in a week and a half :-/ [09:16] NCommander: no need to clean universe for alpha1 [09:16] seb128, I'm just focusing on main ATM [09:16] (apr is in main) [09:16] NCommander: most of those issues will be fixed along the way with autosyncs, etc [09:16] seb128, there is enough breakage in main to be afraid :-P [09:16] " seb128, which list? 
(I'm trying to fix outstanding FTBFSes in main and universe)" [09:16] you listed universe [09:17] pitti, I'll do universe ones when I run out of main :-P [09:17] er [09:17] seb128, [09:17] I've nothing against you spending your time on that [09:17] but that's often a waste of time [09:17] * NCommander usually goes right down the list [09:17] don't create extra diff for things which are on sync and that debian will fix though [09:17] NCommander: the archive doesn't need to be buildable for alpha1, just installable [09:17] they're quite different things [09:17] ie gstreamer ;-) [09:18] and it's probable that most FTBFS can be fixed by doing an outstanding merge [09:18] * pitti flushes an autosync run [09:18] seb128, Keybuk: thanks, that helped; I'll add it to debian/rules [09:18] it now FTBFSes in a different way at least [09:18] Keybuk, then what do you recommend I do? [09:18] NCommander: http://merges.ubuntu.com/main.html#] [09:18] intltool-update looking at .pc/04-fix-gvfs-umount.patch/data/gthumb-import.desktop.in [09:19] NCommander: 293 to do there [09:19] pitti: can't you just comment the autotools call there? ;-) [09:19] urg [09:19] seb128: no, the upstream makefiles themselves are set up to call autoconf etc. [09:19] apparently in an insufficient way [09:20] pitti: sounds like something ran aclocal or similar by accident? [09:20] ordinary automake makefiles won't do that [09:21] debian/rules doesn't call any auto* so far; I added an autoreconf -fvi now [09:21] we have it in bzr-buildpackage, so screw an unclean diff.gz after binary build [09:22] Keybuk: we do patch some Makefile.am, so in principle it is right in automake'ing [09:29] pitti: right, but calling automake itself should be harmless [09:29] likewise autoconf [09:30] that won't cause the error you saw [09:31] calling aclocal is what tends to cause that brekage [09:33] probably just need "libtoolize" ran === thekorn_ is now known as thekorn === StooJ|Away is now known as StooJ [09:39] directhex: here? [09:39] directhex: ah, nevermind; unping [09:45] hello, how to let sshd start before desktop? i can't login to ubuntu by ssh at gdm time [09:45] zhxk`: by default that should work; ssh is started at S16, gdm at S30 [09:47] how to troubleshoot? sshd stops as soon as out to gdm from gnome [09:47] (on a sidenote that belongs to #ubuntu) === zhxk` is now known as zhxk [09:52] * directhex smells n-m [09:52] pitti, you can still ping me if you like [10:05] seb128: why are you "rebasing" the gnome packages? [10:05] dholbach: you probably call that merging, why not? [10:06] dholbach: to lower delta, send changes back to debian, etc? [10:06] dholbach: why do we do it for all the non GNOME packages? [10:06] merging and sending stuff upstream is definitely fine - you're dropping history [10:06] we do that since warty [10:06] I was a bit confused by the wiki page and the stuff in the sponsoring queue [10:06] if you are speaking about changelog entries [10:06] yeah [10:07] we do that for ages [10:07] a lot of packages have history since warty [10:07] no point to have hundred of NEWS summaries [10:07] pitti, when you get a free moment, can you please sync https://bugs.edge.launchpad.net/ubuntu/+source/libxfcegui4/+bug/372098 for me? (I have a bunch of outstanding merges that are dep-wait until that goes through :-)) [10:07] and I don't see the point, it's extra work [10:07] Launchpad bug 372098 in libxfcegui4 "Sync libxfcegui4 4.6.1-1 (universe) from Debian unstable (main)." 
[Wishlist,Confirmed] [10:08] NCommander: you know there is a process to request syncs and several archive admins [10:08] seb128: OK, I was confused - if we do it we should make sure though that we still have pointers to all the bugs that were fixed by patches that we ship [10:08] NCommander: karmic is not such an hurry now that you need to ping people on IRC [10:08] seb128, we're uploading to karmic so we can do a SRU of 4.6.1 [10:08] seb128, (of Xfce) [10:08] which has quite a few bug fixes [10:08] dholbach: we summarize changes in the current changelog entry and try to tag patches [10:09] seb128: that's good - just wanted to make sure I knew what you guys were doing [10:09] NCommander: that doesn't seem urgent enough to IRC ping to speed up things, syncs are processed daily [10:09] (when doing sponsoring) [10:09] dholbach: ok thanks [10:09] * seb128 hugs dholbach [10:09] dholbach: I don't think that changed since you were in the desktop team but thanks for checking ;-) [10:09] * dholbach hugs seb128 back [10:10] NCommander: was going to do it, but my LP cookie is broken again on cocoplum; I'll just defer it to today's archive admin [10:10] pitti, ah well [10:27] yay, there goes my last merge [10:29] pitti: lucky you ;-) [10:29] seb128: I only had about 15 or 20, most of my stuff finally got accepted in Debian [10:30] that's where sending back to debian pays ;-) [10:30] and not having lpi used in his packages too :-p [10:30] * pitti hugs seb128 [10:30] * seb128 hugs pitti [10:32] ok you guys can let go now [10:33] infinity: are you around? long time ago we discuss about the openhackware package that need to be built on ppc, it whould really great if you could build it and put it in the archive === dpm_ is now known as dpm [10:47] bigon, its a non-trivial problem (I've been looking at it myself) [10:49] NCommander: yeah I know but IIRC infinity told me that he whould build the package himself and use some backdoor to but it in the archive [10:49] as the package doesn't change so often [10:49] bigon, he said the opposite in the bug report :-/ [10:49] bigon, I was looking at the possibility of building the necessary cross-toolchains [10:52] k [10:55] oh, i see. how to make it arch:all but build on ppc [10:55] i can think of a couple of hacks which probably wouldn't work because the archive doesn't work the way i think it does [10:55] directhex, I can get it to build, but it will not upload due to some hardcoded bleck in the generated changes file [10:56] * NCommander spent quite a bit of time on this [10:56] NCommander, how are you doing it? [10:56] directhex, deep magic [10:56] directhex, http://launchpadlibrarian.net/26054517/upload_962116_log.txt - here's the failure to upload log [10:57] hm, i see [10:58] what would happen if the package generated both the arch:all openhackware package, and a dummy arch:powerpc package, in the binary-arch rule? leave binary-indep empty? [10:58] Quick question why does kvm and libvirt etc recommend samba? [10:59] directhex, its a problem with how sbuild calls dpkg-buldpackage [10:59] directhex, we can't control the call to dpkg-genchanges [10:59] bloody sbuild [10:59] someone remind me why sbuild is the favoured buildd [11:00] directhex, well, it calls dpkg-buildpackage with -B [11:00] :-P [11:00] which makes the changes called with dpkg-genchanges -B [11:30] Keybuk: how do you update the udev-extras package from upstream? 
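The constraint behind those last few messages, spelled out: the buildds run an architecture-dependent-only build, so even if debian/rules produces the arch:all openhackware deb, dpkg-genchanges -B leaves it out of the .changes and the upload is rejected (flags as in dpkg-buildpackage(1)):

    dpkg-buildpackage -B -uc -us   # what sbuild effectively runs: architecture-dependent debs only
    dpkg-buildpackage -b -uc -us   # a full binary build, which would also list the arch:all deb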
[11:30] Keybuk: I'd like to play with udev-acl, and check whether it can sufficiently replace our hal auto-ACL magic [11:32] pitti: git fast export | bzr fast import, etc. [11:32] I can update it if you like [11:32] ah, so there's no magic debian/rules incantation or so [11:32] no [11:34] Keybuk: if you could pull and push to bzr, that'd be great [11:34] saves me from figuring out where and how to pull and import [11:35] even if you did it, you'd end up with something incompatible with my branch :-( [11:35] sadly git fast-export | bzr fast-import doesn't generate predictable revision ids [11:35] so if you do it, you'll get an entirely different branch [11:35] bzr fast-import --help [11:36] bzr: ERROR: unknown command "fast-import" [11:36] (and that, too) [11:36] ah, bzr-fastimport package [11:36] * Keybuk wishes bzr-git just worked [11:37] Keybuk: http://lists.freedesktop.org/archives/devkit-devel/2009-April/000140.html was quite interesting [11:39] Keybuk: it's improving quickly [11:39] james_w: it didn't even install correctly when I tried at the release sprint [11:40] from the package? [11:40] Keybuk: the source package doesn't even build for me; debian/rules provides no rule for generating "configure" by calling autoreconf for ./autogen.sh; how is that supposed to work? [11:40] pitti: why should the rules do that? [11:40] Keybuk: oh, nevermind me; I didn't get/unpack the tar.gz before [11:41] udev-extras_0~20090414+1-git4817acf-1_source.changes uploaded [11:41] Keybuk: sweet, thank you! [11:42] I think we're still missing the hal-info -> ? conversion [11:42] wow, that changed quite a bit [11:44] Keybuk: I also didn't see any new world counterpart to the setkeycodes bit in hal and the keymaps in hal-info [11:45] no, probably doesn't exist [11:45] I got the hang of it, and I'm quite interested in that one [11:45] Keybuk: I'll mail the devkit list about it [11:46] cool [11:46] the last I heard, the temptation was to just have X do this stuff directly [11:46] rather than have a DeviceKit-input [11:46] I wouldn't like to have a full-blown devkit-* stuff for that [11:46] it's something you do once at boot, and never again [11:46] yeah [11:47] well, "never" -> if you hotplug a new keyboard, it needs to be run again, I figure [11:47] right [11:47] worth chatting on #udev about, probably [11:47] Keybuk: could htat become an udev rule which fires a setkeycodes script whenever you add an input device? [11:47] pitti: possibly, yes [11:48] * pitti will discuss on the list [11:48] * Keybuk gets confused by soyuz [11:48] "Version older than that in the archive" [11:49] $ dpkg --compare-versions 0~-2~gite5fb9bd-1 lt 0~20090414+1-git4817acf-1 && echo yes [11:49] $ [11:50] can't help feeling that putting git sha-1s in things that are meant to be versionwise comparable is a recipe for confusion anyway, not that that's the problem here ... [11:51] * cjwatson gets a migraine from the console-setup merge [11:51] yeah, it's a pretty insane version string ;) [11:51] can't help feeling that putting git sha-1s in things that are meant to be versionwise comparable is a recipe for disaster [11:52] In git, you can refer to commit as -. [11:57] Keybuk: how efficient is the udev rules parser? I. e. if there was a rules file with a thousand laptop model key's worth of keymap data, would that take a significant time? or does it use some clever internal tree structure? 
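A minimal sketch of the git-to-bzr conversion Keybuk describes at [11:32], assuming the bzr-fastimport plugin; paths are illustrative, exact steps vary a little between bzr-fastimport versions, and, as noted above at [11:35], the resulting revision ids are not reproducible between runs:

    sudo apt-get install bzr-fastimport
    cd /path/to/udev-extras.git && git fast-export --all > /tmp/udev-extras.fi
    bzr init-repo /tmp/udev-extras.bzr && cd /tmp/udev-extras.bzr
    bzr fast-import /tmp/udev-extras.fi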
[11:57] very efficient [11:57] james_w: I've been finding it very hard to keep dulwich, bzr-git, and bzr in sync for testing :-( [11:57] it does process each in order [11:57] but the time to do that is pretty low [11:59] Keybuk: so it could actually be done that way, instead of providing an udev callout with its own pattern matching processor? [12:00] done what way? [12:00] Keybuk: have the long list of vendor/product -> keymap encoded in udev rules [12:01] echo unsupported_layout=$unsupported_layout >>/tmp/cslog # asdf [12:01] lovely [12:03] Keybuk: also, is there a standard way for an udev rule to refer to DMI data, such as hw vendor/product name? [12:03] Keybuk: (of the system, not the added device) [12:05] pitti: the long list should be ideal [12:05] obviously we'd want to guard it with SUBSYSTEM!="...", GOTO="end" simply to speed up other processing [12:05] but udev should rattle down it quickly [12:06] Keybuk: right, and another shortcut for not adding an input device [12:06] pitti: dmi data such as loading a module based off it? [12:06] Keybuk: I just wondered whether we could do the system vendor/product matching in the udev rules, too, or whether we need a special callout in the beginning to detect that and export it into env vars [12:07] jseval [foo: "bar"]; [12:07] sorry [12:07] Keybuk: not a module, to do the matching; e. g. on "LENOVO"/"ThinkPad X6", keycode 0061 is "next song" [12:08] pitti: SUBSYSTEM=="dmi", ATTR{chassis_vendor}=="LENOVO", ATTR{product_name}=="Th [12:08] inkpad X6", ... [12:08] ? [12:09] Keybuk: right, but that woudl match on "adding" (well, coldplugging) the DMI stuff, not a new input device, woudln't it? [12:09] right [12:10] annoyingly the chassis dmi information isn't a "parent" of anything [12:10] it'd be worth an upstream discussion how to do that kind of thing [12:11] so I would need an ACTION="add|change", SUBSYSTEM=="input", ENV{SYSTEM_VENDOR}=="Lenovo", [run callout with keymap data] [12:11] or, if that isn't possible, just always run the callout on addition of an input device and have the callout do the vendor/product matching [12:11] less elegant, though, since we couldn't re-use the udev rules parser [12:11] I think it'd be better to fix udev so we could just directly use it [12:12] yeah, that would be great [12:12] I wouldn't like to add an IMPORT{program}="dmidecode" something to each rule [12:13] Keybuk: I guess variables are only valid within a rule, i. e. you can't have one callout at the beginning of your .rules which sets SYSTEM_{VENDOR,PRODUCT} and use those vars in the following rules? [12:13] no, sadly not [12:14] can you think of any other cases, apart from dmi, where you'd want to refer to another device? [12:14] cjwatson: yeah, it's a pain unfortunately. 
It's one of my targets for this cycle to make sure they are up to date in karmic and there are snapshots available [12:14] Keybuk: I saw a lot of rules which refer to a device's parent, or grandparent, but not an arbitrary one [12:14] (other than the equivalent of /org/freedesktop/Hal/devices/computer, which is special) [12:15] right, we obviously can do any ancestor in udev, that's quite easy [12:16] Keybuk: maybe it would make sense to special case system vendor/product/version, since referring to them will become very common once we migrate hal-info [12:17] right, I'm wondering whether we should special-case dmi data [12:17] or introduce the concept of global variables [12:17] sadly the kernel doesn't have the concept of a top-level device [12:17] so that you can have one callout at the beginning of your .rules which exports these vars and you can use them [12:17] which is what they should really be set on [12:21] pitti: obviously the main concern will be making sure that we process the DMI information first [12:21] otherwise you might end up finding the keyboard before the kernel announces the DMI info [12:22] * ogra points out that there are enough broken BIOSes out there that we might introduce issues with using DMI data as reliable source [12:22] Keybuk: the global variables approach seems more generic, and more robust against such races [12:22] Keybuk: but then again, with that we'd probably end up with doing the same DMI decode stuff in 5 different places [12:23] ogra: we have used it for ages [12:23] oh, ok [12:23] ogra: hal reads /org/freedesktop/Hal/devices/computer from DMI, and hal-info relies on it [12:23] the kernel has had DMI-based quirks since shortly after the beginning of time [12:23] you're misunderstanding me [12:23] ogra: so while there are certainly lots of broken BIOSes around, I think we can rely pretty well on the vendor/product data [12:24] right now the kernel puts DMI information in its own kobject under /devices [12:24] so there's inherently a race unless we deliberately process that first [12:24] ogra: no-name products will just have empty or bogus string, then they just don't match anything [12:24] where deliberately process would have to include "spin until the kernel announces it" [12:24] pitti, right, of these i have seen a few [12:24] Keybuk: ah, you are saying that even with the global var/initial callout appraoch, calling dmidecode would fail in that case, since it's not been discovered yet? [12:25] we don't want to call dmidecode [12:25] (if we can avoid it, so much the better) [12:25] we want to import the information from /sys/devices/virtual/dmi/id [12:25] such that it is available to all other objects [12:26] TheMuso: up? [12:26] Keybuk: shall I open a bug about that, or a ML post? === dmesg is now known as edsoncanto [12:28] pitti: I'll chat to kay first and see if there's something obvious ;) [12:28] Keybuk: okay, many thanks [12:29] hmm, I should probably convert d-i over to using /sys/devices/virtual/dmi/id/sys_vendor rather than using dmidecode or stripped-down versions of it [12:30] if that's a reliable path, it does seem very elegant [12:30] it wasn't around when I added DMI handling to d-i originally (for Macs) [12:30] Keybuk: does the SYSFS{} function take a full path? i. e. in theory, if it weren't for the race, rules could refer to that? 
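A sketch of the rule shape this discussion converges on (Keybuk posts a working form at [12:42] just below): match the hotplugged input device while reading the system identifiers straight from the dmi/id device, then hand the device to a keymap callout. The callout path and the keymap itself are hypothetical here; the ATTR{[dmi/id]...} cross-device lookup is the part that matters:

    # one rule per line; a sketch, not a shipped rules file
    ACTION=="add|change", SUBSYSTEM=="input", ATTR{[dmi/id]sys_vendor}=="LENOVO", ATTR{[dmi/id]product_name}=="*ThinkPad X6*", RUN+="/lib/udev/keymap-quirk %k"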
[12:30] if I were being paranoid, I'd genericise it as /sys/devices/virtual/dmi/*/sys_vendor ;) [12:30] the 2.6.22 kernel, at least, doesn't have it [12:30] but I don't think that's warranted, it should always be "id" [12:30] pitti: no, precisely because otherwise there'd be a race [12:31] Keybuk: heh, robustness by design; I like that [12:34] TheMuso: if you are around, I'm hacking on dmraid, and you seem to be active there... [12:41] lifeless: What can I do for you? [12:42] pitti: turns out it's easy [12:42] SUBSYSTEM=="input", ...some keyboard matching..., ATTR{[dmi/id]product_name}=="Thinkpad X60", ... [12:55] pitti: join #udev ;) [13:19] TheMuso: bug 372170 [13:19] Launchpad bug 372170 in dmraid "intel isw raid metadata at odd offset" [Undecided,New] https://launchpad.net/bugs/372170 [13:19] lifeless: looking [13:21] lifeless: do you have a reason to use dmraid over generic linux software raid? [13:21] dual boot with windows [13:21] I may not do that but I need the option [13:22] I'm aware that dmraid is the devil's spawn and a half arsed compromise from folk that make IDE controllers and should know better. [13:22] lifeless: Right good enough reason [13:22] lifeless: afaik offset locations can be specified in the isw header, but I am not 100% sure, since I've not had to look in that file for a while. [13:22] lifeless: have you tried the latest debian/karmic package? [13:23] no, haven't got an installed system yet ;P [13:23] right [13:23] isw probes only one spot [13:23] that was part of my change [13:23] right [13:25] random question, how big does the installer make swap? [13:25] I want to keep disk space aside for windows, so doing manual partition [13:26] * TheMuso can't remember actually, since he always manually partitions [13:27] lifeless: the BIOS/board you are using didn't happen to create an hpa on those drives did it? [13:27] wheee terabyte disks are fun [13:27] hpa? [13:27] lifeless: since the closest problem I have heard of had something to do with an hpa [13:28] host protected aread [13:28] area [13:28] how can I tell [13:28] I *think* dmesg states something about it [13:28] dmraid has bugs filed against it to do with hpas. [13:28] * TheMuso looks [13:30] hrm ok no bugs indicating hpa problems, at least from the descriptions. [13:30] summary even [13:30] I'm guessing its a real cylinder size thing or something [13:31] but fundamentally I have no idea, haven't written disk level code in 13 years [13:31] Right [13:31] what I want to do is get a solid enough patch you'll apply it :) [13:31] Well, looking at the isw header, it only checks one location for the data, which is what makes me think its an hpa. Is it thesame on both disks? [13:31] so that when karmic comes out I get to upgrade without building a package in-situ during the install [13:32] the same location that is [13:32] yes, the patch in the bug was sufficient to let both disks be found and the mirrored drive activate [13:32] hrm ok [13:32] the disks are identical [13:32] my wintendo blew up on saturday, this is a replacement. [13:32] right [13:33] I'm hoping linux gaming is sufficiently far along that I won't actually *need* windows on it. particularly as its arse. === thekorn_ is now known as thekorn [13:34] At this point, I am not really sure what it could be, since upstream doesn't always appear to be very upfront about changes to bits and pieces like this. [13:34] by upstream you mean intel themselves? [13:34] And since intel wrote the code to the isw data recognition... 
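For anyone reproducing the metadata dump TheMuso asks for, the relevant dmraid invocations (options as in dmraid(8); the dump lands in the current directory, reportedly under a dmraid.isw/ subdirectory for Intel metadata):

    sudo dmraid -r     # list block devices carrying recognised ATARAID metadata
    sudo dmraid -s     # show the discovered RAID sets and their status
    sudo dmraid -rD    # as above, but also dump the raw metadata for sending upstream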
[13:34] so [13:34] lifeless: the redhat person who works on dmraid and intel [13:35] TheMuso: ah k. [13:35] so heres what I propose [13:35] I fiddle around once I have an installed system and get a version that does di->size-2115*512 [13:35] or something like that [13:35] you apply the patch as its now 'generic' :P, we send it upstream and see what they say. [13:35] ok thats a start. In the meantime, I'm going to poke upstrea mabout this. [13:36] and I'll gather any data people ask for happily. [13:36] ok great [13:36] * TheMuso sighs, dmraid cvs from upstrea has not been updated since last ear or there abouts. [13:37] Getting patches for this piece of crap is like finding a needle in a haystack. [13:38] it hurts your fingers and makes you bleed? [13:38] lifeless: something like that. It seems that mdadm will eventually take over dmraid metadata manaemet however. [13:39] management [13:39] that would be nice [13:39] As long as it works :) [13:40] lifeless: one thing you could do, is when you have a dmraid binary that can actually read the metadata, if you could use the dmraid -rD command to make a dump of the metadata I can send upstream for analysis, that would be useful. [13:40] I fully expect the occasional world of pain with this. [13:40] sure, I'll do that now [13:40] thanks [13:40] is tat the dmraid.isw dir ? [13:41] afaik it dumps a few files in the current dir, but if it makes subdirs then its likely in there. There should be several files. [13:42] lifeless: what RAID level? [13:42] dmaird -rD =attached to the bug report. raid 1 [13:43] thanks [13:44] argh, ubiquity fail [13:46] ubiquity+dmraid is not tested, which is why dmraid is not on the live CD. [13:46] grub failed [13:46] ah ok, thats not so bad. [13:46] it would be nice to be told that directly. [13:47] lifeless: ok I've emailed the dmraid list, and will keep you posted via the bug report. [13:47] cool, thanks! [13:47] lifeless: np thanks for bringnig it to my attention. [13:47] I kinda had to :) [13:48] haha yeah [13:49] when I pinged I hadn't created a workaround yet, I was still grovelling through code and docs [13:49] right [13:49] I really appreciate your jumping on it enthusiastically though. [13:51] as much as I hate maintaining it in Ubuntu, its not so bad, since I work closely with the Debian maintainer, to the point where I have commit access to the vcs tree used for the packaging, and I have some of the hell spawn hardware myself. Even though I don't use it full time, its handy for testing. [13:53] lifeless: is you don't succeed in installing Ubuntu the way you are now, you could attempt to get dmraid to create the metadata for you, using the original Ubuntu binaries of course. It would be interesting if the BIOS did or did not recognise it. If it did, you'd be out of the woods. [13:57] do device mapper raid1 arrays do load balancing? [14:19] how do i setup the resolution of my 19 inchs crt monitor ? i cant get more hten 640x420 [14:19] how do i setup the resolution of my 19 inchs crt monitor ? i cant get more hten 640x480 [14:19] echo... echo... echo... [14:19] :) [14:19] sorry [14:20] can anybody help ? [14:20] try #ubuntu for user questions [14:21] theres only proxy users [14:21] theres nbobody here [14:29] Xlib: extension "Generic Event Extension" missing on display ":0.0". [14:29] how do i fix it ? [14:31] mib_44hfkp: #ubuntu for support questions please [14:32] maco, you there ? 
[14:34] cant join ubuntu because od proxie issues [14:34] got the output of lshal | grep "Wireless" here http://pastebin.com/d1121261d on boot. it is doubled because i ran it twice :P. i was here yesterday because of the network manager applet lag [14:34] mib_44hfkp, That you can't connect to #ubuntu doesn't make this a support channel. You just won't get help here. [14:35] You might try #ubuntu-CC where CC represents your country code: many of those channels have fewer restrictions, and most offer at least some support. [14:42] doko: do you have any idea what this error might mean: https://bugs.edge.launchpad.net/ubuntu/+source/python-central/+bug/372126/comments/2 ? [14:42] Launchpad bug 372126 in python-central "package update-manager-core 1:0.111.9 failed to install/upgrade: subprocess post-installation script returned error exit status 1" [Undecided,New] [14:43] My network manager applet takes a lot of time to recognize my network card (says 'network disabled') on boot and then takes a lof of time too to connect to my network. The whole proccess takes like above 5 minutes maybe. the lshal | grep "Wireless" output is here http://pastebin.com/d1121261d on boot. Using openbox with ubuntu jaunty updates. [14:44] re [14:44] Keybuk: sorry, went off for lunch and errands; shall I still join #udev? [14:45] pitti: yup, it's a good place to hang out anyway for upstream discussions of this nature [14:45] mvo: corrupt files? [14:46] doko: ok, I ask him for a fs check [14:49] thebloggu: i'm studying for a final that i have in a half hour. seriously, the people that know how network manager really works are in #nm [14:50] maco, ok, thanks [14:52] maco, btw, good luck [14:52] thebloggu: thanks [15:21] * bigon has some diffuculty to see why some packages FTBFS on buildd and not on his machine with cowbuilder [15:21] it seems that on buildd the modules for python2.6 are not moved [15:22] automatically to dist-packages === evand1 is now known as evand === chand_ is now known as chand [15:36] bigon: different versions of cdbs? [15:42] do the buildds install b-d-i? [15:46] Laney, yes. mono stuff uses b-d-i extensively [15:46] I think I'll cut myself now [15:54] lifeless: Yes, device-mapper raid1 devices do load balancing. [15:55] soren: sweet, thanks [15:55] pitti: are you here by chance? [15:55] hey lifeless [15:56] pitti: is there a way to get jockey to install the 32bit interface for the nvidia driver? [15:56] lifeless: what's that? [15:56] lifeless: I guess the answer is "no" [15:56] I'm not entirely sure [15:56] http://www.cedega.com/forums/viewtopic.php?t=10281 [15:57] ^ read that, briefly [15:57] (its short) [15:57] lifeless: that sounds pretty crazy === JFo is now known as JFo-mtg [15:57] Hi, I'm wondering how to get involved in ubuntu? [15:57] on a 64 bit installation you should use the 64 bit driver.. 
[15:58] pitti: I'm pretty sure what it is is a 32lib libdri2 [15:58] or some such thing === al-maisan_ is now known as al-maisan [16:00] kim_: ask "dholbach" here on IRC, he has a lot of good material on getting started with ubuntu development [16:00] lifeless: right, that might be; but I don't think we build those :/ [16:00] lifeless: tseliot is our nvidia driver master, he might know more about it [16:00] thanks [16:00] tseliot: ^ [16:01] kim_: if you're interested in development and package maintenance, definitely: https://wiki.ubuntu.com/MOTU/GettingStarted - it links to all the important documents [16:01] tseliot: I'd like to get to the point of filing a useful bug somewhere about this [16:01] Riddell: hi [16:01] Thanks to both of you :) [16:02] does ubuntu use postfix as default mta? [16:02] I'll check it out and then I might be back later :) [16:02] Bye for now [16:03] for now, 1am, crash() [16:03] lifeless: the 32bit libraries are already installed by the nvidia driver [16:04] tseliot: oh good timing [16:04] lifeless: ia32-libs doesn't contain the 32bit libraries for nvidia [16:04] tseliot: so cedega wants ia32-libs for other reasons [16:04] Keybuk: fastest meeting minutes evar! [16:04] tseliot: and - hah - its detecting all good now. Sorry for the noise. [16:04] lifeless: np [16:05] Uhm.. How can I know why a new package was rejected? The mail doesn't neither state a reason nor who rejected it [16:06] LaserJock: heh, I always write meeting notes as I go along [16:07] RainCT: you should have got a separate mail, or IRC comment, with a reason [16:08] cjwatson: so, regarding archive reorg and -security. I read through ArchiveReorganisation* (quickly, so I might have missed something) and was curious how security maintenance would be expressed. obviously, now it is via seeds, and nijaba's (and your?) tool will help with LTS. but with archive reorg, there are two things I'm thinking of: [16:08] * cjwatson glares at http://people.ubuntu.com/~ubuntu-archive/architecture-mismatches.txt [16:09] new-binary-debian-universe has a lot to answer for [16:09] lool: do you experience https://bugs.edge.launchpad.net/ubuntu/+source/kvm/+bug/357630 with the jaunty ga iso's? [16:09] 1. how easy will it be to determine if a package is supported. it would be nice if we could express that in current packaging/seeds/etc, but it could be an LP API query [16:09] cjwatson: I didn't, although perhaps it was send only to the packager (I only gave the 2nd -actually, 3th- ack and uploaded) [16:10] RainCT: I think it would be normal (albeit not ideal) for it to be sent only to the packager [16:10] 2. it seems we'll need to be careful about things possibly 'slipping in' to a maintained status (eg, through a package set) [16:10] RainCT: I've had a bug open about allowing us to specify a reason in the rejection mail for about three years [16:10] jdstrand: 1. it'll appear in a field in the Packages file [16:11] jdstrand: 2. to precisely the same extent as we need to be careful about things slipping into main, I think? [16:11] cjwatson: re 1> excellent [16:12] cjwatson: re 2> yes, but I wasn't sure if part of this would become 'this package set is officially supported' as opposed to 'this package is officially supported'. the former could be problematic. if that is not part of the design, then no problem [16:13] cjwatson: I see. Thanks for the info. [16:15] jdstrand: I believe it will be the former, but isn't that the same as saying that main is officially supported? 
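On jdstrand's first question: once that field actually appears in the Packages index, checking it from a developer box could be as simple as grepping the downloaded apt lists with dctrl-tools. The field name "Supported" and the paths here are assumptions for illustration, not something the archive publishes yet at this point in the log:

    grep-dctrl -F Package -X openssh-server -s Package,Supported /var/lib/apt/lists/*_dists_karmic_main_binary-*_Packages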
[16:16] jdstrand: certainly archive admins would need to be careful about adding things to that set, the same way they're careful about adding things to main [16:16] s/that set/those sets/ [16:17] cjwatson: I think I see my confusion-- just because a person has upload rights to a particular package set, that doesn't mean that the person can add or remove packages from the set (I thought it did) [16:18] jdstrand: it's just the same as it is today, in a sense [16:18] jdstrand: any developer can ask for packages to be moved back and forward between main and universe, and their opinions are given a high weighting, but only the archive team can actually make that change [16:22] cjwatson: ok, that makes sense. there will just be way more components than those we have now. some will be officially maintained, and some not. I suppose there will be some tool to easily tell what package set is officially maintained as well [16:22] (for archive admins mostly) [16:30] jdstrand: s/tool/list/ I think [16:31] cjwatson: cool. thanks for the info :) [16:31] it shouldn't be that long; the sets that we use for deciding which packages are maintained will essentially be the ones that correspond to seeds, rather than the little ones we use for convenience of granting small groups of people upload access to small numbers of packages [16:33] At this point in the release cycle, would it be kosher to upload an SRU directly to Jaunty and pocket-copy it to Jaunty? We usually require that patches live in the curent development release for a bit before putting them into -proposed, but are we really expecting more people to be running Karmic than jaunty-proposed at this point? [16:33] Erm... By "directly to jaunty" I of course mean "directly to jaunty-proposed". [16:33] And pocket-copy to Karmic. [16:33] Wow. That was difficult. [16:34] soren: it's not encouraged, but it can be a reasonable thing to do, at the discretion of the SRU team [16:35] cjwatson: Ok. The bug in question is bug 359447. [16:35] Launchpad bug 359447 in kvm "kvm segfaults" [High,Triaged] https://launchpad.net/bugs/359447 [16:35] cjwatson: What are the deciding factors? [16:36] soren: gut feel :-) we'll probably say no after alpha 1 or so, once karmic is reasonably usable [16:37] cjwatson: Right. This is for today :) [16:38] soren: I thought the main reason for development release first was to ensure stuff didn't get forgotten and we hit a regression later, so 'development release first' need only be by however long it takes to upload the second time. [16:38] ScottK: that too, although in cases such as this the SRU team member accepting it will typically open a karmic task on the bug [16:39] which is also a reasonable way to ensure that it doesn't get forgotten [16:39] Right. We're in the fortunate situation that we can still just do a pocket-copy. [16:48] cjwatson: It's been a while since I've done this.. I, being a core-dev, can accept the Jaunty task, but I still need to subscribe ubuntu-sru and get an ok from them/you, right? [16:48] ...so accepting the Jaunty task means that the bug is considered severe enough to warrant an SRU, but ubuntu-sru ack's the actual debdiff, I suppose. YEs, that makes sense. [16:49] bryce, hey there [16:50] bryce, do you happen to maintain a list of KMS-aware xorg-video-drivers on the ubuntu wiki? [16:52] MacSlow: there is only one driver that really works ATM, the -intel. -ati is WIP [16:53] amitk, ok... 
[16:53] amitk, so I'm not missing any new developments on that front [16:54] MacSlow: not really, rtg did enable support for KMS in the Karmic kernels on the last upload. You can pass a commandline option to grub to enable it. [16:56] amitk, atm I'm using an unpatched mainline/upstream kernel with karmic [16:57] amitk, I mainly have issues with getting my initrd recreated in a way so that fbcon and i915 are loaded _before_ plymouth starts [17:05] soren: you don't need an ack before upload unless it's going to take a long time to prepare and you think you might want to avoid doing the work if it might just be rejected anyway [17:06] soren: you should still subscribe ubuntu-sru, but normally it's OK for the processing to be done in the queue === JFo-mtg is now known as JFo [17:09] cjwatson: Cool, thanks. [17:37] meep, http://people.ubuntu.com/~ubuntu-archive/testing/karmic_probs.html doesn't look good all of a sudden [17:41] pitti, hi, I currently need some assistance to restart the build process for libg3d. it failed on different architectures due to some missing indirect dependencies at the moment of the build (see LP: #372236). please correct me if I misunderstood the BuildDaemon section in the ubuntu wiki [17:42] Lazhur: you want the builds to be retried, i. e. they should succeed now? [17:42] NCommander: say, can you make sure to follow https://wiki.ubuntu.com/UbuntuDevelopment/PatchTaggingGuidelines for future patches? I've added tags to your shadow patch now. [17:46] pitti: hm, i think i have missed something. let me check something first === rbelem is now known as rbelem-lunch [17:51] pitti: a quick check on packages.ubuntu.com looks good, but I am not quite sure why the dependencies couldn't be resolved as old versions of the packages should have been available on amd64 and co. [17:51] ("old" versions == versions which are new enough to resolve the dependency, but aren't the newest available) [17:52] Lazhur: looks like temporary arch all:/arch any desync [17:53] Lazhur: retried [17:53] ok, big thanks :) [17:53] libselinux1 | 2.0.65-5build1 | karmic | amd64, hppa, lpia, powerpc [17:53] libselinux1 | 2.0.71-1 | karmic/universe | armel, i386, ia64, sparc [17:53] this really isn't clever [17:53] how can this happen? [17:53] I'm going to flip the --no-override default on new-binary-debian-universe; I'm sure it's what's causing a lot of this [17:54] the libselinux-ruby1.8 binary is new in karmic, so it seems a fair bet that new-binary-debian-universe was used [17:56] and bits of libtool in universe too [17:56] * cjwatson runs around behind whoever it is, trying to repair the archive ... [17:59] Is there a plan on unwedging the i386 and lpia buildds currently wedged on sysvbanner builds? [18:00] maxb: I pinged elmo and infinity about it [18:01] ok, it's on someone's todo list, good [18:03] What's the policy for syncing a *new* package from Debian multimedia? Let it go through REVU? [18:10] slangasek: Do you have a moment to discuss your spec on release synchronization with Debian? [18:10] ScottK: hmm, ok [18:11] (N.B.: the fact that there's a spec on this, which is mine, may be news to me :) [18:11] I may misremember. [18:11] slangasek: I've seen it too :-) [18:11] I'm also suffering from Hotel internet this week. [18:12] slangasek: I'll just assume it's you for the moment and look it up later. [18:12] ScottK: it's entirely likely that such a thing would be assigned to me - I just don't remember having seen it yet. :)
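On amitk's point above about turning KMS on via a grub command-line option: for the -intel driver the usual switch is the i915 module's modeset parameter. A minimal sketch, assuming a grub-legacy /boot/grub/menu.lst as shipped at the time; the kernel version and root= value are placeholders:

    # In /boot/grub/menu.lst, append i915.modeset=1 to the kernel line to turn
    # kernel mode setting on (i915.modeset=0 turns it back off):
    # kernel /boot/vmlinuz-2.6.30-5-generic root=UUID=some-uuid ro quiet splash i915.modeset=1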
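And one way (an assumption about the setup, not necessarily what MacSlow settled on) to get fbcon and i915 loaded from the initrd before plymouth starts is to add them to the initramfs-tools module list and regenerate the initramfs:

    # Force the two modules into the initramfs, then rebuild it for the running kernel:
    printf 'fbcon\ni915\n' | sudo tee -a /etc/initramfs-tools/modules
    sudo update-initramfs -u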
[18:12] Anyway, my thought is that what ought to be synchronized isn't the releases, but the freezes. [18:13] I think if Ubuntu DIF and Debian release freeze were aligned, that would be the point of maximum benefit. [18:13] hmm [18:14] it seems to me that having DIF well after Debian's release freeze is better, because it means getting the extra, release-critical fixes from Debian in with no extra work [18:15] Perhaps [18:15] Regardless of the ideal alignment point, I think it's the freezes that need to be aligned, not the release dates. [18:15] https://blueprints.launchpad.net/ubuntu/+spec/foundations-karmic-lts-debian-release-coordination btw [18:15] alternatively, and this was something that had already been discussed for jaunty, we could run extra package sync runs against testing well past DIF... [18:16] I could see that. [18:17] IME, there is a lot of stuff being crammed into Debian immediately before the release freeze, that then takes a while to shake out - so aligning the freezes also wouldn't necessarily help for getting extra stabilization for the LTS? [18:19] given the relative lengths of the freezes in the past, having them un-aligned would mean that we would end up with different upstream versions of some important packages [18:19] e.g. kernel [18:20] is dato going to be at UDS by any chance? [18:22] not that I'm aware [18:22] good point, re: kernel [18:22] though that's a package we don't share with Debian anyway [18:24] actually, it may has the same implication for GNOME (and possibly KDE), which is more significant [18:24] wow [18:24] line edit fail [18:24] s/may has/may have/ === hunger_ is now known as hunger === rbelem-lunch is now known as rbelem [19:28] question [19:29] after yesterday's updates, I lost all networking and wireless support... any ideas? [19:29] running an HP laptop [19:30] no takers? [19:30] all_is_fair: This isn't a support channel, try #ubuntu [19:30] I did [19:30] it doesn't work === virtuald_ is now known as virtuald === ember_ is now known as ember === rickspencer3 is now known as rickspencer3-afk [21:11] kees, ECONTEXTNEEDED; which shadow patch? [21:22] NCommander: password expiry on arm in 1970. though it looks like upstream already took it === ahe_ is now known as ahe [22:05] pitti, you around? [22:07] Keybuk: if we're building libblkid from util-linux now, could you stop the e2fsprogs source building it, otherwise the next upload will probably fail? also, libblkid1-dbg is still built from e2fsprogs (as far as the archive knows ...) right now and is therefore uninstallable - I don't know whether it should be built from util-linux, or just removed [22:07] probably just removed if there's no need for a separate debug pass [22:14] cjwatson, can I get your opinion on https://bugs.edge.launchpad.net/ubuntu/+source/bluez/+bug/327284? (I've gotten requests for a bluez backport, which brought this bug to my attention). I've been checking bluez's git repo, and there are a *lot* of commits involving fixing bluetooth headphones (to the point I'm not sure I can isolate the commit(s) that fix it in a sane manner). On the other hand, I'm not sure we could sanely release [22:14] a new upstream version via SRU to fix this issue ...
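A rough way to size up the commit delta NCommander describes, assuming a local clone of upstream bluez; the repository URL and the two tag names are illustrative assumptions, not taken from the discussion:

    # Count the upstream commits that touch the audio/ code between two releases:
    git clone https://git.kernel.org/pub/scm/bluetooth/bluez.git && cd bluez
    git log --oneline 4.32..4.40 -- audio/ | wc -l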
[22:14] Ubuntu bug 327284 in pulseaudio "[Jaunty Alpha4] Bluetooth Headset pairs but freezes the system when used" [Undecided,In progress] [22:15] I've got no problem with a backport there - if nothing else it would make it easier to analyse whether it really does fix the problems for most people [22:16] cjwatson, should I prep an SRU for this then? [22:19] NCommander: err, I said a backport not an SRU [22:19] I can't say I'm immediately comfortable with an SRU that people don't understand well enough to unpick the commits [22:25] cjwatson, indeed, the package is a rather twisty set of commits (there are about 50 or so between the two releases :-/) [22:42] NCommander: please note that to fix the BT interaction with PA, one needs karmic's PA and alsa-lib, too. [22:42] NCommander: (PA -> PulseAudio) [22:42] dtchen, fun :-/ (backporting PA is NOT my idea of fun; I don't want a repeat of the flash backport) [22:43] NCommander: luckily, a backport. Also, jaunty's PA is so outdated that a backport isn't terribly likely to regress. === lifeless_ is now known as lifeless [22:44] dtchen, what do you recommend I do? (I can ACK the backport, but unless an audio guy gives it a rubber stamp, I'm not sure I want to do so) [22:45] NCommander: you'll want to backport all three pieces (alsa-lib, pulseaudio, and bluez) [22:45] oh yay [22:45] alsa [22:45] can I start running now :-) [22:49] dtchen, if you feel like putting your two cents on the backporting request, I'll ack it if you think it's sane [22:49] sure, i'll look in a bit [22:50] dtchen, np [23:19] seb128: do you know why bug 370366 apparently didn't show up in jaunty (or at least people seem to have started reporting it like mad on karmic)? looking at the source it indeed only seems to have entries for up to 8.04 [23:19] Launchpad bug 370366 in system-tools-backends "[Karmic] Time and Date - platform not supported" [Undecided,Confirmed] https://launchpad.net/bugs/370366 [23:19] seb128: is this a case where we should add an item to NewReleaseCycleProcess to update the list for each new release/ [23:20] ? [23:20] no, I don't know but I don't get the issue here on jaunty [23:20] seems a good idea [23:21] ideally we could deprecate gnome-system-tools at some point but we still don't have a users-admin equivalent so until then ... [23:21] ok, I've added a note to NRCP [23:22] thanks [23:22] that might be fixed when we sync on debian [23:23] I think they dropped the check there to avoid similar issues recently, I've read a commit about that [23:23] can I consider you on top of this bug and close the window in my browser? :) [23:23] I'm amazed we already have people filing bugs about karmic ;-) [23:23] cjwatson: yes [23:23] amazed? I'm scared [23:23] thanks [23:24] well, let's say I'm surprised that users dist-upgraded so early yeah [23:25] pgraner: do you know if linux-meta is going to be updated to 2.6.30 in karmic soon? [23:30] * NCommander is getting annoyed that Launchpad keeps 502ing === RainCT_ is now known as RainCT [23:45] cjwatson: it should have been already, I'll pester rtg [23:45] ta === maco_ is now known as maco
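For the linux-meta question near the end, the state of the metapackage can again be checked from the archive with rmadison from devscripts; karmic and the 2.6.30 kernel are the values under discussion above, the rest is a sketch:

    # The linux-image-generic metapackage version encodes the kernel ABI it
    # currently points at, so this shows whether 2.6.30 has landed yet:
    rmadison -u ubuntu -s karmic linux-image-generic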