[00:07] Congrats jpds!
[00:11] what's the outcome of the vote?
[00:12] Laney: https://edge.launchpad.net/~ubuntu-dev/+polls
[00:12] Yes majority on all 3 votes
[00:12] nice one, congrats
[00:12] (I didn't know how to interpret the results)
[00:13] just a rubber-stamp
[00:13] if more yes than no, it passes
[00:14] Andrew Mitchell for MOTU Council in 2007 - closed on 2007-02-15 ;-)
[00:14] geez, we're fogeys
[00:14] *ancient*
[00:14] yep
[00:14] such a long time ago now
[00:36] if i rebuild a .deb package that already exists in the debian repositories (with a debian maintainer), i must write my name as maintainer in the "control" file for the ubuntu build, is that right?
[00:37] Where are you building the package, blackmoon105? In your PPA?
[00:38] nhandler: yes, in my PPA
[00:39] nhandler: yes, i'm building the package in my PPA
[00:40] blackmoon105: Then I would set yourself as the Maintainer and move the Debian Maintainer to the XSBC-Original-Maintainer field
=== asac_ is now known as asac
[00:41] nhandler: ok, thanks
[01:18] savvas: ping
[01:21] is there an ubuntu equivalent to Debian's dev-ref? I've found the customised Ubuntu debian-policy, but not the ref.
[01:25] Kamping_Kaiser: Is there something specific you're looking for? Most of our stuff is in w.u.c somewhere.
[01:26] ScottK, the build dependencies allowed/disallowed between parts of the archive (main/universe/restricted and release/release-updates/release-security etc)
[01:27] I'm sure that's documented somewhere, but I couldn't tell you exactly where.
[01:27] I can tell you how it works though if you want.
[01:28] please do. (it'll give me a starting point to try and find the 'proper' documentation)
[01:30] OK
[01:31] Each layer of the archive has to be completely self-contained.
[01:31] Packages in Main can only build-dep, depend, or recommend other packages in Main.
[01:32] Then for Universe it's Universe + Main.
[01:32] Restricted is Main + Restricted.
[01:32] Multiverse is Main, Restricted, Universe, and Multiverse.
[01:32] Is that sensible?
[01:33] yep, with you so far.
[01:33] OK.
[01:33] At release time you get the release pocket.
[01:34] -updates and -security both build on themselves and -release.
[01:34] The trick is that -security updates get copied from -security to -updates on some periodic basis.
[01:34] So -updates ends up with all the post-release changes.
[01:35] (if a package has an update in -updates and then gets a security fix in -security, you have to change the package in -updates (or -proposed) to have that security fix.)
[01:36] Packages destined for -updates are uploaded to -proposed and then built and tested there.
[01:36] Once blessed they are copied from -proposed to -updates.
[01:36] How's that?
[01:37] Kamping_Kaiser: ^^^
[01:37] so if i understand correctly, -release is equivalent to debian's 'frozen'. then after release -updates builds using -updates and -release, -security builds using -security and -release. but does not build using -security, -release *and* -updates ?
[01:38] Yes. I'm not certain, but I think -updates also builds against -security although it doesn't matter much because stuff gets copied from -security to -updates anyway.
[01:39] The reason for this is that running with just security updates and not bugfixes is a supported use case.
[01:39] security is considered more conservative, so it won't grab any of the 'riskier' stuff from -updates
[01:39] i see. this bit is specifically what i'm trying to find a policy on.
[01:39] ajmitch: I'd put it slightly differently. I'd say it's considered more essential. The risk may be higher or lower, but it's more worth taking.
[01:40] same thing, different perspective really
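(A rough sketch of what ScottK's layering means in apt terms for the "security updates only" use case: the release pocket plus -security, with -updates deliberately left out. The mirror hostnames and release name below are only illustrative.)

    # /etc/apt/sources.list fragment (illustrative):
    deb http://archive.ubuntu.com/ubuntu hardy main restricted universe multiverse
    deb http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
    # hardy-updates intentionally omitted: per the discussion above, -security
    # is expected to remain installable against -release alone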
[01:40] ScottK, current linux-image in hardy-security depends on stuff in -updates to build and install. this is a pita for gNewSense, since we don't officially support updates, which is what started me looking into this whole thing. (i posted on -dev about this last night but got no bites)
[01:40] There was a recent discussion on this in ubuntu-devel because of gcc updates getting into -updates and the risk of some stuff ending up misbuilt.
[01:41] the discussion on -devel sounds like exactly what you're talking about with the kernel there
[01:41] Yes
[01:41] and since my recollection was '-updates should be optional', and we've had packages rebuilt because they depended on -updates before, i was surprised to be told privately -security could depend on -updates
[01:42] It's not supposed to as I understand it. I'd ask kees.
[01:42] or doko, he seemed to know about it as well, since it was gcc
[01:43] would sending kees an email be the best move, or ping him on irc later?
[01:43] I guess email, then i can include this convo for background.
[01:45] ScottK and ajmitch, thanks very much for the help
[01:46] you've read the thread from about this time last month about kernel-compiler mismatches?
[01:46] no, but i could do. happened on the u-dev list?
=== eKZDskZS is now known as ljl
=== ljl is now known as LjL
[01:47] yeah it did, it pretty much covers what you were talking about, I think
[01:47] I'll go and check, thanks.
[01:47] https://lists.ubuntu.com/archives/ubuntu-devel/2009-February/027366.html
[01:48] ta
[02:01] I don't see anyone objecting to ScottK's mention of the 'without updates' use case, so I might file a bug on the linux packages
=== foxbuntu` is now known as foxbuntu_
[02:27] I've just fired off an email. Thanks again. I'm sure I'll be back later.
=== foxbuntu_ is now known as foxbuntu
[03:23] what do people feel about backporting config-package-dev?
[03:23] bug 315264
[03:23] Error: Could not parse data returned by Launchpad: The read operation timed out (https://launchpad.net/bugs/315264/+text)
[03:23] I was looking over the diff and couldn't really see anything that would pose a problem
=== Zarel_ is now known as Zarel
[06:13] good morning
[06:18] Morning dholbach.
[06:18] hey iulian!
[07:13] Hi. About the python2.6 transition: what should I do if a package still depends on python2.5, but builds with python2.6 and doesn't have any issues with python2.6? add an entry for a package rebuild in the changelog?
[07:13] and submit the debdiff?
[07:15] have to go now. Bye!
=== fabrice_sp_ is now known as fabrice_sp
[08:27] g'morning
[09:28] directhex: OK if I re-add the planet ubuntu thing?
[09:29] dholbach, oh, oops. yes, go ahead
[09:29] directhex: also I'll run update-maintainer for you - it's time you get an ubuntu.com mail address! :)
[09:31] i am the king of update-maintainer suck
[09:32] does anyone have any idea if libdvdread is used by xine? If not then it should be removed from the kubuntu-restricted-extras dependencies.
[09:38] slytherin, try playing a dvd with xine - libdvdread spams output on stdout iirc
[09:40] directhex: uploaded
[09:41] dholbach, thanks. sorry for the debdiff cock-up. i wasn't thinking properly - it's a while since i've merged anything, since full syncability was a jaunty goal
[09:41] directhex: no worries
[09:42] which gnome# 2.24's abi break hasn't helped with. such is life
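(Since update-maintainer just came up: a minimal sketch of the Maintainer handling nhandler described for blackmoon105 around 00:40. For a PPA rebuild you can make the change by hand in debian/control; for uploads to the Ubuntu archive, the update-maintainer script from ubuntu-dev-tools performs the equivalent edit, pointing Maintainer at an Ubuntu team address instead. The names and addresses below are placeholders.)

    # debian/control, source stanza, as it comes from Debian:
    Maintainer: Jane Doe <jane@debian.org>
    # after adjusting it for an Ubuntu/PPA build:
    Maintainer: Your Name <you@example.com>
    XSBC-Original-Maintainer: Jane Doe <jane@debian.org>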
[09:44] directhex: Will try tonight
[09:45] dholbach, i'll ask the maintainer if he'd consider adding planet ubuntu at that end
[09:45] directhex: super, thanks
[10:06] can't believe there aren't more people in the motu group: http://identi.ca/group/motu
[10:07] i don't "get" this twittering thing
[10:07] also, not a motu
[10:07] ... yet :)
[10:08] dholbach: hi, are you Daniel Holbach?
[10:08] cristi: yep
[10:09] dholbach: you asked for mmkeys.so here https://bugs.edge.launchpad.net/ubuntu/+source/sonata/+bug/341409
[10:09] Ubuntu bug 341409 in sonata "Edited the package for the Python 2.6 transition" [Undecided,New]
[10:09] dholbach: i don't know what you are referring to
[10:09] cristi: run less on the old python-mmkeys .deb and on the new one and compare
[10:09] or dpkg -c
[10:10] you'll see that the old package had two versions (one for python2.4, one for python2.5) of the .so file, the new package just has one for python2.5
[10:11] dholbach: i see your point now
[10:11] ok good
[10:11] I'm not an expert, I just thought it might be a problem :)
[10:12] dholbach: however, since 2.4 will not be used anymore, and it was posted for the python transition, is it necessary to add the 2.4 version?
[10:12] cristi: no, dropping 2.4 is fine - there's just no 2.6 version
[10:16] dholbach: i am new to packaging, so i don't really know what i have to do now. i followed https://lists.ubuntu.com/archives/ubuntu-devel/2009-February/027528.html for the python transition. i used a jaunty pbuilder to build. what should i do?
[10:16] cristi: to be honest, I don't know - as I said: I'm not an expert
[10:18] dholbach: oh, mkay, then hopefully soon a motu will take a look for sponsoring, and give me feedback on what is wrong
[10:18] yep
=== pschulz03 is now known as pschulz01
[11:03] nhandler: Thank you, you too!
[11:44] if I change my package to a different series, say Jaunty instead of Intrepid, but the version does not change, do I need a changelog entry to reflect this?
[11:44] I noticed some packages do, some do not, just wanted to know what was best practice
[11:51] you can't have 2 different packages with the same version number
[12:08] I am just changing the packaging more than anything, not the actual source
[12:08] so?
[12:09] so just bump the version and change the changelog to show the packaging has been updated?
[12:09] this is the first time I have been in this situation with packages I have created
[12:09] AdamDH: that's right.
[12:10] AdamDH: you cannot have two packages of the same version in the same repository, even if it's in a different series
[12:10] AdamDH, the bit after the - is specifically for packaging-related versioning
[12:10] that's why -2, -3, etc, exist
[12:11] so could I just append say -0ubuntu1~ppa1 to show the packaging has been updated and change the changelog?
[12:11] or is it just -0ubuntu1 and increase the 1?
[12:11] AdamDH: something like that.
[12:12] AdamDH: read the packaging guide
[12:12] and read the ppa guide
[12:12] AdamDH, append -0ubuntu1~ppa1 to what existing version?
[12:12] 0.14 is the version
[12:13] try not to have -N or -NubuntuM in a ppa.
[12:13] add a suffix
[12:13] AdamDH, 0.14 is the PACKAGE version?
[12:13] ~somethingX if you want it to be superseded by the corresponding ubuntu/debian version
[12:14] +somethingX otherwise
[12:14] yup 0.14 is the upstream version
[12:14] okay, and what's the ubuntu version?
[12:14] AdamDH, i didn't ask that, i asked what the PACKAGE version was.
[12:15] AdamDH, packages are versioned upstreamver-pkgrevision, unless they are "native" packages (i.e. the distro is upstream and it's only used in the distro), say... update-manager
[12:16] there is no pkgrevision yet, I am doing the revision
[12:16] then 0ubuntu1
[12:16] 0ubuntu1 if it goes into ubuntu
[12:17] 1 if it goes into debian
[12:17] you said you were changing your package though. you have an EXISTING package, with package version number 0.14?
[12:17] and 0ubuntu1~ppa1 if it goes into a ppa
[12:17] yes there is an existing package for intrepid with version 0.14 but the code will not change, just the packaging
[12:17] * hyperair headdesks
[12:17] i give up
[12:17] sigh, i feel i'm having trouble communicating here
[12:17] directhex: you and me both
[12:18] sorry, the versioning ubuntu is using is just confusing me at the moment
[12:18] 0.14 is NOT A VALID PACKAGE VERSION, other than a HIGHLY specific exception. if you've been using it, you've been doing things wrong.
[12:18] AdamDH: did you read ANY DAMN THING that directhex just said?!
[12:19] yes I did, want to start afresh so I can explain what I have started with?
[12:19] if the upstream tarball says 0.14, and versions are meant to be upstream-revision, then the format must be 0.14-1 or 0.14-0ubuntu1
[12:19] I think you're referring to something I know as something else so it's causing a little confusion, sorry
[12:19] * hyperair will now head out to eat dinner.
[12:19] using a native package causes significant issues, as it kills off the orig/diff system
[12:20] i.e. you no longer have a pristine upstream tarball (orig) against which your packaging work applies (diff) so updates to the existing version are impossible (can't re-use orig)
[12:20] the upstream tar is 0.14 so it should be upstream-revision? so I should be using 0.14-1 or 0.14-0ubuntu1 ?
[12:20] right makes sense
[12:21] and if you're only going into a PPA for now, append ~ppa1 to the end, meaning "this is the PPA version of 0.14-0ubuntu1, but i want the 'real' 0.14-0ubuntu1 to be installed given the option"
[12:21] that way a distro 0.14-0ubuntu1 replaces your PPA version
[12:22] the packages I created in my PPA I did msp430-binutils - 2.19.1-0ubuntu1~ppa4 etc
[12:23] but I am working with packages created by someone else at the moment, hence the confusion, and updating those
[12:23] thanks for the help directhex, hyperair
[12:26] james_w, ping
[12:26] directhex: hey
[12:27] james_w, can you forward me the monodevelop-debugger-gdb reject mail? it's simply not on the ubuntu-archive archive
[12:27] the reject mail is just the autogenerated one
[12:27] I haven't received an explanation mail
[12:28] no clue who rejected?
[12:28] nope
=== Igorots is now known as Igorot
[12:38] hi
[12:38] what does motu mean?
[12:38] !motu
[12:38] motu is short for Masters of the Universe. The brave souls who maintain the packages in the Universe section of Ubuntu. See http://wiki.ubuntu.com/MOTU
[12:38] Thanks Pici
[12:38] kaushal: surely
[12:39] will tomboy 0.12 be backported to Ubuntu 8.04 Desktop ?
[12:41] ii tomboy 0.10.1-1 desktop note taking program using Wiki style links
[12:42] only if someone does the work
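(Pulling together the version scheme directhex and hyperair walked AdamDH through above; the version strings are just examples, and dpkg --compare-versions can be used to check how they sort.)

    0.14-0ubuntu1        # upstream 0.14, first Ubuntu packaging revision
    0.14-0ubuntu1~ppa1   # PPA build of the same packaging; '~' sorts lower
    $ dpkg --compare-versions 0.14-0ubuntu1~ppa1 lt 0.14-0ubuntu1 && echo "the archive version wins"
    the archive version wins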
=== mdeslaur_ is now known as mdeslaur
=== `6og is now known as Kamping_Kaiser
[13:47] directhex: xine uses its own private version of libdvdread/libdvdnav and while an option to compile against an external libdvdread exists, it is not recommended (as said by configure --help)
[13:53] slytherin, how lame
[13:54] Kamping_Kaiser, ScottK: while kees is the person to talk to regarding the kernel -security/-updates issue, I can say that systems without -updates enabled are entirely supported and anything in -security that depends on -updates is a bug
[13:56] jdstrand, bril, thank you! I've sent kees an email. with any luck i'll still be awake when he gets on IRC (it's about 5am where he is i think).
[13:58] Heya gang
[13:58] Hey bddebian !
[13:58] Kamping_Kaiser: fyi, in these cases it is entirely appropriate just to send to security@ubuntu.com
[13:58] Kamping_Kaiser: we'd all get it and then also be in the loop on it
[13:58] bddebian, hey mate.
[13:58] Hello nhandler
[13:59] Heya Kamping_Kaiser
[13:59] Kamping_Kaiser: don't feel you have to resend though-- just fyi
[13:59] directhex: Do you think we should compile xine against the libdvdread/libdvdnav in the repos? Because a private copy means that we are not sure of its status.
[14:00] jdstrand, ok. I've been getting mixed messages, and been unable to find anything in an Ubuntu policy about it.
[14:00] Kamping_Kaiser: sorry about that. I can fix the policy. Where were you looking?
[14:01] Kamping_Kaiser: and where did you expect to find it?
[14:01] jdstrand, I was going through the ubuntu-policy package, but hadn't found it (iirc i was 10-15% of the way through).
[14:02] Kamping_Kaiser: interesting-- I can take a look at it
[14:02] none of my searches using google got me joy either, but it's one of those situations where i'm not sure what i can search for to get the right results.
[14:04] Kamping_Kaiser: I think going to wiki.ubuntu.com/Security and/or SecurityTeam should get you there
[14:04] Kamping_Kaiser: I say 'should' as in "I'm going to check to make sure it does"
[14:05] hehe. ok, thanks. I'll have a try too
[14:05] Kamping_Kaiser: if there is an appropriate place in ubuntu-policy for referencing it, then we can have a generic blurb and maybe reference the wiki
[14:05] slytherin, hm, i'd ask someone like siretart for an opinion on that
[14:05] slytherin, FWIW i ignore the recommendation & use distro cairo for moon
[14:05] Kamping_Kaiser: but the team will talk about improving the situation
[14:06] slytherin: have you verified the changes done to those libraries in xine?
[14:07] jdstrand, thanks. I was specifically looking in the section regarding the repository split-up (main/universe/r/m) in ubuntu-policy, but if there's a better place I wouldn't object to reading further for it
[14:09] Kamping_Kaiser: I know I have certainly talked about that in several places-- and pretty sure I wrote about it in the wiki. I think I also came across it the other day in ArchiveReorganisation
[14:09] Kamping_Kaiser: thanks for your feedback. We'll get that fixed up
[14:10] jdstrand, tbh, even though I can roughly describe what I'm trying to get at, and ScottK explained it all, i'm still not sure i'd know what to search for to find the right answer. (a personal problem, but perhaps relevant when trying to work out what to put on the page)
[14:11] * jdstrand nods
[14:11] Kamping_Kaiser: I fully agree that it is not widely known
[14:11] (which is why I've talked to a bunch of people about it :)
[14:11] :)
[14:11] siretart: not yet, but the last entry in the upstream changelog about an update of libdvdnav is from December 2004
[14:12] jdstrand, if i can be of help re this, feel free to ping me.
[14:13] Kamping_Kaiser: cool. thanks :)
[14:14] np! thanks for /your/ help
=== azeem_ is now known as azeem
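(One way to check the kind of thing Kamping_Kaiser and jdstrand are discussing, i.e. which pocket a package and its dependencies actually come from, is apt-cache on a system with the relevant pockets enabled; the package name here is just an example. On a hardy box with only -release and -security enabled, anything in -security that needs a version published only in -updates shows up as uninstallable, which is the bug jdstrand describes.)

    $ apt-cache policy linux-image-generic                # the version table shows which pocket offers each version
    $ apt-cache show linux-image-generic | grep ^Depends  # what it would pull in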
[14:36] does someone know when mok0 usually comes online?
[15:04] Hey, I'm trying to run a PPA team repository that keeps in sync often with git. The problem is that the program has about 20 plugins that need to be rebuilt whenever a change happens to the main program. Is there some way to automate the rebuilding process for these plugins?
[15:05] This isn't counting all the backports for the plugins the archive contains
[15:23] hopefully almost kees-gets-back-o'clock. *tick tock tick* :)
[15:26] Would a kind MOTU from the release team please have a look at Bug #338408 thx ;-)
[15:26] Launchpad bug 338408 in coherence "FFe for python-coherence" [Undecided,New] https://launchpad.net/bugs/338408
[15:34] dholbach: did you see my proposal for harvest-data?
[15:34] gaspa: no, sorry - seems I did not get that mail - will take a look at it in a sec
[15:35] thanks. ;)
[15:48] I'm working on packaging pidgin-plugin-pack 2.5.1. When I build it I get loads of dpkg-shlibdeps warnings (see http://launchpadlibrarian.net/23781571/buildlog_ubuntu-jaunty-i386.purple-plugin-pack_2.5.1-0ubuntu1~ppa2_FULLYBUILT.txt.gz ) Is there something wrong and can I do anything about it?
[16:06] mbudde: look into the banshee packaging, and copy the debian/patches/99_ltmain-as-needed.patch
[16:06] mbudde: then add LDFLAGS += -Wl,--as-needed into debian/rules
[16:07] mbudde: if the patch doesn't apply due to some context changes in ltmain.sh, you may need to manually make those changes and refresh the patch
[16:08] hyperair, ok, I'll take a look at it :) Thanks!
[16:08] mbudde: np
[16:22] savvas: added to group
[16:22] savvas: curious about your plans for gapti
[16:34] jdstrand, btw, the KernelMaintenance page at the bottom has a section "main, proposed and security". Might be worth linking from there to whatever doco gets put together about the -updates/-security bit.
[16:34] sneaky bastard. :o
[16:40] jdstrand, btw, the KernelMaintenance page at the bottom has a section "main, proposed and security". Might be worth linking from there to whatever doco gets put together about the -updates/-security bit. (sorry to the channel who have to see this twice)
[16:40] Kamping_Kaiser: thanks. I also found where I wrote it: SecurityTeam/FAQ
[16:40] * Kamping_Kaiser looks
[16:41] Kamping_Kaiser: it isn't really that exciting :)
[16:41] jdstrand, how exciting can you make kernel packaging? ;)
[16:41] heh
[16:43] Now i have to go and look at the email i sent kees to check how much is still relevant *heh*
[16:43] Is anyone here using LXDE?
[16:44] Kamping_Kaiser: heh, it'll be a bit before I get through my email this morning. :)
[16:44] kees, i've waited until 3am, i'm sure i can wait another hour or two.
[16:45] now, where did i put that can of caffeine...
[16:45] RainCT: yep, I am
[16:46] Kamping_Kaiser: heh, ah, just got to it.
[16:46] * RainCT has just installed it and can only say.. "wow" :). Login time is less than 1 second and it has a panel and everything (unlike openbox) and actually looks great :D
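(Going back to mbudde's dpkg-shlibdeps warnings around 15:48: those warnings are typically about libraries that are linked against but not actually used, which is what hyperair's --as-needed suggestion addresses. A rough sketch of the debian/rules side of it; the libtool patch itself has to be copied from the banshee packaging as he says, and how the flag reaches the build depends on how the particular rules file invokes configure.)

    # debian/rules (fragment)
    LDFLAGS += -Wl,--as-needed
    # depending on the build system, the variable may need to be exported
    # or passed to ./configure explicitly:
    export LDFLAGS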
[16:47] kees, don't recall exactly what i wrote, but since then i'm told canonically that -security should stand alone, so anything about confusion can be ignored. the bit about 'what happens now' remains.
[16:47] RainCT, o_0 1 second? that's faster than e16 :o
[16:47] Toadstool: .. but I've got some untranslated entries in the menu (like "Game" and "Network"), do you happen to know why that happens?
[16:47] heh
[16:48] well, it's on a new laptop here.. I'll try it on my sister's PC later (which has 256MB RAM)
[16:48] RainCT: uh, er, no, I am using LANG=en_US here
[16:48] Kamping_Kaiser: right, it is a bug that anything in -security would depend on -updates. it sounds like fixing the compiler glitch needs to move forward.
[16:49] Kamping_Kaiser: is this a problem for hardy, intrepid, or both?
[16:50] kees, i know it's a problem in hardy (gNS bases off it), don't know about intrepid
[16:50] Kamping_Kaiser: okay
[16:52] kees, is there a bug about the gcc shuffle i can sub to? i'd like to keep a tab on it.
[16:52] Kamping_Kaiser: let me ask the kernel team...
[16:53] kees, ok. i'm in there too, so i can hang around if there's no quick reply.
[16:55] uhm.. e16 looks interesting, but I don't like it on a first try (and it took ages to generate a menu :P)
[16:56] e ftw.
[16:58] hm, perhaps I could even get used to it.. anyway, I'm off, cya
[16:59] later mate
[17:17] kees, I might head to sleep. I'm still in -kernel, so if the bug report gets found i'll subscribe myself tomorrow. Thanks for looking into it.
[17:21] Kamping_Kaiser: cool, g'night
[17:22] later mate.
[17:25] I'm trying to set up an autoppa system for maintaining a bunch of packages. I have 20 plugins that I want to bundle into a single source package that I can upload and build into 20 separate binary packages. I've seen other packages do this, is there some information somewhere on how to do this?
[17:34] I'm looking for a motu to review my package at http://revu.ubuntuwire.com/details.py?package=sqliteman
[17:34] wasabi: well not much, just planning to fix it upstream to be compatible with python 2.6 / 3.0 - we'll see how it goes heh :)
[17:34] be back later
[18:46] Hi. gcompris appears as depending on python2.5 in Jaunty, but it builds fine in a schroot. Should I open a bug to bump the version to force the rebuild, or will it be automatically rebuilt at some point?
[18:47] (it's for the python2.6 transition)
=== Andre_Gondim is now known as Andre_Gondim-afk
[19:13] It won't get automatically rebuilt.
[19:22] * fabrice_sp will open a bug, then.
[19:40] Depends line of libghc6-network-doc: Depends: ghc6-doc (>= 6.8.2), ghc6-doc (<< 6.8.2+), libghc6-parsec-doc (= 2.1.0.0-2)
[19:40] Is that as self-contradictory as it seems to be?
[19:41] It's not.
[19:42] Ok. So what does that << mean, then?
[19:43] Ornedan: The '<<' means what you expect. The '+' probably doesn't.
[19:44] 'k. Something does seem contradictory, though, since trying to install the package fails with
[19:44] Depends: ghc6-doc (>= 6.8.2) but it is not going to be installed Depends: ghc6-doc (< 6.8.2+) but it is not going to be installed
[19:44] Ornedan: It means any version 6.8.2 - 6.8.2+ inclusive of 6.8.2, but exclusive of 6.8.2+.
[19:44] i.e. 6.8.2-*
[19:45] 6.8.2-1 or 6.8.2-10ubuntu67~ppa3 matches that
[19:45] Ornedan: What version of ghc6-doc do you have?
[19:47] Hmm... Seems none. Blargh and nvm. I was trying to get haskell set up with haddock more recent than is available from the official repositories, seeing as the version there is 0.8 and the latest release is 2.4.1
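(A quick way to convince yourself of the interval ScottK describes, using dpkg --compare-versions, whose exit status says whether the relation holds. Note that apt prints the strict '<<' relation as '<' in its unmet-dependency output, which is why the pasted message above shows '< 6.8.2+' even though the control field uses '<<'.)

    $ dpkg --compare-versions 6.8.2-1 ge 6.8.2 && dpkg --compare-versions 6.8.2-1 lt 6.8.2+ && echo "6.8.2-1 satisfies the range"
    6.8.2-1 satisfies the range
    $ dpkg --compare-versions 6.8.3-1 lt 6.8.2+ || echo "6.8.3-1 does not"
    6.8.3-1 does not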
[19:49] ghc6 is another one of those special packages
[19:49] "spethial"
[19:49] And the documentation syntax has changed somewhat since 0.8, so it fails on some 1/3 of recent libraries
[19:50] nxvl, other motu-SRU and similar folk, can someone weigh in on bug 341832?
[19:50] Launchpad bug 341832 in mit-scheme "SRU: mit-scheme uninstallable on Intrpepid" [Undecided,New] https://launchpad.net/bugs/341832
[19:51] #ubuntu-dev pointed me here. I'm doing some backporting with prevu, but I can't seem to pass it -j8 or similar to do some parallel building.
[19:51] Someone did have haddock 2.4.1 & a more recent ghc packaged in their PPA, but that ghc was missing the critical standard libraries :P
[19:51] Is there a way to pass that through prevu to dpkg-buildpackage so I can take advantage of all my iceccd installs?
[19:51] well that should be a DEB_MAKE_OPT in debian/rules.
[19:52] and it's not a prevu-specific problem -- it's a debian packaging problem
[19:52] err DEB_BUILD_OPTIONS rather.
[19:52] ok, but generally, I can go "dpkg-buildpackage -j8 -b" and it does what I expect.
[19:52] #ubuntu-dev?
[19:53] so how do I get prevu to call dpkg-buildpackage that way?
[19:53] well, you really shouldn't.
[19:53] prevu does not support such options, just like the Ubuntu build servers
[19:53] to do parallel builds you should really edit your rules file to do so correctly
[19:54] Ok, is there a way to get prevu to just grab and unpack a specific source release, so at least I don't have to manually hunt that down?
[19:54] then I can edit the debian/rules file myself.
[19:54] also... if you're not supposed to do that, then why does dpkg-buildpackage have a -j option?
[19:54] it's confusing, to say the least.
[19:55] well dpkg-buildpackage has a lot of liberties for in-place builds that pbuilder/sbuild do not.
[19:55] ok.
[19:55] http://www.debian.org/doc/debian-policy/ch-source.html
[19:55] see 4.9.1 about parallelism
[19:55] jdong: Why shouldn't prevu/pbuilder do parallelism?
[19:56] maxb: it shouldn't handle it specially as a -j flag IMO. AFAIK setting DEB_BUILD_OPTIONS in your environment passes it in correctly
[19:56] That seems not particularly well thought out. I mean, I see it there, but the package build system has no idea what level of parallelism is appropriate to my build environment, so why does hardcoding the number of jobs there make sense?
[19:56] which is AFAIK how the buildds do it.
[19:56] oh, you can set the environment var?
[19:57] that makes more sense.
[19:57] and I'm fine with that. :)
[19:57] unit3: no no, having your build system support DEB_BUILD_OPTIONS as per 4.9.1, then setting the environment :)
[19:57] unit3: note that dpkg-buildpackage -jX is only a shorthand for setting the MAKEFLAGS and DEB_BUILD_OPTIONS envvars anyway.
[19:58] correct
[19:58] jdong: ok, I'm confused again. my build system is prevu. Does it support the DEB_BUILD_OPTIONS stuff?
[19:58] unit3: no no, your buildsystem is the debian/rules file.
[19:58] prevu/pbuilder is your build...er? I guess
[19:58] whatever the correct terminology is.
[19:58] erm... hrm... that terminology doesn't make sense to me. but I see what you're saying now.
[19:59] lol that could be my fault or the pre-existing terminology's fault ;-)
[19:59] yeah, it's existing terms, I just mapped them to something else in my head. ;)
[20:00] maxb: I'm surprised MAKEFLAGS is being set by dpkg-buildpackage though... are the debian/rules targets really parallel-safe?
[20:00] or am I confusing MAKEFLAGS with MAKEOPTS again?
[20:00] stupid similarly named environment variables.
[20:00] again, though, hardcoding the number of jobs into debian/rules still seems badly planned. it certainly means prevu is much less handy ootb in my environment, since I now have to manually unpack all the source packages and edit the debian/rules manually.
[20:00] or, at least, that's how I'm reading this.
[20:00] unit3: no no, your environment has DEB_BUILD_OPTIONS="parallel=3" or something
[20:00] Oh! I see.
[20:00] unit3: prevu is a simplified wrapper around pbuilder. pbuilder is a means to invoke dpkg-buildpackage in a minimal clean environment.
[20:00] and debian/rules is responsible for parsing out parallel=3 into -j3
[20:00] and then debian/rules handles it.
[20:00] if appropriate.
[20:00] right
[20:00] Gotcha.
[20:01] That makes so much more sense. :)
[20:01] that's what the snippet of makefile under 4.9.1 is for :)
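(Roughly, the pattern that section of policy describes, as a generic sketch rather than any particular package's rules file: debian/rules parses parallel=N out of DEB_BUILD_OPTIONS and turns it into -jN, so the caller only sets an environment variable. Whether prevu/pbuilder forwards that variable into the build chroot depends on their configuration.)

    # debian/rules (fragment, in the style of the Debian Policy 4.9.1 example)
    ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
        NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
        MAKEFLAGS += -j$(NUMJOBS)
    endif

    # caller side:
    $ DEB_BUILD_OPTIONS="parallel=8" dpkg-buildpackage -b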
[20:01] Ahhh.
[20:01] Ok, I'll try that. Hopefully mysql builds in parallel, because it takes a really long time on my system without the build cluster to help. ;)
[20:01] that it does :)
[20:01] kernels, too.
[20:02] Awesome. :)
[20:02] I've got about 6-8 faster cores I can use via icecc, so I'd like to. ;)
[20:04] 8 cores?
[20:05] directhex: on multiple machines, available via icecc. hence the need for parallel job processing.
[20:05] pfft don't provoke directhex
[20:05] he'll pull out his 9000000 core itaniums
[20:05] * directhex fluffles jdong
[20:05] hahahaha
[20:05] jms@orac:~> grep -c ^processor /proc/cpuinfo
[20:05] 256
[20:05] what did I just tell you?
[20:05] oh man. I wish. no, I'm using consumer-grade amd64 stuff.
[20:05] and yet gnome still runs like a dog ;)
[20:06] hahahaha
[20:06] switch out metacity for xfwm. your system will thank me. ;)
[20:06] and it's crap for x264 dvd ripping
[20:06] why? too many optimizations for x64 in x264's codebase?
[20:06] more or less
[20:07] 'cause otherwise I'd expect it to just scale through the roof, so that's too bad.
[20:08] it doesn't scale to more than 4 or so threads well, does it?
[20:08] I always felt x264 did much better with one core twice as fast than two cores the same speed.
[20:08] I thought they showed it scaling up to 8 at least on the new core i7 stuff?
[20:10] it barely scales well to 4 on a core 2 quad I tested.
[20:10] unit3, it scales well on i7 :)
[20:11] it was very much margin of diminishing returns for me...
[20:11] unit3, well... except it doesn't fill the hyperthreads, but that's not news
[20:11] HT in "sucks" shocker
[20:11] yeah.
[20:11] But I thought part of the benchmarks I saw indicated that that just showed that i7 scales better than core 2 does in general, and that x264 will scale well if the arch lets it.
[20:12] nothing scales like itanic
[20:14] unit3: does it really scale better than having one or two cores under the +300MHz IDA boost?
[20:14] directhex: yeah? I'd be curious to see how something like luxrender scales on it, then...
[20:14] jdong: dunno, tracking down the benchmarks I saw before to try and determine that, since I don't remember well enough.
[20:14] most people don't want scalability, they want throughput.
[20:14] where clocks can often win
[20:15] * jdong agrees
[20:15] true.
[20:15] but only at the bottom end.
[20:15] of course, the bottom end is huuuuge now. ;)
[20:15] I still feel if you're talking your average person doing X264 encodes, 2 faster cores is better than 4 not-as-fast ones.
[20:15] computing: now with more junk in the trunk!
[20:15] scalability is a different topic.
[20:15] doesn't x264 still slice the image in some arrangement to thread?
[20:16] jdong: no, I seem to recall it was more comprehensive than that, and tried to look at thread count and stuff.
[20:16] i.e. somewhat reduced PSNR?
[20:17] jdong, don't forget i7 is faster per clock than anything else
[20:18] true too.
[20:18] absolutely is.
[20:18] trust me :)
[20:18] I think I was thinking of these two: http://arstechnica.com/hardware/reviews/2008/11/nehalem-launch-review.ars/9 http://arstechnica.com/hardware/reviews/2008/11/nehalem-launch-review.ars/6
[20:18] directhex@desire:~$ grep name /proc/cpuinfo | tail -1
[20:18] model name : Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
[20:18] Which shows i7 scaling well with more threads in another test.
[20:18] but also a little more oomph using 8 threads with HT than 4 threads and no HT.
[20:18] So inconclusive I guess.
[20:19] You'd want to get a system with 2x4core nehalem xeons or something and bench with threads from 4-16
[20:20] Meh. Once a mobo+cpu combo is < $300 I'll upgrade my desktop to i7, but I imagine that won't be for a while.
[20:22] unit3: you're thinking of the core i5.
[20:23] jdong: huh? I thought i7 was just branding for the desktop version of nehalem?
[20:23] unit3: the core i5 is an upcoming consumerization of the nehalem
[20:23] should bring prices way down into the current core 2 ranges
[20:24] unfortunately you're not gonna get your nerdcore 24xSLI whatnot gaming leet card slots
[20:24] but you can build yourself a nehalem architecture machine without selling a kidney
[20:24] only dual-channel ram though :'(
[20:24] laaaaame!
[20:24] that's fine.
[20:24] *cough* :)
[20:24] I never even max out my RAM on my home desktop.
[20:24] Mem: 5974 5938 35 0 25 5081
[20:24] and DDR3's still expensive.
[20:25] erm RAM bandwidth that is, I totally max out RAM usage. ;)
[20:25] yeah, looks like i5 is what I'll be after for home, and then the Xeon i7 stuff for work once it's a little cheaper.
[20:25] unit3: that's how I figure.
[20:26] I would like to be able to justify to myself building a xeon mac pro-like workstation
[20:26] but umm... never worked out.
[20:26] yeah, me too, but not in this recession. contract work to buy toys is pretty hard to come by right now. :P
[20:26] you know I have the money, just don't feel THAT is what I should dump it on.
[20:26] however, IMO in this economy rainy-day bank money is a bad thing to keep.
[20:27] just watching it lose its value is depressing.
[20:28] heheh
[20:28] the question is what will lose value faster: your desktop or your bank money
[20:28] I guess it depends on where you're keeping it.
[20:28] yeah, computers generally depreciate the fastest.
[20:33] Successfully uploaded packages.
[20:33] Not running dinstall.
[20:33] can anyone help me?
[20:33] e-jat: That sounds fine, are you uploading to revu?
[20:34] my ppa
=== FlannelKing is now known as Flannel
[20:49] jpds: uploaded to revu
[20:50] e-jat: What exactly was your problem?
[20:51] not running dinstall .. sorry.. it's my 1st package ..
[20:51] That's not a problem, you can ignore that message. :)
[20:52] jpds: owh ok .. thanks ..
[20:52] hoping/waiting that someone will comment about it .. so i can learn from my mistake ..
[21:01] e-jat: i've received that message for every upload i've made, and i still haven't figured out what it means ;)
[21:02] owh ..
[21:02] jpds: do you know what it means? =\
[21:02] i mean what's dinstall for?
[21:03] When uploading to Debian it used to be possible to run dinstall, the tool that processed uploads, by hand to check that the package was acceptable to dinstall.
[21:03] So dput has the ability to do that automatically when uploading via SSH.
[21:03] i see
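(For what it's worth, the "Not running dinstall." line comes from dput itself: its configuration has a run_dinstall option that only makes sense for Debian-style queues, so for a PPA upload it just notes that it is skipping that step. Below is a rough sketch of a PPA stanza in ~/.dput.cf from that era; the stanza name, Launchpad id and changes filename are placeholders.)

    [my-ppa]
    fqdn = ppa.launchpad.net
    method = ftp
    incoming = ~your-launchpad-id/ubuntu/
    login = anonymous
    allow_unsigned_uploads = 0

    $ dput my-ppa foo_1.0-0ubuntu1~ppa1_source.changes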
[21:09] heh, my backport of mysql failed yesterday, and now I see in the build that the test which failed is marked as "[ disabled ] randomly fails on Ubuntu amd64 buildds".
[21:09] Good to know. :)
[21:31] lo all. Screen resolution on my EEE900 dropped to 800 by 600 after today's upgrade, and no option of 1024 by 600. Any idea how I can get it back?
=== ian_brasil is now known as ian_brasil_ack
=== ian_brasil_ack is now known as ian_brasil
[22:06] no one has any idea about the resolution?
[22:06] ask in #ubuntu, or #ubuntu+1 if you're using jaunty
[22:07] ajmitch: it's jaunty, it was fine until today's upgrade
[23:02] Heh... went to all this work backporting the jaunty mysql-server-5.1 packages to hardy, and they don't include the mysql-cluster / ndb stuff.
[23:02] Can someone clue me in on what's going on with 5.1 in Ubuntu, since packages says MOTU is maintaining that one?
[23:03] gfax: Depends: libgnomeprint2.2-0 (< 2.18.6) but 2.18.6-1 is to be installed.
[23:03] is anyone working on this?
[23:04] unit3: #ubuntu-server is a better channel to ask about mysql stuff.
[23:04] oh ok. will do.
[23:04] Just looked here first since it had MOTU stamped on it. ;)
[23:05] Reasonable enough.
[23:08] savvas: does it just need a rebuild?
[23:08] Laney: I'm checking it right now :)
[23:09] Laney: should I file a bug? I'm one button away :P
[23:09] find out!
[23:09] ok :)
[23:14] libgnomeprint2.2-0 (>= ${misc:libgnomeprint-upversion}), libgnomeprint2.2-0 (<< ${misc:libgnomeprint-next-upversion})
[23:14] Laney: yes, I think it does, should I make a quick ppa test build?
[23:14] why ppa?
[23:14] don't you have pbuilder?
[23:15] I do but.. the logs include more info
[23:15] ok, pbuilder it is :)
=== Nicke_ is now known as Nicke
[23:29] Laney: works on amd64! :) rebuilt and runs fine: http://paste.ubuntu.com/130391/ http://paste.ubuntu.com/130393/
[23:30] cool
[23:30] I'll upload a rebuild
[23:30] thanks
[23:40] done, thanks for investigating
[23:47] hi all, i followed dholbach's youtube packaging tutorials and got to the debuild stage but it failed... since this is my first time building a package, would someone mind helping me troubleshoot the errors?
[23:53] bcurtiswx: Could you pastebin the output?
[23:56] friday the 13th again hehe :p
[23:56] savvas: I still have a few more hours ;)
[23:56] I donated my 13th pint of blood yesterday....
[23:57] hi all, sorry i got disconnected, did anyone reply to my debuild debugging request?
[23:57] nhandler: the funny thing is that it was friday the 13th last month too :)
[23:58] savvas: I noticed that. We also have Pi Day coming up
[23:58] nhandler: instead of in the channel, may i PM you?
[23:58] bcurtiswx: That is fine. But please pastebin the output instead of sending it all in /msg's
[23:59] Though other people may also try to help if you keep things in the channel.
[23:59] That is true
[23:59] It also allows other users to learn from the problem
[23:59] ok, i will keep it in chan.. sorry if this is a dumb ?, but what is pastebin?
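(As a footnote to the savvas/Laney exchange above, and to fabrice_sp's earlier gcompris question: a no-change rebuild is usually just a new changelog entry plus a test build in a clean chroot. A rough sketch of the steps, with a placeholder package name, changelog message and bug number.)

    $ apt-get source somepackage
    $ cd somepackage-*/
    $ dch -i 'No-change rebuild against libgnomeprint2.2-0 2.18.6 (LP: #NNNNNN).'
    $ debuild -S -us -uc
    $ sudo pbuilder build ../somepackage_*.dsc    # test build in a clean chroot, as savvas did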