[03:33] <WorkingOnWise> are the hardy repos down?
[04:03] <WorkingOnWise> How is power management, specifically hibernate and suspend recovery, coming for newer (ACPI) laptops?
[04:23] <mohkohn> what kernel is going into Hardy Heron
[04:24] <mohkohn> ?
[06:35] <shadeofgrey> okay, i'll probably bring the wrath of the ubuntu gods down on myself, but where does one go to get the latest release of hardy's LiveCD?
[06:39] <ompaul> shadeofgrey, you may regret this URL or you may fall in love with it: http://cdimage.ubuntu.com/gobuntu/daily/current/
[06:39] <ompaul> shadeofgrey, you can do ubuntu in place of gobuntu
[06:40] <ompaul> shadeofgrey, actually hack that url to pieces, specifically this one: http://cdimage.ubuntu.com/daily/
[06:41] <shadeofgrey> ompaul:  point me to the correct repositories list and i'll do an apt-get dist-upgrade
[06:42] <shadeofgrey> i'm willing to risk it...  if it fails - i'll just pop in the 7.10 cd and reinstall
[06:42] <crdlb> if you cannot figure that out on your own, you shouldn't be using hardy
[06:42] <shadeofgrey> well
[06:44] <shadeofgrey> a google of hardy heron repositories came up with absolutely nothing so i figured i'd come here.  i can't get dual monitor support to work right on my macbookpro via my 24" dell widescreen -- 7.10 won't let me make it the default display or crank it up to 1920x1200 -- i was hoping hardy would fix that
[06:44] <ompaul> shadeofgrey, I would be with crdlb on that - get the live CD and play with that
[06:44] <shadeofgrey> there isnt a hardy liveCD
[06:44] <ompaul> and with that in your drive you can do a sudo apt-get update ; sudo apt-get upgrade
[06:45] <shadeofgrey> okay so you're saying download the alternate, burn it to disk, then boot into 7.10 and do a dist-upgrade via CD?
[06:45] <ompaul> shadeofgrey, then do this, download the disk, break your existing drive in two or get a second / third one as applicable and you have a working machine and a dev machine
[06:46] <shadeofgrey> right.  thats done
[06:46] <ompaul> shadeofgrey, given your earlier comment I would not suggest a dist upgrade if you value your data
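For reference, the in-place route being warned about here amounts to the following sketch. It assumes the standard archive mirror in sources.list; back up your data first, since an alpha dist-upgrade can leave the system unbootable.

```shell
# Point apt at hardy instead of gutsy, then dist-upgrade in place.
# Risky on an alpha release -- keep a backup of the old sources.list.
sudo cp /etc/apt/sources.list /etc/apt/sources.list.gutsy
sudo sed -i 's/gutsy/hardy/g' /etc/apt/sources.list
sudo apt-get update
sudo apt-get dist-upgrade
```

The graphical route of the same era was `update-manager -d`, which offers the development release upgrade.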
[06:46] <shadeofgrey> i have two partitions on my macbookpro
[06:46]  * ompaul runs away 
[06:46] <ompaul> I know nothing of mac*
[06:46] <shadeofgrey> that's the whole point, i don't give a damn about my ubuntu data
[06:46] <shadeofgrey> i just want to try heron as it is and see if it fixes the dual monitor issue yet
[06:47] <shadeofgrey> or better yet
[06:47] <crdlb> I'm quite sure it breaks more things than it fixes at this point
[06:47] <crdlb> it's barely been a month
[06:47] <shadeofgrey> just tell me if dual monitor support on ati graphics chips is something that will be addressed in the next release and i'll just defer and wait
[06:47] <crdlb> with fglrx?
[06:47] <shadeofgrey> yeah
[06:47] <crdlb> that's entirely up to ATI
[06:47] <crdlb> nothing ubuntu can do about it
[06:48] <ompaul> nothing to do with ubuntu
[06:48] <ompaul> they write the code - or not as the case may be
[06:49] <shadeofgrey> jesus.  i wish with all my being that i'd had the patience to wait for the macbookpro with nvidia inside.  i wouldn't have to deal with ati bullshit
[06:49] <Amaranth> you'll get dual monitor support in hardy
[06:49] <Amaranth> but you'll have to choose between dual monitor support and 3d acceleration
[06:49] <shadeofgrey> well i get dual monitor support now..  it just doesn't work
[06:50] <Amaranth> right but now we have the 'radeonhd' driver for r500 and r600 cards
[06:50] <shadeofgrey> Amaranth: i wouldn't have to make that choice if i had nvidia rather than ati when hardy rolls around?
[06:50] <Amaranth> if you had nvidia it'd probably work now
[06:51] <Amaranth> and who knows, ati might fix their crap
[06:51] <shadeofgrey> i'm running the 1600 ati chipset in the original first gen macbookpros
[06:51] <shadeofgrey> it's far more likely my crippled ass will take up jogging
[06:52] <shadeofgrey> it's so frustrating because on the notebook's screen it's beautiful
[06:52] <shadeofgrey> on the 24" digital it's a whole other ball game
[06:52] <shadeofgrey> and i have a question regarding ubuntu from a business standpoint
[06:53] <shadeofgrey> if i start a business where i charge people to migrate their windows partitions ruined by viruses over to Ubuntu, and charge them for the time and labor it takes, that's legal right?  i'm just not allowed to charge for ubuntu itself, correct?
[06:54] <Amaranth> Of course, that's how Linux businesses make their money
[06:54] <Amaranth> Services and support
[06:55] <shadeofgrey> okay last question
[06:55] <Amaranth> And you can charge if you burn your own discs and distribute them, but only for the disc (materials + time)
[06:55] <shadeofgrey> when i hold down option when my mac boots and i choose to boot ubuntu i see no ubuntu startup output at all..  just a blank screen till X starts and i'm at the login screen.  how do i change that so i can watch it bring up the os?
[06:56] <shadeofgrey> or did you guys disable all that?
[06:56] <Amaranth> No idea, I don't know how boot on the mac works
[06:57] <Amaranth> It's most likely showing blank because of your video card
[06:57] <Amaranth> Otherwise it should show an Ubuntu logo and a progress bar
[06:57] <shadeofgrey> i was afraid of that
[06:57] <shadeofgrey> looks like i'm headed back to PC
[06:58] <shadeofgrey> ubuntu is just too important to me -- regardless of how awesome apple has been
[06:58] <shadeofgrey> and by the way
[06:58] <shadeofgrey> just for the record
[06:59] <shadeofgrey> thank you from the bottom of my heart for creating ubuntu and continuing its development..  i may be a pain in the ass to you guys but you have no idea how much i appreciate all your hard work.  you're enabling one member of the disabled community to work and make a real living rather than live on welfare for the rest of my life
[07:00] <shadeofgrey> there's no way to express gratitude of that degree in any language whatsoever
[07:00] <Amaranth> No need, that's why we're here :)
[07:01] <shadeofgrey> i'm tired of being the U.S. government's bitch -- and every line of code you guys write makes me capable of overcoming that trap of cyclical financial destitution
[07:03] <shadeofgrey> Amaranth; if i wanted to write a letter of thanks to the head of the company that owns ubuntu where would i look for that email address?
[07:03] <shadeofgrey> because i know ubuntu is owned by conn...  something
[07:03] <Amaranth> Canonical. The owner is Mark Shuttleworth
[07:04] <shadeofgrey> his email address is somewhere i assume?
[07:04] <shadeofgrey> or is it hidden so he doesn't get spam?
[07:05] <Amaranth> hmm
[07:05] <Amaranth> I've never actually emailed him, let me see if I can find it
[07:05] <shadeofgrey> its okay
[07:05] <shadeofgrey> ill find it
[07:05] <shadeofgrey> if i fail ill come back
[07:05] <Amaranth> Alright, that works too
[07:06] <shadeofgrey> would you mind giving me your email address in the event i fail?
[07:06] <buttercups> http://www.markshuttleworth.com/, contact details on the website
[07:07] <Amaranth> ah, there you go
[07:07] <Amaranth> i was just getting ready to say that too :P
[07:07] <buttercups> hehe, sorry
[07:07] <shadeofgrey> thanks
[07:07] <shadeofgrey> thanks so much
[07:08] <Amaranth> That's actually how to contact Claire but it'll get to him through her.
[07:08] <shadeofgrey> and by the way -- if you guys ever want to check on the progress of the business i'm starting around linux migrations, the url is closeallyourwindows.com
[07:09] <Amaranth> uh oh, you haven't set it up yet, probably shouldn't give the url out
[07:09] <Amaranth> that's a cool domain name though :)
[07:09] <shadeofgrey> and my political commentary site is thetruthdirective.com -- that's live.
[07:09] <shadeofgrey> Amaranth: if you know of any really good digital artists now's the time to say so because i need a logo and a wordpress theme bad
[07:09] <Amaranth> Someone else could take control of that first one
[07:10] <shadeofgrey> why?
[07:10] <shadeofgrey> i own it
[07:10] <shadeofgrey> i bought it through godaddy
[07:10] <Amaranth> You haven't finished the wordpress install, they could finish it
[07:10] <shadeofgrey> and the reg is privatized
[07:10] <shadeofgrey> oh
[07:10] <shadeofgrey> yeah
[07:10] <Amaranth> You would just have to reinstall wordpress but still
[07:11] <shadeofgrey> im going to finish the install now
[07:11] <shadeofgrey> thanks so much for your time
[07:11] <Amaranth> and I don't usually hang out in the same circles as artists, sorry
[07:11]  * shadeofgrey hugs Amaranth in humble appreciation
[07:11] <shadeofgrey> i'll leave you alone now
[10:36] <carlesoriol> does anybody know if hardy alpha is available?
[12:13] <clouder`g> why do web results return hardy as 8.04, shouldn't it be 8.06 since it's LTS?
[12:13] <IdleOne> no
[12:14] <IdleOne> release numbers use year/month so 8.04 is 2008/april
[12:15] <clouder`g> I see
[13:07] <WorkingOnWise> has anyone played with google calendar sync in Evolution 2.21?
[13:11] <WorkingOnWise> the time to install upgrades was 56 minutes when it started. Now, 45% and 35 minutes into it, there is just over an hour left... is the counter in update manager messed up?
[13:12] <Hobbsee> probably your mirror slowed down
[13:13] <WorkingOnWise> Hobbsee: all the fetching is done. This is the actual install phase I'm in.
[13:14] <Hobbsee> then it probably expected the unpacking, etc., to be faster
[13:15] <ppk|laptop> whoo! hardy!
[13:15] <WorkingOnWise> oh...so this installer is dissing my laptop! :)
[13:15] <ppk|laptop> excuse me, where can I find the release notes for Hardy Alpha 1?
[13:15] <ppk|laptop> ...and what are the alphas called?
[13:16] <WorkingOnWise> I'm watchin the versions go by and it looks like hardy is still using kde 3.5 series, not 4.0
[13:16] <ppk|laptop> booo
[13:16] <ppk|laptop> :P
[13:17] <WorkingOnWise> ppk|laptop: the alphas this time are called alpha
[13:17] <ppk|laptop> reaaaaal creative
[13:17] <ppk|laptop> time to download
[13:18] <WorkingOnWise> ppk|laptop: be careful. the alphas are notorious for breaking things when you need them the mose.... you are installing on a test box I hope?
[13:19] <ppk|laptop> VM
[13:19] <WorkingOnWise> most
[13:19] <ppk|laptop> virtualbox pwnz
[13:19] <WorkingOnWise> ah... then you won't even need a band-aid!
[13:19] <WorkingOnWise> for when the alpha slashes your install
[13:20] <ppk|laptop> I prolly won't load it on a production system until the beta
[13:20] <WorkingOnWise> I on the other hand, am not so smart. it is installing as we speak...on my only computer...my laptop.
[13:21] <ppk|laptop> niiiice
[13:21]  * ppk|laptop hands WorkingOnWise a bandage and gauze
[13:21] <WorkingOnWise> I have used redmond OS's for 15 years...I can take whatever Ubuntu can throw!
[13:21] <WorkingOnWise> hehe
[13:21] <ppk|laptop> I haven't even been alive for 15 years...
[13:22] <WorkingOnWise> ppk|laptop: oh my...
[13:22] <ppk|laptop> hmm?
[13:23] <WorkingOnWise> don't know many teens who could set up a vm, let alone one with ubuntu running...
[13:23] <WorkingOnWise> niiice
[13:23] <ppk|laptop> go wget, go!
[13:23]  * ppk|laptop cheers
[13:23] <WorkingOnWise> lol..indeed!
[13:24] <ppk|laptop> hmm...
[13:24] <WorkingOnWise> will this installer generate a log file? I am seeing some failures I want to look into after I reboot
[13:24] <ppk|laptop> I think so
[13:24] <ppk|laptop> Debian does
[13:25] <ppk|laptop> check in /var/log, maybe
[13:25] <WorkingOnWise> I don't think they are critical, but any failure needs a look
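Likely places to look for those failures after the reboot. These are the conventional Debian/Ubuntu paths of that era; exact locations vary by release, and the dist-upgrade log only exists when update-manager did the upgrade.

```shell
# Per-package install/upgrade history kept by dpkg:
grep -i 'error\|fail' /var/log/dpkg.log
# update-manager's own upgrade log, if it was used:
less /var/log/dist-upgrade/main.log
# kernel-side complaints from the same session:
dmesg | tail -n 50
```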
[13:26] <ppk|laptop> hmm...
[13:26] <picard_pwns_kirk> there we go
[13:27] <WorkingOnWise> hahaha...
[13:27] <WorkingOnWise> dig the name man!
[13:27] <picard_pwns_kirk> that's what I usually go by :P
[13:27] <picard_pwns_kirk> the ppk in ppk|laptop
[13:27] <WorkingOnWise> but of course, you know Janeway owns them both, right!
[13:28] <picard_pwns_kirk> of course
[13:28] <WorkingOnWise> lol....wow, u r a very intelligent young man!
[13:28] <WorkingOnWise> :D
[13:29] <picard_pwns_kirk> I gave up having a social life to do all this :P
[13:30] <picard_pwns_kirk> I'm hungry
[13:30] <picard_pwns_kirk> time to eat breakfast...
[13:30] <WorkingOnWise> hehe...I had a social life once...not so great!
[13:30] <WorkingOnWise> see ya sir!
[13:37] <picard_pwns_kirk> hola
[15:06] <ConstyXIV> is anyone running hardy on an eee?
[15:07] <ConstyXIV> or (hopefully) with an atheros ar5007eg wifi card?
[15:10] <h3sp4wn> ConstyXIV: I don't think that card is even supported by the trunk madwifi
[15:11] <h3sp4wn> get the source from xandros (good luck) or asus
[15:11] <h3sp4wn> I am waiting to buy one until someone has the source
[15:11] <ConstyXIV> actually, i just now saw that atheros dropped the source
[15:11] <ConstyXIV> http://madwifi.org/ticket/1679
[15:12] <ConstyXIV> completely incompatible with plain madwifi as-is though
[15:13] <h3sp4wn> So why can you not get it working ?
[15:13] <ConstyXIV> i havent tried it yet
[15:13] <ConstyXIV> i just saw it when i made the comment
[15:13] <h3sp4wn> Shouldn't be too hard
[15:15] <h3sp4wn> just get ubuntu on it and then make a version of madwifi-source with that patch in it
[15:16] <h3sp4wn> dunno why you would want hardy on it though (I would consider it a complete pita)
[15:17] <h3sp4wn> as all the modifications to get it to run decently off flash would have to be redone so often
[15:18] <h3sp4wn> ConstyXIV: Dunno what the probability of ubuntu actually keeping another version of madwifi around just for that is
[15:19] <h3sp4wn> (as they have switched to ath5k as the main development place)
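The "get ubuntu on it and build madwifi with that patch" step above would look roughly like this. This is a hedged sketch of the usual out-of-tree madwifi build: the snapshot tarball and patch file names are illustrative placeholders (the real patch is the one from the madwifi ticket linked earlier), not real URLs.

```shell
# Build a patched madwifi against the running kernel (names illustrative).
sudo apt-get install build-essential linux-headers-$(uname -r)
tar xzf madwifi-snapshot.tar.gz && cd madwifi-snapshot
patch -p1 < ../ar5007eg.patch        # the AR5007EG patch from the ticket
make && sudo make install
sudo depmod -a && sudo modprobe ath_pci
```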
[15:19] <ConstyXIV> didnt know if someone had done anything to hardy to fix it while i was asleep
[15:20] <h3sp4wn> There is not even a 2.6.24 kernel in the repos
[15:21] <h3sp4wn> (that presumes hardy will use 2.6.24) but either way only 22 is there
[15:21] <WorkingOnWise> wow. smoothest upgrade I have ever done! From Gutsy to Hardy. Disappointingly so! If I didn't know it was Hardy, I would have no clue it wasn't still Gutsy!
[15:23] <h3sp4wn> WorkingOnWise: The only thing I have had to do was hack matlab to stop its java using xinerama
[15:23] <h3sp4wn> and still not got bootlogd working (didn't try too hard) but then it's a lot easier to see what is broken
[15:23] <ConstyXIV> i typically don't jump into testing until alpha3, but i would make an exception for working drivers
[15:24] <h3sp4wn> No chance
[15:24] <WorkingOnWise> I keep thinking "If redmond had ever been this smooth in production, I'd never have left!"
[15:24] <h3sp4wn> It would have to be done as a specific
[15:24] <h3sp4wn> If you wanted it to run the mobile version then maybe that kernel does not have a version of madwifi at all
[15:25] <h3sp4wn> so the patched one could be added to it but I didn't check
[15:37] <picard_pwns_kirk> what totally awesomely awesome things do I have to look forward to in Hardy?
[15:37] <ConstyXIV> a new theme, LTS support, and that's about all I know of ATM
[15:38] <ConstyXIV> LTS==long term support==more stable
[15:39] <picard_pwns_kirk> no uber-cool new things like Gutsy had?
[15:39] <picard_pwns_kirk> yet?
[15:39] <ConstyXIV> not that i know of
[15:39] <picard_pwns_kirk> darn
[15:39] <ConstyXIV> but those uber-cool things from gutsy should be uber-stable in hardy
[15:40] <ConstyXIV> which is very uber-cool in my book
[16:49] <h3sp4wn> ConstyXIV: The theme is not there yet
[16:50] <h3sp4wn> It could be more stable if it is an 8 month release cycle but even dapper still had tons and tons of updates just after it was released
[16:53] <h3sp4wn> ConstyXIV: depends how you define "uber-stable" as well (to me its Solaris 10 or AIX) mostly stable is RHEL or etch and the BSD's - ubuntu somewhere below that
[18:03] <bardyr> happy alpha1 :D
[18:04] <bardyr> !info linux-image-generic
[18:04] <ubotu> linux-image-generic: Generic Linux kernel image. In component main, is optional. Version 2.6.22.14.21 (gutsy), package size 24 kB, installed size 52 kB
[18:21] <hydrogen> !info linux-image-generic hardy
[18:21] <ubotu> linux-image-generic: Generic Linux kernel image. In component main, is optional. Version 2.6.22.14.21 (hardy), package size 24 kB, installed size 52 kB
[18:22] <bardyr> hydrogen, thats actually a nice feature :)
[18:36] <bernier> !fglrx hardy
[18:36] <ubotu> Sorry, I don't know anything about fglrx hardy - try searching on http://ubotu.ubuntu-nl.org/factoids.cgi
[18:37] <bernier> !fglrx
[18:37] <ubotu> To install the Ati/NVidia drivers for your video card, see https://help.ubuntu.com/community/BinaryDriverHowto
[21:52] <karim> hi
[21:52] <karim> would there be a way to totally rebuild an Ubuntu mirror for a specific target and optimisations, or is that impossible?
[21:54] <h3sp4wn> karim: You could possibly setup a buildd
[21:54] <karim> h3sp4wn: what is it ?
[21:55] <h3sp4wn> or try apt-build but really probably you don't want to if you want to do that stuff use gentoo or bsd
[21:55] <karim> apt-build sucks
[21:55] <h3sp4wn> Why ?
[21:55] <karim> it could be nice but it's not complete or finished
[21:55] <h3sp4wn> Its pointless to rebuild most packages
[21:55] <karim> because it doesn't build dependencies
[21:55] <h3sp4wn> for the ones it is worth it then apt-build is ok
[21:56] <h3sp4wn> Search for setting up a debian buildd
[21:56] <karim> not really, because it will not build the static -dev libraries
[21:57] <karim> and if you build a package A that depends on B and C, it will just apt-get install B and C instead of building them
[21:57] <karim> h3sp4wn: problem is that I hate gentoo for the same reason I hate debian
[21:57] <karim> I currently am on gentoo on a G4 400mhz
[21:57] <h3sp4wn> karim: Can't think of any reason to use ubuntu if you want to rebuild the whole archive
[21:58] <karim> but I don't like needing to config everything without GUI.
[21:58] <h3sp4wn> Well you have zero chance of rebuilding the entire archive then
[21:58] <karim> h3sp4wn: that's not the way I think
[21:59] <karim> h3sp4wn: zero chance why ?
[22:00] <karim> h3sp4wn: gentoo rebuilds everything ok, but gentoo does much more than what I really need. I don't need the useflags etcetera. what interests me is just the gcc make options
[22:00] <h3sp4wn> karim: forget it - setup a buildd and do it or don't
[22:00] <h3sp4wn> the gcc make options make zip all difference for almost all the packages
[22:01] <karim> h3sp4wn: I tried once to change the way apt-build behaved, but I don't know perl well. It seemed almost impossible to do better
[22:01] <karim> h3sp4wn: I don't understand what you mean
[22:01] <karim> about make zip
[22:03] <h3sp4wn> forget it - just try to do it and ask specific questions
[22:04] <h3sp4wn> bootstrapping debian to a new architecture - stuff about that might help you as well
[22:07] <karim> ok
[22:07] <karim> thanks
[22:19] <karim> h3sp4wn: do you think there could be a way to kind of gentoise ubuntu and apt ?
[22:19] <karim> or is it really impossible at all technically
[22:19] <h3sp4wn> karim: Probably
[22:19] <h3sp4wn> Solaris has a system that builds sysv packages from redhat spec files so its not impossible
[22:19] <karim> I think apt-build was a good start.
[22:20] <karim> h3sp4wn: one problem on ubuntu/debian is that there are a lot of incompatibilities among the -dev packages
[22:21] <karim> that's better to use pbuilder
[22:21] <karim> otherwise it kills the system
[22:23] <afflux> karim: what do you mean by "incompatibilities among the -dev packages"?
[22:24] <karim> afflux: once I needed to built mythtv
[22:24] <karim> so I installed all build-deps
[22:24] <karim> and probably over time got a lot of -dev files
[22:24] <karim> and at one point dist-upgrades were failing etcetera
[22:25] <h3sp4wn> The normal way to build packages is in isolation
[22:25] <afflux> karim: then this is a thing to file bugs for.
[22:25] <karim> h3sp4wn: well is that really normal ?
[22:25] <h3sp4wn> karim: I would use pbuilder
[22:25] <h3sp4wn> probably
[22:25] <karim> h3sp4wn: I mean practically it's normal, but in theory?
[22:26] <theunixgeek> What happened to "As with the beginning of any development cycle, the Hardy one has seen the merge floodgates open once again. This merge not only brings in lots of new versions of various packages, but also a fair number of totally new applications." I don't see any new apps....
[22:26] <afflux> karim: the more packages you have, the higher the *possibility* to have dist-upgrades failing. But for those fails, bug reports are needed.
[22:26] <karim> theunixgeek: the mobile suite stuff
[22:26] <theunixgeek> ooo
[22:26] <theunixgeek> ok
[22:26] <afflux> o.o
[22:27] <h3sp4wn> karim: Dunno, I don't like to keep loads of -dev packages installed except for stuff I actually need outside pbuilder
[22:27] <afflux> he must have looked at the wrong position...
[22:27] <karim> afflux: ok, but why is building in isolation considered normal? is it more a way to be sure of the needed dependencies, or just because apt can't handle more and breaks, for the reasons I talked about?
[22:27] <karim> h3sp4wn: I agree with you
[22:28] <h3sp4wn> karim: In isolation you have more control over what gets linked against
[22:28] <afflux> karim: isolation is considered normal since the built package might otherwise include libraries (or something else) it isn't intended to.
[22:29] <karim> afflux: I don't understand you here
[22:31] <h3sp4wn> aptitude has not failed a dist-upgrade for me in recent times
[22:31] <afflux> karim: let's say the ubuntu team packages app A, which can link against lib B or C. A linked with C is broken, so the ubuntu team says: no, we only want A linked with B. So if you have C in your normal system and you rebuild A there, C will be linked, because the compiler can find C
[22:31] <afflux> karim: that would result in a broken A.
[22:31] <pwnguin> pbuilder is used to ensure that it builds everywhere
[22:31] <pwnguin> it explicitly tests the build deps
[22:32] <h3sp4wn> you don't have to use pbuilder I don't think there is a few choices you have
[22:32] <pwnguin> it also makes sure that weird autoconf stuff doesn't link in unintended libs
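A minimal pbuilder session matching that description; `hardy` and `hello` are placeholders for whatever distribution and package you actually target. Only the build-deps declared in the package's control file get installed inside the chroot, which is what prevents the accidental-linking problem just described.

```shell
# One-time setup: debootstrap a minimal base chroot.
sudo apt-get install pbuilder
sudo pbuilder create --distribution hardy
# Per package: fetch the source and build it in the clean chroot.
apt-get source hello
sudo pbuilder build hello_*.dsc
ls /var/cache/pbuilder/result/      # the resulting .debs land here by default
```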
[22:33] <karim> ok
[22:33] <karim> so how gentoo handles this kind of issues ?
[22:33] <h3sp4wn> You don't have to use autoconf either; depending on what the app is, sometimes it is easier to just write a standard Makefile
[22:33] <afflux> h3sp4wn: did anyone say you need to use autoconf?
[22:33] <pwnguin> gentoo handles it by not caring
[22:34] <pwnguin> any by USE flags
[22:34] <h3sp4wn> afflux: No idea did they ?
[22:35] <h3sp4wn> It seems people tend to when the upstream program does
[22:36] <h3sp4wn> pwnguin: paludis seems to care a lot more about that
[22:36] <karim> so let's imagine I would like an in-depth compilation, what are my best chances of doing so?
[22:36] <h3sp4wn> karim: There is no magic thing that is going to help you do what you are asking
[22:37] <pwnguin> karim: in depth?
[22:37] <karim> because the problem even with pbuilder is that if I compile A, which needs B-dev and C-dev, pbuilder will just download B-dev and C-dev as prebuilt static libraries, so it will put into my A binary already-built, non-optimised code
[22:38] <karim> h3sp4wn: no, but I am thinking of how I could try to implement that or modify existing applications
[22:38] <h3sp4wn> pbuilder is not designed for bootstrapping the whole archive even
[22:38] <pwnguin> what does apt-build do there?
[22:39] <karim> pwnguin: same as pbuilder
[22:39] <pwnguin> i guess i dont understand the problem
[22:39] <karim> pwnguin: it will just install B-dev and C-dev
[22:39] <pwnguin> which are header files
[22:40] <karim> so as far as I understood, there are different types of -dev libraries.
[22:40] <h3sp4wn> pwnguin: He wants to build every single package with i.e -march= -O99 or whatever
[22:40] <pwnguin> sure
[22:41] <pwnguin> is there an example of a -dev holding a static lib?
[22:41] <pwnguin> i thoought those were just header files for linking with dynamic libs
[22:41] <karim> some are header files for binary libraries installed on your system. And some are pre-built libraries provided as binary code that will be linked statically into the final binary. is that right?
[22:42] <h3sp4wn> there should be very little statically linked stuff
[22:42] <h3sp4wn> !info sash
[22:42] <ubotu> sash: Stand-alone shell. In component universe, is optional. Version 3.7-7.2 (gutsy), package size 313 kB, installed size 740 kB
[22:42] <karim> pwnguin: well that's what I though but it seems it's not
[22:43] <h3sp4wn> sash is statically linked handfull of others I can think of
[22:43] <pwnguin> i think you shouldn't worry about statically linked apps
[22:44] <pwnguin> if they're statically linked, it's probably for a good reason
[22:44] <karim> pwnguin: yes, but for example if a static library like jpeg is already provided as a binary, I will not be able to have it optimised
[22:44] <afflux> karim: if you want to have all the packages built with optimisation flags, set up a buildd and rebuild the whole archive.
[22:45] <pwnguin> why would you static link libjpeg?
[22:45] <karim> afflux: that's what h3sp4wn suggested, I am looking at docs while we are talking
[22:45] <karim> pwnguin: me, or in general?
[22:45] <pwnguin> karim: in general
[22:46] <karim> pwnguin: I have seen examples like that, of libs that could be optimised
[22:46] <pwnguin> the point is, if you need to optimize it, then it should probably also be a dynamically linked lib
[22:46] <karim> pwnguin: I don't understand
[22:47] <karim> pwnguin: well the point is that nothing is designed with optimisation in mind
[22:47] <pwnguin> if you find an important binary built static, perhaps its a bug?
[22:47] <pwnguin> that's not true. dynamic linking IS optimization, at a systemwide level
[22:47] <karim> pwnguin: why ??
[22:48] <karim> pwnguin: you know of what optimisation I was talking about
[22:48] <pwnguin> because shared libraries reduce RAM usage
[22:48] <karim> of course
[22:49] <karim> pwnguin: of course I am talking about cpu optimisation in case you want to rebuild the package
[22:49] <pwnguin> so you want a 686 build of everything you install basically
[22:50] <karim> pwnguin: I don't think anything is done with the idea that you might want to rebuild and optimise packages. that's why when you do a build-dep, it will just provide you static libraries without offering to rebuild them as well
[22:50] <karim> pwnguin: not 686, powerpc G4
[22:50] <pwnguin> you keep using the word static library
[22:50] <pwnguin> explain what you think that means
[22:51] <karim> pwnguin: that's not good ?
[22:51] <karim> pwnguin: well I think it's libraries already pre-built, that get kind of directly merged into the final binary.
[22:52] <pwnguin> when does the merging take place?
[22:52] <karim> pwnguin: during linking ? ^^
[22:52] <pwnguin> indeed. that is static linking
[22:52] <pwnguin> i believe most of the archive is dynamically linked
[22:53] <h3sp4wn> As it should be
[22:53] <pwnguin> meaning that you get a -dev and it brings in header files, which define the shared library api for the compiler
[22:53] <karim> pwnguin: yes, but not all. I am just trying to isolate that case
[22:53] <h3sp4wn> karim: the ones that are not are probably packages you wouldn't be using anyway
[22:54] <pwnguin> at least, not enough to care about
[22:54] <karim> h3sp4wn: what do you mean ?
[22:54] <pwnguin> karim: ever use sash?
[22:54] <h3sp4wn> karim: sash is statically linked - why might you use that ?
[22:55] <karim> what's your point here ? :-)
[22:55] <pwnguin> molehill + karim == mountain
[22:56] <karim> why would I care of sash if it's statically linked if I don't want to use it ?
[22:56] <afflux> karim: you seem to
[22:56] <afflux> karim: that's the question I keep asking myself ;)
[22:56] <karim> no, what I am talking about is: if I want to rebuild sash and have it optimised
[22:57] <pwnguin> but not use it?
[22:57] <h3sp4wn> exactly
[22:57] <karim> lol
[22:57] <karim> no
[22:57] <pwnguin> our point here is that the few executables that are statically linked are also not commonly used
[22:57] <pwnguin> rebuilding them and not using them gains you nothing
[22:57] <pwnguin> so spending the effort to get apt-build to work also gains nothing
[22:57] <h3sp4wn> a full rescue system that is statically linked could be nice
[22:57] <karim> "rebuilding them and not using them gains you nothing" are you kidding ? ;-)
[22:58] <h3sp4wn> but there is livecd and such these days
[22:59] <karim> h3sp4wn: I think I don't understand what "statically linked" means. I don't see why you are talking about a live cd
[22:59] <h3sp4wn> karim: maybe you should try reading some more
[22:59]  * pwnguin also doesnt understand why a static linked rescue system would be nice
[23:00] <h3sp4wn> pwnguin: broken libc ?
[23:00] <pwnguin> heh
[23:00] <pwnguin> at that point, give up ;)
[23:01] <pwnguin> and im not sure thats even true. i think you're still hosed
[23:01] <h3sp4wn> pwnguin: nah just get the last working one over
[23:01] <karim> what I'm talking about is: if I want to use sash, an optimised version of sash for my CPU. sash is statically linked, so that means sash uses a static -dev library in its build-deps when you want to build it with debuild. so unless I rebuild that static -dev library first so it's optimised, sash will not be fully optimised to the maximum of what my CPU can do. right?
[23:02] <h3sp4wn> broken pam, could you deal with that? (I think I can, but it took me longer than I thought it would when I deliberately broke it)
[23:02] <h3sp4wn> broken perl is really nasty though
[23:03] <pwnguin> karim: our point is that you never use sash, or any other statically linked thing, so your obsessive compulsive nature isn't gaining anything
[23:03] <h3sp4wn> sash has saved me before (I keep it installed for that reason) but I use it extremely rarely
[23:04] <karim> pwnguin: I would not bring up this case unless I had already faced it. I think I faced it when trying to build mplayer
[23:04] <pwnguin> if you do find something that is statically linked to another package that you use, that's an interesting problem, and possibly a bug
[23:04] <karim> I don't even know what is sash anyway ...
[23:04] <pwnguin> its a secure shell
[23:04] <h3sp4wn> no its a statically linked shell
[23:04] <karim> lol
[23:04] <pwnguin> ah
[23:04] <h3sp4wn> with the stuff that is in coreutils built into it, for such times as when they are broken
[23:04] <pwnguin> "stand alone shell"
[23:05] <h3sp4wn> yeah but because of what it is it would be useless if it was not statically linked
[23:05] <karim> ah ok I see why you bring that exemple
[23:06] <afflux> karim: i don't think mplayer uses statically linked libraries.
[23:06] <pwnguin> this is going to sound silly
[23:06] <afflux> anyway, I need some sleep.
[23:06] <pwnguin> does ldd show static links?
[23:06] <pwnguin> i could've sworn i'd seen a few before from ldd
[23:06] <afflux> pwnguin: ldd shows only shared libs
[23:07] <afflux> gn8 guys
[23:09] <h3sp4wn> static linking might be nice for some situations if it was easier with ubuntu/debian
[23:11] <h3sp4wn> I used it for some dns servers a few years ago - running jailed on bsd (no shell) saves you hunting for what libs to put in
[23:12] <h3sp4wn> All the stuff that generated the zone files etc ran outside and all that was in the jail was just a statically linked bind9
[23:13] <h3sp4wn> dunno if I would do that again now or not but was ok at the time
[23:21] <pwnguin> well, the control file already says what libs you need ^_^
[23:24] <h3sp4wn> pwnguin: true but which functions in those libs do you actually need ?
[23:25] <h3sp4wn> and how can you sanely strip out all except what you actually need
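One sane answer to "strip out all except what you actually need" when linking statically: compile each function into its own section and let the linker garbage-collect the unreferenced ones. A minimal sketch with a made-up source file:

```shell
# Per-function sections plus linker section garbage collection.
printf 'int unused(void){return 1;}\nint main(void){return 0;}\n' > demo.c
gcc -ffunction-sections -fdata-sections -c demo.c -o demo.o
gcc -Wl,--gc-sections demo.o -o demo
nm demo | grep ' T '     # main survives; the unreferenced unused() is dropped
```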
[23:31] <karim> h3sp4wn: the package for buildd is rebuildd ?
[23:32] <h3sp4wn> !info dfsbuild
[23:32] <ubotu> dfsbuild: Build Debian From Scratch CD/DVD images. In component universe, is optional. Version 1.0.1 (gutsy), package size 904 kB, installed size 2560 kB (Only available for i386 alpha powerpc amd64)
[23:33] <h3sp4wn> That is not what you asked for but maybe it could be useful for you
[23:33] <h3sp4wn> I think you have to get the buildd code from alioth.debian.org/
[23:34] <crimsun> buildds use sbuild.
[23:38] <h3sp4wn> I got all the stuff from - http://alioth.debian.org/projects/buildd-tools/ (including sbuild) but I won't be doing it again and I have no need for it
[23:40] <karim> sbuild is packaged
[23:40] <crimsun> Debian's buildds use different tools, generally, to Ubuntu's.
[23:41] <crimsun> there are a few components that are identical, but soyuz is fairly different.
[23:45] <h3sp4wn> crimsun: What is happening with the moving all of /etc/init.d/foo to /etc/event.d/foo ? (Is that still supposed to happen)
[23:46] <crimsun> h3sp4wn: Ask Scott. (eventually.)
[23:49] <karim> crimsun: soyuz is not packaged?
[23:50] <crimsun> karim: no, it's not DFSG-Free.
[23:50] <crimsun> (which has been the cause of much irritation)
[23:50] <karim> no kidding ...
[23:51] <karim> I am irritated right now ^^
[23:52] <crimsun> I am not a Soyuz hacker (IANASH)
[23:53] <crimsun> e.g., see #launchpad
[23:53] <crimsun> caveat: the devs there tire of the continual "but it's not Free!" buffoonery
[23:54] <karim> soyuz is a builder ? crimsun
[23:54] <crimsun> karim: it's the Canonical infrastructure that powers the Ubuntu builders.
[23:55] <infinitycircuit> does anyone know what kernel hardy heron will be using/when a version of it will be uploaded to the repositories
[23:55] <crimsun> karim: this means: it handles the acceptance of *_source.changes by vetted developers, sends the notification e-mails, kicks off the source builds, and hands off the generated binaries to dinstall
[23:55] <crimsun> infinitycircuit: 2.6.24-based.
[23:57] <karim> launchpad situation is kind of ridiculous
[23:57] <crimsun> well, I concur that it's crappy
[23:57] <h3sp4wn> infinitycircuit: meaning should get nohz / high res timers (for amd64) and cfs - containers was the other thing but I dunno whether that made it in or not
[23:58] <karim> the guy uses a full distribution's work, with a huge fork, and cannot deliver open source code lol
[23:58] <crimsun> containers are in.
[23:58] <crimsun> well, at least the Linus-vetted portions.
[23:58] <infinitycircuit> h3sp4wn, oh yeah, i've probably compiled 10 different 2.6.24 kernels myself over the past weeks.  i was just wondering when i could start using the ubuntu source rather than the generic source
[23:58] <infinitycircuit> h3sp4wn, the hpet forcing patches are also mainlined
[23:58] <crimsun> infinitycircuit: kernel-team@ certainly could use more testers from ubuntu-hardy.git.
[23:59] <infinitycircuit> crimsun, okay i will check it out from the tree then and just do it through git