[00:12] <jetole> Hey guys. Don't know if this is off-topic but alis couldn't help me find a more on topic room so I was hoping someone could help me with sudo-ldap. I have some rules that I tested on my server in the local sudoers file and one rule was giving members of the admin access to everything except a cmnd_alias for su and shells and I'm not sure how I should do that via sudo-ldap
[00:14] <twb> jetole: alis questions go to #freenode
[00:14] <twb> Oh, misread
[00:15] <ruben23> guys i have a folder/directory with many files in it - dir1 and dir2 are the same in format but dir2 has a few added/updated files in it. how do i copy dir2 to dir1 by just overwriting existing files but also copying the files which dir1 doesn't have? any idea?
[00:15] <twb> I don't know what you mean by cmnd_alias
[00:16] <twb> ``Cmnd_Aliases are not really required either since it is possible to have multiple users listed in a sudoRole.  Instead of defining a Cmnd_Alias that is referenced by multiple users, one can create a sudoRole that contains the commands and assign multiple users to it.''
[00:16] <twb> That's what sudoers_ldap says
[00:16] <twb> Er, sudoers.ldap(5)
[00:17] <twb> jetole: Here are my sudo objects: http://paste.debian.net/158251/
[00:18] <twb> ruben23: rsync -aui ?
[00:18] <twb> ruben23: perhaps with --dry-run
[00:21] <ruben23> twb:  rsync -aui /home/dir2 /var/dir1...?
[00:22] <twb> ruben23: I expect you to use some initiative and investigate the meaning of those rsync options.
[00:53] <jetole> twb: Thanks.
[00:53]  * jetole looks
[01:21] <Firebolt> I'm trying to install ubuntu server 10.04 lts on a laptop with a broken screen. However, past the menu which prompts me to choose a language/what action to perform, once I select "Install ubuntu server", it stops giving output via VGA
[01:22] <Firebolt> I know that you can specify the vga kernel option, but I've forgotten how
[01:22] <twb> Firebolt: do a network install instead
[01:23] <twb> Firebolt: does the laptop have wired ethernet?
[01:23] <Firebolt> twb, yes
[01:23] <twb> Yeah just set it to boot from network, and load up the netboot installer.  Write a preseed script to get it to the point where you can SSH into the installer and finish the install
[01:24] <Firebolt> no idea how to do that
[01:24] <twb> It's documented in the installation-guide-i386 (or -amd64) package
[01:24] <twb> Alternatively you could try fiddling with vga=false nomodeset and stuff at the start of the installer, where you hit F6 to add extra boot options
[01:25] <Firebolt> ah
[01:25] <Firebolt> what would I specify vga= as then?
[01:25] <twb> I dunno
[01:26] <twb> I don't know how your screen is buggered either
[01:27] <Firebolt> The backlight doesn't work
[01:29] <Firebolt> clumsy friend
[01:30] <twb> If you can get video working enough, you can start SSH from the normal installer
[01:31] <twb> You pick "expert install" (priority=low) and when prompted for udebs (modules) to install, you make sure to tick "network-console".
[01:31] <twb> Passing theme=dark is also good for getting rid of that fugly magenta
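For reference, the options twb mentions all go on the installer's kernel command line (hit Tab or F6 at the boot menu); combined, the line he's describing would look roughly like:

```text
# Appended to the installer's boot line; all optional, taken from twb's suggestions above.
priority=low theme=dark vga=false nomodeset
```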
[01:35] <Firebolt> the minute the installer starts, I lose the vga
[01:35] <Firebolt> I tried using vga=XXX, but it doesn't display correctly
[01:36] <twb> Oh, wait, this is lucid?
[01:36] <twb> Lucid installer has a bug where you *can't* stop it loading the framebuffer, no matter what, until the install is finished and you boot off the HDD
[01:36] <Firebolt> awww
[01:36] <twb> it drove me apeshit trying to do it until I RTFS and found it was not possible
[01:37] <Firebolt> So I should use a newer version instead?
[01:37] <twb> What's really stupid is it's hard-coded to load vga16fb which only provides 80x30 instead of 80x25
[01:37] <twb> For 5 damn lines they broke it for me (and you, I guess)
[01:37] <twb> Firebolt: well AFAIK it's fixed in 10.10 and up, but I don't know if you want LTS or not
[01:38] <Firebolt> I'd prefer lts, but anything will do at this point
[01:39] <Firebolt> I guess i'll download 11.04 server
[01:39] <Firebolt> er, 11.10
[01:39] <Firebolt> 12.04 will be LTS yes?
[01:39] <twb> Yes
[01:39] <twb> Hang on, I'll find you the small ISO URL
[01:40] <twb> http://archive.ubuntu.com/ubuntu/dists/precise/main/installer-amd64/current/images/netboot/mini.iso
[01:40] <Firebolt> no need
[01:40] <twb> Well, OK
[01:40] <Firebolt> already downloading the full
[01:41] <twb> I just hate people downloading 700MB when 20MB will do
[01:41] <Firebolt> I often work with computers with no internet connection at install time
[01:41] <twb> Fair enough
[01:41] <twb> Usually I install them *then* ship them out
[01:42] <Firebolt> shipping, eh?
[01:42] <Firebolt> I just help out friends who want to try linux
[01:42] <twb> You poor poor bastard
[01:42] <Firebolt> but normally i end up installing at school
[01:42] <Firebolt> where we're locked from using the school wifi/ethernet
[01:44] <Firebolt> figures, though, that the one installer I try is borked
[01:44] <twb> Normally it would merely be annoying, not a show-stopper
[01:45] <twb> If the screen goes completely blank that's probably because the screen is lying about its resolution over EDID or something
[01:46] <Firebolt> there's a bit of random colours on the screen
[01:47] <twb> Like snow?
[01:47] <twb> I mean: like an out-of-tune telly?
[01:47] <twb> Maybe you're too young to remember FM TV tuners...
[01:49] <Firebolt> oh no
[01:49] <Firebolt> I do
[01:50] <Firebolt> I may be only 15, but I've seen my share of devices
[01:52] <twb> I remember building one from a kit
[01:52] <twb> back before the electronics hobby market died
[01:54] <Furry> (Firebolt here, connecting from a spare laptop)
[01:55] <Furry> I have too many of these
[02:00] <Ptoenk> Evening .. I just did a fresh oneiric install , and am having the " dhclient: can't create /var/lib/dhcp3/dhclient.eth0.leases: No such file or directory" issue..
[02:00] <Ptoenk> what is the good way to fix it?
[02:00] <Ptoenk> mess with ifup
[02:00] <Ptoenk> create a simlink ?
[02:00] <Ptoenk> create a dir ?
[02:01] <Ptoenk> y
[02:02] <twb> Ptoenk: sounds like your system is damaged.
[02:02] <Ptoenk> lol
[02:02] <Ptoenk> no it's not
[02:03] <Ptoenk> it's a well documented bug
[02:03] <twb> Then fix it yourself, I guess.
[02:03] <lifeless> whats the bug number ?
[02:03] <lifeless> twb: now now
[02:03] <Ptoenk> let me find it again, sec
[02:05] <twb> Ptoenk: try ifdown --force eth0; ifup eth0
[02:05] <Ptoenk> Bug #900234
[02:05] <uvirtbot`> Launchpad bug 900234 in isc-dhcp "dhclient: can't create /var/lib/dhcp3/dhclient.eth0.leases in syslog again on Precise" [Undecided,Confirmed] https://launchpad.net/bugs/900234
[02:07] <twb> Ptoenk: sudo ln -s dhcp /var/lib/dhcp3 as a workaround, according to that ticket
[02:07] <Ptoenk> yes
[02:07] <Ptoenk> i can also mess with ifup
[02:07] <Ptoenk> i can do lots of things
[02:07]  * twb grumbles, why is ifupdown 0.7 still using noweb
[02:07] <Ptoenk> the question i have, if any, is: is there a set resolution, albeit temporary
[02:08] <Ptoenk> that will not give issues once a real fix is introduced
[02:08] <twb> Ptoenk: all I know is what's on that bug ticket.
[02:08] <Ptoenk> others might know
[02:08] <Ptoenk> creating a link is a plaster on a wooden leg
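The workaround's link direction is easy to get backwards; a sketch that simulates it in a throwaway directory, so no root is needed (on the real system the single command is `sudo ln -s dhcp /var/lib/dhcp3`, as in the ticket):

```shell
# Simulate /var/lib under a temp root so no root privileges are needed.
root=/tmp/dhcp_demo
rm -rf "$root"
mkdir -p "$root/var/lib/dhcp"

# The relative target "dhcp" resolves next to the link itself,
# so dhcp3 becomes an alias for the existing dhcp directory.
ln -s dhcp "$root/var/lib/dhcp3"

# dhclient writes through the old path and the file lands in the new one.
touch "$root/var/lib/dhcp3/dhclient.eth0.leases"
ls -l "$root/var/lib/"
```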
[04:35] <delinquentme> hey all OK I've got a ubuntu server up on EC2 .. with a web server running on it .. the web servers config is set to serve out at port 3000 ... however:    http://ec2-23-20-139-29.compute-1.amazonaws.com:3000/     is giving me nothing
[04:38] <twb> delinquentme: on the server, can you connect to 127.0.0.1 3000 ?
[04:39] <delinquentme> twb, how do I check that?? ping?
[04:41] <twb> nc 127.0.0.1 3000
[04:41] <twb> If it doesn't hang up, speak some HTTP to it
[04:41] <uvirtbot`> New bug: #944546 in libcommons-cli-java (main) "StringIndexOutOfBoundsException in HelpFormatter.findWrapPos" [Undecided,New] https://launchpad.net/bugs/944546
[04:41] <twb> If you can't speak HTTP, you should not be setting up a web server.
[04:44] <delinquentme> Hmmm nc .. what kind of tool is this?
[04:45] <delinquentme> nc 127.0.0.1 3000  <<< did nothing with this twb
[04:45] <twb> Then clearly your httpd is not running, or not bound where you thought it was
[04:45] <twb> cf. netstat -nlp
[04:47] <delinquentme> https://gist.github.com/1955759  << output
[04:47] <delinquentme> now this is also not an apache server
[04:48] <SpamapS> delinquentme: perhaps you haven't configured EC2 to allow incoming traffic to port 3000? By default all incoming ports are closed on EC2.
[04:48] <twb> SpamapS: should still allow it on lo, surely
[04:48] <SpamapS> yeah, but nc would "do nothing" to the untrained eye
[04:49] <twb> Oh I see what you mean.  Sigh.
[04:49] <SpamapS> Also his netstat (btw people, use ss, not netstat) shows it listening.
[04:49] <twb> ss does the wrong thing in a specific case, I forget which
[04:49] <delinquentme> SpamapS, AH!
[04:49] <SpamapS> twb: good, you can actually *fix* it
[04:50] <SpamapS> twb: whereas netstat is basically dead
[04:50] <twb> IIRC it wouldn't list UDP listening ports by default
[04:50] <delinquentme> so what ss command should i use to replace the netstat one?
[04:51] <twb> Also its stupid huge padding is really annoying
[04:51] <twb> So you always have to |cat to stop it
[04:52] <SpamapS> twb: it just fills the available columns
[04:52] <twb> SpamapS: yes but I have full-screen ttys so I end up with like 100 spaces between each column
[04:53] <twb> Oh, and by default it puts -p on a second line
[04:53] <SpamapS> twb: perhaps submit a bug report that it should stop at 120 ;)
[04:53] <twb> IMO it should be more or less like column -t, where it puts about four spaces between each column
[04:53] <delinquentme> ok so do I need to make both a rule for TCP and UDP?
[04:54] <twb> delinquentme: no, HTTP runs over TCP
[04:54] <twb> Which you should also already know.
[04:54] <delinquentme> and then the source should be the internal IP of the web server
[04:54] <SpamapS> twb: ultimately though, netstat is deprecated, so you should gripe about ss to the ss maintainers.. because.. it actually has maintainers. ;)
[04:54] <delinquentme> twb, totally =]
[04:54] <twb> http://paste.debian.net/158263/
[04:54] <twb> Also ss was installed into sbin by default until recently
[04:55] <twb> SpamapS: I complained to them directly a few years back
[04:55] <twb> SpamapS: I was using ss for a while but that gotcha where it didn't list... whatever it was, fucked me over, so I have put off migrating to it for a while
[04:55] <twb> I do use ip everywhere, though.
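A self-contained sketch of the check twb walked delinquentme through, using a throwaway python3 http.server in place of the real app (port 3000 as in the question; `ss` shown alongside since SpamapS recommends it over netstat):

```shell
# Spin up a throwaway HTTP server on 3000 so there is something listening.
python3 -m http.server 3000 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1

# ss (the netstat replacement): -t TCP, -l listening sockets, -n numeric ports.
ss -tln | grep ':3000' || true

# Interactively you would run `nc 127.0.0.1 3000` and type an HTTP request by
# hand; here a one-liner fetches / and records the status code instead.
python3 -c 'import urllib.request; print(urllib.request.urlopen("http://127.0.0.1:3000/").status)' > /tmp/nc_demo.out
cat /tmp/nc_demo.out

kill "$srv"
```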
[04:57] <DyeA> hello all, I updated my Ubuntu 10 server with webmin and it broke php. Files were downloading instead of being appropriately handled. I then went to Troubleshooting PHP https://help.ubuntu.com/community/ApacheMySQLPHP#Troubleshooting_PHP_5 and ran sudo a2enmod php5 which returned "Enabling module php5." instead of returning module not found. However upon restarting apache I got an error "Syntax error on line 204 of
[04:57] <DyeA> /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/php5.load: Cannot load /usr/lib/apache2/modules/libphp5.so into server: /usr/lib/apache2/modules/libphp5.so: cannot open shared object file: No such file or directory"
[04:58] <delinquentme> you guys have any idea if I need to restart my EC2 servers for the security group changes to take effect?
[04:58] <twb> !webmin
[04:58] <DyeA> I removed php5-common with apt-get, re-installed, restarted apache and now I get a 500 server error instead of a download, however HTML renders fine
[04:59] <twb> DyeA: this is why we don't support webmin, because it causes problems like this.
[04:59] <DyeA> arghhh! it seemed like a good idea at the time
[04:59] <twb> You were probably high
[05:00] <DyeA> definite possibility
[05:00] <twb> SpamapS: can't find my ss whinging in debbugs bts :-/
[05:00] <delinquentme> twb, so what am i looking for in "netstat -nlp" to ensure that the connections I want are functional?
[05:00] <DyeA> should I have done a purge instead of a remove of php?
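For the record, the missing libphp5.so in DyeA's error usually means the Apache PHP module package itself is gone (a2enmod only re-enables its config stub). A sketch of the usual repair on Ubuntu of that era; the package name is inferred from the error path, not confirmed in the log:

```shell
# libphp5.so ships in libapache2-mod-php5, not php5-common,
# so reinstall that package, re-enable the module, and restart Apache.
sudo apt-get install --reinstall libapache2-mod-php5
sudo a2enmod php5
sudo service apache2 restart
```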
[05:00] <delinquentme> AWWW YEAHHH
[05:01] <delinquentme> http://ec2-23-20-139-29.compute-1.amazonaws.com:3000/  <3 u guys
[05:01] <twb> http://paste.debian.net/158264/
[05:01] <twb> delinquentme: that shows a server with a listening apache on 80
[05:02] <delinquentme> how can you tell this?
[05:02] <delinquentme> also I didn't install apache on this .. are you sure that's not the routing server?
[05:02] <twb> delinquentme: sorry, you've exceeded your stupidity allowance.  Please wait patiently for someone else to help you.
[05:03] <DyeA> delinquentme: hey don't feel bad, I exceeded my stupidity allowance before even arriving here
[05:04] <delinquentme> lol
[05:04] <delinquentme> twb im learning :D
[05:04] <delinquentme> its cool though
[05:05] <DyeA> twb is drunk and watching glee right now but his knowledge still vastly exceeds ours even in his current state
[05:05] <twb> Isn't glee about gays in a high school musical / drama?
[05:05] <DyeA> twb: close enough
[05:06] <Firebolt> twb, do you have any suggestions besides webmin?
[05:06] <Firebolt> that are similar?
[05:06] <delinquentme> LOL
[05:06] <DyeA> delinquentme: whatever you do, don't install webmin
[05:06] <delinquentme> twb i dont judge you
[05:06] <twb> I'd rather rewatch the first two seasons of _Skins_
[05:06] <delinquentme> DyeA, check.
[05:06] <twb> Oh, sorry, this is #ubuntu-server not #emacs.  I'll get back on topic.
[05:06] <DyeA> apt-get check?
[05:06] <delinquentme> twb, have you ever used youtube>
[05:06] <delinquentme> :D
[05:07] <Firebolt> (I've only ever used webmin when forced)
[05:07] <twb> Firebolt: we recommend learning to use the CLI like a proper sysadmin
[05:07] <DyeA> yeah i felt vaguely dirty every time i used it
[05:08] <Firebolt> twb, but for my friends who are gifted with IQs of -4 or use Macs?
[05:08] <twb> Firebolt: they do not get to be sysadmins
[05:08] <twb> They can hire someone like me to babysit their VPS
[05:08] <Firebolt> lol
[05:09] <DyeA> they just get to randomly fire up slow loris and get jacked
[05:09] <delinquentme> twb, dont sysadmins just play wow?
[05:10] <Firebolt> no
[05:10] <Firebolt> I don't play wow
[05:11] <twb> The LAST thing a sysadmin wants to do when she goes home, is to babysit another computer
[05:12] <delinquentme> http://lemonnier.se/erwan/talks/pix/BoredSysadmin.jpg
[05:12] <delinquentme> lik dat?
[05:12] <delinquentme> i tried to find a screenie of a WoW char named sysadmind
[05:12] <Firebolt> I prefer to fool with others' servers by "sudo rm -rf /"
[05:13] <delinquentme> http://www.reddit.com/r/networking/comments/qbi4f/help_me_explain_to_my_wife_that_our_network_isnt/
[05:14] <twb> delinquentme: please take it to #overflow or whatever
[05:15] <delinquentme> twb, trying to lighten your levels
[05:17] <Firebolt> I need to get myself a better ISP
[05:18] <Firebolt> rather, I need to get my parents to get me a better ISP
[05:21] <delinquentme> soo whats up with the apache
[05:21] <delinquentme> oh wait apache tomcat
[05:22] <delinquentme> yeah idk trinidad is some interface between those
[06:23] <bnemec> hello?
[06:26] <SpamapS> bnemec: ahoy!
[06:27] <bnemec> cool someone else in here.
[06:28] <bnemec> I'm running 10.04 LTS on Dell PE2600
[06:28] <bnemec> you?
[06:28] <SpamapS> I run 11.10 in EC2 ;)
[06:29] <SpamapS> and precise on my laptops. :)
[06:29] <SpamapS> but then.. I'm a developer, so I find it helpful to run precise for testing. :)
[06:31] <kirkland> SpamapS: howdy :-)
[06:32] <SpamapS> kirkland: avast!
[06:32] <kirkland> SpamapS: nice post today, btw
[06:32] <kirkland> SpamapS: long live Eddard Stark!
[06:42] <SpamapS> kirkland: not too long.. ;)
[06:42] <kirkland> SpamapS: he dies???? :-)
[06:43] <SpamapS> kirkland: I'm on book 4. Had to swear it off for a couple weeks tho.. tore through the first 3 books so fast.
[06:43] <kirkland> SpamapS: I'm about 20% through book 5
[06:43] <kirkland> SpamapS: book 3 was *great*
[06:50] <SpamapS> kirkland: yeah I feel like book 4 is a result of him being tired of writing about Tyrion. ;)
[06:59] <kirkland> SpamapS: heh, yeah
[06:59] <kirkland> SpamapS: i missed most of my favorite characters in book 4
[07:00] <kirkland> SpamapS: do you happen to have osx running anywhere any more?
[07:00] <kirkland> SpamapS: I want to do some byobu verification/testing/development on osx
[07:00] <kirkland> SpamapS: and I'm wondering if I need to just buy a crappy mac mini or something
[07:01] <kirkland> SpamapS: it's so weird not just being able to fire up the OS I need in EC2 and pay a few pennies :-)
[07:27] <SpamapS> kirkland: you can run OS X in a VM on a Mac without buying another license. ;)
[07:28] <SpamapS> kirkland: I don't hardly ever run it except to update the OS on my iphone anymore.
[07:37] <gnome> so can anyone tell me why i can't login to my ec nodes?
[07:38] <gnome> someone must be testing this also.
[07:39] <SpamapS> kirkland: btw, speaking of byobu issues.. using it on precise right now and its flickering a lot..
[07:41] <SpamapS> gnome: ec2 ?
[07:50] <gnome> yup ec2
[07:52] <gnome> son's up, back in a few
[07:55] <SpamapS> gnome: You most likely need to define a key pair and make sure a) you're specifying it when launching the instances, and b) you're using it when ssh'ing to the instances
[07:56] <gnome> k
[07:56] <gnome> but
[07:56] <gnome> when I go to send key to instance.. I fail.
[07:56] <gnome> so you are saying make the key then boot the instance?
[07:57] <gnome> when i try to ssh to an instance i am met with a password
[07:57] <gnome> i can't even login to the nodes if I am standing in front of them.
[07:57] <gnome> i just don't get it. perfect pxe cobbler install 10 machines.
[07:57] <gnome> and how did it not send my creds during that install?
[07:58] <gnome> sorry for silly questions. :(
[07:58] <gnome> i have done manual clusters with ease.
[07:59] <SpamapS> gnome: you have to inform *amazon* of the key
[07:59] <gnome> k..
[07:59] <gnome> I 'HAVE' to inform them?
[07:59] <SpamapS> gnome: *or* you have to store your key some other way such as through cloud-init metadata
[07:59] <gnome> that amazon thing, like really i registered my personal cloud. with them?
[08:00] <SpamapS> gnome: you can use your own keys if you want. Its just not built into the EC2 api.. but it is built into Ubuntu.
[08:00] <gnome> repeating a question answered... shows my inability to understand why we have to register with them for our 'own' personal systems.
[08:00] <gnome> so how will that make me able to login to the nodes?
[08:01] <SpamapS> gnome: they have console access to your systems. Don't be naive. ;)
[08:01] <gnome> behind a proxy alsO?
[08:01]  * gnome is being paranoid.
[08:02] <SpamapS> gnome: anyway, if you want to SSH to the systems you have two options. Add a keypair using euca-add-keypair (or ec2-add-keypair if you prefer the original slower amazon tools) ...
[08:02] <SpamapS> gnome: or you can learn to use cloud-init to put your keys on the systems.
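Assuming euca2ools is already configured against the cloud, the keypair dance SpamapS describes looks roughly like this; the key name, image id, and hostname are placeholders:

```shell
# Register a new keypair with the cloud and save the private half locally.
euca-add-keypair mykey > mykey.pem
chmod 600 mykey.pem

# Launch instances with that key (-k), then SSH in with the matching private key.
euca-run-instances -k mykey emi-XXXXXXXX
ssh -i mykey.pem ubuntu@instance-hostname
```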
[08:03] <gnome> i have read the ub cloud info back to back many a times. there was nothing about cloud-init.
[08:03] <SpamapS> gnome: ub cloud info ?
[08:03] <gnome> is it just that 11.10 is lacking documentation, assuming we have run their soft before?
[08:03] <gnome> spamaps ... ub cloud info..?
[08:03] <gnome> i not sure what you mean sir.
[08:05] <SpamapS> gnome: you said "ub cloud info" .. I don't know what that means
[08:06] <gnome> oh, the posted manual on the ubuntu site
[08:06] <gnome> on covering install and setup
[08:08] <gnome> so i went with ubuntu because of ... ease of use, well >.. :(
[08:08] <gnome> it's not been so easy that's for sure
[08:11] <SpamapS> gnome: "the ubuntu site" ?
[08:11] <SpamapS> gnome: do you mean www.ubuntu.com , cloud.ubuntu.com, help.ubuntu.com, wiki.ubuntu.com, or somethingelse.ubuntu.com ?
[08:11] <SpamapS> gnome: it would help me if you could point me to the same material you are reading so I can help get it fixed, or explain something that might not be clear.
[08:12] <SpamapS> gnome: the cloud is not actually very easy.. we've been working on making it easier w/ juju (http://juju.ubuntu.com/)
[08:12] <gnome> k ya i been playing with juju also
[08:12] <gnome> it's nearly making me crazy at how nice it's supposed to work
[08:12] <gnome> but doesn't do as intended.
[08:13] <gnome> i pxe booted all nodes to oneiric - arc - juju
[08:13] <gnome> try to login with my primary user name from main server to any node.
[08:13] <gnome> denied...
[08:13] <gnome> now if this part was streamlined also.. i'd be a happy camper
[08:14] <gnome> i could deploy mpi work i do in a ub environment over massive amounts of pc's quickly
[08:14] <gnome> instead it feels like my head going to explode.
[08:14] <gnome> glances back over at the deb dvd's... :)
[08:15] <gnome> help.ubuntu.com
[08:15] <gnome> is the site.
[08:15] <gnome> the server guide, specifically.
[08:15] <SpamapS> gnome: pxe boot? so you tried the orchestra provider with juju?
[08:15] <gnome> yup
[08:16] <gnome> it works beautifully but no node access...
[08:16] <SpamapS> gnome: that is a really, really specialized and frankly bad use case for juju right now. Its going to be *MUCH* better in 12.04
[08:16] <gnome> like does the head node need a gui front end..
[08:16] <gnome> oh it's fine spamaps i like to work with ... anything
[08:16] <SpamapS> gnome: for EC2.. juju is *very* smooth
[08:16] <gnome> my other cluster is a huge mixture of every distro.
[08:17] <gnome> did it just cause :) lol but I want to do a solid system like what ub development is leaning at with 11.10
[08:17] <SpamapS> gnome: give juju+EC2 a try
[08:17] <SpamapS> gnome: I think you'll like it
[08:17] <gnome> ya i installed ec2
[08:17] <gnome> then juju
[08:18] <gnome> and .. am lost why don't nodes get any info.
[08:18] <gnome> only thing I can possibly think of is in cobbler interface i have user set as
[08:18] <gnome> admin
[08:18] <gnome> that's the conclusion i have come to after... oh 10 netboots of each machine.
[08:19] <gnome> i actually installed the server front end 7 times to get it ...
[08:19] <gnome> the way i wanted.
[08:19] <SpamapS> gnome: *ec2* does not need cobbler
[08:19] <gnome> but isn't cobbler the deploy for pxe?
[08:19] <gnome> doesn't it setup the boot imgs?
[08:20] <SpamapS> gnome: yes, but why would you PXE on a public cloud?
[08:20] <gnome> ? public?
[08:20] <gnome> k private and public clouds.. i just don't understand the terminology this way
[08:20] <gnome> to me private would be something running at home behind multi firewalls.
[08:21] <gnome> public would be like a High availability server running on a public ip.
[08:21] <gnome> sorry if this sounds stupid
[08:21] <SpamapS> gnome: public means another company hosts the hardware
[08:21] <SpamapS> gnome: private means you host it and do not sell it to anyone else.
[08:21] <gnome> I am running a private cloud
[08:21] <gnome> then.
[08:21] <SpamapS> gnome: eucalyptus? openstack?
[08:21] <gnome> yup
[08:22] <SpamapS> both?
[08:22] <gnome> eucalyptus
[08:22] <SpamapS> ah ok
[08:22] <SpamapS> We've had some issues w/ juju + eucalyptus
[08:22] <gnome> like when i get the main server running I install eucalyptus
[08:22] <SpamapS> because of the way euca sets up their "S3"
[08:22] <gnome> then after I login to cobbler
[08:22] <gnome> add the nodes
[08:22] <gnome> and boot them
[08:22] <SpamapS> its fundamentally broken unfortunately.
[08:22] <gnome> they install.
[08:22] <gnome> @!%%%%%%%%%%%%%
[08:22] <gnome> 5 days I been working with.. fundamentally broken..
[08:22] <gnome> no wonder my head hurts
[08:22] <SpamapS> gnome: you may have noticed, the buzz around euca has died down a lot... for a reason
[08:23] <gnome> well if it would just deploy user creds with the pxe boot img properly
[08:23] <SpamapS> gnome: openstack is a bit harder to deploy, but will scale quite a bit more.
[08:23] <gnome> omg I'd be still installing systems to it.
[08:24] <SpamapS> gnome: if you're using juju + orchestra, juju should be installing your key to let you login as the 'ubuntu' user with it.
[08:24] <SpamapS> gnome: I'd recommend hanging out in #juju and asking there
[08:24] <SpamapS> gnome: note though that one reason there's very little documentation on orchestra is that it is changing so rapidly in 12.04
[08:25] <gnome> Unknown id: ubuntu
[08:25] <gnome> :(
[08:26] <gnome> if i could just get into my nodes... I'd be so happy..  short of login brute force then set keys after
[08:26] <gnome> so do I wait for 12.04 or do i go back to 10.?
[08:26] <SpamapS> gnome: if something went wrong during the install then you won't be able to login.. its one of the problems that needs solving. :)
[08:27] <gnome> right but you can continue to re-image. then netboot the nodes till they work right
[08:27] <gnome> only thing I can think of is i am using user name admin
[08:28] <gnome> and i noticed in cobbler logs it says user [?] on machine [ub1]
[08:28] <SpamapS> gnome: the juju orchestra system profile creates a user named 'ubuntu' and puts your ssh key in for the 'ubuntu' user
[08:28] <gnome> so it doesn't seem to know the user for some odd reason. i'm going to look into it further; at this point it seems to me there must be a simple reason why it's failing.
[08:28] <gnome> well i tried sudo su ubuntu
[08:28] <gnome> on the master it said.. no user.
[08:29] <gnome> however there is a eucalyptus user
[08:29] <SpamapS> gnome: then you didn't use juju+orchestra to install that machine.
[08:29] <gnome> and course the user name I set during server install
[08:29] <gnome> odd.. because it was the latest and only 11.10 i could download
[08:29] <SpamapS> gnome: wait, I keep forgetting that you're doing eucalyptus. So you created a eucalyptus cloud.. and you're trying to talk to it w/ juju?
[08:30] <gnome> i am trying to figure out how to talk to it to send creds to the nodes so i can access them
[08:30] <gnome> or was euc not intended to allow us to use the nodes directly?
[08:30] <SpamapS> gnome: and by nodes, you mean the nodes *running* eucalyptus, or the VMs running *inside eucalyptus* ?
[08:30] <gnome> node = terminals . pc's
[08:31] <gnome> slaves!
[08:31] <gnome> sry
[08:31] <gnome> master and 9 slaves
[08:31] <gnome> can't access or login to the slave machines what so ever
[08:31] <SpamapS> gnome: ok, well if you just used cobbler and the default oneiric install profile, then you probably don't have a user. You need to add one to the kickstart/pre-seed
[08:32] <SpamapS> gnome: and by slaves, you mean *physical* machines, not virtual machines?
[08:32] <gnome> yup physical
[08:33] <SpamapS> gnome: ok, so yeah, you just need to define a way to login to them in the pre-seed
[08:33] <gnome> k so do that in cobbler web interface?
[08:33] <SpamapS> gnome: did you try 'ubuntu/ubuntu' for user/pass ?
[08:33] <gnome> or just console edit the kick?
[08:33] <gnome> on a node?
[08:33] <SpamapS> yeah
[08:33] <SpamapS> try it
[08:33] <gnome> k i got 4 flights of stairs
[08:34] <gnome> going to take a minute
[08:34] <gnome> brb
[08:34] <SpamapS> wait no
[08:34] <SpamapS> no no on
[08:34] <SpamapS> no no
[08:34] <gnome> k
[08:34] <SpamapS> gnome: they're not running SSH?
[08:34] <gnome> yup they are
[08:34] <SpamapS> ssh ubuntu@thenode
[08:34] <gnome> BAZINGA!
[08:35] <gnome> now if that small instruction was added on the page.
[08:35] <gnome> omgosh would that help like 1000 ppl who have the same question as me
[08:36] <SpamapS> gnome: well its a security problem and is going to be removed actually. ;)
[08:36] <gnome> is in shock 4 days..
[08:36] <SpamapS> gnome: default passwords == bad
[08:36] <gnome> yes they do
[08:36] <gnome> but k so how do I tell it to just do it auto from pxe.. or should I just not do that and be happy this way?
[08:37] <SpamapS> gnome: But, alas, there's so much else that is changing. Glad we could move you forward. :)
[08:37] <SpamapS> gnome: if you look in the pre-seed, there is a password value (a hash I think, so it looks like gibberish) set... you can change it.
[08:37] <SpamapS> gnome: in cobbler pre-seeds are called kickstarts (because it came from redhat)
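The pre-seed change SpamapS describes can be sketched like this; the `d-i passwd/*` keys are standard Debian-installer preseed options, and the hash shown is a placeholder (generate a real one with e.g. `mkpasswd -m sha-512`):

```text
# Create a named user at install time instead of relying on a default password.
d-i passwd/user-fullname string Admin User
d-i passwd/username string admin
# Crypted hash (placeholder) -- generate your own with: mkpasswd -m sha-512
d-i passwd/user-password-crypted password $6$XXXXXXXX$...........
```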
[08:37] <SpamapS> anyway
[08:38] <SpamapS> its after midnight, time for me to sleep
[08:38] <SpamapS> gnome: good luck
[08:38] <gnome> k ya i was afraid to change that hash
[08:38] <gnome> in fear of breaking the big picture
[08:40] <RoyK> just updated this lucid machine to oneiric to get some more updated libvirt/kvm stuff, and now it hangs when I try to create a new volume :(
[08:40] <RoyK> that is, virt-manager hangs when trying to deal with volumes
[08:41] <RoyK> or actually the whole libvirt part (for this machine)
[09:34] <lynxman> morning o/
[10:35] <jamespage> SpamapS, Daviey: first reboot test now live (but failing testing :-() https://jenkins.qa.ubuntu.com/view/Precise%20ISO%20Testing%20Dashboard/view/Daily/
[10:41] <uvirtbot`> New bug: #944684 in keystone (universe) "Error installing keystone selecting dbconfig-common and sqlite3 as the backend" [Undecided,New] https://launchpad.net/bugs/944684
[11:11] <Daviey> rbasak: do you have capacity to work on bug 911812?
[11:11] <uvirtbot`> Launchpad bug 911812 in facter "processor fact does not handle arm, others" [Undecided,New] https://launchpad.net/bugs/911812
[11:13] <rbasak> Daviey: I think so, I'll look at it
[11:20] <Daviey> rbasak: thanks, it's currently assigned to roaksoax.. but i can't see him having time to work on it in the short term.
[11:49] <rbasak> Daviey: looks like bug 911812 has already been fixed upstream and we're carrying the fix in Precise. I can't confirm from the information in the bug though, so I've asked lamont in the bug.
[11:49] <uvirtbot`> Launchpad bug 911812 in facter "processor fact does not handle arm, others" [Undecided,New] https://launchpad.net/bugs/911812
[11:51] <Daviey> rbasak: does 'facter' work for you on panda?
[11:51] <rbasak> Daviey: yes
[11:51] <rbasak> Daviey: though I do get a couple of warnings about PCI not existing
[11:52] <lamont> sounds like it's fixed then
[11:52] <lamont> I should see if the diff matches though
[12:18] <asac_> so ... I am looking for a cmdline tool that at best can kind of transparently execute commands in ec2 and makes it easy to auto provision servers and shut them down afterwards :)... does such a magic box exist :)?
[12:19] <asac_> actually that is already too specific. We have jenkins running to basically just do cloud provisioning and execution of remote jobs (for building)... but we don't want to use that anymore. what are the options?
[12:20] <asac_> smoser: ^^
[12:20] <asac_> :)
[12:20] <asac_> hi!
[12:57] <koolhead17> zul: awesome!! :)
[12:58] <koolhead17> dashboard E4 has a blocker now https://bugs.launchpad.net/horizon/+bug/944763
[12:58] <uvirtbot`> Launchpad bug 944763 in horizon "horizon-2012.1~e4.tar.gz is broken" [Undecided,New]
[12:58] <koolhead17> :(
[13:00] <jdstrand> Daviey, lynxman: hey, do you know if someone is working on the 2.7.11-1 puppet merge?
[13:02] <lynxman> jdstrand: I did a package a couple days ago, 2.7.11-0, can do the merge this morning as well
[13:03] <jdstrand> lynxman: that would be wonderful :) can you ping me when it is uploaded?
[13:04] <lynxman> jdstrand: I don't have upload rights, I'll find someone to sponsor the merge
[13:08] <RoyK> fdhsfdsgdsg: fix your internet connection!
[13:09] <jdstrand> lynxman: if you can't, ping me
[13:10] <lynxman> jdstrand: thanks :)
[13:28] <smoser> asac_, ah.  i'm not aware of anything that exactly fits your needs.  from what i understand, you basically want something like "chroot" that chroots into an ec2 instance, right ?
[13:28] <smoser> or i guess schroot that has the interface to start up a new thing and stop it.
[13:28] <rbasak> lamont: so your patch doesn't apply to the latest source in Precise because the logic seems to have moved to a different file (under util/processor.rb now). But the arm logic in there appears to be the same as what your patch is applying - possibly derived from it?
[13:29] <smoser> i think rbasak has some stuff that does similar things.
[13:30] <rbasak> yeah I think my tool matches that description
[13:30] <rbasak> It's geared at openstack at the moment; I need to check how to get it generic to ec2.
[13:32] <rbasak> smoser: speaking of which, if it's useful I'd like to get it into cloud-utils or something like that eventually
[13:32] <smoser> i would say yeah.
[13:32] <smoser> and i think modelling after schroot's cmdline interface would be pretty good.
[13:33] <smoser> or maybe even just extend schroot :)
[13:33] <rbasak> hmm, that'd be interesting
[13:34] <rbasak> I never thought of it as an schroot-alike before
[13:34] <smoser> schroot has a reasonable interface.
[13:34] <smoser>  start, enter, delete, list
[13:35] <rbasak> yes, that is reasonable
[13:35] <rbasak> The current interface is modelled after ssh with some stuff added
[13:36] <smoser> so for +1, lets do that.
[13:36] <rbasak> I think I'd like to support both
[13:36] <smoser> well, schroot has a simple: start, run command, cleanup
[13:36] <smoser> which is really the only place i think ssh would be different
[13:36] <smoser> right?
[13:37] <rbasak> for interactive use, I tend to think of it as a machine that I can ssh to that is created automatically the first time I mention it
[13:38] <rbasak> I've embedded user-configurable specifications of what the machine should be like (which cloud, what image, etc) based on the machine name, which is in the user's standard ssh namespace. Then scp and rsync work too.
[13:38] <rbasak> I need to show you it really.
[13:39] <rbasak> I agree that an schroot-alike interface would work well too - especially to people used to that, as they won't need to learn anything
[13:39] <spajderix> hi
[13:39] <rbasak> I don't see any reason why I can't do both.
[13:40] <lamont> rbasak: probably
[13:40] <lamont> prolly based on it, that is
[13:40] <rbasak> lamont: can I mark the bug Fix Released for precise, or would you like to check further first?
[13:41] <lamont> 'tever - if it's returning good facts, I'm happy
[13:43] <spajderix> I have some issues with mysql replication. I have a master and a slave with backup. Problem is, from time to time when I do SLAVE STOP for the daily dbdump, the query just hangs forever, and only killing the server helps to unfreeze it. I've located some bugs in mysql's buglist, but the fixes address mysql-server versions 5.4+. I would really appreciate a fix in ubuntu, so should I report a new bug or
[13:43] <spajderix> request a backport of newer mysql to lucid?
[13:44] <rbasak> spajderix: that sounds like a bug that would be a candidate for an SRU, and one that we'd want fixed in lucid
[13:45] <rbasak> spajderix: https://wiki.ubuntu.com/StableReleaseUpdates
[13:47] <rbasak> spajderix: although I'm not sure about mysql actually - upstream don't work in public so it may be awkward
[13:51] <asac_> smoser: maybe :) ... something that brings the cloud transparently to your local machine... but also does some degree provisioning and pooling (can be manual operations i guess) of the instances
[13:51] <smoser> pooling ?
[13:51] <asac_> well
[13:51] <asac_> management
[13:51] <asac_> so this tool kind of keeps track of your instances
[13:51] <asac_> and allows you to shut down etc.
[13:51] <smoser> are you familiar with schroot ?
[13:52] <asac_> important that the host gets to know when an operation is finished so it can pull the artifacts and shut down
[13:52] <asac_> smoser: no :)
[13:52] <asac_> smoser: oh i know schroot yes
[13:52] <smoser> right.
[13:52] <asac_> but not how to use that in the cloud... is there a good recipe for how that can do what i want?
[13:52] <smoser> oh, you can't.
[13:52] <asac_> i want it to be a bit dynamic i guess
[13:52] <smoser> but from an interface perspective, would that be enough for you?
[13:52] <asac_> e.g. just having static cloud servers running that i can schroot into would be a bit lame :)
[13:52] <rbasak> So right now I can do stsh foo, and it detects that a machine called foo doesn't exist, starts one in the cloud (called foo), and sshes into it. So it's as if I typed "ssh foo" and the machine existed already. My tool also sorts out known_hosts automatically and updates ~/.ssh/config so scp, rsync and vanilla ssh will work too.
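A sketch of the ~/.ssh/config bookkeeping rbasak describes might look like the following; stsh is not public, so the host name, address, and key path are all invented for illustration, not taken from its actual implementation:

```shell
# Hypothetical sketch of what a tool like stsh could append to ~/.ssh/config
# after starting an instance; every value here is a placeholder.
cfg=$(mktemp)                 # stand-in for ~/.ssh/config
instance_ip="10.0.0.5"        # assumed address reported by the cloud API
cat >>"$cfg" <<EOF
Host foo
    HostName $instance_ip
    User ubuntu
    IdentityFile ~/.ssh/cloud_key
EOF
# from here on, "ssh foo", "scp foo:file ." and "rsync ... foo:" all resolve
grep -c '^Host foo' "$cfg"    # prints 1
```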
[13:53] <asac_> smoser: i guess...
[13:53] <asac_> smoser: if i can see the running instances with schroot -l
[13:53] <asac_> and have switches to start up
[13:53] <smoser> ie, when i want a new schroot, i do schroot --run-session --chroot ...
[13:53] <asac_> and turn off
[13:53] <asac_> it could be good
[13:53] <smoser> and then when i'm done, i kill it.
[13:53] <rbasak> I have stsh --terminate foo and stsh --list which are easy enough to convert to schroot compatible flags
[13:53] <smoser> that can all be done in one command in schroot too (new session, chroot in, exit when command terminates)
[13:54] <asac_> smoser: i think exiting the schroot shouldn't shut it down
[13:54] <smoser> in schroot it does sometimes.
[13:54] <smoser> but you can make it not
[13:54] <asac_> smoser: so more like schroot ... goes into an existing chroot
[13:54] <rbasak> asac_: you can do that with schroot, by requesting a persistent session when you create it
[13:54] <smoser> right.
[13:54] <asac_> interesting :)
[13:54] <smoser> so basically i think the model works well.
[13:54] <smoser> the schroot just happens to be somewhere across the planet
[13:54] <rbasak> I think the schroot model works, but is a bit wordy to use by hand interactively
[13:55] <smoser> it is wordy, i agree.
[13:55] <smoser> :)
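For reference, the persistent-session lifecycle being mapped onto the cloud looks like this with plain schroot (the chroot name `precise` is a placeholder); this sketches schroot's existing local interface, not the proposed wrapper:

```
# begin a persistent session; schroot prints a session id
SESSION=$(schroot --begin-session --chroot precise)
# enter it as often as you like; a command exiting doesn't tear it down
schroot --run-session --chroot "$SESSION" -- uname -a
# list live sessions, then end one explicitly
schroot --list --all-sessions
schroot --end-session --chroot "$SESSION"
```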
[13:55] <asac_> yeah. a convenient wrapper
[13:55] <asac_> would be great
[13:55] <asac_> like
[13:55] <rbasak> OTOH, I think there's a lot of value in trying to match syntax with existing tools
[13:55] <asac_> cloud-root --list
[13:55] <asac_> cloud-root --start name
[13:55] <asac_> cloud-root name CMD
[13:55] <asac_> cloud-root --kill name
[13:55] <asac_> still need to be able to download stuff
[13:55] <asac_> like cloud-root get /path/to/file
[13:55] <rbasak> So I'm thinking of keeping my mechanism but have an schroot-cloud wrapper that wraps it into schroot-compatible options
[13:56] <smoser> asac_, well that is just:
[13:56] <smoser>  cloud-root name cat /path/to/file > file
[13:56] <smoser> or
[13:56] <rbasak> or in my case, scp name:/path/to/file . :-)
[13:56] <asac_> cloud-scp name:/path/...
[13:56] <asac_> wow
[13:56] <asac_> thats cool
[13:56] <asac_> :)
[13:56] <smoser>  cloud-root name tar cf - file1 file2 file3 > local.tar
[13:56] <asac_> but ftp like behaviour would also be fun :)
[13:57] <rbasak> sftp will work :)
[13:57] <asac_> cloud-root name tar cf - file1 file2 file3 > local.tar
[13:57] <asac_> thats interesting
[13:57] <asac_> cool
[13:57] <rbasak> that's just "ssh name tar cf - file1 file2 file3 > local.tar" :-)
[13:57] <smoser> right.
[13:58] <smoser> so, its settled.
[13:58] <asac_> where can i download such tool :)?
[13:58] <smoser> rbasak will write a tool and i'll tell him how i want it to look :)
[13:58] <asac_> omg
[13:58] <asac_> i would love it
[13:58] <asac_> :)
[13:58]  * rbasak has written the tool already; I just need to write the smoser-wrapper :-P
[13:58] <smoser> can you have that done by monday rbasak ?
[13:58]  * smoser ducks
[13:59] <rbasak> Actually that's not even that far off feasible :)
[14:00] <rbasak> smoser: I have a cloud-init feature request for this BTW
[14:00] <smoser> rbasak, you should show asac_ what you have though
[14:00] <smoser> rbasak, .... what is that ?
[14:00] <smoser> rbasak, and you should point me to what you have also
[14:00] <asac_> rbasak: i am a happy lead customer to try out and provide you feedback on how your command line interface is convenient and inspiring :)
[14:00] <smoser> asac_, in bikeshed (kirkland) there is a tool....
[14:00] <smoser> let me find it
[14:00] <rbasak> I was discussing this with utlemming back in January. The issue is how to get known_hosts updated securely.
[14:00] <asac_> lol
[14:01] <smoser> called cloud-sandbox
[14:01] <rbasak> kirkland used what let's call a double-key mechanism
[14:01] <smoser> yeah.
[14:01] <rbasak> that works but is a bit ugly
[14:01] <rbasak> I'm reading the console fingerprint from get_console_output and verifying that automatically, but the catch is that EC2 is really slow at updating it, so starting an instance is slow
[14:02] <rbasak> But on openstack it's fine since get_console_output doesn't need updating and works immediately
[14:02] <smoser> rbasak, so that is just motivation for using openstack
[14:02] <smoser> :)
[14:02] <rbasak> The third mechanism that utlemming came up with for EC2 was using SQS as a read-once key delivery mechanism
[14:03] <smoser> that requires putting credentials to do that into the instance.
[14:03] <rbasak> Create a queue, add one item that contains the key, put the credentials for that in user data, then cloud-init fetches the key out.
[14:03] <smoser> right ?
[14:03] <smoser> oh.
[14:03] <smoser> the othe rway around.
[14:03] <rbasak> Yes - but the credentials are useless once cloud-init has finished, since the key will no longer be available from the queue.
[14:04] <smoser> here is the other thing i considered:
[14:04] <smoser> http://openkeyval.org/
[14:04] <smoser>  * using that...
[14:04] <smoser>  * on creation, you come up with a long secret key
[14:04] <smoser>  * use that to tell the instance to post its keys to that location in openkeyval
[14:04] <smoser>  * wait for that key to appear
[14:04] <smoser>  * use it
[14:05] <smoser> you can also fortify it by having more than just the key as the secret
[14:05] <smoser> by adding a secret that you then calculate the sum of "content+secret" and append it to what is posted.
[14:05] <smoser> then you know that only someone who knows that secret could have posted valid content there.
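smoser's content+secret scheme can be sketched in a few lines of shell. The secret and key material below are made up; a proper HMAC would be the textbook construction, but the plain hash-of-concatenation follows what's described here:

```shell
secret="long-random-secret-picked-at-launch"     # delivered out of band
content="ssh-rsa AAAAB3Nza... root@instance"     # host key to publish
# instance side: post content plus sum(content+secret)
sum=$(printf '%s%s' "$content" "$secret" | sha256sum | cut -d' ' -f1)
posted="$content $sum"
# launcher side: only a holder of the secret could have produced a valid sum
check=$(printf '%s%s' "${posted% *}" "$secret" | sha256sum | cut -d' ' -f1)
[ "$check" = "${posted##* }" ] && echo verified   # prints "verified"
```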
[14:05] <rbasak> Isn't there a race there? Malicious code runs after the instance has booted and run cloud-init and is doing its normal workload, and you haven't fetched the key yet
[14:05] <rbasak> Unlikely I admit
[14:06] <jMCg> EVERYTHING as a web service.
[14:06] <smoser> rbasak, "malicious code runs after instance has booted"
[14:06] <smoser> thats your problem
[14:06] <smoser> you can't really fix that, now can you
[14:06] <smoser> :)
[14:06] <rbasak> smoser: in that case why don't we just supply the private host key in userdata? :)
[14:07] <smoser> hm.. is that true? is this no better?
[14:07] <smoser> let me think
[14:08] <rbasak> it is a lot better, but I think there an (unlikely) race, which the other methods avoid.
[14:10] <smoser> rbasak, yeah, it is bettter
[14:10] <smoser> hm.. i don't know.
[14:12] <smoser> rbasak, so wouldn't the SQS need creds in the instance?
[14:12] <smoser> to read the message?
[14:12] <smoser> i need to read more on sqs
[14:13] <rbasak> smoser: yes. But it gets a bit hacky at this point. I think you can create a per-instance queue so you don't give the instance any more creds than for its own queue, which will have only one message.
[14:14] <rbasak> smoser: at this point I'm wondering if kirkland's hack is less of a hack than this one
[14:15] <rbasak> (also kirkland's solution is genius even if it is a hack)
[14:15] <smoser> yeah. it does work.
[14:16] <smoser> and he had kees look at it to review it.
[14:16] <kirkland> rbasak: what's kirkland's solution?
[14:17] <rbasak> kirkland: your temporary key thing to securely get a private key to an instance and know its fingerprint
[14:17] <kirkland> SpamapS: yeah, I'm seeing that in a few places (byobu in precise flickering;  something wrong with the status caching mechanism)
[14:17] <kirkland> rbasak: why thank you :-)
[14:17] <kirkland> rbasak: I do like that, very much
[14:17] <kirkland> rbasak: though a much, much more forward thinking solution would be to use monkeysphere
[14:17] <hallyn> zul: where is the patch you wanted me to add to libvirt?
[14:17] <kirkland> rbasak: though I haven't gotten smoser to go for that one yet
[14:18] <kirkland> rbasak: smoser: the *right* answer to this problem, in my opinion, is monkeysphere
[14:18] <smoser> regarding momkeysphere, i'm just lazy
[14:18] <zul> hallyn: damn that one totally fell off my list hold on
[14:18] <smoser> patches welcome
[14:19] <smoser> zul, you said theres a fix for bug 942865 in gerrit ?
[14:19] <uvirtbot`> Launchpad bug 942865 in nova "upgrade from diablo leaves existing images with kernel unbootable" [High,Triaged] https://launchpad.net/bugs/942865
[14:19] <smoser> there is no comment to such effect in the bug
[14:19] <zul> smoser: yes
[14:19] <zul> hallyn: its this commit: http://libvirt.org/git/?p=libvirt.git;a=commit;h=9130396214975ba2251082f943c9717281039050
[14:21] <lamont> SpamapS: I heard a rumor you might know about 904834 - it'd be good to see that get into precise
[14:21] <lamont> SpamapS: specifically wrt the MIR for librbd-dev
[14:21] <zul> hallyn: sorry about that the past couple of days have been hilariously busy
[14:22] <rbasak> kirkland: interesting!
[14:22] <rbasak> kirkland, smoser: that's not really cloud-specific though, right? Wouldn't it make more sense to integrate monkeysphere into Ubuntu Server generally first?
[14:25] <zul> Daviey: just uploaded a fix for he eventlet memory leak as well
[14:25] <smoser> zul, you have a link ?
[14:26] <zul> smoser: https://review.openstack.org/#change,4788
[14:26] <smoser> i'm completely incapable with gerrit's ui
[14:26] <zul> smoser: eh?
[14:26] <smoser> booo to vishy
[14:26] <smoser> for not even adding the bug numbers
[14:26] <zul> hehe
[14:27] <smoser> * Adds name from manifest to glance on register
[14:27] <smoser> woot!
[14:27] <smoser> i had a review that did that
[14:27] <smoser> but it was nacked waiting on test cases
[14:29] <zul> smoser:  anyways ill backport it for e4 today
[14:31] <smoser> please
[14:32] <Daviey> zul: nice
[14:34] <zul> so i just uploaded glance e4. i created a branch called lp:~ubuntu-server-dev/glance/essex.milestone.e4, so any fixes that need to go in between now and the next snapshot (next friday) will go in there, since the packaging branches follow trunk
[14:35] <lynxman> jamespage: ping
[14:36] <lynxman> or actually Daviey or zul, does this look okay? http://pastebin.ubuntu.com/865160/
[14:39] <Daviey> lynxman: it looks like a failure merging d/changelog?
[14:40] <lynxman> Daviey: hmm yeah you're right, 1 sec
[14:40] <lynxman> Daviey: I had 2.7.10-1 from debian twice by mistake
[14:41] <lynxman> Daviey: when syncing straight from debian again the previous ubuntu changelog disappears? I mean... the 2.7.10-1ubuntu1 release
[14:42] <zul> lynxman: why arent you using syncpackage?
[14:42] <lynxman> zul: erm... *blushes* didn't know it existed :)
[14:42] <zul> if you are synching straight from debian (no ubuntu changes)
[14:42] <lynxman> zul: that's correct
[14:43] <zul> lynxman: install ubuntu-dev-tools
[14:43] <lynxman> zul: I have it there already
[14:43] <zul> lynxman: http://manpages.ubuntu.com/manpages/oneiric/man1/syncpackage.1.html
[14:46] <hallyn> zul: and you've tested with that patch?
[14:47] <zul> hallyn: yep works fine
[14:47] <hallyn> ok
[14:47] <lynxman> zul: the thing is that I have no upload rights and I need to do a bzr merge, which is what I was doing
[14:47] <zul> lynxman: oh yeah duh....carry on :)
[14:47] <lynxman> zul: heh :)
[14:47] <zul> lynxman: why not apply for ubuntu-serv-dev rights?
[14:48] <lynxman> zul: you reckon I'm experienced enough?
[14:48] <zul> lynxman: sure i guess
[14:48] <Daviey> lynxman: this isn't a sync is it?
[14:49] <lynxman> Daviey: not 100% due to the debian-changes patch being different from one version to the next
[14:49] <lynxman> Daviey: but that's it
[14:50] <hallyn> zul: is there a bug to reference for that?
[14:50] <zul> hallyn: no
[14:50] <hallyn> k
[14:50] <hallyn> firing away
[14:51] <Daviey> lynxman: unless it is a straight sync, always maintain the changelog as is.
[14:51] <lynxman> Daviey: so just add the debian changelog entries on top of the ubuntu one (the ones that are newer I mean)
[14:53] <lynxman> Daviey: http://pastebin.ubuntu.com/865188/
[14:59] <smoser> rbasak, just one more thing to say regarding the ssh auth stuff.
[15:00] <smoser> another option that requires s3 is to add an s3 expiring url and '#include' it.
[15:00] <rbasak> smoser: yes, that would work
[15:00] <Daviey> lynxman: wait, why isn't this a sync?
[15:00] <smoser> its not as good as a one time use, but, reasonable.
[15:00] <rbasak> yeah
[15:00] <smoser> cloudinit has '#include-once' explicitly for that purpose.
[15:01] <smoser> monkeysphere or kirkland's solution use no additional AWS infrastructure (meaning they "just work" on openstack)
[15:01] <rbasak> what would clean the S3 entry up?
[15:01] <smoser> they have "expiring urls"
[15:01] <rbasak> doesn't that correspond to a real URL?
[15:01] <smoser> yes.
[15:01] <smoser> but it goes away
[15:01] <smoser> magically
[15:02] <smoser> http://www.givp.org/blog/2011/08/01/amazon-s3-expiring-urls-with-boto/
[15:02] <rbasak> Yeah but wouldn't we want to clean up the real URL?
[15:02] <smoser> you mean delete the object in the bucket?
[15:03] <smoser> i don't know what happens to it, whether it automatically deletes or not
[15:03] <smoser> i'll try
[15:03] <rbasak> I think it stays
[15:03] <lynxman> Daviey: that's what I'm saying, I think it's a sync, but I can't sync since I have no upload rights :)
[15:03] <Daviey> lynxman: if it is a sync, that is - no ubuntu delta still required.. use the 'request-sync' tool
[15:03] <rbasak> AIUI, it's a mechanism to give people temporary access. It's just the authorization that expires.
[15:04] <Daviey> err, syncpackage
[15:04] <lynxman> Daviey: alright! will do so
[15:04] <rbasak> To make it secure, cloud-init would need to sleep for the expiry time
[15:04] <lynxman> Daviey: with requestsync then rather than syncpackage
[15:05] <Daviey> lynxman: err, yeah
[15:05] <lynxman> Daviey: cool, doing right now
[15:05] <Daviey> rocking
[15:07] <lynxman> Daviey: bug #944866 filed
[15:07] <uvirtbot`> Launchpad bug 944866 in puppet "Sync puppet 2.7.11-1 (main) from Debian sid (main)" [Undecided,New] https://launchpad.net/bugs/944866
[15:08] <Daviey> lynxman: cool
[15:08] <smoser> rbasak, yeah, you're right.
[15:08] <smoser> it'd need cleanup
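The S3 flow discussed above might look like this with s3cmd (bucket and object names invented); as just noted, expiry only revokes the signed URL, so the object still needs explicit deletion:

```
# upload the user-data fragment, then sign a URL valid for 10 minutes
s3cmd put part.txt s3://my-bucket/part.txt
s3cmd signurl s3://my-bucket/part.txt +600
# hand the printed URL to cloud-init as:  #include-once <signed-url>
# afterwards, delete the object yourself; the URL expiring doesn't remove it
s3cmd del s3://my-bucket/part.txt
```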
[15:15] <uvirtbot`> New bug: #944866 in puppet (main) "Sync puppet 2.7.11-1 (main) from Debian sid (main)" [Undecided,New] https://launchpad.net/bugs/944866
[15:25] <smb> smoser, Hi, today I brought up a cg1.4xlarge as a spot instance and normally. Both show exactly the same stuck cpu#0 as you had. But I cannot get it to do the same locally (even giving it 16 vcpus, while I only have 8 physical cores)
[15:25] <smoser> smb, hm..
[15:26] <smoser> well, i guess we should open a bug, and maybe ping amazon via utlemming.
[15:26] <smoser> smb, note, i'm not certain if natty had this issue or not.
[15:26] <smoser> have you tried other kernels ?
[15:26] <smb> smoser, Right, I am a bit clueless right now. No, only tried the precise daily up to now
[15:27] <smoser> does it happen every boot ?
[15:27] <smoser> could you just install the natty kernel and reboot and test it that way?
[15:28] <smb> smoser, From the two attempts it did both times, but I can do that natty (oneiric?) test
[15:28] <smoser> smb, the number of times i think i've considered you "clueless" in regard to kernel is... let me count.... ZERO
[15:28] <smoser> smb, well, you can surely bisect at the distro-kernel level to get more info there.
[15:28] <smoser> we should open a bug.
[15:30] <smb> smoser, Well, let me put it that way. It seems always to be cpu#0, and the instruction pointer we get printed is always the same place (xchg used as nop, after enabling interrupts). It's not somewhere a cpu normally gets stuck.
[15:31] <smb> smoser, Agreed, I will open one
[15:31] <jcastro> hey smoser
[15:31] <jcastro> I thought we had gotten our AMIs in the amazon quickbrowser by now?
[15:32] <smoser> jcastro, apparently not
[16:05] <kirkland> jdstrand: howdy!  when you get a chance, could you respond to soren's questions on https://bugs.launchpad.net/ubuntu/+source/ssh-import-id/+bug/944367 ?
[16:05] <uvirtbot`> Launchpad bug 944367 in ssh-import-id "Ignores $http_proxy setting" [Wishlist,Triaged]
[16:09] <jdstrand> kirkland: hi! done
[16:09] <kirkland> jdstrand: rockin, thanks
[16:10] <kirkland> jdstrand: I'll specifically whitelist https_proxy
[16:10] <jdstrand> kirkland: well, that isn't what I suggested in the comment
[16:10] <kirkland> jdstrand: hmm, okay, so not just existence of the env var
[16:10] <jdstrand> kirkland: *optionally* whitelisting https_proxy seems the safest move (via command line)
[16:11] <kirkland> jdstrand: but you'd like the user to additionally tell ssh-import-id to use $https_proxy ?
[16:11] <kirkland> jdstrand: i was thinking of just adding env -i https_proxy="$https_proxy" ...
[16:11] <kirkland> jdstrand: but that's not acceptable to you?
[16:12] <jdstrand> kirkland: it doesn't matter to me if the arg allows preserving what is already in https_proxy or the user explicitly setting it
[16:12] <kirkland> jdstrand: but your point is that it has to be an additional non-default argument on the command line explicitly enabling that behavior?
[16:12] <jdstrand> kirkland: imo this is one of the variables we would want to filter
[16:12] <jdstrand> kirkland: yes
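The behaviour being negotiated (scrub the environment, but let an explicit option carry https_proxy through) can be sketched as below; the inner `sh -c` stands in for the wget call, and the flag variable is invented for illustration:

```shell
https_proxy="https://proxy.example:3128"
use_proxy=yes    # imagine this set only by an explicit command-line option
if [ "$use_proxy" = yes ]; then
    # start from an empty environment, whitelisting only https_proxy
    kept=$(env -i https_proxy="$https_proxy" sh -c 'echo "${https_proxy:-unset}"')
else
    kept=$(env -i sh -c 'echo "${https_proxy:-unset}"')
fi
echo "$kept"    # prints the proxy URL when whitelisted, "unset" otherwise
```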
[16:13] <soren> jdstrand: Can you elaborate a bit on your rationale? I (sort of) understand it's a privileged operation, but what is cleaning the environment supposed to protect against?
[16:14] <soren> Er...
[16:14] <soren> s/privileged/sensitive/, of course.
[16:14] <kirkland> jdstrand: under what situation would a user's https_proxy environment variable be potentially compromised, where they would also be running ssh-import-id?
[16:14] <jdstrand> if https_proxy is set to connect to something else, you can import an id that you perhaps didn't intend
[16:15] <jdstrand> it helps with mitm attacks
[16:15] <soren> jdstrand: ...who would be able to set that?
[16:15] <kirkland> (so the good news is that smoser helped add the optarg parsing to ssh-import-id, so this is technically doable...thanks, smoser)
[16:15] <soren> jdstrand: If I can override a user's environment, I can probably add things to his authorized_keys, too?
[16:15] <jdstrand> it isn't just that your environment is altered
[16:15] <jdstrand> this could be in a script situation, etc
[16:15] <jdstrand> (depth)
[16:16] <jdstrand> but, that point aside
[16:16] <jdstrand> say it is set to https_proxy=https://foo.bar
[16:17] <jdstrand> if you are now in a cafe and foo.bar is redirected to an attacker's machine, the attacker could mitm you
[16:17] <soren> How so? wget checks certificates?
[16:18] <SpamapS> jamespage: thanks I'll take a look
[16:18] <jdstrand> is it doing it correctly? does it do it by default? it is just a safety measure
[16:18] <SpamapS> lamont: re librbd+kvm in precise.. waiting on MIR approval as right now kvm will FTBFS if we add support
[16:19] <SpamapS> hallyn: ^^ would you agree with that being the reason?
[16:19] <soren> jdstrand: I guess. Cleaning the environment here just seems kinda arbitrary.
[16:19] <lamont> SpamapS: who do I prod about getting the MIR approved?
[16:19] <jdstrand> well, that's how I roll :P
[16:20] <jdstrand> I see wget has a --no-proxy arg. perhaps that is the easy toggle
[16:21] <SpamapS> jdstrand: can we prod you for status on the CEPH MIR?
[16:21] <SpamapS> jdstrand: https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/932898
[16:21] <uvirtbot`> Launchpad bug 932898 in ceph "[MIR] ceph" [Undecided,Confirmed]
[16:21] <Daviey> Is that still going ahead?
[16:21] <jdstrand> SpamapS: the status is nothing has happened yet. I've asked Daviey for a prioritized list of security MIR reviews and will be working through that
[16:21] <SpamapS> Ah
[16:22] <Daviey> jdstrand: Yep, that should have been with you already.. Waiting on some more data my side.
[16:22] <philsf> I need some sanity check on my apache virtualhosts config. I'm setting apache to listen on the ip address for a virtualhost for testing purposes, before I set the DNS, but it seems to be looking into the index of the wrong vhost.
[16:22] <zul> Daviey: i sure hope keystone and horizon are on that list
[16:22] <Daviey> zul: naturally
[16:22] <zul> Daviey: good
[16:22] <SpamapS> lamont: ^^ there you go... I think the MIR team is a bit backed up this cycle. :-P
[16:22] <jdstrand> Daviey: I am still going through email this morning...
[16:23] <philsf> http://paste.ubuntu.com/865332/
[16:23] <SpamapS> zul: re python-tz .. was there no possibility to use pythone-dateutil ?
[16:23] <SpamapS> zul: I noticed nova or glance or something pulled it in
[16:23] <zul> SpamapS: it was a dependency of python-babel which has been dropped
[16:24] <lamont> SpamapS: clearly, we need to arrange a small corner to put the MIR team in so we can discuss priorities... :D
[16:26] <jdstrand> Daviey: you sent that email? I don't see it. what is the subject?
[16:27] <SpamapS> lamont: perhaps we should use...... _THE COMFY CHAIR_
[16:28] <lynxman> SpamapS: noooo, not the comfy chair
[16:28] <SpamapS> lynxman: ok then, just the soft cushion
[16:28] <lynxman> SpamapS: that'll show 'em
[16:28] <philsf> in the above pastebin are the headers of the two vhosts in question, where it's clear that they have different DocumentRoots. When accessing the FARMACO vhost DocRoot, however, apache seems to read the index.html of the ICB vhost, which calls a CGI application that's obviously not there. To make things worse, if I try to access /index.html directly, it reads the correct one. I've grepped for redirects and found nothing suspicious. Can anyone see what I am doing wrong here?
[16:28] <philsf> http://paste.ubuntu.com/865332/
[16:29] <jdstrand> zul: the keystone mir is still incomplete awaiting a response from the server team
[16:29] <Daviey> jdstrand: no, i'm still waiting on some more data..
[16:29] <zul> jdstrand: and you will have your response on monday
[16:29] <jdstrand> Daviey: oh, I see
[16:29] <jdstrand> ok
[16:29] <Daviey> jdstrand: sorry!
[16:29] <jdstrand> no worries
[16:29] <jdstrand> I already reviewed one keystone...
[16:30] <Daviey> jdstrand: It's a full rewrite. :/
[16:30] <Daviey> (joy)
[16:30] <jdstrand> yeah
[16:30] <jdstrand> that is pretty unfortunate as I reviewed the first one... :|
[16:30] <jdstrand> oh well
[16:31] <SpamapS> jdstrand: not so unfortunate if you gave it the same negative review as the team who decided to rewrite it ;)
[16:31] <jdstrand> heh
[16:32] <jdstrand> in terms of time, it was unfortunate. the code audit itself was not super deep
[16:32] <jdstrand> (how can it be?)
[16:32]  * jdstrand stops griping
[16:34] <___MAX> Hi, ubuntu bootmgr is missing press ctrl+alt+del to restart
[16:41] <smb> smoser, utlemming bug 944923 contains all I know so far, I think
[16:41] <uvirtbot`> Launchpad bug 944923 in linux "[EC2:cg1.4xlarge] CPU#0 stuck for 23s! [migration/0:6] __do_softirq+0x60/0x210" [Low,Triaged] https://launchpad.net/bugs/944923
[16:44] <smb> smoser, It looks like an Oneiric 3.0 kernel also lags at some point. Just a bit less (ok, half as long) and without the softlockup triggering.
[16:50] <hallyn> SpamapS: I don't see lamont's q.  but yes we're waiting on mir (see -devel)
[17:25] <zul> main openstack projects have been updated to e4 + bugfixes; quantum, swift, and melange will be uploaded this afternoon
[17:30] <sixstringsg> If I'm running a make over SSH, what is the best way to make it continue if I disconnect SSH?
[17:31] <rbasak> sixstringsg: run it in a screen
[17:32] <sixstringsg> Yeah, but I hate trying to scroll back in screen...
[17:32] <sixstringsg> In case it fails.
[17:32] <sixstringsg> I guess I should just learn screen better, thanks.
[17:32] <rbasak> Then you could do make >make.log 2>&1 & and then tail -f make.log. Either with screen or without
[17:32] <sixstringsg> Thanks!
[17:33] <smb> or make 2>&1|tee log ...
[17:33] <rbasak> Or make 2>&1 |tee make.log
[17:33] <rbasak> smb: :)
[17:33] <sixstringsg> So many options!
[17:33] <rbasak> but that would die if the connection dies
[17:33] <smb> rbasak, :) just about the same time
[17:33] <rbasak> you could stick a & at the end I suppose
[17:33] <rbasak> bit messy
[17:33] <smb> I'd just use it together with screen
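Putting rbasak's and smb's suggestions together (a short echo/sleep stands in for the real make so the sketch is self-contained):

```shell
# detach the build from the terminal so a dropped SSH session can't kill it,
# and keep a log any later session can tail
nohup sh -c 'echo compiling; sleep 1; echo build finished' >make.log 2>&1 &
build_pid=$!
# ...disconnect, reconnect, then check on it...
wait "$build_pid"
tail -n1 make.log    # prints: build finished
```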
[17:34] <rbasak> sixstringsg: if you don't like screen, look at byobu. It wraps screen and makes it a bit more approachable.
[17:34] <rbasak> Not sure what it does about the scrollback keybindings though
[17:34] <sixstringsg> Thanks. Honestly, I just haven't taken the time to learn screen properly.
[17:35] <rbasak> Yeah it isn't pleasant to learn.
[17:35] <sixstringsg> Cannot open your terminal '/dev/pts/6' - please check.
[17:36] <sixstringsg> I'm getting that with both. This is a new server I'm playing with, so I haven't used screen on it yet.
[17:36] <sixstringsg> Nevermind, fixed.
[17:37] <smb> smoser, Ok, so this hvm delay on vcpu#0 happens all the way back to Natty (at least)
[17:52] <jamespage> kirkland, around? have a question about dotdee (might be a bug but not sure)
[17:55] <savid> Using ufw, I want to delete rule NUM, but how do I know which NUM to use (they are not numbered in the status view)?
[17:56] <savid> oh, nm.  I needed "status numbered"
[17:57] <whoozdat> hello
[17:57] <whoozdat> howdy ubuntu server users
[17:58] <arthurjohnson> hola
[17:58] <whoozdat> need help setting up bind9
[18:02] <whoozdat> what is $TTL 3D
[18:02] <whoozdat> in db.zonefile ?
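To answer the question inline: $TTL 3D sets the default time-to-live (three days; BIND accepts M/H/D/W suffixes) for every record in the zone that doesn't carry its own. A minimal db.zonefile for context, with placeholder names and addresses:

```
$TTL 3D                 ; default record TTL: 3 days
@   IN  SOA ns1.example.com. admin.example.com. (
            2012030101  ; serial, bump on every change
            8H          ; slave refresh interval
            2H          ; retry after a failed refresh
            4W          ; expire: slaves drop the zone after this
            1D )        ; negative-caching TTL
@   IN  NS  ns1.example.com.
ns1 IN  A   10.152.187.2
```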
[18:02] <kirkland> jamespage: yo yo, what up?
[18:02] <jamespage> kirkland, hey!
[18:02] <jamespage> so I'm using dotdee in a couple of charms I'm working on
[18:03] <kirkland> jamespage: hey man, hope you're doing well :-)
[18:03] <jamespage> kirkland, sure am - hope that life is treating you well as well!
[18:03] <kirkland> jamespage: yeah, things going well
[18:03] <jamespage> good
[18:04] <jamespage> question re dotdee - I should not have to be calling dotdee --update to get it to update a file under management should I?
[18:10] <queso> So in lucid I installed open-vm-tools in a new virtual machine I just built and it installs the X server?  Something's wrong there.
[18:13] <queso> There isn't a -nox version of open-vm-tools?
[18:14] <patdk-lap_> yep
[18:14] <patdk-lap_> oh wait, of the open tools? no; of the official vmware ones, yes
[18:15] <queso> https://help.ubuntu.com/community/VMware/Tools According to this it's a bug and I should use --no-install-recommends.  Okay, that works :)
[18:18] <guntbert> queso: thx for the heads-up
[18:20] <queso> guntbert: yw
[18:21] <genii-around> I wonder why server doesn't have APT::Install-Recommends set to 0 by default
[18:23] <kirkland> jamespage: correct
[18:24] <kirkland> jamespage: it should do that automatically, using inotify
[18:34] <whoozdat> hello
[18:34] <whoozdat> i tried to reinstall bind9 and it just gives me a subprocess error
[18:36] <whoozdat> root@clientx1-lab:~# /etc/init.d/bind9 start
[18:36] <whoozdat>  * Starting domain name service... bind9 [fail]
[18:36] <whoozdat> root@clientx1-lab:~#
[18:39] <SpamapS> whoozdat: check logs
[18:39] <whoozdat> /var/log/syslog?
[18:39] <SpamapS> jamespage: any chance you're running on top of overlayfs ?
[18:39] <SpamapS> jamespage: inotify no worky in overlayfs
[18:40] <SpamapS> whoozdat: thats the best place to start yes
[18:46] <whoozdat> you are right
[18:46] <whoozdat> it is starting now
[18:47] <whoozdat> thanks SpamapS
[18:53] <whoozdat> SpamapS, dude its working now
[18:53] <whoozdat> thanks
[18:53] <whoozdat> root@clientx1-lab:~# nslookup yahoo.com
[18:53] <whoozdat> Server:         10.152.187.2
[18:53] <whoozdat> Address:        10.152.187.2#53
[18:53] <whoozdat> thank you so much bro
[18:53] <whoozdat> damn the syslog even tells you what line in the named.conf.local has errors
[18:54] <whoozdat> I just set up a DNS server
[18:54] <whoozdat> yay!!!!
[18:54] <SpamapS> whoozdat: woot!!
[18:55] <koolhead17> zul: /o.0\
[19:00] <whooz> one question
[19:02] <whooz> when I installed 11.10 64-bit, I gave it a hostname. I've since changed the hostname to something else, and the new name shows when I type hostname, but for some reason it changes back to the original one from the install. what am I missing here?
[19:03] <kantlivelong> how can i setup a nic to be up on boot but unconfigured?
[19:03] <whooz> edit /etc/network/interfaces and make it auto for the ethx and choose dhcp
[19:04] <kantlivelong> whooz: im not even looking for dhcp.. just up. no IP
[19:04] <whooz> just leave it blank then
[19:04] <kantlivelong> whooz: would i just do "iface ethX inet manual"
[19:04] <kantlivelong> ?
[19:04] <whooz> on the iface section
[19:05] <whooz> don't put static or dynamic
[19:05] <kantlivelong> just manual
[19:05] <whooz> then choose static
[19:05] <whooz> put 0.0.0.0
[19:05]  * koolhead17 is happy
[19:05] <kantlivelong> ah
[19:05] <whooz> then you can change that @ a later time
[19:05] <whooz> you can configure it later if you wish to
[19:05] <kantlivelong> whooz: im bridging the iface w/ vbox and it needs to be up
[19:05] <kantlivelong> thats all :P
[19:06] <kantlivelong> danke :)
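A minimal /etc/network/interfaces sketch for what kantlivelong wants — the interface up at boot with no address assigned — assuming eth1 is the NIC to bridge; the `manual` method leaves it unconfigured, and the `up` line brings the link up:

```
auto eth1
iface eth1 inet manual
    up ip link set eth1 up      # link up, no IP assigned
    down ip link set eth1 down
```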
[19:42] <kirkland> jamespage: are you still having trouble with it?
[19:43] <kirkland> jdstrand: did you and soren come to any compromise on https_proxy and ssh-import-id?
[19:44] <kirkland> jdstrand: i can absolutely confirm that wget does check and require valid certs by default
[19:44] <kirkland> jdstrand: you can override that with wget --no-check-certificate
[19:44] <kirkland> jdstrand: but, of course, i would never do that when importing an ssh public key
[19:45] <kirkland> jdstrand: as for it doing it correctly, there's always a chance that wget could have security vulnerabilities, as well as problems with the root certs it uses in /etc/ssl
[19:45] <kirkland> jdstrand: but that's a general problem, not specific to ssh-import-id
[19:48] <jdstrand> well, the thing I am advocating is defensive coding since this is a sensitive file. part of defensive programming is scrubbing the environment. having a scrubbed environment seems like a sane default, and an option to explicitly whitelist/set https_proxy allows people the flexibility to use https_proxy when they need it
[19:50] <jdstrand> I came up with 2 situations where there could be a potential problem. one could argue that they are marginal cases, but I'd rather err on the side of caution with a file of this nature rather than trying to enumerate all the problems and hoping we thought of them all
[19:54] <Daviey> jdstrand: try to get LP to sign the +sshkeys :)
[19:55] <SpamapS> So have signing on the socket, and the content?
[19:57] <Daviey> no, sign the datasource.
[19:57] <Daviey> oh, i see what you mean
[19:58] <Daviey> personally, i don't think socket is enough.
[20:03] <hallyn> SpamapS: hm, is there any guarantee that udev is started before runlevel 2?
[20:04] <hallyn> I thought there would be, but don't actually see it...
[20:04] <hallyn> mountall (filesystem), yes.  udev, no
[20:05] <hallyn> static-network-up could come close, except for failsafe.conf
[20:13] <raubvogel> If I want a script to be run on monday and on friday, can't I have an /etc/cron.d file with something like * * * * 1,5     root    /usr/local/bin/do-something?
[20:14] <SpamapS> hallyn: no no guarantee
[20:14] <SpamapS> hallyn: if you need udev, you need to start on started udev
[20:14] <hallyn> SpamapS: jinkeys.  Thanks :)
[20:15] <SpamapS> hallyn: or if you're looking for a particular event...
[20:16] <hallyn> no no, i was just reviewing an upstartification
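A hedged sketch of the dependency SpamapS describes: an upstart job that needs udev should declare `start on started udev` rather than assuming udev is running by runlevel 2 (the job name and command here are hypothetical):

```
# /etc/init/needs-udev.conf -- hypothetical job name
description "task that requires udev to be running"
start on started udev
task
exec /usr/local/bin/do-something
```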
[20:17] <raubvogel> Oops! I forgot to fix time of the day, so it is sending once a week
[20:17] <raubvogel> shame on me
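raubvogel's cron.d line, corrected along the lines he mentions: the first two fields are minute and hour, so `* *` fires every minute all day; pinning them (07:30 here is an arbitrary choice) gives one run per day, on Mondays and Fridays only:

```
# m   h   dom mon dow  user  command
30    7   *   *   1,5  root  /usr/local/bin/do-something
```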
[20:23] <jamespage> kirkland, SpamapS: I'm seeing this in lxc containers managed by juju - does that use overlayfs?
[20:24] <SpamapS> jamespage: no
[20:26] <adam_g> zul: do you have a url to where ec2-fixes.patch came from?
[20:26] <zul> adam_g: https://review.openstack.org/#change,4788
[20:30] <adam_g> zul: thanks
[20:41] <uvirtbot`> New bug: #945117 in samba (main) "can't edit files in my public guest allow rw folder" [Undecided,New] https://launchpad.net/bugs/945117
[21:00] <kirkland> jdstrand: so what would the call look like, for example?  ssh-import-id -e https_proxy jdstrand soren kirkland ?
[21:00] <kirkland> jdstrand: where -e says "enable this environment variable"
[21:00] <kirkland> jdstrand: and https_proxy is the env variable to whitelist?
[21:02] <jdstrand> kirkland: seems fine. alternatively you could always use wget with '--no-proxy' unless the user gives '-p' or '--use-proxy' to ssh-import-id
[21:10] <kirkland> soren: what do you think?  would you use this if I went through the trouble to fix it?
[21:10] <kirkland> soren: it would annoy me greatly as a user
[21:10] <kirkland> soren: but thankfully I'm not behind such a firewall
[21:18] <soren> kirkland: I think "-e https_proxy" is too awkward.
[21:19] <kirkland> soren: i'd agree
[21:19] <kirkland> soren: what about just -e
[21:19] <soren> I mean, sure, I'd use it, because I need the functionality, but just a simple -p or whatever would be much preferred.
[21:19] <kirkland> soren: which means "don't scrub my environment at all"?
[21:19] <soren> Also, if this could get hooked up through cloud-init... Much appreciated.
[21:19] <kirkland> soren: it already is
[21:19] <kirkland> soren: well, ssh-import-id already is
[21:19] <kirkland> soren: not the proxy bit
[21:19] <soren> Right, that's what I mean.
[21:20] <soren> I use it with cloud-init, but I'm screwed behind this proxy.
[21:20] <kirkland> ssh_import_id: [$LAUNCHPAD_ID]
[21:20] <kirkland> soren: ah
[21:20] <kirkland> soren: ah, i see, you need the cloud-init support to work with this
[21:21] <kirkland> jdstrand: how about just a "-e" option, which says "use my current environment, please don't scrub" ?
[21:21] <soren> ssh_import_id: ['-e', 'soren'] <- ftw, I guess.
[21:24] <jdstrand> kirkland: that seems overkill but if the default is scrub, I really don't care either way
[21:25] <kirkland> jdstrand: okay, yeah, I agree;  default is scrub, if someone trusts and needs their environment, I'll give it to them
[21:25] <kirkland> soren: ah, is that how cloud-init already parses that data?
[21:25] <soren> kirkland: Not sure.
[21:25]  * soren checks
[21:25] <soren> kirkland: Yes.
[21:29] <stgraber> hallyn: new kernel!!!
[21:30] <hallyn> not built yet though is it?
[21:30] <kirkland> soren: do you have a place you can test this?  http://paste.ubuntu.com/865752/
[21:30] <hallyn> actually lxc was failing on my one laptop where i'd installed that kernel.  i've not had time to look into it
[21:30] <hallyn> so i'm a little fjeered
[21:30] <Canadian1296> I set up a mail server (postfix and dovecot). How do I actually use it? I tested with telnet and got a 250, but how do I actually send and receive mail?
[21:30] <kirkland> soren: I've verified that it does flip the "env -i wget" and just "wget"
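A small, testable sketch of the scrub-vs-whitelist behaviour kirkland describes: `env -i` drops the caller's environment entirely, while explicitly re-passing https_proxy whitelists just that one variable (the proxy URL is made up):

```shell
# export a proxy in the caller's environment
https_proxy="http://proxy.example:3128"; export https_proxy

# scrubbed run: the child process sees no https_proxy at all
scrubbed=$(env -i sh -c 'echo "proxy=${https_proxy:-unset}"')

# whitelisted run: only https_proxy is passed back in
kept=$(env -i https_proxy="$https_proxy" sh -c 'echo "proxy=${https_proxy:-unset}"')

echo "$scrubbed"
echo "$kept"
```

The same flip applies to the real `env -i wget` vs plain `wget` invocation under discussion.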
[21:30] <stgraber> hallyn: built for amd64 but currently waiting for bin-newing (and still building on the other archs)
[21:32] <stgraber> hallyn: they're bumping the ABI so they'll all need to go through NEW, then a new linux-meta needs to be uploaded and finally a new d-i, so it probably won't be installed by default until at least Monday
[21:33] <hallyn> well i for one welcome our mount-refusing-apparmor overlords.  you know, whenever they show up in the archive...
[21:37] <hallyn> stgraber: seems my cgroup patches messed up lxc when you have ns cgroup enabled.  gotta try and fix that on monday
[21:38] <hallyn> (cause i'm out the rest of next week)
[21:38] <whooz> hello
[21:38] <hallyn> stgraber: i mention it bc 0.8.0 release presumably will be held up on that being fixed
[21:38] <jdstrand> I'll look at the deNEW in a minute
[21:41] <gary_poster> hallyn, hi.  we have another ephemeral tweak we need.  The ssh approach we are using to connect in lieu of lxc-attach is biting us a bit.  since our use is automated, we need to connect as the user that has a key that makes everything seamless.  therefore we added that and it does what we need.  The full file is http://paste.ubuntu.com/865763/, and the diff is http://paste.ubuntu.com/865767/.  We don't really love
[21:41] <gary_poster> this, and we could imagine you not liking it because it takes us farther away from the replaceable illusion that we are using lxc-attach... but we need it.
[21:41] <gary_poster> other suggestions welcome, of course
[21:44] <gary_poster> on a somewhat related note, I've been suggesting to my team that we produce a version of lxc-start-ephemeral that uses aufs, and then try to track what you are doing.  Maybe a nicer approach would be to have a flag in the official version of the script that switches to aufs.  We would only use this if the problems that hurt us with overlayfs were unresolved in precise by the time we needed it, for whatever reason.
[21:44] <gary_poster> (we'd be happy to produce that diff if you said it would be ok)
[21:50] <hallyn> gary_poster: both ubuntu and ubuntu-cloud templates take '-A', so might be nice to keep it as -S for lxc-start-ephemeral
[21:50] <hallyn> uh, s/-A/-S/ there
[21:50] <hallyn> gary_poster: i saw the emails this morning and figured aufs support should be added back in as an option
[21:50] <gary_poster> cool
[21:52] <gary_poster> hallyn, cool, -S for auth key, can do.  Do you want me to...file a bug for this, maybe, with the changes?  Or something else?
[21:52] <hallyn> gary_poster: i'm off most of next week, so if you can write the patch that'd be great
[21:52] <kirkland> soren: poke me once you've tested and I'll commit
[21:52] <kirkland> soren: and try to get a release team approval for precise
[21:52] <hallyn> gary_poster: if you're writing the patch anyway, you can do it as a merge request against ubuntu:lxc
[21:53] <gary_poster> hallyn, ok cool, will do
[21:53] <gary_poster> hallyn, do you want bugs, or don't bother?
[21:53] <hallyn> gary_poster: thanks much
[21:53] <hallyn> well, bugs are good,
[21:53] <hallyn> to reference in the changelog
[21:53] <gary_poster> ok we'll file
[21:53] <gary_poster> thanks hallyn .  have a nice weekend and time off
[21:53] <hallyn> gary_poster: thanks
[21:56] <hallyn> gary_poster: do you guys use '-b' in lxc-start-ephemeral at all?
[21:56] <gary_poster> hallyn, yes, though I've wondered if we have to
[21:57] <gary_poster> given default behavior
[21:57] <hallyn> right i think in my mind i was thinking more like the binduser functionality.  but what the heck, let's not rock the boat right now.
[21:57] <hallyn> ttyl :)
[21:57] <gary_poster> :-) ok cool ttyl
[22:10] <benji> hallyn: here's the MP: https://code.launchpad.net/~benji/ubuntu/precise/lxc/bug-945183/+merge/95678
[22:13] <uvirtbot`> New bug: #945177 in nova (main) "not lintian clean" [Undecided,New] https://launchpad.net/bugs/945177
[22:14] <hallyn> benji: can you add a changelog entry?  then i'll just accept it and push immediately.
[22:14] <hallyn> benji: note i'm a *little* uncomfortable (but probably being pedantic) about LXC_KEY not being defined when not specified
[22:15] <hallyn> prefer having it initialized to "" before the getopt
[22:15] <uvirtbot`> New bug: #945183 in lxc (universe) "lxc-start-ephemeral is difficult to use with non-"ubuntu" accounts" [Undecided,New] https://launchpad.net/bugs/945183
[22:16] <benji> hallyn: if you want I'll be glad to change it; since you don't use set -e, it won't be a problem for it to be undefined
[22:16]  * benji adds a changelog entry
[22:17] <hallyn> i worry about environment poisoning
[22:17] <hallyn> won't be a problem when i rewrite it in go :)
[22:17] <hallyn> (so that we can set filecaps - we can't do that with scripts)
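The defensive pattern hallyn asks for, sketched in plain POSIX shell (the variable and option names follow the discussion, but the parsing loop is a simplified stand-in for the script's real getopt call): initialise LXC_KEY to the empty string before option parsing, so a value can never leak in from the caller's environment when the flag is omitted:

```shell
LXC_KEY=""   # explicit default; never inherited from the environment

while [ $# -gt 0 ]; do
    case "$1" in
        -S|--ssh-key) LXC_KEY="$2"; shift 2 ;;
        *) shift ;;
    esac
done

echo "key=${LXC_KEY}"
```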
[22:18] <benji> hallyn: what are the leading numbers in these changelogs?  what should I use?
[22:19] <hallyn> benji: use "dch -i" which will increment it for you to 0.7.5-3ubuntu32
[22:19] <hallyn> benji: at the end of the description, add (LP: #945183)
[22:21] <benji> hallyn: I mean the prefixes to each line, like "0050-clone-lvm-sizes:"
[22:21] <benji> is that a branch name?
[22:21] <hallyn> benji: oh.  sorry
[22:21] <hallyn> I guess 0056 now
[22:21] <benji> ok
[22:21] <hallyn> no, wait
[22:21] <hallyn> benji: you don't need a patch, bc this is under debian/
[22:22] <benji> ok, so just leave the colon and the bits before out, right?
[22:22] <hallyn> benji: right, those are filenames under debian/patches
[22:22] <benji> ah, gotcha
[22:27] <benji> hallyn: ok, it's pushed, the diff at https://code.launchpad.net/~benji/ubuntu/precise/lxc/bug-945183/+merge/95678 has updated already
[22:27] <hallyn> benji: thanks, i'll take a look and push.
[22:27] <benji> hallyn: cool!
[22:30] <hallyn> benji: no wait, did you mean to add 'user:,ssh-key:' to longoptions?
[22:31] <hallyn> If not, ok. If so, I'll add it real quick
[22:31] <wonderman> hi, I've asked many times I know, but can someone help me diagnose a 408 HTTP error further if they have time?
[22:33] <jacobw> hi
[22:33] <milkshake_> hi :)
[22:33] <benji> hallyn: oops, you're right; I'd appreciate it if you could add them
[22:33] <hallyn> will do, have a good day
[22:36] <jacobw> milkshake_: do you install the package and did `a2enmod` ?
[22:36] <milkshake_> jacobw yes
[22:36] <milkshake_> and when I do apache2ctl -M
[22:36] <milkshake_>  it lists the mods as enabled
[22:37] <jacobw> and apache still doesn't execute perl?
[22:38] <milkshake_> nope but I think I need to add a file to the mods-available DIR in apache
[22:40] <wonderman> if i am rotating logs, using 'logrotate' and i want to rotate 4 apache logs, what should i do with my 'postrotate' which restarts apache gracefully, surely i dont want to do this 4 times?
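What wonderman is after is logrotate's `sharedscripts` directive: put all four logs (or a glob that matches them) in one stanza, and postrotate runs once for the whole set rather than once per log. A hedged sketch, assuming stock apache2 log paths:

```
/var/log/apache2/*.log {
    weekly
    rotate 4
    compress
    sharedscripts       # run postrotate once, not once per log
    postrotate
        /etc/init.d/apache2 reload > /dev/null
    endscript
}
```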
[22:47] <jdstrand> stgraber, hallyn: fyi, I have reviewed the amd64 for deNEW. I am going to wait on i386 to finish and deNEW them both
[22:47] <jdstrand> stgraber, hallyn: I'm talking about the kernel of course
[22:47] <jdstrand> (i386 should be done soon I hope)
[22:50] <hallyn> jdstrand: I'm fuzzy on all that but assume that's good - thanks :)
[22:50] <jdstrand> hallyn: just trying to let you know that I am getting you your kernel :)
[22:50] <hallyn> awesome :)
[22:56] <neodypsis> Why does apt-get update need SU privileges to execute?
[23:04] <neodypsis> Has someone successfully deployed Nginx (from deb http://nginx.org/packages/ubuntu/ lucid nginx) on a production server?
[23:31] <tarvid> added a second nic to access a local LAN and now the default route is through the local LAN instead of the WAN interface
[23:31] <tarvid> how should I change this?
[23:57] <tarvid> since networking restart is deprecated, how are you supposed to restart networking?
[23:59] <humungulous> tarvid: how about sudo ifdown eth0; sudo ifup eth0
[23:59] <tarvid> very bad if you are remote
[23:59] <humungulous> well, any bounce of the network interface has that property if you are remote
[23:59] <tarvid> I'll try it
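One common way to make humungulous's suggestion safer over a remote session (eth0 is an assumption): chain both steps inside a single backgrounded shell, so the `ifup` still runs even after your ssh connection drops mid-bounce:

```shell
nohup sh -c 'ifdown eth0; sleep 2; ifup eth0' >/dev/null 2>&1 &
```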