[00:18] <adam_g> roaksoax: around?
[00:20] <roaksoax> adam_g: not really :) whats up?
[00:21] <adam_g> roaksoax: a week or two ago you said you were running into an issue with rbd commands blocking? did you ever find out the issue? seeing similar things atm with glance blocking in calls to librbd
[00:22] <roaksoax> adam_g: yeah. TBH i didnt really find the issue... the next day i tried and issues were gone
[00:22] <roaksoax> s/snd/the
[00:22] <adam_g> roaksoax: hm, okay
[00:24] <roaksoax> adam_g: check ceph.conf for the hosts to be configured correctly though
[00:25] <roaksoax> adam_g: oh i think it was also related to the osd.. as in using the incorrect /dev/sdb instead of /dev/vdb
[00:27] <adam_g> roaksoax: yeah, theres a connection between the client (glance) and the ceph node, just not much else happening. just librbd stuck in a lock
[00:32] <adam_g> roaksoax: also, if you're still around. is deploying raring /w maas on quantal an option?
[00:33] <roaksoax> adam_g: yes if you have the latest which should be in PPA
[00:34] <adam_g> roaksoax: is that known to be a safe upgrade from 0.1+bzr1269+dfsg-0ubuntu1 ?
[00:37] <roaksoax> adam_g: yes you should not have issues
[01:00] <Fishscene> Greetings. Could someone link me to a tutorial on how to install and set up a DHCP server on Ubuntu server 12.10? I've gone through a couple of articles already and I've run into problems that don't make sense.
[01:05] <sarnold> Fishscene: this should be similar enough to be worth a start: https://help.ubuntu.com/12.04/serverguide/dhcp.html
[01:19] <Fishscene> Thanks sarnold, I'll give that a try.
[02:35] <Fishscene> Nope. No good. Everything looks good, but my VM isn't getting an IP address
[02:37] <mysteriousdarren> Fishscene: What software are you using?
[02:37] <Fishscene> Ha! nvm. My VM wasn't on the same physical server as my DHCP server (I'm using a network that isn't connected to the internet or the rest of my network)
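For reference, the server-guide setup sarnold linked boils down to roughly this - a sketch, with example addresses and interface name:

```shell
# Install the DHCP server (package name on 12.04/12.10)
sudo apt-get install isc-dhcp-server

# /etc/dhcp/dhcpd.conf -- minimal subnet declaration (example addresses):
#   subnet 192.168.50.0 netmask 255.255.255.0 {
#     range 192.168.50.100 192.168.50.200;
#     option routers 192.168.50.1;
#     option domain-name-servers 8.8.8.8;
#   }

# Tell the daemon which interface to listen on, in /etc/default/isc-dhcp-server:
#   INTERFACES="eth1"

sudo service isc-dhcp-server restart
```

As Fishscene found, the clients also need to be on the same broadcast segment as the listening interface.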
[05:25] <Bourne> anyone have any tutorials
[05:25] <Bourne> on connecting to wireless
[05:25] <Bourne> from the command line
[05:25] <Bourne> for 12.04 LTS
[05:27] <greppy> Bourne: the easiest way I found to do that was to remove networkmanager and install wicd-curses or wicd-cli
[05:28] <Bourne> but
[05:28] <Bourne> I don't have access to the internet currently
[05:28] <Bourne> I do have
[05:30] <Bourne> well
[05:30] <Bourne> ok
[05:30] <Bourne> nvm
[05:48] <sweettea> Bourne: you should be able to get the binaries via another medium
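A sketch of sweettea's suggestion - fetch the .debs on a connected machine and carry them over (package names are the ones greppy mentioned; the exact dependency list may differ per release):

```shell
# On a machine with internet access (apt-get download exists on 11.04+):
apt-get download wicd-cli wicd-daemon

# Copy the .debs to the offline box (USB stick etc.), then there:
sudo dpkg -i *.deb

# If dpkg complains about missing dependencies, download those too and re-run.
```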
[05:48] <Skaag> my /etc/resolv.conf keeps getting rewritten with: nameserver 127.0.0.1
[05:48] <Skaag> how do I stop this insanity? :)
[05:50] <sam_> hi all, looking to setup my ubuntu server with 2 WAN interfaces, and would like to configure automatic failover - no load balancing necessary though if possible would be nice
[05:55] <sweettea> Skaag: are you running networkmanager?
[05:55] <sweettea> i think theres some dnscache daemon that runs with the newer versions of ubuntu
[05:55] <sweettea> i forgot how I disabled it
[05:55] <Skaag> no i'm on ubuntu-server, no gui here
[05:55] <Skaag> it happens on boot
[05:55] <Skaag> despite me not using dhcp
[05:56] <Skaag> my primary nic is setup for "inet static"
[05:57] <sweettea> networkmanager does not require gui
[05:57] <sweettea> see if its running
[06:22] <Skaag> checking
[06:22] <Skaag> not running.
[06:46] <sarnold> sweettea: dnsmasq?
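If it turns out resolvconf (or a locally installed dnsmasq) is the rewriter, the usual server-side fix is to declare the nameservers in the static stanza and let resolvconf regenerate the file - a sketch with example addresses:

```shell
# /etc/network/interfaces -- static stanza with explicit DNS (example values):
#   auto eth0
#   iface eth0 inet static
#       address 192.168.1.10
#       netmask 255.255.255.0
#       gateway 192.168.1.1
#       dns-nameservers 192.168.1.1 8.8.8.8

# Regenerate /etc/resolv.conf from the above:
sudo resolvconf -u

# And check whether a local caching daemon is the thing writing 127.0.0.1:
ps aux | grep -E 'dnsmasq|unbound' | grep -v grep
```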
[06:54] <ivoks> hallyn: i'll test it by the end of this week, so no need to delete it
[08:31] <vedic> Is it possible to have one set of public/private keys for ssh user login and another set for backup purposes?
[08:33] <vedic> with different user for ssh login and backup
[08:33] <sarnold> vedic: with different users, it ought to be easy, look in sshd_config for Match and ForceCommand
[08:33] <vedic> sarnold: I see
[08:36] <vedic> sarnold: What is the usual pattern server admins follow? Do they keep one user for ssh login and tasks and another user for rsync?
[08:36] <vedic> sarnold: This is from security point of view.
[08:39] <sarnold> vedic: that's quite common, indeed
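A sketch of the Match/ForceCommand approach sarnold mentions, with a dedicated backup account (user name, path, and rsync flags are assumptions - capture the real flags by running your rsync client with -v):

```shell
# /etc/ssh/sshd_config -- lock the 'backup' user to a single rsync invocation:
#   Match User backup
#       ForceCommand rsync --server --sender -logDtprze.iLs . /srv/data
#       AllowTcpForwarding no
#       X11Forwarding no

# Reload sshd after editing:
sudo service ssh reload
```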
[08:50] <dch> I've updated dbus config to allow users to run their own upstart jobs. Will they be able to run jobs at boot (e.g. `on started network and filesystem`) or do they need to be logged in interactively and do `start myjob`.
[08:54] <ikonia> dch: that's an interesting setup
[08:55] <ikonia> doesn't the call to upstart have to be made by root ?
[08:55] <dch> ikonia: yes, I'm very pleased so far, except I don't see the job starting at boot, and people are tired of me rebooting it to check :D
[08:56] <dch> basically you add an xml blob into /etc/dbus-1/system.d/Upstart.conf, and then users can put jobs into ~/.init/
[08:57] <ikonia> but will that get picked up by the initial init of upstart...I didn't think so (I don't know, I'm half asking)
[08:58] <dch> ditto. don't know.
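For reference, a user job under ~/.init/ looks like any other upstart job - a sketch (job name and command are made up; whether system events like these reach user jobs at boot is exactly the open question here):

```shell
# ~/.init/myjob.conf -- hypothetical user-level upstart job
#   start on started network-services
#   respawn
#   exec /home/dch/bin/mydaemon

# Started and checked interactively with:
start myjob
status myjob
```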
[09:22] <vedic> sarnold: ok
[10:35] <hackeron> zul: Ended up doing this to partition: http://pastie.org/6161848 -- much easier than all other suggestions :)
[10:48] <vedic> For remote server backup using rsync, there are various ways suggested to back up without compromising the backup server and remote server: 1) sshd_config with Match and ForceCommand, 2) using the command parameter in authorized_keys, 3) setting the remote user's shell from /bin/sh to an rsync script. Which one is secure? And which one is most often used by server admins? My backup server will be managed by a few guys I would hire and I don't want them to do anything on
[10:49] <vedic> The nature of the backup is a full server backup every day at a specific time (say every night at 11:30)
[10:49] <vedic> on incremental basis
[10:56] <maxb> It doesn't matter which method of restricting the executed command you use, only that you do it
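Option 2 from vedic's list, sketched as a key-bound forced command plus a hypothetical filter script (names and paths are made up):

```shell
# ~backup/.ssh/authorized_keys (one line) -- bind the key to a filter script:
#   command="/usr/local/bin/validate-rsync",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA... backup@client

# Core of a hypothetical /usr/local/bin/validate-rsync: sshd exports the
# client's requested command as $SSH_ORIGINAL_COMMAND; only genuine rsync
# server invocations are allowed through.
validate_cmd() {
  case "$1" in
    "rsync --server"*) return 0 ;;   # rsync transfer: allow
    *) return 1 ;;                   # anything else: deny
  esac
}

# In the real script the tail would be:
#   validate_cmd "$SSH_ORIGINAL_COMMAND" && exec $SSH_ORIGINAL_COMMAND
#   echo "denied" >&2; exit 1
```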
[10:57] <vedic> maxb: security repercussion?
[10:57] <koolhead17> devops and mysqldump - can't stop posting it here http://devopsreactions.tumblr.com/post/43069215541/waiting-for-a-full-mysqldump
[11:10]  * maxb chuckles and reposts to work's internal IRC :-)
[11:19] <Plizzo> Hello! I have an issue with my server. I accidentally enabled ipv6 forwarding in /etc/sysctl.conf and after that my eth0 in /etc/network/interfaces was replaced with an interface called p6p1. I disabled the setting and enabled eth0 again, but it still does not work. When I run ifconfig -a eth0 is not found but only lo and p6p1. Please help me!
[11:23] <rbasak> Plizzo: I think p6p1 is to do with biosdevname and nothing to do with ipv6.
[11:23] <Plizzo> rbasak: But it only came after I enabled ipv6, and it has killed all internet connections on my server
[11:24] <alimj> Plizzo: Wait a sec
[11:24] <alimj> Plizzo: In /etc/udev/rules.d
[11:25] <alimj> Plizzo: In 70-persistent-net.rules file
[11:25] <Plizzo> alimj: Yeah
[11:25] <alimj> Plizzo: Pastebin for us that file...
[11:25] <Plizzo> alimj: I can't, only have physical access to the server, no ssh
[11:25] <Plizzo> alimj: But it's blank, only comments
[11:26] <Plizzo> alimj: Sorry, no comments even
[11:26] <alimj> Plizzo: Are you sure? What files you have in /etc/udev/rules.d
[11:27] <Plizzo> 70-persistent-cd.rules, and 70-persistent-net.rules
[11:27] <Plizzo> as well as a README
[11:27] <alimj> Plizzo: Are you sure that the 70-persistent-net.rules is blank?
[11:28] <Plizzo> alimj: Positive
[11:28] <Plizzo> alimj: It's just as if eth0 has been wiped clean, no trace of it.. only p6p1, and I want to get it all back :(
[11:29] <rbasak> Plizzo: try biosdevname=0 as a kernel boot parameter
[11:32] <alimj> Plizzo: rbasak is right. edit /etc/grub.conf and add biosdevname=0
[11:34] <Plizzo> Hmm, okay
[11:34] <Plizzo> grub.conf is blank
[11:34] <alimj> Plizzo: Sorry, that is Redhat path
[11:35] <alimj> In Ubuntu...
[11:35] <alimj> ... should be  /boot/grub/grub.cfg if I am not mistaken
[11:35] <ogra_>  /etc/default/grub
[11:35] <ogra_> and run sudo update-grub afterwards
[11:36] <Plizzo> Where do I add biosdevname?
[11:36] <ogra_> GRUB_CMDLINE_LINUX_DEFAULT= is where you want to add it
[11:36] <ogra_> GRUB_CMDLINE_LINUX_DEFAULT="splash quiet biosdevname=0"
[11:37] <Plizzo> Why splash, quiet?
[11:37] <ogra_> thats there by default
[11:37] <ogra_> usually
[11:37] <Plizzo> No, mine was blank
[11:37] <ogra_> you indeed dont need it
[11:37] <ogra_> ah, i'm on a desktop install here ...
[11:37] <Plizzo> Aha! :)
[11:37] <ogra_> so just add the new parameter there
[11:37] <Plizzo> Alright, I've added the line and updated grub, what now?
[11:38] <ogra_> reboot
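The whole sequence, collected (GRUB_CMDLINE_LINUX_DEFAULT was empty on this box, as established above):

```shell
# /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="biosdevname=0"

# Regenerate /boot/grub/grub.cfg, then reboot:
sudo update-grub
sudo reboot

# After the reboot, confirm the traditional name is back:
ifconfig -a | grep eth0
```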
[11:38] <Plizzo> Rebooting now
[11:39] <Plizzo> Waiting for network configuration...
[11:40] <Plizzo> Why did my eth0 drop and p6p1 become added when I enabled ipv6 forwarding? :/
[11:40] <Plizzo> It's still waiting for network configuration, give it another 30 secs
[11:41] <Plizzo> Alright, I'm in, but it booted without network configuration
[11:41] <Plizzo> What do I do now?
[11:42] <ogra_> see if ifconfig has eth0 again
[11:42] <ogra_> you might need to adjust /etc/network/interfaces
[11:43] <Plizzo> ogra_: eth0 is back and I have an IP-address, but my other computers can't access it
[11:43] <Plizzo> Weird
[11:45] <Plizzo> Seems like it's dropping all inbound connections...
[11:45] <Plizzo> No other device in my network can connect to it
[11:45] <Plizzo> Now it works
[11:46] <Plizzo> ufw was acting up...
[11:46] <Plizzo> Thanks ogra_!!
[11:46] <ogra_> great
[11:46] <Plizzo> And ofcourse alimj and rbasak as well
[11:56] <alimj> Plizzo: Good to hear that it is working
[11:57] <alimj> ogra_: Good job
[11:57] <Plizzo> Thanks :) Now I'm just trying to get name resolving working over my pptp VPN
[11:59] <Plizzo> Do any of you know how to make it work?
[11:59] <Plizzo> My VPN works just fine, and I have all forwarding and everything working good, but name resolving won't work
[12:00] <Plizzo> I set the ms-dns to my gateway (10.0.1.1)
[12:04] <alimj> Plizzo: And localip in /etc/pptpd.conf is also 10.0.1.1?
[12:04] <Plizzo> localip is 10.0.1.250 as in my server IP
[12:05] <Plizzo> alimj: I had it set to 10.0.1.1 but it caused my VPN to continuously disconnect
[12:07] <alimj> Plizzo: Why not use a public DNS for ms-dns? Or do you have any intranet DNS config?
[12:07] <Plizzo> alimj: You mean like Google's DNS 8.8.8.8?
[12:07] <alimj> Yes
[12:07] <Plizzo> alimj: I'm guessing my main issue is that I don't have a local DNS server :P
[12:07] <alimj> or 208.67.222.222 208.67.220.220 4.2.2.3 etc
[12:08] <Plizzo> OpenDNS?
[12:08] <Plizzo> alimj: But that would not get me any local name resolving? For instance, being able to write lumen.local instead of 10.0.1.250 over VPN?
[12:08] <alimj> Yes
[12:08] <alimj> In that case, you will require a local DNS
[12:09] <alimj> If you run intranet addresses rather than internet ones
[12:11] <alimj> Plizzo: Do you have bind installed?
[12:11] <Plizzo> alimj: Nope, I haven't actually installed bind but I'm looking at it. Thinking of setting up my raspberry pi as the local DNS though
[12:11] <Plizzo> That ought to work, and if the routing goes through there the name resolving ought to work over VPN as well
[12:12] <Plizzo> Or what do you think?
[12:13] <alimj> It will work
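The pptpd pieces under discussion, collected (addresses are the ones from this conversation; the client pool is an example; the public resolvers are alimj's suggestions):

```shell
# /etc/pptpd.conf:
#   localip  10.0.1.250
#   remoteip 10.0.1.100-120      # example client pool

# /etc/ppp/pptpd-options -- DNS pushed to VPN clients:
#   ms-dns 8.8.8.8
#   ms-dns 208.67.222.222

sudo service pptpd restart
```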
[12:20] <Plizzo> alimj: I noticed one thing now, I can't seem to connect to lumen.local locally
[12:21] <ogra_> does the lumen machine have avahi-daemon installed ?
[12:21] <ogra_> else the .local domain wont work afaik
[12:23] <Plizzo> ogra_: Yeah, avahi is installed and it's always been working, until this ipv6 thing happened
[12:23] <Plizzo> "ping: cannot resolve lumen.local: Unknown host"
[12:23] <Plizzo> Avahi is running, and I tried restarting it
[12:23] <Plizzo> Maybe I should do another reboot to get things running properly?
[13:04] <Plizzo> ogra_: I still can't reach my server at lumen.local from other machines... But if I ping lumen.local from my server terminal it works
[13:05] <ogra_> you ping that name or the IP ?
[13:05] <Plizzo> That name
[13:05] <Plizzo> And it resolved to 10.0.1.250
[13:05] <Plizzo> But I fixed it
[13:05] <Plizzo> Seems ufw was blocking it.
[13:05] <ogra_> ??
[13:06] <Plizzo> I typed sudo ufw disable and it worked again
[13:06] <ogra_> and does it still resolve to that IP ?
[13:08] <Plizzo> Yeah
[13:08] <Plizzo> lumen.local resolves to 10.0.1.250 locally as it should
[13:08] <Plizzo> Not over VPN, but that's probably because I don't have any local DNS
[13:14] <RoyK> Plizzo: locally, it uses /etc/hosts
[13:15] <ogra_> and falls back to the mdns client then
[13:16] <ogra_> Plizzo, did that ever work from the other machines ?
[13:16] <Plizzo> ogra_: Did what work?
[13:18] <Plizzo> RoyK: I'm not working on the VPN now, I'm just working on my local network, where lumen.local did not work until I disabled ufw
[13:18] <ogra_> Plizzo, pinging lumen.local
[13:18] <ogra_> from the machines that dont work atm
[13:18] <Plizzo> ogra_: Locally it did not work until I disabled ufw, but now it works on all machines
[13:18] <Plizzo> Though not over VPN because I have no DNS for name resolving set
[13:18] <Plizzo> And my gateway does not resolve
[13:19] <Plizzo> Do any of you know if you have to rebuild a RAID5 array when you grow it and add another disk?
[13:19] <ogra_> Plizzo, and all your other machines have mdns clients ?
[13:20] <Plizzo> ogra_: I'm not sure, I only have Apple machines, so I assume not
[13:20] <Plizzo> ogra_: But it does not work from my work computer which is running Windows 7, so I guess not
[13:20] <maxb> Plizzo: That would depend on exactly what you mean by "rebuild" and "grow"
[13:21] <maxb> Increasing the overall size of a RAID-5 set certainly does require an intensive rearrangement of data
[13:21] <Plizzo> maxb: I have a RAID5 array now consisting of three 2TB drives and I'm running out of space. I've added another disk to my server which I'm going to format now and append to the array.
[13:21] <ogra_> Plizzo, well, .local is an mdns domain, the clients need to support mdns for it to work
[13:21] <Plizzo> maxb: But I was thinking that I should probably do this overnight
[13:22] <Plizzo> ogra_: Well it works fine in my network on all machines, but not over VPN
[13:22] <ogra_> probably VPN doesnt forward mdns broadcasts
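Two quick checks for the .local symptoms above - and a narrower fix than disabling ufw outright (hostname is the one from this discussion):

```shell
# Does the server answer mDNS at all? (needs the avahi-utils package)
avahi-resolve -n lumen.local

# mDNS uses UDP port 5353; allowing it is safer than 'ufw disable':
sudo ufw allow 5353/udp
```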
[13:22] <Plizzo> maxb: As I thought, the RAID will need to restructure, and that will take time, right?
[13:24] <RoyK> Plizzo: what sort of restructuring?
[13:24] <maxb> Yes, a reshape operation over that much data will probably take many hours
[13:24] <maxb> You can't really append to a RAID-5
[13:24] <RoyK> Plizzo: oh, 3->4 drives in raid5?
[13:24] <Plizzo> RoyK: Yeah, I want to grow the RAID to 4 drives
[13:24] <RoyK> Plizzo: just mdadm /dev/mdX --add /dev/sdX
[13:24] <RoyK> then --grow --raid-devices=4
[13:25] <maxb> It's more like increasing the space available and then your data doing a long and intricate dance to new locations
[13:25] <RoyK> perhaps possible as a single operation as well, but it doesn't make any difference
[13:25] <Plizzo> RoyK: So I add, and then grow, isn't that the wrong order?
[13:25] <Plizzo> RoyK: I was thinking this:
[13:25] <Plizzo> mdadm --grow /dev/md0 --raid-devices=4
[13:25] <RoyK> that works after you've added the fourth drive, yes
[13:25] <Plizzo> Alright, so I first add, then grow? :)
[13:25] <Plizzo> Okay
[13:26] <Plizzo> Thanks
[13:26] <RoyK> --add will add the new drive as a spare
[13:26] <RoyK> --grow will move it to an active disk
[13:26] <RoyK> the rebuild will probably take 10+ hours, perhaps more, depending on your drives, controllers etc
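RoyK's procedure end to end - a sketch with example device names, assuming an ext filesystem on the array:

```shell
# 1. Add the 4th drive (whole disk, no partition table needed) as a spare:
sudo mdadm /dev/md0 --add /dev/sdd

# 2. Grow the array onto it; this starts the long reshape:
sudo mdadm --grow /dev/md0 --raid-devices=4

# 3. Watch progress (expect many hours for 3x2TB -> 4x2TB):
cat /proc/mdstat

# 4. Once the reshape completes, enlarge the filesystem to use the new space:
sudo resize2fs /dev/md0
```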
[13:29] <Plizzo> RoyK: Okay, I see, thanks
[13:29] <Plizzo> RoyK: Will start it overnight then
[13:29] <Plizzo> RoyK: Should the new drive be formatted as ext4?
[13:30] <Plizzo> RoyK: Because I'm going to prepare it now so I can start the process before going to sleep
[13:30] <RoyK> no need to place a filesystem on it
[13:30] <RoyK> or "format" it as you say
[13:30] <Plizzo> RoyK: Alright, because you see I have two partitions on the disk currently, and I need to wipe those
[13:30] <RoyK> it will add to the raid, which will get more space when it's done
[13:30] <Plizzo> RoyK: But I just wipe the disk clean, and let mdadm handle the rest?
[13:31] <RoyK> if you --add and --grow, everything will be overwritten
[13:31] <RoyK> I'd recommend using the whole drive
[13:31] <RoyK> not partitions
[13:31] <Plizzo> RoyK: Even if the drive currently has a GUID table?
[13:31] <RoyK> just use the whole drive, and md will overwrite the partition table, whatever format it is
[13:31] <Plizzo> RoyK: I want to use the entire drive, but is there no need to remove the existing HFS+ and exFAT partitions on it first?
[13:31] <Plizzo> RoyK: Alright, thanks
[13:31] <RoyK> it'll be overwritten
[13:32] <RoyK> just umount those first
[13:32] <RoyK> :P
[13:34] <Plizzo> RoyK: They're not mounted :)
[13:37] <Haris> Hello all
[13:38] <IdleOne> Haris: answer might be slower to come in here but someone will be along
[13:38] <Haris> oneiric is not on archive.ubuntu.com or gb.archive.ubuntu.com anymore. I need it to complete my network installs for 3 boxes. What are my options
[13:38] <Haris> IdleOne: ah, ok
[13:38] <jpds> Haris: It is: http://gb.archive.ubuntu.com/ubuntu/dists/oneiric/
[13:39] <Haris> jpds: the installer should be able to find that out on its own. it doesn't give me option to mention that manually
[13:45] <zul> jamespage: ping whats leveldb for?
[13:46] <jamespage> zul, reverse depends for ceph; provides local metadata storage for ext4 filesystem usage on OSD's
[13:46] <zul> jamespage: ack
[13:55] <jamespage> zul, I get the same issue on raring fwiw
[13:55] <jamespage> it might be a localized sbuild related issue
[13:55] <zul> jamespage: lemme check
[13:57] <Haris> I get one INFO message in syslog saying: **: mirror does not have suite symlinks
[13:57] <Haris> I'm trying to install oneiric
[13:58] <Haris> the installer is telling me bad mirror for archive.ubuntu.com and gb.archive.ubuntu.com
[13:58] <Haris> 'bad archive mirror'
[13:59] <Haris> is there a way to specify an alternate mirror or archive on the Internet to the "out of the box" netboot config that ships with oneiric server-amd64.iso ?
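The netboot installer does accept a manually preseeded mirror at the boot prompt - a sketch (these are standard d-i preseed keys; the hostname shown is just the default archive):

```shell
# Appended to the installer's kernel command line:
#   mirror/country=manual \
#   mirror/http/hostname=gb.archive.ubuntu.com \
#   mirror/http/directory=/ubuntu \
#   mirror/http/proxy=

# Sanity-check that the suite really is on the mirror first:
wget -qO- http://gb.archive.ubuntu.com/ubuntu/dists/oneiric/Release | head -n 4
```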
[13:59] <zul> jamespage: builds fine here
[13:59] <zul> jamespage: it was python-crypto that had the python3 stuff in it
[13:59] <jamespage> zul, ack - I'll stuff it up to the PPA and check it out
[14:09] <hallyn> ivoks: cool, perf info would be interesting (though we still can't enable it by default I don't think :)
[14:11] <ivoks> hallyn: cause of the iptables?
[14:11] <hallyn> right
[14:12] <hallyn> it's trivial enough to enable with a variable in qemu-kvm.conf...
[14:13] <ivoks> ok
[14:29] <roaksoax> jamespage: alright! so crm_res_running basically checks if the resource is running
[14:29] <jamespage> roaksoax, OK _ so if its not running you cleanup - but do you need to restart if it is running?
[14:30] <roaksoax> jamespage: if it is not running (or has failed to start), then the resource will get cleaned up
[14:30] <roaksoax> jamespage: clean up is a more in-depth operation than a simple restart
[14:31] <jamespage> roaksoax, and if its already running then do zip - right I see
[14:31] <jamespage> roaksoax, in which case that does make sense to me
[14:32] <roaksoax> jamespage: cool, that prevents from moving the resources to a new node after juju add-unit
[14:32] <roaksoax> which is what was happening
[14:32] <jamespage> roaksoax, good-oh
[14:32] <jamespage> that makes alot of sense to me then
[14:32] <roaksoax> jamespage: cool. Now, a problem with keystone/mysql
[14:32] <roaksoax> jamespage: keystoneclient.exceptions.ClientException: An unexpected error prevented the server from fulfilling your request. (OperationalError) (2003, "Can't connect to MySQL server on 'inst-021.virtual-maas.com' (111)") None None (HTTP 500)
[14:33] <roaksoax> jamespage: especially see: inst-021.virtual-maas.com
[14:33] <jamespage> roaksoax, actually I just saw something similar
[14:33] <roaksoax> the problem here is that I do: http://pastebin.ubuntu.com/1651257/
[14:33] <jamespage> roaksoax, I figured I broke something when I added a second mysql unit
[14:33] <jamespage> but I don't really know what yet....
[14:35] <roaksoax> jamespage: so it seems that the shared-db relation is being run before the cluster/peer relation, which causes the shared-db hook to say "I'm standalone, not clustered"
[14:35] <jamespage> roaksoax, indeed - this is the early hook execution stuff I was wobbling on about
[14:35] <roaksoax> so that means that the new mysql unit passes its host to keystone (instead of the VIP)
[14:36] <jamespage> roaksoax, if you have a charm that responds in a -joined relation then it can potentially do that
[14:36] <jamespage> roaksoax, might need to revisit the work that hazmat did on this bit
[14:38] <roaksoax> jamespage: yeah.. so an ugly solution I was thinking is that if a VIP is defined, then make it start by default (say configure pacemaker/corosync only for the VIP), which would mean not enforcing 2 nodes at a minimum
[14:38] <jamespage> roaksoax, that approach is still hook timing dependent
[14:39] <roaksoax> jamespage: yeah so, we need to ensure that peer relationships are being run first
[14:43] <jamespage> roaksoax, hmm
[14:45] <hazmat> roaksoax, jamespage was having network issues.. you mentioned you want rel ordering? the standard mechanism of punting in a rel hook till necessary data is available doesn't suffice because
[14:45] <hazmat> you don't know if there will be a peer?
[14:46] <roaksoax> hazmat: so the issue is basically juju add-unit mysql --> this causes the new mysql unit to say "I'm standalone and have no peers, let's take action accordingly"
[14:47] <roaksoax> hazmat: so we thought that maybe when you add a unit, one of the first relationships that should be done is the peer relationship, before relating with any other external relationship
[14:47] <hazmat> roaksoax, i'm missing a lot of context.. who has the peer relation?
[14:47] <roaksoax> such as mysql
[14:47] <hazmat> mysql?
[14:47] <roaksoax> err keystone
[14:47] <jamespage> it makes no difference - both keystone and mysql have the same challenge
[14:47] <roaksoax> hazmat: so I do this: http://pastebin.ubuntu.com/1651380/
[14:47] <jamespage> hazmat, basically both of those charms will respond in a -joined relation with access data
[14:48] <jamespage> hazmat, but that hook may fire before the peer-joined hook does
[14:48] <jamespage> hook(s) rather
[14:49] <jamespage> hazmat, roaksoax: we could defer hook execution like we do with ceph until the right number of peers are present
[14:49] <jamespage> and provide that through charm config
[14:49] <jamespage> min-peers: 3
[14:49] <jamespage> for example
[14:49] <hazmat> so peer relations are added to a service at deploy time, but they're not acted upon till at least one other unit of the same service shows up
[14:50] <hazmat> so the reason for delaying the mysql add unit, is because you don't want ceph and keystone interacting with the second unit of mysql till its been setup as a slave?
[14:51] <hazmat> er. till its peer relation with the other mysql units has been established
[14:51] <jamespage> hazmat, indeed
[14:52] <jamespage> which is why the 'defer hooks' until peers == min-peers might work better
[14:52] <jamespage> you could start standalone with min-peers: 1
[14:52] <jamespage> then juju set mysql min-peers: 3
[14:52] <jamespage> juju add-unit -n 2 mysql
[14:52] <jamespage> it's a bit clunky but broadly works
[14:53] <hazmat> jamespage, so service config is one way to tackle it.. or min-peers.. so the issue though is that the second mysql unit has client/server rels firing.. hmm.. so the rel hook cache probably prevents the peer rel from being detected?
[14:54] <jamespage> hazmat, I think the relation is detected; but not the members
[14:54] <hazmat> it seems like if the second mysql unit could detect it has a peer rel, it could respond/not respond appropriately till the peer rel is done
[14:54] <hazmat> hmm
[14:55]  * hazmat pokes around
[14:55] <jamespage> hazmat, yeah - but 'the peer rel is done' is hard to define!
[14:55] <jamespage> unless you specify what that means outside of the relation i.e. through config
[14:56] <zul> hallyn: ping
[14:56] <roaksoax> in my particular case it doesn't really matter if the peer relation is done or not, the only thing needed is that there are peers to defer execution of XYZ
[14:56] <roaksoax> jamespage: ^^
[14:56] <hazmat> jamespage, isn't it a question of peer rel config established, and slave ok/data transferred?
[14:57] <jamespage> hazmat, does the peer relation always exist - even when there is just one service unit?
[14:57] <hazmat> jamespage, it does... i'm checking on the members
[14:58] <jamespage> hazmat, I think what we see is that the relation-id is resolvable from first hook exec; but the list of units is not populated until the -joined hooks fire for each unit in the service
[14:59] <jamespage> in fact it is - I debug-hooked this the other day to try to figure out what was happening!
[14:59] <hallyn> zul: .
[14:59] <zul> hallyn: have you seen this before? https://bugs.launchpad.net/nova/+bug/984996
[14:59]  * hallyn hoping to get some lxc coding finished this morning, but watches zul pull him off course :)
[14:59] <jamespage> roaksoax, note that this restricts growing a cluster of service units; building a cluster and then relating actually works OK
[14:59] <zul> hallyn: im quite good at that
[15:00] <jamespage> as by then the cluster knows who is in charge!
[15:00] <roaksoax> jamespage: yeah :)
[15:00] <roaksoax> jamespage: ok so I just added a check for is_peered() (following my old approach), and it *seems* it works now. This simply counts the number of peers in the relationship
[15:00] <hallyn> zul: no...  but weren't you saying nova likes to just rm -rf instance dirs at the drop of a hat (and host nbd)?
[15:00] <zul> hallyn: it looks like to me that apparmor is preventing a chardev being created (im not sure) i did some googling and selinux has something http://www.redhat.com/archives/libvir-list/2010-June/msg00608.html
[15:00] <hazmat> jamespage, yeah.. that's accurate - we try to maintain a sense of orderly time for hooks.. ie. a unit only sees things it has been informed exist.. across rels though it could be made viable
[15:01] <zul> hallyn: yeah that's fixed in this qemu
[15:01] <hallyn> zul: i don't think that's it... see comment #3
[15:01] <zul> hallyn: ok just checking :)
[15:02] <hallyn> the disk dir is not there - -ENOENT, not -EPERM or -EACCES
[15:03] <jamespage> roaksoax, yeah - but that approach is still hook order dependent when growing the cluster
[15:03] <Sendoushi> anyone knows where parallels shared folder (from mac) is?
[15:03] <hallyn> zul: oh, now the bug mentioned in the last comment, bug 1123274, that may be...
[15:03] <jamespage> roaksoax, maybe it runs alphabetically or something - let's just prefix the cluster hook names with 000-
[15:04] <Sendoushi> on the ubuntu desktop it would be under desktop but... since i'm using -server
[15:04] <roaksoax> jamespage: i'm doing this: http://pastebin.ubuntu.com/1651533/
[15:04] <hallyn> but, logs needed :
[15:04] <hazmat> jamespage, so it looks like members for cross rel hooks are requested fresh... ie. they should be accurate per time of execution.
[15:04] <hallyn> :)
[15:04] <roaksoax> jamespage: and that seems to have worked... so I'm re-deploying everything
[15:04] <hazmat> roaksoax, jamespage do you have a unit log for one of these?
[15:05] <roaksoax> hazmat: i'm re-deploying, I'll have one in a bit
[15:06] <jamespage> hazmat, not sure I quite understand " ie. they should be accurate per time of execution."
[15:06] <Sendoushi> how can I search the whole hard drive for a folder name?
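Sendoushi's question reduces to a filesystem search with find - a sketch ('shared' is a placeholder name):

```shell
# Search the whole filesystem for directories matching a name; sudo avoids
# permission errors, and 2>/dev/null hides the paths it still can't read:
sudo find / -type d -iname '*shared*' 2>/dev/null
```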
[15:06] <hazmat> i think i see the issue and a simple workaround, such that querying the peer from the second unit will always show the members.
[15:07] <jamespage> hazmat, thats kinda what we are looking for
[15:07] <jamespage> hazmat, I swear thats what you fixed up for me a couple of weeks ago and tested OK - but I guess hook order is non-deterministic....
[15:07] <hazmat> jamespage, so i think the issue is that the second unit, gets a bunch of relations when its first starting up, and just executes them in somewhat arbitrary order..
[15:08] <hazmat> jamespage, that issue was about cache coherency for relation-ids.. ie pure bug.
[15:08] <jamespage> hazmat, indeed - so I see things like shared-db-joined, amqp-joined firing before cluster-joined hooks
[15:08] <SpaceBass> hey guys, I have some 12.04 servers with eth1 configured as static in /etc/network/interfaces and yet every few hours they revert to DHCP
[15:09] <hazmat> jamespage, yeah.. i can sort the peer rels first when multiple rels are present for the new unit which should give the behavior you want
[15:09] <jamespage> hazmat, pleeeeassseee!
[15:10] <jamespage> hazmat, that will save a whole load of deferred hook execution pain for growing clusters.
[15:19] <hazmat> jamespage, ack, in progress
[15:50] <hazmat> jamespage, roaksoax  lp:~hazmat/juju/peers-from-hurd
[15:51] <hazmat> will put in review queue later today
[15:51] <jamespage> hazmat, ta - I'll tear down and test again
[16:00] <jamespage> SpamapS, around? something is tickling my brain about why ssl support is not turned on in the mongodb packages but I can't remember what it is....
[16:01] <SpamapS> jamespage: IIRC its just too new
[16:02] <jamespage> SpamapS, maybe - I thought there was some license incompatibility issue but maybe not
[16:02] <SpamapS> jamespage: if its gnutls, it should be a simple "turn it on". If it is openssl.. you need to verify that openssl will allow mongo to link.
[16:03] <jamespage> SpamapS, yeah - that might be it - I think it needs openssl
[16:03] <SpamapS> jamespage: if there's no libmongodb, I think only the mongo folk have to grant the exception - I misspoke.
[16:04] <SpamapS> jamespage: as long as none of OpenSSL's symbols end up in a library linked by other things, you should be clear... since I assume MongoDB has granted a license exception for OpenSSL's weirdness.
[16:08] <jamespage> SpamapS, mongo is part AGPL and part Apache-2.0
[16:09] <SpamapS> jamespage: dual, or split into components?
[16:09]  * SpamapS reads debian/copyright
[16:09] <jamespage> SpamapS, some parts of the source code are AGPL, some are Apache-2.0
[16:10] <jamespage> SpamapS, http://paste.ubuntu.com/1652108/ last two paras
[16:10] <jamespage> SpamapS, well actually the source tree is misc others as well due to embedding v8, boost, pcre etc.
[16:14] <agu10^> when I ping my server from itself, it responds with < 2ms. When i ping it from another computer, however, it doesn't respond and lags until about 10 seconds after. And it responds to the 49th ping request or similar. Any ideas what's happening?
[16:15] <mysteriousdarren> what do you run for your personal server?
[16:15] <jamespage> SpamapS, I think the Apache bits would be OK - but still researching the AGPL compatibility
[16:16] <agu10^> mysteriousdarren: ubuntu server, at amazon compute cloud
[16:17] <mysteriousdarren> agu10: thanks for the response
[16:18] <SpamapS> jamespage: the bits that link libssl are AGPL
[16:18] <jamespage> SpamapS, yeah - I just saw that as well
[16:18] <jamespage> utils/net/sock*
[16:19] <SpamapS> jamespage: you probably need to open a bug w/ 10gen to get them to grant the specific exception per http://people.gnome.org/~markmc/openssl-and-the-gpl.html and http://lists.debian.org/debian-legal/2004/05/msg00595.html
[16:20] <SpamapS> jamespage: the issue is that Debian feels if you ship a binary and OpenSSL as "part of the OS" (which all packages are) then you are violating the terms of the GPL.
[16:20] <jamespage> SpamapS, just getting to that point myself; I think you are correct
[16:21] <SpamapS> jamespage: Ubuntu could take a different position, but IIRC, Ubuntu has never varied from Debian's position on that.
[16:21] <jamespage> SpamapS, I think that's unlikely - thanks for helping me figure this out
[16:57] <smoser> Daviey, jamespage anyone else, cjwatson is requesting some info on un-run tests at
[16:57] <smoser> http://iso.qa.ubuntu.com/qatracker/milestones/254/builds/37582/testcases
[16:57] <smoser> and http://iso.qa.ubuntu.com/qatracker/milestones/254/builds/37583/testcases
[16:58] <smoser> it would surely seem that we need to run those.
[17:05] <jamespage> zul, fyi horizon is pretty stuffed in g2 - requests seems to be causing some issues;
[17:05] <jamespage> I'll try with the trunk PPA's as well
[17:05] <roaksoax> jamespage: we were experiencing horizon issues last week
[17:06] <jamespage> roaksoax, I hit one issue in nova client and then a second with the version of requests we have in raring
[17:06] <roaksoax> jamespage: yeah maybe its not really related... we were using precise and cloud archive
[17:06] <jamespage> bug 1116740
[17:07] <roaksoax> nope we didn't see that I think
[17:07] <jamespage> I think the CA still has an older version - however I probably just fixed that
[17:07] <jamespage> if fixed is the right word
[17:08] <jamespage> smoser, hmm - OK
[17:08] <smoser> jamespage, is that something you're near set up to do ?
[17:08] <smoser> if you and i are on equal "that sucks" ground, then i can help. if its trivial for you, then i'd let you do it :)
[17:08] <smoser> and for maas, maybe bother roaksoax
[17:08] <jamespage> I can't do the maas ones
[17:09] <jamespage> but I have most of the images synced
[17:09] <jamespage> so could do some of the others
[17:30] <jamespage> smoser, are we expecting a minimal virtual install to actually work with 12.04.2
[17:30] <jamespage> ?
[17:30] <RoyK> jamespage: why not?
[17:31] <jamespage> RoyK, that's kinda why I'm asking
[17:31] <smoser> i think i'd have the same thought
[17:31] <jamespage> I'll have to re-run the test case to confirm but my install looks a little bloaty re modules
[17:31] <smoser> hm..
[17:32] <jamespage> ~150M instead of 40M
[17:32] <RoyK> ouch
[17:33] <zul> jamespage: yeah there were conflicts in g2 between requests and horizon/clients/etc
[17:35] <jamespage> frustratingly the automated tests for minimal-virtual are disabled
[17:38] <plars> jamespage: saw that, not sure why
[17:39] <plars> jamespage: I'm looking at them also, and installation goes ok but I'm wondering if the instructions at http://testcases.qa.ubuntu.com/Install/ServerMinimalVirtualInstall are accurate. I'm seeing that the kernel installed is -generic, not -virtual
[17:39] <jamespage> plars, things have changed; same base kernel but with a smaller module set for -virtual
[17:40] <jamespage> plars, I think the auto-test case was telling us back in November the same thing I see today
[17:41] <plars> jamespage: is this related to the too much space taken up by minimal virtual installs that I was complaining about in https://bugs.launchpad.net/ubuntu/+source/ubuntu-meta/+bug/1053770
[17:42] <jamespage> hmm - not really
[17:42] <plars> jamespage: It might be worth revisiting the minimums again at some point, but I wonder if it matters for anyone outside of the cloud installs?
[17:43] <jamespage> that was about overall install size - this is about how many modules have been installed and which mixin of kernel we get
[17:45] <smoser> jamespage, the original issue came about when the kernel packages changed names.
[17:45] <smoser> i think
[17:45] <plars> jamespage: but that could have been the cause of the overall size being large couldn't it? my virtual minimal install on i386 here is taking up about 758M, on amd64 a bit higher
[17:45] <smoser> or something resulted in the installer not installing the -virtual kernel
[17:45] <smoser> that seems still to be the case.
[17:46] <smoser> if you do not get the -virtual kernel installed, then you're looking at 120M of modules that you don't [necessarily] need.
[17:46] <jamespage> indeed
[17:46] <jamespage> smoser, well I've seen it twice now
[17:46]  * jamespage raises a bug
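A quick way to check which kernel flavour an install actually got, in the spirit of what jamespage and plars are comparing above. This is a sketch; the ~40M vs ~150M figures are the numbers quoted in the discussion, not guaranteed sizes:

```shell
#!/bin/sh
# The flavour shows up in the running kernel's name (-virtual vs -generic)...
uname -r

# ...in which linux-image packages got pulled in by the installer...
dpkg -l 'linux-image*' 2>/dev/null | grep '^ii' || true

# ...and in the size of the installed module tree (~40M for -virtual,
# ~150M for -generic, per the figures quoted above).
du -sh /lib/modules/"$(uname -r)" 2>/dev/null || true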
[17:49] <zapotah> how did you configure the lacp hash to L2+L3 in /etc/network/interfaces
[17:49] <zapotah> tried finding the related man page but failed miserably
[17:49] <plars> jamespage: do you think this came about when we did the changeover to the quantal kernel? I don't recall the timing of that change for certain, but thats our best guess
[17:50] <RoyK> zapotah: isn't that generated dynamically?
[17:50] <zapotah> RoyK: apparently not
[17:51] <zapotah> the switch is configured for L2+L3 (it unfortunately doesnt support L3+L4) but ubuntu doesnt recognize it
[17:51] <jamespage> plars, maybe
[17:51] <jamespage> bug 1125408
[17:52] <RoyK> zapotah: I thought you could only do a particular level...
[17:52] <zapotah> i know there's a switch for forcing it but can't find the documentation
[17:52] <zapotah> bond_xmit_hash_policy layer2+3
[17:53] <zapotah> that should do it (or did it in the past)
[17:53] <zapotah> the bond interface drops some packets as the hash algorithms are different on the host and switch
[17:54] <RoyK> ouch
[17:55] <RoyK> why not just L2?
[17:55] <RoyK> lots of IPs on that interface?
[17:55] <zapotah> its a trunk for 15 vms
[17:56] <RoyK> ic
[17:56] <zapotah> so, yeah
[17:56] <zapotah> :)
[17:59] <RoyK> zapotah: wouldn't 3+4 work better?
[18:02] <zapotah> as i said the switch doesnt support that
[18:02] <RoyK> oh
[18:02] <zapotah> i would use it ofc if i could
[18:02] <RoyK> riht
[18:02] <RoyK> right
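The `bond_xmit_hash_policy` option zapotah settled on lives in the bond stanza of /etc/network/interfaces. A sketch, assuming the ifenslave-style option names on 12.04; interface names, slaves, and mode are placeholders to adapt:

```
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    # maps to the kernel bonding module's xmit_hash_policy parameter;
    # layer2+3 matches a switch configured for L2+L3 hashing
    bond-xmit-hash-policy layer2+3
```

Mismatched hash policies between host and switch are exactly what produces the dropped packets described above, since the two ends pick different member links for the same flow.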
[18:03] <hazmat> jamespage, roaksoax did the branch resolve/help the issue?
[18:03] <jamespage> hazmat, looks like it
[18:04] <hazmat> jamespage, excellent
[18:04] <jamespage> I've managed to introduce 3 new keystone units into an existing cluster without stuffing everything up
[18:04] <RoyK> what's the memory requirements for the minimal?
[18:04] <roaksoax> hazmat: yeah it did indeed. Thanks!
[18:05]  * hazmat gets it into the review queue
[18:05] <hazmat> should be in the ppa by the weekend
[18:05] <hazmat> hopefully tomorrow
[18:05]  * hazmat realizes those are equiv statements ;-)
[18:12] <RoyK> jamespage: just tried to set up a minimal ubuntu server + ssh - boots with 64MB
[18:13] <RoyK> jamespage: oh, right, I installed the system first with more memory...
[18:29] <sliddjur> on a class assignment we're supposed to make a backup system for users /home dirs, a webserver, a mysql database and a mail server. what would be a suitable backup software for this task?
[18:29] <sliddjur> ubuntu server 10
[18:31] <sarnold> sliddjur: take a look at duplicity and rsnapshot; they do different things, and since they're on different ends of the spectrum, it'll help you narrow down what you're trying to accomplish.
[18:36] <RoyK> sliddjur: or just rsync, or perhaps bacula if you want to be a badass
[18:39] <sliddjur> RoyK: im not a badass. im a student learning the basics :)
[18:43] <RoyK> sliddjur: not meaning you're a badass, just saying it'll be a very good practical experience to setup bacula for backup
[18:43] <RoyK> sliddjur: I've been using that for data sizes > 100TiB
[18:43] <RoyK> it just works
[18:43] <RoyK> although it takes a bit of time to learn the config
[18:44] <sliddjur> RoyK: ok well i dont think its intended to use too much time to learn
[18:44] <sliddjur> maybe a few days
[18:45] <zapotah> plus the obvious benefit for an enterprise environment of having a windows node software available too (talking about bacula here)
[18:45] <sliddjur> i guess ill look into rsync
[18:47] <RoyK> sliddjur: take a look at bacula
[18:47] <RoyK> sliddjur: it should be quite sufficient with a few days to learn the basics
[18:47] <RoyK> sliddjur: and it can backup anything, really
[18:47] <sarnold> sliddjur: be careful with the database .. I'm a bit leery of saying too much and giving away the point of the homework :) but data consistency is important with databases....
[18:48] <RoyK> use postgresql
[18:48] <RoyK> with the bacula database
[18:48] <RoyK> mysql/sqlite are bad choices
[18:48] <RoyK> postgresql can handle a *lot* more issues
[18:48] <sarnold> RoyK: as a backend to bacula or as the source of the data to be backed up?
[18:48] <RoyK> well, both
[18:48] <RoyK> as a backend to bacula - check
[18:49] <RoyK> as being backed up by just a file backup - check
[18:49] <sliddjur> the database we already have installed on our systems in class is mysql
[18:49] <roaksoax> jamespage: still around?
[18:49] <RoyK> mysql needs to be dumped before backed up
[18:49] <RoyK> backing up a live mysql db is like asking for trouble if there's transactions going on during the backup
[18:50] <sarnold> RoyK: no kidding, postgresql is easier to backup too? damn. I still wonder how mysql ever got off the ground... hopefully it's well on its way to pointlessness....
[18:50] <sliddjur> do i need to stop the service completely?
[18:50] <sliddjur> mysql that is
[18:50] <RoyK> sliddjur: no, just mysqldump --all-databases ...
[18:50] <RoyK> and backup that dump
[18:50] <RoyK> not the db files
[18:50] <RoyK> in bacula, you setup that as a post-run job
[18:51] <RoyK> erm
[18:51] <RoyK> pre
[18:51] <RoyK> !
[18:51] <RoyK> not post
[18:51] <sliddjur> :)
[18:54] <RoyK> sliddjur: bacula is divided into bacula director, sd and fd, sd being the storage, fd being the client, director being the boss, the director uses a database for its index - better use postgresql for that
[18:54] <RoyK> sliddjur: and /j #bacula if you have more questions :P
[18:57] <RoyK> sliddjur: really, start out with installing those on the backup target. after that, it's just configuration
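The "pre-run job" RoyK describes looks roughly like this in the director's bacula-dir.conf. A sketch only: the job name, JobDefs, and dump path are made-up placeholders, and the RunScript block assumes a bacula 5.x-era config syntax:

```
Job {
  Name = "BackupWithMysqlDump"
  JobDefs = "DefaultJob"
  # Dump MySQL to a flat file *before* the file backup runs, so bacula
  # backs up a consistent dump instead of the live database files.
  RunScript {
    RunsWhen = Before
    FailJobOnError = Yes
    Command = "/usr/bin/mysqldump --all-databases -r /var/backups/mysql-all.sql"
  }
}
```

With `FailJobOnError = Yes`, a failed dump aborts the backup rather than silently shipping a stale or missing dump file.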
[19:18] <adam_g> zul: mind taking a peek at these? http://people.canonical.com/~agandelman/grizzly/ gets most of current grizzly dependencies up to date, short of requests. in the process of rebasing your changes against the newest version
[19:19] <zul> adam_g: jamespage updated them this morning
[19:20] <adam_g> zul: oh doh
[19:20] <zul> adam_g: http://people.canonical.com/~jamespage/ca-updates-20130208/
[19:21] <adam_g> hmm
[19:23] <adam_g> zul: eventlet still needs an update. nova services now require the newer version. can you check mine and ill upload?
[19:23] <adam_g> zul: that url is 404
[19:23] <zul> adam_g: sure
[19:23] <zul> adam_g: +1 for eventlet
[19:28] <vedic> How do I get the latest pgbouncer in Ubuntu 10.04? The Ubuntu 10.04 repository seems to provide version 1.3.1, not the latest 1.5.4. Note that I don't want to upgrade postgresql from 8.4 (i.e. in ubuntu 10.04) to Postgres 9.x. I'm just looking for the latest pgbouncer
[19:29] <RoyK> there might be PPAs for that
[19:29] <RoyK> but upgrading to 12.04 might help as well
[19:29] <RoyK> vedic: is there a reason you're sticking to 10.04?
[19:30] <vedic> RoyK: I feel 10.04 is more stable than 12.04 and 10.04 server version has support till 2015. Using it in production server
[19:31] <RoyK> ok
[19:31] <vedic> RoyK: you know ppa link for that?
[19:31] <RoyK> I use 12.04 in prod - works well - 12.04.2 means it's gone past its initial bugs...
[19:31] <RoyK> google it
[19:32] <RoyK> vedic: what sort of services do you have on this box?
[19:32] <vedic> RoyK: No need to google. 10.04 works well for my use and is more bug free than 12.04
[19:33] <RoyK> well, then find a ppa
[19:33] <vedic> RoyK: if not src is always available
[19:33] <RoyK> the problem with ppas is that they aren't very well supported
[19:33] <RoyK> so if you want a stable server, better use something supported
[19:33] <vedic> RoyK: if they are maintained by an individual
[19:34] <RoyK> "an individual", as you say
[19:34] <RoyK> yes
[19:34] <vedic> RoyK: postgres maintains pgbouncer development
[19:34] <RoyK> one person
[19:35] <vedic> RoyK: much more maintained than individual ppa. Well, want to end this discussion here. If you know the answer, pls write.
[19:35] <RoyK> I'd use 12.04 over 10.04+PPAs any time
[19:35] <vedic> RoyK: good for you
[19:36] <RoyK> vedic: I just wonder why you think 12.04 is unstable, and why you think a ppa with 10.04 will be more stable, or better supported...
[19:36] <vedic> RoyK: I didn't say 12.04 is unstable. Thats your inference.
[19:36] <RoyK> well, you didn't answer the question
[19:37] <vedic> RoyK: I did. ppa and os are two different things.
[19:38] <RoyK> 12.04 os is stable indeed
[19:38] <vedic> Enjoy it then. Good for you
[19:38] <RoyK> the packages on top may perhaps be less stable than those with 10.04, but it won't help to add PPAs
[19:38] <RoyK> those are possibly the most unstable of all
[19:39] <tgm4883> RoyK, I fail to see how a PPA could be considered "the most unstable of all"
[19:39] <tgm4883> RoyK, that 100% depends on the person maintaining it (not to mention actual software bugs)
[19:41] <RoyK> tgm4883: well, a package depending on a single person isn't something I would use in a server
[19:41] <tgm4883> RoyK, PPA's can be managed by a team
[19:43] <tgm4883> RoyK, all that aside, vedic, have you checked to see if what you need is available on the postgres repo?
[19:43] <zul> adam_g: ping ill trade you a +1 http://people.canonical.com/~chucks/ca/
[19:44] <vedic> tgm4883: I think I found it here. Going through it: http://wiki.postgresql.org/wiki/PgBouncer  -> http://wiki.postgresql.org/wiki/Apt
[19:44] <tgm4883> vedic, yes, that is their repo
[19:45] <tgm4883> vedic, looks like they have it http://apt.postgresql.org/pub/repos/apt/pool/main/p/pgbouncer/
[19:45] <tgm4883> 1.5.4
[19:46] <vedic> tgm4883: yay
[19:46] <vedic> thanks
[19:46] <tgm4883> yw
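For the record, wiring in the repo tgm4883 pointed at looks roughly like this on 10.04 ("lucid"). Check the codename, key URL, and package version against the wiki.postgresql.org/wiki/Apt page before using any of it:

```
# /etc/apt/sources.list.d/pgdg.list
deb http://apt.postgresql.org/pub/repos/apt/ lucid-pgdg main

# then (key URL per the PGDG wiki page):
#   wget -qO- https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
#   sudo apt-get update && sudo apt-get install pgbouncer
```

This gets a current pgbouncer from the PostgreSQL project's own repo without touching the distro's postgresql 8.4 packages, which was vedic's constraint.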
[19:46] <adam_g> zul: +1
[19:47] <zul> adam_g: thanks
[20:00] <edve> Hi people
[20:00] <edve> would anyone be able to give me a hand with squid proxy ?
[20:57] <SpaceBass> hey guys, I have some 12.04 servers with eth1 configured as static in /etc/network/interfaces and yet every few hours they revert to DHCP
[20:58] <RoyK> SpaceBass: installed as desktop machines?
[20:58] <lifeless> SpaceBass: what is starting the dhcp-client process?
[20:58] <Daviey> smoser: hey
[20:58] <smoser> hey.
[20:59] <RoyK> SpaceBass: network-manager is installed by default on desktop machines and overrides /etc/network/interfaces
[20:59] <RoyK> SpaceBass: that will never happen on a server install
[21:00] <SpaceBass> RoyK, I'm using server
[21:00] <Daviey> smoser: you pinged me earlier
[21:00] <SpaceBass> lifeless, I don't see dhcp running as a process
[21:01] <RoyK> SpaceBass: a server won't ask for dhcp addresses if configured as static
[21:01] <RoyK> SpaceBass: that's definitive
[21:01] <SpaceBass> RoyK, beg to differ
[21:01] <RoyK> SpaceBass: dpkg -l | pastebinit
[21:01] <SpaceBass> RoyK, I wouldn't have believed it, if I couldn't reproduce it
[21:02] <tgm4883> SpaceBass, is the /etc/network/interfaces files returning to DHCP?
[21:02] <smoser> oh. jamespage took care of it.
[21:02] <Daviey> smoser: ok, thanks.
[21:04] <SpaceBass> RoyK, http://paste.ubuntu.com/1653539/
[21:04] <SpaceBass> dhcp client is installed
[21:05] <SpaceBass> tgm4883, don't see anything that would call DHCP, the original line is commented out: http://paste.ubuntu.com/1653546/
[21:05] <SpaceBass> on the other servers, the original dhcp line is removed totally
[21:05] <lifeless> SpaceBass: so if there is no dhcp client running,a nd  /etc/network/interfaces says static, what precisely do you mean when you say 'reverts to dhcp' ?
[21:06] <SpaceBass> lifeless, every 2 hours (give or take) i lose my assigned static IP, the servers in question ping my DHCP server and request an address in the DHCP range
[21:07] <SpaceBass> the only thing these all have in common is they are VirtualBox VMs (installed manually, not a downloaded appliance), but are otherwise stock 12.04 Server installs
[21:07] <lifeless> SpaceBass: ok, when that happens, what does ps fax|grep dhcp show ?
[21:08] <SpaceBass> lifeless, shows dhclient.eth0.pid running
[21:08] <lifeless> SpaceBass: so dhcp client *is* running
[21:08] <lifeless> SpaceBass: whats the full command line for it ?
[21:08] <SpaceBass> dhclient3 -e IF_METRIC=100 -pf /var/run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases -1 eth0
[21:08] <zul> jamespage:  fyi just uploaded a new keystoneclient and novaclient to the CA to deal with the requests mess
[21:08] <SpaceBass> lifeless, yeah, I stand corrected there, I was grepping incorrectly (looking for dhcp)
[21:09]  * RoyK goes back to watching the angels' share
[21:09] <lifeless> SpaceBass: I made that mistake too :P
[21:09] <SpaceBass> :)
[21:09] <lifeless> SpaceBass: ok, so, have a look in /var/log/syslog
[21:09] <lifeless> SpaceBass: (and other logs...) see if there is anything in there about eth0 ?
[21:10] <SpaceBass> syslog shows the dhclient actions: http://pastebin.com/BnGE0esE
[21:12] <SpaceBass> I'd remove dhclient entirely… but there's probably a better way
[21:13] <lifeless> SpaceBass: so I'd start by killing dhclient
[21:13] <tgm4883> SpaceBass, do you know what is starting it?
[21:13] <lifeless> SpaceBass: sudo pkill dhclient
[21:13] <lifeless> SpaceBass: if it was left running for some reason
[21:13] <lifeless> SpaceBass: and the lease time is 2 hours
[21:13] <lifeless> SpaceBass: this would explain your symptoms
[21:13] <SpaceBass> it's starting at startup, but not sure what's calling it
[21:13] <lifeless> SpaceBass: secondly, as tgm4883 says, figure out what starts it
[21:14] <tgm4883> SpaceBass, is it upstart?
[21:14] <SpaceBass> will keep hunting down the mystery :)
[21:14] <tgm4883> SpaceBass, pstree
[21:15] <SpaceBass> ty
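The diagnosis steps from lifeless and tgm4883, collected into one sketch. The `[d]hclient` bracket trick keeps grep from matching its own command line (which is how both of them got bitten above); the config paths are the usual 12.04 locations:

```shell
#!/bin/sh
# 1. Is a DHCP client actually running? Grep for "dhclient", not "dhcp".
#    The [d] stops this grep from matching itself in the process list.
ps fax | grep '[d]hclient' || echo "no dhclient running"

# 2. If one is running, who started it? Walk the process tree.
pstree -p 2>/dev/null | grep dhclient || true

# 3. Look for boot-time config that invokes dhclient for an interface.
grep -r dhclient /etc/network/ /etc/init/ 2>/dev/null || true

# Stop-gap while hunting the cause (per lifeless):
#   sudo pkill dhclient
```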
[21:22] <hallyn> stgraber: do you think dropping support for ns_cgroup would be a problem?  (for new lxc)
[21:22] <hallyn> well, i guess it could be problematic for backports to lucid
[21:36] <stgraber> hallyn: would annoy me quite a bit as my phone runs 2.6.32 ;)
[21:38] <hallyn> stgraber: and does it have clone_children support?
[21:38] <stgraber> no
[21:38] <stgraber> or I don't think so, let me check
[21:39] <hallyn> stgraber: heck, maybe instead of supporting ns_cgroup, we should be manually setting up cpuset
[21:41] <stgraber> hallyn: yeah, my phone has the old school cgroups with the separate ns cgroup and no clone_children
[21:43] <hallyn> stgraber: and by default is ns cgroup mounted?
[21:44] <hallyn> well i think i'll stop using clone_children.  see how it goes
[21:44] <stgraber> well, it's my script that sets up the cgroups, so yeah, I'm mounting it
[21:45] <hallyn> based on deployments you know about, would asking ns cgroup to not be mounted be a problem?
[21:46] <hallyn> well, actually, i think i'll just ignore it for now and finagle it back in later if i need to (right now it's jsut a shortcut with rename anyway)
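The two kernel features being weighed here can be probed like this. A sketch only: mount points vary by distro and kernel, and the /sys/fs/cgroup paths assume a mounted cgroup hierarchy:

```shell
#!/bin/sh
# Old (2.6.32-era) kernels expose a separate "ns" cgroup subsystem:
if grep -qw ns /proc/cgroups 2>/dev/null; then
    echo "ns cgroup available (old style)"
fi

# Newer kernels instead offer a clone_children flag per hierarchy,
# so a freshly created cgroup inherits its parent's cpuset settings:
for f in /sys/fs/cgroup/cgroup.clone_children \
         /sys/fs/cgroup/cpuset/cgroup.clone_children; do
    [ -f "$f" ] && echo "clone_children supported: $f"
done
exit 0
```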
[22:27] <peepsalot> what guidelines do you folks use for when to upgrade to the next version?  we have a server running 10.04 LTS, and thinking about going to 12.04, but not sure if it's worth the trouble, could just wait till support is done on 10.04
[22:28] <SpamapS> peepsalot: wouldn't you rather start planning your migration *now*
[22:29] <SpamapS> peepsalot: 10.04 will be done in 2 years. You can take your time, start deploying on a test server, etc.
[22:35] <peepsalot> SpamapS, is it feasible to upgrade in place or best to reinstall from scratch?
[22:35] <SpamapS> peepsalot: I kind of think upgrading in place is only feasible if you have very few users and very few servers
[22:35] <peepsalot> one server, 4ish users
[22:36] <SpamapS> haha
[22:36] <SpamapS> yeah upgrade in place
[22:36] <SpamapS> you'll be fine
[22:37] <peepsalot> do you have to upgrade each release in between or can it leapfrog them
[22:37] <peepsalot> 4 upgrades at a time is a little worrisome to me considering issues i've had in the past with in-place upgrades
[22:38] <sarnold> peepsalot: you can upgrade from LTS to LTS without hitting all the little releases in the middle :)
[22:38] <peepsalot> ah ok
[22:40] <sarnold> peepsalot: here's some community-made notes: https://help.ubuntu.com/community/PreciseUpgrades
[22:41] <peepsalot> sarnold, thanks i was just looking at that page ;-)
[22:41] <sarnold> aha :)
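The LTS-to-LTS jump sarnold describes is driven by `do-release-upgrade`. A check-only sketch (it changes nothing; the actual upgrade command is in the comment, and the PreciseUpgrades page above is worth reading first):

```shell
#!/bin/sh
# do-release-upgrade reads this file; Prompt=lts means it offers 12.04
# directly from 10.04, skipping the interim releases.
grep '^Prompt=' /etc/update-manager/release-upgrades 2>/dev/null \
    || echo "Prompt=lts   # the LTS default; file not present on this box"

# Which release are we on now?
grep DISTRIB_RELEASE /etc/lsb-release 2>/dev/null || true

# The upgrade itself (after a backup!) would then be:
#   sudo apt-get update && sudo apt-get dist-upgrade
#   sudo do-release-upgrade
```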
[22:43] <peas> howdy... My Drupal site rendering is different when I'm logged in versus when I'm not logged in. I guess all of my content is not going thru the SSL layer. How do I fix this? Thanks for your time. I'm running Ubuntu 12.04 server.
[22:43] <peas> Also, I'm using nginx webserver.