[00:58] <tarvid> looking for the sanest route to hosting shell accounts for a modest number of users
[00:58] <tarvid> LXC?
[00:59] <JasonO> Hi
[00:59] <tarvid> Hi JasonO
[00:59] <JasonO> Hi tarvid
[01:00] <tarvid> Looking for wisdom from folks with LXC experience
[01:02] <JasonO> I am having problems enabling SSL on my virtual host. Can someone please help me?
[01:03] <JasonO> When i reload apache2 I get:  [warn] _default_ VirtualHost overlap on port 443, the first has precedence
[01:04] <JasonO> Is there a way to allow both on 443 without conflict?
[01:07] <tarvid> NameVirtualHost?
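tarvid's hint spelled out: on Apache 2.2, the `_default_ VirtualHost overlap on port 443` warning appears when two vhosts listen on 443 without name-based matching enabled. A minimal sketch with hypothetical hostnames and certificate paths — serving different certificates per name additionally needs SNI (Apache ≥ 2.2.12 built against OpenSSL ≥ 0.9.8j; older clients just get the first vhost's certificate):

```apache
# /etc/apache2/ports.conf (hypothetical): enable name-based vhosts on 443
NameVirtualHost *:443

# Each site then gets its own block; ServerName selects the vhost
<VirtualHost *:443>
    ServerName site1.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/site1.pem
    SSLCertificateKeyFile /etc/ssl/private/site1.key
</VirtualHost>

<VirtualHost *:443>
    ServerName site2.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/site2.pem
    SSLCertificateKeyFile /etc/ssl/private/site2.key
</VirtualHost>
```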
[01:08] <JasonO> tarvid: amjjawad
[01:33] <kimsia> hi there
[01:33] <kimsia> i typed `apt-get update`
[01:33] <kimsia> now i see a long list of
[01:33] <kimsia> EDAC i7core: Lost 127 memory errors
[01:34] <kimsia> the list has not stopped running for the past 5 mins. What is happening?
[01:34] <kimsia> i am using 12.04 LTS server edition
[01:34] <tarvid> Error detection and correction
[01:35] <tarvid> No point in waiting
[01:35] <tarvid> Run the memory test on bootup
[01:36] <kimsia> tarvid: how do I run the memory test on bootup?
[01:36] <tarvid> I think it is a grub option but you can use any install disk and run memtest
[01:37] <kimsia> tarvid: I am sorry I am quite new at this
[01:37] <kimsia> I don't have any ubuntu installation disks with me now
[01:37] <kimsia> i am on site at another place
[01:37] <kimsia> how do I try this grub option?
[01:37] <tarvid> borrow another machine and make one
[01:38] <tarvid> But it sounds like hardware issues
[01:38] <kimsia> oh shucks
[01:38] <tarvid> How many sticks in the machine
[01:38] <tarvid> yes shucks
[01:38] <kimsia> sticks?
[01:39] <tarvid> ram?
[01:39] <tarvid> they could be loose
[01:39] <tarvid> or dirty
[01:40] <tarvid> You may be able to run on part of them
[01:40] <tarvid> like 2 out of 4 or one out of 2
[01:41] <tarvid> fancy board? I7?
[01:42] <kimsia> @tarvid I just restarted the server
[01:42] <kimsia> the errors are no more
[01:42] <kimsia> do i just assume that everything is okay?
[01:42] <tarvid> errors that go away gratuitously often come back
[01:42] <kimsia> i also just ran `apt-get update`. It finished without seeing the errors
[01:43] <kimsia> i see.
[01:43] <tarvid> may have been the odd cosmic ray that zapped a bit
[01:43] <kimsia> so what should I do now?
[01:43] <tarvid> several choices
[01:43] <tarvid> ignore it as just a fluke in the universe
[01:45] <tarvid> install memtest86+ and let it run when you go home for the night
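On Ubuntu the memtest86+ package hooks itself into GRUB, so the test can be selected at boot without any install disk — a sketch of the steps (needs root; the exact menu entry name varies by release):

```shell
sudo apt-get install memtest86+   # ships /etc/grub.d/20_memtest86+
sudo update-grub                  # regenerates grub.cfg with a
                                  # "Memory test (memtest86+)" entry
# then reboot, hold Shift to bring up the GRUB menu, and pick the entry
```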
[01:45] <kimsia> tarvid: someone suggested the following to me
[01:45] <kimsia> http://askubuntu.com/a/334332/10591
[01:47] <tarvid> a bit paranoid
[01:47] <tarvid> most people don't have EDAC and live
[01:48] <shauno> doesn't read too paranoid to me.  it sounds like a decent crash-course on ECC.  if you have lots of correctable errors, good news, you got ECC.  if you start seeing uncorrectable errors, pull the ram
[01:49] <tarvid> an object lesson yes, a crash course??? We don't know how much ram is in the machine
[01:49] <kimsia> what is ECC?
[01:50] <kimsia> I am trying to find out sorry hang on
[01:51] <kimsia> these are the specs:
[01:51] <kimsia> 1 x Intel® Xeon® L5630 12M Cache, 2.13 GHz Processor, 2GB x 4 RAM 2 x 146GB SAS 15K HDD
[01:53] <shauno> ECC is error correction in RAM.  very common in server-class systems, reasonably common in workstation-class systems, rare in desktops and near unheard of in laptops
[01:53] <kimsia> I am running `apt-get dist-upgrade -y` now will take a while to finish
[01:53] <kimsia> so if I have ECC, that is a good news right?
[01:54] <tarvid> Good news is zero errors
[01:54] <tarvid> memtest86+
[01:55] <kimsia> tarvid: understood. now rebooting after finish `apt-get dist-upgrade -y`
[01:55] <kimsia> over here, it is 955am
[01:56] <kimsia> there are people who need to use the server. SO I will run the memtest at end of biz day
[01:57] <kimsia> Oh looks like i need to run memtest from a CD or usb flash drive
[01:57] <tarvid> Good plan. Memory is pretty cheap these days
[01:57] <kimsia> http://en.wikipedia.org/wiki/Memtest86
[01:57] <kimsia> is this correct?
[01:57] <patdk-lap> you don't need to
[01:57] <patdk-lap> you can run it from anywhere
[01:58] <patdk-lap> just those are normally common
[01:58] <kimsia> erm is it simply something i can apt-get install?
[01:58] <patdk-lap> I run it all the time via pxe
[01:58] <patdk-lap> apt-get I dunno
[01:58] <patdk-lap> but you can add it as a grub option
[01:59] <patdk-lap> no one should run a computer without ecc ram these days
[01:59] <tarvid> ooooooooooooooooooooooo, the vast majority of people run fine without ecc
[02:00] <patdk-lap> tarvid, the vast majority reboot their computer daily, cause of random issues :)
[02:00] <kimsia> erm sorry guys
[02:01] <kimsia> i have an immediate problem
[02:01] <kimsia> i tried to open my kern.log
[02:01] <tarvid> and ...............?
[02:01] <patdk-lap> suspense
[02:02] <kimsia> tarvid: sorry co-worker interrupted me
[02:02] <kimsia> the kern.log was huge so i couldn't leave
[02:02] <tarvid> could be panic time
[02:02] <kimsia> i mean ctrl X when i attempted to exit from nano
[02:03] <patdk-lap> forget kern.log, and use dmesg instead
[02:03] <kimsia> now i can
[02:03] <kimsia> they are asking me to turn the webapp back on for them
[02:03] <kimsia> because it's urgent for them.
[02:03] <kimsia> i am caught between debugging the issue further and restoring the webapp for them
[02:04] <kimsia> patdk-lap: what is dmesg?
[02:04] <patdk-lap> where the stuff comes from, that goes into kern.log
[02:05] <kimsia> i see
[02:05] <patdk-lap> but it only has the last 1000 or so messages
[02:05] <kimsia> i just typed
[02:05] <kimsia> its long
[02:05] <kimsia> i dunno what i am looking for in dmesg
[02:05] <patdk-lap> ideally, on a normal system, there would be nothing
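patdk-lap's point in practice: dmesg holds the same stream that syslog writes to kern.log, so grepping it beats paging a huge file in nano. A sketch, simulated here with a captured sample line rather than live kernel output (on the real box you'd pipe `dmesg` itself):

```shell
# A line like the one kimsia saw (hypothetical sample, not live output):
sample='EDAC i7core: Lost 127 memory errors'

# Count how many ring-buffer lines mention EDAC:
printf '%s\n' "$sample" | grep -c 'EDAC'     # prints 1

# On the real machine:  dmesg | grep -i edac | tail -n 20
```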
[02:06] <shauno> if you're able, I'd err towards swapping the ram out and testing it in a machine that isn't so urgent.  spraying memory errors isn't healthy
[02:06] <patdk-lap> unless you have firewall logging
[02:06] <patdk-lap> but you should only have events, for large stuff, like, insert new disk, remove disk
[02:06] <kimsia> i am unable to swap out the ram and test on another machine unfortunately
[02:07] <tarvid> My guess you are headed for a full crash. If you have spare RAM do
[02:08] <tarvid> You could try half the ram and hope the half left in the machine is good
[02:09] <shauno> memory pretty much has three conditions.  green light is no errors.  this is what you want.  amber light is ECC catching errors.  this is a huge warning, especially if they're numerous/frequent.  red light is uncorrectable errors.  which usually manifests itself as "random" corruption & crashes.
[02:09] <shauno> ECC buys you that amber light.  it's up to you to take the warning
[02:10] <kimsia_> sorry guys
[02:10] <kimsia_> somehow i got disconnected
[02:11] <kimsia_> how can i retrieve the conversation of the last few minutes?
[02:11] <kimsia_> i am on webchat.freenode.net
[02:11] <kimsia_> hence i have no logs
[02:14] <bradm> kimsia_: http://irclogs.ubuntu.com/2013/08/19/%23ubuntu-server.txt, its a few minutes behind
[02:15] <kimsia_> thanks bradm
[07:26] <nilli> I'm trying to "become" the user www-data, but "sudo -u www-data" doesn't do anything and "su -u www-data" requires that I know the password for the user. I have root access to the server, and I know I've managed this once before, but I can't remember how I did it...
[07:26] <nilli> oops, I mean "su - www-data" requires password
[07:33] <rbasak> nilli: it could be because www-data doesn't have a shell (or it is set to /bin/false or something). Try sudo -u www-data but specifically with the command you want to run as www-data.
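rbasak's diagnosis can be checked directly — the privileged commands below are shown as comments since they need sudo; the paths are the usual Debian/Ubuntu defaults, not verified against nilli's box:

```shell
# Inspect the account's login shell; for www-data it is typically
# /usr/sbin/nologin or /bin/false, so "su - www-data" can never succeed:
getent passwd www-data | cut -d: -f7

# Run a single command as the user (no login shell involved), e.g.:
#   sudo -u www-data ls -l /var/www
# Or force a working shell despite the nologin entry (needs root):
#   sudo su -s /bin/sh www-data
```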
[07:34] <nilli> unfortunately I'm trying to run a command for a huge directory and sudo can't handle the amount of files in it.. I figured I would change to the right user so that I won't be limited by sudo.
[07:35] <rbasak> In what way can sudo not handle the amount of files in it?
[07:36] <nilli> sudo: unable to execute /usr/bin/find: Argument list too long
[07:36] <sgran> that's find, not sudo
[07:36] <nilli> no, the error is for sudo
[07:37] <sgran> I assume you're doing something like find * ?
[07:37] <sgran> sudo is reporting the error that find returns
[07:37] <nilli> sudo find /my/path/* -mtime +30 -exec rm {} \;
[07:37] <rbasak> Drop the *
[07:37] <sgran> yes.  Remove the *.  Marvel as it works
[07:38] <nilli> I found another way to solve my problem so I know that's not the issue :)
[07:38] <rbasak> sudo find /my/path/ -mtime +30 -exec rm {} \;
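Why dropping the `*` works: the shell expands `/my/path/*` into one argument per file before sudo or find ever run, so a big enough directory overflows the kernel's argument-size limit (ARG_MAX); passing the directory itself lets find do the traversal. A safe reproduction in a throwaway temp dir (not the real `/my/path`):

```shell
dir=$(mktemp -d)
touch "$dir"/file1 "$dir"/file2

# Wrong shape: "$dir"/* is expanded by the shell into N arguments,
# and with enough files this fails with "Argument list too long".
# Right shape: hand find the directory and let it recurse itself.
# -type f keeps directories away from rm; files created just now are
# 0 days old, so -mtime +30 matches nothing here:
find "$dir" -mtime +30 -type f -exec rm {} \;

ls "$dir"        # both files still present
rm -r "$dir"
```

With GNU find, `-delete` is a simpler alternative to `-exec rm {} \;` and avoids one rm process per file.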
[07:38] <nilli> I logged in as root instead of my normal user and did chmod so I got permission to write to the files in that directory
[07:39] <nilli> then I went back to my own user and simply dropped "sudo" from the command
[07:39] <nilli> voila. no problems.
[07:39] <nilli> thanks for your suggestions anyway
[07:49] <smb> hallyn_, zul, I saw in the scrollback that you had a few issues. Ping me when you are there and I try to help.
[08:40] <arti> What to do when a tcp port is in use (aka i can't bind to it) but nothing is using that port (at least netstat doesn't show anything)?
[10:47] <jamespage> smb, hey - fyi I'm seeing problems with openvswitch on the 3.11 kernel in saucy
[10:48] <jamespage> smb, I'm backporting the upstream fixes for 3.10 - which work fine on 3.10
[10:48] <jamespage> smb, the problem I'm seeing exists for the current version in archive as well
[10:53] <jamespage> smb, bug 1213879
[10:56] <jamespage> smb, autopkgtest would concur with this perspective - https://jenkins.qa.ubuntu.com/view/Saucy/view/AutoPkgTest/job/saucy-adt-openvswitch/35/
[11:00] <jamespage> smb, looks like the last good run was actually against 3.10
[11:00] <jamespage> smb, I'll keep digging
[11:00] <jamespage> ...
[11:30] <smb> jamespage, I  had been building and manually loading the module which was the only action the dkms testing does. So there sure could be issues left that will not get caught by this.
[11:31] <jamespage> smb, I see an extra error message 'openvswitch: cannot register gre protocol handler'
[11:31] <jamespage> and when I try to run the openflow test from debian/tests my machine dies horribly...
[11:31] <smb> Hm... was that not something the in-kernel one would show?
[11:31] <jamespage> smb, no - the in-kernel one appears to be OK - I just tested that
[11:32] <smb> Just the thing about gre protocol
[11:32] <jamespage> smb, I'm looking at the delta in datapath.c between the 3.11 kernel and the datapath.c that we have in the dkms module
[11:32] <smb> The stacktrace at least points into the dkms module
[11:33] <smb> That might be quite large if the statement of upstream about not pushing for all features is true
[11:34] <jamespage> smb, just noticed this - http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-saucy.git;a=history;f=net/openvswitch/vport-gre.c;h=493e9775dcdadb90ea383a26403d8bd11fc6face;hb=HEAD
[11:34] <jamespage> which might indicate that upstream have been pushing to get GRE tunnelling into the native kernel module
[11:34] <jamespage> which is good
[11:35] <smb> Yes, might make the dkms module unneeded
[11:35] <jamespage> smb, well it might
[11:35] <jamespage> smb, indeed
[11:36] <jamespage> this is hard - between 1.10 and 1.12 the tunnelling code got completely restructured
[11:38] <rbasak> Daviey: http://reqorts.qa.ubuntu.com/reports/ubuntu-server/merges.html was generated on 9 August. Could you see if something's failing, please, or am I hitting the wrong URL?
[11:38] <jamespage> smb, lemme email upstream and see what the best way forwards is....
[11:42] <smb> jamespage, Ok, as it looks the call to get_ifindex in get_dpifindex has been replaced by just a lookup instead of a call. We would have to look at the disassembly to be sure it was that that crashed. And then it still would not help that much as you noted the whole code got changed a lot
[11:46] <jamespage> smb, are you smb@ubuntu.com?
[11:46] <smb> I should be, too. Or the one at canonical.com
[12:00] <jamespage> smb, \o/ even with trunk of openvswitch my kernel modules explode!
[12:00] <jamespage> biab
[12:01] <smb> smb, At least consistent in that... :-P
[12:01] <smb> jamespage, ^ I am talking to myself again
[12:01] <jamespage> smb, lol
[13:01] <Daviey> rbasak: ok
[13:02] <Daviey> rbasak: updating by hand, will let you know what happens
[13:02] <rbasak> Thanks
[13:30] <zul> yay keystone needs python-oauth2
[13:33] <jamespage> zul, woot
[14:04] <zul> hallyn_: ping
[14:07] <zul> jamespage:  https://code.launchpad.net/~zulcss/keystone/oauth2/+merge/180878
[14:09] <hallyn_> zul: yeah
[14:09] <zul> hallyn_:  i think this might be the cause of your issues that you were having https://www.redhat.com/archives/libvir-list/2013-August/msg00344.html
[14:10] <hallyn_> zul: ok, i'll try - i'm not working on that right now
[14:10] <zul> hallyn_:  ill take the patch and upload a new version, apparently its affecting nova-compute
[14:10] <hallyn_> oh, ok
[14:11] <hallyn_> (suppose i shoulda tried 1.0.6 on friday)
[14:35] <zul> jamespage/roaksoax: https://bugzilla.redhat.com/show_bug.cgi?id=994855
[14:35] <zul> jamespage: shoot
[14:36] <zul> jamespage/roaksoax: https://code.launchpad.net/~zulcss/keystone/oauth2/+merge/180878
[14:36] <jamespage> zul, +1
[14:36] <jamespage> did you get anywhere with the httpretty dep upstream?
[14:40] <zul> jamespage:  no
[14:40] <zul> jamespage:  its on todo list for today
[14:40] <zul> jamespage:  just fixing a libvirt regression
[14:42] <smb> zul, will that include the fixes I mailed you or will that wait till the merge? :)
[14:47] <zul> smb:  it should already have the fixes there
[14:50] <smb> zul, sounds good. I am not sure but you and hallyn_ seemed to have struggled with libvirt and Xen from my ppa. Did that succeed at some point? I don't remember the outcome
[14:50] <zul> smb:  im not sure check with hallyn ;)
[14:51] <smb> zul, Guess that means you either had no issues or did not try :-P
[14:51] <zul> smb:  i dont have the hardware until the end of the month :(
[14:52] <smb> zul, Ah there was that.
[14:53] <hallyn_> smb: it did not
[14:53] <hallyn_> smb: but i' not working on it this morning
[14:54] <hallyn_> smb: zul thought https://www.redhat.com/archives/libvir-list/2013-August/msg00344.html might actually be my problem
[14:54] <hallyn_> i haven't tested it.
[14:54] <zul> hallyn_:  ok i included it in ubuntu2 anyways
[14:54] <smb> hallyn_, In general you should forget about PV and libvirt. IMO that has never worked
[14:54] <hallyn_> zul: ok.  now that was to hopefully fix my inability to connect with virt-manager, right?
[14:55] <hallyn_> smb: huh?  wht about nova?
[14:55] <smb> hallyn_, Is that only doing PV ? Not HVM?
[14:55] <zul> hallyn_:  apparently it was causing nova-compute to crash if you were using libvirt
[14:55] <hallyn_> smb: mind you i also wasn't able to start domains by hand.  but then if that's working for you then it's probably user error
[14:56] <smb> hallyn_, Might be. Though a bit odd and I would be interested in seeing more details on the failure. Whenever you work on it again
[14:57] <hallyn_> ok
[14:59] <smb> hallyn_, At least on my machines libvirt and Xen HVM was working, but there also was an odd sudden fail on creating new guests that was related to virtinst. One of those which make you wonder how this ever worked. The Saucy version should be ok but depending on what the machine runs on which you run virt-manager this might still trigger. But the symptom is it trying to use hvmloader with an invalid path.
[15:07] <hallyn_> smb: perhaps you should put up a quick wiki page with precise instructions for how you create a guest
[15:10] <smb> hallyn_, From virt-manager that should not be so different from KVM, supply a virt cdrom/iso and install from there. This also automatically makes your guest HVM.
[15:10] <hallyn_> smb: right i meant without libvirt.  (virt-manager - well that'll be fine when virt-manager manages to connect;  but doesn't help me right now :)
[15:12] <smb> hallyn_, Probably the not connect is because the unix socket is not on by default. You probably want https://wiki.ubuntu.com/Kernel/Reference/Xen then. :)
[15:13] <hallyn_> smb: ?  hm, no, i don't see anything there that's new to me.  but np, i'll get back to it at some point.  having cgroup troubles.
[15:15] <smb> hallyn_, Hm, ok. But yeah, lets wait until you got sorted the other issues
[16:24] <zul> jamespage:  i think i'm going to bite the bullet and package httpretty, i can see stuff like keystone using it
[17:23] <hallyn_> stgraber: oh fud.  i pushed the right commit to git...  but with the wrong description.
[17:23] <stgraber> hallyn_: --amend + push --force
[17:23] <hallyn_> so if you're wondering "what is that"...  i goofed.  i think it's too late to git push --force it now.
[17:23] <hallyn_> well
[17:23] <stgraber> hallyn_: I haven't updated my branch in a few hours, so it should be fine
[17:23] <hallyn_> you haven't pulled yet?
[17:23] <hallyn_> ok
[17:27] <hallyn_> mind you i was most of the way through building new ppa packages, but that should be ok
[17:27] <hallyn_> all right, updated.  back in awhile
[17:28] <hallyn_> stgraber: I intend to think about and solve the lxc.snapshots problem now.  if you can think of a more urgent bug in lxc that i should be addressing right now, shout
[17:28] <hallyn_> heh, tonight/tomorrow i should focus on coverity :)
[17:31] <stgraber> hallyn_: making sure we're pretty low latency for patches getting to lxc-devel in the next few days would be great, but you do a very good job at that usually already. I'll need to take a look once I'm back home to see if manpages/doc/... need some updating and if there's any regression in the bindings that we should address prior to release.
[17:32] <stgraber> it's just the first alpha, so it doesn't need to be perfect, but that's what people are likely to be using when coming to Plumbers, so better try to solve as many issues as possible
[17:37] <hallyn_> speaking of that,
[17:37] <hallyn_> smoser: your patch is in git, but i'm afraid i didn't get it into the newest ppa build.  let me know if it's urgen (i assume it's not as the previous workaround was nonideal but functional)
[17:40] <sidnei> hallyn_: around?
[17:41] <sidnei> hallyn_: fun one: if i run mir, when i do lxc-start it kills my x session and im dropped to console. :)
[17:43] <smoser> hallyn_, yes. non-ideal but funciontal. thanks.
[17:43] <hallyn_> sidnei: is that in a stock saucy desktop install?
[17:43] <sidnei> hallyn_: it's been upgraded from raring
[17:43] <smoser> hallyn_, why do you hate MIR so much ?
[17:44] <hallyn_> smoser: i love mir
[17:44] <smoser> :)
[17:44] <hallyn_> sidnei: your container does have its own networking right?
[17:44] <hallyn_> sidnei: oh wait.
[17:44] <hallyn_> sidnei: are you running from ubuntu-lxc daily ppa?
[17:44] <sidnei> hallyn_: yup
[17:44] <hallyn_> yeah...
[17:44] <hallyn_> i've got a little snafu there... you're not running with proper apparmor profile
[17:45] <sidnei> i see. i've reverted to nvidia from nouveau and that works around it for now.
[17:45] <hallyn_> sidnei: do you know how to build from git?
[17:45] <hallyn_> oh well, says it's already built
[17:45] <hallyn_> sidnei: try upgrading lxc, and see if it fixes it
[17:46] <hallyn_> sidnei: if it doesn't, it's possible that access to 5:0 or 5:1 is doing it
[17:46] <sidnei> hallyn_: in other news, lxc-clone -s -L4G should create a 4G lv? seems like it creates an lv with the same size as the original, but with a 4G COW-table, whatever that means.
[17:48] <hallyn_> sidnei: yeah lxc passes the size along, but it's possible that lvm can't actually do what we're asking
[17:48]  * hallyn_ checks the manpage
[17:49] <hallyn_> yeah, from the lvcreate manpage:
[17:49] <hallyn_>        "lvcreate --virtualsize 1T --size 100M --snapshot --name sparse vg1"
[17:49] <hallyn_>        creates a sparse device named /dev/vg1/sparse of size 1TB with space for just under 100MB of actual data on it.
[17:49] <hallyn_> that's what we're doing.  might be worth a warning to the user at lxc-clone.
[17:49] <hallyn_> but that would be wishlist prio :)
[17:50] <hallyn_> no i'm misreading
[17:50] <hallyn_> i'll just test it (later)
[18:05] <styol> ello. I've got several servers running Ubuntu 12.04 and am performing some benchmarking. I seem to be experiencing connection resets and am trying to troubleshoot it. If one server running a load balancer (HAProxy) is showing literally the exact same connection reset output from tcpdump as is shown on one of the servers being load balanced, that does indeed suggest that the load balanced server was the source of the connection reset, correct?
[18:32] <smoser> hallyn_, thoughts really quick...
[18:32] <smoser> would you consider an 'alias' in lxc ?
[18:33] <smoser> so i could 'lxc-clone -o precise-amd64 -n test1'
[18:33] <smoser> as an interface / alias for "clone the latest" precise-amd64
[18:33] <smoser> where something else would manage 'precise-amd64 -> precise-amd64-20130824'
[18:33] <smoser> or something.
[18:34] <smoser> to that effect
[18:36] <hallyn_> smoser: hm
[18:37] <smoser> hallyn_, basically i'd like to have something pulling in simplestreams data and keeping 'precise-amd64' as a "symlink" or alias of sorts to the latest thing pulled in
[18:37] <hallyn_> right,
[18:37] <hallyn_> i don't really want to add a new list of those.  but,
[18:38] <hallyn_> i'd be ok with allowing a container config which just says 'lxc.alias = xxx'
[18:38] <smoser> but then i'd have to teach the user to read that config
[18:38] <smoser> right ?
[18:39] <smoser> ie: lxc-clone -o $(find-lxc-with alias=precise-amd64) -n test1
[18:39] <smoser> i was just hoping to avoid the 'find-lxc-with'
[18:40] <hallyn_> no, i was thinking lxc would do it for you
[18:40] <smoser> ah.
[18:40] <smoser> i then misunderstood "don't really want to add a new list of those."
[18:40] <smoser> ah.
[18:40] <hallyn_> i just meant i don't want an external list
[18:41] <smoser> right. ok.
[18:41] <hallyn_> really, i suspect
[18:41] <hallyn_> if you just do echo "lxc.include = /var/lib/lxc/precise-whatever/config" > /var/lib/lxc/precise/config,
[18:41] <hallyn_> that might just work
[18:42] <hallyn_> well, lxc-clone might make too many assumptions for that to work
[18:42] <hallyn_> smoser: but weren't we thinking of having a separate small package keep track of the containers anyway?
[18:43] <hallyn_> seems like a '$(find-latest precise)' would be trivial to use
[18:43] <hallyn_> lxc-clone -s -o $(find-latest precise) -n precise-test
[18:43] <smoser> yes, we'd have the small program. that was the idea. and i'd have it maintain the alias.
[18:43] <smoser> but i didn't want to teach the user about 'find-latest'
[18:43] <smoser> i wanted lxc to do that for me.
[18:44] <smoser> if you're against it, we can just plan on making the user (in this case juju)
[18:44] <hallyn_> smoser: lxc-start is happy with my suggestion above
[18:44] <hallyn_> lemme try clone
[18:46] <hallyn_> no lxc-clone doesn't detect the fstype right (this is with lvm).  but that may be fixable
[18:47] <smoser> well, hallyn, i'm fine with letting you decide whether or not it's fixable / desirable.
[18:47] <smoser> i'd 'clone' not understanding lxc.include to be bug, but that is neither here nor there.
[18:48] <hallyn_> smoser: well it understands include, i think.  it might understand it too well
[18:49] <hallyn_> smoser: right, the problem is that lxc-clone wants to update the old container name to the new
[18:49] <hallyn_> so it wants the disk name to match container name (or at least contain it)
[18:49] <hallyn_> i'll come up with something
[18:50] <smoser> above that was bad syntax.
[18:50] <smoser> i'm good with you deciding if aliases of that sort are desirable or not.
[18:51] <smoser> but i would consider 'clone' not understanding lxc.include to be a bug, but that is neither here nor there.
[18:54] <hallyn_> smoser: actually, just a symlink works
[18:54] <hallyn_> sudo ln -s /var/lib/lxc/{c-saucy,c}; sudo lxc-clone -s -o c -n c2
[18:55] <hallyn_> works - other than at least one little corruption in print output
[18:55] <smoser> i figured you were going ot suggest that. do you think that is maintainable?
[18:55] <hallyn_> in what sense?
[18:55] <hallyn_> that we wouldn't get rid of that unintended feature?
[18:55] <smoser> well, it's kind of hard to decide when it's right to resolve that link and when it is not.
[18:55] <smoser> and yes, the unexpected feature
[18:55] <hallyn_> that's the nice thing about that,
[18:56] <hallyn_> i'm not resolving that link, i just open $lxcpath/$lxcname/config, and take all values from the configfile
[18:56] <smoser> ie, does clone resolve that its cloned 'c-saucy' or 'c'
[18:56] <smoser> because its mounts need to have the full path resolved
[18:56] <hallyn_> it's erroneously using c
[18:56] <hallyn_> yes, i'll need to update that.
[18:56] <hallyn_> so, which do you prefer?  symlink, or lxc.alias?
[18:57] <hallyn_> from end user pov
[18:57] <smoser> it seems that with lxc.alias we can explicitly define the behavior without legacy concern.
[18:58] <smoser> where symlinks have some expected legacy behavior.
[18:58] <hallyn_> ok, will try lxc.alias and float a patch tonight
[18:58] <hallyn_> or tomorrow
[18:59] <adam_g> roaksoax, does maas setup some apt proxy by default for nodes to use?
[19:04] <roaksoax> adam_g: only on raring+
[19:05] <roaksoax> adam_g: by default on all maas versions we use maas' squid-deb-proxy, but only for deployment (not for commissioning/enlistment). Raring+ allows you to modify what apt_proxy to use on the MAAS WebUI
[19:06] <roaksoax> adam_g: and from raring+, it is also used for enlistment/commissioning
[19:06] <adam_g> roaksoax, ok, so by default provisioned nodes come up behind an apt proxy?
[19:06] <roaksoax> adam_g: yes. MAAS has squid-deb-proxy which is used by default
[19:18] <sidnei> hallyn_: ok, i understand the confusion now. the size specified in lxc-clone -L is the size allocated for the snapshot but the snapshot cannot ever be bigger than the original volume
[19:19] <hallyn_> right
[19:22] <sidnei> hallyn_: it could be interesting to use thinpools so that the snapshots are not pre-allocated
[19:22] <sidnei> if im understanding correctly what it does
[19:25] <hallyn_> that would be on the original?
[19:25] <hallyn_> patches or descriptive bugs welcome :)  i've not heard of them but sounds likea  good idea
[19:27] <sidnei> if the original is on a thinpool the snapshot is automatically allocated on a thinpool it seems
[19:30] <hallyn_> is there any downside?
[19:32] <sidnei> if you overallocate and run out of disk space you get processes stuck in D state it seems
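The thin-pool idea sidnei describes looks roughly like this with the LVM tools — a sketch only, with a hypothetical volume group `vg0`, needing root on a machine that actually has LVM set up:

```shell
# Create a thin pool, a thin volume inside it, and a snapshot.
# Thin snapshots allocate blocks on demand instead of reserving a
# fixed COW area up front, which is what lxc-clone -L sizes today:
lvcreate -L 20G --thinpool tp0 vg0
lvcreate -V 10G --thin -n rootfs vg0/tp0
lvcreate -s --name rootfs-snap vg0/rootfs

# Watch pool usage; letting the pool fill up is the overallocation
# case that leaves writers stuck in D state:
lvs -o lv_name,data_percent vg0
```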
[20:11] <w00ter> Anyone have experience with the chipset used on the ASRock B75 Pro3-M motherboard (known compatibility issues)?
[20:39] <wedgwood> Is there a way to create a bind mount in an LXC container such that it will always be owned by "ubuntu" (or some specific user)?
[20:41] <sidnei> wedgwood: there's a shortcut to bind-mount ~user, not sure if that's what you want
[20:50] <wedgwood> sidnei: I'm hoping to mount a directory from the host to the same place in the container.
[20:51] <sidnei> wedgwood: juju-core does it so it's certainly doable. you have to create a custom config. let me look it up for you.
[20:55] <sidnei> wedgwood: looks like the syntax is lxc.mount.entry=/path/to/host/dir path/to/container/dir, where path/to/container/dir needs to exist and is relative to the 'rootfs' dir
[20:56] <sidnei> wedgwood: juju-core uses 'lxc.mount.entry=/var/log/juju var/log/juju' iiuc
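Written out as it would appear in the container config (paths hypothetical; the entry uses fstab field order, so the trailing fields are fs type, options, dump, pass):

```
# /var/lib/lxc/<name>/config — bind-mount a host dir into the container;
# the target is relative to the container's rootfs and must already exist
lxc.mount.entry = /srv/shared srv/shared none bind 0 0
```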
[20:57] <wedgwood> sidnei: yep, I've got that part, but If the host directory is owned by uid=1000, it will also be owned by uid=1000 in the container
[20:57] <sidnei> i see what you mean
[20:58] <sidnei> http://s3hh.wordpress.com/2011/09/22/sharing-mounts-with-a-container/ maybe?
[20:58] <wedgwood> I *think* that the ubuntu template makes an initial ubuntu user with UID=1000, so if the user, jenkins for instance, has UID=1001, ubuntu won't have access to it.
[20:59] <wedgwood> I had a look at that post... lemme look again
[20:59] <sidnei> you might be able to specify the uid in the fstab entry
[20:59] <wedgwood> maybe so. I didn't see a parameter like that in the mount manual
[21:00] <adam_g> roaksoax, where does maas end up setting proxy settings on a provisioned node?
[21:01] <sidnei> ah, not all fs types support uid/gid
[21:02] <wedgwood> I was thinking there might be a UID namespace map in LXC
[21:02] <sidnei> possibly yeah
[21:02] <wedgwood> that's more what I was expecting, actually
[21:03] <sidnei> wedgwood: you could also make it group-writable and add ubuntu to the 1001 group, or whatever is the jenkins user's group?
[21:04] <wedgwood> yeah, that's a possibility
[21:04] <koolhead1> Daviey: ping
[21:05] <wedgwood> I still think that could be trouble. if the guest user created files with go-rwx, then the host wouldn't have access
[21:07] <roaksoax> adam_g: where as in the code?
[21:07] <roaksoax> adam_g: in the preseed we tell to set up the mproxy
[21:07] <roaksoax> adam_g: for apt
[21:07] <roaksoax> and apt configures it automatically
[21:07] <roaksoax> in /etc/apt/conf....
[21:07] <roaksoax> something
[21:07] <adam_g> roaksoax, no, after the node is up and commissioned
[21:08] <roaksoax> adam_g: during commissioning/enlistment it does not set the mproxy. (for precise/quantal), for raring+ it does
[21:08] <adam_g> roaksoax, i thought it was using squid-deb-proxy prior to raring?
[21:10] <wedgwood> hallyn_: still around? do you know whether it's possible to map a UID inside a container to one outside so that it can work with files in a bind mount?
[21:17] <roaksoax> adam_g: it will always use squid-deb-proxy by default
[21:17] <roaksoax> adam_g: however in precise, quantal, it does not use it for enlistment/commissioning, only for deployment
[21:17] <roaksoax> adam_g: in raring+ it uses it for enlistment/commissioning/deployment + it is easily customizable on the WebUI!
[21:17] <hallyn_> wedgwood: just dictate the uid in /etc/passwd in the container
[21:17] <hallyn_> wedgwood: (I assume you're not using a user namespace)
[21:18] <roaksoax> adam_g: if you want to use it for enlistment/commissioning in precise, then you'd need to hack /etc/maas/commissioning-user-data and /usr/share/maas/preseeds/enlist_userdata
[21:18] <roaksoax> adam_g: if you want to use a different one in raring/saucy, you can do so on the WebUI
[21:18] <hallyn_> wedgwood: you'll probably want to doublecheck the primary group for the user too
[21:18] <roaksoax> (and probably via cli too)
[21:19] <wedgwood> hallyn_: OK, yeah. That's a simple solution. Thanks.
[21:21] <adam_g> roaksoax, what file in /etc/apt/ gets updated to actually use the proxy.
[21:25] <roaksoax> adam_g: can't remember it is done automatically by preseeding
[21:26] <roaksoax> i'll deploy a node and let you know
[21:38] <Daviey> koolhead17: hey
[22:14] <sidnei> hallyn_: if i compile from git, what's the easiest way to test my changes short of doing 'make install'? i guess i have to play with LD_LIBRARY_PATH and such?
[22:14]  * sidnei < C-newbie
[23:11] <hallyn_> sidnei: actually the easiest way is to get the package source from the ppa (using dget on the url for the .dsc) and apply the missing patches from git,
[23:11] <hallyn_> then debian/rules build && fakeroot debian/rules binary
[23:11] <sidnei> too late!
[23:11] <hallyn_> sidnei: but the ppa should have just about everything in git
[23:11] <hallyn_> ok
[23:12] <hallyn_> my wireless repeater was playing games with me
[23:12] <sidnei> hallyn_: https://github.com/lxc/lxc/pull/33 (still wip)
[23:19] <jose> hey guys, is the keyboard not working a common bug on the install cd?
[23:30] <zerick> Has anybody here uses or knows about OCFS2 ?
[23:50] <xkernel> how to add Apache virtualhosts for the same domain but for sub dirs?
[23:50] <xkernel> like domain.com should open /var/www/domain
[23:51] <xkernel> and domain.com/subdir should open /var/www/subdir
[23:52] <sarnold> xkernel: the <directory> directive works within a virtualhost directive: http://httpd.apache.org/docs/2.2/mod/core.html#directory
[23:55] <sarnold> xkernel: ah, hrm, maybe <directory> isn't what you'd want. I'm nearly certain this page describes how to get where you want, but nothing you can just copy-and-paste: http://httpd.apache.org/docs/2.2/sections.html
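For xkernel's layout a single vhost plus mod_alias is usually enough — a sketch in Apache 2.2 syntax (matching the docs linked above), using the hypothetical paths from the question:

```apache
<VirtualHost *:80>
    ServerName domain.com
    DocumentRoot /var/www/domain

    # domain.com/subdir served from a directory outside the DocumentRoot
    Alias /subdir /var/www/subdir
    <Directory /var/www/subdir>
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```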
[23:56] <jose> prob solved over here. thanks!