[00:58] looking for the sanest route to hosting shell accounts for a modest number of users
[00:58] LXC?
[00:59] Hi
[00:59] Hi JasonO
[00:59] Hi tarvid
[01:00] Looking for wisdom from folks with LXC experience
[01:02] I am having problems enabling SSL on my virtual host. Can someone please help me?
[01:03] When i reload apache2 I get: [warn] _default_ VirtualHost overlap on port 443, the first has precedence
[01:04] Is there a way to allow both on 443 without conflict?
[01:07] NameVirtualHost?
[01:08] tarvid: amjjawad
[01:33] hi there
[01:33] i typed `apt-get update`
[01:33] now i see a long list of
[01:33] EDAC i7core: Lost 127 memory errors
[01:34] the list has not stopped running for the past 5 mins. What is happening?
[01:34] i am using 12.04 LTS server edition
[01:34] Error detection and correction
[01:35] No point in waiting
[01:35] Run the memory test on bootup
[01:36] tarvid: how do I run the memory test on bootup?
[01:36] I think it is a grub option but you can use any install disk and run memtest
[01:37] tarvid: I am sorry I am quite new at this
[01:37] I don't have any ubuntu installation disks with me now
[01:37] i am on site at another place
[01:37] how do I try this grub option?
[01:37] borrow another machine and make one
[01:38] But it sounds like hardware issues
[01:38] oh shucks
[01:38] How many sticks in the machine
[01:38] yes shucks
[01:38] sticks?
[01:39] ram?
[01:39] they could be loose
[01:39] or dirty
[01:40] You may be able to run on part of them
[01:40] like 2 out of 4 or one out of 2
[01:41] fancy board? I7?
[01:42] @tarvid I just restarted the server
[01:42] the errors are no more
[01:42] do i just assume that everything is okay?
[01:42] errors that go away gratuitously often come back
[01:42] i also just ran `apt-get update`. It finished without seeing the errors
[01:43] i see.
[01:43] may have been the odd cosmic ray that zapped a bit
[01:43] so what should I do now?
[01:43] several choices
[01:43] ignore it as just a fluke in the universe
=== NomadJim_ is now known as NomadJim
[01:45] install memtest86+ and let it run when you go home for the night
[01:45] tarvid: someone suggested the following to me
[01:45] http://askubuntu.com/a/334332/10591
[01:47] a bit paranoid
[01:47] most people don't have EDAC and live
[01:48] doesn't read too paranoid to me. it sounds like a decent crash-course on ECC. if you have lots of correctable errors, good news, you got ECC. if you start seeing uncorrectable errors, pull the ram
[01:49] an object lesson yes, a crash course??? We don't know how much ram is in the machine
[01:49] what is ECC?
[01:50] I am trying to find out sorry hang on
[01:51] these are the specs:
[01:51] 1 x Intel® Xeon® L5630 12M Cache, 2.13 GHz Processor, 2GB x 4 RAM 2 x 146GB SAS 15K HDD
[01:53] ECC is error-correction in RAM. very common in server-class systems, reasonably common in workstation-class systems, rare in desktops and near unheard of in laptops
[01:53] I am running `apt-get dist-upgrade -y` now will take a while to stop
[01:53] so if I have ECC, that is good news right?
[01:54] Good news is zero errors
[01:54] memtest86+
[01:55] tarvid: understood. now rebooting after finishing `apt-get dist-upgrade -y`
[01:55] over here, it is 955am
[01:56] there are people who need to use the server. So I will run the memtest at end of biz day
[01:57] Oh, looks like I need to run memtest from a CD or usb flash drive
[01:57] Good plan. Memory is pretty cheap these days
[01:57] http://en.wikipedia.org/wiki/Memtest86
[01:57] is this correct?
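A minimal sketch of the NameVirtualHost fix suggested above for the port 443 overlap warning, assuming Apache 2.2 on Ubuntu 12.04; the site name, paths and certificate files are placeholders, and serving different certificates for several names on one IP additionally relies on SNI support:

    # untested sketch: enable name-based virtual hosts on 443
    sudo a2enmod ssl
    # often added to /etc/apache2/ports.conf next to the existing Listen 443
    echo 'NameVirtualHost *:443' | sudo tee -a /etc/apache2/ports.conf
    # one <VirtualHost *:443> block per site, each with its own ServerName
    sudo tee /etc/apache2/sites-available/example-ssl <<'EOF'
    <VirtualHost *:443>
        ServerName example.com
        DocumentRoot /var/www/example
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/example.com.pem
        SSLCertificateKeyFile /etc/ssl/private/example.com.key
    </VirtualHost>
    EOF
    sudo a2ensite example-ssl && sudo service apache2 reload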
[01:57] you don't need to
[01:57] you can run it from anywhere
[01:58] just those are normally common
[01:58] erm is it simply something i can apt-get install?
[01:58] I run it all the time via pxe
[01:58] apt-get I dunno
[01:58] but you can add it as a grub option
[01:59] no one should run a computer without ecc ram these days
[01:59] ooooooooooooooooooooooo, the vast majority of people run fine without ecc
[02:00] tarvid, the vast majority reboot their computer daily, cause of random issues :)
[02:00] erm sorry guys
[02:01] i have an immediate problem
[02:01] i tried to open my kern.log
[02:01] and ...............?
[02:01] suspense
[02:02] tarvid: sorry co-worker interrupted me
[02:02] the kern.log was huge so i couldn't leave
[02:02] could be panic time
[02:02] i mean ctrl X when i attempted to exit from nano
[02:03] forget kern.log, and use dmesg instead
[02:03] now i can
[02:03] they are asking me to turn the webapp back on for them
[02:03] because it's urgent for them.
[02:03] i am caught between debugging the issue further and restoring the webapp for them
[02:04] patdk-lap: what is dmesg?
[02:04] where the stuff comes from, that goes into kern.log
[02:05] i see
[02:05] but it only has the last 1000 or so messages
[02:05] i just typed
[02:05] its long
[02:05] i dunno what i am looking for in dmesg
[02:05] ideally, on a normal system, there would be nothing
[02:06] if you're able, I'd err towards swapping the ram out and testing it in a machine that isn't so urgent. spraying memory errors isn't healthy
[02:06] unless you have firewall logging
[02:06] but you should only have events, for large stuff, like, insert new disk, remove disk
[02:06] i am unable to swap out the ram and test on another machine unfortunately
[02:07] My guess you are headed for a full crash. If you have spare RAM do
[02:08] You could try half the ram and hope the half left in the machine is good
[02:09] memory pretty much has three conditions. green light is no errors. this is what you want. amber light is ECC catching errors. this is a huge warning, especially if they're numerous/frequent. red light is uncorrectable errors. which usually manifests itself as "random" corruption & crashes.
[02:09] ECC buys you that amber light. it's up to you to take the warning
[02:10] sorry guys
[02:10] somehow i got disconnected
[02:11] how can i retrieve the conversation of the last few minutes?
[02:11] i am on webchat.freenode.net
[02:11] hence i have no logs
[02:14] kimsia_: http://irclogs.ubuntu.com/2013/08/19/%23ubuntu-server.txt, its a few minutes behind
[02:15] thanks bradm
=== smb` is now known as smb
[07:26] I'm trying to "become" the user www-data, but "sudo -u www-data" doesn't do anything and "su -u www-data" requires that I know the password for the user. I have root access to the server, and I know I've managed this once before, but I can't remember how I did it...
[07:26] oops, I mean "su - www-data" requires password
[07:33] nilli: it could be because www-data doesn't have a shell (or it is set to /bin/false or something). Try sudo -u www-data but specifically with the command you want to run as www-data.
[07:34] unfortunately I'm trying to run a command for a huge directory and sudo can't handle the amount of files in it.. I figured I would change to the right user so that I won't be limited by sudo.
[07:35] In what way can sudo not handle the amount of files in it?
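To make the apt-get/grub suggestion above concrete: memtest86+ is a normal package and adds itself to the GRUB menu, so no CD or USB stick is strictly required, and the EDAC counters can be checked from the running system. A rough sketch, assuming Ubuntu 12.04 with GRUB 2; the edac-utils package is optional:

    # installs /boot/memtest86+.bin and the "Memory test (memtest86+)" GRUB entries
    sudo apt-get install memtest86+
    sudo update-grub                     # regenerate /boot/grub/grub.cfg so the entries appear
    # before rebooting, see whether the kernel is still reporting EDAC/memory errors
    dmesg | grep -i -e edac -e 'memory error'
    # optional: per-DIMM corrected/uncorrected counters, if edac-utils is installed
    edac-util -v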
[07:36] sudo: unable to execute /usr/bin/find: Argument list too long
[07:36] that's find, not sudo
[07:36] no, the error is for sudo
[07:37] I assume you're doing something like find * ?
[07:37] sudo is reporting the error that find returns
[07:37] sudo find /my/path/* -mtime +30 -exec rm {} \;
[07:37] Drop the *
[07:37] yes. Remove the *. Marvel as it works
[07:38] I found another way to solve my problem so I know that's not the issue :)
[07:38] sudo find /my/path/ -mtime +30 -exec rm {} \;
[07:38] I logged in as root instead of my normal user and did chmod so I got permission to write to the files in that directory
[07:39] then I went back to my own user and simply dropped "sudo" from the command
[07:39] voila. no problems.
[07:39] thanks for your suggestions anyway
[07:49] hallyn_, zul, I saw in the scrollback that you had a few issues. Ping me when you are there and I'll try to help.
=== freeflying is now known as freeflying_away
[08:40] What to do when a tcp port is in use (aka I can't bind to it) but nothing is using that port (at least netstat doesn't show anything)?
=== freeflying_away is now known as freeflying
[10:47] smb, hey - fyi I'm seeing problems with openvswitch on the 3.11 kernel in saucy
[10:48] smb, I'm backporting the upstream fixes for 3.10 - which work fine on 3.10
[10:48] smb, the problem I'm seeing exists for the current version in archive as well
[10:53] smb, bug 1213879
[10:53] Launchpad bug 1213879 in openvswitch "kernel fault ovs 1.10.1 + linux 3.11" [Undecided,New] https://launchpad.net/bugs/1213879
=== psivaa_ is now known as psivaa
[10:56] smb, autopkgtest would concur with this perspective - https://jenkins.qa.ubuntu.com/view/Saucy/view/AutoPkgTest/job/saucy-adt-openvswitch/35/
[11:00] smb, looks like the last good run was actually against 3.10
[11:00] smb, I'll keep digging
[11:00] ...
[11:30] jamespage, I had been building and manually loading the module which was the only action the dkms testing does. So there sure could be issues left that will not get caught by this.
[11:31] smb, I see an extra error message 'openvswitch: cannot register gre protocol handler'
[11:31] and when I try to run the openflow test from debian/tests my machine dies horribly...
[11:31] Hm... was that not something the in-kernel one would show?
[11:31] smb, no - the in-kernel one appears to be OK - I just tested that
[11:32] Just the thing about gre protocol
[11:32] smb, I'm looking at the delta in datapath.c between the 3.11 kernel and the datapath.c that we have in the dkms module
[11:32] The stacktrace at least points into the dkms module
[11:33] That might be quite large if the statement of upstream about not pushing for all features is true
[11:34] smb, just noticed this - http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-saucy.git;a=history;f=net/openvswitch/vport-gre.c;h=493e9775dcdadb90ea383a26403d8bd11fc6face;hb=HEAD
[11:34] which might indicate that upstream have been pushing to get GRE tunnelling into the native kernel module
[11:34] which is good
[11:35] Yes, might make the dkms module unneeded
[11:35] smb, well it might
[11:35] smb, indeed
[11:36] this is hard - between 1.10 and 1.12 the tunnelling code got completely restructured
[11:38] Daviey: http://reqorts.qa.ubuntu.com/reports/ubuntu-server/merges.html was generated on 9 August. Could you see if something's failing, please, or am I hitting the wrong URL?
[11:38] smb, lemme email upstream and see what the best way forwards is....
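Two of the shell problems above have short answers: getting a command (or shell) running as www-data despite its non-login shell, and the "Argument list too long" error, which comes from the shell expanding /my/path/* into more arguments than the kernel's exec limit allows rather than from sudo or find themselves. A sketch, reusing the paths quoted above:

    # an interactive shell as www-data, bypassing a /bin/false or nologin login shell
    sudo -u www-data /bin/bash
    # or, from a root shell:
    su -s /bin/bash www-data
    # let find walk the directory instead of the shell glob: no "Argument list too long"
    sudo find /my/path/ -mtime +30 -exec rm {} \;
    # GNU find can also delete directly, avoiding one rm process per file
    sudo find /my/path/ -type f -mtime +30 -delete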
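On the unanswered "port is in use but netstat shows nothing" question: the usual suspects are a process owned by another user (the PID/program column stays empty unless netstat runs as root), or sockets lingering in TIME_WAIT that plain netstat without -a does not list. A sketch of how one might look; port 8080 is only a placeholder:

    # as root, so the PID/program column is populated for every socket
    sudo netstat -atnp | grep :8080
    # ss shows all states, including TIME_WAIT sockets that block a bind without SO_REUSEADDR
    sudo ss -antp | grep :8080
    # lsof and fuser can also name the owner of the port
    sudo lsof -i :8080
    sudo fuser -v 8080/tcp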
[11:42] jamespage, Ok, as it looks the call to get_ifindex in get_dpifindex has been replaced by just a lookup instead of a call. We would have to look at the disassembly to be sure it was that that crashed. And then it still would not help that much as you noted the whole code got changed a lot
[11:46] smb, are you smb@ubuntu.com?
[11:46] I should be, too. Or the one at canonical.com
[12:00] smb, \o/ even with trunk of openvswitch my kernel modules explode!
[12:00] biab
[12:01] smb, At least consistent in that... :-P
[12:01] jamespage, ^ I am talking to myself again
[12:01] smb, lol
=== psivaa is now known as psivaa-lunch
[13:01] rbasak: ok
[13:02] rbasak: updating by hand, will let you know what happens
[13:02] Thanks
=== psivaa-lunch is now known as psivaa
=== Tzunamii_ is now known as Tzunamii
=== Jikai is now known as Jikan
[13:30] yay keystone needs python-oauth2
[13:33] zul, woot
=== dduffey_afk is now known as dduffey
[14:04] hallyn_: ping
[14:07] jamespage: https://code.launchpad.net/~zulcss/keystone/oauth2/+merge/180878
[14:09] zul: yeah
[14:09] hallyn_: i think this might be the cause of your issues that you were having https://www.redhat.com/archives/libvir-list/2013-August/msg00344.html
[14:10] zul: ok, i'll try - i'm not working on that right now
[14:10] hallyn_: ill take the patch and upload a new version, apparently its affecting nova-compute
[14:10] oh, ok
[14:11] (suppose i shoulda tried 1.0.6 on friday)
=== ikonia_ is now known as ikonia
[14:35] jamespage/roaksoax: https://bugzilla.redhat.com/show_bug.cgi?id=994855
[14:35] zul: Error: Could not parse XML returned by bugzilla.redhat.com: HTTP Error 404: Not Found
[14:35] jamespage: shoot
[14:35] jamespage/roaksoax: https://bugzilla.redhat.com/show_bug.cgi?id=994855
[14:35] zul: Error: Could not parse XML returned by bugzilla.redhat.com: HTTP Error 404: Not Found
[14:36] jamespage/roaksoax: https://code.launchpad.net/~zulcss/keystone/oauth2/+merge/180878
[14:36] zul, +1
[14:36] did you get anywhere with the httpretty dep upstream?
[14:40] jamespage: no
=== freeflying is now known as freeflying_away
[14:40] jamespage: its on todo list for today
[14:40] jamespage: just fixing a libvirt regression
[14:42] zul, will that include the fixes I mailed you or will that wait till the merge? :)
[14:47] smb: it should already have the fixes there
[14:50] zul, sounds good. I am not sure but you and hallyn_ seemed to have struggled with libvirt and Xen from my ppa. Did that succeed at some point? I don't remember the outcome
[14:50] smb: im not sure check with hallyn ;)
[14:51] zul, Guess that means you either had no issues or did not try :-P
[14:51] smb: i dont have the hardware until the end of the month :(
[14:52] zul, Ah there was that.
[14:53] smb: it did not
[14:53] smb: but i'm not working on it this morning
[14:54] smb: zul thought https://www.redhat.com/archives/libvir-list/2013-August/msg00344.html might actually be my problem
[14:54] i haven't tested it.
[14:54] hallyn_: ok i included it in ubuntu2 anyways
[14:54] hallyn_, In general you should forget about PV and libvirt. IMO that has never worked
[14:54] zul: ok. now that was to hopefully fix my inability to connect with virt-manager, right?
[14:55] smb: huh? what about nova?
[14:55] hallyn_, Is that only doing PV ? Not HVM?
[14:55] hallyn_: apparently it was causing nova-compute to crash if you were using libvirt
[14:55] smb: mind you i also wasn't able to start domains by hand. but then if that's working for you then it's probably user error
[14:56] hallyn_, Might be. Though a bit odd and I would be interested in seeing more details on the failure. Whenever you work on it again
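When chasing the openvswitch crashes above it helps to confirm whether the loaded datapath module is the in-tree one or the DKMS build, since both register a module named "openvswitch". A sketch of checks one might use, assuming the DKMS build comes from the openvswitch-datapath-dkms package:

    # where the module that modprobe would load actually lives:
    # a path under updates/dkms/ means the DKMS build,
    # one under kernel/net/openvswitch/ means the in-tree module
    modinfo openvswitch | grep ^filename
    dkms status                        # lists module versions built via DKMS, if any
    dmesg | grep -i openvswitch        # e.g. the 'cannot register gre protocol handler' message quoted above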
[14:57] ok
[14:59] hallyn_, At least on my machines libvirt and Xen HVM was working, but there also was an odd sudden fail on creating new guests that was related to virtinst. One of those which make you wonder how this ever worked. The Saucy version should be ok but depending on what the machine runs on which you run virt-manager this might still trigger. But the symptom is it trying to use hvmloader with an invalid path.
[15:07] smb: perhaps you should put up a quick wiki page with precise instructions for how you create a guest
[15:10] hallyn_, From virt-manager that should not be so different from KVM, supply a virt cdrom/iso and install from there. This also automatically makes your guest HVM.
[15:10] smb: right i meant without libvirt. (virt-manager - well that'll be fine when virt-manager manages to connect; but doesn't help me right now :)
[15:12] hallyn_, Probably the not connect is because the unix socket is not on by default. You probably want https://wiki.ubuntu.com/Kernel/Reference/Xen then. :)
[15:13] smb: ? hm, no, i don't see anything there that's new to me. but np, i'll get back to it at some point. having cgroup troubles.
[15:15] hallyn_, Hm, ok. But yeah, lets wait until you got the other issues sorted
=== kees_ is now known as kees
[16:24] jamespage: i think im going to bite the bullet and package httpretty i can see stuff like keystone using it
=== tgm4883_ is now known as tgm4883
[17:23] stgraber: oh fud. i pushed the right commit to git... but with the wrong description.
[17:23] hallyn_: --amend + push --force
[17:23] so if you're wondering "what is that"... i goofed. i think it's too late to git push --force it now.
[17:23] well
[17:23] hallyn_: I haven't updated my branch in a few hours, so it should be fine
[17:23] you haven't pulled yet?
[17:23] ok
[17:27] mind you i was most of the way through building new ppa packages, but that should be ok
[17:27] all right, updated. back in awhile
[17:28] stgraber: I intend to think about and solve the lxc.snapshots problem now. if you can think of a more urgent bug in lxc that i should be addressing right now, shout
[17:28] heh, tonight/tomorrow i should focus on coverity :)
[17:31] hallyn_: making sure we're pretty low latency for patches getting to lxc-devel in the next few days would be great, but you do a very good job at that usually already. I'll need to take a look once I'm back home to see if manpages/doc/... need some updating and if there's any regression in the bindings that we should address prior to release.
[17:32] it's just the first alpha, so it doesn't need to be perfect, but that's what people are likely to be using when coming to Plumbers, so better try to solve as many issues as possible
[17:37] speaking of that,
[17:37] smoser: your patch is in git, but i'm afraid i didn't get it into the newest ppa build. let me know if it's urgent (i assume it's not as the previous workaround was non-ideal but functional)
[17:40] hallyn_: around?
[17:41] hallyn_: fun one: if i run mir, when i do lxc-start it kills my x session and im dropped to console. :)
[17:43] hallyn_, yes. non-ideal but functional. thanks.
[17:43] sidnei: is that in a stock saucy desktop install?
[17:43] hallyn_: it's been upgraded from raring
[17:43] hallyn_, why do you hate MIR so much ?
[17:44] smoser: i love mir
[17:44] :)
[17:44] sidnei: your container does have its own networking right?
[17:44] sidnei: oh wait.
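Spelling out the "--amend + push --force" fix mentioned above: it rewrites the last commit's message and republishes it, so it is only safe while nobody else has pulled the old commit (the remote and branch names below are placeholders):

    # fix the message of the most recent commit without changing its content
    git commit --amend
    # replace the old commit on the shared branch; only do this before others have pulled it
    git push --force origin master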
[17:44] sidnei: are you running from ubuntu-lxc daily ppa?
[17:44] hallyn_: yup
[17:44] yeah...
[17:44] i've got a little snafu there... you're not running with proper apparmor profile
[17:45] i see. i've reverted to nvidia from nouveau and that works around it for now.
[17:45] sidnei: do you know how to build from git?
[17:45] oh well, says it's already built
[17:45] sidnei: try upgrading lxc, and see if it fixes it
[17:46] sidnei: if it doesn't, it's possible that access to 5:0 or 5:1 is doing it
[17:46] hallyn_: in other news, lxc-clone -s -L4G should create a 4G lv? seems like it creates an lv with the same size as the original, but with a 4G COW-table, whatever that means.
[17:48] sidnei: yeah lxc passes the size along, but it's possible that lvm can't actually do what we're asking
[17:48] * hallyn_ checks the manpage
[17:49] yeah, from the lvcreate manpage:
[17:49] "lvcreate --virtualsize 1T --size 100M --snapshot --name sparse vg1"
[17:49] creates a sparse device named /dev/vg1/sparse of size 1TB with space for just under 100MB of actual data on it.
[17:49] that's what we're doing. might be worth a warning to the user at lxc-clone.
[17:49] but that would be wishlist prio :)
[17:50] no i'm misreading
[17:50] i'll just test it (later)
[18:05] ello. I've got several servers running Ubuntu 12.04 and am performing some benchmarking. I seem to be experiencing connection resets and am trying to troubleshoot it. If one server running a load balancer (HAProxy) is showing literally the exact same connection reset output from tcpdump as is shown on one of the servers being load balanced, that does indeed suggest that the load balanced server was the source of the connection reset, correct?
=== masACC is now known as maswan
[18:32] hallyn_, thoughts really quick...
[18:32] would you consider an 'alias' in lxc ?
[18:33] so i could 'lxc-clone -o precise-amd64 -n test1'
[18:33] as an interface / alias for "clone the latest" precise-amd64
[18:33] where something else would manage 'precise-amd64 -> precise-amd64-20130824'
[18:33] or something.
[18:34] to that effect
[18:36] smoser: hm
[18:37] hallyn_, basically i'd like to have something pulling in simplestreams data and keeping 'precise-amd64' as a "symlink" or alias of sorts to the latest thing pulled in
[18:37] right,
[18:37] i don't really want to add a new list of those. but,
[18:38] i'd be ok with allowing a container config which just says 'lxc.alias = xxx'
[18:38] but then i'd have to teach the user to read that config
[18:38] right ?
[18:39] ie: lxc-clone -o $(find-lxc-with alias=precise-amd64) -n test1
[18:39] i was just hoping to avoid the 'find-lxc-with'
[18:40] no, i was thinking lxc would do it for you
[18:40] ah.
[18:40] i then misunderstood "don't really want to add a new list of those."
[18:40] ah.
[18:40] i just meant i don't want an external list
[18:41] right. ok.
[18:41] really, i suspect
[18:41] if you just do echo "lxc.include = /var/lib/lxc/precise-whatever/config" > /var/lib/lxc/precise/config,
[18:41] that might just work
[18:42] well, lxc-clone might make too many assumptions for that to work
[18:42] smoser: but weren't we thinking of having a separate small package keep track of the containers anyway?
[18:43] seems like a '$(find-latest precise)' would be trivial to use
[18:43] lxc-clone -s -o $(find-latest precise) -n precise-test
[18:43] yes, we'd have the small program. that was the idea. and i'd have it maintain the alias.
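To make the lvcreate discussion above concrete: with an LVM-backed container, lxc-clone -s -L4G ends up capping the copy-on-write area of the snapshot at 4G while the snapshot still presents the same virtual size as the origin volume. A sketch of the equivalent lvcreate call and how to inspect the result; the volume group "lxc" and the container LV names are placeholders:

    # snapshot of an existing container LV: virtual size = the origin's size, COW space capped at 4G
    sudo lvcreate --snapshot --name c2 --size 4G /dev/lxc/c-saucy
    # 'COW-table size' and 'Allocated to snapshot' show the 4G cap and how much of it is used
    sudo lvdisplay /dev/lxc/c2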
[18:43] but i didn't want to teach the user about 'find-latest'
[18:43] i wanted lxc to do that for me.
[18:44] if you're against it, we can just plan on making the user (in this case juju)
[18:44] smoser: lxc-start is happy with my suggestion above
[18:44] lemme try clone
[18:46] no lxc-clone doesn't detect the fstype right (this is with lvm). but that may be fixable
[18:47] well, hallyn, i'm fine with letting you decide whether or not its fixable / desirable.
[18:47] i'd consider 'clone' not understanding lxc.include to be a bug, but that is neither here nor there.
[18:48] smoser: well it understands include, i think. it might understand it too well
[18:49] smoser: right, the problem is that lxc-clone wants to update the old container name to the new
[18:49] so it wants the disk name to match container name (or at least contain it)
[18:49] i'll come up with something
[18:50] above that was bad syntax.
[18:50] i'm good with you deciding if aliases of that sort are desirable or not.
[18:51] but i would consider 'clone' not understanding lxc.include to be a bug, but that is neither here nor there.
[18:54] smoser: actually, just a symlink works
[18:54] sudo ln -s /var/lib/lxc/{c-saucy,c}; sudo lxc-clone -s -o c -n c2
[18:55] works - other than at least one little corruption in print output
[18:55] i figured you were going to suggest that. do you think that is maintainable?
[18:55] in what sense?
[18:55] that we wouldn't get rid of that unintended feature?
[18:55] well, its kind of hard to decide when its right to resolve that link and when it is not.
[18:55] and yes, the unexpected feature
[18:55] that's the nice thing about that,
[18:56] i'm not resolving that link, i just open $lxcpath/$lxcname/config, and take all values from the configfile
[18:56] ie, does clone resolve that its cloned 'c-saucy' or 'c'
[18:56] because its mounts need to have the full path resolved
[18:56] it's erroneously using c
[18:56] yes, i'll need to update that.
[18:56] so, which do you prefer? symlink, or lxc.alias?
[18:57] from end user pov
[18:57] it seems that with lxc.alias we can explicitly define the behavior without legacy concern.
[18:58] where symlinks have some expected legacy behavior.
[18:58] ok, will try lxc.alias and float a patch tonight
[18:58] or tomorrow
[18:59] roaksoax, does maas setup some apt proxy by default for nodes to use?
[19:04] adam_g: only on raring+
[19:05] adam_g: by default on all maas versions we use maas' squid-deb-proxy, but only for deployment (not for commissioning/enlistment). Raring+ allows you to modify what apt_proxy to use on the MAAS WebUI
[19:06] adam_g: and from raring+, it is also used for enlistment/commissioning
[19:06] roaksoax, ok, so by default provisioned nodes come up behind an apt proxy?
[19:06] adam_g: yes. MAAS has squid-deb-proxy which is used by default
[19:18] hallyn_: ok, i understand the confusion now. the size specified in lxc-clone -L is the size allocated for the snapshot but the snapshot cannot ever be bigger than the original volume
[19:19] right
[19:22] hallyn_: it could be interesting to use thinpools so that the snapshots are not pre-allocated
[19:22] if im understanding correctly what it does
[19:25] that would be on the original?
[19:25] patches or descriptive bugs welcome :) i've not heard of them but sounds like a good idea
[19:27] if the original is on a thinpool the snapshot is automatically allocated on a thinpool it seems
=== guntbert_ is now known as guntbert
[19:30] is there any downside?
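Putting the two alias workarounds from the discussion above side by side; both make a dated container also reachable under a stable name so that `lxc-clone -o precise-amd64` keeps working as new images are pulled in. The dated name is the hypothetical one used in the conversation, the two options are alternatives, and, as noted above, lxc-clone does not yet handle either form cleanly:

    # option 1: a stub container whose config only includes the dated container's config
    sudo mkdir -p /var/lib/lxc/precise-amd64
    echo "lxc.include = /var/lib/lxc/precise-amd64-20130824/config" | sudo tee /var/lib/lxc/precise-amd64/config
    # option 2: a plain symlink, which lxc-start follows (lxc-clone mostly does, with the caveats above)
    sudo ln -s /var/lib/lxc/precise-amd64-20130824 /var/lib/lxc/precise-amd64
    # either way, cloning from the alias name:
    sudo lxc-clone -s -o precise-amd64 -n test1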
[19:32] if you overallocate and run out of disk space you get processes stuck in D state it seems
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
[20:11] Anyone have experience with the chipset used on the ASRock B75 Pro3-M motherboard (known compatibility issues?)
[20:39] Is there a way to create a bind mount in an LXC container such that it will always be owned by "ubuntu" (or some specific user)
[20:41] wedgwood: there's a shortcut to bind-mount ~user, not sure if that's what you want
[20:50] sidnei: I'm hoping to mount a directory from the host to the same place in the container.
[20:51] wedgwood: juju-core does it so it's certainly doable. you have to create a custom config. let me look it up for you.
[20:55] wedgwood: looks like the syntax is lxc.mount.entry=/path/to/host/dir path/to/container/dir, where path/to/container/dir needs to exist and is relative to the 'rootfs' dir
[20:56] wedgwood: juju-core uses 'lxc.mount.entry=/var/log/juju var/log/juju' iiuc
[20:57] sidnei: yep, I've got that part, but if the host directory is owned by uid=1000, it will also be owned by uid=1000 in the container
[20:57] i see what you mean
[20:58] http://s3hh.wordpress.com/2011/09/22/sharing-mounts-with-a-container/ maybe?
[20:58] I *think* that the ubuntu template makes an initial ubuntu user with UID=1000, so if the directory is owned by, say, jenkins with UID=1001, ubuntu won't have access to it.
[20:59] I had a look at that post... lemme look again
[20:59] you might be able to specify the uid in the fstab entry
[20:59] maybe so. I didn't see a parameter like that in the mount manual
[21:00] roaksoax, where does maas end up setting proxy settings on a provisioned node?
[21:01] ah, not all fs types support uid/gid
=== Ursinha is now known as Ursinha-afk
[21:02] I was thinking there might be a UID namespace map in LXC
[21:02] possibly yeah
[21:02] that's more what I was expecting, actually
[21:03] wedgwood: you could also make it group-writable and add ubuntu to the 1001 group, or whatever is the jenkins user's group?
[21:04] yeah, that's a possibility
[21:04] Daviey: ping
[21:05] I still think that could be trouble. if the guest user created files with go-rwx, then the host wouldn't have access
[21:07] adam_g: where as in the code?
[21:07] adam_g: in the preseed we tell to set up the mproxy
[21:07] adam_g: for apt
[21:07] and apt configures it automatically
[21:07] in /etc/apt/conf....
=== marcoceppi__ is now known as marcoceppi
[21:07] something
[21:07] roaksoax, no, after the node is up and commissioned
[21:08] adam_g: during commissioning/enlistment it does not set the mproxy. (for precise/quantal), for raring+ it does
[21:08] roaksoax, i thought it was using squid-deb-proxy prior to raring?
[21:10] hallyn_: still around? do you know whether it's possible to map a UID inside a container to one outside so that it can work with files in a bind mount?
=== ShapeShi- is now known as ShapeShifter499
[21:17] adam_g: it will always use squid-deb-proxy by default
[21:17] adam_g: however in precise, quantal, it does not use it for enlistment/commissioning, only for deployment
[21:17] adam_g: in raring+ it uses it for enlistment/commissioning/deployment + it is easily customizable on the WebUI!
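A sketch of the bind-mount setup being discussed, in the full fstab-style form lxc expects; the host path /srv/jenkins and the container name "mycontainer" are placeholders, the target path inside the container is relative to the rootfs, and, as noted above, the bind mount carries the host's numeric uid/gid through unchanged, which is exactly the ownership problem under discussion:

    # the target directory must already exist inside the container's rootfs
    sudo mkdir -p /var/lib/lxc/mycontainer/rootfs/srv/jenkins
    # append the entry to the container's config (source target fstype options dump pass)
    echo "lxc.mount.entry = /srv/jenkins srv/jenkins none bind 0 0" | sudo tee -a /var/lib/lxc/mycontainer/config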
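On the question of which file under /etc/apt/ ends up carrying the proxy: whatever the preseed writes, the mechanism APT itself understands is an Acquire::http::Proxy line in /etc/apt/apt.conf or a snippet under /etc/apt/apt.conf.d/. The file name and proxy address below are only illustrative, not necessarily what MAAS generates; 8000 is squid-deb-proxy's default port:

    # check whether any proxy is currently configured on the node
    apt-config dump | grep -i proxy
    grep -ri proxy /etc/apt/apt.conf /etc/apt/apt.conf.d/ 2>/dev/null
    # what such a snippet typically looks like, e.g. /etc/apt/apt.conf.d/95proxy:
    #   Acquire::http::Proxy "http://10.0.0.1:8000";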
[21:17] wedgwood: just dictate the uid in /etc/passwd in the container
=== Ursinha-afk is now known as Ursinha
[21:17] wedgwood: (I assume you're not using a user namespace)
[21:18] adam_g: if you want to use it for enlistment/commissioning in precise, then you'd need to hack /etc/maas/commissioning-user-data and /usr/share/maas/preseeds/enlist_userdata
[21:18] adam_g: if you want to use a different one in raring/saucy, you can do so on the WebUI
[21:18] wedgwood: you'll probably want to doublecheck the primary group for the user too
[21:18] (and probably via cli too)
[21:19] hallyn_: OK, yeah. That's a simple solution. Thanks.
[21:21] roaksoax, what file in /etc/apt/ gets updated to actually use the proxy?
[21:25] adam_g: can't remember, it is done automatically by preseeding
[21:26] i'll deploy a node and let you know
[21:38] koolhead17: hey
[22:14] hallyn_: if i compile from git, what's the easiest way to test my changes short of doing 'make install'? i guess i have to play with LD_LIBRARY_PATH and such?
[22:14] * sidnei < C-newbie
=== Ursinha is now known as Ursinha-afk
=== steffen is now known as Guest34319
=== Ursinha-afk is now known as Ursinha
=== acrocity_ is now known as acrocity
[23:11] sidnei: actually the easiest way is to get the package source from the ppa (using dget on the url for the .dsc) and apply the missing patches from git,
[23:11] then debian/rules build && fakeroot debian/rules binary
[23:11] too late!
[23:11] sidnei: but the ppa should have just about everything in git
[23:11] ok
[23:12] my wireless repeater was playing games with me
[23:12] hallyn_: https://github.com/lxc/lxc/pull/33 (still wip)
=== Ursinha is now known as Ursinha-afk
[23:19] hey guys, is the keyboard not working a common bug on the install cd?
[23:30] Has anybody here used or knows about OCFS2?
=== Ursinha-afk is now known as Ursinha
[23:50] how to add Apache virtualhosts for the same domain but for sub dirs?
[23:50] like domain.com should open /var/www/domain
[23:51] and domain.com/subdir should open /var/www/subdir
=== freeflying_away is now known as freeflying
[23:52] xkernel: the <Directory> directive works within a virtualhost directive: http://httpd.apache.org/docs/2.2/mod/core.html#directory
[23:55] xkernel: ah, hrm, maybe <Directory> isn't what you'd want. I'm nearly certain this page describes how to get where you want, but nothing you can just copy-and-paste: http://httpd.apache.org/docs/2.2/sections.html
[23:56] prob solved over here. thanks!
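For the last question, mapping domain.com to /var/www/domain while domain.com/subdir serves files from /var/www/subdir is usually done with an Alias inside the one vhost rather than a second VirtualHost. A sketch for Apache 2.2 on Ubuntu 12.04, using the names and paths from the question:

    sudo tee /etc/apache2/sites-available/domain.com <<'EOF'
    <VirtualHost *:80>
        ServerName domain.com
        DocumentRoot /var/www/domain

        # requests under /subdir are served from a directory outside the DocumentRoot
        Alias /subdir /var/www/subdir
        <Directory /var/www/subdir>
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>
    EOF
    sudo a2ensite domain.com && sudo service apache2 reload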