=== amithkk is now known as amith
=== amith is now known as amithkk
[14:35] charmworld getting rebooted, due to ec2 scheduled downtime, back in a few
[14:46] oh noes!! heh j/k
[16:06] * m_3 primus on the brain... hoodlyhoodly bass
[16:06] and it's back
[16:09] yup... totally screwed my system yesterday grabbing something out of a really old backup... as root :(
[16:09] * m_3 palms forehead
[16:19] ouch
[16:48] jcastro: ping
[16:49] hazmat: ping
[16:49] m_3: do you know the maas password for the maas/openstack/juju setup ?
[16:53] negronjl, pong
[16:54] hazmat: I'm working with the hardware setup for MaaS and was wondering if you know the password for the admin user
[16:55] hazmat: I have tried a few but I'm stuck
[16:55] negronjl, let me grep the src
[16:55] hazmat: thx
[16:55] negronjl, have you run the createadmin script?
[16:56] negronjl, it looks like that sets up the password
[16:56] hazmat: no ... the setup should already have one and I don't know what will happen if I make another one
[16:57] negronjl, if you're using the sample data.. it's admin/test
[16:57] hazmat ... I'll check ... brb
[16:58] hazmat: aha !!! thx.
[16:58] negronjl, np
[17:32] hazmat: do you happen to know the password for the ubuntu user on the laptop that is serving as the maas server ?
[17:38] negronjl, oh.. you mean the 8-box demo cluster?
[17:38] hazmat: I have a 9 HP box cluster here (I think 8 live and 1 spare) and a Dell laptop that is acting as the server.
[17:38] negronjl, no.. robbiew, daviey, adamg, francis would know better
[17:39] hazmat: Cool ... I am talking to Daviey as well.
[17:39] hazmat: thx for your help though :)
[19:04] <_mup_> txzookeeper/errors-with-path r46 committed by kapil.foss@gmail.com
[19:04] <_mup_> merge trunk
=== almaisan-away is now known as al-maisan
=== al-maisan is now known as almaisan-away
[22:19] m_3: thanks for trying on the jjj release. You only messed one thing up.
odd is "dev", even is "release"
[22:19] m_3: so you should have released 0.12, and bumped to 0.13
[22:34] SpamapS: gotcha
[23:18] so, is there any way to run juju locally without getting new network services (apt-cacher-ng + zookeeper) installed ?
[23:18] or do I need to run a kvm instance, and run juju within that ?
[23:34] lifeless: be careful with the latter option... you can do it, but juju's lxc uses (needs) libvirt's default 192.168.122.0/24 network
[23:36] lifeless: that works though (we run local provider juju environments _within_ ec2 instances all the time for testing)
[23:37] don't know about passing new zk/apt-cache addresses into the local provider though... never tried. I'd imagine it's pretty hard-coded to localhost though
[23:42] so, what I'd really like is to be able to shove juju into an lxc and give it a single callback to a 127.0.0.1-only service to fire up more lxc's
[23:42] that would let me run multiple juju environments
[23:43] as it is, I'm looking at jorge's howto for juju with lxc and wondering why-bother
[23:44] (not why bother with juju, why bother with that setup; it's very constrained and inflexible, and has overhead (via zk and apt-cacher-ng) whether I'm using it or not)
[23:46] lifeless: like juju local provider with a bootstrapped bootstrap node that houses zk
[23:46] right
[23:46] lifeless: kvm on a non-default libvirt network would do that right now
[23:48] lifeless: but yeah, I like your idea better... a safer sandbox for lxc, one that wouldn't add extra deps to the base machine. maybe file a bug?
[23:48] lifeless: it's closer to how we use the bootstrap node in other providers (ec2) too
[23:49] what lp project?
[23:50] juju? juju core? juju-ng ?
[23:50] :P
[23:50] m_3: ^
[23:50] lifeless: also beware of juju local provider's startup scripts too...
https://bugs.launchpad.net/bugs/1006553
[23:50] <_mup_> Bug #1006553: Juju uses 100% CPU after host reboot < https://launchpad.net/bugs/1006553 >
[23:50] lp:juju itself
[23:52] your fix would help box some of those issues too
[23:52] hah, so that's exactly the sort of thing I was worried about happening back when the lxc provider was first discussed.
[23:52] * lifeless buffs his fingers on his chest
[23:52] yup
[23:52] :)
[23:52] there's lots we can do to clean up the local provider
[23:55] something to be aware of
[23:55] some libvirt configs bridge onto the LAN
[23:56] right... I've got one that does
[23:56] I do that with my non-laptop libvirts, because that lets me access the resulting instances directly and trivially.
[23:56] grabbing the default is... um... not ideal
[23:57] I think the answer may be to create a dedicated one on install... there's a bug for that (also using lxcbr not virbr, but that's another bug)
[23:58] it's incompatible enough with most of my libvirt setups that I created lp:charms/juju
[23:59] m_3: https://bugs.launchpad.net/juju/+bug/1014435
[23:59] <_mup_> Bug #1014435: lxc local provider sandboxing could be more complete < https://launchpad.net/bugs/1014435 >
[23:59] for your editing pleasure
[23:59] <_mup_> Bug #1014435 was filed: lxc local provider sandboxing could be more complete < https://launchpad.net/bugs/1014435 >
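[Editor's note: the versioning convention SpamapS describes at [22:19] ("odd is dev, even is release", so 0.12 should have been released and trunk bumped to 0.13) can be sketched as a tiny parity check. The `channel` helper below is hypothetical, not part of juju or any tool mentioned in the log.]

```python
def channel(version: str) -> str:
    """Classify a version string under the odd-minor = "dev",
    even-minor = "release" convention (hypothetical helper)."""
    minor = int(version.split(".")[1])  # second dotted component
    return "release" if minor % 2 == 0 else "dev"

print(channel("0.12"))  # release: even minor, the version to publish
print(channel("0.13"))  # dev: odd minor, the next development bump
```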