[14:35] <hazmat> charmworld getting rebooted, due to ec2 scheduled downtime, back in a few
[14:46] <imbrandon> oh noes!! heh j/k
[16:06]  * m_3 primus on the brain... hoodlyhoodly bass
[16:06] <hazmat> and its back
[16:09] <m_3> yup... totally screwed my system yesterday grabbing something out of a really old backup... as root :(
[16:09]  * m_3 palms forehead
[16:19] <imbrandon> ouch
[16:48] <negronjl> jcastro: ping
[16:49] <negronjl> hazmat: ping
[16:49] <negronjl> m_3: do you know the maas password for the maas/openstack/juju setup ?
[16:53] <hazmat> negronjl, pong
[16:54] <negronjl> hazmat: I'm working with the hardware setup for MaaS and was wondering if you know the password for the admin user
[16:55] <negronjl> hazmat: I have tried a few, but I'm stuck
[16:55] <hazmat> negronjl, let me grep the src
[16:55] <negronjl> hazmat: thx
[16:55] <hazmat> negronjl, have you done a createadmin script?
[16:56] <hazmat> negronjl, it looks like that sets up the password
[16:56] <negronjl> hazmat: no ... the setup should already have one and I don't know what will happen if I make another one
[16:57] <hazmat> negronjl, if you're using the sample data... it's admin/test
[16:57] <negronjl> hazmat ... I'll check ... brb
[16:58] <negronjl> hazmat: aha !!!  thx.
[16:58] <hazmat> negronjl, np
[17:32] <negronjl> hazmat: do you happen to know the password for the ubuntu user on the laptop that is serving as the maas server ?
[17:38] <hazmat> negronjl, oh.. you mean the 8box demo cluster?
[17:38] <negronjl> hazmat: I have a 9-box HP cluster here (I think 8 live and 1 spare) and a Dell laptop that is acting as the server.
[17:38] <hazmat> negronjl, no.. robbiew, daviey, adamg, francis would know better
[17:39] <negronjl> hazmat: Cool ... I am talking to Daviey as well.
[17:39] <negronjl> hazmat: thx for your help though :)
[19:04] <_mup_> txzookeeper/errors-with-path r46 committed by kapil.foss@gmail.com
[19:04] <_mup_> merge trunk
[22:19] <SpamapS> m_3: thanks for trying on the jjj release. You only messed one thing up. odd is "dev", even is "release"
[22:19] <SpamapS> m_3: so you should have released 0.12, and bumped to 0.13
[22:34] <m_3> SpamapS: gotcha
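The odd/even convention SpamapS describes (odd minor number = "dev", even = "release") can be sketched in a few lines of Python; the function name here is made up for illustration:

```python
def release_type(version):
    """Classify a version string under the odd/even minor-number
    convention: even minor numbers are releases, odd minor numbers
    are dev snapshots (so 0.12 is a release, trunk bumps to 0.13 dev)."""
    minor = int(version.split(".")[1])
    return "release" if minor % 2 == 0 else "dev"
```

Under this scheme, m_3 should have released 0.12 and then bumped trunk to 0.13.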
[23:18] <lifeless> so, is there any way to run juju locally without getting new network services (apt-cacher-ng + zookeeper) installed ?
[23:18] <lifeless> or do I need to run a kvm instance, and run juju within that ?
[23:34] <m_3> lifeless: be careful with the latter option... you can do it, but juju's lxc uses(needs) libvirt's default 192.168.122.0/24 network
[23:36] <m_3> lifeless: that works though (we run local provider juju environments _within_ ec2 instances all the time for testing)
[23:37] <m_3> don't know about passing new zk/apt-cache addresses into the local provider though... never tried.  I'd imagine it's pretty hard-coded to localhost though
[23:42] <lifeless> so, what I'd really like is to be able to shove juju into an lxc and give it a single callback to a 127.0.0.1 only service to fire up more lxc's
[23:42] <lifeless> that would let me run multiple juju environments
[23:43] <lifeless> as it is, I'm looking at jorge's howto for juju with lxc and wondering why-bother
[23:44] <lifeless> (not why bother with juju, why bother with that setup; it's very constrained and inflexible, and has overhead (via zk and apt-cacher-ng) whether I'm using it or not)
[23:46] <m_3> lifeless: like juju local provider with a bootstrapped bootstrap node that houses zk
[23:46] <lifeless> right
[23:46] <m_3> lifeless: kvm on a non-default libvirt network would do that right now
[23:48] <m_3> lifeless: but yeah, I like your idea better... a safer sandbox for lxc.  that wouldn't add extra deps to the base machine.  maybe file a bug?
[23:48] <m_3> lifeless: it's closer to how we use the bootstrap node in other providers (ec2) too
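For reference, the non-default libvirt network m_3 mentions can be defined with `virsh net-define`; a minimal sketch (the network name, bridge name, and subnet here are invented, chosen only to stay off the default virbr0 / 192.168.122.0/24 that juju's lxc expects to own):

```xml
<!-- juju-alt.xml: a NATed libvirt network on a non-default bridge and
     subnet, leaving 192.168.122.0/24 free for juju's local provider.
     Load with: virsh net-define juju-alt.xml && virsh net-start juju-alt -->
<network>
  <name>juju-alt</name>
  <bridge name="virbr9"/>
  <forward mode="nat"/>
  <ip address="192.168.130.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.130.2" end="192.168.130.254"/>
    </dhcp>
  </ip>
</network>
```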
[23:49] <lifeless> what lp project
[23:50] <lifeless> juju? juju core? juju-ng ?
[23:50] <lifeless>  :P
[23:50] <lifeless> m_3: ^
[23:50] <m_3> lifeless: also beware of juju local provider's startup scripts too... https://bugs.launchpad.net/bugs/1006553
[23:50] <_mup_> Bug #1006553: Juju uses 100% CPU after host reboot <juju:Triaged> < https://launchpad.net/bugs/1006553 >
[23:50] <m_3> lp:juju itself
[23:52] <m_3> your fix would help box some of those issues too
[23:52] <lifeless> hah, so that's exactly the sort of thing I was worried about happening back when the lxc provider was first discussed.
[23:52]  * lifeless buffs his fingers on his chest
[23:52] <m_3> yup
[23:52] <m_3> :)
[23:52] <m_3> there's lots we can do to clean up the local provider
[23:55] <lifeless> something to be aware of
[23:55] <lifeless> some libvirt configs bridge onto the LAN
[23:56] <m_3> right... I've got one that does
[23:56] <lifeless> I do that with my non-laptop libvirts, because that lets me access the resulting instances directly and trivially.
[23:56] <m_3> grabbing the default is... um... not ideal
[23:57] <m_3> I think the answer may be to create a dedicated one on install... there's a bug for that (also using lxcbr not virbr, but that's another bug)
[23:58] <m_3> it's incompatible enough with most of my libvirt setups that I created lp:charms/juju
[23:59] <lifeless> m_3: https://bugs.launchpad.net/juju/+bug/1014435
[23:59] <_mup_> Bug #1014435: lxc local provider sandboxing could be more complete <juju:New> < https://launchpad.net/bugs/1014435 >
[23:59] <lifeless> for your editing pleasure
[23:59] <_mup_> Bug #1014435 was filed: lxc local provider sandboxing could be more complete <juju:New> < https://launchpad.net/bugs/1014435 >