[00:56] <hazmat> cmark, there's some plugins in the works re backup restore to different node
[00:58] <cmark> hazmat: i'm working with an older release of juju (0.6 python).  I'm assuming these wouldn't apply?
[01:01] <hazmat> hmmm
[01:01] <hazmat> cmark, i think we're still targeting january re a data migration from pyju to goju
[01:01] <hazmat> cmark, the tsuru guys did a small abstraction around the pieces
[01:02] <cmark> aside from replicating the juju and zookeeper data, what else is there?
[01:03] <hazmat> cmark, basically restoring is 3 things plus one pyjuju caveat: updating the provider object storage file for the env in the control bucket to point to the new server instance id; updating the machine zk/db records with the new instance id; and updating extant agents with the new state server address.. all roughly while the agents are shut down (or restarted post config).
[01:03] <hazmat> the pyjuju caveat being there is some craziness in libzk that doesn't like spinning on reconnect, hence the recommendation to keep the agents down while you reconfigure.
[01:14] <cmark> hazmat: when you say "updating machine zk/db records with new instance id" are you referring to updating the IP addresses in the machine entries in the zk db?
[01:15] <cmark> so that machine-0 points to the new bootstrap node
[02:00] <hazmat> cmark, so machine-0 points to new instance id
[02:00] <hazmat> from cloud provider
[02:00] <hazmat> ip addresses will get repopulated in db on restart if changed
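The three restore updates hazmat describes can be sketched as plain record rewrites. Everything below is illustrative only: the field names, record shapes, and function are hypothetical stand-ins, since the real data lives in the provider's object storage and in ZooKeeper, not in Python dicts.

```python
# Hypothetical sketch of the restore steps from the discussion above.
# Field names and data shapes are invented for illustration; they are
# not pyjuju's actual schema.

def restore_state_server(control_bucket, machines, agent_configs,
                         new_instance_id, new_state_address):
    # 1. Point the provider-state file in the control bucket at the
    #    new bootstrap server's instance id.
    control_bucket["provider-state"]["zookeeper-instances"] = [new_instance_id]

    # 2. Update machine-0's record with the new instance id; per the
    #    discussion, IP addresses are repopulated on restart, so only
    #    the instance id needs fixing here.
    machines["machine-0"]["instance-id"] = new_instance_id

    # 3. While the agents are shut down, rewrite each agent's config
    #    to point at the new state server address.
    for cfg in agent_configs.values():
        cfg["state-servers"] = [new_state_address]

    return control_bucket, machines, agent_configs
```

The point is the ordering constraint rather than the code: all three rewrites happen while the agents are down, and the agents only pick up the new state server address when they restart.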
[11:50] <axw__> ashipika: hey. I'm heading off in a moment, just wanted to let you know that someone fixed the DNS issue for the null provider
[11:50] <axw__> at least for his use-case; I think it applies to yours too
[11:50] <axw__> it's on trunk now
[11:50] <ashipika> excellent! so i just update the code and recompile?
[11:50] <axw__> yup
[11:50] <ashipika> thanks!
[11:50] <axw__> nps - cya later
[11:57] <ahasenack> hi guys,
[11:57] <ahasenack> how do I get the private-address of this lds-quickstart/0 unit?
[11:57] <ahasenack> http://pastebin.ubuntu.com/6483983/
[11:57] <ahasenack> it used to be there, but after I gave this openstack instance a floating ip, the internal IP got replaced and I don't see it anymore
[11:57] <ahasenack> in the status output
[11:58] <ahasenack> the "public-address" was the internal IP before
[12:16] <gnuoy> jamespage, I seem to be seeing the behaviour we briefly discussed at the cloud sprint, where on a fresh deploy the quantum gateway seems to be wedged. I can see namespaces on the gateway that correspond to routers that have been created, but incoming and outgoing traffic seems to fail. Restarting the machine and recreating the router fixes this. I'll raise a bug, but what debug info would you like to see? iptables from the router namespace and ... anything else?
[12:17] <jamespage> gnuoy, can you ping the router IP address?
[12:18] <gnuoy> jamespage, nope
[12:18] <gnuoy> after a reboot and recreate I can
[12:57] <jamespage> gnuoy, it points me at two things
[12:57] <jamespage> one - something is not getting restarted when it should be
[12:57] <jamespage> and the reboot fixes that
[12:58] <jamespage> or two - something is broken in neutron - that flushed through with the reboot
[13:39] <ashipika> hi all.. looking at my "juju status" report.. the service always gets exposed on a certain dns name (public-address).. would it be possible to tie that to an ip? (for example: if one does not have a dns server running in a mini diy cluster)
[13:49] <iri-> I'm doing an upgrade-charm and I see on the remote side INFO juju charm.go:56 worker/uniter/charm: downloading local:blah/blah from https://s3.amazonaws.. etc, however the contents of the thing it downloads is not the contents of what I'm specifying with --repository on my local disk. How can this be?
[14:15] <iri-> wtf. I just tried again moving --debug from the left hand side of "upgrade-charm" to the right hand side. Then it worked. What the hell?
[14:49] <kirkland> where can I "list all charms"?
[14:49] <kirkland> I see a few highlighted charms at jujucharms.com
[14:51] <rick_h_> kirkland: if you perform an empty search it'll load them all, after a while. https://jujucharms.com/fullscreen/search/?text=
[14:52] <iri-> Now I just get "no new charm event" whenever I try to upgrade my charm. On my machine I see "writing charm to storage" and I see my revision number increment. But it isn't incrementing remotely. What gives?
[14:52] <rick_h_> kirkland: if you're looking just for a quick list you can hit up http://manage.jujucharms.com/charms/precise for just precise/etc
[15:09] <kirkland> rick_h_: thanks
[15:18] <benji> kirkland: there's also http://manage.jujucharms.com/charms/precise
[15:18] <benji> ah, you already were given that
[16:19] <iri-> If I add a variable to config.yaml and "juju set --config <secret yamlfile> <service>", is there anything else I need to do beside editing the config.yaml in the charm?
[16:19] <iri-> I'm currently getting "unknown option: MYVARIABLE" even though MYVARIABLE is in both files
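One common cause of "unknown option" is that the deployed charm revision does not yet declare the option: editing config.yaml in the local repository only takes effect once the charm is upgraded in the environment. A hedged sketch of the two files involved (MYVARIABLE comes from the conversation; the service name `myservice` and all values are placeholders):

```yaml
# The charm's config.yaml must declare the option before `juju set`
# will accept it (illustrative entry):
options:
  MYVARIABLE:
    type: string
    default: ""
    description: Illustrative secret value.
```

```yaml
# The secret yamlfile passed via --config is keyed by service name
# (here assumed to be "myservice"):
myservice:
  MYVARIABLE: "placeholder-secret"
```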
[21:53] <Tobias_____> how can i install juju on my own public server? (no cloud hoster)
[22:50] <sarnold> Tobias_____: look at using the 'local' juju provider, which uses LXC to provide containers for each of the charms to execute in: https://juju.ubuntu.com/docs/config-local.html
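For reference, the linked docs come down to adding a stanza like the following to environments.yaml. This is a minimal sketch; the exact keys vary by juju release, so the docs page is authoritative:

```yaml
environments:
  local:
    type: local
    default-series: precise
```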
[22:51] <Tobias_____> sarnold: tried to bridge it to use the static address of the server instead of a local lxc container... it is currently recovering
[22:51] <sarnold> Tobias_____: I believe it is difficult to provide services to other hosts on the network through the local provider -- it is difficult or impossible to assign each container a LAN-based IP -- but it is quite useful for testing
[22:51] <sarnold> Tobias_____: ah, so you have already discovered this :(
[22:52] <Tobias_____> jep, i'm trying to find a solution to run juju on a dedicated server
[22:52] <Tobias_____> it seems really useful to setup instances and stuff
[22:52] <Tobias_____> just deploy something and connect mysql, expose, done
[22:59] <sarnold> Tobias_____: one option is to try the ssh provider, and use .. the --to flag? .. to deploy everything to either hand-built lxc instances or similar. It isn't the same magic :( but it is something..
[23:00] <Tobias_____> Any network bridging to eth0 distorts the server's static inet address and causes routing to fail
[23:00] <Tobias_____> i tried starting lxc in a container with br0, bridged to eth0
[23:00] <Tobias_____> that's why my server is currently being reinstalled
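The routing failure Tobias describes typically happens when eth0 keeps its static address while br0 claims it too. The usual Debian/Ubuntu fix is to move the static address off eth0 onto the bridge itself, roughly like this in /etc/network/interfaces (all addresses are placeholders):

```
# /etc/network/interfaces sketch: eth0 becomes a bare bridge port,
# and br0 carries the server's static address instead.
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1
    bridge_ports eth0
```

With this layout the host stays reachable on its static address, and LXC containers attached to br0 can get LAN-facing addresses.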