[00:33] <skay> I'm having trouble adding back a relation with python-django and postgresql
[00:33] <skay> http://paste.ubuntu.com/10755947/
[01:31] <thumper> marcoceppi_, lazyPower: one of you around?
[01:32] <thumper> I was doing something last night that seemed to be way too hard for what I was trying to do
[01:32] <thumper> and I'm wondering if it really is that hard, or whether I should look to push some changes
[01:32] <thumper> I have a running postgresql unit and was wanting to download the daily db dump to my local machine
[01:33] <thumper> the db dump has permissions 0600 and owned by the postgresql user
[01:33] <thumper> and in /var/lib/postgresql/backups by default
[01:34] <thumper> in order to scp it locally, I first had to log in to the machine and copy it (wanting to leave the original untouched) to the home dir and chown (or chmod) it
[01:34] <thumper> is there an easier way, or should we defer this until we have an action?
[01:34] <thumper> we could have an action that is 'get me the latest db dump' right?
[01:35] <thumper> jw4: ^^ re action?
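The manual workaround thumper describes could be sketched roughly as below; the unit name, dump filename, and home-directory copy are assumptions for illustration, not taken verbatim from the discussion.

```shell
# Hypothetical sketch: copy the 0600, postgres-owned dump to a location the
# ssh user can read, then pull it down. Unit/file names are assumptions.
juju ssh postgresql/0 <<'EOF'
sudo cp /var/lib/postgresql/backups/latest.dump "$HOME/latest.dump"
sudo chown "$USER" "$HOME/latest.dump"   # or: sudo chmod 644 "$HOME/latest.dump"
EOF
juju scp postgresql/0:latest.dump .
```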
[01:50] <rick_h_> thumper: yea, you could do an action, though honestly I'd suggest the action is more 'put this in some public space' like an s3 bucket or the like
[01:52] <rick_h_> thumper: the other way to go would be a subordinate to do something like inotify and auto copy the db to a known location for you so you can pull it down and auto do it every time a backup is made
[01:53] <rick_h_> thumper: and then you could do stuff like longer term config in the service like s3 credentials/account and such as well which would be interesting since actions don't have persistent 'remember me' settings really
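A minimal version of the inotify-and-copy subordinate rick_h_ floats might look like the loop below, assuming the inotify-tools package is available; both directory paths are hypothetical.

```shell
#!/bin/sh
# Watch the postgres backup directory and copy each finished dump to a
# world-readable export directory. Paths are assumptions for illustration.
BACKUPS=/var/lib/postgresql/backups
EXPORT=/srv/pg-exports
mkdir -p "$EXPORT"

# close_write fires once a dump file has been fully written.
inotifywait -m -e close_write --format '%f' "$BACKUPS" |
while read -r f; do
    cp "$BACKUPS/$f" "$EXPORT/$f"
    chmod 644 "$EXPORT/$f"
done
```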
[01:53] <thumper> rick_h_: so the current answer is "no, it is kinda icky" ?
[01:53] <rick_h_> thumper: yep
[01:54] <rick_h_> thumper: in prodstack they have a convention of where to stick backups and then you ping them and they add it to a script that walks services copying dumps for you basically
[01:54] <rick_h_> thumper: just over ssh/scp
[01:54] <rick_h_> thumper: so even in our production it's icky
[01:54]  * thumper sighs
[01:54]  * thumper adds a TODO note to work out how to make this less icky
[01:54] <thumper> on the plus side...
[01:54] <rick_h_> thumper: storage + actions? :)
[01:55] <thumper> I managed to grab a backup (dump) of my prod database, and load it into my dev environment running in an lxc container
[01:55] <thumper> so I can test my charm upgrade with the prod data
[01:55] <rick_h_> nice
[01:55] <rick_h_> now you just need a restore action that takes the dump and auto loads it for you :P
[01:55] <thumper> had to play around with 'pg_restore' a bit, but it was ok
[01:56] <thumper> yep, that too would be nice
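The restore step thumper mentions, loading a production dump into a dev database inside the lxc container, might look roughly like this; the database name and dump filename are assumptions.

```shell
# Hypothetical restore of a custom-format pg_dump archive into a fresh
# dev database. Names are placeholders, not from the discussion.
createdb myapp_dev
pg_restore --no-owner --clean --if-exists -d myapp_dev latest.dump
```

`--no-owner` is often useful here because ownership in the production dump won't match roles in the dev cluster.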
[07:46] <stub> thumper: We have been using subordinate charms to shuffle the files to their final locations, so it hasn't bitten us.  For a charm, making the dump world-readable so 'juju scp' works would be fine from a security POV.
[07:46] <stub> rsync subordinate, stuff-into-swift subordinate, and now the new backup system.
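stub's point about a world-readable dump could be as simple as the two commands below; the unit name and dump path are assumed, not quoted from the discussion.

```shell
# Make the dump readable in place, then pull it with juju scp.
# Unit name and path are assumptions for illustration.
juju ssh postgresql/0 'sudo chmod 644 /var/lib/postgresql/backups/latest.dump'
juju scp postgresql/0:/var/lib/postgresql/backups/latest.dump .
```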
[09:11] <arosales> marcoceppi_, if you have an opportunity could you kick a test off on https://code.launchpad.net/~cbjchen/charms/trusty/ubuntu/lxc-network-config/+merge/255262
[11:03] <schkovich> where can i find docs on which ports need to be open to access juju state server?
[11:04] <schkovich> lets say that my state server is running on machine 0
[11:04] <schkovich> if i enable ufw on that machine and allow only ssh, 'juju ssh 0' fails
[11:05] <schkovich> while i can still have standard ssh access, eg ssh -i .ssh/mykey public.ip.of.machine.0
[11:26] <lazyPower> schkovich: i don't think those ports are documented - but that's a really good question
[11:26] <lazyPower> let me poke a core dev and see if i can't run that down for you
[11:29] <lazyPower> schkovich: it appears all you need to have open is 17070 and 22
[11:41] <luqas> freyes: hi, you there?
[12:12] <schkovich> lazyPower: thanks, let me try it :)
[12:15] <schkovich> lazyPower: tcp and udp?
[12:15] <lazyPower> i do believe it's just TCP
[12:15] <lazyPower> but it won't hurt to unblock both protocols
[12:31] <schkovich> lazyPower: opening 17070 for tcp only did the trick. thanks!
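The firewall setup schkovich confirms, SSH plus the Juju API port over TCP only, can be expressed as ufw rules along these lines (run on the state-server machine):

```shell
# Allow SSH and the Juju state-server API port, TCP only,
# per what worked in the discussion above.
sudo ufw allow 22/tcp
sudo ufw allow 17070/tcp
sudo ufw enable
```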
[12:31] <lazyPower> Cheers :)
[17:31] <freyes> luqas, pong
[17:32] <luqas> freyes: hi, I was having a look at https://bugs.launchpad.net/juju-deployer/+bug/1383336
[17:32] <mup> Bug #1383336: TypeError "takes exactly 2 arguments (4 given)" raised while deploying <cts> <juju-deployer:Fix Released by freyes> <https://launchpad.net/bugs/1383336>
[17:32] <luqas> and I changed the status by mistake
[17:36] <freyes> luqas, oh, got it, I'll change it back to 'fix committed'
[17:36] <luqas> cool freyes, thanks a lot and sorry for the trouble
[17:37] <luqas> btw, do you know in which version of juju-deployer the fix has been committed?
[21:25] <jrwren> a freshly bootstrapped environment, and I get: juju.rpc server.go:328 error closing code EOF  when I deploy: http://paste.ubuntu.com/10766678/  any recommendations?
[21:27] <jrwren> bah nevermind, trying again works. I'll bet it was typical azure.
[21:29] <rick_h_> jrwren: yea, bac got that yesterday
[21:29] <rick_h_> or maybe it was friday when a deploy failed for him from the charmstore
[21:30] <bac> yeah, that went away.  never figured it out.