=== kadams54-away is now known as kadams54
[00:33] I'm having trouble adding back a relation with python-django and postgresql
[00:33] http://paste.ubuntu.com/10755947/
[01:31] marcoceppi_, lazyPower: one of you around?
[01:32] I was doing something last night that seemed to be way too hard for what I was trying to do
[01:32] and I'm wondering if it really is that hard, or whether I should look to push some changes
[01:32] I have a running postgresql unit and was wanting to download the daily db dump to my local machine
[01:33] the db dump has permissions 0600 and is owned by the postgresql user
[01:33] and is in /var/lib/postgresql/backups by default
[01:34] in order to scp it locally, I first had to log in to the machine and copy it (wanting to leave the original untouched) to the home dir and chown (or chmod) it
[01:34] is there an easier way, or should we defer this until we have an action?
[01:34] we could have an action that is 'get me the latest db dump', right?
[01:35] jw4: ^^ re action?
[01:50] thumper: yeah, you could do an action, though honestly I'd suggest the action is more 'put this in some public space' like an s3 bucket or the like
[01:52] thumper: the other way to go would be a subordinate to do something like inotify and auto-copy the db to a known location for you, so you can pull it down, and do it automatically every time a backup is made
[01:53] thumper: and then you could do stuff like longer-term config in the service, like s3 credentials/account and such, which would be interesting since actions don't really have persistent 'remember me' settings
[01:53] rick_h_: so the current answer is "no, it is kinda icky"?
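The manual workaround thumper describes (log in, copy the 0600 postgres-owned dump somewhere readable, then pull it down) can be sketched roughly like this. The unit name `postgresql/0` and the dump filename `latest.dump` are assumptions for illustration; the backup directory and ownership are as described in the conversation:

```shell
# Hypothetical unit name and dump filename; adjust to your environment.
# Step 1: on the unit, copy the dump (leaving the original untouched) into the
# ubuntu user's home dir and chown it so a non-root scp can read it.
juju ssh postgresql/0 'sudo cp /var/lib/postgresql/backups/latest.dump ~ubuntu/ && sudo chown ubuntu: ~ubuntu/latest.dump'

# Step 2: pull the now-readable copy down to the local machine.
juju scp postgresql/0:latest.dump .
```

This is exactly the two-step dance being called "icky" above; an action or a subordinate charm would let the charm itself publish the dump instead.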
[01:53] thumper: yep
[01:54] thumper: in prodstack they have a convention of where to stick backups, and then you ping them and they add it to a script that walks services copying dumps for you, basically
[01:54] thumper: just over ssh/scp
[01:54] thumper: so even in our production it's icky
[01:54] * thumper sighs
[01:54] * thumper adds a TODO note to work out how to make this less icky
[01:54] on the plus side...
[01:54] thumper: storage + actions? :)
[01:55] I managed to grab a backup (dump) of my prod database and load it into my dev environment running in an lxc container
[01:55] so I can test my charm upgrade with the prod data
[01:55] nice
[01:55] now you just need a restore action that takes the dump and auto-loads it for you :P
[01:55] had to play around with 'pg_restore' a bit, but it was ok
[01:56] yep, that too would be nice
=== kadams54 is now known as kadams54-away
=== urulama__ is now known as urulama
[07:46] thumper: We have been using subordinate charms to shuffle the files to their final locations, so it hasn't bitten us. For a charm, making the dump world-readable so 'juju scp' works would be fine from a security POV.
[07:46] rsync subordinate, stuff-into-swift subordinate, and now the new backup system.
[09:11] marcoceppi_, if you have an opportunity could you kick a test off on https://code.launchpad.net/~cbjchen/charms/trusty/ubuntu/lxc-network-config/+merge/255262
[11:03] where can I find docs on which ports need to be open to access the juju state server?
[11:04] let's say that my state server is running on machine 0
[11:04] if I enable ufw on that machine and allow ssh, 'juju ssh 0' fails
[11:05] while I can still have standard ssh access, e.g. ssh -i .ssh/mykey public.ip.of.machine.0
[11:26] schkovich: I don't think those ports are documented - but that's a really good question
[11:26] let me poke a core dev and see if I can't run that down for you
[11:29] schkovich: it appears all you need to have open is 17070 and 22
[11:41] freyes: hi, you there?
[12:12] lazyPower: thanks, let me try it :)
[12:15] lazyPower: tcp and udp?
[12:15] I do believe it's just TCP
[12:15] but it won't hurt to unblock both protocols
[12:31] lazyPower: opening 17070 for tcp only did the trick. thanks!
[12:31] Cheers :)
=== niemeyer is now known as niemeyer_
=== urulama_ is now known as urulama__
=== scuttle|afk is now known as scuttlemonkey
=== mattgrif_ is now known as mattgriffin
=== scuttlemonkey is now known as scuttle|afk
=== natefinch is now known as natefinch-afk
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== FunnyLookinHat_ is now known as FunnyLookinHat
=== kadams54 is now known as kadams54-away
[17:31] luqas, pong
[17:32] freyes: hi, I was having a look at https://bugs.launchpad.net/juju-deployer/+bug/1383336
[17:32] Bug #1383336: TypeError "takes exactly 2 arguments (4 given)" raised while deploying
[17:32] and I changed the status by mistake
[17:36] luqas, oh, got it, I'll change it back to 'fix committed'
[17:36] cool freyes, thanks a lot, and sorry for the trouble
[17:37] btw, do you know in which version of juju-deployer the fix has been committed?
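The firewall fix schkovich confirmed earlier (opening 17070/tcp alongside ssh on the state server) would look something like this with ufw; the exact rule ordering is an assumption, but the ports come straight from the conversation:

```shell
# On the juju state server (machine 0): allow ssh and the juju API port
# before enabling the firewall, so 'juju ssh' keeps working.
sudo ufw allow 22/tcp     # standard ssh access
sudo ufw allow 17070/tcp  # juju state-server API port (TCP was sufficient above)
sudo ufw enable
```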
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== natefinch-afk is now known as natefinch
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
[21:25] a freshly bootstrapped environment, and I get: juju.rpc server.go:328 error closing code EOF when I deploy: http://paste.ubuntu.com/10766678/ any recommendations?
[21:27] bah, never mind, trying again works. I'll bet it was typical azure.
[21:29] jrwren: yeah, bac got that yesterday
[21:29] or maybe it was friday when a deploy failed for him from the charmstore
[21:30] yeah, that went away. never figured it out.