=== scuttlemonkey is now known as scuttle|afk
=== med_ is now known as Guest39045
=== frankban|afk is now known as frankban
[11:45] kwmonroe: very late response, but no, it was against aws
[11:46] kwmonroe: it's solved though, i updated the bundletester issue thread about my issue
[13:48] any jujucharms.com admins here? need help with a broken login due to an updated user on launchpad
[14:02] Hey. I'm looking into backing up an OpenStack system that is built with nova-lxd containers (All-in-one Ubuntu OpenStack). Is there any documentation to help with that? I'm assuming that backup is slightly different than usual due to the use of containers.
[14:04] SimonKLB: rick_h can probably point you in the right direction
[14:05] Neepu: beisner or jamespage might know
[14:08] hi Neepu - the all-in-one openstack scenario is a handy dev/test/demo scenario, but it isn't recommended for production use cases. openstack is a cloud, and clouds are meant to be distributed. as such, we've not explored or documented a backup routine for that use case.
[14:09] hi beisner_, ty for the answer. if i were to back up such a system, do you have a suggestion? it is not meant to be used in a production environment.
[14:10] Neepu - there are a number of things that could be done. when you say back up, are you talking about backing up the backing storage for running instances, or backing up the control and data plane of the cloud itself, or both?
[14:12] I haven't really settled on that yet, as i'm not too familiar with LXD containers yet. But the requirement i'm dealing with is that it should be easy to restore again. The most important feature would be to back up the guest OSes, but i think there already is a feature for that.
[14:16] yep, "nova backup", which probably does the backup inside the nova container.
[14:19] An OpenStack cloud as a whole isn't really something that's meant to be backed up and restored, as one might do with a traditional application. It's a stack of independent applications and api services, each of which has certain needs. The process for doing that at the application level, regardless of the substrate, would be the same as is documented in the upstream openstack docs.
[14:19] The process for doing that on a machine (container) level is not documented or tested for use against an entire openstack cloud, but if this is a dev/test environment, it'd be interesting to see if lxd snapshots are helpful.
[14:24] I see, i think i'd be good with lxd snapshots to extract the backup
[14:25] I'm not too familiar with Juju's approach to OpenStack, but would snapshotting all the LXD containers and then starting them again let me run a functional OpenStack cloud?
[14:28] Neepu - i don't believe that is tested. the recommended approach to OpenStack Charms delivered via Juju is to use an HA topology, where api services are disposable, etc.
[14:29] i.e. so when a machine dies, it's ok, there are 2 other machines that take the load while a repair/replacement is made.
[14:29] that's the "cloudier" way to address this.
[14:29] i see
[14:30] cheers for the answers, i'll head off now :-)
[14:31] Neepu - yw, happy to help. see you around & best to ya!
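
Note on the "nova backup" feature mentioned at 14:16: it asks nova to snapshot an instance into Glance and keep a rotating set of backup images. A minimal sketch, with an illustrative instance name ("guest-vm"), backup type ("daily") and rotation count (7); check the CLI help on your release for the exact syntax:

    # legacy nova CLI: nova backup <server> <backup-name> <backup-type> <rotation>
    nova backup guest-vm guest-vm-daily daily 7

    # roughly equivalent form with the unified openstack CLI
    openstack server backup create --name guest-vm-daily --type daily --rotate 7 guest-vm

Either form covers only the guest OSes, not the control plane of the cloud itself.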
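
Note on the container-level experiment suggested at 14:19: LXD's built-in snapshots could be tried on each machine container of the all-in-one deployment. A rough sketch, assuming a container named "juju-machine-1" (hypothetical); as said in the conversation, restoring a whole OpenStack this way is untested:

    # list the containers juju created for the deployment
    lxc list

    # take a snapshot of one container (repeat for each machine container)
    lxc snapshot juju-machine-1 pre-change

    # later, roll that container back to the snapshot
    lxc restore juju-machine-1 pre-change

Snapshotting containers one by one gives no cross-service consistency guarantees (databases, message queues, etc. can end up out of sync), so treat it as a dev/test convenience rather than a backup strategy.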
[15:26] hi vds - belated pong: invalid region "" seems to be meaningful there. have you set the region in your juju cloud config? here's an example of what we do in CI: https://github.com/openstack-charmers/charm-test-infra/blob/master/juju-configs/clouds.yaml#L27
[15:27] beisner_: hi! :) how do I get the value for region from Openstack?
[15:29] hi vds - it'd be the value set at deploy time (on nova-cloud-controller). if one wasn't specified, it'll use the default: https://github.com/openstack/charm-nova-cloud-controller/blob/master/config.yaml#L125
[15:29] RegionOne
[15:30] beisner_: thanks
[15:30] cheers vds :) yw
[15:33] 243667
[15:34] mhilton: we're helping SimonKLB here
[15:34] SimonKLB: broke the system!
[15:36] hi SimonKLB, could you please try logging in on the staging system https://staging.jujucharms.com/ ? This should help resolve the login problems on jujucharms.com itself.
[16:18] o/ juju world
=== cholcombe_ is now known as cholcombe
=== frankban is now known as frankban|afk
[16:49] heya Budgie^Smore
[17:30] what's happening today?
[17:34] Not much for me :)
[18:37] mhilton: confirmed working :) thanks
[19:40] SimonKLB, thanks for that. unfortunately I don't have access to our production system to update that myself, but it is in the queue for the people who do. it's fairly simple, so it should be done reasonably quickly.
[19:49] mhilton: no worries, thanks for looking into it
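
Note on the region question in the 15:26-15:30 exchange above: a minimal sketch of how the pieces tie together, assuming a reasonably recent juju and openstack client; the cloud name, keystone URL and column names are illustrative, so verify against your versions:

    # the region the cloud was deployed with (defaults to RegionOne per the nova-cloud-controller charm config)
    juju config nova-cloud-controller region

    # the region recorded in the keystone service catalog
    openstack endpoint list -c Region -c 'Service Name' -c URL

and a matching juju clouds.yaml entry might look like:

    clouds:
      my-openstack:
        type: openstack
        auth-types: [userpass]
        regions:
          RegionOne:
            endpoint: http://10.0.0.10:5000/v3

The region name in clouds.yaml has to match what the cloud was actually deployed with, otherwise juju can fail with errors like the invalid region "" mentioned above.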