[06:29] Good morning jujuers
=== frankban|afk is now known as frankban
[09:00] Good morning Juju world!
[09:01] thank you for taking the time to write this email erik_lonroth3
[12:06] hi here, iirc, the etcd charm now does daily backups on its own, do you know where?
=== scuttlemonkey is now known as scuttle|afk
[13:13] Zic: Ok, clue me in as to where you got that info
[13:13] because that's news to me :)
[13:15] sounds like I'm wrong here xD
[13:16] oh, I remembered: etcd does an automatic backup when upgrading from .deb to snap, but there are no "daily" ones, I just dreamed that up
[13:16] Zic: right, you can also snapshot using the action... which you could put on a cron job to run the action, parse the action output and fetch the snapshot to do dailies
[13:17] the primitives are all there to do it... so there's nothing really stopping you from doing that as a jenkins job
[13:18] yup, it will be sexier than my old cron
[13:18] which looks like this today: 0 0 * * * root cd /data/etcd-backup && etcdctl backup -data-dir=/var/lib/etcd/default -backup-dir=etcd-backup_$(date -I) && tar zcf etcd-backup_$(date -I).tar.gz etcd-backup_$(date -I) && rm -rf etcd-backup_$(date -I)
[13:19] (I've got to update this one with the new path of "etcdctl", "/snap/bin/etcdctl")
[13:19] :) I have an open todo to change the snapshot format to support the etcdctl backup command
[13:20] right now it just tarballs up the data directory; when you reinit the cluster from the snapshot it takes care of cleaning up any dirty state that may be left around in the db
[13:22] * Zic adds to his TODO "Test restoring an etcd backup in case of disaster"
[13:22] never did it, and as you all know, you can't call something a "backup" until you have tested restoring it
[13:22] :>
[13:32] lazyPower: what is the complete line to run the action ?
I have a missing parameter :o
[13:33] Zic: juju actions etcd --format=yaml --schema
[13:35] thx
[13:36] lazyPower: oh, I forgot to tell you that the main CDK cluster (the big one) is now publicly accessible; the Android & iOS apps were released by our customer yesterday (as a beta) on the mobile stores
[13:37] \o/
[13:37] we made it
[13:37] :crossfinger:
[13:37] that's awesome, you're at a new milestone of the journey
[14:07] lazyPower: hmm, I see a strange thing with juju run-action etcd/0 snapshot
[14:07] http://paste.ubuntu.com/24560810/
[14:07] it shows "exited status 1" in show-action-status
[14:07] see above for juju debug-log
[14:08] hmm
[14:08] ok, can you file a bug for me, Zic, and I'll take a deeper look? I'm not certain off the top of my head
[14:09] "open /var/snap/etcd/current/member/snap: no such file or directory" <= the correct path is /var/snap/etcd/current/etcd2.etcd/member/snap
[14:09] ah
[14:09] the migration path shenanigans
[14:09] I -completely- forgot about that
[14:10] good catch Zic, that should be a simple fix, I just need to do a quick path check for unit_name as the first data path in the backup action before presuming it's the default location.
[14:10] do you want me to file a bug on bundle-canonical-kubernetes@GitHub?
[14:11] certainly.
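The cron-driven approach lazyPower describes above (run the snapshot action on a schedule, parse the action output, fetch the result) could look roughly like this. Only `juju run-action etcd/0 snapshot` and the action-output commands come from the discussion; the awk parsing and the snapshot path on the unit are assumptions:

```shell
#!/bin/sh
# Daily etcd snapshot via the charm action, replacing a hand-rolled etcdctl cron.
# Sketch only: the result path on the unit is hypothetical.
set -e

STAMP=$(date -I)                      # e.g. 2017-05-11
OUT="etcd-backup_${STAMP}.tar.gz"     # date-stamped archive name, as in the old cron

if command -v juju >/dev/null 2>&1; then
    # Queue the action; the id is the last field of "Action queued with id: ..."
    ID=$(juju run-action etcd/0 snapshot | awk '{print $NF}')
    # Inspect the action result once it completes
    juju show-action-output "$ID"
    # Fetch the snapshot tarball the action produced (path is an assumption):
    # juju scp etcd/0:/home/ubuntu/etcd-snapshots/latest.tar.gz "$OUT"
fi
echo "$OUT"
```

Dropped into /etc/cron.daily (or run as a Jenkins job, as suggested above), this replaces the hand-maintained etcdctl cron line.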
I can cross-post to the layer-etcd repo and get both when I push a fix later
[14:16] lazyPower: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/285
[14:23] Zic: thanks for filing that, and sorry about the bug D: I'll make sure that a fix for that makes its way to the layer today
[14:25] no problem :) I was just checking whether we still need our ugly home-made cron (whose behaviour can change between versions) or if I can switch to something more standard handled by Juju
[14:25] we skipped a few days of etcd backups before I noticed that the old cron stopped working when we switched to snap
[14:52] Zic: yeah, I'd like to keep the admin interface clean for you as well
[14:52] Zic: feel free to file any usability bugs that you feel would make your life easier
[15:03] Hello all, how can I specify an apt cache when using conjure-up?
[15:26] rahworkx: as far as I know you would have to add the model, set it on the model-config, then use conjure-up in headless mode
[15:26] cory_fu: does that sound correct?
[15:26] ooo wait, what about model-defaults?
[15:26] stokachu: ^
[15:27] rahworkx: conjure-up -h
[15:27] lazyPower: Yeah, you could probably manually bootstrap a controller, set some model defaults, and then select that controller with conjure-up
[15:27] apt-proxy and apt-https-proxy are available
[15:27] Oh, look at that
[15:27] :)
[15:33] ok thanks, will take a look.
[15:43] rahworkx: thanks, feel free to ping in here if you need help
[15:44] stokachu: thanks, will do.
=== frankban is now known as frankban|afk
[18:32] lazyPower want to see the case design idea I have for a cluster in a box?
[18:33] https://goo.gl/photos/V8UxiobqgNA6Xtn39
[18:46] Budgie^Smore: want to see mine?
[18:46] http://imgur.com/a/BYHPG
[18:46] http://imgur.com/a/DpsrW
[18:47] http://imgur.com/a/GDpyO
[18:48] she has been through many stages
[18:48] http://imgur.com/a/SU1vd
[18:50] http://imgur.com/a/oUQEU
[18:53] http://imgur.com/a/fYeWV
[18:54] oh nice, an nvidia tesla!...
I have been meaning to build a desk case for my main workstation and go with liquid cooling... the box in that link is for a 5-node mini-itx cluster with a 16-port switch
[18:54] oh sick
[18:54] was thinking individual AIO liquid-cooled cpu coolers for it
[18:54] at least for the "worker" nodes
[18:54] that is a lot of GPU power.
[18:55] well it would only be the onboard GPUs... not really looking at GPU power as I didn't design in space for a PCIe card
[18:56] oh the tesla, yeah that is a nice card :)
[18:57] I rip the resistors off the gtx cards and turn them into grids and teslas
[18:57] super simple
[18:57] did you get the case custom made? just looking at the drive bay layout
[18:58] yeah
[18:58] unfortunately danger den closed their doors a few months after I ordered that
[18:58] yeah, I have heard of them :-/
[19:00] I would really like to figure out how to get custom boards built so I could basically build a midplane with connectors for multiple system boards, etc. and make the unit a lot smaller
[19:03] Budgie^Smore: have you considered just doing a microblade enclosure?
[19:03] in a mini rack
[19:04] http://www.wiredzone.com/startech-racks-kvm-chassis-power-racks-and-cabinets-enclosure-cabinets-2636cabinet-30956242
[19:05] wow Budgie^Smore nice subwoofer box! "That's my computer...." Nice computer! ;D
[19:05] I tease, this looks neat
[19:06] bdx: that's a nice lookin rig too
[19:06] Budgie^Smore: what's the budget on that box build looking like?
[19:07] lazyPower: thx
[19:08] yeah, just, do you want to spend 1k on a glorified case? that is also massive in comparison... my design is basically 22.25" x 16.5" x 12"
[19:09] lazyPower for the parts you are talking 5 x itx boards (recommend one with IPMI like the ASRock at $210 each) but you could use a lower-specced board for the MAAS node
[19:09] Hi, does Juju provide an endpoint for charms inside a hypervisor?
I'm trying to find documentation on the subject
[19:09] the PSUs I found that are a nice form factor are $30 each
[19:10] shann_: I'm not sure I understood the question. Are you looking for the network endpoint of the controller node?
[19:11] if you want liquid cooling it is another $70 / node and the case can handle 4 of those / 2 layers, but you could easily use some 120mm fans and blower cpu coolers and bring that down a bit
[19:11] yeah man, looks like a solid layout from my perspective
[19:11] oh, and about another $80 for a 16-port managed switch
[19:12] just haven't figured out the cost of the case
[19:12] lazyPower, I'm thinking of setting up a lab with Juju to serve apps, but I'd rather not use a separate public_ip:port pair per app; I think it's possible to use a proxy endpoint to redirect traffic to each app
[19:13] oh, and I have designed it in a way that you can add more levels to increase the cluster size
[19:14] Budgie^Smore: I like it man ... super cool
[19:16] ah, we don't expose anything like that in juju. You can add users to the controller, and share the juju-gui url with your lab students. Then they can in turn manipulate and deploy/destroy applications in their model, using their individual apps' public ip:port combination
[19:16] shann_: would that satisfy the requirement?
[19:17] at least, we don't expose anything like that if I'm understanding correctly.
[19:17] allow me to be clear on that. I'm still a little confused as to what you're asking for
[19:18] I found this, https://github.com/vtolstov/charm-haproxy; if I understood, it defines a single endpoint.
[19:18] shann_: ah yes, you can certainly use haproxy to reverse proxy into applications in a model
[19:20] yes, indeed, but is this configuration the correct usage with juju, or does juju have its own method implemented?
[19:22] shann_: possibly `juju deploy haproxy` is what you want
[19:23] can haproxy serve relations to several apps? e.g. haproxy => first_app, haproxy => second_app, ...; I'm trying to understand the architecture before setting up a demo on my computer.
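The `juju deploy haproxy` suggestion can be sketched end to end; `myapp` is a hypothetical charm that implements the http interface haproxy consumes, which is what makes the routing work:

```shell
#!/bin/sh
# Reverse proxy one application behind haproxy (sketch; "myapp" is hypothetical).
set -e
REL="haproxy myapp"
if command -v juju >/dev/null 2>&1; then
    juju deploy haproxy
    juju deploy myapp          # must provide the http interface for routing to work
    juju add-relation $REL     # haproxy learns myapp's units over the relation
    juju expose haproxy        # only haproxy needs a public ip:port
fi
echo "$REL"
```

As noted in the discussion that follows, haproxy handles a single related app automatically; routing multiple related apps needs the charms on the other side of the relation to pass the right data to haproxy.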
[19:24] but I think it's better to test ^^
[19:27] bac the idea would be to do the design work and then try and crowd-source a few purchases to bring the individual cost down
[19:42] shann_: yeah, it can do that ... but you have to write the code for the other side of the relation that connects to haproxy, to correctly pass the information the haproxy charm needs to make it do what you want
[19:43] shann_: so you can connect haproxy to a single application and it will automatically do what you want
[19:43] reverse proxy to that app
[19:43] but it doesn't know what to do when you connect subsequent applications
[19:44] bdx, yes, indeed it seems to work out of the box for an app linked to haproxy. My reasoning was possibly heading the wrong way.
[20:31] I tried to execute juju bootstrap localhost controller, but it blocks with "Waiting for address" :(
[20:32] shann_: do you have lxd installed and working? (does `lxc launch ubuntu:16.04 u1` work for you?)
[20:34] bdx, yes, u1 launched but it doesn't have an ip :(
[20:34] lxd installed, bridge lxdbr0 seems up, with its ip defined with dpkg-reconfigure -p medium lxd
[20:35] no missing deps (bridge-utils, dhcp, ... ?)
[20:35] shann_: `lxc delete u1 --force`
[20:36] then
[20:36] sudo lxd init
[20:36] you might need to remove the image too
[20:37] lxc image delete juju/xenial/amd64
[20:39] shann_: sudo service lxd-bridge restart; sudo service lxd restart
[20:39] yes, indeed, I removed the image; I also restarted lxd.socket, since "lxd init" warned it was still active
[20:39] waiting 5/10 min with my slow connection
[20:40] 400 kB/s :'(
[20:47] bdx, ok, the image downloaded, but it's still blocked at "waiting for address" :(
[20:48] hmmm
[20:49] shann_: will you paste the output of this command: `cat /etc/default/lxd-bridge | pastebinit`
[20:49] humm, in the log I have "invalid pid for SIGCHLD. Received pid yyy expected pid xxx" (for juju-à
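The LXD reset sequence bdx walks through above, collected into one script. This is a sketch for the 2017-era deb packaging of LXD (the `lxd-bridge` service does not exist on current snap-based installs):

```shell
#!/bin/sh
# Reset a wedged local LXD before re-running `juju bootstrap localhost`.
# Steps taken from the discussion above; deb-packaged lxd/lxd-bridge assumed.
set -e
DONE="lxd-reset-sketch"
if command -v lxc >/dev/null 2>&1; then
    lxc delete u1 --force || true               # drop the test container
    lxc image delete juju/xenial/amd64 || true  # drop the cached juju image
    sudo lxd init                               # re-run storage/network setup
    sudo service lxd-bridge restart             # bounce the bridge (deb packaging)
    sudo service lxd restart
fi
echo "$DONE"
```

If containers still come up without an address after this, `/etc/default/lxd-bridge` (as requested in the discussion) is the next thing to inspect.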
[20:50] paste.ubuntu.com/24563015
[20:52] shann_: looks good to me
[20:52] hmmm
[21:09] bdx, does the lxdbr0 bridge need to be matched with a nic?
[21:09] brctl show, lxdbr0 interface
[21:10] I have enp0s... and wlp0s3 for wireless
[22:26] shann_: it shouldn't need to be bridged to any interface, I don't think
[22:27] can anyone give me a quick run-down on deploying windows with juju on aws?
[22:27] is that a thing?
[23:02] lazyPower think you will appreciate the color now ;)