=== frankban|afk is now known as frankban
[12:49] hi.. is there any function that can help to get the ip address of a specific interface on a machine deployed by a charm?
=== sfeole-away is now known as sfeole
[14:39] if someone sees digv come back around, the bigtop charms have a helper to get the ip for an interface (or network range): https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/lib/charms/layer/apache_bigtop_base.py#L68
[16:16] jamespage: yo
[16:17] (or in general) is there anyone on here who is/has been part of setting up or maintaining the canonical in-house openstack cluster, juju-deployed? I'm seeking guidance for hardware requirements.
=== frankban is now known as frankban|afk
[16:29] hallyn: beisner might have a link for you. I thought we had some suggested docs sitting around
[16:32] rick_h: thanks. hey beisner :)
[17:20] o/ juju world
[17:34] o/ hallyn
[17:52] hey beisner. so do you typically use 3-4 nics per server for openstack deploys? Do you get a set of like 5+ identical machines, or do you tailor them to storage, network, etc. nodes?
[17:56] will holler back shortly, after lunch hallyn
[18:49] hi hallyn - now back for real:
[18:49] \o
[18:49] we do it both ways actually. in a couple of labs, i have a pile of identical machines. two nics, two disks in each machine.
[18:49] but i know that most folks in production have certain machines geared more toward storage, and use maas tagging to identify and allocate those to ceph (for example).
[18:51] two nics and two disks per host works ok for general usage for you then?
[18:51] this isn't production, i just don't want to end up realizing that i can't run anything i want to on top of it bc it's just a toy :)
[18:51] yep, that's exactly what two of our labs have. then we don't have to pin applications to machines in any way, knowing that any one of them will have the goods needed.
[18:52] just spinning rust disks? like 2x1T?
[18:52] if there aren't any (and i haven't seen any) blog posts talking about the internal juju-deployed stacks, you should write them :)
[18:53] i like to use 1xSSD as a bcache front to spindles.
[18:53] so one ssd, one regular? hmm.
[18:54] wonder whether maas/juju will automatically set that up the smart way
[18:54] depends on tolerance/needs, but yep. in a case where we can afford to lose a node (this is a cloud after all), then that is tolerable for my use case.
[18:54] maas is bcache-aware. there is a tickbox essentially.
[18:54] cool
[18:54] juju doesn't really need to know
[18:55] hm, where's that juju gui demo site
[18:56] this one?: https://jujucharms.com/new/
[18:56] beisner: i landed at https://demo.jujucharms.com but looks the same :)
[18:57] (just want to see what it recommends for a medium install)
[18:57] thanks beisner
[18:58] yw hallyn - and ack on the blog post. the tricky thing about that is, when we build those clouds, they are HA. and with HA come network-specific things like VIPs and other machine-specific things. we've not published an "HA bundle" to date, since it would definitely not work without users editing quite a few things.
[18:58] but still, planning to do something some time, with lots of caveats and notes.
[18:59] beisner: yeah, but that's also exactly the kind of thing that potential juju users may not think of, get started, then run into trouble with, and then say "juju sucks, it's for demos only"
[18:59] hallyn - we do have this openstack charms deployment guide (not released yet) but you might find this quite helpful (just clone it, run 'tox', and read for now): https://github.com/openstack/charm-deployment-guide
[18:59] this aims to address that audience. the we-want-more-than-a-demo audience.
[19:00] cool, thanks
[19:00] we have some MPs outstanding against that still, broken links, etc., but that is intended to publish by 17.10 to openstack.org.
[19:06] cheers, hallyn
[19:06] hm, no mac brew version of tox :)
[19:06] thanks beisner \o
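
Returning to the 12:49 question about reading the IP address of a specific interface on a charm-deployed machine: below is a minimal sketch in the spirit of the apache_bigtop_base helper linked at 14:39. It assumes the third-party netifaces package is available on the unit; the helper name get_ip_for_interface and the 'eth0' example are illustrative only, not the bigtop charm's actual API.

    # Minimal sketch (assumption: the third-party 'netifaces' package is
    # installed on the unit, e.g. pulled in as a charm/layer dependency).
    # The helper name and the 'eth0' example are illustrative, not the
    # bigtop charm's actual API.
    import netifaces

    def get_ip_for_interface(interface):
        """Return the first IPv4 address bound to `interface`, or None."""
        if interface not in netifaces.interfaces():
            return None
        ipv4_addresses = netifaces.ifaddresses(interface).get(netifaces.AF_INET, [])
        return ipv4_addresses[0]['addr'] if ipv4_addresses else None

    if __name__ == '__main__':
        # Example: print the unit's eth0 address, if that interface exists.
        print(get_ip_for_interface('eth0'))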