[00:05] @tavansteenburgh, thanks
=== scuttlemonkey is now known as scuttle|afk
=== menn0 is now known as menn0-afk
[06:11] when is 2.1.2 being released? launchpad suggests last friday, but there are no packages I can find
[06:42] zeestrat: looks like I can't reference variables in other variables? I guess that would be cheating ;)
[07:27] Good morning Juju world!
[08:22] kklimonda: Yeah, I tried that too ;)
[09:56] can you register multiple users on one machine with juju?
[09:57] and switch between them?
[09:59] cnf: you can register multiple users on the same controller
[10:01] kjackal: yes, but can I log in as different users on the same local machine
[10:01] using the juju command
[10:01] i don't want to be admin all the time
[10:02] cnf: I have never seen that
[10:02] hmm
[10:02] and no ACLs either, it seems
[10:03] you mean do a "juju ssh myapplication/0" and get a shell that is not a sudoer user?
[10:03] That is an interesting feature. What is the use case you want to serve with this?
[10:05] no, i mean juju register
[10:05] multiple users
[10:06] right now, everything i do with the juju command is as admin
[10:07] have to pick up some guests at reception, brb
[10:13] back
[10:14] kjackal: i basically want to restrict myself when i don't need to be admin
[10:14] cnf: your juju client can be a non-privileged user
[10:15] I just had juju fail to bring up a couple of containers on two machines, with error: cannot start instance for machine "0/lxd/15": unable to setup network: no obvious space for container "0/lxd/15", host machine has spaces: "management", "two-ceph-private", "two-ceph-public"
[10:15] but that actually worked fine for other units of the same application, and for other containers on the same machine
[10:15] (I'm testing 2.1.2 from git right now)
[10:16] kjackal: right, but do i need to delete the admin credentials?
[10:16] or can i swap between them?
[10:17] cnf: what kind of credentials?
[10:26] kjackal: the admin user you are authenticated with
[10:27] kjackal: right now, my juju command is registered as full admin, right?
[10:27] i can destroy models, delete users, etc etc
[10:28] you can create a juju user (juju add-user) and then grant permissions (eg juju grant myuser add-model)
[10:28] right, i did that
[10:28] but when i want to register it, it says i can't because i am already registered
[10:29] so is the only way to unregister the admin user, and register the new user
[10:29] and when i need admin, the other way around?
[10:31] I haven't tried to register a juju user with a local user that is already registered
[10:31] I guess it is expected for juju to complain
[10:32] can you have a second local user to register with the controller with limited permissions?
[10:32] you mean add a new local user?
[10:33] that's a lot of extra work for this :P
[10:35] maybe i'll set up a docker container or something
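[ed: a minimal sketch of the add-user/grant flow kjackal describes above; the user, model, and controller names are illustrative. In juju 2.x, add-user prints a registration string that a different client (another machine or another local OS user) consumes with juju register; a client that already knows the controller refuses to register it again, which is the error cnf hit.]

    # on the admin client: create a limited user and grant it access
    juju add-user myuser                 # prints a "juju register <token>" command
    juju grant myuser add-model          # controller-level permission
    juju grant myuser write mymodel      # per-model permission (model name illustrative)

    # on a second machine or second local OS user:
    juju register <token-from-add-user>

    # juju 2.x also ships logout/login for switching users on a controller
    # the client already knows, which may be closer to what cnf wants
    # (exact behavior varies by release)
    juju logout
    juju login myuser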
[11:32] can anyone recommend a charm that supports adding storage?
[11:34] ceph and ceph-osd
[11:44] hoenir: the postgres charm does
[12:33] when juju is deploying containers on MAAS, what is responsible for assigning IPs? I have a reservation for an IP range, but Juju still deployed LXD containers with IPs from this range
[13:10] kklimonda: did you have free IPs?
[13:34] cnf: I should have 100+ free IPs in that network
[13:34] (based on quick math and what MAAS tells me)
[13:37] k
[13:37] i had all IPs in either the dynamic or static pool
=== scuttle|afk is now known as scuttlemonkey
[14:04] resources question. I have a snap charm and made a bad assumption about resources being optional things. I thought I could remove a resource. The charm is for a public snap, so I want to grab it from the store.
[14:04] but when I want to test a change in a fully deployed environment, I'd like to attach a snap resource
[14:05] that was a bad assumption, since I can't remove a resource.
[14:05] so, how do people normally test things in that situation? I'd rather not push a candidate snap if I'm doing something like adding debugging code to get more info, etc etc etc
[14:14] skayskay: If you are using the snap layer, attaching a 0 byte resource is effectively the same as removing the resource.
[14:15] skayskay: You should be able to have your edge charms with a snap attached as a resource, and your stable channel charms with no snap or a 0 byte resource.
[14:15] stub: thanks, I didn't think of attaching a new zero byte resource. I feel sheepish now.
[14:16] :)
[14:16] (I am using the snap layer)
[14:16] skayskay: It's a hack, but it works until we can drop this resources-are-required nonsense.
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
[15:24] lazyPwr: hey, long time since we talked; I have a question for you (or for marcoceppi too): can I update the nginx-ingress-controller deployed by CDK manually?
[15:24] I need this functionality recently merged: https://github.com/kubernetes/ingress/pull/246/commits/5cc5669938108ab7429bc7eee40c18a6ba18150a
[15:25] Zic: hey, 1 sec, let me see what's in the link
[15:26] Zic: so this looks like k8s golang bin code... do you know if this is landing in the addon container?
[15:26] Zic: if so, you should be able to just update the image reference in /etc/kubernetes/addons/ingress-*.yml
[15:26] Zic: however, you might need to set it in the template dir of the charm, so it doesn't get autogenerated right back out of the template before it's re-scheduled
[15:27] stub: hey, little bit of an update on that btw: i spoke to rick about optional resources at the last sprint. It's now acknowledged that it's a potential issue, but no resolution has been offered as of yet.
[15:28] lazyPwr: the last message is what I feared :( can I do this simply, without breaking future updates of the charm?
[15:28] skayskay: also, one workflow tip that mbruzek and I have used: resource revision 0 is always a zero byte resource (by convention, we touch and push), so you can always re-publish with a zero byte resource simply.
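[ed: a minimal sketch of the zero-byte-resource convention stub and lazyPwr describe; the charm, application, and resource names are illustrative.]

    # revision 0 of the resource is an empty file, per the convention above
    touch empty.snap
    # store side: publish the empty file as the charm's resource
    charm attach cs:~myuser/my-snap-charm my-snap=./empty.snap
    # or on a deployed application, so the snap layer falls back to
    # installing the snap from the store
    juju attach my-snap-app my-snap=./empty.snap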
[15:29] lazyPwr: or do you know if this PR is already in the image used by the kubernetes charm in 1.5.3? I'm still on 1.5.2 currently
[15:29] Zic: certainly, if you patch the template and toggle ingress=true it will get your update. on the next charm upgrade it will overwrite the changes to that template and you should be back in alignment with our upstream releases
[15:29] Zic: i'm not positive on whether it's in the 1.5.3 release; it depends on when it was cut and added.
[15:30] yeah, I saw this PR is 26 days old, but I don't know if it was merged for the 1.5.3 release
[15:30] i'm going to err and say it wasn't
[15:30] https://github.com/kubernetes/ingress/pull/246 <= the PR associated
[15:31] Zic: the trouble here is the issue wasn't attached to a release: https://github.com/kubernetes/ingress/issues/180
[15:31] so it's hard to discern when it was actually pulled into a release or if it's just sitting in trunk
[15:31] Zic: i would probably ping one of the authors of this pr and ask when it was/will-be released.
[15:35] lazyPwr: thanks. also, just seeing the tag "Coverage 46%" on the issue, it seems certain that it's not released yet :(
[15:36] I will try to ping the author anyway
[15:39] Zic: I can get you a resource if you can examine the binary files to see if the fix is in there.
[15:40] Zic: Alternately, if you check out the tag branch "v1.5.3" you could check for the code in there
[15:42] mbruzek: good idea, it seems to be this one: https://github.com/kubernetes/ingress/releases/tag/nginx-0.9.0-beta.2
[15:42] I will check
[15:44] ok so
[15:44] 1.5.2 and 1.5.3 of Kubernetes in CDK's charm both use 0.8.3 of this nginx-controller
[15:45] the proxy-set-headers I need appeared in 0.9.0-beta.2
[15:45] (but it's beta, as you can see...)
[15:45] but a later stable version is 0.9.2
[15:46] are you planning to switch to 0.9.2 in the next release, then? :D
[15:49] https://github.com/kubernetes/ingress/releases <= oh, I made a mistake, it's not nginx which is at 0.9.2, it's the GCE ingress...
[15:49] the latest version of nginx-ingress-controller is still 0.9.0-beta.2, which contains the feature I wanted
[15:55] lazyPwr / marcoceppi: while waiting, as I really need to customize the nginx header for the launch of this customer, can I simply "kubectl edit rc nginx-ingress-controller" and swap the image to "gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2" instead of "gcr.io/google_containers/nginx-ingress-controller:0.8.3" (which CDK deployed)?
[15:55] as I understand it, the next upgrade will overwrite this change
[15:56] but I just need to remind myself to re-apply it, no?
[15:56] I don't feel confident hacking the CDK template and deploying it to a production cluster :/
[16:07] Zic: good question, Cynerva ryebot ^ ?
[16:08] catching up, gimme a minute :)
[16:09] :)
[16:11] haha, I just realized that I confused marcoceppi & mbruzek in my earlier highlighting of CDK's maintainers :)
[16:11] sorry :]
[16:13] Zic: it looks like kubernetes-worker won't override the ingress RC until the charm is upgraded
[16:15] and do I need to roll back to the original version of the CDK kubernetes-worker charm before the next upgrade?
[16:15] Zic: so I think as a temporary workaround that should work. but i'm not 100% sure
[16:17] Zic: hmm, good question. all the charm does is `kubectl apply -f ingress-replication-controller.yaml`, so i would think you won't have to roll it back before the charm upgrade
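[ed: a minimal sketch of the temporary workaround Zic proposes, equivalent to the `kubectl edit rc` image swap; the container name and label selector are assumptions, verify them first with `kubectl describe rc nginx-ingress-controller` and `kubectl get pods --show-labels`.]

    # swap the ingress controller image in place; the charm reverts this
    # on the next upgrade-charm, as discussed above
    kubectl set image rc/nginx-ingress-controller \
        nginx-ingress-lb=gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2
    # an RC only applies the new image to newly created pods, so recreate them
    # (label selector illustrative)
    kubectl delete pods -l name=nginx-ingress-lb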
[16:22] ok thanks
[16:22] I think I will upgrade from 1.5.2 to 1.5.3 before testing this
[16:22] planned for tomorrow, I will let you know if it works :)
[16:23] (the ingress upgrade part; I'm confident about the 1.5.3 upgrade part)
[16:24] cool :D
[16:24] * Zic fixes lazyPwr's eyes for the second part of his message
[16:24] :}
[16:25] O_o
[16:25] o_O
[16:26] * Zic begins to search for a "juju rollback" command
[16:26] xD
[16:26] wait for snaps, my friend, it'll make your roll-forward/rollback life a bit easier.
[16:26] Zic: have you seen the etcd 3.x migration bits using snaps?
[16:27] nope, the last time we talked about this, it was not officially released
[16:27] does it now?
[16:28] I still have the thought of using snaps on switches / routers to load the maas and juju controllers onto them instead of servers
[16:33] stormmore: you're not alone in that thought :)
[16:34] stormmore: however storage on a switch is going to be hairy at best... you'll likely want to run your rack controller on a unit with some storage
[16:34] Zic: it's pending a merge against layer-etcd; i have it published in my namespace though
[16:34] oh, I can test it in the lab, then :)
[16:34] Zic: it supports moving between channels, and if you don't attach external storage (read: ebs/gce volumes) it'll version your data
[16:35] so say you upgrade to the 3.0/stable channel and things get funky, NO PROBLEM!
[16:35] currently, we have deployed our 2nd CDK cluster, full-AWS this time
[16:35] revert back to 2.3/stable
[16:35] and it'll reload your data from the version of the snap that was installed using 2.3; you may incur some minor data loss, but this is expected with a rollback, no?
[16:35] and I converted the LXD labs into a real lab
[16:35] yeah?
[16:35] lazyPwr, yeah I realize that, but storage is only a problem when you are bootstrapping the data center. once the servers are up, the db, etc. can be moved into the cluster
[16:35] nice man :) You're getting your hands all kinds of dirty with cdk
[16:36] lazyPwr: yeah, I'm just using NFS & iSCSI volumes
[16:37] lazyPwr, that just leaves the core services running on the switches / routers, i.e. pxe, etc. from maas and the juju controller agent
[16:37] (for the first CDK cluster, actually)
[16:37] Zic: i was talking to cholcombe and it appears that gluster+heketi is going to be problematic moving forward
[16:37] some mailing list drama popped up about this :/
[16:37] for the second one, we're using AWS EBS
[16:37] i read into it a bit, and they want to not support existing glusterfs deployments, which baffles me
[16:38] i find it odd also
[16:38] (the first one is the one I gave you the Architecture Plan for: hybrid between baremetal servers / on-premise VMs & EC2@AWS)
[16:38] (the second one is full-AWS)
[16:38] cholcombe: <3 can we send them a gentle reminder to not stab their users in the face with a hot branding iron?
[16:38] i tried on the irc channel and they shot me down
[16:38] because $reasons again?
[16:39] yeah, because deploying onto an existing server that they didn't set up is hard
[16:39] i get it, but at least try
[16:39] so what if we don't support existing clusters, and instead make a charm-deployed glusterfs endpoint the only one we support in cdk?
[16:39] or am i missing something
[16:40] i don't think it's out of the question to have a storage admin provision some bricks and enlist them in a gluster charm deployment, then cross-model relate them to cdk workers
[16:40] lazyPwr: http://pastebin.com/E4ZrqzNN
[16:40] http://pastebin.com/ewUELFwT
[16:40] got any suggestions?
[16:40] lazyPwr: suggesting we just take over for heketi?
[16:41] magicaltrout: how did you deploy this?
[16:41] manual
[16:41] lazyPwr: basic question, does "juju upgrade-charm" have a special order recommended? because between the insights.ubuntu.com news and https://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/, it's not the same
[16:41] Zic: follow the kubernetes.io docs
[16:41] just walked through the bundle
[16:41] ok :)
[16:41] we've kept that up to date
[16:41] magicaltrout: are these local copies of the charms?
[16:41] nope
[16:41] i've got 2 running workers and 2 stalled
[16:42] magicaltrout: well, at the end of the day it appears the resources didn't make it out of the charm store. You'll want to fetch the resources from jujucharms.com/u/containers/kubernetes-worker and juju attach them
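[ed: a minimal sketch of the fix lazyPwr suggests for the stalled workers; the resource name "cni" is hypothetical, take the real name from the charm's page or from the resources listing.]

    # see which resources the application expects and which are missing
    juju resources kubernetes-worker
    # after downloading the resource file from the charm's page on
    # jujucharms.com, attach it manually ("cni" is a hypothetical name)
    juju attach kubernetes-worker cni=./cni.tgz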
[16:42] "At this time rolling back etcd is unsupported." / "At this time rolling back Kubernetes is unsupported." <= you already prepared this section about snaps :D
[16:42] Zic: not until it lands as GA
[16:42] but yeah, i'll be adding subsections there about rollbacks using the snaps and what to expect
[16:43] the only snap-in-production that I use today is Rocket.Chat
[16:43] Zic: i'm expecting to have this fully landed and r2g by the time we cut the 1.6.x release
[16:43] i've got some really good early tests in; i'm hacking on fixing the whole "i killed the master and my cluster just pooped the bed" bug
[16:43] :}
[16:43] which is really obnoxious if you're not paying attention to what you're doing... juju remove-unit etcd-master, and suddenly things brick
[16:43] *sadtrombone.wav*
[16:43] I need to switch to multimaster again some day also
[16:43] I'm also on monomaster in production for now
[16:44] the lab is in multimaster with your patch for tokens
[16:44] Zic: we have incoming work rn to put an haproxy load balancer in place of that nginx-based one
[16:44] my colleague is hacking on that rn
[16:44] magicaltrout: lmk if that doesn't resolve it, and i'm going to point a finger at a juju bug that i haven't found yet, but it seems like it just bailed during grabbing the resource, which is unfortunate :( i've seen this once before, but it was back in the 2.0 beta days
[16:45] magicaltrout: actually, are you still using a beta or did you finally upgrade to GA?
[16:47] thanks lazyPwr, testing
[16:48] this is a new deploy for the JPL folks, so nothing beta here :)
[16:48] * lazyPwr snaps
[16:48] 2.0.2
[16:48] i was kind of hoping you would say beta-18 again, so i could easily point to why it happened
[16:49] it might have been connectivity between the openstack units and the charm store, but it would be nice for juju to recover if that were actually the case
[16:59] huhu juju
[17:00] can somebody tell me how to filter hosts to use for a cloud by zones in maas
[17:00] i tried adding zone = foobar to the yaml file for the cloud
[17:00] but that didn't work
[17:17] If a MAAS node is released while it is a Juju controller... can I get that controller removed? It seems hung on `juju destroy-controller someController`
[17:19] have you tried unregister
[17:19] Not yet... one moment
[17:19] oh
[17:20] Well, that was fast.
[17:20] did it work
[17:20] Thanks! I think I can rebootstrap it now
[17:20] yes you can
[17:20] had a similar problem and that helped me too
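[ed: a minimal sketch of the recovery described above: when the machine backing a controller is gone, destroy-controller hangs, and `juju unregister` just drops the controller from the local client so you can bootstrap again. The controller and cloud names are illustrative.]

    # the controller machine was released in MAAS, so forget it client-side
    juju unregister someController
    # then bootstrap a fresh controller ("mymaas" is an illustrative cloud name)
    juju bootstrap mymaas someController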
[17:21] do you happen to know how to filter maas nodes for a cloud by zones?
[17:22] i need to know if that is possible
[17:22] oh man, I am barely getting this thing to download images from ubuntu's cloud repo
[17:22] I vaguely remember that... one sec
[17:23] Nuts, that's next on my list of things to try
[17:23] That's what works with the HA zones, right?
[17:24] well i want to create nodes for customers and add them to zones. then create a cloud for that particular zone
[17:24] lazyPwr: I have a lot of apt updates also on the 1.5.2 cluster I'm planning to upgrade tomorrow; do you advise me to run juju upgrade-charm *before* apt update/upgrade?
[17:24] (to bypass problems like the time apt upgraded etcd without the charms :/)
[17:25] ybaumy: I was going about that upside down and was going to use Openstack to partition customers. But you make an interesting point; if I had my old pile of blade servers, that sounds pretty good
[17:25] okay, i think we can safely say Juju doesn't like deploying stuff to low-power vms
=== hml_ is now known as hml
[17:26] andrew-ii: a customer should be able to create his own cloud via a webinterface with vmware and maas. that's the target.
[17:27] I don't have vmware, and my customers really only need containerization, so I probably have a different use case
[17:28] andrew-ii: true. that's different. i have thought about that too, and will look into it once i am able to do that ..
[17:28] i'm creating POCs for my company
[17:29] so it's just try and play at the moment
[17:29] ybaumy: I get the feeling it's jumping right into the deep end to go straight-up containerization. But I only have a few machines for maas, so I can't afford to dedicate them to each customer. But man, that sounds like a nice idea.
[17:30] ybaumy: same here; if I can get the cloud bootstrapped and somehow lock it down _and_ get it to be usable, then it'll be a nice playground.
[17:31] andrew-ii: we have vsphere vrealize scripted to create the vm templates and distribute them across the esx farms. then i use maas to commission them and juju to create the cloud
[17:31] but now i need something to filter those vms
[17:32] ybaumy: slick. It does sound like zones are exactly what you'd want (if I understand why they were added).
[17:33] andrew-ii: yes, it's really working now. it's one last piece to put it together and close the POC
[17:33] Best I can figure is `juju deploy someCharm --to zone=maasZone1`, but I bet you're not really looking to use the --to command
[17:33] andrew-ii: does that work?
[17:33] i have to try
[17:34] (For my setup I'm using maas tags to filter machines, sadly, so I may be a terrible example)
[17:35] ybaumy: note that I just sorta stole that command and assumed it worked! It may be aws-specific (or maybe maas emulates that too) - sorry if it fails utterly
[17:36] andrew-ii: great, this seems to work. at least it was used now for bootstrapping into the foobar zone i specified
[17:37] ybaumy: holy mackerel, welp, I'm more optimistic now (I actually thought it'd fail...).
[17:37] maybe it's luck, but i will try that a few times
[17:38] That's the spirit!
[17:56] magicaltrout: yeah, it tends to yield a pretty crummy experience when you starve the unit resources
[17:57] Zic: probably do the upgrade-charm, then run the apt update/upgrades
[17:57] Zic: keep your change set to the least viable change in order to validate things are functional after making the change
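[ed: a minimal sketch of the "charms first, then apt" ordering lazyPwr recommends above; the per-charm sequence (etcd, then master, then workers) is an assumption, take the authoritative order and charm list from the kubernetes.io upgrade page linked earlier.]

    # upgrade the charms first, in the documented order (assumed here)
    juju upgrade-charm etcd
    juju upgrade-charm kubernetes-master
    juju upgrade-charm kubernetes-worker
    # only then apply pending OS package updates on the units
    juju run --application kubernetes-worker 'sudo apt-get update && sudo apt-get -y dist-upgrade'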
[18:01] andrew-ii: sadly, this --to zone= seems to be ignored. i bootstrapped it, then i wanted to enable-ha --to zone=foobar, and one node was created in default, one in foobar
[18:01] indeed lazyPwr, i was hoping the units would at least start though :)
[18:01] so i need something else
[18:01] ybaumy: sorry - I was afraid it wouldn't work with maas yet
[18:02] andrew-ii: thanks anyway .. better to die trying than not to have tried at all
[18:02] Might be worth checking if a newer version manages to use it (though I think I got that command from 2.1.1)
[18:03] i'm using devel
[18:03] ouch, nevermind then
[18:03] Did you try `--constraints zone=foobar` ?
[18:03] hmm no :D
[18:03] will do
[18:03] hehe
=== hml_ is now known as hml
[18:05] I have some real goofy machines to play with, and some are only good at certain things (one's basically a bank of hard drives), so I use `--constraints "tags=storage"` for that one
[18:05] what is your goal, what are you trying to accomplish?
[18:06] Though just in case, make sure when bootstrapping a controller you use `--bootstrap-constraints` instead (otherwise the controller sets ALL nodes to use that constraint)
[18:06] k
[18:06] I have a hodgepodge of random servers to play with, and I need a test rig that can simulate a network, plus host some in-house tools that will get replaced often
[18:07] Literally random machines collected off craigslist for a few months
[18:07] is that your homelab?
[18:08] It's my equipment, but it's hosted in the office for its fancy 220V line, space, and dedicated network connection
[18:10] i started also with a few blades, but now the project has management attention
[18:11] so i got everything i requested
[18:11] that's neat
[18:12] haha, always the best when people with checkbooks take notice
[18:14] true
[18:28] juju deploy --to zone=test cs:bundle/openstack-base-49
[18:28] ERROR Flags provided but not supported when deploying a bundle: --to.
[18:28] also something that should work
[18:29] hmmm, yeah, that's what I thought to try
[18:30] i will try single charms to see how that goes
[18:36] nope, ignored as well
[18:37] juju deploy --to zone=test -n4 cs:ubuntu-10
[18:37] creates random machines in all zones
[18:48] i will try tags
[18:49] That should work, though it's a pain to tag all the machines
[18:49] yep
[18:50] seems like you cannot assign tags to all machines in a zone
[18:50] Nuts. Need to do something complicated like a script to assign them all? (Overkill maybe?)
[18:52] you are right. i will have to script it, but that's not the problem. it would be nice if something like that were supported out of the box
=== hml_ is now known as hml
[18:59] I think there is a goal for it, it's just not added in yet?
[19:00] we have to ask the devs
[19:00] i signed up on the mailing list, maybe i get an answer there
[19:02] ybaumy: I don't know too much about navigating launchpad, but check https://launchpad.net/juju to see if that feature is slated for addition
[19:08] juju deploy --constraints tags=test cs:bundle/openstack-base-49
[19:08] ERROR Flags provided but not supported when deploying a bundle: --constraints.
[19:08] ;)
[19:09] ybaumy: 99% sure --constraints/--to don't work with bundles
[19:10] ybaumy: same with --config
[19:10] zeestrat: that's too bad. shouldn't they?
[19:10] --config is on the roadmap
[19:11] single charms work now with tag constraints, that's nice, so i have to script the whole bundle setup as single charms
[19:11] constraints and to are (or at least can be) defined in the bundle, so the idea is that you set them there
[19:12] ok, so i can generate a yaml file with --constraints?
[19:13] the format with all the : at the end is really hard to script
[19:13] hmm
[19:13] i will try and see
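[ed: a minimal sketch of putting the constraints in the bundle itself, as zeestrat suggests, rather than on the juju deploy command line; the charm, tag, and unit count are illustrative, and the "services:" section name is an assumption matching bundles of this era.]

    # bundle.yaml - per-application constraints live in the bundle
    series: xenial
    services:
      ubuntu:
        charm: cs:ubuntu-10
        num_units: 4
        constraints: tags=test
    # deploy with: juju deploy ./bundle.yaml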
[19:13] have you checked out https://jujucharms.com/docs/stable/charms-bundles ?
[19:16] not before. but i can just generate it one time and then use perl or sed to change tags=test to tags=somethingelse
[19:17] nice
[19:17] :)
[19:17] thanks zeestrat
[19:17] that will do
[19:18] ybaumy: glad to help :) There are no examples of using variables in the docs at the moment, but here's an openstack HA bundle you can look at: https://launchpadlibrarian.net/298175262/bundle.yaml
[19:20] no need for variables. i'll just sed -i 's/tags=test/tags=somethingsomething/g' the file for each process before i start juju deploy bundle.yaml
[19:21] and thanks for the HA link
[19:38] will try tomorrow. now football
[19:48] lazyPwr, trying to put together my budget ask: is there any conference, etc. that would be good for me to try and attend? any suggestions?
[19:50] all of them \o
[19:53] zeestrat, andrew-ii: adding constraints to the bundle works. thanks for helping me.
[19:55] magicaltrout, yeah, I wish I could get the budget for that :-/ I have to be more realistic than that and target kubernetes, juju, and maas related ones, as well as the automotive ones, since that is the industry we are actually in
[19:56] hmm wtaf, my deployment is green but the ingress proxy just gives me a 504 on absolutely everything =/
[19:56] what country are you in, stormmore?
[19:57] can a 'requires' relation be related to two 'provides' at the same time?
[19:57] stormmore: well, the charmer summit is great if you want facetime with us to hack on projects :)
[19:57] e.g. kibana1 to elasticsearch1 and kibana1 to elasticsearch2
[19:58] stormmore: however, if you want k8s focus, you're more than likely going to have success attending kubecons, but our presence there isn't very big. Mostly what I would call guerilla community ops, where we roam hallway tracks and find people to engage with
[19:58] hatch: depends on how the charm is coded to use those relationships, but yeah.
[19:58] lazyPwr: ok, so there is no Juju restriction to that effect?
[19:59] interfaces are the abstraction; you should be able to interface with both es clusters, but it's up to the app to implement that, and up to the author to correctly handle the data coming from the interfaces.
[19:59] afaik, nope
[19:59] the GUI explicitly prohibits that interaction
[19:59] why?
[19:59] seems....arbitrary
[19:59] my guess, a limitation in Juju 1?
[19:59] kinda like not letting me unrelate a subordinate :P
[19:59] yeah, you're probably right
[19:59] or from PyJuju even?
[19:59] juju1 had fun quirks like that
[19:59] yeah
[19:59] ok thanks
[20:00] * hatch creates a model on jujucharms.com to test for sure
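[ed: a minimal sketch of the test hatch describes: relating one application's requires endpoint to two separate providers. Juju itself allows this; the application names are the illustrative ones from the discussion, and the charm must be written to handle multiple relations.]

    # one requires endpoint, two providers of the same interface
    juju add-relation kibana1 elasticsearch1
    juju add-relation kibana1 elasticsearch2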
[20:00] lazyPwr, yeah I would love to go to a "charmer summit", but I don't see another one set up yet
[20:01] stormmore: i don't think we'll get it scheduled until mid-spring. jcastro might have more details as to when the next summit will be, however
[20:02] we haven't really talked about it
[20:02] somewhere hot
[20:05] and of course finance wants to know, like, yesterday :-/
[20:11] lazyPwr, fyi I think the better question is whether you guys want to suffer the pain of meeting me ;-)
[20:12] any idea how to debug a 504 on the ingress router, no kubernetes dashboard etc, lazyPwr?
[20:22] @jcastro, I will look forward to hearing about it when you do have those discussions
[20:35] \o/
=== menn0-afk is now known as menn0
=== _thumper_ is now known as thumper
[21:04] hey, trying to remove a unit that is in an error state for the config-changed hook doesn't work, as far as I can tell
[21:05] there's a bug about a failing upgrade state that already exists, maybe this is related
[21:07] oh, this may be something else. lp:1671476 is about destroying a model
[21:09] I'm unable to remove a unit or application
[21:10] you need to mark the unit resolved first, skayskay
[21:10] juju resolved unit/0 --no-retry
[21:11] magicaltrout: thanks! that's it. I didn't know about that --no-retry option. handy
[21:18] hello all; anyone have an example bundle of openstack using network spaces?
[21:22] oooh shiugar
[21:22] sooo lazyPwr, i'm not convinced that each k8s worker should be on its own flannel subnet, am I right?
[21:25] this is really odd, having it hang during fetching the juju agent
[21:26] I suspect a network issue, but I can't log into the instance using the key that I added to the MaaS user! hmmmm
[21:28] magicaltrout: why not?
[21:29] okay, in that case i'm just wrong :)
[21:29] i assumed they had to share a subnet
[21:30] * magicaltrout returns to wondering why they aren't working
[21:30] magicaltrout, no, each worker needs its own subnet to manage its containers with
[21:30] fair enough
[21:30] i'm spinning one up on aws for comparison, but i assumed they'd be on the same subnet. Obviously not :)
[21:33] anyone have an idea where to look when I can enlist and commission nodes in maas but I can't bootstrap juju?
[21:42] stormmore: juju bootstrap --debug, and then check the maas logs during bootstrap.
[21:44] rick_h, thanks, I keep forgetting about that. building a "clean" environment to try again
[21:49] firl: The #openstack-charm people have a bundle in their dev repo: https://github.com/openstack-charmers/openstack-bundles/blob/master/development/openstack-base-spaces/bundle.yaml
[21:49] zeestrat: thanks!
[21:50] firl: There's an OpenStack HA bundle lying around using networks defined in config (not the new juju network bindings): https://launchpadlibrarian.net/298175262/bundle.yaml
[21:50] interesting
[21:51] I am trying to figure out the best way to do the networking I need; I just finished configuring all of the physical net, so I'm trying to make sure I have MAAS set up the way I want before deploying the bundle
[21:52] it seems like "spaces" is where the bundles are going
[21:52] charms rather
[21:53] firl: I recommend checking out the #openstack-charm channel as well. I'm working on an HA bundle with spaces if you're interested. Send me a pm and I can get back to you
[21:53] I will jump on that channel too
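[ed: a minimal sketch of the juju network-space bindings zeestrat contrasts with config-defined networks; the charm, endpoint, and space names are illustrative, and a charm's endpoint names come from its metadata.yaml.]

    # list the spaces juju discovered from MAAS
    juju spaces
    # bind specific endpoints to spaces at deploy time
    juju deploy cs:ceph-osd --bind "public=ceph-public cluster=ceph-private"
    # or give a default space for everything not listed explicitly
    juju deploy cs:ceph-mon --bind "management public=ceph-public"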
[21:54] hrmm, no one in that one.
[21:54] firl: sorry, #openstack-charms
[21:54] perfect
[21:55] there are also some docs on https://docs.openstack.org/developer/charm-guide/ with release notes for the latest stable releases of the openstack charms
[21:56] I'm off for now, but pm me and I can get back to you
[21:56] thanks
[22:07] rick_h, http://paste.ubuntu.com/24179043/ shows the output from --debug. I am not seeing anything obvious in regiond.log, rackd.log, or the instance messages log to show why it isn't bootstrapping :-/
[22:10] rick_h, but this time I was able to log in; apparently I am missing a default gateway
[22:11] rick_h, which seems odd considering it did an apt update / apt dist-upgrade
[22:22] one problem down, no telling how many to go :-/
[22:29] gotta love weird networking issues
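[ed: a minimal sketch of checking for the missing default gateway stormmore found, run on the MAAS-deployed node itself; the gateway address is illustrative, and the permanent fix is setting the default gateway on the subnet in MAAS.]

    # on the deployed node: confirm whether a default route exists
    ip route show default
    # temporary workaround while fixing the subnet's gateway in MAAS
    sudo ip route add default via 192.168.1.1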