[00:32] Where would I start looking to learn how a model is added (specifically a caas model). I'm wanting to figure out how to determine if a controller has something that is using the "add-k8s" added cloud details for that controller [00:35] veebers: depends on ur patience but maybe from cli that adds a model? that should take u thru all the layers :) [00:36] * veebers puts on spelunking kit [00:36] anastasiamac: sweet, I'll dive on in :-) [00:36] * veebers takes off kit [00:36] no, need to eat first [00:37] veebers: good idea, no need to look at code hungry... oh... wait... [00:37] :) [04:00] anastasiamac: you have a moment, want to outline how I might implement the code to fix this bug https://bugs.launchpad.net/juju/+bug/1768845 [04:00] Bug #1768845: add-k8s incorrectly reports "cloud already exists" [04:01] anastasiamac: Roughly remove-cloud needs to ask the controller if anything is using that cloud, if so error. If nothing is using it we remove it from the controller and then locally. [04:01] query: Querying the controller like that, that'll be a new facade call? Should it be 2 calls? (1. check for usage 2. remove cloud details) [04:20] babbageclunk: when would one use GetRawCollection over GetCollection? Context, I want to do a simple query on models collection: find count where cloud == and type == "caas" === zeus is now known as Guest6945 [04:29] veebers: the non-raw one will add the state's model uuid to any ids in queries or updates. So if you want to query across models you'll need to use GetRawCollection, I think. [04:30] babbageclunk: ah ok, so a query run on the controller checking which models may be caas on that cloud should be fine with a raw, right (I just need to know if any exist) [04:31] veebers: yeah, that sounds right. 
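The check babbageclunk and veebers sketch above — look across all models on the controller for one whose cloud matches and whose type is "caas" — can be illustrated with a small self-contained sketch. Plain dicts stand in for documents in the "models" collection; in juju proper this would be an mgo query against the raw collection, since the non-raw GetCollection scopes queries to a single model uuid. The example data below is invented for illustration.

```python
# Self-contained sketch of the "is this cloud in use?" check discussed above.
# Plain dicts stand in for mongo documents; the field names ("cloud", "type")
# follow the query sketched in the conversation.
def cloud_in_use(models, cloud_name):
    """Return True if any CaaS model on the controller uses cloud_name."""
    return any(m.get('cloud') == cloud_name and m.get('type') == 'caas'
               for m in models)

models = [
    {'cloud': 'microk8s', 'type': 'caas'},  # hypothetical example data
    {'cloud': 'aws', 'type': 'iaas'},
]
print(cloud_in_use(models, 'microk8s'))  # True: a caas model uses this cloud
print(cloud_in_use(models, 'aws'))       # False: the aws model is iaas
```

If this check returns True, remove-cloud would error rather than report "cloud already exists"; otherwise the cloud can be removed from the controller and then locally.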
[04:31] babbageclunk: If you have a second could you respond to my question I should have asked the room but singled out anastasiamac :-) [04:34] veebers: How would you expect it to behave if it was in use in one controller but not in another? Error without removing any? Or remove it from the ones where it's not in use and then error about the ones where it is? [04:35] (I think the former, not totally sure.) [04:35] babbageclunk: I still haven't gotten clarity on that, there is a discussion on the 2 bugs, but nothing definitive for when there might be multiple controllers using the cloud [04:42] babbageclunk: in the bug (the other one with the discussion is: https://bugs.launchpad.net/juju/+bug/1768847) I suggest perhaps removal be a 2 part step (2-part commands are frowned upon I know :-)). Split it into remove from controller and remove locally [04:42] Bug #1768847: add-k8s requires a model [05:28] babbageclunk, anastasiamac: https://github.com/juju/juju/pull/8689 this updates the upgrade functional test to use percona-cluster. I've not done the CMR test yet as I'm still scratching my damn head on that one. [05:28] (so I thought screw it land the upgrade test, CMR can land later) [05:33] veebers: whoa, that's bigger than I was expecting from the description. [05:34] oh, because of the local mediawiki. [05:34] babbageclunk: lol yeah I forgot to specifically mention it uses a local copy of mediawiki to fix an issue that was breaking the test [05:34] aha typing is hard [05:34] I'll update the PR comment to make it more obvious [05:34] Are you on a bouncy castle now? [05:36] babbageclunk: oh man I wish, that would explain how non-productive I've been today :-\ [05:36] I did break into that kvm node though, so *yay* [05:37] yes and I'm sure that experience beats bouncy castle any day!! [05:38] hah ^_^ [07:03] anyone able to get to CI Jenkins? I'm on the VPN but http://10.125.0.203:8080/ isn't returning for me [07:11] jam: Doughnuts for me too. 
[07:14] Latest CI run for my PR got a 503 installing Go from snap too. === frankban|afk is now known as frankban [07:38] manadart: snap store was down for a bit [07:38] manadart: but I thought it was back up [07:41] manadart: it seems there has been a general connectivity problem to the Canonical datacenter [07:42] jam: Ack. Seems to be running now. [07:51] guild: can I get a review for https://github.com/juju/juju/pull/8690 [07:58] stickupkid: so... before going to the effort, could we just copy that repo and name it to something useful? [07:58] (before I set up a bot on a repo that I'd like to kill :) [08:32] jam: Approved it. [08:32] manadart: hm. just got a "took too long to connect" to hangouts, trying again [09:51] when i run juju destroy-controller, the command hangs, how can i debug this issue? [09:52] ice9: have you run with --debug flag? [09:54] stickupkid, juju.juju api.go:67 connecting to API addresses: [10.96.159.38:17070] seems this IP doesn't exist [09:55] is there a way to force a juju command? [09:59] even list-machines hangs [10:08] ice9: Try kill-controller [10:08] manadart, it hangs as well, even juju status [10:10] since i deleted those containers, juju cannot execute its commands against non-existing containers so it hangs! [10:26] If the controller machine is already gone, try "juju unregister " [12:19] Is there a charm reactive/endpoint pattern wonderperson online that can help me out with implementing my layer? [12:22] https://github.com/Ciberth/generic-database/blob/master/reactive/generic-database.py#L58 I want to implement this but now I'm once again in doubt if I should use that share_details function or if I should create a handler in the provides.py that reacts on the available flag sharing it [12:26] cory_fu: ^ === tvansteenburgh1 is now known as tvansteenburgh [12:57] TheAbsentOne: No, I think what you have there looks right. 
As long as you're not directly accessing to_publish or received in your charm layer, then you're respecting the encapsulation and future-proofing yourself to allow for the interface layer to evolve. [12:58] TheAbsentOne: And in general, if you need to pass data around, like with your share_details, then it needs to be a method like that and not just a flag, since flags can't pass data. [13:01] TheAbsentOne: However, in https://github.com/Ciberth/generic-database-layer/blob/master/provides.py#L19-L26 you're basically saying that every connected application will always be sent the same relation data; if an application is related and requests mysql, and then another application is related and requests pgsql, the loop will overwrite the mysql data on the first relation with the pgsql data. [13:02] You should probably add something like "if relation.joined_units['technology'] == technology: continue" to the start of the loop [13:03] And also make https://github.com/Ciberth/generic-database-layer/blob/master/provides.py#L11-L14 loop over the relations and check the technology on each one, since they could be different for different relations [13:05] cory_fu: thanks for the input, but each generic-database should only receive a request once! This means that if a new consumer-app connects he should receive the same connection details. On the other hand the same consumer-app should be able to connect to multiple generic-databases though [13:06] cory_fu: could you tell me how I can pass a flag? I kinda want to set a flag in my generic-database and when he is set the same flag should be set in my consumer-app? This: https://www.dropbox.com/s/n27pznx2ms3ew4d/temphandlerquestion.png?dl=0 doesn't make sense right? [13:07] TheAbsentOne: Why make that restriction, though? Why not allow both wordpress and django to connect to the same generic-database, with one requesting mysql and the other pgsql? 
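cory_fu's overwrite point above can be seen with a small stand-in sketch. Plain dicts replace real charms.reactive relation objects, and the guard skips relations that requested a different technology, which is the behavior the suggested check is after; all names and data here are illustrative.

```python
# Stand-in sketch of the provides-side loop discussed above. Each dict mimics
# a relation: what the remote app requested, and what we have published to it.
def share_details(relations, technology, details):
    for rel in relations:
        # the guard: skip relations that asked for a different technology,
        # so a later pgsql request can't clobber an earlier mysql one
        if rel['requested'] != technology:
            continue
        rel['published'].update(details)

relations = [
    {'requested': 'mysql', 'published': {}},
    {'requested': 'pgsql', 'published': {}},
]
share_details(relations, 'mysql', {'technology': 'mysql', 'port': 3306})
# only the mysql relation received data; the pgsql relation stays empty
```

Without the guard, the second iteration of an unconditional loop would publish the same details to every relation, which is the overwrite cory_fu describes.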
[13:08] Granted, any pgsql connections would end up sharing the same db, since I don't think it supports a named db relation... [13:08] cory_fu: because the version of this generic-database charm assumes that 1 charm represents only 1 database if you need 2 you need a second charm/unit [13:08] so wordpress and django are allowed to connect to the same generic-database but it will only be 1 database with 1 technology, 1 connectionstring [13:10] the idea behind this approach is that I want an entity as small as possible, and we can add layers on top of it if we want what you suggested! [13:10] TheAbsentOne: Ok, if that's how you're designing it, then disregard my comments. :) As for the flag question, you're right that flags are local to the unit they are set on. If you want a flag to be set on the other side of the relation, it needs to be driven by some bit of relation data and set by the (in this case) requires side of the interface layer. [13:10] Fair enough. :) [13:12] TheAbsentOne: Does that make sense about the flag? [13:12] cory_fu: so for example in the requires.py I can do @when('generic-database.changed.port') set_flag('myflag') [13:12] Yep [13:13] cory_fu: No I think not x) I'm making stupid mistakes again. The thing is I want to react in my consumer-app as soon as this function is finished https://github.com/Ciberth/generic-database/blob/master/reactive/generic-database.py#L37 [13:13] so in other words my generic-database knows the data (connectionstring), I now want to pass the data to the consumer-app [13:15] TheAbsentOne: Right, but that data has to get to the consumer-app via the relation data, which you're already doing with share_details. Then you just have to add something on the requires side that observes that that data is now available and sets the flag accordingly. 
From the consumer-app charm's perspective, it will be the same: as soon as the data is available on the consumer-app machine, the flag will be set and the charm can react to it [13:16] cory_fu: I had the flag @when('generic-database.postgresql.available') render config file so it was this flag that I wanted to set in my requires [13:17] alright, I'll try it out in a bit and come back to you, thanks already cory_fu! [13:17] TheAbsentOne: No problem! :) [13:27] <[Kid]> how do you guys normally do a juju/MAAS controller? Do you just have a small footprint machine to run the commands from? [13:28] <[Kid]> for example, if i have 6 machines I want to put into a compute cluster for Openstack and I want to use juju and MAAS to deploy it, i don't want to chew up one of those machines to install juju and MAAS on to deploy to the others [13:29] <[Kid]> i mean, i thought of something as crazy as putting my MAAS and juju roles on a raspberry pi [13:30] <[Kid]> or maybe it is good to have a lxd container for these roles. [13:41] [Kid], no, if those 6 are beastly then don't dedicate one of them to the MAAS server & the Juju client [13:42] [Kid], you'll need to find a mediocre system for those. maas docs have hardware requirements for the server [13:44] [Kid], the harder part will be finding a suitable system in the MAAS cluster to dedicate to the Juju controller. best find yet another mediocre system for that and make it a MAAS node [13:44] you can then target that system (usually via a MAAS 'tag') when creating the controller (`juju bootstrap`) [13:46] [Kid]: yea, I actually run maas on an old laptop for my 8nuc cluster. I do end up using one to bootstrap to, but might also use that to run a workload if I have to. 
The main thing is if the openstack is important how HA are things going to be/etc [13:50] [Kid]: Also remember that you can build a modest additional maas box to swap out the original maas system if you overallocated hardware to the initial one [13:59] tinwood: Were you working on https://github.com/juju/charm-tools/pull/397/ by chance? [14:00] I'm certainly looking at it. I'm not sure what's going on though (I've only just got to it). [14:00] cory_fu, ^^ [14:01] tinwood: I think the issue is here: https://github.com/snapcore/snapcraft/blob/master/snapcraft/plugins/python.py#L373-L390 [14:01] * tinwood is looking [14:01] But I also think I might have a workaround. I just found the override-stage option [14:02] tinwood: https://docs.snapcraft.io/build-snaps/scriptlets#overriding-the-stage-step [14:02] Trying it now [14:02] kk [14:02] so ive got graylog installed, configured via rsyslog, I set the daemon to send it via tcp via /etc/rsyslog.conf, and I bounced the rsyslog service on an ubuntu server. in addition, I also added an input for syslog tcp in graylog, but im not receiving any logs. any ideas? [14:04] [Kid]: Small physical host for maas controller then a couple of kvm's on the same host as juju controller. No HA but decently small footprint [14:39] manadart: so whatever patch you *just* landed on develop broke the lxd test suite on bionic. [14:39] I was trying to reproduce a failure but it wasn't failing, and then I updated my bionic branch and it started failing and the last commit is: lxd-remove-client-config [14:40] tinwood: I had to use override-prime, but it did work. 
Hit another unicode error in charm build, but that's easy enough to sort out [14:40] http://10.125.0.203:8080/job/RunUnittests-amd64-bionic/11/testReport/github/com_juju_juju_provider_lxd/TestPackage/ [14:41] so ive got graylog installed, configured via rsyslog, I set the daemon to send it via tcp via /etc/rsyslog.conf, and I bounced the rsyslog service on an ubuntu server. in addition, I also added an input for syslog tcp in graylog, but im not receiving any logs. any ideas? [14:41] Jam: That landed this morning. I will take a look. [14:41] as an update, I can now see data loading in inputs, but its not showing anything in searches or extractors [14:44] <[Kid]> pmatulis, got your info, that's kind of what i thought [14:45] <[Kid]> pmatulis, you think a pi might work for the juju controller/client and the MAAS server? [14:45] <[Kid]> or maas controller [14:47] hi guys [14:47] any update on when 2.3.8 will be the stable snap? [14:47] admcleod_: we're currently working on the 2.4beta2 and after that will do the 2.3.8. Our goal is to get it out before your Tues release of stable charms [14:48] beisner, ^^ [14:48] manadart: i was able to reproduce it with a checkout of develop on bionic and TestOpen started failing. I'll see if I just pop off your change if it passes, give me a sec [14:51] [Kid], i'm not sure, you will have to compare specs - https://docs.maas.io/2.3/en/#minimum-requirements [14:52] cory_fu, excellent; are you pushing to the same PR? [14:52] tinwood: I can't push to your fork [14:52] I thought I'd picked up all the py3 isms. [14:52] anyone have any connections to the graylog project for a warm intro? Im trying to fix a weird issue with graylog via juju install [14:53] cory_fu, really? I'm sure I ticked the box ... let me check. [14:53] somehow I have 12kb of log data into a system, but no data? [14:53] tinwood: Ticked the box? Hrm. 
Maybe I can, I didn't actually try [14:54] cory_fu, tbh, I don't know if it actually works; but I did tick "let maintainers push to this ..." [14:54] manadart: nope, still fails, was something earlier than the last change [14:54] So I'm hoping it does! [14:56] rick_h_: thanks [14:56] <[Kid]> pmatulis, yeah a Pi isn't enough. [14:56] <[Kid]> i need to get a cheap rack mount to host that [14:56] <[Kid]> then i can put all the simple roles on it [14:57] tinwood: Oh yeah, it worked! :) [14:57] tinwood: https://github.com/juju/charm-tools/pull/397/commits/5a391458d452c93a40af3b7ec5fee481a052dd53 [14:57] even more excellent! [14:58] manadart: ugh... it failed, it failed again, and just now it passed on upstream/develop [14:58] cory_fu, thanks also for taking a look at it and pushing it on. I started this branch over Xmas break when i was a bit bored! [14:58] jam: I was looking at the PR. Hard to see how that would be it. I'll look at my priors. The fix to run init on Bionic. [14:58] tinwood: So, that works in the snap, but there's some discussion happening about getting the deb package for xenial updated and I'm not sure if these changes will cause any issues in the deb package or not [14:58] jam: great* [14:59] manadart: methinks there is some sort of race condition. [14:59] manadart: and/or the fact that we are mutating host machine state is *not* conducive to working correctly all the time [14:59] tinwood: Oh, I thought it was because you were hitting some issue that needed Py3 support. Ah well, into the future! [14:59] cory_fu, It should be py2+py3? At least the tests passed. [14:59] cory_fu, yeah, that's what resurrected it! [14:59] manadart: example, it only works after it has failed once, which always passes for humans, but never for bots :) [14:59] cory_fu, what we need in openstack is py3 charm-list and build as we're going py3. [15:00] So it suddenly became an issue again. [15:00] So, yes, definitely needed. 
[15:00] tinwood: Yeah, it should be py2 compatible still, but I'm not sure if the pip bundling behavior was needed in the deb package. TBH, I'm not even sure if it ever worked in the deb package, though, because the deb package is quite out of date [15:00] Ah, good to know it's needed [15:01] :) [15:01] manadart: if I "lxc network delete lxdbr0" the test suite fails [15:02] tinwood: I seem to have broken the tests [15:02] ah [15:03] manadart: /me really doesn't want us to depend on having "lxd init" run on the host prior to running the test suite. *any* test *mutating* a local LXD is bad mojo. [15:06] jam: we shouldn't be using the real deal for my money. GoMock. [15:06] manadart: definitely [15:06] manadart: 8691 passed the test suite now. are you ok with it landing ? [15:08] jam: yes. I am about transit for 90 mins or so. Will look at those tests when I can sit down with the laptop. [15:09] About *to* transit. Will be working en route. [15:49] cory_fu are you still here? Can I borrow your eyes for a minute once again? [15:50] TheAbsentOne: Yep [15:50] https://github.com/Ciberth/consumer-app/blob/master/reactive/consumer-app.py#L59 <-- pgsql here seems to be empty when I look at my rendered file [15:50] https://github.com/Ciberth/generic-database-layer/blob/master/requires.py#L10 is the requires [15:51] but shouldn't I return an object as a whole? [15:56] as a side note question cory_fu is it the convention to use 'endpoint.{endpoint_name}.changed' but with fields: '{endpoint_name}.field' or what is the best-practice in naming convention according to you? [15:59] TheAbsentOne: 'endpoint.{endpoint_name}.changed' and 'endpoint.{endpoint_name}.changed.' are set automatically by the framework. You can document that you use those in your interface layer, or you can translate them into other flags that you have more control over if you want. 
When converting interface layers from the older style, it's common to translate them to '{endpoint_name}.changed' or similar, because the old convention didn't [15:59] include a prefix (which did lead to some confusion if the charm name and endpoint name were the same) [16:01] ah I see, I was unsure if I should use 'endpoint.{endpoint_name}' or '{endpoint_name}' in my case but it doesn't really matter I guess cory_fu [16:01] TheAbsentOne: "details" in your requires layer isn't a valid field name. You probably want to use 'endpoint.{endpoint_name}.changed' and then check one of your attributes (e.g., technology) and if that's set, then set the available flag [16:02] No, it doesn't really matter. It's just a convention. I like to make the flag names more explicit to avoid confusion, but it does require more typing. *shrug* [16:03] cory_fu: and what do I return? Can I still return the details dictionary? [16:03] TheAbsentOne: Oh, I misread your code there. I see that you moved everything under a single "details" key, in which case your check is correct but you need to update the accessors [16:04] Handlers shouldn't (and can't, really) return anything. The return value is ignored. You should set the flag, and then the charm code can react to the flag, as you have done, and use the accessors to get the relevant data. [16:04] yeah I thought that would be better so less flags are set [16:05] so what do you mean by updating the accessors cory_fu what am I missing in my requires? 
[16:05] You could also have a single accessor that returns the whole "details" data structure, but you should be a bit careful there as you're leaking details about the communication protocol that might be better being encapsulated [16:05] TheAbsentOne: https://github.com/Ciberth/generic-database-layer/blob/master/requires.py#L26 should be "return self.all_joined_units.received['details']['dbname']" [16:06] ah right, of course [16:06] That's probably why you're seeing None values in your handler in your charm [16:06] exactly [16:06] so what do you think, still use the details way or use 6 flags instead [16:06] cory_fu [16:08] so instead of changed.details flag a when() with changed.port, changed.user, changed.dbname... all in one when [16:09] TheAbsentOne: You can use a single flag and a non-nested data structure, or you can nest the data structure and just update your accessors, it doesn't really matter, it's just up to you when creating the interface protocol [16:10] If you want to use a non-nested data structure, you could use @when('endpoint.{endpoint_name}.changed') with an if check inside the handler to verify the value(s) are set, or @when('endpoint.{endpoint_name}.changed.technology') and leave off the other flags because you expect them to be all set together. [16:11] I usually do the former, but it really is just up to you [16:11] right I'll update it so you can review [16:12] kk [16:12] Oh, and 1 more question, right now my request function goes over all relations. But in reality my request is only meant for 1 generic-database. How can I achieve that? If you add-relations manually there will always be a first one but if I would create a bundle I don't know if this might give conflicts. [16:13] cory_fu: so like this right? https://github.com/Ciberth/generic-database-layer/blob/master/requires.py#L10 [16:15] TheAbsentOne: So, you want the client to be able to connect to multiple generic-databases on a single relation endpoint? 
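The changed-flag-plus-guard option cory_fu describes above can be mimicked without charms.reactive installed. The flag store and handler below are stubs: in a real requires.py the handler would be an Endpoint method decorated with @when('endpoint.{endpoint_name}.changed') and set_flag would come from charms.reactive; names and data are illustrative.

```python
# Stubbed sketch of the requires-side pattern discussed above: react to the
# framework's automatic "changed" flag, verify the data actually arrived,
# then raise an interface-level ".available" flag for the charm layer.
flags = set()

def set_flag(name):
    # stand-in for charms.reactive.set_flag
    flags.add(name)

def on_changed(endpoint_name, received):
    # body of a handler that would carry
    # @when('endpoint.{endpoint_name}.changed') in a real interface layer
    if received.get('technology'):  # guard: verify the value is actually set
        set_flag('{}.available'.format(endpoint_name))

on_changed('generic-database', {'private-address': '10.0.0.5'})
# no flag yet: only implicit relation data has arrived
on_changed('generic-database', {'technology': 'pgsql', 'port': 5432})
# now 'generic-database.available' is set and the charm can react to it
```

The alternative cory_fu mentions, @when('endpoint.{endpoint_name}.changed.technology'), replaces the in-handler guard with a more specific flag when the fields are always published together.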
Your requires class would need to provide some way for the charm layer to distinguish them, like assigning each one an ID (or re-using the relation.relation_id, but take care to document that it is an "opaque" ID so that charms don't start depending on the specific value). But it might be better to say that if a charm wants to [16:15] use multiple generic-databases, then it should probably use a separate relation endpoint name for each one so that it knows which is which [16:16] TheAbsentOne: And yes, that update looks good, assuming you also update the provides side to not publish the nested structure [16:16] TheAbsentOne: i.e., undo this bit: https://github.com/Ciberth/generic-database-layer/blob/master/provides.py#L38 [16:16] yes absolutely I completely forgot about that, they should just use multiple generic-databases in their metadata [16:17] Yep. :) [16:18] cory_fu: oh right, uncomment all these lines right, or what do you mean? I still have to fill the details dict no? [16:20] TheAbsentOne: No, you don't need the nested structure any more. Can revert back to this: https://gist.github.com/johnsca/038c34657b2ee59f47c79f8443f4cf2e [16:21] wait now I'm confused, in my requires I thought I was still using the nested structure [16:23] TheAbsentOne: Oh, now I'm confused. 
It seems that you went in both directions in your requires, and I missed it [16:23] ow wait but I shouldn't xD I'm gonna change the all_joined_units.received['details']['...'] back to all_joined_units.received['...'] and try that [16:23] yeah my bad cory_fu xD [16:23] TheAbsentOne: Ok, so leave provides.py alone, and change https://github.com/Ciberth/generic-database-layer/blob/master/requires.py#L12 to juse "if self.technology():" [16:23] gimme a sec [16:23] *just [16:23] or yes thats even better [16:23] :) [16:23] haha damn I'm retarded [16:24] it feels like since I started writing charms I completely forgot how to program o.O [16:25] Creating interface layers requires a certain type of thinking and can be pretty confusing at first. I keep meaning to create a tutorial on tutorials.ubuntu.com walking through one but never seem to find the time [16:26] Well if you are up to it I want to help with it with my use case here. I'll start it and maybe you can review it and publish it if you would want that cory_fu [16:29] TheAbsentOne: Yeah, I'd be happy to collaborate on a tutorial [16:31] As soon as I get this to work, I'll work on it and get back to you as it would help me a lot too, having you review the code would be awesome [16:31] thanks again for the help before cory_fu gonna deploy my charms ^^ [16:31] TheAbsentOne: \o/ Good luck! :) === frankban is now known as frankban|afk [16:38] jujucharms.com is down [16:39] awe sits back [16:39] bdx: wfm? /me tries more urls [16:39] everything works fine from my end [16:40] hmm yeah its back for me now [16:40] must have been a slight outage while things rebooted? 
[16:40] not sure o.O [16:40] although multiple users hit it in different areas of the globe [16:40] yeah strange [16:40] may be geo to me [16:41] I'll keep an eye out but nothing flipped in nagios/etc [16:42] cory_fu I missed something; I get a nonetype not subscriptable error; I'm gonna try it without the nested structure [16:43] TheAbsentOne: I know what the issue is. Moving away from the nested structure will help, but if you're unsure and curious, I can explain what's happening [16:43] (Obviously, I missed it before, too) [16:44] oh go ahead, love to learn cory_fu [16:45] TheAbsentOne: :) So, the .changed flag is going to be set pretty early due to some implicit and automatic data that juju sets (specifically, private-address). But the "details" structure won't be in there yet. However, the technology() accessor assumes that it's there and tries to subscript it, which leads to the NoneType error. To fix it with the nested structure, the accessors would all need to include an "if 'details' in self.all_joined_units [16:45] .received" check [16:46] TheAbsentOne: I think moving away from the nested structure will be easier. And it will also be easier if you ever need to debug this by viewing the relation data directly (with, e.g., juju run --unit unit/0 -- relation-get -r rel:id - other-unit/0) [16:49] cory_fu: right I get it makes sense thanks for that, I didn't know private-address happens before everything [16:50] I changed requires and provides correctly now I think, could you give it another quick look: https://github.com/Ciberth/generic-database-layer/blob/master/provides.py [16:57] TheAbsentOne: I would keep https://github.com/Ciberth/generic-database-layer/blob/master/requires.py#L12 as "if self.technology():" instead just because then you only have one place where that key has to be correct, rather than two [16:58] But other than that, it looks great [16:59] allright perfect ill change that [17:12] works like charm x) cory_fu thanks a lot! 
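The failure mode cory_fu explains above (the .changed flag firing on the implicit private-address before "details" is published) can be reproduced in plain Python. Here 'received' stands in for all_joined_units.received, and the function names are invented for illustration:

```python
# Why the nested "details" structure blew up: juju publishes private-address
# before the charm publishes anything, so the automatic .changed flag can
# fire while received['details'] is still missing.
received = {'private-address': '10.0.0.5'}  # implicit, early relation data

def technology_nested(received):
    details = received.get('details')  # None until details is published
    if details is None:                # the guard the accessors lacked;
        return None                    # without it: 'NoneType' object is not subscriptable
    return details.get('technology')

def technology_flat(received):
    # flat structure: a missing key is simply None, no extra guard needed
    return received.get('technology')

print(technology_nested(received))  # None (would raise without the guard)
print(technology_flat(received))    # None
received['technology'] = 'pgsql'
print(technology_flat(received))    # pgsql
```

This is why moving away from the nested structure is the simpler fix, and why the flat data is also easier to inspect with relation-get when debugging.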
[17:13] you made my day [17:13] TheAbsentOne: Glad I could help! [17:40] how do I switch back and forth between local juju controllers and jaas? [17:40] bobeo: juju controllers [17:40] will show you the different controllers [17:40] bobeo: and you can use `juju switch jaas` [17:40] to switch [17:40] rick_h_: I can see my controllers, but how do I move from one controller to another? [17:40] ooo! [17:41] bobeo: and you can specify what model in those controllers with `juju switch controller:model` [17:41] bobeo: as optional more specific values [17:41] bobeo: so watch out naming your models the same as a controller or it might get confusing :) [17:45] rick_h_ is there a way to pass entire files over a relation or do I need to read the contents of the file and pass it as a variable? [17:45] TheAbsentOne: yea, have to pass contents or put them in a common place like a s3 bucket or something [17:45] TheAbsentOne: the relation data is meant to be a databag, sending whole files is kind of a bit much [17:46] alright thx rick_h_ the idea is a .sql file for example [17:46] TheAbsentOne: right, but if you have a connection why not just run the file as the client? [17:47] rick_h_: because the client needs to install packages/stuff to make that happen but you are right though ;) it was a showerthought [18:13] anyone used conjure-up for OpenStack yet? === blahrus_ is now known as blahrus [18:15] blahrus: bunch of folks have used it. What's up? [18:16] (and by bunch of folks I'm not included heh) [18:16] rick_h_, just trying to understand the networking requirements going into it. [18:17] eno0 is for PXE/MAAS network - Then we have vlan100 - vlan103 on the remaining 3 ports [18:18] blahrus: hmm, so not sure how much conjure up walks through on the binding of apps to the various vlans bit. 
[18:18] beisner: do you all have any personal run-through of conjure-up with > flat networks [18:20] rick_h_, everything is on the same switch stack and the only IPs that are routed are the publicly accessible IPs [18:21] rick_h_, Looks like conjure wants to setup bridges (which is fine) but I'm not sure what needs setting up in MAAS beforehand then [18:21] rick_h_: i think the conjure-up openstack use case is indeed just a simple network. [18:22] beisner: k, yea I wasn't sure how much customizing it let you do [18:22] simple network = everything in the same vlan? [18:22] blahrus: so typically you want to setup different network spaces in MAAS, and then you can get Juju to map the traffic patterns you need onto those spaces with Juju commands during the deploy and setup. [18:23] blahrus: this is a little old but has the idea: https://insights.ubuntu.com/2016/01/21/introduction-deploying-openstack-on-maas-1-9-with-juju [18:26] rick_h_, thanks! I'll check it out [18:56] beisner, would that be everything running over eno1 ? [18:57] no vlans? [18:57] trying to make sense of step 5: https://www.ubuntu.com/download/cloud/build-openstack [20:16] Morning all o/ [20:18] blahrus: what has helped me in the past is to just forget about the multiple nets at first [20:18] get it all deployed on a single flat network [20:19] once you have that working, then try adding/diversifying the base networks etc etc [20:19] bdx: got it, you use conjure to do that? [20:20] yeah, you definitely can [20:20] blahrus: you can also create your own bundles and `juju deploy` them [20:21] blahrus: 3 most recent gists here https://gist.github.com/jamesbeedy [20:22] are 3 recent tutorials on basic openstack deploys [20:22] have at it [20:23] there are a few configs to swap out in the bundle if you wanted to make it work on your own env [20:23] but, hopefully you get the idea [20:23] bdx: Minimal VLAN + External might prove very very helpful [20:23] thanks! [20:23] np [20:35] morning team [21:08] morning!