[00:07] hi wallyworld im here
[00:07] ah, different nic?
[00:08] have you pushed changes to charm.v6?
[00:08] ah.. let me change it!
[00:11] wallyworld, can i have a few of ur minutes on HO?
[00:12] sure
[00:43] hi wallyworld do u think the `devices` will be used for GPU only or could be something else in the future?
[00:43] whatever k8s supports, like network cards etc
[00:44] that's why we went with the generic "devices" name
[00:44] but for now, initially, just gpu
[00:44] there's a link in the spec to the k8s device plugin framework
[00:45] so I define the Device.Name -> nvidia.com/gpu and Device.Type -> gpu.
[00:46] Is this correct?
[00:46] no, the name is something meaningful to the charm
[00:46] like "bitcoin-miner"
[00:47] type is either "gpu" or "nvidia.com/gpu"
[00:47] depending on if the charm wants just a generic/any gpu or specifically an nvidia one
[00:47] there are examples in the spec
[00:48] initially we'll probably do an exact match on charm metadata device type and k8s
[00:49] ok, got it. thx
[01:54] wallyworld: when u r bored and wonder what to entertain ur mind with, could u PTAL at https://github.com/juju/juju/pull/8834
[01:54] lol, bored
[01:55] this is the last part before storage interface :D and actual providers changes ...
[01:55] lol, i know :D
[02:10] anastasiamac: done
[02:12] wallyworld: thnx \o/
[02:13] wallyworld: F vs Func... one is the type, the other the name of the variable...
[02:14] i see, ok. we tend to use Func for var names elsewhere, but i'm not too worried about that one. the names just grated a bit
[02:14] wallyworld: but, yes, consistency is better and Func everywhere is more explicit
[02:14] i had hungarian notation flashbacks
[02:15] wallyworld: agree :) func is better :) i'd even go with 'funk' but m sure it won't b liked either :)
[02:15] yes! funk!!!!!
[02:16] :D
[02:16] anastasiamac: there's an F on the context as well i think? could do that as a drive-by?
[02:16] wallyworld: k...
[02:17] but u do know that there r a few Fs in the codebase :)
[02:34] no, didn't know
[02:47] here is one (altho in tests so probably does not count as much... but still exists...) https://github.com/juju/juju/blob/develop/cmd/juju/commands/bootstrap_test.go#L2056
[02:53] wallyworld, I just moved the Count Validation from Checker to `schema`, and addressed all the other comments, would you mind taking another look? thanks.
[02:53] sure
[03:11] kelvin: lgtm but we need to drop the checks for specific device type values, see my comments
[03:12] the schema should just check that a value is supplied
[03:12] wallyworld, yes, looking now
[03:12] juju should validate the actual values based on the capability of the cloud
[03:12] * wallyworld off to vet for a bit
[03:15] wallyworld, yes, u r right, we will have to do more detailed checks later on the juju side based on the runtime status of the cluster, so here we do not need to validate the type value.
[04:46] kelvin: that should be good to land, just go ahead and $$merge$$
[04:47] then you can pull tip of master locally and do the deps update
[04:47] wallyworld, yes, doing it now, thx
=== skay is now known as Guest21561
=== vern_ is now known as vern
[06:57] wallyworld: storage signatures change to accommodate call context... PTAL when u can - https://github.com/juju/juju/pull/8835 :D
=== frankban|afk is now known as frankban
[08:03] anastasiamac: looking
[08:04] wallyworld: it's not fully ready yet - i need to fix storage provisioner worker, etc... most of logic is there tho... wip :)
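[editor's note] A minimal sketch of what the `devices` stanza discussed earlier in this log (the [00:43]–[00:48] exchange) might look like in a charm's metadata.yaml, assuming the name/type split wallyworld describes. The device label, the `count` field, and the description are illustrative assumptions, not the final schema from the spec.

```yaml
# Hypothetical charm metadata.yaml fragment (sketch only).
name: bitcoin-miner
devices:
  bitcoin-miner-gpu:        # name meaningful to the charm, per wallyworld
    type: nvidia.com/gpu    # or just "gpu" if any generic GPU will do
    count: 1                # assumed field; how many devices are requested
    description: GPU used for hashing
```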
[08:04] i'll wait till done
[08:06] k
[08:58] BlackDex: Sorry I missed your reply!
[08:58] BlackDex: ip ro get shows me a single route using a matching interface and gateway
[09:00] very strange indeed then that it doesn't work. That interface is the same interface the requests do their ingress on?
[09:00] BlackDex: What i'm thinking though is that only one of these network spaces should have a gateway defined to prevent this issue - that way it's always consistent. But i'm not sure which 'space' is best for this, none of the charms share a common space - and presumably each charm needs at least one space with a gateway in order to hit apt etc?
[09:01] KingJ: Or just use the apt-proxy
[09:01] from juju
[09:01] i have built several environments which themselves don't have internet access at all
[09:01] apt-http(s)-proxy?
[09:01] yea, and no dfgw
[09:02] I have that defined already, but to a value that isn't part of the network spaces - how will it be able to route there? using the system level default gw?
[09:03] routing is not available then
[09:03] that's true
[09:03] maybe your network setup is too complex for what i suggest
[09:03] Hmm, would you have an example diagram of one of these environments?
[09:03] I'm always able to make changes ;)
[09:04] It sorta sounds like I need to remove the default gateway from each of the space subnets, but make sure each charm also has access to a common space that contains a proxy, and set apt-http-proxy to that host.
[09:05] they are very simple. Just one network for openstack communication, and a pxe network for the systems. Some containers do not have the pxe network, but do have the apt-proxy stuff. and the maas node has an interface in both networks as does juju
[09:06] that way juju can still communicate with all the charms. Also the juju controller uses the maas proxy!
[09:07] Ahh I see. Mine's broken up into about 7 spaces similar to this... http://blog.naydenov.net/wp-content/uploads/2015/11/openstack-spaces-e1447000706196.png
[09:07] via the `juju model-config -m controller` settings
[09:07] (which to be fair, does have internal-api as a common space across all charms which i've not done, hmm, that could be the fix...)
[09:09] Although ceph-osd and ceph-mon don't have a binding to anything other than storage and storage-cluster in the charm, I think I could use extra-bindings to give it an address in the internal-api space too, which would mean all my charms have a binding there and I could use that as my proxy subnet.
[09:10] ah yes, that is the only thing i do have, a separate storage network indeed :)
[09:10] and that is only connected to the charms which need it
[09:11] How have you connected that separate storage space for proxy access?
[09:12] not, because ceph-osd and ceph-mon are connected to the internal ;)
[09:12] and maas has a nic in the internal also to provide the proxy
[09:12] Ahhhh
[09:12] So you've bound your ceph-* charms to internal and a dedicated storage network?
[09:13] yes!
[09:13] just be sure to correctly configure the bondings or the *network* config options
[09:13] that messes stuff up for me sometimes
[09:14] for the bondings in a bundle see: https://github.com/openstack-charmers/openstack-bundles/blob/master/development/openstack-base-spaces/bundle.yaml
[09:24] BlackDex: That's a useful bundle reference. I'm trying to work out how best to bind it to my internal now. I've already got public -> storage-data and cluster -> storage-cluster, so I need to pick something for internal.
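[editor's note] A rough bundle fragment illustrating the explicit bindings KingJ describes just above (public -> storage-data, cluster -> storage-cluster). The space names come from this conversation; everything else (unit counts, charm revision, options) is a made-up sketch, not copied from the linked openstack-base-spaces bundle.

```yaml
# Hypothetical bundle fragment (sketch of the explicit bindings discussed above).
applications:
  ceph-osd:
    charm: cs:ceph-osd
    num_units: 3
    bindings:
      public: storage-data       # client/OSD traffic on the dedicated storage space
      cluster: storage-cluster   # replication traffic on its own space
```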
[09:24] I'm not sure how to 'discover' what bindings a charm supports though. So for example, ceph-osd's metadata.yaml lists bindings of 'public' and 'cluster' (https://github.com/openstack/charm-ceph-osd/blob/master/metadata.yaml) but the juju error output implies it supports bindings of "admin", "bootstrap-source", "client", "cluster", "mds", "mon", "nrpe-external-master", "osd", "public", "radosgw"
[09:25] Where are those extra bindings defined?
[09:28] haha, i did a dirty trick to get all those
[09:28] a long time ago
[09:28] i just added a non-existing bond
[09:29] like 'foobar'
[09:29] then juju returned me an error telling me which were available
[09:30] else i would have to download all the charms and look for it in the source code, because not everything was in the metadata.yaml of the charms
[09:30] Ok forgive me, the link I gave to metadata.yaml was for ceph-osd, but the juju output was for ceph-mon. Now that i'm looking at ceph-mon's metadata.yaml I can see the same bindings....
[09:30] they have the same :)
[09:30] that is correct
[09:30] you can also add a default bond btw
[09:30] instead of a name use ""
[09:31] Will that be used in addition to defined ones?
[09:31] "": space-name
[09:31] so e.g., ceph-osd supports bindings to public and cluster, which i've already bound. If I bind default too, will I get a third interface
[09:31] yes :)
[09:32] perfect :D
[09:32] it should as far as i know at least
[09:32] which solves my problem of "ceph-osd can only bind to public and cluster and i've bound those to spaces other than internal"
[09:32] but
[09:32] it is better to use "constraints: spaces=space1,space2,space3"
[09:33] or both of course
[09:33] the constraints will ensure the nic is bridged
[09:34] So far i've been explicitly binding charms to space(s), and it sounds like it's best for me to keep doing that except in cases like the ceph charms where I need an extra space that's not used by the charm's bindings, but instead by core network stuff.
[09:34] i'm used to the constraints because that was in 2.x somewhere, and the bonding came later
[09:35] so, i really don't know if they have the same effect actually
[09:35] the bonding values are also used by the charm to configure values
[09:35] like keystone uses those to configure the pub,int,admin parts
[09:35] I've just updated my bundle for ceph-mon to have a default binding to my internal space in addition to the existing explicit binds, deployed and I can see now they have 3 interfaces, perfect.
[09:36] nice :)
[09:36] So now I can change MaaS to not put default gateways on every subnet, except for the admin subnet.
[09:36] * internal subnet
[09:37] so that'll result in a single default gw on all of them
[09:40] rick_h_ stub cory_fu zeestrat kwmonroe and all others I'm forgetting: I want to thank you all for all the help the past few months! In 20 hours I need to present my research and generic database implementation. If you guys are interested in the presentation it can be seen here: https://www.youtube.com/watch?v=DPtRJKgNxoA
[09:40] Let's hope things go well :fingerscrossed:
[09:41] (and sorry for the bad english >.<)
[10:02] TheAbsentOne: best of luck!
[10:08] thanks!
[10:17] wallyworld: 8835 is ready for review - all unit tests pass locally
[10:18] Hmm... is 'juju deploy' preferring xenial over bionic, even for multiseries charms that list bionic first? cs:cassandra lists bionic first, but I get xenial vms unless I override with --series
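[editor's note] Continuing the bundle sketch above, this is roughly how the default "" binding and the spaces constraint from the binding exchange earlier in this log could be combined, so the ceph units also get an address in a common internal space. The space names are the ones mentioned in the conversation; the rest is an assumed sketch of what KingJ says worked for ceph-mon, not a verified recipe.

```yaml
# Hypothetical bundle fragment (sketch only).
applications:
  ceph-mon:
    charm: cs:ceph-mon
    num_units: 3
    constraints: spaces=internal-api,storage-data,storage-cluster  # make sure the NICs are there/bridged
    bindings:
      "": internal-api           # default binding for any endpoint not listed below
      public: storage-data
      cluster: storage-cluster
```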
[10:19] My test charm that only lists bionic as a supported series gets bionic as a default just fine
[10:20] Hmm, no. The test charm lists the exact same 3 series as supported. So the only difference I can think of is the charmstore, since older versions of cs:cassandra didn't support bionic?
[10:21] I'd be interested if other people see the same thing, with 'juju deploy cs:~cassandra-charmers/cqlsh' getting bionic and 'juju deploy cs:cassandra' getting xenial
[10:27] hi guys/gals. i granted a user admin rights to a model. a few mins later i am getting this error "ERROR cannot log into controller "maas-controller": cannot get discharge from "https://172.17.174.2:17070/auth": cannot acquire discharge: cannot http POST to "https://172.17.174.2:17070/auth/discharge": Post https://172.17.174.2:17070/auth/discharge: proxyconnect tcp: tls: oversized record received with length 20527"
[10:28] i'm not using any proxy between the juju client and the juju maas controller
[10:30] juju controllers shows the controller but no user/access. it's like it removed my access from the controller model at the same time as giving the user admin to the openstack model
[11:02] cory_fu: endpoint.{endpoint_name}.joined gets set when the first relation is made, but there is no event signaling that a second relation has been made to that name
[11:04] When two client applications are related to something like a database (ie. juju add-relation cqlsh:database cassandra:database; juju add-relation cqlsh2:database cassandra:database)
[11:13] cory_fu: https://pastebin.canonical.com/p/BwRRF8thTp/ is the relevant code, which works great for just one relation. Second relation, no new flag so no handlers in the interface or my charm get triggered.
[11:14] cory_fu: I can work around it with a @hook
[11:25] Hmm, I also can work around it by watching for endpoint.{endpoint_name}.changed.private-address, which I can reset
[11:32] practical fix might be to clear and set the endpoint.{endpoint_name}.{joined, changed.*} flags when a new relation appears, which a trigger can react to.
[11:33] TheAbsentOne: you can do it!
[11:35] naturalblue: hmm, what command did you use to grant access?
[11:36] Let's hope rick_h_!
[11:44] But I think a new endpoint.{endpoint_name} flag will be better, if we can think of a suitable name
=== Guest21561 is now known as skay
[12:01] rick_h_: juju grant admin openstack (openstack is model name)
[12:02] naturalblue: ok, and can you still run show-model on the openstack model?
[12:04] rick_h_: ERROR refreshing models: no credentials provided (no credentials provided)
[12:04] naturalblue: can the other user run the command?
[12:05] rick_h_: i am both users
[12:05] 1 is admin and 1 is naturalblue
[12:05] naturalblue: ah ok, from the same terminal?
[12:05] or machine I guess
[12:05] when i try to login with either user i get
[12:06] naturalblue: so...you ran the register command on that machine and gave the controller a new name?
[12:06] no
[12:07] i was logged into the maas-controller on the client machine
[12:07] i ran the juju grant naturalblue admin openstack
[12:07] after a minute or 2 i started getting the proxy error
[12:08] i logged out and tried to login as both users and am still getting the same message
[12:08] naturalblue: ok, bear with me. Having my morning coffee still. There's some cached files in .local/share/juju that have the admin credentials in them normally. I'm wondering if they got confused but if you ran register on another machine it shouldn't have.
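[editor's note] The pastebin stub links above is not publicly reachable, so here is a small, hypothetical charms.reactive sketch of the workaround he settles on: reacting to endpoint.{endpoint_name}.changed instead of .joined, and clearing the flag afterwards so a second relation (or new data) re-triggers the handler. The endpoint name 'database' and the credential fields are invented for illustration; this is not stub's actual code.

```python
# Hypothetical charm-layer handler (sketch of the .changed workaround).
from charms.reactive import when, clear_flag, endpoint_from_flag

@when('endpoint.database.changed')
def handle_database_clients():
    endpoint = endpoint_from_flag('endpoint.database.changed')
    for relation in endpoint.relations:
        # Publish (made-up) credentials to any related client that has
        # joined units but hasn't been given a username yet.
        if relation.joined_units and not relation.to_publish.get('username'):
            relation.to_publish['username'] = 'juju_' + relation.relation_id.replace(':', '_')
            relation.to_publish['password'] = 'example-password'  # placeholder only
    # Reset the flag so this handler fires again when another application
    # relates or existing relation data changes.
    clear_flag('endpoint.database.changed')
```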
[12:09] sorry, i wasn't giving out, just giving a better recap. i will check now
[12:09] naturalblue: can you run juju commands from the maas-controller then?
[12:09] naturalblue: as the new user?
[12:14] cory_fu: Oh, I guess endpoint.{endpoint_name}.changed is what I am supposed to hook into, and that will work fine. I don't know why it took me so long to get there :)
[12:18] rick_h_: i will see about logging into the maas-controller now
[12:18] naturalblue: k, I wanted to see if it's working on that end and we can probe the model for details to make sure the access is still there for everyone
[12:20] ^^ i still have access to the openstack model for naturalblue in the juju gui
[12:23] rick_h_: when i try to ssh to the maas-controller it says permission denied
[12:23] i have tried with admin and naturalblue. it doesn't even get to a password prompt
[12:23] the controller was set up using juju
[12:23] naturalblue: is maas deployed with juju or something?
[12:23] naturalblue: I mean juju sits on top of maas
[12:23] naturalblue: so I'd expect you to be able to ssh to that machine from wherever you set maas up
[12:25] i set up the maas-controller from the maas region server. i am on that now as i use it for juju client actions. it is the system i am getting the errors from when i try to run juju commands
[12:26] i set up a maas region controller, then installed juju, and from that deployed a juju-maas-controller on a different machine. I then set up a naturalblue user. i added an openstack model. i deployed my openstack setup to other machines. i then granted naturalblue admin access to the openstack model. This is where i am now
[12:27] all actions were done as the default setup admin user.
[12:27] after i granted the naturalblue user admin access to the openstack model, i have lost access to everything it seems
[12:28] as both admin and naturalblue
[12:28] although in the juju gui i had open for naturalblue, i do have the openstack model there and can do things
[12:28] strange!
[12:33] You can force a charm to deploy on a non-supported series via the juju CLI - how would you do that in a bundle configuration?
[12:49] KingJ: we don't have a --force in the bundle atm
[12:50] rick_h_: Ah I see. For now, i'm working around it by deploying with --force on the CLI, then using the bundle to configure and relate.
[12:52] KingJ: yea, the --force was meant really so folks could test/etc before things got updated but not something you'd want to do in prod with a repeatable bundle
[12:52] KingJ: at that point it's best to edit the charm and push the update
[12:53] rick_h_: Yeah, understandable that that is the preferred approach. Unfortunately this is a charm from the store that hasn't yet been updated with bionic support in the manifest, even though it does work fine on bionic.
[12:54] KingJ: gotcha, have you filed a bug on the charm? Should be a link to file bugs if the charm author set a homepage/bugs-url
[12:54] rick_h_: I did yeah, although unfortunately there's been no traction on the bug since I filed it 3 weeks ago.
[12:54] I wouldn't mind making the changes myself - if I forked it can I publish it to the store too? (albeit under my username instead of theirs?)
[12:55] KingJ: bummer, what charm?
[12:56] KingJ: exactly, you can use the charm cli tool to pull down the charm, edit it, and push to your own space
[12:56] rick_h_: https://jujucharms.com/u/bertjwregeer/snmpd/3
[12:56] rick_h_: Ah cool, I will look into doing that in the meantime then
[12:56] KingJ: https://docs.jujucharms.com/2.3/en/authors-charm-store
[12:57] rick_h_: Perfect, thanks for the information and pointers :)
[13:19] stub: .changed will work, certainly. I do think that .joined should re-trigger on new units, but that touches on the idea that I think all handlers / triggers should be edge-triggered and there should be an explicit mechanism for forcing an edge (which, under the hood, would just clear / set cycle the flag). But I was already writing up a more in-depth discussion of that for https://github.com/juju-solutions/charms.reactive/issues/177
[13:24] stub: Also, your usage pattern for the endpoint is very different than what I usually recommend, in that you're using the relations collection outside the endpoint class. It seems like a pretty natural way of doing it but it breaks encapsulation somewhat, mainly because the endpoint class can't influence the relations list, leading to things like your `publish_credentials(rel, False)`. I wonder if it would be a good pattern to allow the endpoint class to provide a subclass implementation for Relation and Unit so that those collections could be used directly with the interface layer being able to extend them like it can the Endpoint, so that we don't have to pass around rel_id or some other synthetic ID?
[13:30] cory_fu: This is a peer relation, which has a tendency to break encapsulation
[13:32] I would like to move stuff into the endpoint peers.py, but this was the first pass at translating things to reactive
[13:36] cory_fu: The Endpoint implementation can already override the relations property to return wrapped relations and access to wrapped units, with effort.
[13:38] cory_fu: I wouldn't bother with making things pluggable, at least at this stage. I think it would likely make things more confusing for people with more normal use cases.
[13:41] https://pastebin.ubuntu.com/p/YXBBVMXcD4/ is the full client.py file, which certainly could all be moved to provides.py. Encapsulated, just not where you expected it ;)
[13:50] And yes, I'm confusing two bits of code in my head and realise this is not a peer relation
[13:52] But in general I'm finding the client side easy to publish as an interface, but the server side implementation seems easier this way.
[14:00] stub: The server side is always going to feel easier to write that way, but the issue is that it makes it harder for others to create an interface-compatible server, since they have to make sure they re-implement the interface data protocol properly, probably by reading code. The purpose of the provides side is to provide a documented API for anyone who wants to support the same interface. Of course, that's not going to happen very often, so it feels like pointless work. :p
=== frankban is now known as frankban|afk
[22:48] morning peeps
[23:54] kelvin: reviewed - there's a dependency issue, see if my comments make sense
[23:55] wallyworld, looking now, thx
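[editor's note] For context on the provides-side encapsulation cory_fu argues for above, a hypothetical provides.py along these lines keeps the relation data protocol inside the Endpoint class rather than spread through the charm. Class, method, and field names are invented for illustration; this is not the code from stub's pastebin, just a sketch of the pattern.

```python
# Hypothetical interface-layer provides.py (sketch of the encapsulation discussed above).
from charms.reactive import Endpoint, clear_flag

class DatabaseProvides(Endpoint):
    """Server side of a made-up 'database' interface."""

    def new_client_relations(self):
        """Relations whose clients have joined but have no credentials yet."""
        return [rel for rel in self.relations
                if rel.joined_units and not rel.to_publish.get('username')]

    def publish_credentials(self, relation, username, password):
        """Keep the wire format in one place, so another charm could implement
        a compatible provides side without reading this charm's code."""
        relation.to_publish['username'] = username
        relation.to_publish['password'] = password

    def clear_changed(self):
        """Let the charm acknowledge processed changes so .changed can re-fire."""
        clear_flag(self.expand_name('changed'))
```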