[00:00] nice work
[00:00] stokachu mad props boi, you got mad squabbles
[00:04] lazyPower: it's still pretty rough, but this is the barbican client idea I'm working with -> https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/reactive/barbican_client.py
[00:05] i love this random "i don't have this so just set it without doing anything" https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/reactive/barbican_client.py#L13
[00:05] i do stuff like that in prototypes all the time and it drives matt crazy
[00:06] my motive behind that is that I want to be able to react to barbican being installed in other layers, not just when the secrets are set
[00:06] bdx - looks like a good start. I'm not very familiar with barbican so it makes it difficult to review/make suggestions, but at a glance, it looks straightforward enough.
[00:06] are the 'containers' a primitive of barbican?
[00:07] yea, containers store refs to secrets
[00:07] like, you put secrets in 'containers' and an app requests the secrets to put in its local container so it's encrypted at rest or something?
[00:08] I wish, nothing going on here with encrypting at rest ... the values sit in the app config on the filesystem after they have been retrieved from barbican
[00:09] ah
[00:09] i gave it a lot of credit then
[00:09] * lazyPower nods
[00:09] sad I know
[00:09] i don't think it's sad, i think it's indicative of our industry's stance on security
[00:09] with the exception of ^
[00:10] i can't say my solutions are much better than that :)
[00:10] but i'm interested in making them better
[00:10] using barbican in conjunction with keystone allows me to create projects in domains, and users in projects, and separate their access to different secrets in barbican
[00:11] using keystone to authenticate and authorize users in front of barbican
[00:11] so simple, so sweet
[00:12] seems like it's a tiered setup like you're expecting :) so that's good
[00:13] nice, granular, and interrelated
[00:13] yes, YES!
[00:13] and I already have it working across all of my charms :-)
[00:13] take that vault
[00:14] lol
[00:14] you tell 'em buddy
[00:18] lazyPower: https://s17.postimg.org/jtqy0d9mn/Screen_Shot_2016_11_13_at_9_40_03_AM.png
[00:24] interesting
[00:25] so I get the secret refs from the container here https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/reactive/barbican_client.py#L47
[00:25] then iterate over them, unpacking the payload of each https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/reactive/barbican_client.py#L51
[00:25] and setting them on the leader
[00:26] no doubt there are probably better ways to do this
[00:26] it's just a start ... I had to start somewhere
[00:32] nah this seems reasonable
[00:32] leader coordination seems like the right thing to do here
[00:33] the only problem with setting to the leader is that the type can only be a string
[00:33] lol
[00:33] I think this is the same for unit data
[00:38] I mean you can set other value types, but they are converted and saved as strings
[00:39] stokachu: the latest conjure-up I've gotten from the next ppa doesn't have localhost as an option for me. My LXD is all configured and works already
[00:52] stokachu: ahh, looks like the latest hasn't been built by lp yet in the next PPA
[00:54] bdx - yeah, that's fair. There's no notion of types in leader-data
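A rough sketch of the pattern bdx describes above: authenticate to barbican through keystone, pull each secret's payload out of a container, and mirror the lot into leader data. Since leader settings are string-only, the mapping is json.dumps'd on the way in. The config keys, state names, and exact python-barbicanclient accessors here are assumptions for illustration, not the linked layer's actual code.

```python
# Illustrative only -- not the actual juju-layer-barbican-client code.
# Assumes keystone credentials live in charm config and that a
# 'secrets-container' option holds the barbican container href.
import json

from charms.reactive import when, when_not, set_state
from charmhelpers.core.hookenv import config, is_leader, leader_set

from keystoneauth1.identity import v3
from keystoneauth1 import session as ks_session
from barbicanclient import client as barbican_client


def _barbican():
    """Build a barbican client authenticated through keystone."""
    cfg = config()
    auth = v3.Password(auth_url=cfg['os-auth-url'],
                       username=cfg['os-username'],
                       password=cfg['os-password'],
                       project_name=cfg['os-project-name'],
                       user_domain_name=cfg['os-user-domain-name'],
                       project_domain_name=cfg['os-project-domain-name'])
    return barbican_client.Client(session=ks_session.Session(auth=auth))


@when('barbican.installed')
@when_not('barbican.secrets.synced')
def sync_secrets_to_leader():
    """Unpack every secret in the configured container into leader data."""
    if not is_leader():
        return
    container = _barbican().containers.get(config()['secrets-container'])
    payloads = {name: secret.payload
                for name, secret in container.secrets.items()}
    # leader data is string-only, so serialize the whole mapping once
    leader_set({'barbican-secrets': json.dumps(payloads)})
    set_state('barbican.secrets.synced')
```

Any unit (leader or not) could then read the mapping back with json.loads(leader_get('barbican-secrets')), which is the flip side of the string-only restriction discussed above.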
[00:55] jcastro - are you jazzed up on lxd kubernetes now?
[00:56] i'm afraid.
[02:49] jcastro: what version of conjure-up are you running
[02:49] this was just a spell modification, no core code changed
[02:51] larrymi: ah yea the whole supported thing lol
[02:51] oops wrong person
[08:09] Good morning Juju world!
[09:27] I'm back, what did I miss?! ;)
[10:53] hey folks. Is there a way for me to customise the base image used by the lxd provider locally?
[10:54] I used to do this with the juju-template images in the local provider, it allowed me to preinstall a bunch of stuff and pre-download a load of others, which meant much faster provisioning
[10:55] afaics, juju will always use the 'ubuntu-trusty' or 'ubuntu-xenial' aliases. If I publish a custom image to my local lxd image server with the alias 'ubuntu-trusty', that should work, right?
[13:24] petevg: did you raise a bug for that quality arrow problem on jujucharms.com?
[13:25] magicaltrout: I did not. I will add it to my list of things to do this morning (not sure which repo it lives in ...)
[13:25] no problem petevg i'll do it
[13:28] done
[13:32] Cool. Thx, magicaltrout.
[14:07] lazyPower: hey!
[14:07] o/ hackedbellini
[14:07] lazyPower: using xenial worked! I'm just having one last problem now.. I'm not very familiar with docker so maybe you will know what is wrong
[14:08] https://www.irccloud.com/pastebin/gOa72syy/
[14:09] lazyPower: the same problem happened with the default config (it was 'postgresql:9.5'). I changed it to 'postgresql:latest' but the same happened
[14:09] They restructured their tags it looks like
[14:09] https://hub.docker.com/r/library/postgres/tags/
[14:10] lazyPower: hrm, so I should just remove the leading 'postgres:' and 'redmine:' from the config?
[14:11] the postgres:9.5 image should work
[14:11] change the image line to read that, postgres:9.5
[14:11] and give it a `juju resolved` to retry the hook
[14:13] lazyPower: it didn't work before with 'postgres:9.5'. It gave me the same error as above. But I'll try again
[14:14] kwmonroe - hey kev, got a sec?
[14:16] lazyPower: there's a typo on the charm config. It was written as 'postres:9.5' instead of 'postgres:9.5'. I changed 9.5 to latest and didn't even notice the typo
[14:16] now with 'postgres:latest' I think it worked
[14:16] nice
[14:16] glad it was simple :)
[14:18] lazyPower: nice! It is taking a while in the "Pulling postgresql (postgres:latest)..." but I think that is normal, right? Let's see what happens now, hopefully it will complete this time
[14:19] yeah, running docker in lxd, the image pulls were quite slow, however once they were cached things took about the normal time.
[14:32] hackedbellini - did it finally settle and turn up the services?
[14:33] stokachu: nice blog post! http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/
[14:46] is it not possible to debug hooks on a subordinate charm?
[14:52] vmorris - it certainly is possible
[15:09] lazyPower: it is still installing. I have a meeting right now, will give you more details when I come back!
[15:09] ok, sounds good
[15:21] lazyPower: just wondering how i might do it, do I need to run debug-hooks on the parent charm? for example, this cinder-ceph subordinate unit doesn't show up in the debug-hooks list
[15:21] vmorris - nope. you target it like any principal unit
[15:22] lazyPower: alright well this doesn't seem to be working as designed then
[15:23] lazyPower: I'm getting "can't find session cinder-ceph/2"
[15:23] that seems like perhaps there was a prior debug-hooks session that was disconnected?
[15:23] or you're already in a debug-hooks session on the principal unit?
[15:24] ah yes, the 2nd is true
[15:24] ^^ thanks
[15:24] yeah, the output messaging there could be improved, but that's a bit obtuse as it would have to know some things first.
[15:48] mbruzek lazyPower ryebot Cynerva great job on the release last night, seems to have cleared up cloud weather report http://cwr.vapour.ws/bundle_canonical_kubernetes/index.html cc arosales
[15:52] jcastro Have you interfaced with Sarah Novotny recently?
[15:52] jcastro I need to coordinate with them on a reasonable place to xpost our release notes for CDK. I think she is the right person to track/ask as she's the community manager for k8s, right?
[15:53] she is
[15:53] but the community meetings are closed for the holiday
[15:53] so I won't talk to her for like 2 weeks.
[15:53] Right, but i was thinking email. i'm digging in slack to find her deets
[15:54] right, sec, I'm in a community room with her
[15:54] sarahnovotny@google.com - found it
[15:58] jcastro - https://docs.google.com/document/d/1nx8v7FgzwKgF9uFu1KK-ecx6bOGxwEh3XorodDJ6s24/edit - mind proofing this before i hit send?
[15:58] i've got you on CC as well
[15:58] marcoceppi: thanks for the fyi. Did k8-core get updated? Looks like it still has the lint failures re: http://data.vapour.ws/cwr-tests/results/bundle_kubernetes_core/9cff5292b7434ce29891195a47c18131/report.html
[15:58] arosales that's my b, we have an update coming to kubes-core later today with those fixes
[15:59] lazyPower: ack, just removed one marcoism, lgtm.
[15:59] ta
[16:00] lazyPower: thanks, and thanks for the release
[16:00] mbruzek: ryebot Cynerva: lazyPower marcoceppi: thanks for the k8 release, looking forward to trying it out
[16:01] arosales - wanna try it out on LXD?!
[16:01] :D
[16:01] indeed I do, conjure-up
[16:01] ah whoops, didn't mean to leak email addresses here, i thought i was somewhere else. whoops
[16:02] * lazyPower flogs himself
[16:02] I apt-get updated the other day and it removed everything inside my /home directory
[16:02] which i blame entirely on jcastro
[16:02] wait what?
[16:02] because i had a bunch of ZFS stuff on a semi-stable Xenial because of his blog post ages ago :P
[16:03] O_o
[16:03] I dunno, but it did happen :)
[16:03] I think something inside ZFS removed itself during the update and everything else around it
[16:03] which wiped a bunch of charms i'd not pushed anywhere.... sad times
[16:04] that teaches me
[16:05] did you reinstall?
[16:05] because you can usually zfs export and reimport the pool
[16:05] my home directory wasn't inside ZFS
[16:05] but I think my pool was chillaxing somewhere around there
[16:05] i dunno, anyway, it all vanished :)
[16:06] magicaltrout - on top of VCS i also use syncthing to keep my stuff sync'd with my NAS. Might be worth investigating.
[16:06] i know this is hindsight and all that, but just a thought
[16:06] you're very wise lazyPower
[16:06] no, i catch stray good suggestions from jcastro
[16:06] do i just catch the bad ones? ;)
[16:07] little of column a, little of column b
[16:08] also, your filesystem and your backup strategy are not the same thing. :D
[16:08] it's not my fault you didn't push
[16:08] yeah, next time, i'll grep the debian update scripts for "rm -rf /home/*" ;)
[16:08] jcastro - isn't this like a replica of the mysql charm faux pas that happened 2 years ago?
[16:09] this is basically the same thing
[16:09] * lazyPower grins
[16:09] good times
[16:09] magicaltrout: oh hey are you submitting a session to the charmer summit? Last call is today
[16:10] i'm debating whether my liver will have packed in by then or not
[16:13] I don't like those guys since they rejected my last 2 attempts
[16:13] and told me to stop selling a product :P
[16:15] okay i'll pitch something, but i'm not trying too hard :P
[16:16] * magicaltrout offsets the lack of effort by pitching with his NASA email address
[16:27] lazyPower: still some problems:
[16:27] https://www.irccloud.com/pastebin/GbEMv3zD/
[16:28] done jcastro we'll see how that goes
[16:31] hackedbellini - that's without having postgres related, correct?
[16:31] hackedbellini - as in, using the redmine charm as the all-in-one-using-docker path
[16:32] it seems like compose didn't spin up postgres first, or after spinning up it immediately exited. You'll have to juju ssh into that unit and investigate the status of the workloads on the docker bridge
[16:32] s/docker bridge/docker runtime/
[16:37] lazyPower: yeah, I didn't see any "db" relation on the charm, so I didn't relate it to my postgresql unit. Was that an option?
[16:38] ok, so the problem now is totally docker? Nothing related to juju/lxc anymore?
[16:38] yep
[16:39] hackedbellini in a meeting, i'll circle back after
[16:40] stokachu: I think I'm missing something obvious, localhost is missing for me when I do `conjure-up kubernetes`, using your next ppa, lxd and everything is already configured
[17:24] jcastro: silly me, try it again please
[17:28] thanks stokachu
[17:28] was this a packaging issue or something?
[17:34] is the charm snap in a functioning state? Tried to do a charm create and I'm getting an error. Ideas?
[17:35] geekgonecrazy - can you pastebin your error for me?
[17:35] https://paste.ubuntu.com/23496377/ the output I get after doing charm create -t bash my-charm
[17:36] I didn't see anything in the documentation about folder location, but with snaps being limited on folder access I was kinda wondering if that was the issue. But the output didn't really give me any useful clues
[17:37] Tried specifying just in case that was it. Same error
[17:39] lazyPower: should I give the version in the juju ppa a try instead?
[17:39] * lazyPower looks
[17:39] ah yeah, this does look like a packaging issue with the snap
[17:40] pkg_resources.VersionConflict: (SecretStorage 2.3.1 (/snap/charm/9/lib/python2.7/site-packages), Requirement.parse('secretstorage<2.3.0')) -- and that's not something you can easily remedy
[17:40] geekgonecrazy - would you mind terribly opening a bug against charm-tools for this? https://github.com/juju/charm-tools/issues
[17:45] lazyPower: done! https://github.com/juju/charm-tools/issues/287
[17:45] geekgonecrazy thanks for that, we'll try to circle back to that
[17:45] but yeah, to answer your follow-up question, i would purge the snap and install from the ppa then
[17:46] stokachu: awesome, working now, though it expected a controller up and running already, it didn't fire off a controller ootb for me, no idea if that's on purpose or not.
[17:46] lazyPower: perfect, i'll give the PPA a go. Thanks for taking the time to take a look
[17:46] np, sorry you found that rough edge. we'll get that sanded down for ya
[17:47] All good. It happens to all of us :D
[17:49] stokachu: dude, this is awesome.
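For context on the pkg_resources.VersionConflict traceback above: something in the charm snap's dependency chain pins `secretstorage<2.3.0` (that's what the Requirement.parse shows), and setuptools refuses to load the console script when the bundled site-packages ships SecretStorage 2.3.1. A minimal, hypothetical reproduction of that check, not tied to the snap itself:

```python
import pkg_resources

try:
    # Resolves 'secretstorage<2.3.0' against whatever is installed;
    # with SecretStorage 2.3.1 on the path this raises VersionConflict,
    # which is what aborts `charm create` before it does any real work.
    pkg_resources.require('secretstorage<2.3.0')
except pkg_resources.VersionConflict as err:
    print(err)
```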
[18:00] jcastro: yay \o/
=== CyberJacob is now known as zz_CyberJacob
[18:06] beware: https://bugs.launchpad.net/juju-deployer/+bug/1643027
[18:06] Bug #1643027: juju-deployer deploys the wrong unit series from charm store
[18:06] that bad boy is causing us serious funk in the osci amulet test gate
[18:08] marcoceppi, arosales, thedac, gnuoy, tinwood ^
[18:09] thanks for filing that
[18:09] boy, tricky one.
[18:09] Is there any best practice for the start hook? Should I drop in a systemd unit during installation and trigger it to start / stop in the hooks?
[18:10] juju deployer!
[18:11] beisner: I bet if you try that with 2.0 you don't see it, correct?
[18:12] juju-deployer via amulet via bundle-tester! :-)
[18:12] as we only need deployer with 1.x juju
[18:12] ah via bundletester
[18:12] juju-deployer is still called with juju 2.0 when using bundletester
[18:12] yes, it is bundletester that is using deployer
[18:13] ack
[18:18] geekgonecrazy - are you charming with reactive/layers?
[18:20] lazyPower: i'm a huge noob to charms. Familiar with snaps but not charms. In the charm i've got it installing from a tarball. I will need to have it react to relationships (mongodb primarily) and configuration changes. If that's what you mean?
[18:20] hackedbellini - hey there, circling back to your docker issues, have a few minutes to riff? i can help you dissect the issues now.
[18:20] geekgonecrazy - not exactly. Reactive/layers is a new paradigm for charming that will really help you get moving with delivering that tarball and doing the right things at the right time, and allow you to re-use code we've already written for things like how to talk to mongodb
[18:21] geekgonecrazy https://jujucharms.com/docs/devel/developer-layer-example - take a look at this doc. it may be a bit windy and cover too many new concepts if you're brand new to charming, but it's a walkthrough by example of how to charm by layers.
[18:22] err rather, i intended to link here: https://jujucharms.com/docs/devel/developer-getting-started
=== zz_CyberJacob is now known as CyberJacob
[18:27] lazyPower: hm... yikes... another set to wade through. Any good examples you know of? Would make it a lot easier to dive in and adjust my thinking. :D
[18:28] you bet, we've got a lot of layers already. So what you're trying to do is deliver a tarball-based resource, and then wire that up to talk to mongo?
[18:28] let me see if i can find something similar. i'm pretty sure cmars has something that's very close to this already.
[18:30] lazyPower: yeah it would be great! I tried redeploying the charm with this docker-compose config: https://github.com/sameersbn/docker-redmine/blob/master/docker-compose.yml
[18:31] geekgonecrazy - while it's not mongodb, this is a mattermost (open source slack alternative) layer which exercises a good amount of layers/functions-in-juju https://github.com/cmars/juju-charm-mattermost
[18:31] lazyPower: yeah pretty much a nodejs app bundled in a tarball. Extract / npm install then wire up to mongo
[18:32] I guess that's fitting, i'm working on the Rocket.Chat charm haha
[18:32] geekgonecrazy - you'll really want to grok the resources: usage of mattermost, as you're not using an official dependency management solution there, you can declare the tarball as a resource of the charm and instantly win there.
[18:33] oh that's fantastic :D
[18:33] i didn't even know #winning-by-accident
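Roughly what that "declare the tarball as a resource, then wire it up to mongo" advice looks like in a reactive layer. The resource name, relation name, install path, and interface accessors below are placeholders chosen for illustration — a sketch of the shape, not the actual Rocket.Chat or mattermost charm code:

```python
# Sketch only: assumes metadata.yaml declares a file resource named
# 'app-tarball' and a 'database' relation using the mongodb interface.
import subprocess

from charms.reactive import when, when_not, set_state
from charmhelpers.core.hookenv import resource_get, status_set
from charmhelpers.core.host import mkdir

APP_DIR = '/srv/app'


@when_not('app.installed')
def install_from_resource():
    """Unpack the tarball attached to the charm as a resource."""
    tarball = resource_get('app-tarball')
    if not tarball:
        status_set('blocked', 'waiting for the app-tarball resource')
        return
    mkdir(APP_DIR, perms=0o755)
    subprocess.check_call(['tar', '-xzf', tarball, '-C', APP_DIR])
    subprocess.check_call(['npm', 'install'], cwd=APP_DIR)
    set_state('app.installed')


@when('app.installed', 'database.available')
def configure_mongo(mongo):
    """React to mongodb being related; record its connection details."""
    # the interface layer exposes host/port style accessors; the exact
    # names depend on the mongodb interface layer, so treat these as
    # placeholders rather than a real API
    status_set('active', 'connected to mongodb at {}:{}'.format(
        mongo.hostname(), mongo.port()))
    set_state('app.configured')
```

If the layer-nodejs base layer mentioned below is used, the manual npm step can likely be dropped, but the resource-then-relation flow stays the same.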
[18:33] hackedbellini - ok, did you update it to be a jinja template?
[18:33] lazyPower: I'm having a problem checking that charm log. The postgresql charm logs _a lot_ of information all the time, and I miss the relevant parts from redmine there. I tried running "juju debug-log" with "--include". I tried putting the unit name, the application name, the machine name/oid, but none of that works
[18:33] lazyPower: yes I did!
[18:34] hackedbellini ah yeah, the -i flag is really confusing to new users since the 2.0 change
[18:34] hackedbellini `juju debug-log -i unit-postgresql-0` is the format to include only postgres unit output
[18:34] well, postgres unit 0 output.
[18:34] lazyPower: I'll give it a look though, see if I can figure out a way. Go binary for sure a lot easier to work with than node.js
[18:35] geekgonecrazy - well, we have a layer-nodejs too
[18:35] geekgonecrazy how about this, i'll help you get started, and answer direct questions you may have. if i'm not available, do you agree to post to the mailing list so others can lend me a hand helping you? (juju@lists.ubuntu.com)
[18:35] lazyPower: hrm, so for "redmine/9" (yeah, already on the 9th try) it would be "-i unit-redmine-9"? I'm trying it as I write this to you and it isn't displaying anything
[18:35] i have a scaling problem where i'm a single point of failure.
[18:36] yep, that should be the magic incantation
[18:36] hackedbellini ^
[18:37] lazyPower: Sounds awesome. already subbed :) i'll follow up after doing a bit more research
[18:37] sounds good. don't hesitate to poke if you get mired down in the docs. I helped write most of our dev documentation, so it's likely scattered. bugs / feedback welcome always
[18:38] stokachu - welp, i have results. none of my worker nodes have registered as ready :| but, it did complete and looks like it's still converging in the background
[18:38] it crushed my i5 deploying this whole stack in one go :D took ~ 30 minutes from start to finish.
[18:40] hackedbellini - one other thing is either the charm isn't initialized... or maybe you should pass --replay so it forces debug-log to replay the messages from the beginning of the unit's creation
[18:40] lazyPower: omg, it's friday and I'm tired... The service was 6 but I read it as 9 lol
[18:40] haha <3 i do these things all the time. don't feel bad
[18:41] hackedbellini - the best is when you're trying to multi-task, run a deployment, then switch terminals and run bundletester and wipe out the deployment you just did. :|
[18:41] #mistakes-i-realize-i-will-repeat-again-and-again
[18:42] lazyPower: hahaha yeah. So, those are the last lines of the log:
[18:42] https://www.irccloud.com/pastebin/KjgnG4S6/
[18:42] lazyPower: hahaha did that some times too =P
[18:43] it is "stuck" on that for more than an hour now
[18:44] ok, if i remember correctly this is the end of the sidewalk where that prior docker work was done. As of last night stokachu has some profile edits that got it further along in the context of kubernetes. I think there are some modules we need to unblock on the container yet to make this 100% functional
[18:44] hackedbellini - so what's the status of the workloads in docker? did it pull the images and attempt container spinup?
[18:48] lazyPower: strange that if I run 'debug-log' with '--replay' the latest lines are pretty different:
[18:48] https://www.irccloud.com/pastebin/E05LE9wT/
[18:48] oo fantastic actually
[18:48] juju ssh into that unit and run `docker images`
[18:48] do you see the nornagon postgres image in there?
[18:49] lazyPower: no, no images
[18:49] or rather: postgres:latest
[18:49] ok, so it seems like pulling the image is either blocking, or is hosed and not signalling back that the hook has failed
[18:50] and that's not good :| i haven't encountered that before so there's likely no logic in the charm to handle the scenario
[18:50] lazyPower: yeah that is strange. My previous deploy did finish pulling the container
[18:50] something happened in this one specifically
[18:51] is there a way to make juju rerun the hook, considering that it "failed"?
[18:51] hackedbellini - try running the docker pull postgres:latest on that unit
[18:51] yep
[18:51] juju resolved redmine/6
[18:51] that by default will attempt a retry of the hook, if you want to skip it's juju resolved --no-retry redmine/6
[18:52] s/try running/try manually running/
[18:52] that way we can capture the behavior and determine what the root cause is
[18:55] lazyPower: ok, waiting for docker pull to finish. Let's see if that works :)
[18:56] Should I run "docker pull" with the ubuntu user itself? Or with root?
[18:56] * lazyPower crosses fingers "no whammies no whammies no whammies, c'mon big money"
[18:56] Should work with either/or
[18:59] lazyPower: strange that it seems to be "stuck". This is my output:
[18:59] https://www.irccloud.com/pastebin/looQTz6T/
[18:59] but even if I ctrl+c it and do it again, I get exactly the same output
[18:59] hackedbellini - any output from journalctl -u docker? hoping there's something in the docker runtime logs that will help indicate what the problem might be
[19:00] stokachu - found the culprit, looks like the workers can't open /proc/sys/vm/overcommit_memory, it's causing the workers to panic and abort the registration process
[19:03] lazyPower: this is the tail of the log:
[19:03] https://www.irccloud.com/pastebin/Z7jVEdwC/
[19:03] so my "docker pull" refers to those last 2 lines
[19:04] forget it, it seems that the pull finished
[19:04] docker images now has the postgres image
[19:04] I'll try to "juju resolved" now
[19:04] ok, with the image cached it should skip pulling during docker-compose up
[19:05] hrm, didn't even need it. The debug-log just advanced to "INFO unit.redmine/6.install Pulling redmine (redmine:latest)..." by itself
[19:05] omg now i know why it was fighting
[19:05] two threads of the engine trying to pull the same image
[19:06] that explains the slowness
[19:06] whoops
[19:06] i thought the hook was already in failure mode
[19:07] lazyPower: it seems to have worked. Let me check if it is really running now
[19:09] lazyPower: so, the charm now says that "Redmine is running on port 8000", but there's nothing on that port
[19:09] even netstat doesn't show that port
[19:10] docker ps -a
[19:10] do you see the container running, and that it's bound to port 8000?
[19:11] lazyPower: hrm, both postgres/redmine have status "Exited (1) 2 minutes ago" (I don't understand anything of docker, sorry for not being able to debug it better =P)
[19:11] welp, that's why nothing is listed as being bound to a port :)
[19:11] you can fish up the container logs to see why
[19:12] docker logs
[19:12] hackedbellini by the end of this exercise you're going to be an expert docker charmer, you're literally touching every corner of the stack
[19:12] lazyPower: the log has some lines with "error: exec: "initdb": executable file not found in $PATH"
[19:13] lazyPower: hahaha nice! Maybe I can help contribute back to you in the near future
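Given the diagnosis above — two compose workers racing to pull the same image while the hook never actually fails — one defensive pattern a charm like this could use is to pre-pull each image explicitly before invoking compose, so `docker-compose up` only ever hits the local cache. A hypothetical sketch, not the redmine charm's real logic; the image list and compose directory are made up:

```python
# Sketch of the pre-pull idea discussed above.
import subprocess

from charmhelpers.core.hookenv import status_set

COMPOSE_DIR = '/opt/redmine'
IMAGES = ['postgres:latest', 'redmine:latest']


def compose_up():
    """Pull images one at a time, then bring the compose stack up."""
    for image in IMAGES:
        status_set('maintenance', 'pulling {}'.format(image))
        # a single explicit pull avoids two compose workers racing to
        # download the same image, which is what stalled the hook above
        subprocess.check_call(['docker', 'pull', image])
    status_set('maintenance', 'starting compose stack')
    subprocess.check_call(['docker-compose', 'up', '-d'], cwd=COMPOSE_DIR)
```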
[19:14] :| not cool postgres image, not cool
[19:14] i wonder if something changed there
[19:14] lazyPower: strange that I'm using postgresql:latest. Also, the redmine:latest gave me that: "error: exec: "rake": executable file not found in $PATH"
[19:14] so, the problem is with the images?
[19:15] seems like something has changed in the images, yeah
[19:15] so, question
[19:15] have you tried this using just compose without the charm on your host to verify the docker components are good to go?
[19:15] i'm in another meeting, but i can give it a shot when i'm out
[19:17] I didn't really, but I can try. Just don't think I'm gonna make it today :(
[19:17] hackedbellini - well you've made a lot of good progress and you're literally at the last 5%
[19:17] what's your TZ? are you EU based?
[19:19] lazyPower: I'm in Brazil hahaha, it is 17h19 right now
[19:19] note: maybe it is related to this (https://github.com/docker/compose/issues/1639), but maybe not
[19:19] ah yeah, late in the work day/week
[19:33] justicefries: we're in the hangout
[19:33] omw
[19:34] ugh I can't find it
[19:34] did you send the invite?
[19:34] i can't find the hangout
[19:34] https://hangouts.google.com/hangouts/_/canonical.com/kubernetes-with
[19:34] justicefries ^
[19:35] hmm it has me in "requesting to join"
[19:37] lazyPower: hmm i thought that was fixed with my profile edit
[19:37] lazyPower: i'll look again
[19:37] stokachu - i'm going to file some bugs and do the proper process
[19:37] where would you like them to be filed? against the bundle and tag them with conjure?
[19:37] or would you prefer i find the spell and file them there or?
[19:38] lazyPower: yea let's file them on the spell https://github.com/conjure-up/spells/issues
[19:38] ack, will do
[19:38] i thought i fixed that but maybe not
[19:42] github.com/kz8s/tack
[19:55] justicefries: http://github.com/juju/python-libjuju
[20:12] justicefries http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/ sending here for persistence
[20:12] justicefries: http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/
[20:12] jcastro ninja'd
[20:22] hey guys... I have my horizon up, but in looking into the neutron settings for the bridged interface, I'm missing networks that should be there. Is there any means of looking at the YAML files used by conjure-up to get a better idea of how this process is handled?
[20:23] keep in mind that i've performed conjure-up openstack about 10 times
[21:32] hmm.
[21:33] one nice-to-have thing: i have a superuser account, i'd like to be able to grant models to myself without having to do the logout/login as the model's user/logout/login as myself dance.
[21:33] i'm able to create the model under that username.
[21:33] or even have machine users/groups that I can put a model into. that'd sort of force that as being necessary.
[21:34] but it seems weird I can create a model under someone else's user, grant access to anybody I want on it
[21:34] but not grant it to myself
[22:04] justicefries: not sure I follow
[22:04] so if you do:
[22:04] `juju add-model foo-bar --credential bam --owner admin` while logged in as justicefries
[22:05] where justicefries is a superuser
[22:05] doing a `juju grant justicefries admin admin/foo-bar` fails, you have to log in as the model owner.
[22:05] yup
[22:05] wat
[22:05] let me try again to verify
[22:05] * marcoceppi tries
[22:06] yup, can't do it. and just confirmed i'm a superuser
[22:06] interesting. Not sure if this is a bug or by design, rick_h ^^ ?
[22:07] justicefries: does the inverse work okay for your use case, you're the admin, then you make the "admin" user an admin of that model?
[22:07] let me try. so make the model under my user, then grant it to admin from admin
[22:07] I guess for auditing, you'd just be the owner of all the models though
[22:07] justicefries: yeah, create the model, then as the owner of the model, give the other user admin access to it
[22:07] yeah I'd like the primary ones to all be under an admin/production user.
[22:08] oh, as the owner
[22:08] yeah, I can do it as the owner.
[22:08] sure
[22:08] i think the weird thing is I can create a model for someone else, but then not get access to it.
[22:08] interesting, I'll file a bug, see what shakes out from it
[22:08] yeah, superuser seems to not inherit admin of models it controls
[22:08] should be consistent between the two either way
[22:09] if it's intended, I'd expect add-model --owner shouldn't work for someone else
[22:09] well, if that user does not have "addmodel" or "superuser" it won't
[22:09] right
[22:09] I'll file a bug and link you up in a min
[22:10] oh, I see, because superuser doesn't inherit admin over models, I can't see it, and so grant 404s.
[22:10] justicefries: is this with 2.0.2 or 2.1-beta1?
[22:10] 2.0.2
[22:10] i can try on 2.1-beta1
[22:10] yeah, I wonder if this is just an omission of the permission level of superuser
[22:10] where if it doesn't own a model, it really can't see it, despite being the creator (and superuser)
[22:14] justicefries: https://bugs.launchpad.net/juju/+bug/1643076
[22:14] Bug #1643076: superuser does not have admin over models it created but does not own
[22:17] awesome
[22:58] lazyPower: so I built the charm.. I guess you guys would probably call it the old style, because it's mainly using bash. Having issues locally getting the application to die so I can deploy it again. juju remove-application appname doesn't work, and neither does juju remove-unit appname/0
[22:59] last time I had to destroy the controller and manually purge the lxd containers to get it to finally remove
[22:59] which sucks because I have to wait on it to download the mongodb charm again :D
[23:45] man... it just won't remove any. I'm sure i've got to be missing something