[00:00] <bdx> nice work
[00:00] <lazyPower> stokachu mad props boi, you got mad squabbles
[00:04] <bdx> lazyPower: its still pretty rough, but this is the barbican client idea I'm working with -> https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/reactive/barbican_client.py
[00:05] <lazyPower> i love this random "i dont have this so just set it without doing anything" https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/reactive/barbican_client.py#L13
[00:05] <lazyPower> i do stuff like that in prototypes all the time and it drives matt crazy
[00:06] <bdx> my motive behind that, is that I want to be able to react to barbican being installed in other layers, not just when the secrets are set
[00:06] <lazyPower> bdx - looks like a good start. I'm not very familiar with barbican so it makes it difficult to review/make suggestions, but at a glance, it looks straightforward enough.
[00:06] <lazyPower> are the 'containers' a primitive of barbican?
[00:07] <bdx> yea, containers store refs to secrets
[00:07] <lazyPower> like, you put secrets in 'containers' and an app requests the secrets to put in its local container so its encrypted at rest or something?
[00:08] <bdx> I wish, nothing going on here with encrypting at rest ... the values sit in the app config on the filesystem after they have been retrieved from barbican
[00:09] <lazyPower> ah
[00:09] <lazyPower> i gave it a lot of credit then
[00:09]  * lazyPower nods
[00:09] <bdx> sad I know
[00:09] <lazyPower> i dont think its sad, i think its indicative of our industry's stance on security
[00:09] <bdx> with the exception of ^
[00:10] <lazyPower> i cant say my solutions are much better than that :)
[00:10] <lazyPower> but i'm interested in making them better
[00:10] <bdx> using barbican in conjunction with keystone, allows me to create projects in domains, and users in projects, and separate their access to different secrets in barbican
[00:11] <bdx> using keystone to authenticate and authorize users in front of barbican
[00:11] <bdx> so simple, so sweet
[00:12] <lazyPower> seems like its a tiered setup like you're expecting :) so thats good
[00:13] <lazyPower> nice, granular, and interrelated
[00:13] <bdx> yes, YES!
[00:13] <bdx> and I already have it working across all of my charms :-)
[00:13] <bdx> take that vault
[00:14] <lazyPower> lol
[00:14] <lazyPower> you tell em buddy
[00:18] <bdx> lazyPower: https://s17.postimg.org/jtqy0d9mn/Screen_Shot_2016_11_13_at_9_40_03_AM.png
[00:24] <lazyPower> interesting
[00:25] <bdx> so I get the secret refs from the container here https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/reactive/barbican_client.py#L47
[00:25] <bdx> then iterate over them, unpacking the payload of each https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/reactive/barbican_client.py#L51
[00:25] <bdx> and setting to the leader
[00:26] <bdx> no doubt there are probably better ways to do this
[00:26] <bdx> its just a start ... I had to start somewhere
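The flow bdx describes above (read the secret refs from a Barbican container, iterate over them unpacking each payload, then publish the values to the leader) can be sketched roughly as follows. The `FakeSecret`/`FakeContainer` classes here are stand-ins for barbicanclient objects, not the real client API:

```python
# Minimal stand-ins for barbicanclient objects; the real client fetches
# container and secret objects over the Barbican REST API.
class FakeSecret:
    def __init__(self, name, payload):
        self.name = name
        self.payload = payload


class FakeContainer:
    def __init__(self, secrets):
        # Maps secret name -> secret object (stands in for secret refs).
        self.secrets = secrets


def unpack_container(container):
    """Iterate the container's secrets and collect name -> payload."""
    return {name: secret.payload for name, secret in container.secrets.items()}


container = FakeContainer({
    "db_password": FakeSecret("db_password", "s3cr3t"),
    "api_key": FakeSecret("api_key", "abc123"),
})

secrets = unpack_container(container)
# In the charm these values would then be published via leader-set.
assert secrets == {"db_password": "s3cr3t", "api_key": "abc123"}
```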
[00:32] <lazyPower> nah this seems reasonable
[00:32] <lazyPower> leader coordination seems like the right thing to do here
[00:33] <bdx> the only problem with setting to the leader is the type can only be a string
[00:33] <bdx> lol
[00:33] <bdx> I think this is the same for unit data
[00:38] <bdx> I mean you can set other value types, but they are converted and saved as strings
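Since leader data (like unit relation data) persists everything as strings, a common workaround is to JSON-encode structured values before setting them. A minimal sketch, using a dict-backed stub in place of Juju's real leader-set/leader-get machinery (the stub is an assumption for illustration, not charmhelpers' API):

```python
import json

# Stand-in for Juju's leader-set/leader-get key-value store,
# which persists every value as a string.
_leader_data = {}


def leader_set(**kwargs):
    # Juju coerces all values to strings on write.
    for key, value in kwargs.items():
        _leader_data[key] = str(value)


def leader_get(key):
    return _leader_data.get(key)


# Setting a dict directly loses its type: it comes back as a repr string.
leader_set(ports={"http": 80})
assert leader_get("ports") == "{'http': 80}"

# Workaround: JSON-encode before setting, decode after getting.
leader_set(ports_json=json.dumps({"http": 80}))
ports = json.loads(leader_get("ports_json"))
assert ports == {"http": 80}
```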
[00:39] <jcastro> stokachu: the latest conjure-up I've gotten from the next ppa doesn't have localhost as an option for me. My LXD is all configured and works already
[00:52] <jcastro> stokachu: ahh, looks like the latest hasn't been built by lp yet in the next PPA
[00:54] <lazyPower> bdx - yeah, thats fair. There's no notion of types in leader-data
[00:55] <lazyPower> jcastro - are you jazzed up on lxd kubernetes now?
[00:56] <jrwren> i'm afraid.
[02:49] <stokachu> jcastro: what version of conjure-up are you running
[02:49] <stokachu> this was just a spell modification no core code changed
[02:51] <stokachu> larrymi: ah yea the whole supported thing lol
[02:51] <stokachu> oops wrong person
[08:09] <kjackal> Good morning Juju world!
[09:27] <magicaltrout> I'm back, what did I miss?! ;)
[10:53] <bloodearnest> hey folks. Is there a way for me to customise the base image used by the lxd provider locally?
[10:54] <bloodearnest> I used to do this with the juju-template images the local provider, it allowed me to preinstall a bunch of stuff, and pre-download a load of others, which meant much faster provisioning
[10:55] <bloodearnest> afaics, juju will always use the 'ubuntu-trusty' or 'ubuntu-xenial' aliases. If I publish a custom image to my local lxd image server with the alias 'ubuntu-trusty', that should work, right?
[13:24] <magicaltrout> petevg: did you raise a bug for that quality arrow problem on jujucharms.com?
[13:25] <petevg> magicaltrout: I did not. I will add it to my list of things to do this morning (not sure which repo it lives in ...)
[13:25] <magicaltrout> no problem petevg i'll do it
[13:28] <magicaltrout> done
[13:32] <petevg> Cool. Thx, magicaltrout.
[14:07] <hackedbellini> lazyPower: hey!
[14:07] <lazyPower> o/ hackedbellini
[14:07] <hackedbellini> lazyPower: using xenial worked! I'm just having one last problem now.. I'm not very familiar with docker so maybe you will know what is wrong
[14:08] <hackedbellini> https://www.irccloud.com/pastebin/gOa72syy/
[14:09] <hackedbellini> lazyPower: the same problem happened with the default config (it was 'postgresql:9.5'). I changed it to 'postgresql:latest' but the same happened
[14:09] <lazyPower> They restructured their tags it looks like
[14:09] <lazyPower> https://hub.docker.com/r/library/postgres/tags/
[14:10] <hackedbellini> lazyPower: hrm, so I should just remove the leading 'postgres:' and 'redmine:' from the config?
[14:11] <lazyPower> the postgres:9.5 image should work
[14:11] <lazyPower> change the image line to read that, postgres:9.5
[14:11] <lazyPower> and give it a `juju resolved` to retry the hook
[14:13] <hackedbellini> lazyPower: it didn't work before with 'postgres:9.5'. It gave me the same error as above. But I'll try again
[14:14] <lazyPower> kwmonroe - hey kev, got a sec?
[14:16] <hackedbellini> lazyPower: there's a typo on the charm config. It was written as 'postres:9.5' instead of 'postgres:9.5'. I changed 9.5 to latest and didn't even notice the typo
[14:16] <hackedbellini> now with 'postgres:latest' I think it worked
[14:16] <lazyPower> nice
[14:16] <lazyPower> glad it was simple :)
[14:18] <hackedbellini> lazyPower: nice! It is taking a while in the "Pulling postgresql (postgres:latest)..." but I think that is normal, right? Lets see what happens now, hopefully it will complete now
[14:19] <lazyPower> yeah, running docker in lxd, the image pulls were quite slow, however once they were cached things took about the normal time.
[14:32] <lazyPower> hackedbellini - did it finally settle and turn up the services?
[14:33] <marcoceppi> stokachu: nice blog post! http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/
[14:46] <vmorris> is it not possible to debug hooks on a subordinate charm?
[14:52] <lazyPower> vmorris - it certainly is possible
[15:09] <hackedbellini> lazyPower: it is still installing. I have a meeting right now, will give you more details when I come back!
[15:09] <lazyPower> ok, sounds good
[15:21] <vmorris> lazyPower: just wondering how i might do it, do I need to run debug-hooks on the parent charm? for example, this cinder-ceph subordinate unit doesn't show up in the debug-hooks list
[15:21] <lazyPower> vmorris  - nope. you target it like any principal unit
[15:22] <vmorris> lazyPower: alright well this doesn't seem to be working as designed then
[15:23] <vmorris> lazyPower: I'm getting "can't find session cinder-ceph/2"
[15:23] <lazyPower> that seems like perhaps there was a prior debug-hooks session that was disconnected?
[15:23] <lazyPower> or you're already in a debug-hooks session on the principal unit?
[15:24] <vmorris> ah yes, the 2nd is true
[15:24] <vmorris> ^^ thanks
[15:24] <lazyPower> yeah, the output messaging there could be improved, but thats a bit obtuse as it would have to know some things first.
[15:48] <marcoceppi> mbruzek lazyPower ryebot Cynerva great job on the release last night, seems to have cleared up cloud weather report http://cwr.vapour.ws/bundle_canonical_kubernetes/index.html cc arosales
[15:52] <lazyPower> jcastro Have you interfaced with Sarah Novotny recently?
[15:52] <lazyPower> jcastro I need to coordinate with them on a reasonable place to xpost our release notes for CDK. I think she is the right person to track/ask as she's the community manager for k8s right?
[15:53] <jcastro> she is
[15:53] <jcastro> but the community meetings are closed for the holiday
[15:53] <jcastro> so I won't talk to her for like 2 weeks.
[15:53] <lazyPower> Right, but i was thinking email. i'm digging in slack to find her deets
[15:54] <jcastro> right, sec, I'm in a community room with her
[15:54] <lazyPower> sarahnovotny@google.com - found it
[15:58] <lazyPower> jcastro - https://docs.google.com/document/d/1nx8v7FgzwKgF9uFu1KK-ecx6bOGxwEh3XorodDJ6s24/edit  - mind proofing this before i hit send?
[15:58] <lazyPower> i've got you on CC as well
[15:58] <arosales> marcoceppi: thanks for the fyi. Did k8-core get updated?  Looks like it still has the lint failures re: http://data.vapour.ws/cwr-tests/results/bundle_kubernetes_core/9cff5292b7434ce29891195a47c18131/report.html
[15:58] <lazyPower> arosales thats my b, we have an update coming to kubes-core later today with those fixes
[15:59] <jcastro> lazyPower: ack, just removed one marcoism, lgtm.
[15:59] <lazyPower> ta
[16:00] <arosales> lazyPower: thanks, and thanks for the release
[16:00] <arosales> mbruzek: ryebot Cynerva: lazyPower marcoceppi: thanks for the k8 release, looking forward to trying it out
[16:01] <lazyPower> arosales - wanna try it out on LXD?!
[16:01] <lazyPower> :D
[16:01] <arosales> indeed I do, conjure-up
[16:01] <lazyPower> ah whoops, didnt mean to leak email addresses here, i thought i was somewhere else. whoops
[16:02]  * lazyPower flogs himself
[16:02] <magicaltrout> I apt get updated the other day and it removed everything inside my /home directory
[16:02] <magicaltrout> which i blame entirely on jcastro
[16:02] <lazyPower> wait what?
[16:02] <magicaltrout> because i had a bunch of ZFS stuff on a semi stable Xenial because of his blog post ages ago :P
[16:03] <lazyPower> O_o
[16:03] <magicaltrout> I dunno, but it did happen :)
[16:03] <magicaltrout> I think something inside ZFS removed itself during the update and everything else around it
[16:03] <magicaltrout> which wiped a bunch of charms i'd not pushed anywhere.... sad times
[16:04] <magicaltrout> that teaches me
[16:05] <jcastro> did you reinstall?
[16:05] <jcastro> because you can usually zfs export and reimport the pool
[16:05] <magicaltrout> my home directory wasn't inside ZFS
[16:05] <magicaltrout> but I think my pool was chillaxing somewhere around there
[16:05] <magicaltrout> i dunno, anyway, it all vanished :)
[16:06] <lazyPower> magicaltrout - on top of VCS i also use syncthing to keep my stuff sync'd with my NAS.  Might be worth investigating.
[16:06] <lazyPower> i know this is hindsight and all that, but just a thought
[16:06] <magicaltrout> you're very wise lazyPower
[16:06] <lazyPower> no, i catch stray good suggestions from jcastro
[16:06] <magicaltrout> do i just catch the bad ones? ;)
[16:07] <lazyPower> little of column a, little of column b
[16:08] <jcastro> also, your filesystem and your backup strategy are not the same thing. :D
[16:08] <jcastro> it's not my fault you didn't push
[16:08] <magicaltrout> yeah, next time, i'll grep the debian update scripts for "rm -rf /home/*" ;)
[16:08] <lazyPower> jcastro - isn't this like a replica of the mysql charm faux pas that happened 2 years ago?
[16:09] <jcastro> this is basically the same thing
[16:09]  * lazyPower grins
[16:09] <lazyPower> good times
[16:09] <jcastro> magicaltrout: oh hey are you submitting a session to the charmer summit? Last call is today
[16:10] <magicaltrout> i'm debating whether my liver will have packed in by then or not
[16:13] <magicaltrout> I don't like those guys since they rejected my last 2 attempts
[16:13] <magicaltrout> and told me to stop selling a product :P
[16:15] <magicaltrout> okay i'll pitch something, but i'm not trying too hard :P
[16:16]  * magicaltrout offsets the lack of effort by pitching with his NASA email address
[16:27] <hackedbellini> lazyPower: still some problems:
[16:27] <hackedbellini> https://www.irccloud.com/pastebin/GbEMv3zD/
[16:28] <magicaltrout> done jcastro we'll see how that goes
[16:31] <lazyPower> hackedbellini - that's without having postgres related, correct?
[16:31] <lazyPower> hackedbellini - as in, using the redmine charm as the all-in-one-using-docker path
[16:32] <lazyPower> it seems like compose didn't spin up postgres first, or after spinning up it immediately exited. You'll have to juju ssh into that unit and investigate the status of the workloads on the docker bridge
[16:32] <lazyPower> s/docker bridge/docker runtime/
[16:37] <hackedbellini> lazyPower: yeah, I didn't see any "db" relation on the charm, so I didn't relate it to my postgresql unit. Was that an option?
[16:38] <hackedbellini> ok, so the problem now is totally docker? Nothing related to juju/lxc anymore?
[16:38] <lazyPower> yep
[16:39] <lazyPower> hackedbellini  in a meeting, i'll circle back after
[16:40] <jcastro> stokachu: I think I'm missing something obvious, localhost is missing for me when I do `conjure-up kubernetes`, using your next ppa, lxd and everything is already configured
[17:24] <stokachu> jcastro: silly me, try it again please
[17:28] <lazyPower> thanks stokachu
[17:28] <lazyPower> was this a packaging issue or something?
[17:34] <geekgonecrazy> is the charm snap in a functioning state?  Tried to do a charm create and getting an error.  Ideas?
[17:35] <lazyPower> geekgonecrazy - can you pastebin your error for me?
[17:35] <geekgonecrazy> https://paste.ubuntu.com/23496377/ the output I get after doing charm create -t bash my-charm
[17:36] <geekgonecrazy> I didn't see anything in the documentation about folder location, but with snaps being limited on folders I was kinda wondering if that was the issue.  But the output didn't really give me any useful clues
[17:37] <geekgonecrazy> Tried specifying just in case that was it.  Same error
[17:39] <geekgonecrazy> lazyPower: should I give the version in the juju ppa a try instead?
[17:39]  * lazyPower looks
[17:39] <lazyPower> ah yeah, this does look like a packaging issue with the snap
[17:40] <lazyPower> pkg_resources.VersionConflict: (SecretStorage 2.3.1 (/snap/charm/9/lib/python2.7/site-packages), Requirement.parse('secretstorage<2.3.0')) -- and thats not something you can easily remedy
[17:40] <lazyPower> geekgonecrazy - would you mind terribly opening a bug against charm-tools for this? https://github.com/juju/charm-tools/issues
[17:45] <geekgonecrazy> lazyPower: done! https://github.com/juju/charm-tools/issues/287
[17:45] <lazyPower> geekgonecrazy thanks for that, we'll try to circle back to that
[17:45] <lazyPower> but yeah, to answer your follow up question, i would purge the snap and install from the ppa then
[17:46] <jcastro> stokachu: awesome, working now, though it expected a controller up and running already, it didn't fire off a controller ootb for me, no idea if that's on purpose or not.
[17:46] <geekgonecrazy> lazyPower: perfect, i'll give the PPA a go.  Thanks for taking the time to take a look
[17:46] <lazyPower> np, sorry you found that rough edge. we'll get that sanded down for ya
[17:47] <geekgonecrazy> All good.  It happens to all of us :D
[17:49] <jcastro> stokachu: dude, this is awesome.
[18:00] <stokachu> jcastro: yay \o/
[18:06] <beisner> beware: https://bugs.launchpad.net/juju-deployer/+bug/1643027
[18:06] <mup> Bug #1643027: juju-deployer deploys the wrong unit series from charm store <uosci> <juju-deployer:New> <https://launchpad.net/bugs/1643027>
[18:06] <beisner> that bad boy is causing us serious funk in the osci amulet test gate
[18:08] <beisner> marcoceppi, arosales, thedac, gnuoy, tinwood ^
[18:09] <thedac> thanks for filing that
[18:09] <tinwood> boy, tricky one.
[18:09] <geekgonecrazy> Is there any best practice for the start hook?  Should I drop in a systemd unit during installation and trigger it to start / stop in the hooks?
[18:10] <arosales> juju deployer!
[18:11] <arosales> beisner: I bet if you try that with 2.0 you don't see it correct?
[18:12] <beisner> juju-deployer via amulet via bundle-tester!  :-)
[18:12] <arosales> as we only need deployer with 1.x juju
[18:12] <arosales> ah via bundletester
[18:12] <beisner> juju-deployer is still called with juju 2.0 when using bundletester
[18:12] <arosales> yes, it is bundletester that is using deployer
[18:13] <beisner> ack
[18:18] <lazyPower> geekgonecrazy - are you charming with reactive/layers?
[18:20] <geekgonecrazy> lazyPower: i'm a huge noob to charms.  Familiar with snaps but not charms.  In the charm i've got it installing from a tar ball.  I will need to have it react to relationships (mongodb primarily) and configuration changes.  If that's what you mean?
[18:20] <lazyPower> hackedbellini - hey there, circling back to your docker issues, have a few minutes to rif? i can help you dissect the issues now.
[18:20] <lazyPower> geekgonecrazy - not exactly. Reactive/layers is a new paradigm for charming that will really help you get moving with delivering that tarball and doing the right things at the right time, and allow you to re-use code we've already written for things like how to talk to mongodb
[18:21] <lazyPower> geekgonecrazy https://jujucharms.com/docs/devel/developer-layer-example - take a look at this doc. it may be a bit windy and cover too many new concepts if you're brand new to charming, but its a walkthrough by example of how to charm by layers.
[18:22] <lazyPower> err rather, i intended to link here: https://jujucharms.com/docs/devel/developer-getting-started
[18:27] <geekgonecrazy> lazyPower: hm... yikes... another set to wade through.  Any good examples you know of?  Would make it a lot easier to dive in and adjust my thinking. :D
[18:28] <lazyPower> you bet, we've got a lot of layers already.  So what you're trying to do is deliver a tarball based resource, and then wire that up to talk to mongo?
[18:28] <lazyPower> let me see if i can find something similar. i'm pretty sure cmars has something thats very close to this already.
[18:30] <hackedbellini> lazyPower: yeah it would be great! I tried redeploying the charm with this docker-compose config: https://github.com/sameersbn/docker-redmine/blob/master/docker-compose.yml
[18:31] <lazyPower> geekgonecrazy - while its not mongodb, this is a mattermost (open source slack alternative) layer which exercises a good amount of layers/functions-in-juju   https://github.com/cmars/juju-charm-mattermost
[18:31] <geekgonecrazy> lazyPower: yeah pretty much a nodejs app bundled in a tar ball.  Extract / npm install then wire up to mongo
[18:32] <geekgonecrazy> I guess that's fitting, i'm working on the Rocket.Chat charm haha
[18:32] <lazyPower> geekgonecrazy - you'll really want to grok the resources: usage of mattermost; as you're not using an official dependency management solution there, you can declare the tarball as a resource of the charm and instantly win there.
[18:33] <lazyPower> oh thats fantastic :D
[18:33] <lazyPower> i didnt even know #winning-by-accident
[18:33] <lazyPower> hackedbellini - ok, did you update it to be a jinja template?
[18:33] <hackedbellini> lazyPower: I'm having a problem checking that charm log. The postgresql charm logs _a lot_ of information all the time, and I miss the relevant parts from redmine there. I tried running "juju debug-log" with "--include". I tried putting the unit name, the application name, the machine name/oid, but none of that works
[18:33] <hackedbellini> lazyPower: yes I did!
[18:34] <lazyPower> hackedbellini ah yeah, the -i flag is really confusing to new users since the 2.0 change
[18:34] <lazyPower> hackedbellini `juju debug-log -i unit-postgresql-0`  is the format to include only postgres unit output
[18:34] <lazyPower> well, postgres unit 0 output.
[18:34] <geekgonecrazy> lazyPower: I'll give it a look though, see if I can figure out a way.  Go binary for sure a lot easier to work with than node.js
[18:35] <lazyPower> geekgonecrazy - well, we have a layer-nodejs too
[18:35] <lazyPower> geekgonecrazy how about this, i'll help you get started, and answer direct questions you may have. if i'm not availble, do you agree to post to the mailing list so others can lend me a hand helping you? (juju@lists.ubuntu.com)
[18:35] <hackedbellini> lazyPower: hrm, so for "redmine/9" (yeah, already on the 9th try) it would be "-i unit-redmine-9"? I'm trying it as I write this to you and it isn't displaying anything
[18:35] <lazyPower> i have a scaling problem where i'm a single point of failure.
[18:36] <lazyPower> yep, that should be the magic incantation
[18:36] <lazyPower> hackedbellini ^
[18:37] <geekgonecrazy> lazyPower: Sounds awesome.  already subbed :) i'll follow up after doing a bit more research
[18:37] <lazyPower> sounds good. dont hesitate to poke if you get mired down in the docs. I helped write most of our dev documentation, so it's likely scattered. bugs / feedback welcome always
[18:38] <lazyPower> stokachu - welp, i have results. none of my worker nodes have registered as ready :|   but, it did complete and looks like its still converging in the background
[18:38] <lazyPower> it crushed my i5 deploying this whole stack in one go :D took ~30 minutes from start to finish.
[18:40] <lazyPower> hackedbellini - one other thing is either the charm isn't initialized... or maybe you should pass --replay so it forces debug-log to replay the messages from the beginning of the unit's creation
[18:40] <hackedbellini> lazyPower: omg, its friday I'm tired... The service was 6 but I read it as 9 lol
[18:40] <lazyPower> haha <3 i do these things all the time. dont feel bad
[18:41] <lazyPower> hackedbellini - the best is when you're trying to multi-task, run a deployment, then switch terminals and run bundletester and wipe out the deployment you just did. :|
[18:41] <lazyPower> #mistakes-i-realize-i-will-repeat-again-and-again
[18:42] <hackedbellini> lazyPower: hahaha yeah. So, those are the last lines of the log:
[18:42] <hackedbellini> https://www.irccloud.com/pastebin/KjgnG4S6/
[18:42] <hackedbellini> lazyPower: hhaha did that some times too =P
[18:43] <hackedbellini> it is "stuck" on that for more than a hour now
[18:44] <lazyPower> ok, if i remember correctly this is the end of the sidewalk where that prior docker work was done. As of last night stokachu has some profile edits that got it further along in the context of kubernetes. I think there's some modules we need to unblock on the container yet to make this 100% functional
[18:44] <lazyPower> hackedbellini - so whats the status of the workloads in docker? did it pull the images and attempt container spinup?
[18:48] <hackedbellini> lazyPower: strange that if I run 'debug-log' with '--replay' the latest lines are pretty different:
[18:48] <hackedbellini> https://www.irccloud.com/pastebin/E05LE9wT/
[18:48] <lazyPower> oo fantastic actually
[18:48] <lazyPower> juju ssh into that unit and run `docker images`
[18:48] <lazyPower> do you see the nornagon postgres image in there?
[18:49] <hackedbellini> lazyPower: no, no images
[18:49] <lazyPower> or rather: postgres:latest
[18:49] <lazyPower> ok, so it seems like pulling the image is either blocking, or is hosed and not signalling back that the hook has failed
[18:50] <lazyPower> and thats not good :| i haven't encountered that before so there's likely no logic in the charm to handle the scenario
[18:50] <hackedbellini> lazyPower: yeah that is strange. My previous deploy did finish pulling the container
[18:50] <hackedbellini> something happened in this one specifically
[18:51] <hackedbellini> is there a way to make juju rerun the hook, considering that it "failed"?
[18:51] <lazyPower> hackedbellini - try running the docker pull postgres:latest on that unit
[18:51] <lazyPower> yep
[18:51] <lazyPower> juju resolved redmine/6
[18:51] <lazyPower> that by default will attempt a retry of the hook, if you want to skip  its juju resolved --no-retry redmine/6
[18:52] <lazyPower> s/try running/try manually running/
[18:52] <lazyPower> that way we can capture the behavior and determine what the root cause is
[18:55] <hackedbellini> lazyPower: ok, waiting for docker pull to finish. Let's see if that works :)
[18:56] <hackedbellini> I should run "docker pull" with the ubuntu user itself? Or with root?
[18:56]  * lazyPower crosses fingers "no whammies no whammies no whammies, c'mon big money"
[18:56] <lazyPower> Should work with either/or
[18:59] <hackedbellini> lazyPower: strange that it seems to be "stuck". This is my output:
[18:59] <hackedbellini> https://www.irccloud.com/pastebin/looQTz6T/
[18:59] <hackedbellini> but even if I ctrl+c it and do it again, I get exactly the same output
[18:59] <lazyPower> hackedbellini - any output from journalctl -u docker?   hoping there's something in the docker runtime logs that will help indicate what the problem might be
[19:00] <lazyPower> stokachu - found the culprit, looks like the workers cant open /proc/sys/vm/overcommit_memory, it's causing the workers to panic and abort the registration process
[19:03] <hackedbellini> lazyPower: this is the tail of the log:
[19:03] <hackedbellini> https://www.irccloud.com/pastebin/Z7jVEdwC/
[19:03] <hackedbellini> so my "docker pull" refers to those latest 2 lines
[19:04] <hackedbellini> forget it, it seems that the pull finished
[19:04] <hackedbellini> docker images now has the postgres
[19:04] <hackedbellini> I'll try to "juju resolved" now
[19:04] <lazyPower> ok, with the image cached it should skip pulling during docker-compose up
[19:05] <hackedbellini> hrm, didn't even need. The debug-log just advanced to "INFO unit.redmine/6.install Pulling redmine (redmine:latest)..." by itself
[19:05] <lazyPower> omg now i know why it was fighting
[19:05] <lazyPower> two threads of the engine trying to pull the same image
[19:06] <lazyPower> that explains the slowness
[19:06] <lazyPower> whoops
[19:06] <lazyPower> i thought the hook was already in failure mode
[19:07] <hackedbellini> lazyPower: it seems to have worked. Let me check if it is really running now
[19:09] <hackedbellini> lazyPower: so, the charm now says that "Redmine is running on port 8000", but there's nothing on that port
[19:09] <hackedbellini> even netstat doesn't show that port
[19:10] <lazyPower> docker ps -a
[19:10] <lazyPower> do you see the container running, and that its bound to port 8000?
[19:11] <hackedbellini> lazyPower: hrm, both postgres and redmine have status "Exited (1) 2 minutes ago" (I don't understand anything of docker, sorry for not being able to debug it better =P)
[19:11] <lazyPower> welp, thats why nothing is listed as being bound to a port :)
[19:11] <lazyPower> you can fish up the container logs to see why
[19:12] <lazyPower> docker logs <id of container from `docker ps`>
[19:12] <lazyPower> hackedbellini by the end of this exercise you're going to be an expert docker charmer; you're literally touching every corner of the stack
[19:12] <hackedbellini> lazyPower: the log has some lines with "error: exec: "initdb": executable file not found in $PATH"
[19:13] <hackedbellini> lazyPower: hahaha nice! Maybe I can help contribute back to you in the near future
[19:14] <lazyPower> :| not cool postgres image, not cool
[19:14] <lazyPower> i wonder if something changed there
[19:14] <hackedbellini> lazyPower: strange that I'm using postgresql:latest. Also, the redmine:latest gave me that: "error: exec: "rake": executable file not found in $PATH"
[19:14] <hackedbellini> so, the problem is with the images?
[19:15] <lazyPower> seems like something has changed in the images, yeah
[19:15] <lazyPower> so, question
[19:15] <lazyPower> have you tried this using just compose without the charm on your host to verify the docker components are good to go?
[19:15] <lazyPower> i'm in another meeting, but i can give it a shot when i'm out
[19:17] <hackedbellini> I didn't really, but I can try. Just don't think I'm gonna make it today :(
[19:17] <lazyPower> hackedbellini - well you've made a lot of good progress and you're literally at the last 5%
[19:17] <lazyPower> whats your TZ? are you EU based?
[19:19] <hackedbellini> lazyPower: I'm in Brazil hahaha, it is 17h19 right now
[19:19] <hackedbellini> note: maybe it is related to this (https://github.com/docker/compose/issues/1639), but maybe not
[19:19] <lazyPower> ah yeah, late in the work day/week
[19:33] <jcastro> justicefries: we're in the hangout
[19:33] <justicefries> omw
[19:34] <justicefries> ugh I can't find it
[19:34] <justicefries> did you send the invite?
[19:34] <justicefries> i can't find the hangout
[19:34] <lazyPower> https://hangouts.google.com/hangouts/_/canonical.com/kubernetes-with
[19:34] <lazyPower> justicefries ^
[19:35] <justicefries> hmm it has me in requesting to join
[19:37] <stokachu> lazyPower: hmm i thought that was fixed with my profile edit
[19:37] <stokachu> lazyPower: ill look again
[19:37] <lazyPower> stokachu - i'm going to file some bugs and do the proper process
[19:37] <lazyPower> where would you like them to be filed? against the bundle and tag them with conjure?
[19:37] <lazyPower> or would you prefer i find the spell and file them there or?
[19:38] <stokachu> lazyPower: yea lets file them on the spell https://github.com/conjure-up/spells/issues
[19:38] <lazyPower> ack, will do
[19:38] <stokachu> i thought i fixed that but maybe not
[19:42] <justicefries> github.com/kz8s/tack
[19:55] <marcoceppi> justicefries: http://github.com/juju/python-libjuju
[20:12] <lazyPower> justicefries http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/ sending here for persistence
[20:12] <jcastro> justicefries: http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/
[20:12] <lazyPower> jcastro ninja'd
[20:22] <bildz> hey guys...  I have my horizon up, but in looking into the neutron settings for the bridged interface, I'm missing networks that should be there.  Is there any means of looking at the YAML files used by conjure-up to get a better idea on how this process is handled?
[20:23] <bildz> keep in mind that i've performed conjure-up openstack about 10 times
[21:32] <justicefries> hmm.
[21:33] <justicefries> 1 nice to have thing. i have a superuser account, i'd like to be able to grant models to myself without having to do the logout/login as the model's user/logout/login as myself dance.
[21:33] <justicefries> i'm able to create the model under that username.
[21:33] <justicefries> or even have machine users/groups that I can put a model into. that'd sort of force that as being necessary.
[21:34] <justicefries> but it seems weird I can create a model under someone else's user, grant access to anybody I want on it
[21:34] <justicefries> but not grant it to myself
[22:04] <marcoceppi> justicefries: not sure I follow
[22:04] <justicefries> so if you do:
[22:04] <justicefries> `juju add-model foo-bar --credential bam --owner admin` while logged in as justicefries
[22:05] <justicefries> where justicefries is a superuser
[22:05] <justicefries> doing a `juju grant justicefries admin admin/foo-bar` fails, you have to log in as the model owner.
[22:05] <marcoceppi> yup
[22:05] <marcoceppi> wat
[22:05] <justicefries> let me try again to verify
[22:05]  * marcoceppi tries
[22:06] <justicefries> yup, can't do it. and just confirmed i'm a superuser
[22:06] <marcoceppi> interesting. Not sure if this is a bug or by design, rick_h ^^ ?
[22:07] <marcoceppi> justicefries: does the inverse work okay for your usecase, you're the admin, then you make "admin" user an admin of that model?
[22:07] <justicefries> let me try. so make the model under my user, then grant it to admin from admin
[22:07] <marcoceppi> I guess for auditing, you'd just be the owner of all the models though
[22:07] <marcoceppi> justicefries: yeah, create the model, then as the owner of the model, give the other use admin access to it
[22:07] <justicefries> yeah I'd like the primary ones to all be under an admin/production user.
[22:08] <justicefries> oh, as the owner
[22:08] <justicefries> yeah, I can do it as the owner.
[22:08] <marcoceppi> sure
[22:08] <justicefries> i think the weird thing is I can create a model for someone else, but then not get access to it.
[22:08] <marcoceppi> interesting, I'll file a bug see what shakes out from it
[22:08] <marcoceppi> yeah, superuser seems to not inherit admin of models it controls
[22:08] <justicefries> should be consistent between the two either way
[22:09] <justicefries> if its intended, I'd expect add-model --owner shouldn't work for someone else
[22:09] <marcoceppi> well, if that user does not have "addmodel" or "superuser" it wont
[22:09] <justicefries> right
[22:09] <marcoceppi> I'll file a bug and link you up in a min
[22:10] <justicefries> oh, I see, because superuser doesn't inherit admin over models, I can't see it, and so grant 404s.
[22:10] <marcoceppi> justicefries: is this with 2.0.2 or 2.1-beta1?
[22:10] <justicefries> 2.0.2
[22:10] <justicefries> i can try on 2.1-beta1
[22:10] <marcoceppi> yeah, I wonder if this is just an omission of the permission level of superuser
[22:10] <marcoceppi> where if it doesn't own a model, it really can't see it, despite being the creator (and superuser)
[22:14] <marcoceppi> justicefries: https://bugs.launchpad.net/juju/+bug/1643076
[22:14] <mup> Bug #1643076: superuser does not have admin over models it created but does not own <juju:New> <https://launchpad.net/bugs/1643076>
[22:17] <justicefries> awesome
[22:58] <geekgonecrazy> lazyPower: so I built the charm.. I guess you guys would probably call it the old style.  Because mainly using bash.  Having issues locally getting the application to die so I can deploy it again.  juju remove-application appname doesn't work, and neither does juju remove-unit appname/0
[22:59] <geekgonecrazy> last time I had to destroy the controller and manually purge the lxd containers to get it to finally remove
[22:59] <geekgonecrazy> which sucks because have to wait on it to download the mongodb charm again :D
[23:45] <geekgonecrazy> man... it just won't remove any.  I'm sure i've got to be missing something