[00:47] lazyPower: you wrapped ansible playbooks in charms didn't you in the demo in Ghent last year?
[00:48] magicaltrout - it was kind of hacky, and there was no follow up interest
[00:48] but i did yeah
[00:48] cool
[00:48] cause i have this 0.5PB server that I mentioned the other week
[00:48] can't install anything on it but does have python
[00:49] so i'm proposing I create some playbooks (never used ansible before, should be a treat) to deploy our code, then wrap the playbooks in charms to deploy the same stuff outside of the mega server
[00:54] interesting
[00:55] magicaltrout - https://github.com/chuckbutler/ansible-base early work was left here
[00:55] literally nothing more than POC work
[00:56] thanks lazyPower thats very handy
[00:56] I work for NASA, everything we do is POC ;)
[00:56] :)
[00:56] there's probably something heinous in there
[00:57] feel free to flame me later
[00:57] i'll just pour beer on you instead.... by "accident"
[00:57] ooo please dont, i'll be lean on clean clothing
[00:58] headed to PHX on Thursday before i head to Ghent, so its literally like, last day of the conference i might be recycling socks
[00:58] hehe
[00:58] i'll bring pegs to block the smell
[01:05] is it possible to transfer model ownership?
[01:06] bdx i'm not certain, i would think its possible via juju grant/revoke && juju share
[01:08] lazyPower: `juju share` - ha dreaming
[01:09] ok so that command changed on me
[01:09] i make no apologies for progress happening :P
[01:10] i'm gonna dip out and go get an extremely late dinner
[01:10] bbiaf
[01:16] bdx: so if you make another account the model admin then it's just the same
[01:16] bdx: should be able to remove yourself and the other person now has the admin rights
[01:17] rick_h: ahh niceeeee! thanks!
[01:35] rick_h - i was totally there, but dropped the ball that juju-share is no longer a thing (derp)
[01:35] byproduct of working late, sorry bout that one
=== frankban|afk is now known as frankban
[08:18] Good morning Juju world!
[09:43] anrah are you there
[10:03] Good morning!
[10:04] lazyPower: (don't freak out, I don't have new problems :D) -> I saw that all our VMs (which host the control plane of k8s (master, etcd, kube-apilb, easyrsa)) are hitting some I/O errors under intensive use (like many cluster operations at once, e.g. the deletion of large namespaces)
[10:04] lazyPower: the problem is clearly identified on our side, it's the storage of the ESXi which slows down
[10:06] lazyPower: so, just to let you know that's not the fault of CDK :]
[10:06] I will pursue my environment tests on this cluster, but I think I will rebootstrap from scratch before going into production
[10:06] (once the I/O error problem with my storage is fixed)
[10:18] Zic: that's good insight, we're trying to build out a wiki of caveats on different providers, an ESXi gotchas page would be good to have
[10:25] all our virtualization is based on Proxmox clusters or VMware ESXi/vCenter clusters in my office
[10:26] I tend to prefer Proxmox for open source, but for this customer which will use CDK, it's ESXi :/
[10:27] but as I said, I think the problem is located on the ESXi storage (which is not local, it's an iSCSI attached disk-arrays machine)
[10:27] maybe on the network side, or maybe on the disk-arrays itself
[10:45] Zic: seems plausible
[13:58] Zic: there's a sig-onprem, which meets today actually, that is collecting on-premise tips, tricks, issues, bugs, etc. that we're a part of
[13:58] so if you want to pass any feedback along upstream ...
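[A minimal sketch of the ownership handover rick_h describes above (01:16): grant the other account admin on the model, then revoke your own access. User and model names are placeholders; check `juju help grant` / `juju help revoke` for the exact syntax on your Juju 2.x release.]

    # give the other account admin rights on the model
    juju grant other-user admin my-model
    # then drop your own access, leaving the other account as the effective owner
    juju revoke my-user admin my-model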
=== mskalka|afk is now known as mskalka
[14:05] Zic - ah, interesting. I just had a volume crash in my NAS that was backing my home lab. i'm in the process of copying the data to another volume now and prepping for a re-deployment of my remote storage. Seems like i've found a similar situation with iscsi backed volumes.
[14:05] weird how it struck us both at the same time.
[14:05] granted you said io issues not a crash, but i digress
[14:07] lazyPower: you know anyone I can ping on this? https://github.com/juju/charm-tools/issues/287
[14:08] rick_h - marcoceppi is the grand poobah of that charm
[14:08] lazyPower: ok, wasn't sure if he'd delegated these days ty
[14:08] s/charm/snap/
[14:08] i need coffee
[14:08] marcoceppi: is there any way around https://github.com/juju/charm-tools/issues/287 ? I'm trying to create my first layered charm wheeee
[14:09] lazyPower: does charm create work for you? If you could, could you create a shell and shoot me a tarball as a personal favor pretty please?
[14:10] rick_h https://www.dropbox.com/s/ey8bi262mqcys12/for_rick.tar?dl=0
[14:10] my hero!
[14:11] * lazyPower flexes
[14:12] don't let rick_h write charms! .....
[14:12] but but but .... I promise to only do somewhat good things
[14:13] why not? he worked on the gui charm back in the day
[14:13] hehe
[14:13] OOOhhhh i should have looked at who was trolling before i fed the argument ;)
[14:13] morning :P
[14:13] \o magicaltrout
[14:14] spent the morning writing some cfgmgmtcamp slides
[14:14] figured i'll come and annoy you all now
[14:14] Thats on my todo list today as well
[14:14] before the californians wake up and annoy me
[14:15] jcastro: this kind of meeting is planned regularly?
[14:15] yep
[14:15] jcastro: I can prepare something for the next one, because for today I don't have enough time :/
[14:15] started work on yet another charm yesterday
[14:15] openldap
[14:15] there's a sig-cluster-ops as well, always looking for feedback
[14:16] Zic: I was just making you aware it exists
[14:16] jcastro: what is the date of the next one?
[14:16] https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-IGs)
[14:16] magicaltrout - so i see you're embracing the big tree of hate known as ldap
[14:16] lazyPower: well, er
[14:16] yeah
[14:16] 15 Feb is the next one
[14:16] :D
[14:16] I don't see why not, i use ldap servers all the time and they're a ballache
[14:16] the last time i interfaced with ldap with any seriousness i recall being frustrated
[14:16] yeah
[14:16] why not let it be easier with relations?
[14:17] +1 to that sentiment
[14:17] basically you put what you want to talk about on the google document, they have mailing lists for each sig.
[14:17] i think we talked about this in pasadena
[14:17] yeah
[14:17] I have a blank implementation for interfacing with AD and stuff kicking around as well
[14:17] I just need to tidy it up and ship it
[14:17] interesting, did you use the cloudbase AD as a proof point?
[14:18] nope, i used NASA's AD as a proof point ;)
[14:18] well i mean, we have an AD charm that CBS wrote
[14:18] last time i deployed it was back in 2015 however.
[14:19] i presume your AD "adapter" was a lightweight forwarder then? a proxy-charm as it were.
[14:19] yeah i know but i'm not overly bothered by its guts
[14:19] there was no public interface
[14:19] so i just created one with the configuration stuff i needed
[14:19] * lazyPower nods
[14:19] rick_h: snap install --classic --candidate
[14:19] shame it wasn't more useful in its current form.
but eyyyyyyy
[14:19] marcoceppi: did
[14:19] marcoceppi: or did you just update in the last 15min?
[14:20] I haven't updated it, but it should be working
[14:20] rick_h: are you still blocked?
[14:20] marcoceppi: I got a tarball of the create from lazypower and I'm tweaking it and will see if I can run charm build or if I'll hit the same dep issue
[14:21] marcoceppi: k, verified I can run charm build just not create
[14:21] rick_h: odd, I'll take a look
[14:22] rick_h: whoops, I see that now. I'll get a fixin
[14:22] marcoceppi: <3 ty
[14:24] jcastro: I will try to come to the next one then :) I didn't notice, is it on Slack or IRC?
[14:24] it's on slack too
[14:24] which one? (the link you gave me redirects to the wiki homepage, maybe because I'm currently signed out of GitHub)
[14:29] rick_h - any movement on compiling juju in a newer version of go in brew?
[14:29] rick_h - these stack traces are a drag :(
[14:29] lazyPower: I have no idea tbh.
[14:29] lazyPower: would have to check with balloons or sinzui on the plans there I think
[14:30] hmm, I really need to take time to learn Go someday if I want to contribute (and also because it's a more and more widely used language)
[14:31] my old Bash/C/Python skills are not up-to-date with 2017 I guess :p
[14:33] if i update an interface and push it to github
[14:33] do i need to do anything next time i build a charm relying on it?
[14:33] * magicaltrout has messed up somewhere
[14:34] magicaltrout: nope, just build away
[14:36] charmtools.build.tactics: Missing implementation for interface role: requires.py
[14:36] what have i messed up then?
[14:37] i renamed the underlying bits to fit the general interface naming
[14:37] and now my charm doesn't build :)
[14:38] sounds like one of the layers you're using isn't in the right spot
[14:39] well
[14:39] http://interfaces.juju.solutions/interface/solr/ interface is there
[14:39] sorry I mean locally
[14:39] and its referenced in layers.yaml and metadata.yaml
[14:39] but the build directory charm build lists is empty when it falls over
[14:40] hmm
[14:40] this is my first attempt at a public interface so I've clearly messed up somewhere
[14:40] normally my interfaces just live in $INTERFACES
[14:44] you're sure charm build pulled the interface down into $JUJU_REPOSITORY/interfaces?
[14:45] no it didn't but I don't believe charm build does that now (any more?). They just appear in hooks/relations/ in your build dir from somewhere
[14:45] if charm build pulled them all down I'd have loads in JUJU_REPO/interfaces
[14:48] I'm not 100% on the build behavior but I ran into the same issue yesterday building a local charm and the fix was dropping a local copy of whatever missing interface I had into that INTERFACE_PATH dir
[14:53] it uses a temporary directory in your build path 'deps'
[14:53] you'll find what it pulled there
[14:55] magicaltrout: are you using 2.2.0 charm-tools?
[14:55] 2.1.9
[14:55] magicaltrout: are you on ubuntu?
[14:56] of course
[14:56] magicaltrout: sudo apt purge charm charm-tools; sudo apt update; sudo apt install snapd; sudo snap install charm --candidate --classic
[14:56] magicaltrout: the snap has 2.2.0 in it, which is much better at telling you whats happening during the build process
[14:57] excellent
[14:58] vague logging is my own forte
[15:00] trolol same error
[15:00] marcoceppi: does the build stage somewhere?
[15:01] cause the target dir is empty
[15:01] magicaltrout: can you run build with --debug and post the output?
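[For context on the interface-resolution error magicaltrout is chasing above, a minimal sketch of how a layered charm typically pulls in an interface; the layer and relation names are illustrative rather than magicaltrout's actual files.]

    # layer.yaml -- interfaces listed here are resolved by charm build from a
    # local copy (e.g. $INTERFACE_PATH / $JUJU_REPOSITORY) if one exists,
    # otherwise fetched from interfaces.juju.solutions
    includes:
      - layer:basic
      - interface:solr

    # metadata.yaml -- the relation that uses that interface
    requires:
      solr:
        interface: solr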
[15:02] http://pastebin.com/RSFRvTHR
[15:02] nothing special
[15:03] i actually put my interface back in $INTERFACE_PATH and I still get the error
[15:04] so i've no idea what i broke in renaming it
[15:04] although i do have a recollection of me naming my interface solr-interface initially because of this problem
[15:10] magicaltrout: hi! what do your layers and metadata yamls look like?
[15:12] hello admcleod_ pretty standard
[15:12] https://github.com/USCDataScience/sparkler/blob/master/sparkler-deployment/juju/sparkler/metadata.yaml
[15:12] except solr-interface is now just solr
[15:13] the errors are weird though from charm tools, like its building a half cached version
[15:16] magicaltrout: hmm. i think you broke it.
[15:17] magicaltrout: did you check what lazyPower said? deps?
[15:17] lazyPower: can I run kubernetes-e2e in a production cluster or is it not recommended?
[15:17] yeah but then i just reverted to trying a local version
[15:17] and thats screwed as well
[15:17] Zic - you can certainly run it against a prod cluster. its a great validation mechanism and it cleans up after itself
[15:18] "and it cleans up after itself" was what I wanted to know, thanks :)
[15:18] in a near-default CDK, should I get any error?
[15:18] (I didn't try yet)
[15:19] yeah admcleod_ I went back to calling a local version solr-interface and it returns to building find
[15:19] fine
[15:19] but if i call it just `solr` it freaks out
[15:22] build: Processing interface: solr
[15:22] ...
[15:22] worked
[15:22] ah
[15:22] found it
[15:22] wtf
[15:23] i think this goes down as a weird charm build bug
[15:23] plus my wonky setup
[15:23] I have ~/Projects/charms
[15:23] and ~/Projects/charms/interfaces
[15:23] in charms I had an empty directory called `solr`
[15:24] RIP
[15:24] magicaltrout - if its in the interface archive, try building with --no-local-layers
[15:24] maybe its not a bug, maybe it searches various places for interfaces
[15:24] it does, and it will use local paths if it finds them
[15:24] the --no-local-layers ensures you're always fetching from the api
[15:24] I thought it just looked in $INTERFACE_PATH?
[15:26] oh well
[15:26] weirdness averted
[15:27] * magicaltrout must remember not to put stuff in $JUJU_REPOSITORY that might share the name with a layer
[15:27] s/layer/interface
[15:27] oh you
[15:27] i think its a fair assumption it would look for interfaces on the interface_path! :P
=== admcleod_ is now known as admcleod
[15:27] well, you know what they say
[15:28] the balder you are the more shiny your scalp?
[15:28] actually that depends on buffing
[15:29] but no i meant the other thing
[15:30] don't assume things, they're generally wrong?
[15:30] thatll do :}
[15:37] bdx: Mind weighing in on https://github.com/juju-solutions/layer-basic/pull/86
[15:52] cory_fu: just for reference as you guys use GH you could set up a CLA exactly like we do with the ASF: https://cla.github.com/
[15:52] so that people contributing get some terms about copyright ownership and canonical's rights etc
[15:52] so that if you do a license pivot, whilst its nice to have asked, you can do what you like ;)
[15:53] not that I think bdx will care especially, but ya know...
[15:54] this statement is true, you never know about bdx ;) ;)
[15:55] depends what drugs he's under the influence of at that given point in time! ;)
[15:55] :O
[16:37] has anybody tried mixing bash + python in a reactive, layered charm?
[18:28] marcoceppi: I can see why you ran into issues even with manual replset initiation.
Mongo is SUPER picky about its input
[18:28] mskalka: it totally is.
[18:30] marcoceppi: I spent 20 minutes trying to figure out why my obvious string '10.X.X.X:Y' was being interpreted as an int. Needed double quotes.
[18:30] * mskalka bangs head on desk
=== frankban is now known as frankban|afk
[18:49] magicaltrout: put some heat on this for me and I'll let that one go https://bugs.launchpad.net/juju/+bug/1660675
[18:49] Bug #1660675: Feature Request: instance tagging via Juju
[18:51] 10min warning to Juju Show Ep #5
[18:51] wooooooooo
[18:51] Juju Show watch link: https://www.youtube.com/watch?v=NySW5VjBDC8
[18:51] Juju Show "sit on the panel" link: https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=?eid=103184405956510785630&hl=en_US&authuser=0
[18:54] arosales: marcoceppi jcastro lazyPower bdx magicaltrout mbruzek ^
[18:54] rick_h: have fun, I'll watch later
[18:54] thanks Rick
[18:55] kwmonroe: ^
[18:55] thx rick_h, petevg ^^
[18:58] thx, kwmonroe. Heading over ...
[18:59] rick_h: I got a 403 with that url
[19:00] mbruzek: try a different authuser at the end?
[19:00] mbruzek: or take that off?
[19:00] still able to join?
[19:00] will do
[19:00] https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=?eid=103184405956510785630&hl=en_US&authuser=0
[19:00] 404 for me
[19:00] * rick_h tries w/o the authuser
[19:00] https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=?eid=103184405956510785630&hl=en_US
[19:00] we've got 3 other folks in atm
[19:01] rick_h: are you able to invite?
[19:01] arosales: delete everything after the equal sign
[19:01] arosales: invited
[19:02] 404 all around
[19:03] arosales: if you took authuser=0 out, try putting it back with =1, https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=?eid=103184405956510785630&hl=en_US&authuser=1
[19:03] no luck there either
[19:03] Try this: https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=
[19:03] arosales ^
[19:04] that worked, thanks mbruzek
[19:04] nice!
[19:04] arosales: owes mbruzek a brewski
[19:06] mbruzek: you got it
[19:15] ~charmers group info, for those interested: https://jujucharms.com/community/charmers
[19:17] kvm!
[19:17] awesome
[19:19] howdy juju world! :)
[19:20] stormmore: hello
[19:21] lazyPower - just your daily friendly reminder ;-)
[19:24] rick_h: are you guys watching deltas too, on the hosted controller? statistics for usage and events per user, per model?
[19:24] not sure if you planned to touch on the hosted controller ...
[19:30] to create a charm that drives other charms
[19:31] the stacks idea
[19:31] hello Merlijn_S
[19:31] indeed, we have seen that use case come up time and time again
[19:32] isn't that what bundles do?
[19:33] stormmore: a bundle is static, or just a yaml description for a given solution
[19:34] stormmore: what folks have been talking about is if you wanted to have an auto-scaler charm it would need extra privileges to add-unit on another charm
[19:34] today a charm can't take juju admin tasks on another charm
[19:34] so that is the thought here stormmore
[19:35] arosales: that would indeed be cool. I want to eventually be able to spin up and spin down nodes based on load in my bare metal environment using MaaS
[19:35] PS: for show and tell; I'm working on a Charm for the Eclipse Che cloud editor + Charming integration.
Very rough early work, but if anyone is interested: https://jujucharms.com/u/tengu-team/eclipse-che/
[19:36] Basically an IDE running in your browser that connects to a charmbox with all the juju tools preinstalled
[19:36] stormmore: indeed and some folks have been thinking about that with libjuju and juju-2.0 so stay tuned to the list for that work
[19:36] arosales: awesome, another "getting ahead of myself" situation :)
[19:37] good you're thinking in that direction
[19:37] Merlijn_S: oooh shiny
[19:37] Merlijn_S: very interesting, taking a look now
[19:38] yes and it is awesome that I am not the only one. reason #143 of why I choose MaaS and Juju for this environment ;)
[19:39] stormmore: :-)
[19:40] Merlijn_S: perhaps we should show this in the next juju show if you are up for it
[19:40] Merlijn_S: rick_h was thinking of doing a couple of juju shows at the summit next week. At a min to recap
[19:40] arosales: I'll probably do a lightning talk about it
[19:41] Merlijn_S: +1
[19:41] Merlijn_S: we will also be recording talks
[19:41] arosales: +1 :)
[19:43] ok so we don't have slots for lightning talks
[19:43] so I think we should perhaps start consolidating talks
[19:43] or asking people if they need the full 40 minutes
[19:43] jcastro: ya we should look to make some room
[19:43] jcastro: or shorten talks, +1
[19:44] we could also ask matt/chuck to bin the kubernetes talks and propose those as lightning talks in the kubes track?
[19:44] jcastro: I think we could shorten the talks each by 5-10 min each day to at least leave 30 min at the end of the day
[19:45] talks? where do these happen?
[19:45] thats 6 lightning talks across the 2 days, each at 10 min
[19:45] stormmore: summit.juju.solutions
[19:45] Gent, Belgium next week
[19:46] yeah the problem is we can't really change the timeslots, we inherit those from cfgmgmtcamp
[19:46] so like, snacks and drinks and breaks are all on that schedule
[19:46] ah
[19:47] I mean, we could fit 2 in one
[19:47] but customizing the schedule is out
[19:47] the slots I mean
[19:49] gotcha
[19:49] jcastro: so then our only option is to consolidate
[19:50] jcastro: what time do we end on Tuesday?
[19:51] http://cfgmgmtcamp.eu/schedule/index.html#juju
[19:51] lolz
[19:51] yes I am looking at that
[19:52] oh, well talks are 40 minutes
[19:52] so 16:20, bus at 16:30~17:00
[19:53] on monday james' talk is at 17:00
[19:53] but on tuesday last talk is 15:40
[19:54] you may want to update the channel topic, summit.jujucharms.com is failing dns right now
[19:55] kwmonroe: perhaps post call mbruzek would like to learn more about resources and cwr-ci
[19:55] marcoceppi: ^^
[19:55] I think it's safe to just link to the direct schedule, good call
=== jcastro changed the topic of #juju to: Join us at the Charmer Summit: 6-7 Feb - http://cfgmgmtcamp.eu/schedule/index.html#juju || https://review.jujucharms.com/ || https://jujucharms.com/docs/
=== jcastro changed the topic of #juju to: Join us at the Charmer Summit: 6-7 Feb - http://cfgmgmtcamp.eu/schedule/index.html#juju || https://review.jujucharms.com/ || https://jujucharms.com/docs/ || http://goo.gl/MsNu4I || Youtube: https://www.youtube.com/c/jujucharms
[19:57] http://summit.juju.solutions/ works for me
[19:57] sorry for the spam
[19:57] it's been glitchy all month
[19:57] ty arosales mbruzek kwmonroe bdx and pete not tim!
[19:58] :)
[19:58] thanks for hosting rick_h !
[19:59] if anyone has anything else for the notes please fill it in like kwmonroe is doing and I'll copy/pretty up for the youtube desc
[19:59] arosales: ok so who are we combining?
[19:59] we should do this now because I have to start packing soon, I have a pre-Gent trip to cram in before summit-ing.
[19:59] jcastro: hangout?
[20:00] omw
=== mskalka is now known as mskalka|afk
[20:05] stormmore: jcastro it's summit.juju.solutions.......
=== marcoceppi changed the topic of #juju to: Join us at the Charmer Summit: 6-7 Feb - http://summit.juju.solutions || https://review.jujucharms.com/ || https://jujucharms.com/docs/ || http://goo.gl/MsNu4I || Youtube: https://www.youtube.com/c/jujucharms
[20:05] @marcoceppi yes I was aware of that, just the link in the topic was wrong ;-)
[20:06] someone was complaining that summit.juju.solutions was the busted one
[20:06] last week
[20:07] that doesn't excuse the wrong url in the topic though. /me runs
[20:08] stormmore: <3
[20:08] jcastro: yeah, that link has never been broken, we should just get summit.jujucharms.com pointed as well
[20:09] 301 redirect time marcoceppi!
[20:10] ok, video updated. /me runs to get the boy from school
[20:10] kwmonroe: let's chat later on blog/email follow up please
[20:10] kwmonroe: thanks so much for presenting and putting that together!
[20:11] np rick_h - thanks for the airtime!
=== siva is now known as Guest21821
[20:17] I used to have my charms working in trusty
[20:17] I recently moved to xenial
[20:18] I find that my charms are failing in the install hook and the log has the following errors
[20:18] 2017-02-01 19:37:41 INFO juju.worker.meterstatus connected.go:112 skipped "meter-status-changed" hook (missing) 2017-02-01 19:38:10 INFO juju.worker.leadership tracker.go:184 contrail-control/0 will renew contrail-control leadership at 2017-02-01 19:38:40.054544153 + 0000 UTC 2017-02-01 19:38:22 INFO install /usr/bin/env: 'python': No such file or directory 2017-02-01 19:38:22 ERROR juju.worker.uniter.operation runhook.go:107 hook "i
[20:18] The /usr/bin/env file is indeed there and if I manually install the packages it works
[20:19] Can you please let me know why I am seeing this error?
[20:19] Any help is much appreciated
[20:21] wonder if you are getting tripped up with a Windows CRLF problem
[20:23] or it could be a related bug to https://bugs.launchpad.net/charms/+source/odl-controller/+bug/1555422
[20:23] Bug #1555422: On Xenial: install /usr/bin/env: 'python': No such file or directory
[20:26] @mup and @stormmore, can you please let me know how I can apply the patch for this fix
[20:26] I am using Juju 2.0
[20:26] that I can't do, sorry :-/
[20:27] question concerning legacy hooks in reactive charms, should this work http://paste.ubuntu.com/23907069/ ?
[20:27] Guest21821: which charm is it?
[20:27] oops, this http://paste.ubuntu.com/23907074/
[20:28] I am using the contrail charms that i am developing
[20:28] my 'upgrade-charm' hook just doesn't seem to be firing ... I'm wondering if there is something else I need to add ...
[20:28] oops
[20:28] bdx should the @hook not be @hooks?
[20:29] @mup, @bdx, if I install the python2 package, will it resolve the issue?
[20:29] Guest21821: your charm needs to either install python2, or use python3 instead
[20:30] stormmore: whoops ... yeah .. that might be my bad (typo) .. thx
[20:30] @tvansteenburgh, thanks. How do I install python2 from the charm
[20:30] what will be the package name?
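[On bdx's question above about legacy hooks in a reactive charm: a minimal sketch of the reactive-style registration, which is where that thread lands a little further down; it assumes the charms.reactive library and is not bdx's actual paste.]

    # reactive/mycharm.py -- in a reactive charm, register legacy hooks with
    # charms.reactive's hook decorator; the framework does the dispatch, so
    # there is no Hooks()/hooks.execute() boilerplate as in classic charms
    from charms.reactive import hook
    from charmhelpers.core.hookenv import log

    @hook('upgrade-charm')
    def upgrade_charm():
        log('upgrade-charm fired')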
[20:31] no worries bdx, just what I noticed from a quick glance
[20:31] Guest21821: python
[20:31] @tvansteenburgh, just apt-get install python will do?
[20:32] Guest21821: yeah, is it a bash charm?
=== mskalka|afk is now known as mskalka
[20:32] @tvansteenburgh, no it is a python charm
[20:33] Guest21821: if it's a reactive charm you can put it in layer.yaml
[20:33] @tvansteenburgh, no it is a python charm. It is not a reactive charm
[20:33] I will just mention 'python' in the list of packages I have
[20:33] Guest21821: those are not mutually exclusive
[20:34] Guest21821: for example https://github.com/juju-solutions/review-queue-charm/blob/master/layer.yaml
[20:34] I meant it is not a bash charm but a python charm
[20:35] Guest21821: right, but the charm i linked above is also python, but it uses the reactive framework
[20:35] @tvansteenburgh, mine does not use the reactive framework
[20:35] Guest21821: ok
[20:36] Let me install the python package from the charm and see how it goes
[20:36] Thanks a lot
[20:36] np
[20:36] cory_fu: for the invite I get 'This invitation is invalid.' when I try to accept for crashdump :/
[20:37] lutostag: Ah. I was hoping that the admin invite would transfer when I moved it to https://github.com/juju/juju-crashdump but it didn't.
[20:38] marcoceppi: Since this move was at your behest, can you give lutostag and the big software team access?
[20:38] lutostag: I should ask, did you see the context for this move?
[20:38] cory_fu: you all do have access
[20:39] cory_fu: it could have stayed in juju-solutions, fwiw
[20:39] cory_fu: check your perms now
[20:39] cory_fu: you have admin
[20:39] marcoceppi: I thought all new mature projects are supposed to go to juju?
[20:39] cory_fu: true
[20:39] we need to move a lot of things then ;)
[20:40] lutostag: For reference: https://github.com/juju/plugins/pull/75
[20:41] marcoceppi: Yeah, I thought that was the plan, as it made sense for each repo.
[20:41] cory_fu: true, we should move charms.reactive and such
[20:41] marcoceppi: Yes, we should
[20:41] cory_fu: lets chat at the summit
[20:41] cory_fu: ah neato. I don't care where it lives, but somewhere on its own probably does make more sense
[20:41] cory_fu: get a list and make a plan
[20:42] marcoceppi: I won't be at the summit, but tvansteenburgh, Merlijn, and tinwood will be there.
[20:42] cory_fu: doh
[20:42] :/
[20:51] kwmonroe: so, I had typos in my pastebin, not my charm, I'm still not getting the 'upgrade-charm' hook to fire
[20:51] kwmonroe: http://paste.ubuntu.com/23907168/
[20:52] my log shows http://paste.ubuntu.com/23907184/
[20:53] bdx: You're mixing reactive and non-reactive. http://pastebin.ubuntu.com/23907195/
[20:54] cory_fu: that would do it! thanks!
[20:54] np
[21:10] @tvansteenburgh, I installed the python package in my charms.
I don't get the old error but it still fails in the install hook
[21:10] I get the following error
[21:10] 2017-02-01 20:49:13 ERROR juju.worker.dependency engine.go:539 "metric-collect" manifold worker returned unexpected error: failed to read charm from: /var/lib/juju/agents/unit-contrail-control-0/charm: stat /var/lib/juju/agents/unit-contrail-control-0/charm: no such file or directory 2017-02-01 20:49:13 INFO worker.uniter.jujuc tools.go:20 ensure jujuc symlinks in /var/lib/juju/tools/unit-contrail-control-0 2017-02-01 20:49:13 INFO w
[21:14] Sorry, I still see the same error
[21:14] 2017-02-01 20:49:43 INFO juju.worker.meterstatus connected.go:112 skipped "meter-status-changed" hook (missing) 2017-02-01 20:49:43 INFO install /usr/bin/env: 'python': No such file or directory 2017-02-01 20:49:43 ERROR juju.worker.uniter.operation runhook.go:107 hook "install" failed: exit status 127 2017-02-01 20:49:43 INFO juju.worker.uniter resolver.go:100 awaiting error resolution for "install" hook 2017-02-01 20:49:48 INFO juj
[21:16] siva_guru: really hard to diagnose without seeing charm source code
[21:17] Let me paste the install hook for you
[21:17] PACKAGES = [ "python", "docker.io" ]
[21:17] @hooks.hook() def install(): apt_upgrade(fatal=True, dist=True) apt_install(PACKAGES, fatal=True) load_docker_image()
[21:18] http://paste.ubuntu.com/23907302/
[21:21] @tvansteenburgh, I find that the python package was not installed even though it is there in the list of packages
[21:23] siva_guru: can you link to the repo or something? also maybe pastebin the entire juju debug-log
[21:26] petevg: I got this with the new juju-crashdump repo and the latest matrix code:
[21:26] matrix:216:execute_process: ERROR retrieving SSH host keys for "ubuntu/1": keys not found
[21:27] petevg: Shouldn't that be resolved?
[21:27] cory_fu: I believe that's okay. It probably means that ubuntu/1 had gone away.
[21:27] cory_fu: ... or that it hadn't come up.
[21:27] petevg: Oh, wait. There never should have been a /1 if I'm reading this right
[21:28] cory_fu: I got that error, threw things into a debugger, and confirmed that the ssh trick worked, and that glitch had just added the machine.
[21:28] cory_fu: glitch probably added the /1
[21:28] petevg: http://pastebin.ubuntu.com/23907345/
[21:28] petevg: You're right
[21:28] I missed the "add_unit" at the top due to glare on my monitor. >_<
[21:30] cory_fu: cool. I think that it's worth continuing to watch, and I don't think that we should squelch those messages, but I'm 95% certain that the ssh fix is working, and that message is okay.
[21:30] petevg: Is there any way we can improve or skip the error message in the case that glitch added a unit and it's not up yet?
[21:30] Why do you think we shouldn't drop those messages (for that particular case)?
[21:30] cory_fu: if you can think of a way to squelch it that doesn't squelch actual errors, I'm all ears.
[21:30] Ah
[21:30] Yeah, I don't have any ideas. :p
[21:31] cory_fu: yeah the error is being generated by juju-crashdump, and glitch is the thing that knows about the added machine.
[21:31] petevg: I do think that glitch probably shouldn't terminate until the units it added are up and healthy, otherwise we're not actually testing that add_unit works
[21:32] But maybe it can just do that at the end, instead of blocking before the next glitch step?
[21:32] cory_fu: true. Right now, the only way to wait is our health check, though, and that only works once per test.
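[Back on tvansteenburgh's earlier point that a reactive charm can declare its packages in layer.yaml (the review-queue-charm link above is the real-world example): a minimal sketch of what that looks like via the basic layer's options; the package names here are illustrative.]

    # layer.yaml -- layer:basic installs these packages before any hooks run
    includes:
      - layer:basic
    options:
      basic:
        packages:
          - python3-pip
          - docker.io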
[21:33] cory_fu: adding a more general "wait 'til everything is good" check makes sense to me, though. I'll make an issue and a ticket.
[21:33] petevg: Thanks
[21:33] np
[21:34] @tvansteenburgh, the code is in a private repo
[21:34] I can cut n paste the entire juju log
[21:34] will that help
[21:34] will that help?
[21:36] @tvansteenburgh, after I manually install it and do a juju resolved, it goes through
[21:37] Any idea why the charm is not installing it
[21:39] siva_guru: log might help, yeah
[21:39] kwmonroe: what is the scoop on the pipeline you demoed using private interfaces/layers? e.g. my interfaces and layers are not on interfaces.juju.solutions
[21:40] siva_guru: you're sure the unit has the new charm code?
[21:45] @tvansteenburgh, here is the log
[21:45] http://paste.ubuntu.com/23907417/
[21:46] bdx: great question. currently, the jenkins job will shell out to 'charm build' (https://github.com/juju-solutions/layer-cwr/blob/master/templates/BuildMyCharm/config.xml#L60). we do not support adding flags to charm build, but we should. i think you'd need the job to do 'charm build --interface-service=http://private.repo'.
[21:47] kwmonroe: I see, how do I make myself an 'interface-service'?
[21:47] bdx: would you please open an issue requesting that charm build support private interface registries? https://github.com/juju-solutions/layer-cwr/issues
[21:47] yes
[21:51] kwmonroe: https://github.com/juju-solutions/layer-cwr/issues/49
[21:52] thx bdx! i'm looking for docs on making your own interface service, but am coming up empty. cory_fu, do you recall what 'charm build --interface-service=foo' requires for foo?
[21:53] i think it might be as simple as running a python -m SimpleHTTPServer in your $INTERFACE_PATH somewhere
[21:53] kwmonroe, bdx: https://github.com/juju-solutions/juju-interface
[21:54] ah, cool, thx cory_fu
[21:54] siva_guru: does your install hook source file have a shebang line at the top?
=== mskalka is now known as mskalka|afk
[22:06] kwmonroe, cory_fu: looking through https://github.com/juju-solutions/juju-interface, a) this is great! b) I'm not seeing how/where I might add a private registry entry, possibly that functionality doesn't exist yet ..
[22:07] bdx: I don't think there's any support for private entries at the moment. You'd have to run your own instance of that service and point to it with the --interface-service variable.
[22:08] I'm wondering if ^ will just give me a gui, similar to interfaces.juju.solutions that I can log into and add my private repos in the ui possibly?
[22:09] it looks like that is the site interfaces.juju.solutions
[22:10] bdx: Yes, that is the application that runs interfaces.juju.solutions
[22:10] I'm not sure if there's a charm for it, but there ought to be
[22:10] cory_fu: I see, so if I was to run it locally, I could just login and manually add my private repo interface entries then eh?
[22:10] Right
[22:11] ok, nicee
[22:44] cory_fu - not at this time
[22:44] cory_fu - there was a TODO i took to write one for it, but i've since been busy with the k8s work. However we've also talked about just running it in k8s as a manifest since its a cloud native app as it were
[22:44] i mean it uses mongo as its backing store, so its webscale already right? thats like CN right?
[22:45] ha
[22:45] bdx - lmk if you need any help with that.
as i'm the current maintainer of the interfaces instance
[22:59] @tvansteenburgh, yes it has the shebang line #!/usr/bin/env python
[23:04] siva_guru: well that's the problem
[23:04] siva_guru: the script is trying to use python2 to install python2
[23:05] @tvansteenburgh, should I remove it as a solution?
[23:05] this works fine in trusty though
[23:06] siva_guru: python2 is not installed on xenial by default
[23:06] siva_guru: you could try running the script with python3 instead
[23:06] siva_guru: try changing the shebang line to #!/usr/bin/env python3
[23:07] @stormmore, I will try that
[23:07] siva_guru: you might run into other problems so I would recommend updating your code to python3
[23:08] @stormmore, what is the default python version that will be used if I don't put any shebang in the code?
[23:10] siva_guru: it won't work at all
[23:10] a linux system won't know which interpreter to us
[23:10] use*
[23:10] https://en.wikipedia.org/wiki/Shebang_%28Unix%29
[23:25] lazyPower: thx, will do
[23:25] https://git.launchpad.net/layer-apt/
[23:26] http://paste.ubuntu.com/23908017/
[23:27] `charm build` is failing me
[23:27] bc ^^
[23:27] ahh its back now
[23:27] looks like git.launchpad was down for a moment
[23:29] bdx gremlins
[23:35] Hello everyone!
[23:36] @tvansteenburgh, @stormmore the old error is not coming anymore
[23:37] progress \o/
[23:37] o/ skuda
[23:38] but I find that other hooks are getting run as part of install
[23:38] should they be modified as well
[23:38] should they be modified as well?
[23:38] siva_guru - I presume you're using the layered/reactive approach to charming?
[23:38] No.. I am not using the reactive model
[23:38] I am trying to use Juju to deploy canonical kubernetes but I think I am missing something: I have four bare metal servers rented from a hosting provider, and I don't have access to ipmi (or should I install a DHCP server for that matter)
[23:39] @lazyPower, No.. I am not using the reactive model
[23:39] Should i not be able to use Juju and deploy to my dedicated servers? those servers have Ubuntu Xenial installed and everything working fine
[23:39] siva_guru - 1 SEC
[23:40] skuda - you certainly can. If you don't have a functional cloud API that juju integrates with, you can certainly use the manual provider. Its less automatic than we would like, but its certainly possible to enlist those machines manually into a model and deploy CDK to them
[23:40] I would love to be able to install to those servers using LXD for example, or directly using the Ubuntu OS installed
[23:40] however with only 4 bare metal servers, you might be better served by kubernetes-core, as it has fewer machine requirements
[23:40] @lazyPower, @tvansteenburgh, how come the @hooks.hook("contrail-control-relation-joined") is getting called as part of install
[23:40] skuda - you can do both
[23:41] ahh I am always redirected to MAAS when reading about bare metal
[23:41] skuda - yeah, we prefer maas as the substrate for reasons that allow you to treat those bare metal units as a cloud, like, a proper cloud, not a manually managed cloud.
[23:41] speaking about kubernetes, when launching conjure-up I am only offered localhost or MAAS
[23:41] skuda that is cause MAAS gives Juju the "cloud" API layer
[23:42] skuda - but MAAS does have some assumptions there, that it will manage your DNS, and IPMI, and other settings at the metal layer, because you're basically modeling the machines in maas
[23:42] How should I "manually, it doesn't matter" instruct juju or conjure-up to use my servers?
[23:42] skuda - i do believe you need to add other cloud credentials in order to see the other substrates, i may be incorrect on that though
[23:42] mmcc stokachu ^ any feedback here on my statement? am i wildly misinformed?
[23:43] I understand what MAAS brings to the table, and I see the value, I would use it if I were controlling my datacenter, but I am not :(
[23:43] skuda - so you have some options here, you can juju bootstrap a manual provider controller, and enlist each machine 1 by 1, and then deploy directly to them
[23:43] skuda I don't think conjure-up is going to be the best method for you to install with
[23:43] but as stormmore is alluding to, you're probably not going to be able to get a good experience with conjure unless you want 4 independent clusters, one per machine, all in lxd
[23:44] Ok, I can manually deploy the units, no problem
[23:44] you can use placement directives in a bundle to control how your applications are deployed, and that seems like the better bet
[23:44] skuda, i would however encourage you to try an lxd based deployment locally first to get familiar with how its put together
[23:44] skuda once you've got that initial poking done, figure out how you want the applications arranged on what machine, and then you can export a bundle and re-use it in your manual deployment
[23:44] Still have to familiarize myself a little bit more with Juju but it should not be a problem if I can create a manual provider controller somehow
[23:45] skuda - so i'm going to be traveling over the next week, but i'll make sure i pop in here to see how things are going. If all else fails, make sure you mail the juju list juju
[23:45] argh
[23:45] juju@lists.ubuntu.com, and i'll monitor it like a hawk to help you through the manual deployment or any questions you have about the lxd initial poking
[23:46] but thats my suggested route, is to deploy on lxd first and get a feel for it
[23:46] then go for the manual step, as manual denotes, if something gets botched, you're likely to have to wipe the model, then reinstall each machine base OS + re-enlist in a new model
[23:46] and thats time consuming
[23:46] and i want to be respectful of your time/effort
[23:46] I will lazyPower, tomorrow I will deploy in local LXD
[23:46] awesome \o/
[23:46] and you'll get the conjure experience there
[23:46] i'll see if i can talk to adam about the conjure bits while we are in ghent, maybe there's a better story there
[23:47] as more people are showing up with BM clouds, this is going to be a growing concern
[23:47] it seems pretty awesome, juju and conjure
[23:47] we can probably get this somewhere on the roadmap at some point and try to come up with something better than "fork the bundle and make edits"
[23:47] I wanted to try OpenStack too because we are choosing the best tool for our project
[23:47] yeah man, you can openstack on lxd too if you have the horsepower
[23:47] great way to poke at it and see if you like it
[23:48] very cheap to experiment
[23:48] and those are two really complex beasts that seem to be muuuuuuuuch easier in Juju, it's awesome, I hope everything works fine in my tests!
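[A minimal sketch of the manual-provider route lazyPower describes above, for machines that already have Ubuntu installed: bootstrap on one box over SSH, enlist the rest, then drive placement from a bundle. Hostnames and the bundle fragment are placeholders; check `juju help bootstrap` and `juju help add-machine` for the exact syntax on your release.]

    # bootstrap a controller on one of the machines, then enlist the others
    # into the model over SSH
    juju bootstrap manual/ubuntu@server1.example.com mycontroller
    juju add-machine ssh:ubuntu@server2.example.com
    juju add-machine ssh:ubuntu@server3.example.com
    juju add-machine ssh:ubuntu@server4.example.com
    juju status    # machines 0-3 are now targets for deploy/placement

    # a bundle can then pin applications to those machines with placement
    # directives (illustrative fragment of a bundle.yaml; charm name is an example)
    services:
      etcd:
        charm: cs:~containers/etcd
        num_units: 1
        to: ["0"]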
[23:48] skuda - if not, i want your feedback
[23:48] positive/negative/indifferent, it all helps
[23:48] I will send any roadblock I find to the mailing list, sure
[23:48] :D
[23:49] fan-tastic
[23:49] glad i ran into you then :D
[23:49] siva_guru - ok sorry about that, i'm very passionate about k8s
[23:49] I was going to suggest that maybe an OpenStack cluster would be a good solution to put down first and then install CDK in VMs on it
[23:49] siva_guru - so, its a classic charm, with a single hook file i presume symlinked?
[23:49] and your *-relation-joined hook is executing during the install phase?
[23:50] @lazyPower, yes it is a single hook file symlinked
[23:50] In reality what I would love to have is a solution with a good UI able to manage LXD containers with live migration and ZFS deduplication, but it seems really difficult to find
[23:50] siva_guru - i would presume one of two things has happened
[23:50] 1) there's some dirty state on the unit (least likely culprit)
[23:50] So I am right now testing different options to be as close as possible to what we want
[23:50] @lazyPower, yes it is a single hook file symlinked and yes the relation-joined hook is getting called during the install phase
[23:50] 2) there's a code error somewhere in the code thats falling through and executing that hook stanza
[23:51] I saw K8 in lxd.. Just wanted to share my recent experience on this. I followed https://www.stgraber.org/2017/01/13/kubernetes-inside-lxd/ and there are only two things I need to change for a successful deploy.
[23:51] siva_guru - like perhaps the method itself is being invoked directly
[23:51] skuda LXDs seem good from a systems standpoint but Docker containers / Kubernetes are more dev friendly
[23:51] from within the install() block
[23:51] one is to add a "local:" prefix to the lxd container name, which is 'kubernetes' in stgraber's example.
[23:52] man stgraber is a beast. just sayin
[23:52] that guy is like the local container legend 'round these parts
[23:52] stormmore: true, but LXD offers live migration and docker not, not yet at least
[23:52] The other is to limit the zfs pool size so that you don't run out of disk space on the system. I used my 3-year old laptop, but if it's a rental from a data center, you probably don't have to worry about this.
[23:52] skuda <3 you get it
[23:52] @lazypower, the same charm code works fine in trusty... I am seeing this issue in xenial
[23:52] siva_guru - doesn't seem like series would cause the weirdness though.
[23:53] siva_guru - i guess py2 vs py3? but i would have expected to see things like type errors and syntax errors, not random hook execution
[23:53] skuda: not sure they will offer live migrations in Docker, seems their view is you should be running multiple instances of your service and make the service handle the loss of an instance
[23:53] @lazypower, yes I moved from py2 to py3
[23:54] well and the state, sometimes I don't want to use slower cluster filesystems just to be able to put everything on top of docker
[23:54] siva_guru - thats what i'm saying, i'm thinking out loud with you here.
as we dont have hook code to look at, its very hard to debug
[23:54] siva_guru - so the best i can do is offer thoughts while you debug
[23:54] I mean Docker is awesome and everything, we use it for many stateless apps and the orchestration of those services is awesome, but it's not the solution for everything I think
[23:55] @lazypower, is this a bug or is this something I need to fix in my charm code to make it work with py3
[23:55] stormmore - i think there's room for both in your DC/workflow. LXD is amazing at handling just about every class of workload, docker is engineered and sold as a very specific class of workload.
[23:55] skuda that should be handled by replication between the instances
[23:55] siva_guru - well without seeing the code, i can only guess
[23:55] siva_guru and i'm going to guess its in the hook code
[23:55] lazyPower oh I definitely agree :)
[23:56] @lazyPower, do you need my code or are you talking about the juju code?
[23:56] In the project we are creating right now, minecraft servers, we are speaking about big IOPS needs and a lot of state, but only 1 instance needed per server
[23:56] siva_guru - i mean your code, the charm code you are working on thats exhibiting the bad behavior
[23:56] skuda ooooh man
[23:56] skuda would you like an ARK server workload for k8s to test with?
[23:56] So I can't make good use of all these amazing replication sets
[23:56] i just wrote the manifest for that a couple weeks ago for my homelab and my friends and I have been beating on it quite furiously. we are quite enamored with how well its performing
[23:57] sure
[23:58] skuda - you sound like you could get away with just charming up the workload, and juju deploying it directly into lxd
[23:58] yes, I think so
[23:58] some minor network config i believe will need to happen on the host to forward things correctly, but thats minor, and we can totally get you up and running on just lxd and juju in short order
[23:58] and that networking bit should be sorted in one of the forthcoming juju releases, we have even more networking goodness in the oven afaik
[23:58] right now I have two options, with different tradeoffs for this project
[23:58] dont hold me to that, but i'm like 80% certain that is the case
[23:59] use OpenStack and manage LXD as virtual machines, managing live migration, using local SSD, good
[23:59] use k8s, being able to do crash recovery on a cluster filesystem like scaleIO
[23:59] no live migration for k8s
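[To close out siva_guru's thread above: a minimal sketch of a classic (non-reactive) python charm hook file that works on xenial, with a python3 shebang and symlink-based dispatch. The relation name and package list are taken from the log; the charmhelpers calls assume that library is vendored in the charm as usual. The dispatch block at the bottom is also the usual suspect when a *-relation-joined function runs during install: if the hook functions are called directly instead of through hooks.execute(), they run on every hook invocation.]

    #!/usr/bin/env python3
    # hooks/hooks.py -- one file, symlinked as install, config-changed,
    # contrail-control-relation-joined, upgrade-charm, ...
    import sys

    from charmhelpers.core.hookenv import Hooks, log
    from charmhelpers.fetch import apt_install, apt_upgrade

    hooks = Hooks()
    PACKAGES = ["docker.io"]  # no python2 needed once the shebang is python3

    @hooks.hook("install")
    def install():
        apt_upgrade(fatal=True, dist=True)
        apt_install(PACKAGES, fatal=True)

    @hooks.hook("contrail-control-relation-joined")
    def contrail_control_relation_joined():
        log("contrail-control relation joined")

    if __name__ == "__main__":
        # dispatch on the symlink name this invocation came in as; calling
        # install() or the relation handler directly here would run them
        # regardless of which hook actually fired
        hooks.execute(sys.argv)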