magicaltrout | lazyPower: you wrapped ansible playbooks in charms didn't you in the demo in Ghent last year? | 00:47 |
---|---|---|
lazyPower | magicaltrout - it was kind of hacky, and there was no follow up interest | 00:48 |
lazyPower | but i did yeah | 00:48 |
magicaltrout | cool | 00:48 |
magicaltrout | cause i have this 0.5PB server that I mentioned the other week | 00:48 |
magicaltrout | can't install anything on it but does have python | 00:48 |
magicaltrout | so i'm proposing I create some playbooks (never used ansible before, should be a treat) to deploy our code, then wrap the playbooks in charms to deploy the same stuff outside of the mega server | 00:49 |
lazyPower | interesting | 00:54 |
lazyPower | magicaltrout - https://github.com/chuckbutler/ansible-base early work was left here | 00:55 |
lazyPower | literally nothing more than POC work | 00:55 |
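For anyone following along, the pattern under discussion is roughly a classic charm whose hook shells out to `ansible-playbook` against localhost. A minimal sketch, assuming Ansible is already installed on the unit; the hook name and playbook path are illustrative, not what ansible-base actually does:

```python
#!/usr/bin/env python3
# hooks/install -- hypothetical classic-charm hook wrapping an Ansible
# playbook shipped inside the charm (paths and names are illustrative).
import os
import subprocess

def install():
    charm_dir = os.environ['CHARM_DIR']  # set by juju for every hook run
    playbook = os.path.join(charm_dir, 'playbooks', 'site.yaml')
    # Run against localhost with a local connection so no SSH is needed.
    subprocess.check_call(
        ['ansible-playbook', '-i', 'localhost,', '-c', 'local', playbook])

if __name__ == '__main__':
    install()
```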
magicaltrout | thanks lazyPower thats very handy | 00:56 |
magicaltrout | I work for NASA, everything we do is POC ;) | 00:56 |
lazyPower | :) | 00:56 |
lazyPower | there's probably something heinous in there | 00:56 |
lazyPower | feel free to flame me later | 00:57 |
magicaltrout | i'll just pour beer on you instead.... by "accident" | 00:57 |
lazyPower | ooo please dont, i'll be lean on clean clothing | 00:57 |
lazyPower | headed to PHX on Thursday before i head to Ghent, so its literally like, last day of the conference i might be recycling socks | 00:58 |
magicaltrout | hehe | 00:58 |
magicaltrout | i'll bring pegs to block the smell | 00:58 |
bdx | is it possible to transfer model ownership? | 01:05 |
lazyPower | bdx i'm not certain, i would think its possible via juju grant/revoke && juju share | 01:06 |
bdx | lazyPower: `juju share` - ha dreaming | 01:08 |
lazyPower | ok so that command changed on me | 01:09 |
lazyPower | i make no apologies for progress happening :P | 01:09 |
lazyPower | i'm gonna dip out and go get an extremely late dinner | 01:10 |
lazyPower | bbiaf | 01:10 |
rick_h | bdx: so if you make another account the model admin then it's just the same | 01:16 |
rick_h | bdx: should be able to remove yourself and the other person now has the admin rights | 01:16 |
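In Juju 2.x terms that flow looks roughly like the following; user and model names are illustrative, and the exact syntax is worth checking against `juju help grant`:

```
juju grant other-user admin mymodel   # make the other account a model admin
juju revoke my-user admin mymodel     # then drop your own admin access
```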
bdx | rick_h: ahh niceeeee! thanks! | 01:17 |
lazyPower | rick_h - i was totally there, but dropped the ball that juju-share is no longer a thing (derp) | 01:35 |
lazyPower | byproduct of working late, sorry bout that one | 01:35 |
=== frankban|afk is now known as frankban | ||
kjackal | Good morning Juju world! | 08:18 |
surf | anrah are you there | 09:43 |
aisrael | Good morning! | 10:03 |
Zic | lazyPower: (don't freak out, I don't have new problems :D) -> I saw that all our VMs (which host the k8s control plane: master, etcd, kube-apilb, easyrsa) are throwing I/O errors under intensive use (like many cluster operations at once, e.g. the deletion of large namespaces) | 10:04 |
Zic | lazyPower: the problem is clearly identified on our side, it's the storage of the ESXi which slows down | 10:04 |
Zic | lazyPower: so, just to let you know that's not the fault of CDK :] | 10:06 |
Zic | I will continue my environment tests on this cluster, but I think I will re-bootstrap from scratch before going into production | 10:06 |
Zic | (once the I/O error problem with my storage is fixed) | 10:06 |
marcoceppi | Zic: that's good insight, we're trying to build out a wiki of caveats on different providers, an ESXi gotchas page would be good to have | 10:18 |
Zic | all our virtualization is based on Proxmox clusters or VMware ESXi/vCenter clusters in my office | 10:25 |
Zic | I tend to prefer Proxmox as it's open source, but for this customer which will use CDK, it's ESXi :/ | 10:26 |
Zic | but as I said, I think the problem is located in the ESXi storage (which is not local, it's an iSCSI-attached disk array) | 10:27 |
Zic | maybe on the network side, or maybe in the disk array itself | 10:27 |
marcoceppi | Zic: seems plausible | 10:45 |
jcastro | Zic: there's a sig-onprem, which meets today actually, that is collecting on-prem tips, tricks, issues, bugs, etc. that we're a part of | 13:58 |
jcastro | so if you want to pass any feedback along upstream ... | 13:58 |
=== mskalka|afk is now known as mskalka | ||
lazyPower | Zic - ah, interesting. I just had a volume crash in my NAS that was backing my home lab. i'm in the process of copying the data to another volume now and prepping for a re-deployment of my remote storage. Seems like i've found a similar situation with iSCSI-backed volumes. | 14:05 |
lazyPower | weird how it struck us both at the same time. | 14:05 |
lazyPower | granted you said I/O issues not a crash, but i digress | 14:05 |
rick_h | lazyPower: you know anyone I can ping on this? https://github.com/juju/charm-tools/issues/287 | 14:07 |
lazyPower | rick_h - marcoceppi is the grand poobah of that charm | 14:08 |
rick_h | lazyPower: ok, wasn't sure if he'd delegated these days ty | 14:08 |
lazyPower | s/charm/snap/ | 14:08 |
lazyPower | i need coffee | 14:08 |
rick_h | marcoceppi: is there any way around https://github.com/juju/charm-tools/issues/287 ? I'm trying to create my first layered charm wheeee | 14:08 |
rick_h | lazyPower: does charm create work for you? If you could, could you create a shell and shoot me a tarball as a personal favor pretty please? | 14:09 |
lazyPower | rick_h https://www.dropbox.com/s/ey8bi262mqcys12/for_rick.tar?dl=0 | 14:10 |
rick_h | my hero! | 14:10 |
* lazyPower flexes | 14:11 | |
magicaltrout | don't let rick_h write charms! ..... | 14:12 |
rick_h | but but but .... I promise to only do somewhat good things | 14:12 |
lazyPower | why not? he worked on the gui charm back in the day | 14:13 |
magicaltrout | hehe | 14:13 |
lazyPower | OOOhhhh i should have looked at who was trolling before i fed the argument ;) | 14:13 |
magicaltrout | morning :P | 14:13 |
lazyPower | \o magicaltrout | 14:13 |
magicaltrout | spent the morning writing some cfgmgmtcamp slides | 14:14 |
magicaltrout | figured i'll come and annoy you all now | 14:14 |
lazyPower | Thats on my todo list today as well | 14:14 |
magicaltrout | before the californians wake up and annoy me | 14:14 |
Zic | jcastro: is this kind of meeting planned regularly? | 14:15 |
jcastro | yep | 14:15 |
Zic | jcastro: I can prepare something for the next one, because for today I don't have enough time :/ | 14:15 |
magicaltrout | started work on yet another charm yesterday | 14:15 |
magicaltrout | openldap | 14:15 |
jcastro | there's a sig-cluster-ops as well, always looking for feedback | 14:15 |
jcastro | Zic: I was just making you aware it exists | 14:16 |
Zic | jcastro: what is the date of the next one? | 14:16 |
jcastro | https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs) | 14:16 |
lazyPower | magicaltrout - so i see you're embracing the big tree of hate known as ldap | 14:16 |
magicaltrout | lazyPower: well, er | 14:16 |
magicaltrout | yeah | 14:16 |
jcastro | 15 Feb is the next one | 14:16 |
lazyPower | :D | 14:16 |
magicaltrout | I don't see why not, i use ldap servers all the time and they're a ballache | 14:16 |
lazyPower | the last time i interfaced with ldap with any seriousness i recall being frustrated | 14:16 |
lazyPower | yeah | 14:16 |
magicaltrout | why not let it be easier with relations? | 14:16 |
lazyPower | +1 to that sentiment | 14:17 |
jcastro | basically you put what you want to talk about on the google document, they have mailing lists for each sig. | 14:17 |
lazyPower | i think we talked about this in pasadena | 14:17 |
magicaltrout | yeah | 14:17 |
magicaltrout | I have a blank implementation for interfacing with AD and stuff kicking around as well | 14:17 |
magicaltrout | I just need to tidy it up and ship it | 14:17 |
lazyPower | interesting, did you use the cloudbase AD as proof point? | 14:17 |
magicaltrout | nope, i used NASA's AD as a proof point ;) | 14:18 |
lazyPower | well i mean, we have an AD charm that CBS wrote | 14:18 |
lazyPower | last time i deployed it was back in 2015 however. | 14:18 |
lazyPower | i presume your AD "adapter" was a lightweight forwarder then? a proxy-charm as it were. | 14:19 |
magicaltrout | yeah i know but i'm not overly bothered by its guts | 14:19 |
magicaltrout | there was no public interface | 14:19 |
magicaltrout | so i just created one with the configuration stuff i needed | 14:19 |
* lazyPower nods | 14:19 | |
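For context, the provides side of an interface layer in that era's charms.reactive style looks roughly like the sketch below; the interface name 'ldap' and the field names are hypothetical, not magicaltrout's actual implementation:

```python
# provides.py -- hypothetical provides side of an 'ldap' interface layer
from charms.reactive import RelationBase, hook, scopes


class LdapProvides(RelationBase):
    scope = scopes.GLOBAL

    @hook('{provides:ldap}-relation-{joined,changed}')
    def joined(self):
        # flag the relation so charm layers can react to it
        self.set_state('{relation_name}.connected')

    def send_connection(self, host, port, base_dn):
        # publish directory connection details to the related charms
        self.set_remote(host=host, port=port, base_dn=base_dn)
```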
marcoceppi | rick_h: snap install --classic --candidate | 14:19 |
lazyPower | shame it wasn't more useful in its current form. but eyyyyyyy | 14:19 |
rick_h | marcoceppi: did | 14:19 |
rick_h | marcoceppi: or did you just update in the last 15min? | 14:19 |
marcoceppi | I haven't updated it, but it should be working | 14:20 |
marcoceppi | rick_h: are you still blocked? | 14:20 |
rick_h | marcoceppi: I got a tarball of the create from lazypower and I'm tweaking it and will see if I can run charm build or if I'll hit the same dep issue | 14:20 |
rick_h | marcoceppi: k, verified I can run charm build just not create | 14:21 |
marcoceppi | rick_h: odd, I'll take a look | 14:21 |
marcoceppi | rick_h: whoops, I see that now. I'll get a fixin | 14:22 |
rick_h | marcoceppi: <3 ty | 14:22 |
Zic | jcastro: I will try to come to the next one then :) I didn't see, is it on Slack, IRC? | 14:24 |
jcastro | it's on slack too | 14:24 |
Zic | which one? (the link you gave me redirects to the wiki homepage, maybe because I'm currently signed out of GitHub) | 14:24 |
lazyPower | rick_h - any movement on compiling juju in a newer version of go in brew? | 14:29 |
lazyPower | rick_h - these stack traces are a drag :( | 14:29 |
rick_h | lazyPower: I have no idea tbh. | 14:29 |
rick_h | lazyPower: would have to check with balloons or sinzui on the plans there I think | 14:29 |
Zic | hmm, I really need to take time to learn Go someday if I want to contribute (and also because it's a more and more widely used language) | 14:30 |
Zic | my old Bash/C/Python skills are not up-to-date with 2017 I guess :p | 14:31 |
magicaltrout | if i update an interface and push it to github | 14:33 |
magicaltrout | do i need to do anything next time i build a charm relying on it? | 14:33 |
* magicaltrout has messed up somewhere | 14:33 | |
marcoceppi | magicaltrout: nope, just build away | 14:34 |
magicaltrout | charmtools.build.tactics: Missing implementation for interface role: requires.py | 14:36 |
magicaltrout | what have i messed up then? | 14:36 |
magicaltrout | i renamed the underlying bits to fit the general interface naming | 14:37 |
magicaltrout | and now my charm doesn't build :) | 14:37 |
mskalka | sounds like one of the layers you're using isn't in the right spot | 14:38 |
magicaltrout | well | 14:39 |
magicaltrout | http://interfaces.juju.solutions/interface/solr/ interface is there | 14:39 |
mskalka | sorry I mean locally | 14:39 |
magicaltrout | and its referenced in layers.yaml and metadata.yaml | 14:39 |
magicaltrout | but the build directory that charm build lists is empty when it falls over | 14:39 |
mskalka | hmm | 14:40 |
magicaltrout | this is my first attempt at a public interface so I've clearly messed up somewhere | 14:40 |
magicaltrout | normally my interfaces just live in $INTERFACES | 14:40 |
mskalka | you're sure charm build pulled the interface down into $JUJU_REPOSITORY/interfaces? | 14:44 |
magicaltrout | no it didn't but I don't believe charm build does that now (any more?). They just appear in hooks/relations/ in your build dir from somewhere | 14:45 |
magicaltrout | if charm build pulled them all down I'd have loads in JUJU_REPO/interfaces | 14:45 |
mskalka | I'm not 100% on the build behavior but I ran into the same issue yesterday building a local charm and the fix was dropping a local copy of whatever missing interface I had into that INTERFACE_PATH dir | 14:48 |
lazyPower | it uses a temporary directory in your build path 'deps' | 14:53 |
lazyPower | you'll find what it pulled there | 14:53 |
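For reference, the layout charm build expects for a fetched (or local) interface is roughly:

```
solr/                # one directory per interface, named after it
├── interface.yaml   # name, summary, maintainer
├── provides.py      # implementation of the provides role
└── requires.py      # implementation of the requires role
```

The "Missing implementation for interface role: requires.py" error above is what charm build emits when it can't find that file, which is consistent with the empty shadowing directory discovered further down.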
marcoceppi | magicaltrout: are you using 2.2.0 charm-tools? | 14:55 |
magicaltrout | 2.1.9 | 14:55 |
marcoceppi | magicaltrout: are you on ubuntu? | 14:55 |
magicaltrout | of course | 14:56 |
marcoceppi | magicaltrout: sudo apt purge charm charm-tools; sudo apt update; sudo apt install snapd; sudo snap install charm --candidate --classic | 14:56 |
marcoceppi | magicaltrout: the snap has 2.2.0 in it, which is much better at telling you whats happening during the build process | 14:56 |
magicaltrout | excellent | 14:57 |
magicaltrout | vague logging is my own forte | 14:58 |
magicaltrout | trolol same error | 15:00 |
magicaltrout | marcoceppi: does the build stage somewhere? | 15:00 |
magicaltrout | cause the target dir is empty | 15:01 |
marcoceppi | magicaltrout: can you run build with --debug and post the output? | 15:01 |
magicaltrout | http://pastebin.com/RSFRvTHR | 15:02 |
magicaltrout | nothing special | 15:02 |
magicaltrout | i actually put my interface back in $INTERFACE_PATH and I still get the error | 15:03 |
magicaltrout | so i've no idea what i broke in renaming it | 15:04 |
magicaltrout | although i do have a recollection of me naming my interface solr-interface initially because of this problem | 15:04 |
admcleod_ | magicaltrout: hi! what do your layers and metadata yamls look like? | 15:10 |
magicaltrout | hello admcleod_ pretty standard | 15:12 |
magicaltrout | https://github.com/USCDataScience/sparkler/blob/master/sparkler-deployment/juju/sparkler/metadata.yaml | 15:12 |
magicaltrout | except solr-interface is now just solr | 15:12 |
magicaltrout | the errors are weird though from charm tools, like its building a half cached version | 15:13 |
admcleod_ | magicaltrout: hmm. i think you broke it. | 15:16 |
admcleod_ | magicaltrout: did you check what lazyPower said? deps? | 15:17 |
Zic | lazyPower: can I leave kubernetes-e2e in a production cluster, or is it not recommended? | 15:17 |
magicaltrout | yeah but then i just reverted to trying a local version | 15:17 |
magicaltrout | and thats screwed as well | 15:17 |
lazyPower | Zic - you can certainly run it against a prod cluster. its a great validation mechanism and it cleans up after itself | 15:17 |
Zic | "and it cleans up after itself" was what I wish to know, thanks :) | 15:18 |
Zic | in a near-default CDK, should I get any error? | 15:18 |
Zic | (I didn't try yet) | 15:18 |
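A minimal e2e run against a CDK cluster looked roughly like this at the time; double-check the relation endpoints and action name against the charm's README:

```
juju deploy kubernetes-e2e
juju add-relation kubernetes-e2e kubernetes-master
juju add-relation kubernetes-e2e easyrsa
juju run-action kubernetes-e2e/0 test
```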
magicaltrout | yeah admcleod_ I went back to calling a local version solr-interface and it returns to building fine | 15:19 |
magicaltrout | but if i call it just `solr` it freaks out | 15:19 |
admcleod_ | build: Processing interface: solr | 15:22 |
admcleod_ | ... | 15:22 |
admcleod_ | worked | 15:22 |
magicaltrout | ah | 15:22 |
magicaltrout | found it | 15:22 |
magicaltrout | wtf | 15:22 |
magicaltrout | i think this goes down as a weird charm build bug | 15:23 |
magicaltrout | plus my wonky setup | 15:23 |
magicaltrout | I have ~/Projects/charms | 15:23 |
magicaltrout | and ~/Projects/charms/interfaces | 15:23 |
magicaltrout | in charms I had an empty directory called `solr` | 15:23 |
lazyPower | RIP | 15:24 |
lazyPower | magicaltrout - if its in the interface archive, try building with --no-local-layers | 15:24 |
magicaltrout | maybe its not a bug, maybe it searches various places for interfaces | 15:24 |
lazyPower | it does, and it will use local paths if it finds them | 15:24 |
lazyPower | the --no-local-layers ensures you're always fetching from the api | 15:24 |
magicaltrout | I thought it just looked in $INTERFACE_PATH? | 15:24 |
magicaltrout | oh well | 15:26 |
magicaltrout | weirdness averted | 15:26 |
* magicaltrout must remember not to put stuff in $JUJU_REPOSITORY that might share the name with a layer | 15:27 | |
magicaltrout | s/layer/interface | 15:27 |
admcleod_ | oh you | 15:27 |
magicaltrout | i think its a fair assumption it would look for interfaces on the interface_path! :P | 15:27 |
=== admcleod_ is now known as admcleod | ||
admcleod | well, you know what they say | 15:27 |
magicaltrout | the balder you are the more shiny your scalp? | 15:28 |
admcleod | actually that depends on buffing | 15:28 |
admcleod | but no i meant the other thing | 15:29 |
magicaltrout | don't assume things they're generally wrong? | 15:30 |
admcleod | thatll do :} | 15:30 |
cory_fu | bdx: Mind weighing in on https://github.com/juju-solutions/layer-basic/pull/86 ? | 15:37 |
magicaltrout | cory_fu: just for reference, as you guys use GH you could set up a CLA exactly like we do with the ASF: https://cla.github.com/ | 15:52 |
magicaltrout | so that people contributing get some terms about copyright ownership and canonical's rights etc | 15:52 |
magicaltrout | so that if you do a license pivot, whilst it's nice to have asked, you can do what you like ;) | 15:52 |
magicaltrout | not that I think bdx will care especially, but ya know... | 15:53 |
lazyPower | this statement is true, you never know about bdx ;) ;) | 15:54 |
magicaltrout | depends what drugs he's under the influence of at that given point in time! ;) | 15:55 |
lazyPower | :O | 15:55 |
icey | has anybody tried mixing bash + python in a reactive, layered charm? | 16:37 |
mskalka | marcoceppi: I can see why you ran into issues even with manual replset initiation. Mongo is SUPER picky about its input | 18:28 |
marcoceppi | mskalka: it totally is. | 18:28 |
mskalka | marcoceppi: I spent 20 minutes trying to figure out why my obvious string '10.X.X.X:Y' was being interpreted as an int. Needed double quotes. | 18:30 |
* mskalka bangs head on desk | 18:30 | |
=== frankban is now known as frankban|afk | ||
bdx | magicaltrout: put some heat on this for me and I'll let that one go https://bugs.launchpad.net/juju/+bug/1660675 | 18:49 |
mup | Bug #1660675: Feature Request: instance tagging via Juju <juju:Triaged> <https://launchpad.net/bugs/1660675> | 18:49 |
rick_h | 10min warning to Juju Show Ep #5 | 18:51 |
rick_h | wooooooooo | 18:51 |
rick_h | Juju Show watch link: https://www.youtube.com/watch?v=NySW5VjBDC8 | 18:51 |
rick_h | Juju Show "sit on the panel" link: https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=?eid=103184405956510785630&hl=en_US&authuser=0 | 18:51 |
rick_h | arosales: marcoceppi jcastro lazyPower bdx magicaltrout mbruzek ^ | 18:54 |
marcoceppi | rick_h: have fun, I'll watch later | 18:54 |
mbruzek | thanks Rick | 18:54 |
rick_h | kwmonroe: ^ | 18:55 |
kwmonroe | thx rick_h, petevg ^^ | 18:55 |
petevg | thx, kwmonroe. Heading over ... | 18:58 |
mbruzek | rick_h: I got a 403 with that url | 18:59 |
rick_h | mbruzek: try a different authuser at the end? | 19:00 |
rick_h | mbruzek: or take that off? | 19:00 |
arosales | still able to join? | 19:00 |
mbruzek | will do | 19:00 |
arosales | https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=?eid=103184405956510785630&hl=en_US&authuser=0 | 19:00 |
arosales | 404 for me | 19:00 |
* rick_h tries w/o the authuser | 19:00 | |
rick_h | https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=?eid=103184405956510785630&hl=en_US | 19:00 |
rick_h | we've got 3 other folks in atm | 19:00 |
arosales | rick_h: are you able to invite? | 19:01 |
mbruzek | arosales: delete everything after the equal sign | 19:01 |
rick_h | arosales: invited | 19:01 |
arosales | 404 all around | 19:02 |
kwmonroe | arosales: if you took authuser=0 out, try putting it back with =1, https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=?eid=103184405956510785630&hl=en_US&authuser=1 | 19:03 |
arosales | no luck there either | 19:03 |
mbruzek | Try this: https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E= | 19:03 |
mbruzek | arosales ^ | 19:03 |
arosales | that worked, thanks mbruzek | 19:04 |
kwmonroe | nice! | 19:04 |
mbruzek | arosales: owes mbruzek a brewski | 19:04 |
arosales | mbruzek: you got it | 19:06 |
kwmonroe | ~charmers group info, for those interested: https://jujucharms.com/community/charmers | 19:15 |
mbruzek | kvm! | 19:17 |
mbruzek | awesome | 19:17 |
stormmore | howdy juju world! :) | 19:19 |
arosales | stormmore: hello | 19:20 |
stormmore | lazyPower - just your daily friendly reminder ;-) | 19:21 |
bdx | rick_h: are you guys watching deltas too, on the hosted controller? statistics for usage and events per user, per model? | 19:24 |
bdx | not sure if you planned to touch on the hosted controller ... | 19:24 |
Merlijn_S | to create a charm that drives other charms | 19:30 |
Merlijn_S | the stacks idea | 19:31 |
arosales | hello Merlijn_S | 19:31 |
arosales | indeed, we have seen that use case come up time and time again | 19:31 |
stormmore | isn't that what bundles do? | 19:32 |
arosales | stormmore: a bundle is static, or just a yaml description for a given solution | 19:33 |
arosales | stormmore: what folks have been talking about is if you wanted to have an auto-scaler charm it would need extra privileges to add-unit on another charm | 19:34 |
arosales | today a charm can't take juju admin tasks on another charm | 19:34 |
arosales | so that is the thought here stormmore | 19:34 |
stormmore | arosales: that would indeed be cool. I want to eventually be able to spin up and spin down nodes based on load in my bare metal environment using MaaS | 19:35 |
Merlijn_S | PS: for show and tell; I'm working on a Charm for the Eclipse Che cloud editor + Charming integration. Very rough early work, but if anyone is interested: https://jujucharms.com/u/tengu-team/eclipse-che/ | 19:35 |
Merlijn_S | Basically an IDE running in your browser that connects to a charmbox with all the juju tools preinstalled | 19:36 |
arosales | stormmore: indeed and some folks have been thinking about that with libjuju and juju-2.0 so stay tuned to the list for that work | 19:36 |
stormmore | arosales: awesome, another "getting ahead of myself" situation :) | 19:36 |
arosales | good, you're thinking in that direction | 19:37 |
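A rough sketch of what such an auto-scaler could look like with python-libjuju, the library mentioned above; the application name is illustrative and the actual scaling policy is omitted:

```python
# Hypothetical auto-scaler driving another application via libjuju --
# the admin-level operation a charm can't perform today, per the
# discussion above.
import asyncio
from juju.model import Model


async def scale_up(app_name):
    model = Model()
    await model.connect()  # connects to the current controller/model
    try:
        app = model.applications[app_name]
        await app.add_unit(count=1)  # the privileged add-unit in question
    finally:
        await model.disconnect()


asyncio.run(scale_up('my-workload'))
```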
rick_h | Merlijn_S: oooh shiny | 19:37 |
arosales | Merlijn_S: very interesting, taking a look now | 19:37 |
stormmore | yes and it is awesome that I am not the only one. reason #143 of why I choose MaaS and Juju for this environment ;) | 19:38 |
arosales | stormmore: :-) | 19:39 |
arosales | Merlijn_S: perhaps we should show this in the next juju show if you are up for it | 19:40 |
arosales | Merlijn_S: rick_h was thinking of doing a couple of juju shows at the summit next week. At a min to recap | 19:40 |
Merlijn_S | arosales: I'll probably do a lightning talk about it | 19:40 |
arosales | Merlijn_S: +1 | 19:41 |
arosales | Merlijn_S: we will also be recording talks | 19:41 |
Merlijn_S | arosalesL +1 :) | 19:41 |
jcastro | ok so we don't have slots for lightning talks | 19:43 |
jcastro | so I think we should perhaps start consolidating talks | 19:43 |
jcastro | or asking people if they need the full 40 minutes | 19:43 |
arosales | jcastro: ya we should look to make some room | 19:43 |
arosales | jcastro: or shorten talks, +1 | 19:43 |
jcastro | we could also ask matt/chuck to bin the kubernetes talks and propose those as lightning talks in the kubes track? | 19:44 |
arosales | jcastro: I think we could shorten the talks each by 5-10 min each day to at least leave 30 min at the end of the day | 19:44 |
stormmore | talks? where do these happen? | 19:45 |
arosales | thats 6 lightning talks across the 2 days, each at 10 min | 19:45 |
arosales | stormmore: summit.juju.solutions | 19:45 |
arosales | Gent, Belgium next week | 19:45 |
jcastro | yeah the problem is we can't really change the timeslots, we inherit those from cfgmgmtcamp | 19:46 |
jcastro | so like, snacks and drinks and breaks are all on that schedule | 19:46 |
arosales | ah | 19:46 |
jcastro | I mean, we could fit 2 in one | 19:47 |
jcastro | but customizing the schedule is out | 19:47 |
jcastro | the slots I mean | 19:47 |
arosales | gotcha | 19:49 |
arosales | jcastro: so then our only option is to consolidate | 19:49 |
arosales | jcastro: what time do we end on Tuesday? | 19:50 |
jcastro | http://cfgmgmtcamp.eu/schedule/index.html#juju | 19:51 |
arosales | lolz | 19:51 |
arosales | yes I am looking at that | 19:51 |
jcastro | oh, well talks are 40 minutes | 19:52 |
jcastro | so 16:20, bus at 16:30~17:00 | 19:52 |
arosales | on monday james' talk is at 17:00 | 19:53 |
arosales | but on tuesday last talk is 15:40 | 19:53 |
stormmore | you may want to update the channel topic, summit.jujucharms.com is failing dns right now | 19:54 |
arosales | kwmonroe: perhaps post call mbruzek would like to learn more about resources and cwr-ci | 19:55 |
jcastro | marcoceppi: ^^ | 19:55 |
jcastro | I think it's safe to just link to the direct schedule good call | 19:55 |
=== jcastro changed the topic of #juju to: Join us at the Charmer Summit: 6-7 Feb - http://cfgmgmtcamp.eu/schedule/index.html#juju || https://review.jujucharms.com/ || https://jujucharms.com/docs/ | ||
=== jcastro changed the topic of #juju to: Join us at the Charmer Summit: 6-7 Feb - http://cfgmgmtcamp.eu/schedule/index.html#juju || https://review.jujucharms.com/ || https://jujucharms.com/docs/ || http://goo.gl/MsNu4I || Youtube: https://www.youtube.com/c/jujucharms | ||
kwmonroe | http://summit.juju.solutions/ works for me | 19:57 |
jcastro | sorry for the spam | 19:57 |
jcastro | it's been glitchy all month | 19:57 |
rick_h | ty arosales mbruzek kwmonroe bdx and pete not tim! | 19:57 |
kwmonroe | :) | 19:58 |
arosales | thanks for hosting rick_h ! | 19:58 |
rick_h | if anyone has anything else for the notes please fill it in like kwmonroe is doing and I'll copy/pretty up for the youtube desc | 19:59 |
jcastro | arosales: ok so who are we combining? | 19:59 |
jcastro | we should do this now because I have to start packing soon, I have a pre-Gent trip to cram in before summit-ing. | 19:59 |
arosales | jcastro: hangout? | 19:59 |
jcastro | omw | 20:00 |
=== mskalka is now known as mskalka|afk | ||
marcoceppi | stormmore: jcastro it's summit.juju.solutions....... | 20:05 |
=== marcoceppi changed the topic of #juju to: Join us at the Charmer Summit: 6-7 Feb - http://summit.juju.solutions || https://review.jujucharms.com/ || https://jujucharms.com/docs/ || http://goo.gl/MsNu4I || Youtube: https://www.youtube.com/c/jujucharms | ||
stormmore | @marcoceppi yes I was aware of that just the link in the topic was wrong ;-) | 20:05 |
jcastro | someone was complaining that summit.juju.solutions was the busted one | 20:06 |
jcastro | last week | 20:06 |
jcastro | that doesn't excuse the wrong url in the topic though. /me runs | 20:07 |
marcoceppi | stormmore: <3 | 20:08 |
marcoceppi | jcastro: yeah, that link has never been broken, we should just get summit.jujucharms.com pointed as well | 20:08 |
stormmore | 301 redirect time marcoceppi! | 20:09 |
rick_h | ok, video updated. /me runs to get the boy from school | 20:10 |
rick_h | kwmonroe: let's chat later on blog/email follow up please | 20:10 |
rick_h | kwmonroe: thanks so much for presenting and putting that together! | 20:10 |
kwmonroe | np rick_h - thanks for the airtime! | 20:11 |
=== siva is now known as Guest21821 | ||
Guest21821 | I used to have my charms working in trusty | 20:17 |
Guest21821 | I recently moved to xenial | 20:17 |
Guest21821 | I find that my charms are failing in the install hook and the log has the following errors | 20:18 |
```
2017-02-01 19:37:41 INFO juju.worker.meterstatus connected.go:112 skipped "meter-status-changed" hook (missing)
2017-02-01 19:38:10 INFO juju.worker.leadership tracker.go:184 contrail-control/0 will renew contrail-control leadership at 2017-02-01 19:38:40.054544153 +0000 UTC
2017-02-01 19:38:22 INFO install /usr/bin/env: 'python': No such file or directory
2017-02-01 19:38:22 ERROR juju.worker.uniter.operation runhook.go:107 hook "i
```
Guest21821 | /usr/bin/env is indeed there, and if I manually install the packages it works | 20:18 |
Guest21821 | Can you please let me know, why I am seeing this error? | 20:19 |
Guest21821 | Any help is much appreciated | 20:19 |
stormmore | wonder if you are getting tripped up with a Windows CRLF problem | 20:21 |
stormmore | or it could be a related bug to https://bugs.launchpad.net/charms/+source/odl-controller/+bug/1555422 | 20:23 |
mup | Bug #1555422: On Xenial: install /usr/bin/env: 'python': No such file or directory <uosci> <odl-controller (Juju Charms Collection):Fix Committed by james-page> <https://launchpad.net/bugs/1555422> | 20:23 |
Guest21821 | @mup and @stormmore, can you please let me know how I can apply the patch for this fix | 20:26 |
Guest21821 | I am using Juju 2.0 | 20:26 |
stormmore | that I can't do, sorry :-/ | 20:26 |
bdx | question concerning legacy hooks in reactive charms, should this work http://paste.ubuntu.com/23907069/ ? | 20:27 |
tvansteenburgh | Guest21821: which charm is it? | 20:27 |
bdx | oops, this http://paste.ubuntu.com/23907074/ | 20:27 |
Guest21821 | I am using the contrail charms that i am developing | 20:28 |
bdx | my 'upgrade-charm' hook just doesn't seem to be firing ... I'm wondering if there is something else I need to add ... | 20:28 |
bdx | oops | 20:28 |
stormmore | bdx should the @hook not be @hooks? | 20:28 |
Guest21821 | @mup, @bdx, if I install the python2 package, will it resolve the issue? | 20:29 |
tvansteenburgh | Guest21821: your charm needs to either install python2, or use python3 instead | 20:29 |
bdx | stormmore: whoops ... yeah .. that might be my bad (typo) .. thx | 20:30 |
Guest21821 | @tvansteenburgh, thanks. How do I install python2 from the charm | 20:30 |
Guest21821 | what will be the package name? | 20:30 |
stormmore | no worries bdx, just what I noticed from a quick glance | 20:31 |
tvansteenburgh | Guest21821: python | 20:31 |
Guest21821 | @tvansteenburgh, just apt-get install python will do? | 20:31 |
tvansteenburgh | Guest21821: yeah, is it a bash charm? | 20:32 |
=== mskalka|afk is now known as mskalka | ||
Guest21821 | @tvansteenburgh, no it is a python charm | 20:32 |
tvansteenburgh | Guest21821: if it's a reactive charm you can put it in layer.yaml | 20:33 |
Guest21821 | @tvansteenburgh, no it is a python charm. It is not a reactive charm | 20:33 |
Guest21821 | I will just mention 'python' in the list of packages I have | 20:33 |
tvansteenburgh | Guest21821: those are not mutually exclusive | 20:33 |
tvansteenburgh | Guest21821: for example https://github.com/juju-solutions/review-queue-charm/blob/master/layer.yaml | 20:34 |
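The relevant pattern from that example: the basic layer can apt-install packages for you before your handlers run, which sidesteps the missing-python2 bootstrap problem in reactive charms (package list illustrative):

```yaml
# layer.yaml
includes: ['layer:basic']
options:
  basic:
    packages: ['python', 'docker.io']
```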
Guest21821 | I meant it is not a bash charm but a python charm | 20:34 |
tvansteenburgh | Guest21821: right, but the charm i linked above is also python, but it uses the reactive framework | 20:35 |
Guest21821 | @tvansteenburgh, mine does not use reactive charm | 20:35 |
tvansteenburgh | Guest21821: ok | 20:35 |
Guest21821 | Let me install the python package from the charm and see how it goes | 20:36 |
Guest21821 | Thanks a lot | 20:36 |
tvansteenburgh | np | 20:36 |
lutostag | cory_fu: for the invite I get 'This invitation is invalid. ' when I try to accept for crashdump :/ | 20:36 |
cory_fu | lutostag: Ah. I was hoping that the admin invite would transfer when I moved it to https://github.com/juju/juju-crashdump but it didn't. | 20:37 |
cory_fu | marcoceppi: Since this move was at your behest, can you give lutostag and the big software team access? | 20:38 |
cory_fu | lutostag: I should ask, did you see the context for this move? | 20:38 |
marcoceppi | cory_fu: you all do have access | 20:38 |
marcoceppi | cory_fu: it could have stayed in juju-solutions, fwiw | 20:39 |
marcoceppi | cory_fu: check your perms now | 20:39 |
marcoceppi | cory_fu: you have admin | 20:39 |
cory_fu | marcoceppi: I thought all new mature projects are supposed to go to juju? | 20:39 |
marcoceppi | cory_fu: true | 20:39 |
marcoceppi | we need to move a lot of things then ;) | 20:39 |
cory_fu | lutostag: For reference: https://github.com/juju/plugins/pull/75 | 20:40 |
cory_fu | marcoceppi: Yeah, I thought that was the plan, as it made sense for each repo. | 20:41 |
marcoceppi | cory_fu: true, we should move charms.reactive and such | 20:41 |
cory_fu | marcoceppi: Yes, we should | 20:41 |
marcoceppi | cory_fu: lets chat at the summit | 20:41 |
lutostag | cory_fu: ah neato. I don't care where it lives, but somewhere on its own probably does make more sense | 20:41 |
marcoceppi | cory_fu: get a list and make a plan | 20:41 |
cory_fu | marcoceppi: I won't be at the summit, but tvansteenburgh, Merlijn, and tinwood will be there. | 20:42 |
marcoceppi | cory_fu: doh | 20:42 |
cory_fu | :/ | 20:42 |
bdx | kwmonroe: so, I had typos in my pastebin, not my charm, I'm still not getting the 'upgrade-charm' hook to fire | 20:51 |
bdx | kwmonroe: http://paste.ubuntu.com/23907168/ | 20:51 |
bdx | my log shows http://paste.ubuntu.com/23907184/ | 20:52 |
cory_fu | bdx: You're mixing reactive and non-reactive. http://pastebin.ubuntu.com/23907195/ | 20:53 |
bdx | cory_fu: that would do it! thanks! | 20:54 |
cory_fu | np | 20:54 |
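For reference, the reactive-style equivalent of the pastebin's hook uses the charms.reactive decorator rather than a classic Hooks registry; a minimal sketch (module path and log text illustrative):

```python
# reactive/my_charm.py
from charms.reactive import hook
from charmhelpers.core import hookenv


@hook('upgrade-charm')
def upgrade_charm():
    # dispatched by the reactive framework during the upgrade-charm hook,
    # instead of via a hand-rolled @hooks.hook registration
    hookenv.log('upgrade-charm fired, re-running setup')
```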
siva_guru | @tvansteenburgh, I installed the python package in my charms. I don't get the old error but it still fails in the install hook | 21:10 |
siva_guru | I get the following error | 21:10 |
```
2017-02-01 20:49:13 ERROR juju.worker.dependency engine.go:539 "metric-collect" manifold worker returned unexpected error: failed to read charm from: /var/lib/juju/agents/unit-contrail-control-0/charm: stat /var/lib/juju/agents/unit-contrail-control-0/charm: no such file or directory
2017-02-01 20:49:13 INFO worker.uniter.jujuc tools.go:20 ensure jujuc symlinks in /var/lib/juju/tools/unit-contrail-control-0
2017-02-01 20:49:13 INFO w
```
siva_guru | Sorry , I still see the same error | 21:14 |
```
2017-02-01 20:49:43 INFO juju.worker.meterstatus connected.go:112 skipped "meter-status-changed" hook (missing)
2017-02-01 20:49:43 INFO install /usr/bin/env: 'python': No such file or directory
2017-02-01 20:49:43 ERROR juju.worker.uniter.operation runhook.go:107 hook "install" failed: exit status 127
2017-02-01 20:49:43 INFO juju.worker.uniter resolver.go:100 awaiting error resolution for "install" hook
2017-02-01 20:49:48 INFO juj
```
tvansteenburgh | siva_guru: really hard to diagnose without seeing charm source code | 21:16 |
siva_guru | Let me paste the install hook for you | 21:17 |
```python
PACKAGES = ["python", "docker.io"]

@hooks.hook()
def install():
    apt_upgrade(fatal=True, dist=True)
    apt_install(PACKAGES, fatal=True)
    load_docker_image()
```
siva_guru | http://paste.ubuntu.com/23907302/ | 21:18 |
siva_guru | @tvansteenburgh, I find that the python package was not installed even though it is there in the list of packages | 21:21 |
tvansteenburgh | siva_guru: can you link to the repo or something? also maybe pastebin the entire juju debug-log | 21:23 |
cory_fu | petevg: I got this with the new juju-crashdump repo and the latest matrix code: | 21:26 |
cory_fu | matrix:216:execute_process: ERROR retrieving SSH host keys for "ubuntu/1": keys not found | 21:26 |
cory_fu | petevg: Shouldn't that be resolved? | 21:27 |
petevg | cory_fu: I believe that's okay. It probably means that ubuntu/1 had gone away. | 21:27 |
petevg | cory_fu: ... or that it hadn't come up. | 21:27 |
cory_fu | petevg: Oh, wait. There never should have been a /1 if I'm reading this right | 21:27 |
petevg | cory_fu: I got that error, threw things into a debugger, and confirmed that the ssh trick worked, and that glitch had just added the machine. | 21:28 |
petevg | cory_fu: glitch probably added the /1 | 21:28 |
cory_fu | petevg: http://pastebin.ubuntu.com/23907345/ | 21:28 |
cory_fu | petevg: You're right | 21:28 |
cory_fu | I missed the "add_unit" at the top due to glare on my monitor. >_< | 21:28 |
petevg | cory_fu: cool. I think that it's worth continuing to watch, and I don't think that we should squelch those messages, but I'm 95% certain that the ssh fix is working, and that message is okay. | 21:30 |
cory_fu | petevg: Is there any way we can improve or skip the error message in the case that glitch added a unit and it's not up yet? | 21:30 |
cory_fu | Why do you think we shouldn't drop those messages (for that particular case)? | 21:30 |
petevg | cory_fu: if you can think of a way to squelch it that doesn't squelch actual errors, I'm all ears. | 21:30 |
cory_fu | Ah | 21:30 |
cory_fu | Yeah, I don't have any ideas. :p | 21:30 |
petevg | cory_fu: yeah the error is being generated by juju-crashdump, and glitch is the thing that knows about the added machine. | 21:31 |
cory_fu | petevg: I do think that glitch probably shouldn't terminate until the units it added are up and healthy, otherwise we're not actually testing that add_unit works | 21:31 |
cory_fu | But maybe it can just do that at the end, instead of blocking before the next glitch step? | 21:32 |
petevg | cory_fu: true. Right now, the only way to wait is our health check, though, and that only works once per test. | 21:32 |
petevg | cory_fu: adding a more general "wait 'til everything is good" check makes sense to me, though. I'll make an issue and a ticket. | 21:33 |
cory_fu | petevg: Thanks | 21:33 |
petevg | np | 21:33 |
siva_guru | @tvansteenburgh, the code is in a private repo | 21:34 |
siva_guru | I can cut n paste the entire juju log | 21:34 |
siva_guru | will that help? | 21:34 |
siva_guru | @tvansteenburgh, after I manually install it and do a juju resolved, it goes through | 21:36 |
siva_guru | Any idea why the charm is not installing it | 21:37 |
tvansteenburgh | siva_guru: log might help, yeah | 21:39 |
bdx | kwmonroe: what is the scoop on the pipeline you demoed using private interfaces/layers? e.g. my interfaces and layers are not on interfaces.juju.solutions | 21:39 |
tvansteenburgh | siva_guru: you're sure the unit has the new charm code? | 21:40 |
siva_guru | @tvansteenburgh, here is the log | 21:45 |
siva_guru | http://paste.ubuntu.com/23907417/ | 21:45 |
kwmonroe | bdx: great question. currently, the jenkins job will shell out to 'charm build' (https://github.com/juju-solutions/layer-cwr/blob/master/templates/BuildMyCharm/config.xml#L60). we do not support adding flags to charm build, but we should. i think you'd need the job to do 'charm build --interface-service=http://private.repo'. | 21:46 |
bdx | kwmonroe: I see, how do I make myself an 'interface-service'? | 21:47 |
kwmonroe | bdx: would you please open an issue requesting that charm build support private interface registries? https://github.com/juju-solutions/layer-cwr/issues | 21:47 |
bdx | yes | 21:47 |
bdx | kwmonroe: https://github.com/juju-solutions/layer-cwr/issues/49 | 21:51 |
kwmonroe | thx bdx! i'm looking for docs on making your own interface service, but am coming up empty. cory_fu, do you recall what 'charm build --interface-service=foo' requires for foo? | 21:52 |
kwmonroe | i think it might be as simple as running a python -m SimpleHTTPServer in your $INTERFACE_PATH somewhere | 21:53 |
cory_fu | kwmonroe, bdx: https://github.com/juju-solutions/juju-interface | 21:53 |
kwmonroe | ah, cool, thx cory_fu | 21:54 |
tvansteenburgh | siva_guru: does your install hook source file have a shebang line at the top? | 21:54 |
=== mskalka is now known as mskalka|afk | ||
bdx | kwmonroe, cory_fu: looking through https://github.com/juju-solutions/juju-interface, a) this is great! b) I'm not seeing how/where I might add a private registry entry, possibly that functionality doesn't exist yet .. | 22:06 |
cory_fu | bdx: I don't think there's any support for private entries at the moment. You'd have to run your own instance of that service and point to it with the --interface-service variable. | 22:07 |
bdx | I'm wondering if ^ will just give me a gui, similar to interfaces.juju.solutions that I can log into and add my private repos in the ui possibly? | 22:08 |
bdx | it looks like that is the site interfaces.juju.solutions | 22:09 |
cory_fu | bdx: Yes, that is the application that runs interfaces.juju.solutions | 22:10 |
cory_fu | I'm not sure if there's a charm for it, but there ought to be | 22:10 |
bdx | cory_fu: I see, so if I was to run it locally, I could just login and manually add my private repo interface entries then eh? | 22:10 |
cory_fu | Right | 22:10 |
bdx | ok, nicee | 22:11 |
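So the self-hosted flow would be roughly: run your own copy of the juju-interface app, register private interfaces through its UI, then point charm build at it (registry URL hypothetical):

```
charm build --interface-service=http://interfaces.internal.example.com
```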
lazyPower | cory_fu - not at this time | 22:44 |
lazyPower | cory_fu - there was a TODO i took to write one for it, but i've since been busy with the k8s work. However we've also talked about just running it in k8s as a manifest since its a cloud native app as it were | 22:44 |
lazyPower | i mean it uses mongo as its backing store, so its webscale already right? thats like CN right? | 22:44 |
cory_fu | ha | 22:45 |
lazyPower | bdx - lmk if you need any help with that. as i'm the current maintainer of the interfaces instance | 22:45 |
siva_guru | @tvansteenburgh, yes it has the shebang line #!/usr/bin/env python | 22:59 |
tvansteenburgh | siva_guru: well that's the problem | 23:04 |
tvansteenburgh | siva_guru: the script is trying to use python2 to install python2 | 23:04 |
siva_guru | @tvansteenburgh, should I remove it as a solution? | 23:05 |
siva_guru | this works fine in trusty though | 23:05 |
tvansteenburgh | siva_guru: python2 is not installed on xenial by default | 23:06 |
tvansteenburgh | siva_guru: you could try running the script with python3 instead | 23:06 |
stormmore | siva_guru: try changing the shebang line to #!/usr/bin/env python3 | 23:06 |
siva_guru | @stormmore, I will try that | 23:07 |
stormmore | siva_guru: you might run into other problems so I would recommend updating your code to python3 | 23:07 |
siva_guru | @stormmore, what is the default python version that will be used if I don't put any shebang in the code? | 23:08 |
tvansteenburgh | siva_guru: it won't work at all | 23:10 |
stormmore | a linux system won't know which interpreter to use | 23:10 |
stormmore | https://en.wikipedia.org/wiki/Shebang_%28Unix%29 | 23:10 |
bdx | lazyPower: thx, will do | 23:25 |
bdx | https://git.launchpad.net/layer-apt/ | 23:25 |
bdx | http://paste.ubuntu.com/23908017/ | 23:26 |
bdx | `charm build` is failing me | 23:27 |
bdx | bc ^^ | 23:27 |
bdx | ahh its back now | 23:27 |
bdx | looks like git.launchpad was down for a moment | 23:27 |
lazyPower | bdx gremlins | 23:29 |
skuda | Hello everyone! | 23:35 |
siva_guru | @tvansteenburgh, @stormmore the old error is not coming anymore | 23:36 |
lazyPower | progress \o/ | 23:37 |
lazyPower | o/ skuda | 23:37 |
siva_guru | but I find that other hooks are getting run as part of install | 23:38 |
siva_guru | should they be modified as well? | 23:38 |
lazyPower | siva_guru - I presume you're using the layered/reactive approach to charming? | 23:38 |
siva_guru | No.. I am not using reactive model | 23:38 |
skuda | I am trying to use Juju to deploy canonical kubernetes but I think I am missing something, I have four bare metal servers rented at a hosting provider, and I don't have access to IPMI (or the option to install a DHCP server, for that matter) | 23:38 |
siva_guru | @lazyPower, No.. I am not using reactive model | 23:39 |
skuda | Should i not be able to use Juju and deploy to my dedicated servers? those servers have Ubuntu Xenial installed and everything working fine | 23:39 |
lazyPower | siva_guru - 1 SEC | 23:39 |
lazyPower | skuda - you certainly can. If you don't have a functional cloud API that juju integrates with, you can certainly use the manual provider. Its less automatic than we would like, but its certainly possible to enlist those machines manually into a model and deploy CDK to them | 23:40 |
skuda | I would love to be able to install to those servers using LXD for example, or directly using the Ubuntu OS installed | 23:40 |
lazyPower | however with only 4 bare metal servers, you might be better served by kubernetes-core, as it has fewer machine requirements | 23:40 |
siva_guru | @lazyPower, @tvansteenburgh, how come the @hooks.hook("contrail-control-relation-joined") is getting called as part of install | 23:40 |
lazyPower | skuda - you can do both | 23:40 |
skuda | ahh I am always redirected to MAAS when reading about bare metal | 23:41 |
lazyPower | skuda - yeah, we prefer maas as the substrate for reasons that allow you to treat those bare metal units as a cloud, like, a proper cloud, not a manually managed cloud. | 23:41 |
skuda | speaking about kubernetes when launching conjure-up I am only offered localhost or MAAS | 23:41 |
stormmore | skuda that is cause MAAS gives Juju the "cloud" API layer | 23:41 |
lazyPower | skuda - but MAAS does have some assumptions there, that it will manage your DNS, and IPMI, and other settings at the metal layer, because you're basically modeling the machines in maas | 23:42 |
skuda | How should I "manually, it doesn't matter" instruct juju or conjure-up to use my servers? | 23:42 |
lazyPower | skuda - i do believe you need to add other cloud credentials in order to see the other substrates, i may be incorrect on that though | 23:42 |
lazyPower | mmcc stokachu ^ any feedback here on my statement? am i wildly misinformed? | 23:42 |
skuda | I understand what MAAS brings to the table, and I see the value, I would use it if I were controlling my datacenter, but I am not :( | 23:43 |
lazyPower | skuda - so you have some options here, you can juju bootstrap a manual provider controller, and enlist each machine 1 by 1, and then deploy directly to them | 23:43 |
stormmore | skuda I don't think conjure-up is going to be the best method for you to install with | 23:43 |
lazyPower | but as stormmore is alluding to, you're probably not going to be able to get a good experience with conjure unless you want 4 independent clusters, one per machine, all in lxd | 23:43 |
skuda | Ok, I can manually deploy the units, no problem | 23:44 |
lazyPower | you can use placement directives in a bundle to control how your applications are deployed, and that seems like the better bet | 23:44 |
lazyPower | skuda , i would however encourage you to try a lxd based deployment locally first to get familiar with how its put together | 23:44 |
lazyPower | skuda once you've got that initial poking done, figure out how you want the applications arranged on what machine, and then you can export a bundle and re-use it in your manual deployment | 23:44 |
skuda | Still have to familiarize myself a little bit more with Juju but it should not be a problem if I can create a manual provider controller somehow | 23:44 |
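The manual-provider route lazyPower describes looks roughly like this; hostnames are illustrative, and the bundle edit at the end matches the "fork the bundle and make edits" caveat discussed below:

```
juju bootstrap manual/ubuntu@server1.example.com mycontroller
juju add-machine ssh:ubuntu@server2.example.com
juju add-machine ssh:ubuntu@server3.example.com
juju add-machine ssh:ubuntu@server4.example.com
# manual clouds can't provision machines on demand, so deploy a bundle
# edited with 'to:' placement directives targeting machines 0-3
juju deploy ./kubernetes-core-edited.yaml
```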
lazyPower | skuda - so i'm going to be traveling over the next week, but i'll make sure i pop in here to see how things are going. If all else fails, make sure you mail the juju list | 23:45 |
lazyPower | juju@lists.ubuntu.com, and i'll monitor it like a hawk to help you through the manual deployment or any questions you have about the lxd initial poking | 23:45 |
lazyPower | but thats my suggested route, is to deploy on lxd first and get a feel for it | 23:46 |
skuda | placement directives... ok.. I will check all that information, thanks | 23:46 |
lazyPower | then go for the manual step, as manual denotes, if something gets botched, you're likely to have to wipe the model, then reinstall each machine base OS + re-enlist in a new model | 23:46 |
lazyPower | and thats time consuming | 23:46 |
lazyPower | and i want to be respectful of your time/effort | 23:46 |
skuda | I will lazyPower, tomorrow I will deploy in local LXD | 23:46 |
lazyPower | awesome \o/ | 23:46 |
lazyPower | and you'll get the conjure experience there | 23:46 |
lazyPower | i'll see if i can talk to adam about the conjure bits while we are in ghent, maybe there's a better story there | 23:46 |
lazyPower | as more people are showing up with BM clouds, this is going to be a growing concern | 23:47 |
skuda | it seems pretty awesome juju and conjure | 23:47 |
lazyPower | we can probably get this somewhere on the roadmap at some point and try to come up with something better than "fork the bundle and make edits" | 23:47 |
skuda | I wanted to try OpenStack too because we are choosing the best tool for our project | 23:47 |
lazyPower | yeah man, you can openstack on lxd too if you have the horsepower | 23:47 |
lazyPower | great way to poke at it and see if you like it | 23:47 |
lazyPower | very cheap to experiment | 23:48 |
skuda | and those are two really complex beasts that seem to be muuuuuuuuch easier with Juju, it's awesome, I hope everything works fine in my tests! | 23:48 |
lazyPower | skuda - if not, i want your feedback | 23:48 |
lazyPower | positive/negative/indifferent, it all helps | 23:48 |
skuda | I will send to the mailing list any roadblock I found, sure | 23:48 |
lazyPower | :D | 23:48 |
lazyPower | fan-tastic | 23:49 |
lazyPower | glad i ran into you then :D | 23:49 |
lazyPower | siva_guru - ok sorry about that, i'm very passionate about k8s | 23:49 |
stormmore | I was going to suggest that maybe an OpenStack cluster would be a good solution to put down first and then install CDK in VMs on it | 23:49 |
lazyPower | siva_guru - so, its a classic charm, with a single hook file i presume symlinked? | 23:49 |
lazyPower | and your *-relation-joined hook is executing during the install phase? | 23:49 |
siva_guru | @lazyPower, yes it is a single hook file symlinked | 23:50 |
skuda | In reality what I would love to have is a solution with a good UI able to manage LXD containers with live migration and ZFS deduplication, but it seems really difficult to find | 23:50 |
lazyPower | siva_guru - i would presume one of two things has happened | 23:50 |
lazyPower | 1) there's some dirty state on the unit (least likely culprit) | 23:50 |
skuda | So I am right now testing different options to be as close to possible to what we want | 23:50 |
siva_guru | @lazyPower, yes it is a single hook file symlinked and yes relation-joined hook is getting called during install phase | 23:50 |
lazyPower | 2) there's an error somewhere in the code thats falling through and executing that hook stanza | 23:50 |
catbus1 | I saw K8s in lxd... Just wanted to share my recent experience on this. I followed https://www.stgraber.org/2017/01/13/kubernetes-inside-lxd/ and there are only two things I needed to change for a successful deploy. | 23:50 |
lazyPower | siva_guru - like perhaps the method itself is being invoked directly | 23:51 |
stormmore | skuda LXDs seem good from a systems standpoint but Docker containers / Kubernetes are more dev friendly | 23:51 |
lazyPower | from within the install() block | 23:51 |
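To illustrate lazyPower's point: the classic single-file pattern dispatches on the invoked hook name, so a stray direct call to a decorated function would run its body inside install. A hedged sketch, with hook and function names illustrative:

```python
#!/usr/bin/env python3
# hooks/hooks.py -- each hook (install, contrail-control-relation-joined,
# ...) is a symlink to this file; dispatch happens on sys.argv[0]
import sys
from charmhelpers.core.hookenv import Hooks, log

hooks = Hooks()


@hooks.hook('install')
def install():
    log('install hook running')
    # calling contrail_control_joined() here directly would execute the
    # relation hook's body during install -- lazyPower's culprit #2


@hooks.hook('contrail-control-relation-joined')
def contrail_control_joined():
    log('contrail-control relation joined')


if __name__ == '__main__':
    hooks.execute(sys.argv)
```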
catbus1 | one is to add "local:" prefix to the lxd container name, which is 'kubernetes' in stgraber's example. | 23:51 |
lazyPower | man stgraber is a beast. just sayin | 23:52 |
lazyPower | that guy is like the local container legend 'round these parts | 23:52 |
skuda | stormmore: true, but LXD offers live migration and docker does not, not yet at least | 23:52 |
catbus1 | The other is to limit the zfs pool size so that you don't run out of disk space on the system. I used my 3-year old laptop, but if it's a rental from a data center, you probably don't have to worry about this. | 23:52 |
lazyPower | skuda <3 you get it | 23:52 |
siva_guru | @lazypower, the same charm code works fine in trusty... I am seeing this issue in xenial | 23:52 |
lazyPower | siva_guru - doesn't seem like series would cause the weirdness though. | 23:52 |
lazyPower | siva_guru - i guess py2 vs py3? but i would have expected to see things like type errors and syntax errors, not random hook execution | 23:53 |
stormmore | skuda: not sure they will offer live migrations in Docker, seems their view is you should be running multiple instances of your service and make the service handle the loss of an instance | 23:53 |
siva_guru | @lazypower, yes I moved from py2 to py3 | 23:53 |
skuda | well and the state, sometimes I don't want to use slower cluster filesystems just to be able to put everything on top of docker | 23:54 |
lazyPower | siva_guru - thats what i'm saying, i'm thinking out loud with you here. as we dont have hook code to look at, its very hard to debug | 23:54 |
lazyPower | siva_guru - so the best i can do is offer thoughts while you debug | 23:54 |
skuda | I mean Docker it's awesome and everything, we use for many stateless apps and the orchestration of those services it's awesome, but it's not the solution for everything I think | 23:54 |
siva_guru | @lazypower, is this a bug or is this something I need to fix in my charm code to make it work with py3 | 23:55 |
lazyPower | stormmore - i think there's room for both in your DC/workflow. LXD is amazing at handling just about every class of workload, docker is engineered and sold as a very specific class of workload. | 23:55 |
stormmore | skuda that should be handled by replication between the instances | 23:55 |
lazyPower | siva_guru - well without seeing the code, i can only guess | 23:55 |
lazyPower | siva_guru and i'm going to guess its in the hook code | 23:55 |
stormmore | lazyPower oh I definitely agree :) | 23:55 |
siva_guru | @lazyPower, do you need my code or are you talking about the juju code? | 23:56 |
skuda | In the project we are creating right now, minecraft servers, we are speaking about big IOPS needs and a lot of state, but only 1 instance needed per server | 23:56 |
lazyPower | siva_guru - i mean your code, the charm code you are working on thats exhibiting the bad behavior | 23:56 |
lazyPower | skuda ooooh man | 23:56 |
lazyPower | skuda would you like an ARK server workload for k8s to test with? | 23:56 |
skuda | So I can't make good use of all these amazing replication sets | 23:56 |
lazyPower | i just wrote the manifest for that a couple weeks ago for my homelab and my friends and I have been beating on it quite furiously. we are quite enamored with how well its performing | 23:56 |
skuda | sure | 23:57 |
lazyPower | skuda - you sound like you could get away with just charming up the workload, and juju deploying it directly into lxd | 23:58 |
skuda | yes, I think so | 23:58 |
lazyPower | some minor network config i believe will need to happen on the host to forward things correctly, but thats minor, and we can totally get you up and running on just lxd and juju in short order | 23:58 |
lazyPower | and that networking bit should be sorted in one of the forthcoming juju releases, we have even more networking goodness in the oven afaik | 23:58 |
skuda | right now I have two options, with different tradeoffs about this project | 23:58 |
lazyPower | dont hold me to that, but i'm like 80% certain that is the case | 23:58 |
skuda | use OpenStack and manage LXD as virtual machines, managing live migration, using local SSD, good | 23:59 |
skuda | use k8s being able to do crash recovery on a cluster filesystem like scaleIO | 23:59 |
skuda | no live migration for k8s | 23:59 |