[00:46] catbus1, i should be around for a bit, just ping me if you come back to it
[01:45] balloons, around?
[06:06] Hi. I have a question regarding the Cinder charm.
[06:07] If i locally modify cinder.conf through the shell and then modify config through the juju-gui
[06:07] the local changes get overridden. Any workaround for this?
[07:13] pranav_: I haven't used juju gui for deployment. Have you tried modifying the template file in /var/lib/juju/agents/unit-cinder-0/charm/templates/ and then writing changes using the gui?
[07:20] junaidali: No, haven't tried that. We tried to see if any relation hooks get fired on modification, but no good. Let me check the templates
[07:22] One question: to modify the template file, our application has to be running in the same unit, correct? Or is there a way to change the templates via relations?
[07:36] imo, if you want some changes to be retained, you should probably look at changing the template file, because whenever you change some config, all the template files will be re-written
[07:37] if you have more than one unit of the charm, update the template file in each unit
[07:41] I see. In which case our application always has to co-exist with the cinder charm.
[07:43] sorry, just to correct what i said earlier: whenever we change any config, all the files are re-written **from the template file**
[07:43] pranav_:
[07:50] Cool! we are gonna try out changing the templates, which i feel should work for us.
[07:51] Thanks for the help Junaid :)
=== aluria` is now known as aluria
[08:33] Hi! I'm starting to write tests for my charms, however I seem to have a problem:
[08:33] 2017-02-07 10:32:51 Error getting env api endpoints, env bootstrapped?
[08:33] 2017-02-07 10:32:51 Command (juju api-endpoints -e localhost-localhost:admin/default) Output:
[08:35] http://paste.ubuntu.com/23946536/
[08:35] juju version is 2.0.2
[08:37] and amulet is 1.18.2
[08:37] somehow it looks like amulet is using old juju commands that are not available on 2.0?
[08:47] anrah: what's the version of juju-deployer you have?
[08:48] 0.6.4
[08:51] I upgraded it but no help
[08:54] upgraded to which version? i think juju-deployer doesn't yet support 2.0. You can get a 2.0-supported package from Tim Van's ppa https://launchpad.net/~tvansteenburgh/+archive/ubuntu/ppa
[08:54] 0.10.0 is the new version
[08:54] oh okay, you've got the newest version..
[08:55] I think the problem is not with juju-deployer but at the very first lines, where amulet is trying to get information about the controller
[08:56] as it tries to run this command: Command (juju api-endpoints -e localhost-localhost:admin/default)
[08:56] and there is no juju api-endpoints command on juju 2.0
[08:56] yes, you're right
[08:57] so.. Amulet is useless with Juju 2.0?
[09:03] wait, someone will help you. It is probably late for many people here
[09:05] Thanks!
[09:19] I think I mixed the python versions.. amulet uses python3 and juju-deployer python2, so I had an old version of juju-deployer in the python2 packages
[09:22] Zic heyo, i'm starting to hack on that issue you uncovered last night now.
[09:23] Zic - should have something for you to look at before we head into the kubernetes hacking sessions later today. This might take a bit, but i'm hoping to have something for you to set up in a staging capacity shortly.
[09:25] lazyPower: hello, I just got to my office, you're so synced :)
[09:25] Zic :) Blame it on Belgium
[09:26] lazyPower: what jetlag does it give you? you came from London/Canonical?
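
To make junaidali's template workaround above concrete: the OpenStack charms re-render their config files from the unit's templates on every config change, so edits to /etc/cinder/cinder.conf are lost while edits to the template survive. A rough sketch, assuming a single unit; the exact template filename under templates/ varies by OpenStack release, and the final config call only stands in for anything that triggers a config-changed hook:

    # edit the template on the unit (repeat on each unit of the charm)
    juju ssh cinder/0
    sudo vi /var/lib/juju/agents/unit-cinder-0/charm/templates/cinder.conf
    exit
    # any config change now re-renders /etc/cinder/cinder.conf from the edited template
    juju config cinder debug=true   # 'juju set-config' on juju 2.0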
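And for the mixed-Python problem anrah tracked down at [09:19]: amulet runs under Python 3 while juju-deployer can also be installed under Python 2, so a stale 0.6.4 on one path can shadow the fixed 0.10.0 on the other. A sketch for checking both stacks, assuming both copies were installed with pip:

    # which juju-deployer does each interpreter see?
    python2 -m pip show juju-deployer
    python3 -m pip show juju-deployer
    # amulet (python3) drives the python3 copy, so upgrade that one
    sudo python3 -m pip install --upgrade juju-deployer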
[10:24] Zic - i came from KC MO, so much further than London
[10:37] lazyPower: so "jetlag" was the right word :D I was not so sure "jetlag" was applicable if you just... take the boat or the train :)
[10:41] indeed. i'll be leaving tomorrow, so now that i'm adjusted to the timezone, it's time to leave soon
[10:42] @stub hey stub, question for you re: layer-leadership if you're around
[10:42] yo
[10:42] stub, can i just invoke it once and give it a dict? or do i need to call it every time with key=val pairs for each data point i want the leader to push?
[10:43] charms.leadership.leader_set({'service_key': '/etc/kubernetes/serviceaccount.key', 'foo': 'bar'})
[10:43] that method accepts keyword arguments or a dictionary, so either leader_set({'foo': 'bar', 'baz': False}) or leader_set(foo='bar', baz=False)
[10:44] oh fantastic
[10:44] stub <3
[10:46] although now I look at the signature, leader_set(settings='hello') will fail, so don't call your leadership setting 'settings' until I fix that :-P
=== rogpeppe1 is now known as rogpeppe
[11:37] lazyPower: as you are on Europe time now, I will be available if you need any test until 19:00 :)
[11:37] Zic - i'm just now deploying the first attempt at fixing, currently waiting on convergence
[11:37] Zic - but as i'm at a conference it's hard to dedicate any reasonable amount of time, so thank you for being patient
[11:37] I understand, no worries :)
[11:38] my customer continues to adapt their code to k8s with minikube installed locally for now
[13:09] kwmonroe: have you come up with an apachecon talk yet?
[13:09] don't want to end up submitting something similar to you guys
=== magicalt1out is now known as magicaltrout
[14:12] let the fun begin
[14:12] $ juju deploy easyrsa
[14:12] ERROR cannot resolve URL "cs:easyrsa": charm or bundle not found
[14:14] juju deploy cs:~containers/easyrsa-6 works.
[14:20] xnox: yup, it's not yet in a promoted namespace
[14:20] ok
[14:20] xnox: you can expect it to complete review in time for the 1.6.0 release of kubernetes
[14:21] working off the instructions from https://jujucharms.com/u/containers/easyrsa/
[14:21] I cannot do "juju deploy tls-client", and the relation name is wrong too?
[14:21] tls-client is a placeholder for connecting to something that consumes tls
[14:21] xnox: what's your end goal?
[14:22] currently testing that this charm (by itself) works on s390x.
[14:22] assuming since i have managed to deploy easyrsa, it does.
[14:22] xnox: you'll probably want something that connects
[14:23] it's python so it'll probably work
[14:23] I don't think it uses anything that's not interpreted
[14:23] i am getting a lot of _juju_complete_2_0: command not found
[14:24] xnox: where? in your terminal?
[14:24] yes
[14:25] when i try to tab-complete things
[14:25] (i'm running zesty)
[14:25] xnox: are you using the snap?
[14:25] no
[14:25] whatever is in zesty as a deb
[14:36] where is the source code for https://jujucharms.com/u/containers/kubernetes-master/11 ?
[14:39] xnox: https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/layers/kubernetes-master
[14:39] marcoceppi, how would I find that from the charm store?
[14:40] xnox: you wouldn't, unless it's in the readme
[14:41] marcoceppi, but that's the source code, rather than the published charm revision?
[14:41] does that correspond to v11 of the charm?
[14:41] xnox: no, that's the upstream code
[14:41] marcoceppi, how can I clone the actual charm?
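
A minimal sketch of the leader_set usage stub describes above, inside a reactive handler (the handler name is illustrative; the 'leadership.is_leader' flag comes from layer:leadership, which also provides the charms.leadership module):

    # reactive/my_charm.py -- hypothetical handler
    from charms import leadership
    from charms.reactive import when

    @when('leadership.is_leader')
    def publish_service_key():
        # dict form and kwarg form are equivalent:
        leadership.leader_set({'service_key': '/etc/kubernetes/serviceaccount.key'})
        leadership.leader_set(service_key='/etc/kubernetes/serviceaccount.key')

Per stub's caveat, avoid a key literally named 'settings', since it collides with the method's positional argument.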
[14:41] xnox: just download the .zip or use `charm pull ~containers/kubernetes-master`
[14:42] it's missing resources and binaries for my architecture, so i need to modify it and submit merge proposals back. But also test before I do all that.
[14:42] nope
[14:42] what do you mean nope?
[14:43] resources aren't delivered in the source code
[14:43] you can deploy kubernetes-master from the store, and attach your kubernetes resource during that time from a local mirror
[14:43] =)
[14:43] sure, but i want to fix it for everyone on s390x, not just me.
[14:44] xnox: well, that's a slight change, but it's on our roadmap. s390x support either just landed upstream or will soon
[14:44] once it does we can include it in the charm officially
[14:44] landed
[14:44] xnox: either way, you'll want to submit pull requests to the repo mentioned above
[14:45] ok
[14:45] xnox: as the charms are built from that source, using `charm build`
[14:45] and after charm build, i can test my merge proposal locally, right?
[14:45] (last time i did charms, there was no `charm build` and no reactive anything)
[14:46] xnox: modify the code; charm build; juju deploy builds/kubernetes-master --resource kubernetes=s390x.tar.gz; test
[14:46] xnox: I'm a bit busy at the moment, but i'd love to help you more
[14:46] ok, thanks for your time.
[14:46] Very interested in getting s390x support (and ppc64el) added to the charms
[14:46] we have two routes we're chasing for that, so we should chat more
[14:47] it appears my 'juju deploy' is using a proxy, although i don't have http_proxy or https_proxy set - any ideas?
[14:52] stokachu - both the snap and deb packages are exhibiting some odd behavior. it's no longer waiting for the deployment
[14:52] stokachu - i'll try to get more details about this and get a proper bug filed, but just FYI'ing you that there's one in flight incoming.
[14:52] lazyPower, thanks, im debugging right now
[14:52] oh has someone already reported?
[14:53] yea, arosales
[14:53] Zic - i've got a branch in flight, not going to be ready by EOD today.
[14:53] Zic - however, i will def get some additional testing on this and ping you in the PR so you can track its progress
[14:59] i am confused where and how a resource ends up packaged and uploaded into the charm store.
[15:02] xnox: how so? using the charm command?
[15:03] rick_h, but i'm failing to pinpoint where the resource is assembled/downloaded from, e.g. a makefile or some declarative syntax.
[15:05] xnox: https://jujucharms.com/docs/2.0/developer-resources ?
[15:06] aha! thank you
[15:08] rick_h, is there a `charm resource-get` command?
[15:08] lazyPower: in case you're interested, the bug I filed was https://github.com/conjure-up/spells/issues/45
[15:08] arosales - that's consistent with what's happened in the k8s flavors of spells
[15:08] thanks, i'll sub to this
[15:09] * arosales thought it was the deploy_done step, but it seems more prevalent if you're hitting it as well. Good news is stokachu is aware and working on a fix.
[15:10] xnox: so there's a resource-get hook tool for the charm itself. I don't see a charm pull-resource like there is a charm pull command for outside of a running unit :(
[15:10] rick_h, also that makes little sense... so the resource has binaries, but they are x86-64 binaries.
[15:10] rick_h, how does one envision having arch-specific resources? e.g. similar to how juju-tools are arch-specific.
[15:10] rick_h, can I register an s390x resource, amd64 resource, etc. with the same name?
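
marcoceppi's modify/build/test loop, spelled out (the clone path comes from the repo linked above; where `charm build` writes its output depends on JUJU_REPOSITORY, so the builds/ path is an assumption mirroring his command):

    git clone https://github.com/kubernetes/kubernetes
    cd kubernetes/cluster/juju/layers/kubernetes-master
    # ...edit the layer...
    charm build
    juju deploy builds/kubernetes-master --resource kubernetes=s390x.tar.gz
    # iterate: rebuild, then upgrade the deployed app in place
    juju upgrade-charm kubernetes-master --path builds/kubernetes-master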
[15:11] or shall I have everything inside the resource tarball and do it myself in the charm (and keep just one resource name)?
[15:11] xnox: well, we're doing one of two things
[15:11] it seems like it's already an API such that a tarball with top-level binaries must be supported
[15:12] xnox: the first is you can add a named resource like `kubernetes-s390x` and rename kubernetes to `kubernetes-x86`
[15:12] xnox: no, so I think you'd need to define them as different resources, upload them at publish time, and then the hooks would have to have an if block that states "if arch = xxx, resource-get xxx"
[15:12] eeewwww
[15:12] xnox: packing every arch in a tarball is expensive for everyone
[15:12] lazyPower: just saw your message, ACK, thanks for your work while travelling anyway... it's tough :)
[15:12] xnox: to be honest, we're moving to snaps for delivering kubernetes
[15:13] marcoceppi, renaming a resource breaks all deployments that use a custom resource.
[15:13] which handles architecture-specific payloads (and adds security / process confinement)
[15:13] xnox: well, we'd deprecate the kubernetes resource
[15:13] ack.
[15:13] keep it, but deliver an empty blob
[15:13] the charm would say: if the blob's empty, look for an arch-named resource key
[15:14] that way we wouldn't break existing custom stuff
[15:14] we can do that tastefully, but I think we're going to move away from charm resources
[15:14] about expense: the tarball is currently 78MB; if we explode it 3 times, it's ok if we download that but then remove the unused arches.
[15:14] ./foo ->
[15:15] ./x86_64/foo
[15:15] ./s390x/foo
[15:15] ./ppc64el/foo
[15:15] as a resource should be backwards-compatible enough. E.g. look for ./arch/foo, failing that take ./foo and run with it.
[15:16] lazyPower: do you have any recommendation for doing the work of the fix manually while waiting for the fix? my customer/devs work on their minikube locally while the cluster is b0rken, so it's fine, but I need to do some testing myself (like connecting our k8s cluster with our Nagios)
[15:17] Zic - you can manually sync those files, but i'd rather have this run during a charm upgrade so it's consistent, if that's ok
[15:17] Zic - otherwise i'm encouraging you to make a snowflake
[15:21] lazyPower: bonus question: how can I explain to my customer that this problem was not discovered in your testing environment? for him, the step-to-reproduce is just a reboot
[15:21] I don't want to put all the fault on your shoulders :/
[15:21] Zic - we're in beta and HA testing hasn't been part of the release process. #honestanswer
[15:21] what can cause this in our environment and not in yours?
[15:22] Zic - we've been more focused on single master with in-place upgrades; once that's rock solid, we were going to do HA testing.
[15:22] you just happened to beat us to it :)
[15:22] \o/
[15:22] Zic and thanks to that testing, we should get HA bits done before we're done with the upgrade testing that we're still working through. A lot of the plumbing is there
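
A hedged sketch of the empty-blob fallback marcoceppi outlines above, as it might look inside a charm hook (the resource names are illustrative; `resource-get` prints the local path of a fetched resource):

    #!/bin/bash
    # try the legacy resource first; an empty blob means "look for an arch-named key"
    RES=$(resource-get kubernetes || true)
    if [ ! -s "$RES" ]; then
        ARCH=$(dpkg --print-architecture)        # amd64, s390x, ppc64el, ...
        RES=$(resource-get "kubernetes-${ARCH}")
    fi
    tar -xzf "$RES" -C /usr/local/bin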
[15:22] but the minor release updates are different from major release updates
[15:23] so we're working through *that* story this cycle, i.e. 1.5.x => 1.6.x
[15:27] lazyPower, are you guys seeing errors like (ServerError(...), 'json: cannot unmarshal string into Go value of type uint64')
[15:27] arosales, ^
[15:29] stokachu: checking
[15:30] arosales, lazyPower, can you guys also try with `sudo snap refresh conjure-up --edge`
[15:30] i got some more debugging in there
[15:30] stokachu: not seeing that in my stack trace, where else should I look?
[15:30] arosales, ~/.cache/conjure-up/conjure-up.log
[15:33] stokachu: not finding that string in my conjure-up.log
[15:33] arosales, ok, give that --edge version a go
[15:33] * arosales now going to try with "sudo snap refresh conjure-up --edge"
[15:33] thanks
[15:40] hmm, what exactly does "juju enable-ha" do? especially as I'm on manual-cloud provisioning with no MaaS, does it pop a new juju controller machine?
[15:45] jcastro: hello, the k8s meeting we talked about last week is tomorrow, right?
[15:46] I should have time to come this time :)
[15:48] arosales, how's it going?
[15:48] Zic: on manual, nothing much; however, if you `juju switch controller; juju add-machine ssh:<user>@<host>; juju add-machine ssh:<user>@<host>; juju enable-ha --to <machine>,<machine>` you can get ha for the controller. However, there are some caveats with the manual provider
[15:49] marcoceppi: thanks! was what I'm looking for :)
[15:51] stokachu: hitting a snap issue atm
[15:51] working through that
[15:51] arosales, ok
[15:52] marcoceppi: I think we're going to deploy a MaaS anyway if we have other infra like the one we are preparing with CDK, as our interest in Juju will also be in OpenStack in the coming months
[15:52] Zic: I think it'll help with a lot of the headaches you've had
[15:56] the main reason I didn't deploy a MaaS was because we have a kind of product internally developed, but with fewer features, and for only one infra; even if MaaS has more features, it's a bit redundant (and our internal system is so strongly tied to other tools we use here...)
[15:56] but as we're planning to deploy more CDK, and test Juju for OpenStack in the coming months... I will give it a try :)
[16:01] stokachu: got a stack at a different point now
[16:01] http://paste.ubuntu.com/23948408/
[16:01] arosales, paste ~/.cache/conjure-up/conjure-up.log
[16:02] when having reset:false in the bundletester yaml manifest, are charms already in the model supposed to be upgraded if the test is running a newer version of the charms, or not?
[16:02] i suppose this is up to amulet?
[16:03] Zic: sure, understood. If that system has an api, you could probably bridge the two by adding your thing as a cloud provider, if you're comfortable with golang
[16:03] stokachu: http://paste.ubuntu.com/23948419/
[16:03] stokachu: odd, it's calling out not registering a controller
[16:03] conjure should make one for me if I don't have one, but I'm going to bootstrap and try again just for a data point
[16:04] arosales, yea, it can't find your aws creds for some reason
[16:04] do you have a ~/.local/share/juju/credentials.yaml file?
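
marcoceppi's manual-provider recipe above, with the elided arguments filled in as placeholders (the addresses and machine numbers are illustrative; with the default of three controllers, the two new machines join the existing machine 0):

    juju switch controller
    juju add-machine ssh:ubuntu@10.0.0.11    # enlists as machine 1
    juju add-machine ssh:ubuntu@10.0.0.12    # enlists as machine 2
    juju enable-ha --to 1,2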
[16:06] stokachu: http://paste.ubuntu.com/23948429/ <--- with a controller bootstrapped
[16:06] marcoceppi: Go is one of my "needed skills of 2017", as I'm only skilled in C today (and Bash/Python of course, but I do not place Go in the scripting category)
[16:06] arosales, ok, remove ~/.local/share/juju/credentials.yaml
[16:06] stokachu: and yes, I have a ~/.local/share/juju/credentials.yaml file
[16:06] ?
[16:07] arosales, look in that file: is one of the accounts for aws incomplete?
[16:07] I still get a stack when I conjure-up with a controller ready
[16:07] i wonder if we're failing on that
[16:07] I bootstrapped just fine
[16:07] arosales, pastebin ~/.cache/conjure-up/conjure-up.log
[16:08] http://paste.ubuntu.com/23948438/
[16:08] arosales, ah! ok, that was the bug i was wrestling with
[16:08] arosales, i pushed an update: `sudo snap refresh conjure-up --edge`
[16:08] there is an issue with the constraints parameters we're passing
[16:09] snap refresh conjure-up --edge
[16:09] yea
[16:09] previously didn't give me an update
[16:09] said I was at the latest
[16:09] ran it again . . .
[16:09] what about now?
[16:09] and looks to be downloading
[16:09] arosales, should be rev 56
[16:09] snap refresh conjure-up --edge
[16:09] conjure-up (edge) 2.1.0 from 'canonical' refreshed
[16:09] any way to tell which rev?
[16:09] what's snap list show?
[16:09] conjure-up 2.1.0 56 canonical classic
[16:09] yea
[16:09] rev 56
[16:09] try that
[16:10] ok
[16:11] marcoceppi: in the first stage of Go, I preferred Rust as it's more like C (no garbage collector or runtime), but Go is everywhere today, so I'm beginning to train on it :)
[16:11] stokachu: good, it is now waiting for applications to deploy
[16:11] still getting an odd message about the size of my terminal though
[16:12] arosales, ok good, yea, need to debug the constraints
[16:12] but if the spell works . . . .
[16:12] arosales, yea, you gotta be 134x42
[16:12] arosales, you using gnome-terminal?
[16:12] whatever ships with xenial
[16:13] gnome 3.18.3
[16:13] arosales, so if you go to the menu item 'Terminal' and select 132x43
[16:13] that's what we test the UI on
[16:13] that size
[16:13] I def have a terminal > 132x43
[16:14] hmm ok, i just tested it and it works as expected
[16:14] arosales, those are rows and columns
[16:14] nothing to do with the resolution
[16:15] but I did go ahead and try to select Terminal --> 132x43 and then `conjure-up bigdata` and I still get the warning
[16:15] stokachu: no stack, and waiting for the deploy though . . .
[16:15] arosales, ok cool
[16:19] stokachu: got to relocate
[16:19] but looks to be coming up
[16:19] arosales, i'll be here
[16:19] arosales, cool, thanks
[16:19] not sure how conjure will handle a suspend
[16:19] we'll see
[16:19] getting kicked out of this room
[16:19] are you mid-deploy?
[16:19] thanks for the speedy work on the bug
[16:19] np
[16:19] yes, deploying to aws atm. . . .
[16:20] ok yea, that'd be interesting to see
[16:46] gnuoy, i'm having trouble finding the ceph nrpe scripts. Do you know where they are?
[17:14] cory_fu: kwmonroe: any chance I can get write access/collab to https://github.com/juju/juju-crashdump when you all get a chance?
[17:15] lutostag: You should already have admin access. Let me get you the invite link again
[17:16] lutostag: https://github.com/juju/juju-crashdump/invitations
[17:17] cory_fu: indeed, thanks!
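
The refresh-and-verify steps stokachu walks arosales through above, in one place (snap CLI behavior as of the snapd in xenial-updates is assumed):

    sudo snap refresh conjure-up --edge   # pull the latest edge build
    snap list conjure-up                  # the Rev column should show 56 after the fix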
[17:55] stokachu: conjure-down: command not found — how do I destroy the deployment?
=== frankban is now known as frankban|afk
[18:02] catbus1: all conjure-down does is 'juju destroy-model -y' under the hood. If you can tell which model it created, you can do that and nothing else needs to happen
[18:03] mmcc: got it, thanks.
=== frankban|afk is now known as frankban
=== frankban is now known as frankban|afk
[19:36] help please: juju ssh nova-compute/0 is failing with an ssh key failure… however, ssh-keygen to remove the line can't find the ssh_known_hosts file listed.
[19:37] where would it reside?
=== scuttle|afk is now known as scuttlemonkey
[19:43] catbus1, did you install via snap?
[19:43] stokachu: no, via apt; added ppa:conjure-up/next first. 16.04
[19:44] stokachu: the snap install with the --classic flag doesn't work, it says the --classic flag is unknown
[19:44] catbus1, ok, remove that package and do `sudo snap install conjure-up --classic --beta`
[19:44] catbus1, you need to be using snapd 2.21
[19:44] which is in xenial-updates
[19:45] hm, snap is not installed on the node.
[19:45] ok, I will remove conjure-up and start from snap.
[19:45] catbus1, thanks
[19:47] stokachu: to confirm, by snap 2.21, you mean the 'snap' package, or 'snapd'? I do have snapd and snap-confine 2.21 installed.
[19:47] catbus1, yea, snapd version 2.21
[19:49] weird, snap install conjure-up with the --classic flag works now.
[19:50] catbus1, :)
[19:52] stokachu: conjure-up is installed fine. Will redeploy openstack soon.
[19:52] catbus1, \o/
[19:52] catbus1, i'll be around, lemme know how it goes
[19:52] stokachu: Thank you!
[21:16] stokachu: Running 'conjure-up' says -bash: /usr/bin/conjure-up: No such file or directory
[21:17] conjure-up 2.1.0 56 canonical classic
[21:29] it works on another machine. reinstalling
[21:37] catbus1, you probably need to log out and back in
[21:37] if you previously apt removed the deb package
[21:38] got it
[22:56] anyone played with running Nexus3 in a container?
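
catbus1's apt-to-snap migration above, end to end. The [21:16] "No such file or directory" fits bash's command cache still pointing at the removed /usr/bin/conjure-up; `hash -r` (or logging out and back in, as stokachu suggests) clears it. The PPA and channel names are from the log; the spell name is an assumption:

    sudo apt remove conjure-up                       # drop the old deb from ppa:conjure-up/next
    sudo snap install conjure-up --classic --beta    # needs snapd >= 2.21 from xenial-updates
    hash -r                                          # clear bash's cached command path
    conjure-up openstack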