/srv/irclogs.ubuntu.com/2017/02/07/#juju.txt

stokachucatbus1, i should be around for a bit just ping me if you come back to it00:46
stokachuballoons, around?01:45
pranav_Hi. I have a question regarding Cinder charm.06:06
pranav_If i locally modify the cinder.conf through the shell and then modify config through the juju-gui06:07
pranav_the local changes get overridden. Any workaround for this?06:07
junaidalipranav_: I haven't used juju gui for deployment. Have you tried modifying the template file in /var/lib/juju/agents/unit-cinder-0/charm/templates/ and then writing changes using gui?07:13
pranav_junaidali: No haven't tried that. We tried to see if any relation hooks get fired on modification but no good. Let me check the templates07:20
pranav_One question, to modify the template file our application has to be running in the same unit, correct? Or is there a way to change the templates via relations?07:22
junaidaliimo, if you want some changes to be retained, you probably look for changing template file because whenever you change some config, all the template files will be re-written07:36
junaidaliif you have more than one unit of the charm, update the template file in each unit07:37
junaidalishould probably*07:38
pranav_I see. In which case our application always has to co-exist with the cinder charm.07:41
junaidalisorry, just to correct what i said earlier, whenever we change any config, all the file are re-written **from the template file**07:43
junaidalipranav_:07:43
junaidalifiles*07:44
pranav_Cool! we are gonna try out changing the templates, which i feel should work for us.07:50
pranav_Thanks for the help Junaid :)07:51
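The behaviour junaidali describes above can be sketched in a few lines of illustrative Python: charms re-render config files from their templates on every config change, so hand-edits to the rendered file (e.g. cinder.conf) are lost, while edits to the template survive. The `render` helper here is hypothetical, not the charm's actual templating code (real charms render from template files under `charm/templates/`):

```python
# Illustrative only: why hand-edits to a rendered config file are lost.
# On every config change the charm re-renders the file from its template,
# so only template edits survive.

def render(template, config):
    """Re-create the config file text from the template."""
    return template.format(**config)

template = "[DEFAULT]\nvolume_driver = {driver}\n"

# render once, then hand-edit the *output* (like editing cinder.conf):
rendered = render(template, {"driver": "lvm"})
rendered += "my_local_tweak = true\n"

# a config change via the GUI re-renders -- the hand-edit is gone:
rendered = render(template, {"driver": "ceph"})
assert "my_local_tweak" not in rendered

# editing the *template* instead makes the change persist:
template += "my_local_tweak = true\n"
rendered = render(template, {"driver": "ceph"})
assert "my_local_tweak" in rendered
```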
=== aluria` is now known as aluria
anrahHi! I'm starting to write tests to my charms, however I seem to have a problem:08:33
anrah2017-02-07 10:32:51 Error getting env api endpoints, env bootstrapped?08:33
anrah2017-02-07 10:32:51 Command (juju api-endpoints -e localhost-localhost:admin/default) Output:08:33
anrahhttp://paste.ubuntu.com/23946536/08:35
anrahjuju version is 2.0.208:35
anrahand amulet is 1.18.208:37
anrahsomehow it looks like the amulet? is using old juju commands that are not available on 2.0 ?08:37
junaidalianrah: what's the version of juju-deployer you have?08:47
anrah0.6.408:48
anrahI upgraded it but no help08:51
junaidaliupgraded to which version? i think juju-deployer doesn't yet support 2.0. You can get a 2.0-supported package from Tim Van Steenburgh's PPA https://launchpad.net/~tvansteenburgh/+archive/ubuntu/ppa08:54
anrah0.10.0 is the new version08:54
junaidalioh okay, you've the newest version..08:54
anrahI think the problem is not with juju-deployer but at the very first lines where amulet is trying to get information about the controller08:55
anrahas it tries to run this command: Command (juju api-endpoints -e localhost-localhost:admin/default)08:56
anrahand there is no juju api-endpoints command on juju 2.008:56
junaidaliyes, you're right08:56
anrahso.. Amulet is useless with Juju 2.0 ?08:57
junaidaliwait, someone will help you. It is probably late for many people here09:03
anrahThanks!09:05
anrahI think I mixed the python versions.. As amulet uses python3 and juju-deployer python2 so had old version of the juju-deployer on python2 packages09:19
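anrah's root cause, mixed interpreter packages, can be checked with a small probe: pip installs are per-interpreter, so upgrading juju-deployer for python2 leaves the copy python3 (which amulet runs under) imports untouched. This is a hypothetical helper, not part of amulet; the `importlib.metadata` probe needs Python 3.8+, so a python2 interpreter would simply report None here (it would need `pkg_resources` instead):

```python
# Sketch: ask a given interpreter which version of a package it sees.
# pip installs are per-interpreter, so python2 and python3 can hold
# different versions of the same package (e.g. juju-deployer).
import subprocess

def pkg_version(python, package):
    """Return the version `python` would import for `package`, or None."""
    probe = ("import importlib.metadata as m; "
             "print(m.version({!r}))".format(package))
    out = subprocess.run([python, "-c", probe],
                         capture_output=True, text=True)
    return out.stdout.strip() or None

# e.g. compare pkg_version("python2", "juju-deployer")
#          and pkg_version("python3", "juju-deployer")
```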
lazyPowerZic heyo, i'm starting to hack on that issue you uncovered last night now.09:22
lazyPowerZic - should have something for you to look at before we head into the kubernetes hacking sessions later today. This might take a bit, but i'm hoping to have something for you to set up in a staging capacity shortly.09:23
ZiclazyPower: hello, I just came into my office, you're so synced :)09:25
lazyPowerZic :) Blame it on Belgium09:25
ZiclazyPower: what jetlag did it cause you? did you come from London/Canonical?09:26
lazyPowerZic - i came from KC MO, so much further than London10:24
ZiclazyPower: so "jetlag" was the right word :D I was not so sure "jetlag" is applicable if you just... take the boat or the train :)10:37
lazyPowerindeed. i'll be leaving tomorrow, so now that i'm adjusted to the timezone, it's time to leave soon10:41
lazyPower@stub  hey stub, question for you re: layer-leadership if you're around10:42
stubyo10:42
lazyPowerstub, can i just invoke it once and give it a dict? or do i need to call it everytime with key=val pairs for each data point i want the leader to push?10:42
lazyPower       charms.leadership.leader_set({'service_key': '/etc/kubernetes/serviceaccount.key', 'foo': 'bar'})10:43
stubthat method accepts keyword arguments or a dictionary, so either leader_set({'foo': 'bar','baz':False}) or leader_set(foo='bar', baz=False)10:43
lazyPoweroh fantastic10:44
lazyPowerstub <310:44
stubalthough now I look at the signature, leader_set(settings='hello') will fail, so don't call your leadership setting 'settings' until I fix that :-P10:46
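stub's point about the signature can be reproduced with a toy version of leader_set (illustrative only; the real charmhelpers implementation shells out to the leader-set hook tool rather than returning a dict):

```python
# Toy re-creation of the leader_set signature described above: it
# accepts either a dict or keyword arguments, which is why a key
# literally named "settings" collides with the positional parameter.

def leader_set(settings=None, **kwargs):
    data = dict(settings or {})  # dict("hello") raises ValueError
    data.update(kwargs)
    return data  # real code would now run: leader-set k=v ...

# both call styles are equivalent:
assert leader_set({"foo": "bar", "baz": False}) == leader_set(foo="bar", baz=False)
```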
=== rogpeppe1 is now known as rogpeppe
ZiclazyPower: as you are on Europe time now, I will be available until 19:00 if you need any tests :)11:37
lazyPowerZic - i'm just now deploying the first attempt at fixing, currently waiting on convergence11:37
lazyPowerZic - but as i'm at a conference its hard to dedicate any reasonable amount of time, so thank you for being patience11:37
lazyPower*patient11:37
ZicI understand, no worries :)11:37
Zicmy customer continues to adapt their code to k8s with minikube installed locally for now11:38
magicalt1outkwmonroe: have you come up with an apachecon talk yet?13:09
magicalt1outdon't want to end up submitting something similar to you guys13:09
=== magicalt1out is now known as magicaltrout
xnoxlet the fun begin14:12
xnox$ juju deploy easyrsa14:12
xnoxERROR cannot resolve URL "cs:easyrsa": charm or bundle not found14:12
xnoxjuju deploy cs:~containers/easyrsa-6 works.14:14
marcoceppixnox: yup, it's not yet in a promoted namespace14:20
xnoxok14:20
marcoceppixnox: you can expect it to complete review in time for the 1.6.0 release of kubernetes14:20
xnoxworking of instructions from https://jujucharms.com/u/containers/easyrsa/14:21
xnoxI cannot do "juju deploy tls-client" and the relation name is wrong too?14:21
marcoceppitls-client is a place holder for connecting to something that consumes tls14:21
marcoceppixnox: what's your end goal?14:21
xnoxcurrently testing that this charm (by itself) works on s390x.14:22
xnoxassuming since i have managed to deploy easyrsa, it does.14:22
marcoceppixnox: you'll probably want something that connects14:22
marcoceppiit's python so it'll probably work14:23
marcoceppiI don't think it uses anything that's not interpreted14:23
xnoxi am getting a lot of _juju_complete_2_0: command not found14:23
marcoceppixnox: where? in your terminal?14:24
xnoxyes14:24
xnoxwhen i try to tab complete things14:25
xnox(i'm running zesty)14:25
marcoceppixnox: are you using the snap?14:25
xnoxno14:25
xnoxwhatever is in zesty as deb14:25
xnoxwhere is source code for https://jujucharms.com/u/containers/kubernetes-master/11 ?14:36
marcoceppixnox: https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/layers/kubernetes-master14:39
xnoxmarcoceppi, how would I find that from the charmstore?14:39
marcoceppixnox: you wouldn't unless it's in the readme14:40
xnoxmarcoceppi, but that's the source code, rather than published charm revision?14:41
xnoxdoes that correspond to the v11 of the charm?14:41
marcoceppixnox: no, that's the upstream code14:41
xnoxmarcoceppi, how can I clone the actual charm?14:41
marcoceppixnox: just download the .zip or use `charm pull ~containers/kubernetes-master`14:41
xnoxit's missing resources and binaries for my architecture, so i need to modify it and submit merge proposals back. But also test before I do all that.14:42
marcoceppinope14:42
xnoxwhat do you mean nope?14:42
marcoceppiresources aren't delivered in the source code14:43
marcoceppiyou can deploy kubernetes-master from the store, and attach your kubernetes resource during that time from a local mirror14:43
xnox=)14:43
xnoxsure, but i want to fix it for everyone on s390x, not just me.14:43
marcoceppixnox: well, that's a slight change, but it's on our roadmap. s390x either just landed in upstream for support or will be soon14:44
marcoceppionce it does we can include it in the charm officially14:44
xnoxlanded14:44
marcoceppixnox: either way, you'll want to submit pull requests to the repo mentioned above14:44
xnoxok14:45
marcoceppixnox: as the charms are built from that source, using `charm build`14:45
xnoxand after charm build, i can test my merge proposal locally, right?14:45
xnox(last time i did charms, there was no `charm build` and no reactive anything)14:45
marcoceppixnox: modify the code; charm build, juju deploy builds/kubernetes-master --resource kubernetes=s390x.tar.gz; test14:46
marcoceppixnox: I'm a bit busy at the moment, but i'd love to help you more14:46
xnoxok, thanks for your time.14:46
marcoceppiVery interested in getting s390x support (and ppc64el) added to the charms14:46
marcoceppiwe have two routes we're chasing for that, so we should chat more14:46
admcleodit appears my 'juju deploy' is using a proxy, although i don't have http_proxy or https_proxy set - any ideas?14:47
lazyPowerstokachu - both the snap and deb packages are illustrating some odd behavior. its no longer waiting for the deployment14:52
lazyPowerstokachu - i'll try to get more details about this and get a proper bug filed, but just FYI'ing you that there's one in flight incoming.14:52
stokachulazyPower, thanks im debugging right now14:52
lazyPoweroh has someone already reported?14:52
stokachuyea arosales14:53
lazyPowerZic - i've got a branch in flight, not going to be ready by EOD today.14:53
lazyPowerZic - however, i will def get some additional testing on this and ping you in the PR so you can track its progress14:53
xnoxi am confused where and how a resource ends up packaged and uploaded into the charm store.14:59
rick_hxnox: how so? using the charm command?15:02
xnoxrick_h, but i'm failing to pinpoint where the resource is assembled/downloaded from, e.g. a makefile or some declarative syntax.15:03
rick_hxnox: https://jujucharms.com/docs/2.0/developer-resources ?15:05
xnoxaha! thank you15:06
xnoxrick_h, is there $ charm resource-get command?15:08
arosaleslazyPower: in case you're interested, the bug I filed was https://github.com/conjure-up/spells/issues/4515:08
lazyPowerarosales - thats consistent with whats happened in the k8s flavors of spells15:08
lazyPowerthanks, i'll sub to this15:08
* arosales thought it was the deploy_done step, but seems more prevalent if you're hitting it as well. Good news is stokachu is aware and working on a fix.15:09
rick_hxnox: so there's a resource-get hook for the charm itself. I don't see a charm pull-resource like there is a charm pull command for outside of a running unit :(15:10
xnoxrick_h, also that makes little sense...... so the resource has binaries, but they are x86-64 binaries.15:10
xnoxrick_h, how does one envision having arch specific resources? e.g. similar to how juju-tools are arch specific.15:10
xnoxrick_h, can I register s390x resource; amd64 resource; etc? with the same name?15:10
xnoxor shall I have everything inside the resource tarball and do it myself in the charm (and keep just one resource name)15:11
marcoceppixnox: well we're doing one of two things15:11
xnoxit seems like it's already an API that a tarball with toplevel binaries must be supported15:11
marcoceppixnox: the first is you can add a named resource like `kubernetes-s390x` and rename kubernetes to `kubernetes-x86`15:12
rick_hxnox: no, so I think you'd need to define them as different resources and upload them at publish time and then the hooks would have to have an if block that states "if arch = xxx resource-get xxx"15:12
xnoxeeewwww15:12
marcoceppixnox: packing every arch in a tarball is expensive for everyone15:12
ZiclazyPower: just saw your message, ACK, thanks for your work while travelling anyway... it's tough :)15:12
marcoceppixnox: to be honest we're moving to snaps for delivering kubernetes15:12
xnoxmarcoceppi, renaming a resource breaks all deployments that use custom resource.15:13
marcoceppiwhich handles architecture specific payloads (and adds security / process confinement)15:13
marcoceppixnox: well, we'd deprecate the kubernetes resource15:13
xnoxack.15:13
marcoceppikeep it, but deliver an empty blob15:13
marcoceppithe charm would say: if the blob's empty, look for an arch-named resource key15:13
marcoceppithat way we wouldn't break existing custom stuff15:14
marcoceppiwe can do that tastefully, but I think we're going to move away from charm resources15:14
xnoxabout expense: the tarball is currently 78MB; if we explode it 3 times, it's ok if we download that but then remove unused arches.15:14
xnox./foo ->15:14
xnox./x86_64/foo15:15
xnox./s390x/foo15:15
xnox./ppc64el/foo15:15
xnoxas a resource, that should be backwards-compatible enough. E.g. look for ./arch/foo, failing that take ./foo and run with it.15:15
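The lookup order xnox proposes — prefer ./&lt;arch&gt;/foo inside the unpacked resource, fall back to the top-level ./foo — is a few lines in a hook. A small sketch (hypothetical helper, not current charm code):

```python
# Sketch of the backwards-compatible lookup proposed above: inside an
# unpacked resource, prefer ./<arch>/<name>, fall back to ./<name>.
import os
import platform

def find_binary(root, name, arch=None):
    """Return the path to `name` for `arch`, preferring a per-arch dir."""
    arch = arch or platform.machine()  # e.g. x86_64, s390x, ppc64le
    for candidate in (os.path.join(root, arch, name),
                      os.path.join(root, name)):
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(name)
```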
ZiclazyPower: do you have any recommendation to do manually the work of the fix while waiting for the fix? my customer/devs works on their minikube locally while the cluster is b0rken so it's fine, but I need myself to do some testing (like connecting our k8s cluster with our Nagios)15:16
lazyPowerZic - you can manually sync those files, but i'd rather have this run during a charm upgrade so it's consistent, if that's ok15:17
lazyPowerZic - otherwise i'm encouraging you to make a snowflake15:17
ZiclazyPower: bonus question, how can I explain to my customer that this problem was not discovered in your testing environment, as for him, the step-to-reproduce is just a reboot15:21
ZicI don't want to put all the fault on your shoulders :/15:21
lazyPowerZic - we're in beta and HA testing hasn't been part of the release process. #honestanswer15:21
Zicwhat can cause this in our environment and not in yours?15:21
lazyPowerZic - we've been more focused on single master with in-place upgrades, once that's rock solid, we were going to do HA testing.15:22
lazyPoweryou just happened to beat us to it :)15:22
Zic\o/15:22
lazyPowerZic and thanks to that testing, we should get HA bits done before we're done with the upgrade testing that we're still working through. A lot of the plumbing is there15:22
lazyPowerbut the minor release updates is different from major release updates15:22
lazyPowerso we're working through *that* story this cycle, which is 1.5.x => 1.6.x15:23
stokachulazyPower, are you guys seeing errors like (ServerError(...), 'json: cannot unmarshal string into Go value of type uint64')15:27
stokachuarosales, ^15:27
arosalesstokachu: checking15:29
stokachuarosales, lazyPower, can you guys also try with `sudo snap refresh conjure-up --edge`15:30
stokachui got some more debugging in there15:30
arosalesstokachu: not seeing that in my stack trace, where else should I look?15:30
stokachuarosales, ~/.cache/conjure-up/conjure-up.log15:30
arosalesstokachu: not finding that string in my conjure-up.log15:33
stokachuarosales, ok give that --edge version a go15:33
* arosales now going to try with "sudo snap refresh conjure-up --edge15:33
stokachuthanks15:33
Zichmm, what exactly does "juju enable-ha" do? especially as I'm on manual-cloud provisioning with no MaaS, does it pop a new juju controller machine?15:40
Zicjcastro: hello, the k8s meeting we talked about last week is tomorrow, right?15:45
ZicI should have time to come this time :)15:46
stokachuarosales, how's it going?15:48
marcoceppiZic: on manual nothing much, however if you `juju switch controller; juju add-machine <user>@<ip>; juju add-machine <user>@<ip>; juju enable-ha --to <machine-id1>,<machine-id2>;` you can get HA for the controller. However, there are some caveats with the manual provider15:48
Zicmarcoceppi: thanks! was wat I'm looking for :)15:49
Zicwhat*15:49
arosalesstokachu: hitting a snap issue atm15:51
arosalesworking through that15:51
stokachuarosales, ok15:51
Zicmarcoceppi: I think we're going to deploy a MaaS anyway if we have other infra like the one we are preparing with CDK, as our interest in Juju will also be on OpenStack in the coming months15:52
marcoceppiZic: I think it'll help with a lot of the headaches you've had15:52
Zicthe main reason I didn't deploy a MaaS was because we have a kind of like prudct internally-developed, but with less features, and for only one infra, even if MaaS has more features, it's a bit redundant (and our internal system is so strongly tied to other tools we used here...)15:56
Zicbut as we're planning to deploy more CDK, and test Juju for OpenStack in the coming months... I will give it a try :)15:56
Zics/prudct/product/ what happened to my keyboard! :]15:57
arosalesstokachu: got a stack at a different point now16:01
arosaleshttp://paste.ubuntu.com/23948408/16:01
stokachuarosales, paste ~/.cache/conjure-up/conjure-up.log16:01
SimonKLBwhen having reset:false in the bundletester yaml manifest, are charms already in the model supposed to be upgraded if the test is running a newer version of the charms or not?16:02
SimonKLBi suppose this is up to amulet?16:02
marcoceppiZic: sure, understood. If that system has an api, you could probably bridge the two, by adding your thing as a cloud provider, if you're comfortable with golang16:03
arosalesstokachu: http://paste.ubuntu.com/23948419/16:03
arosalesstokachu: odd, it's calling out not registering a controller16:03
arosalesconjure should make one for me if I don't have one, but I'm going to bootstrap and try again just for a data point16:03
stokachuarosales, yea it can't find your aws creds for some reason16:04
stokachudo you have ~/.local/share/juju/credentials.yaml file?16:04
arosalesstokachu: http://paste.ubuntu.com/23948429/ <--- with a controller bootstrapped16:06
Zicmarcoceppi: Go is one of my "needed skills of 2017", as I'm only skilled in C today (and Bash/Python of course, but I do not place Go in the scripting category)16:06
stokachuarosales, ok remove ~/.local/share/juju/credentials.yaml16:06
arosalesstokachu: and yes I have a ~/.local/share/juju/credentials.yaml file16:06
arosales?16:06
stokachuarosales, look in that file is one of the accounts for aws incomplete?16:07
arosalesI still get a stack when I conjure-up with a controller ready16:07
stokachui wonder if we're failing on that16:07
arosalesI bootstrapped just fine16:07
stokachuarosales, pastebin ~/.cache/conjure-up/conjure-up.log16:07
arosaleshttp://paste.ubuntu.com/23948438/16:08
stokachuarosales, ah! ok that was the bug i was wrestling16:08
stokachuarosales, i pushed an update `sudo snap refresh conjure-up --edge`16:08
stokachuthere is an issue with the constraints parameters we're doing16:08
arosalessnap refresh conjure-up --edge16:09
stokachuyea16:09
arosalespreviously didn't give me an update16:09
arosalessaid I was at the latest16:09
arosalesran it again . . .16:09
stokachuwhat about now?16:09
arosalesand looks to be downloading16:09
stokachuarosales, should be rev5616:09
arosales snap refresh conjure-up --edge16:09
arosalesconjure-up (edge) 2.1.0 from 'canonical' refreshed16:09
arosalesany way to tell which rev?16:09
stokachuwhats snap list show?16:09
arosalesconjure-up  2.1.0    56   canonical  classic16:09
stokachuyea16:09
arosalesrev 5616:09
stokachutry that16:09
arosalesok16:10
Zicmarcoceppi: when I first looked at Go, I preferred Rust as it's more like C (no garbage collector or runtime), but Go is everywhere today, so I'm beginning to train at it :)16:11
arosalesstokachu: good that it is now waiting for applications to deploy16:11
arosalesstill getting an odd message about the size of my terminal though16:11
stokachuarosales, ok good, yea need to debug the constraints16:12
arosalesbut if the spell works . .  . .16:12
stokachuarosales, yea you gotta be 134x4216:12
stokachuarosales, you using gnome-terminal16:12
arosaleswhat ever ships with xenial16:12
arosalesgnome 3.18.316:13
stokachuarosales, so if you go to the menu item 'Terminal' and select 132x4316:13
stokachuthat's what we test the UI on16:13
stokachuthat size16:13
arosalesI def have a terminal > 132x4316:13
stokachuhmm ok i just tested it and it works as expected16:14
stokachuarosales, those are rows and columns16:14
stokachunothing to do with the resolution16:14
arosalesbut I did go ahead and try to select Terminal --> 132x43 and then `conjure-up bigdata` and I still get the warning16:15
arosalesstokachu: no stack and waiting for the deploy though . .  .16:15
stokachuarosales, ok cool16:15
arosalesstokachu: got to relocate16:19
arosalesbut looks to be coming up16:19
stokachuarosales, ill be here16:19
stokachuarosales, cool thanks16:19
arosalesnot sure how conjure will handle a suspend16:19
arosaleswe'll see16:19
arosalesgetting kicked out of this room16:19
stokachuare you  mid deploy?16:19
arosalesthanks for the speedy work on the bug16:19
stokachunp16:19
arosalesyes deploying to aws atm. . . .16:19
stokachuok yea that'd be interesting to see16:20
cholcombegnuoy, i'm having trouble finding the ceph nerpe scripts.  Do you know where they are?16:46
lutostagcory_fu: kwmonroe: any chance I can get write access/collab to https://github.com/juju/juju-crashdump when you all get a chance?17:14
cory_fulutostag: You should already have admin access.  Let me get you the invite link again17:15
cory_fulutostag: https://github.com/juju/juju-crashdump/invitations17:16
lutostagcory_fu: indeed, thanks!17:17
catbus1stokachu: conjure-down command not found, how do I destroy the deployment?17:55
=== frankban is now known as frankban|afk
mmcccatbus1: all conjure-down does is 'juju destroy-model -y' under the hood. If you can tell which model it created, you can run that yourself and nothing else needs to happen18:02
catbus1mmcc: got it, thanks.18:03
=== frankban|afk is now known as frankban
=== frankban is now known as frankban|afk
hmlhelp please: juju ssh nova-compute/0 is failing with an ssh key failure… however, ssh-keygen run to remove the offending line can't find the ssh_known_hosts file listed.19:36
hmlwhere would it reside?19:37
=== scuttle|afk is now known as scuttlemonkey
stokachucatbus1, you install via snap?19:43
catbus1stokachu: no, via apt, added ppa:conjure-up/next first. 16.0419:43
catbus1stokachu: the snap install with the --classic flag doesn't work, it says --classic flag is unknown19:44
stokachucatbus1, ok if you remove that package and do `sudo snap install conjure-up --classic --beta`19:44
stokachucatbus1, you need to be using snap 2.2119:44
stokachuwhich is in the xenial-updates19:44
catbus1hm, snap is not installed on the node.19:45
catbus1ok, I will remove conjure-up and start from snap.19:45
stokachucatbus1, thanks19:45
catbus1stokachu: to confirm, by snap 2.21, you mean 'snap' package, or 'snapd'? I do have snapd and snap-confine 2.21 installed.19:47
stokachucatbus1, yea snapd version 2.2119:47
catbus1weird, snap install conjure-up with --classic flag works now.19:49
stokachucatbus1, :)19:50
catbus1stokachu: conjure-up is installed fine. Will redeploy openstack soon.19:52
stokachucatbus1, \o/19:52
stokachucatbus1, ill be around lemme know how it goes19:52
catbus1stokachu: Thank you!19:52
catbus1stokachu: Running 'conjure-up' says -bash: /usr/bin/conjure-up: No such file or directory21:16
catbus1conjure-up 2.1.0 56 canonical classic21:17
catbus1it works on another machine. reinstalling21:29
stokachucatbus1, you probably need to logout and back in21:37
stokachuif you apt removed previously deb package21:37
catbus1got it21:38
stormmoreanyone played with running Nexus3 in a container?22:56

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!