/srv/irclogs.ubuntu.com/2016/09/23/#juju.txt

marcoceppiwell, don't use that one. Use the promulgated one ;)00:00
arosalesgah00:00
marcoceppieither way, that's all you need to do00:00
marcoceppiarosales: and I added this hook https://api.jujucharms.com/charmstore/v5/mariadb/archive/hooks/update-status00:00
arosalesmarcoceppi: thanks00:02
marcoceppiarosales: I'd be curious why there's a bigdata-dev version of a promulgated charm and if we can get those patches upstream00:02
arosalesI'll work with petevg to get his bigdata-dev/mariadb charm merged with the promulgated one00:02
arosalesmarcoceppi: petevg  was working on it for xenial s390 support00:02
marcoceppicool, I'll make sure it's obvious where the source is00:03
arosalesI think he was trying to get in contact with the mariadb maintainer to push his updates00:03
marcoceppiarosales: I'm in the mariadb-charmers team, and can help get fixes landed00:03
petevgYeah. The maintainer wasn't getting back to me.00:03
marcoceppiwell, I'm a maintainer (implicitly) and I'll listen to you00:04
marcoceppipetevg: ...for a price ;)00:04
marcoceppicharm teams ftw00:04
petevgmarcoceppi: the price is I don't wag my finger at you for ignoring me before :-p00:04
arosalesI think marcoceppi's currency is measured in volume00:04
marcoceppiit's measured in ABV00:05
arosalesah00:06
arosales:-)00:06
arosalesmarcoceppi: when is the update-status hook run?00:06
marcoceppiarosales: every 5 mins00:06
marcoceppigive or take the hook queue00:07
arosalesok00:07
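For reference, an update-status hook is just another executable under hooks/ that reports the workload's health with status-set; the sketch below is illustrative only, and the actual mariadb hook linked above may differ. The five-minute cadence marcoceppi mentions is the model default in Juju 2.x and can be tuned with the update-status-hook-interval model setting (assuming your release exposes that key).

    #!/bin/bash
    # hooks/update-status -- illustrative sketch, not the real mariadb hook
    if service mysql status >/dev/null 2>&1; then
        status-set active "ready"
    else
        status-set blocked "database service is not running"
    fi

    # adjust the cadence for the current model (Juju 2.x)
    juju model-config update-status-hook-interval=5m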
arosalesmarcoceppi: et al.00:07
arosalesdo you think mariadb-ghost bundle would be a better getting started bundle than wiki-simple?00:08
arosalesin regards to https://github.com/juju/docs/issues/138200:08
arosalesor now that you have these easy hooks to drop in, we could add them to wiki-simple00:10
marcoceppipetevg arosales I've updated the code hosting and bugs URL for the charm, pull requests welcome00:22
petevgmarcoceppi: awesome. I will put one together for you :-)00:22
marcoceppipetevg: prepared to be ignored ;)00:23
marcoceppipetevg: I care about mariadb because I want to write ONE mysql-base layer that I can base the oracle-mysql and mariadb charms on00:24
marcoceppisince they are like 95% identical00:24
marcoceppiand I use mariadb in production00:24
petevgMakes sense :-)00:26
marcoceppispeaking of playing with fire, I just ran juju upgrade-charm on that environment00:27
* marcoceppi crosses fingers00:27
arosalesmarcoceppi: in production00:28
arosalesgood luck00:31
arosalesof course it's in production on the first try, I forgot who I was chatting with00:31
marcoceppiwelp, didn't crash00:33
arosalesof course it didn't :-)00:55
huhaoranpweston, a pre-existing openstack: http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html, I am trying this02:26
=== gaughen_ is now known as gaughen
huhaoranpweston, sorry, I misunderstood you...02:41
=== rmcall_ is now known as rmcall
=== frankban|afk is now known as frankban
venom3Hello, we are trying to deploy the openstack-dashboard charm with the shared-db relation. It doesn't work, due to a network address issue (I guess).07:27
venom3In Percona the horizon user has IP 10.40.0.X07:28
venom3in local_settings.py there is another IP 10.40.103.X/2407:29
venom3so horizon fails authentication07:29
venom3and the db is not populated.07:29
venom3inside the horizon-hook i found07:31
venom3        try:07:31
venom3            # NOTE: try to use network spaces07:31
venom3            host = network_get_primary_address('shared-db')07:31
venom3but I wonder If network space is implemented for horizon07:32
venom3Any suggestion?07:32
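For context, the usual pattern around that line in charmhelpers-based OpenStack charms falls back to the unit's private address when network spaces (the network-get tool) are not available; this is a sketch of that common pattern, and the exact openstack-dashboard code may differ.

    from charmhelpers.core.hookenv import (
        network_get_primary_address,
        unit_get,
    )

    try:
        # NOTE: try to use network spaces
        host = network_get_primary_address('shared-db')
    except NotImplementedError:
        # providers/Juju versions without network-get fall back to the
        # unit's private address
        host = unit_get('private-address')

If spaces are in play, the shared-db endpoint can also be bound explicitly at deploy time, e.g. juju deploy openstack-dashboard --bind "shared-db=<some-space>" (Juju 2.x binding syntax).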
=== jamespag` is now known as jamespage
jamespagetvansteenburgh, https://code.launchpad.net/~james-page/juju-deployer/bug-1625797/+merge/30659509:05
jamespageI think that fixes beisner's issue - just testing now09:05
jamespagetvansteenburgh, needed a tickle for 1 and 2 compat afaict09:36
jamespageMachine vs machine09:37
jamespagebeisner, ok I think that fixes up 1.25 compat in juju-deployer for placements09:39
jamespage2.0 placement with v3 format is still broken - the placements don't accommodate machine 0 properly09:40
jamespagei.e. really we should add that prior to deploying any services, to ensure that it exists and we don't end up with machine 0 services being smudged with machine 1 services as it stands right now09:41
huhaoranHey, I have a silly question about Juju, :)10:07
huhaoranIs Juju suitable in a VM?10:08
babbageclunkhuhaoran: Not a silly question! :)10:09
babbageclunkhuhaoran: I think the answer is yes, but also a bit "it depends".10:09
babbageclunkBut you can definitely run juju in a VM.10:10
huhaoranbabbageclunk, Thanks10:10
aisraelhuhaoran, Juju is great with containers. Juju 2 has native support for lxd.11:10
aisraelIt depends on your use case, but the juju controller can run inside a VM and still deploy machines to physical hardware. You can even co-locate units inside the VM.11:11
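Concretely, a sketch of what that looks like with the Juju 2.0 CLI (names are illustrative, and the bootstrap argument order changed across the 2.0 betas, so check `juju help bootstrap` for your release):

    juju bootstrap localhost lxd-test   # controller in a local lxd container on the VM
    juju deploy mysql                   # new machine (container) for the unit
    juju deploy wordpress --to 0        # co-locate on an existing machine
    juju add-unit mysql --to lxd:0      # unit in a container on machine 0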
marcoceppijamespage: ugh, what a lame thing from the API rename11:30
jamespagemarcoceppi, there may be other impacts but I've not seen anything functionally broken with that fix in my testing12:16
marcoceppijamespage: it gets an initial +1 from me, I'll have tvansteenburgh look at it when he gets up12:18
jamespagemarcoceppi, that should at least get 1.25 support working again12:19
tvansteenburghjamespage, marcoceppi: looking now13:12
SimonKLBis it possible to retrieve the charm config with amulet?13:22
rock_Hi. I have a question. On Juju version 2.0-rc1-xenial-amd64 the "$juju config" command works but "$juju set-config" does not, while on Juju version 2.0-beta15-xenial-amd64 "$juju set-config" works but "$juju config" does not. So is Juju version 2.0-rc1-xenial-amd64 still under development?13:54
rick_h_rock_: sorry, the documentation there needs updating13:55
rick_h_set-config was replaced with just juju config13:55
rick_h_so juju config key=value13:55
rock_rick_h_: juju versions rc1, rc2 mean they are development versions?13:57
rick_h_rock_: no, rc1 is stable and we're doing bug fixes, but it's not development13:58
rick_h_rock_: that change was done in beta 18 and was carried to rc113:59
jrwrenrock_: rc = release candidate13:59
rock_rick_h_/jrwren: Oh. I have one more question. Do all juju 1.x versions have only one configuration-setting command, i.e. "$juju service set"? And in all juju 2.x versions, can we use the "$juju config" command?14:03
rick_h_rock_: the juju config command is only in 2.x14:07
rock_rick_h_: Hi. That is exactly what I am asking: "$juju config" will work in all 2.x versions, right? And "$juju service set" will work in all 1.x versions, right?14:09
rock_rick_h_: "$juju config" is not working on 2.0-beta15-xenial-amd64. It showed "command not recognized".14:12
rock_rick_h_: Actually, we developed a "cinder storage driver" charm. We are preparing the README.md file in a detailed manner. We tested our charm in different ways with different combinations.14:15
rock_rick_h_: So I want to know things clearly.14:17
tvansteenburghrock_: juju config works in 2.0-beta18 and forward14:18
tvansteenburghrock_: so yes, 'juju service set' for juju1, and 'juju config' for juju214:19
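In other words, roughly (the application name is illustrative, and exact command aliases vary by point release):

    # Juju 1.x
    juju get mysql                  # read settings
    juju set mysql key=value        # write a setting ('juju service set', as mentioned above, is equivalent)

    # Juju 2.x, beta18 and later
    juju config mysql               # read settings
    juju config mysql key=value     # write a setting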
rock_tvansteenburgh : Oh. Thanks.14:20
rock_rick_h_/jrwren: Thank you14:21
rick_h_rock_: sorry, was on the phone. Yes, what tvansteenburgh says. Thanks tim for the assist14:30
rock_rick_h_: No problem. Thank you.14:37
lazyPowerSimonKLB: what you'll typically see is that the test/bundle defines the configuration, and what we do is inspect the end-location file renderings, flags, etc. for the existence of those config values14:41
lazyPowerSimonKLB: but off the top of my head, I don't think amulet has any amenities to pull the charm config from the self.deployment object. I may be incorrect though14:41
autonomousehi, I posted this question to the internal juju channel without thinking about it before (so sorry about that, to whom it may concern), but I have a question about scopes that I'd really appreciate some help with. I put it here if anyone's interested in giving it a shot: http://askubuntu.com/questions/828732/can-anyone-explain-scopes-in-reactive-charms-to-me-please-juju-2-014:43
SimonKLBlazyPower: so usually you can access the config values through the charmhelpers package via the hookenv, but when running an amulet test that's going to be different, right?14:45
lazyPowercorrect, amulet is an external thing, it's poking at juju through the API and juju run14:46
SimonKLByea, perhaps there is a better way of solving it, but currently i have the ports available as config values, and i need to know them when testing if the service is up and running during the amulet test14:47
SimonKLBperhaps i should put some default ports in a python module and override it with the charm config?14:48
SimonKLBthat way i could still test stuff with the default settings and still let the user choose which ports it should run on14:48
SimonKLBbut it feels a bit messy14:49
tvansteenburghSimonKLB: i'd just do a little subprocess wrapper around `juju config`14:49
tvansteenburgharguably that would be a handy addition to amulet14:50
tvansteenburghSimonKLB: or you can just set the ports in the amulet test so you know what they are14:50
SimonKLBtvansteenburgh: yea, im just trying to refrain from having configuration options in multiple places14:51
tvansteenburghSimonKLB: but it's a test. it's reasonable to set some config and then act on those values14:53
SimonKLByea, that is the way im doing it right now, i was just curious if you could grab the default config values somehow and be rid of the extra configuration in the test14:55
tvansteenburghSimonKLB: well you can deploy without setting config and it'll use the defaults. to actually retrieve the defaults though, you'd need to use `juju config`14:57
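A wrapper along the lines tvansteenburgh suggests could look like this (a sketch; the helper name is made up, and the exact JSON layout of `juju config` output is worth checking once before relying on specific keys):

    import json
    import subprocess


    def charm_config(application):
        # read an application's current (or default) config via the juju 2.x CLI
        raw = subprocess.check_output(
            ['juju', 'config', '--format=json', application])
        return json.loads(raw.decode('utf-8'))

A test could then pull the configured or default port out of the returned structure instead of hard-coding it.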
SimonKLByea, it's the retrieving part im having trouble with, since i need some stuff from the config to access the charm - not sure if a subprocess wrapper or setting the config values i need "externally" in the test is cleaner14:59
tvansteenburghSimonKLB: if it were me i'd do the latter15:01
SimonKLBtvansteenburgh: thanks, ill go with that then!15:01
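For the record, setting the config in the test itself looks roughly like this with amulet (a sketch; the charm name and option are made up):

    import amulet

    d = amulet.Deployment(series='xenial')
    d.add('myservice', charm='cs:~me/myservice')
    # set the port explicitly so the test knows what to probe
    d.configure('myservice', {'http-port': 8080})
    d.setup(timeout=900)
    d.sentry.wait()

    unit = d.sentry['myservice'][0]
    # e.g. confirm the service answers on the configured port
    output, code = unit.run('curl -s -o /dev/null -w "%{http_code}" http://localhost:8080')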
balloonsDoes anyone know how I might be able to remove a github key by name? juju remove-ssh-key blah@github or blah@github/12345 or gh:blah all fail15:10
petevgcory_fu, kwmonroe, kjackal: go some PRs for you: https://github.com/juju-solutions/interface-zookeeper-quorum/pull/5 and https://github.com/juju-solutions/bigtop/pull/4615:45
petevgTogether, they give us rolling restart automagic in the new zookeeper charm.15:45
petevg*got15:45
=== frankban is now known as frankban|afk
kwmonroegood news cory_fu, bigtop zepp doesn't need hadoop-plugin.  it just so happens that bigtop defaults the spark interpreter to yarn-client (which is why i thought it needed the plugin in the first place).  we can override that with a proper spark master and make plugin optional.19:47
kwmonroepetevg: do you have a build of the zookeeper charm that includes your PRs?19:49
petevgkwmonroe: I caught an issue and I've been debugging it (I think that I may have a fix), so I haven't pushed a build yet.19:50
petevgkwmonroe: if this test that I'm running right now works out, though, I can push what I've got to bigdata-dev19:50
kwmonroeoh good petevg: i was fixin to say that all the letters look right, but frankly, you and cory_fu were putting me to sleep with all that remote_ conversation babble.19:50
petevgHeh. It's kind of sleep inducing. I think that my mega comment explaining it probably makes for good bedtime reading.19:51
petevgThe bug is actually in the relation stuff: if you remove-unit the Juju leader, it still tries to orchestrate while it is shutting down, and then it throws errors because it doesn't have the relation data any more.19:52
kwmonroehm, that sounds like a -departed vs -broken thing... as in, don't do conversation stuff during -broken because you don't have the relation data any more.19:55
petevgYep.19:55
petevgkwmonroe: am I missing a design pattern that gets around it?19:55
petevgIs it just @when_not('{relation-name}.broken')?19:56
kwmonroeno petevg, i just checked peers.py on interface-zookeeper.. it's reacting to -departed (which is good).  if it were reacting to -broken, you'd be in trouble.19:56
petevgGot it.19:56
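For anyone following along, the distinction being drawn here, sketched as a charms.reactive interface layer (the class, relation, and state names below are hypothetical, not the actual interface-zookeeper code):

    from charms.reactive import RelationBase, hook, scopes


    class ZookeeperPeers(RelationBase):
        # hypothetical peer interface sketch
        scope = scopes.UNIT

        @hook('{peers:zkpeer}-relation-departed')
        def departed(self):
            # -departed still runs while the departing unit's relation data
            # is readable, so it is safe to record the event here and let the
            # charm layer orchestrate (e.g. a rolling restart).
            conv = self.conversation()
            conv.set_state('{relation_name}.departed')
            # Reacting from -broken instead would be too late: by then the
            # relation data has already been removed.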
hmlquestion: why would the containers be so small that charm deploy fails?  this just started happening with juju 2.0-rc1 and conjure-up 2.0 - openstack-novalxd - xenial20:02
petevgkwmonroe: here's a charm for you: cs:~bigdata-dev/zookeeper-1020:02
kwmonroethanks petevg!20:03
petevgnp!20:03
hmlmy containers appear to be using zfs this time and they didn't before20:05
kwmonroehml: is your zpool out of space (sudo zpool list)?20:09
kwmonroei seem to recall a juju ML post about the default size being small (like 10G)20:09
hmlkwmonroe: I took all of the defaults when i did the lxd init20:09
hmlkwmonroe: the biggest container is only 2G, i do remember something about 10G during the init but i don't have a log20:10
kwmonroehml: speculating here, but i think this might be your fix: https://github.com/lxc/lxd/pull/2364/files  lxd init used to do 10g by default, and now it does something like 20% of your disk or 100G (whichever is smaller)20:12
kwmonroetych0: any idea when that might make it into a lxd package? ^^20:13
hmlkwmonroe: just looked at the pool - it's maxed.  do you know if something changed recently?  i've been deploying and reinstalling a lot in the past month and haven't run into this.20:14
tych0kwmonroe: i think it will be released into yakkety on tuesday20:15
tych0i'm not sure about backporting to xenial20:15
hmlkwmonroe: is it easy to increase the zpool size?  i have lots of extra disk space to give it20:15
kwmonroehml: not sure if something changed recently, but if you weren't using zfs before and your are now, that might explain it.  i think bootstrap asks if you want to use file-backed or zfs-backed containers.20:20
hmlkwmonroe: hrm.. must have done something different.  :-/ which is better to use?  file-backed or zfs-backed?20:21
stokachuhml: small?20:25
kwmonroehml: if by "better" you mean "new, shiny, fast", then zfs :)20:25
stokachuhml: are you using ZFS?20:25
kwmonroehml: i don't know how to expand an existing zpool.. tych0, do you?20:25
hmlhml: at this point, i just want something that works.  :-)20:25
stokachuyou can't20:25
stokachuyou need to remove the lxc images20:26
stokachuand rerun lxd init20:26
stokachumake the zfs pool size bigger, i usually do like 100G20:26
hmlstokach: i went back to a snapshot of my vm - i’ve been around this block too many times.  :-)20:26
stokachu:D20:26
hmlstokachu: i don't care if it's new and shiny, i just need it to work.  this config isn't going into production or the like20:27
stokachuhml: then just use the dir for the storage backend20:28
stokachuyou still need to re-run lxd init20:28
hmlstokach: i'll give it a go.  thank you!20:28
stokachuyw20:28
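Roughly, the reset stokachu describes looks like this (a sketch; lxd init is interactive in the 2.x packages of that era, and the prompts vary by release):

    lxc list            # delete any remaining containers first
    lxc image list      # ...and cached images
    sudo lxd init       # pick "dir" as the storage backend, or zfs with a
                        # larger loop device (e.g. 100GB) if you want zfs

    # newer lxd releases also accept non-interactive flags, e.g.
    # lxd init --auto --storage-backend dir   (check `lxd init --help`)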
petevgkwmonroe: here's a version of the zookeeper charm that should avoid some redundant restarts: cs:~bigdata-dev/zookeeper-11 (I also updated the PR with the code from this charm.)21:13
kwmonroeack petevg21:19
petevgkwmonroe: the "redundant restarts" might be paranoia on my part. I refactored to prevent two similar routines from essentially executing the same code twice ... but it would only cause an extra restart if the timing was unfortunate. If you're using version 10, it's probably okay. Version 11 is just a little bit tidier.21:24
petevg... also, the issue would only happen after you'd removed a zookeeper unit, so you're unlikely to run into it in a demo.21:32
kwmonroefrankly petevg, i'm not versed enough in zookeeper to know what effect zk restarts have on connected services.. like what if spark/kafka/namenode are asking zk something and it restarts?  do they ask again?  melt?  get the answer from /dev/random?  ear-regardless, eliminating extra restarts seems good.21:32
petevgkwmonroe: we do the rolling restart so that zookeeper handles that well.21:33
kwmonroepetevg: speaking of demos, you comfy enough to put zookeeper-11 in the spark-processing bundle?  or shall we stick to the older zk for strata?21:33
petevgkwmonroe: hmmm ... I'm not less confident of zookeeper-11 than I am of zookeeper-921:34
kwmonroeheh, nm.  it's friday afternoon.  there's no way we vet zk-11 in time for strata.21:34
petevgkwmonroe: also, to clarify, zookeeper will only restart when you add or remove nodes, and it has to restart. The "extra" restarts won't happen in the middle of normal use.21:35
petevgkwmonroe: On the other hand, it might be nice to put zookeeper through a trial by fire ourselves, rather than wait for someone else to do it, post strata.21:36
cmarsmbruzek, lazyPower hi, i'm trying out the master-node-split bundle.. how do i get a client to connect to my cluster?21:36
cmarsawesome stuff, btw21:36
petevg... unless you've already finished a lot of testing on the bundle. (In which case, I'd stick with zookeeper-10 -- you're not going to run into the issue unless you're adding and removing a bunch of nodes on the fly.)21:36
lazyPowercmars: mkdir -p ~/.kube &&  juju scp kubernetes-master/0:config ~/.kube/config && kubectl get nodes22:15
cmarslazyPower, awesome, thanks22:15
lazyPowercmars: that command is destructive if you're already connected to a kubernetes cluster22:16
lazyPowerso be mindful of blindly overwriting, it may have unintended consequences22:16
cmarslazyPower, ack. all good, i'll tweak this & use the env var22:16
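The non-destructive variant cmars alludes to is roughly (file name illustrative):

    juju scp kubernetes-master/0:config ./kube-config-juju
    KUBECONFIG=./kube-config-juju kubectl get nodes

so the existing ~/.kube/config is left untouched.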
cmarslazyPower, so i'm going to hack on the kubernetes-master layer. where can i get the resource it needs, if i deploy it locally?22:17
lazyPowercmars: i wonder, can we fetch resources from the charm store?22:20
lazyPoweri dont think i've tried22:20
lazyPowerother than when juju deploy happens22:20
cmarslazyPower, maybe, i don't know the incantations22:20
lazyPowersame22:21
lazyPower1 sec22:21
lazyPowerhttps://gist.github.com/935f03ce936bc221ca7cdcc4c974fbd7  - courtesy of mbruzek22:21
lazyPowerfetch a release tarball from github.com/kubernetes/kubernetes/releases22:21
lazyPowerwe're good on deploy on latest beta1022:22
lazyPowersince beta3 was our first attempt?22:22
lazyPoweryeah, we're good in the range of betas from 3-1022:22
lazyPowerand 1.3.x22:22
cmarslazyPower, ok, great!22:23
cmarslazyPower, last question. if i set different relation data keys on each side of a relation, does that erase the keys i don't specify, or is it additive?22:23
lazyPowerwat22:24
cmarsusing reactive framework22:24
lazyPowerdifferent relation data keys... meaning?22:24
mbruzekcmars: If it is a different key, then it adds, if you set the same key then it would replace22:25
cmarsmbruzek, ok, perfect22:25
lazyPowerthat22:25
cmarsnever done a two-way handshaking relation22:25
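The additive behaviour mbruzek describes below, sketched with a charms.reactive interface layer (the interface and key names are made up):

    from charms.reactive import RelationBase, hook, scopes


    class HandshakeProvides(RelationBase):
        # hypothetical two-way interface sketch
        scope = scopes.GLOBAL

        @hook('{provides:handshake}-relation-{joined,changed}')
        def changed(self):
            conv = self.conversation()
            conv.set_remote('token', 'abc123')       # writes/updates only this key
            conv.set_remote('endpoint', '10.0.0.1')  # other keys already on the
                                                     # relation stay as they are;
                                                     # re-setting 'token' later
                                                     # would replace its value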
mbruzekcmars: are you messing with our interface? What are you adding?22:25
lazyPowerhaha22:25
cmarsmbruzek, i'm not changing your existing interfaces, i'm writing a new one22:25
* mbruzek is worried all of a sudden22:25
cmarsdon't worry, it's an "experiment"22:26
mbruzekWell good luck I hope it is a success22:26
cmarsthanks. fun stuff :)22:26
cmarswill def share22:26
cmarsmbruzek, lazyPower i've got master-worker-split running in lxd, with the worker in a manually added kvm machine22:33
cmars\m/22:33
cmarsawesome, awesome22:33
lazyPower\m/,22:33
mbruzeksweet22:33
mbruzekgood experiment22:33
lazyPowercmars: word of caution. etcd will choke if you put it on an ipv6 only host22:33
lazyPowerknown caveat22:33
lazyPowers/etcd/the etcd charm/22:34
cmarslazyPower, interesting, ok. i disabled ipv6 on my lxd bridge, but good to know22:34
cmarsi just like lxd for developing, so much faster22:34
lazyPoweri hear ya22:35
mbruzekcmars: Have you run any docker workloads inside the lxd  yet? Can you confirm they work without adding a profile or something?22:35
cmarsmbruzek, i can't run the worker in lxd. that's why i manually added the kvm instance. i'm launching those with uvt-kvm and then using add-machine to attach them22:36
cmarsrouting between the kvm and lxd subnets works22:36
mbruzekoh yeah22:36
cmarstried the docker profile, no luck22:36
mbruzekcmars that is why we split the master/workers so we could use lxd22:36
mbruzekfor the control plane22:36
mbruzekSo in theory you could deploy --to lxd:<kvm machine>22:37
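i.e. something along these lines (machine number and host are illustrative; kubernetes-master is the charm discussed above):

    juju add-machine ssh:ubuntu@<kvm-host>     # the manually added KVM machine, say machine 1
    juju deploy kubernetes-master --to lxd:1   # control plane in a container on that machine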
cmarsmbruzek, in the worker, i get some read-only filesystem errors for stuff under /proc/sys/...22:37
cmarsspecifically, https://paste.ubuntu.com/23222264/22:37
cmarsthat's after adding docker to the worker container's lxd profile and restarting it22:38
cmarsoh well..22:38
mbruzekcmars: I meant the master could be deployed to the lxd on the worker, yes?22:38
mbruzek(where the worker is kvm)22:38
mbruzekor whatever.22:38
cmarsah.. sure could22:38
