marcoceppi | well, don't use that one. Use the promulgated one ;) | 00:00 |
---|---|---|
arosales | gah | 00:00 |
marcoceppi | either way, that's all you need to do | 00:00 |
marcoceppi | arosales: and I added this hook https://api.jujucharms.com/charmstore/v5/mariadb/archive/hooks/update-status | 00:00 |
arosales | marcoceppi: thanks | 00:02 |
marcoceppi | arosales: I'd be curious why there's a bigdata-dev version of a promulgated charm and if we can get those patches upstream | 00:02 |
arosales | I'll work with petevg to get his bigdata-dev/mariadb charm merged with the promulgated one | 00:02 |
arosales | marcoceppi: petevg was working on it for xenial s390 support | 00:02 |
marcoceppi | cool, I'll make sure it's obvious where the source is | 00:03 |
arosales | I think he was trying to get in contact with the mariadb maintainer to push his updates | 00:03 |
marcoceppi | arosales: I'm in the mariadb-charmers team, and can help get fixes landed | 00:03 |
petevg | Yeah. The maintainer wasn't getting back to me. | 00:03 |
marcoceppi | well, I'm a maintainer (implicitly) and I'll listen to you | 00:04 |
marcoceppi | petevg: ...for a price ;) | 00:04 |
marcoceppi | charm teams ftw | 00:04 |
petevg | marcoceppi: the price is I don't wag my finger at you for ignoring me before :-p | 00:04 |
arosales | I think marcoceppi's currency is measured in volume | 00:04 |
marcoceppi | it's measured in ABV | 00:05 |
arosales | ah | 00:06 |
arosales | :-) | 00:06 |
arosales | marcoceppi: when is the update-status hook run? | 00:06 |
marcoceppi | arosales: every 5 mins | 00:06 |
marcoceppi | give or take the hook queue | 00:07 |
arosales | ok | 00:07 |
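For context, an update-status hook is just an executable script under hooks/ that Juju fires on that ~5-minute cadence; a minimal sketch of the idea (illustrative only, not the contents of the linked mariadb hook, and the service name is an assumption):

```bash
#!/bin/bash
# hooks/update-status -- illustrative sketch, assuming a mysql/mariadb workload.
# Juju runs this roughly every 5 minutes (give or take the hook queue), so it is
# a convenient place to report whether the workload is actually healthy.
if service mysql status >/dev/null 2>&1; then
    status-set active "ready"
else
    status-set blocked "mysql is not running"
fi
```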
arosales | marcoceppi: et al. | 00:07 |
arosales | do you think mariadb-ghost bundle would be a better getting started bundle than wiki-simple? | 00:08 |
arosales | in regards to https://github.com/juju/docs/issues/1382 | 00:08 |
arosales | or now that you have these easy hooks to drop in, we could add them to wiki-simple | 00:10 |
marcoceppi | petevg arosales I've updated the code hosting and bugs URL for the charm, pull requests welcome | 00:22 |
petevg | marcoceppi: awesome. I will put one together for you :-) | 00:22 |
marcoceppi | petevg: prepared to be ignored ;) | 00:23 |
marcoceppi | petevg: I care about mariadb because I want to write ONE mysql-base layer that I can base the oracle-mysql and mariadb charms on | 00:24 |
marcoceppi | since they are like 95% identical | 00:24 |
marcoceppi | and I use mariadb in production | 00:24 |
petevg | Makes sense :-) | 00:26 |
marcoceppi | speaking of playing with fire, I just ran juju upgrade-charm on that environment | 00:27 |
* marcoceppi crosses fingers | 00:27 | |
arosales | marcoceppi: in production | 00:28 |
arosales | good luck | 00:31 |
arosales | of course it's in production first try, I forgot who I was chatting with | 00:31 |
marcoceppi | welp, didn't crash | 00:33 |
arosales | of course it didn't :-) | 00:55 |
huhaoran | pweston a pre-existing openstack: http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html, I am trying this | 02:26 |
=== gaughen_ is now known as gaughen | ||
huhaoran | pweston, sorry, I misunderstood you... | 02:41 |
=== rmcall_ is now known as rmcall | ||
=== frankban|afk is now known as frankban | ||
venom3 | Hello, we are trying to deploy the openstack-dashboard charm with the shared-db relation. It doesn't work, due to a network address issue (I guess). | 07:27 |
venom3 | In Percona the horizon user has IP 10.40.0.X | 07:28 |
venom3 | in local_settings.py there is another IP 10.40.103.X/24 | 07:29 |
venom3 | so horizon fails the authentication | 07:29 |
venom3 | and the db is not populated. | 07:29 |
venom3 | inside the horizon-hook i found | 07:31 |
venom3 | try: | 07:31 |
venom3 | # NOTE: try to use network spaces | 07:31 |
venom3 | host = network_get_primary_address('shared-db') | 07:31 |
venom3 | but I wonder if network spaces are implemented for horizon | 07:32 |
venom3 | Any suggestion? | 07:32 |
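The snippet venom3 quotes is the usual network-spaces pattern in the OpenStack charms: try the address bound to the relation, and fall back to the unit's private address when spaces aren't available. A rough sketch of that fallback (assumed shape, not the exact horizon hook code):

```python
from charmhelpers.core.hookenv import network_get_primary_address, unit_get

try:
    # NOTE: try to use network spaces -- i.e. the address of the 'shared-db'
    # binding, when the provider/controller supports it.
    host = network_get_primary_address('shared-db')
except NotImplementedError:
    # Fall back to the unit's private address otherwise.
    host = unit_get('private-address')
```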
=== jamespag` is now known as jamespage | ||
jamespage | tvansteenburgh, https://code.launchpad.net/~james-page/juju-deployer/bug-1625797/+merge/306595 | 09:05 |
jamespage | I think that fixes beisner's issue - just testing now | 09:05 |
jamespage | tvansteenburgh, needed a tickle for 1 and 2 compat afaict | 09:36 |
jamespage | Machine vs machine | 09:37 |
jamespage | beisner, ok I think that fixes up 1.25 compat in juju-deployer for placements | 09:39 |
jamespage | 2.0 placement with v3 format is still broken - the placements don't accommodate machine 0 properly | 09:40 |
jamespage | i.e. really we should add that prior to deploying any services, to ensure that it exists and we don't end up with machine 0 services being smudged in with machine 1 services as it stands right now | 09:41 |
huhaoran | Hey, I have a silly question about Juju, :) | 10:07 |
huhaoran | Is Juju suitable in a VM? | 10:08 |
babbageclunk | huhaoran: Not a silly question! :) | 10:09 |
babbageclunk | huhaoran: I think the answer is yes, but also a bit "it depends". | 10:09 |
babbageclunk | But you can definitely run juju in a VM. | 10:10 |
huhaoran | babbageclunk, Thanks | 10:10 |
aisrael | huhaoran, Juju is great with containers. Juju 2 has native support for lxd. | 11:10 |
aisrael | It depends on your use case, but the juju controller can run inside a VM while deploying machines to physical hardware. You can even co-locate units inside the VM. | 11:11 |
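Illustrative placement commands (cloud, host, and application names are arbitrary examples):

```bash
juju bootstrap localhost test       # the controller itself runs in a container/VM
juju deploy mysql --to 0            # co-locate with an existing machine
juju deploy wordpress --to lxd:0    # or into an LXD container on that machine
juju add-machine ssh:ubuntu@host    # or enlist an existing (physical) machine manually
```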
marcoceppi | jamespage: ugh, what a lame thing from the API rename | 11:30 |
jamespage | marcoceppi, there may be other impacts but I've not seen anything functionally broken with that fix in my testing | 12:16 |
marcoceppi | jamespage: it gets an initial +1 from me, I'll have tvansteenburgh look at it when he gets up | 12:18 |
jamespage | marcoceppi, that should at least get 1.25 support working again | 12:19 |
tvansteenburgh | jamespage, marcoceppi: looking now | 13:12 |
SimonKLB | is it possible to retrieve the charm config with amulet? | 13:22 |
rock_ | Hi. I have a question. On Juju version 2.0-rc1-xenial-amd64, the "$juju config" command works but "$juju set-config" does not. But on Juju version 2.0-beta15-xenial-amd64, "$juju set-config" works but "$juju config" does not. So is Juju version 2.0-rc1-xenial-amd64 still under development? | 13:54 |
rick_h_ | rock_: sorry, the documentation there needs updating | 13:55 |
rick_h_ | set-config was replaced with just juju config | 13:55 |
rick_h_ | so juju config key=value | 13:55 |
rock_ | rick_h_: juju versions rc1, rc2 mean they are development versions? | 13:57 |
rick_h_ | rock_: no, rc1 is stable and we're doing bug fixes, but it's not development | 13:58 |
rick_h_ | rock_: that change was done in beta 18 and was carried to rc1 | 13:59 |
jrwren | rock_: rc = release candidate | 13:59 |
rock_ | rick_h_/jrwren: Oh. I have one more question. Do all Juju 1.x versions have only one configuration-setting command, I mean "$juju service set"? And likewise, can we use the "$juju config" command on all Juju 2.x versions? | 14:03 |
rick_h_ | rock_: the juju config command is only in 2.x | 14:07 |
rock_ | rick_h_: Hi. That is exactly what I am asking: "$juju config" will work on all 2.x versions, right? And "$juju service set" will work on all 1.x versions, right? | 14:09 |
rock_ | rick_h_: "$juju config" is not working on 2.0-beta15-xenial-amd64. It was showing command not recognized. | 14:12 |
rock_ | rick_h_: Actually, we developed a "cinder storage driver" charm. We are preparing the README.md file in a detailed manner. We tested our charm in different ways with different combinations. | 14:15 |
rock_ | rick_h_: So I want to know things clearly. | 14:17 |
tvansteenburgh | rock_: juju config works in 2.0-beta18 and forward | 14:18 |
tvansteenburgh | rock_: so yes, 'juju service set' for juju1, and 'juju config' for juju2 | 14:19 |
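Side by side, with an illustrative application and key:

```bash
# Juju 1.x
juju service set mysql max-connections=200

# Juju 2.x (beta18 and later)
juju config mysql                        # show the current config
juju config mysql max-connections=200    # set a value
```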
rock_ | tvansteenburgh : Oh. Thanks. | 14:20 |
rock_ | rick_h_/jrwren: Thank you | 14:21 |
rick_h_ | rock_: sorry, was on the phone. Yes, what tvansteenburgh says. Thanks tim for the assist | 14:30 |
rock_ | rick_h_: No problem. Thank you | 14:37 |
lazyPower | SimonKLB: what you'll see typically is the test/bundle will define the configuration, and what we do is inspect the end-location file renderings, flags, etc. for the existence of those config values | 14:41 |
lazyPower | SimonKLB: but off the top of my head, I don't think amulet has any amenities to pull the charm config from the self.deployment object. I may be incorrect though | 14:41 |
autonomouse | hi, I posted this question to the internal juju channel without thinking about it before (so sorry about that whom it may concern), but I have a question about scopes that I'd really appreciate some help with. I put it here if anyone's interested in giving it a shot: http://askubuntu.com/questions/828732/can-anyone-explain-scopes-in-reactive-charms-to-me-please-juju-2-0 | 14:43 |
SimonKLB | lazyPower: so usually you can access the config values through the charmhelpers package via the hookenv, but when running an amulet test that's going to be different, right? | 14:45 |
lazyPower | correct, amulet is an external thing, it's poking at juju through the api and juju run | 14:46 |
SimonKLB | yea, perhaps there is a better way of solving it, but currently i have the ports available as config values, and i need to know them when testing if the service is up and running during the amulet test | 14:47 |
SimonKLB | perhaps i should put some default ports in a python module and override it with the charm config? | 14:48 |
SimonKLB | that way i could still test stuff with the default settings and still let the user choose which ports it should run on | 14:48 |
SimonKLB | but it feels a bit messy | 14:49 |
tvansteenburgh | SimonKLB: i'd just do a little subprocess wrapper around `juju config` | 14:49 |
tvansteenburgh | arguably that would be a handy addition to amulet | 14:50 |
tvansteenburgh | SimonKLB: or you can just set the ports in the amulet test so you know what they are | 14:50 |
SimonKLB | tvansteenburgh: yea, im just trying to refrain from having configuration options in multiple places | 14:51 |
tvansteenburgh | SimonKLB: but it's a test. it's reasonable to set some config and then act on those values | 14:53 |
SimonKLB | yea, that is the way im doing it right now, i was just curious if you could grab the default config values somehow and be rid of the extra configuration in the test | 14:55 |
tvansteenburgh | SimonKLB: well you can deploy without setting config and it'll use the defaults. to actually retrieve the defaults though, you'd need to use `juju config` | 14:57 |
SimonKLB | yea, it's the retrieving part im having trouble with, since i need some stuff from the config to access the charm - not sure if a subprocess wrapper or setting the config values i need "externally" in the test is cleaner | 14:59 |
tvansteenburgh | SimonKLB: if it were me i'd do the latter | 15:01 |
SimonKLB | tvansteenburgh: thanks, ill go with that then! | 15:01 |
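For anyone who does want the wrapper route, a rough sketch of the subprocess approach tvansteenburgh mentions (the JSON layout of `juju config` output is assumed from Juju 2.x and may differ between versions):

```python
import json
import subprocess


def charm_config(application):
    """Illustrative helper: read an application's charm config via the juju CLI."""
    out = subprocess.check_output(
        ['juju', 'config', '--format=json', application])
    # Assumed layout: {"application": ..., "charm": ..., "settings": {name: {...}}}
    settings = json.loads(out.decode('utf-8'))['settings']
    return {name: opt.get('value') for name, opt in settings.items()}


# e.g. port = charm_config('my-charm').get('port')
```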
balloons | Does anyone know how I might be able to remove a github key by name? juju remove-ssh-key blah@github or blah@github/12345 or gh:blah all fail | 15:10 |
petevg | cory_fu, kwmonroe, kjackal: go some PRs for you: https://github.com/juju-solutions/interface-zookeeper-quorum/pull/5 and https://github.com/juju-solutions/bigtop/pull/46 | 15:45 |
petevg | Together, they give us rolling restart automagic in the new zookeeper charm. | 15:45 |
petevg | *got | 15:45 |
=== frankban is now known as frankban|afk | ||
kwmonroe | good news cory_fu, bigtop zepp doesn't need hadoop-plugin. it just so happens that bigtop defaults the spark interpreter to yarn-client (which is why i thought it needed the plugin in the first place). we can override that with a proper spark master and make plugin optional. | 19:47 |
kwmonroe | petevg: do you have a build of the zookeeper charm that includes your PRs? | 19:49 |
petevg | kwmonroe: I caught an issue and I've been debugging it (think that I may have a fix), so I haven't pushed a build yet. | 19:50 |
petevg | kwmonroe: if this test that I'm running right now works out, though, I can push what I've got to bigdata-dev | 19:50 |
kwmonroe | oh good petevg: i was fixin to say that all the letters look right, but frankly, you and cory_fu were putting me to sleep with all that remote_ conversation babble. | 19:50 |
petevg | Heh. It's kind of sleep inducing. I think that my mega comment explaining it probably makes for good bedtime reading. | 19:51 |
petevg | The bug is actually in the relation stuff: if you remove-unit the Juju leader, it still tries to orchestrate while it is shutting down, and then it throws errors because it doesn't have the relation data any more. | 19:52 |
kwmonroe | hm, that sounds like a -departed vs -broken thing... as in, don't do conversation stuff during -broken because you don't have the relation data any more. | 19:55 |
petevg | Yep. | 19:55 |
petevg | kwmonroe: am I missing a design pattern that gets around it? | 19:55 |
petevg | Is it just @when_not('{relation-name}.broken')? | 19:56 |
kwmonroe | no petevg, i just checked peers.py on interface-zookeeper.. it's reacting to -departed (which is good). if it were reacting to -broken, you'd be in trouble. | 19:56 |
petevg | Got it. | 19:56 |
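For reference, the interface-layer pattern kwmonroe is describing looks roughly like this (a sketch only; the class, relation, and state names are made up, not the actual interface-zookeeper code):

```python
from charms.reactive import RelationBase, hook, scopes


class ZookeeperPeers(RelationBase):
    scope = scopes.UNIT

    @hook('{peers:zookeeper-quorum}-relation-departed')
    def departed(self):
        # -departed fires while the departing unit's relation data is still
        # readable, so the remaining peers can react (e.g. schedule a restart).
        conv = self.conversation()
        conv.set_state('{relation_name}.departed')

    # Reacting to -relation-broken instead would be too late: by then the
    # relation data is gone, which is the error petevg hit.
```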
hml | question: why would the containers be so small that charm deploy fails? this just started happening with juju 2.0-rc1 and conjure-up 2.0 - openstack-novalxd - xenial | 20:02 |
petevg | kwmonroe: here's a charm for you: cs:~bigdata-dev/zookeeper-10 | 20:02 |
kwmonroe | thanks petevg! | 20:03 |
petevg | np! | 20:03 |
hml | my containers appear to be using zfs this time and they didn't before | 20:05 |
kwmonroe | hml: is your zpool out of space (sudo zpool list)? | 20:09 |
kwmonroe | i seem to recall a juju ML post about the default size being small (like 10G) | 20:09 |
hml | kwmonroe: I took all of the defaults when i did the lxd init | 20:09 |
hml | kwmonroe: the biggest container is only 2G, i do remember something about 10G during the init but i don't have a log | 20:10 |
kwmonroe | hml: speculating here, but i think this might be your fix: https://github.com/lxc/lxd/pull/2364/files lxd init used to do 10g by default, and now it does something like 20% of your disk or 100G (whichever is smaller) | 20:12 |
kwmonroe | tych0: any idea when that might make it into a lxd package? ^^ | 20:13 |
hml | kwmonroe: just looked at the pool - it’s maxed. do you know if something changed recently. i’ve been deploying and reinstalling a lot in the past month and haven’t run into this. | 20:14 |
tych0 | kwmonroe: i think it will be released into yakkety on tuesday | 20:15 |
tych0 | i'm not sure about backporting to xenial | 20:15 |
hml | kwmonroe: is it easy to increase the zpool size? i have lots of extra disk space to give it | 20:15 |
kwmonroe | hml: not sure if something changed recently, but if you weren't using zfs before and you are now, that might explain it. i think bootstrap asks if you want to use file-backed or zfs-backed containers. | 20:20 |
hml | kwmonroe: hrm.. must have done something different. :-/ which is better to use? file-backed or zfs-backed? | 20:21 |
stokachu | hml: small? | 20:25 |
kwmonroe | hml: if by "better" you mean "new, shiny, fast", then zfs :) | 20:25 |
stokachu | hml: are you using ZFS? | 20:25 |
kwmonroe | hml: i don't know how to expand an existing zpool.. tych0, do you? | 20:25 |
hml | kwmonroe: at this point, i just want something that works. :-) | 20:25 |
stokachu | you can't | 20:25 |
stokachu | you need to remove the lxc images | 20:26 |
stokachu | and rerun lxd init | 20:26 |
stokachu | make the zfs pool size bigger, i usually do like 100G | 20:26 |
hml | stokachu: i went back to a snapshot of my vm - i've been around this block too many times. :-) | 20:26 |
stokachu | :D | 20:26 |
hml | stokachu: i don't care if it's new and shiny, i just need it to work. this config isn't going into production or the like | 20:27 |
stokachu | hml: then just use the dir for the storage backend | 20:28 |
stokachu | you still need to re-run lxd init | 20:28 |
hml | stokachu: i'll give it a go. thank you! | 20:28 |
stokachu | yw | 20:28 |
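Illustrative commands for the diagnosis and reset discussed above (names in angle brackets are placeholders; deleting containers/images is destructive):

```bash
sudo zpool list                  # is the lxd zpool maxed out?
lxc list                         # existing containers
lxc delete --force <container>   # remove them...
lxc image list
lxc image delete <fingerprint>   # ...and the cached images
sudo lxd init                    # re-init: pick a bigger zfs pool (e.g. 100G),
                                 # or the 'dir' backend if you just want it to work
```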
petevg | kwmonroe: here's a version of the zookeeper charm that should avoid some redundant restarts: cs:~bigdata-dev/zookeeper-11 (I also updated the PR with the code from this charm.) | 21:13 |
kwmonroe | ack petevg | 21:19 |
petevg | kwmonroe: the "redundant restarts" might be paranoia on my part. I refactored to prevent two similar routines from essentially executing the same code twice ... but it would only cause an extra restart if things got timed unfortunately. If you're using version 10, it's probably okay. Version 11 is just a little bit tidier. | 21:24 |
petevg | ... also, the issue would only happen after you'd removed a zookeeper unit, so you're unlikely to run into it in a demo. | 21:32 |
kwmonroe | frankly petevg, i'm not versed enough in zookeeper to know what effect zk restarts have on connected services.. like what if spark/kafka/namenode are asking zk something and it restarts? do they ask again? melt? get the answer from /dev/random? ear-regardless, eliminating extra restarts seems good. | 21:32 |
petevg | kwmonroe: we do the rolling restart so that zookeeper handles that well. | 21:33 |
kwmonroe | petevg: speaking of demos, you comfy enough to put zookeeper-11 in the spark-processing bundle? or shall we stick to the older zk for strata? | 21:33 |
petevg | kwmonroe: hmmm ... I'm not less confident of zookeeper-11 than I am of zookeeper-9 | 21:34 |
kwmonroe | heh, nm. it's friday afternoon. there's no way we vet zk-11 in time for strata. | 21:34 |
petevg | kwmonroe: also, to clarify, zookeeper will only restart when you add or remove nodes, and it has to restart. The "extra" restarts won't happen in the middle of normal use. | 21:35 |
petevg | kwmonroe: On the other hand, it might be nice to put zookeeper through a trial by fire ourselves, rather than wait for someone else to do it, post strata. | 21:36 |
cmars | mbruzek, lazyPower hi, i'm trying out the master-node-split bundle.. how do i get a client to connect to my cluster? | 21:36 |
cmars | awesome stuff, btw | 21:36 |
petevg | ... unless you've already finished a lot of testing on the bundle. (In which case, I'd stick with zookeeper-10 -- you're not going to run into the issue unless you're adding and removing a bunch of nodes on the fly.) | 21:36 |
lazyPower | cmars: mkdir -p ~/.kube && juju scp kubernetes-master/0:config ~/.kube/config && kubectl get nodes | 22:15 |
cmars | lazyPower, awesome, thanks | 22:15 |
lazyPower | cmars: that command is destructive if you're already connected to a kubernetes cluster | 22:16 |
lazyPower | so be mindful of blindly overwriting, it may have unintended consequences | 22:16 |
cmars | lazyPower, ack. all good, i'll tweak this & use the env var | 22:16 |
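A non-destructive variant along the lines cmars mentions, keeping the existing ~/.kube/config intact (the local path is arbitrary):

```bash
juju scp kubernetes-master/0:config ./kubeconfig-juju
KUBECONFIG=./kubeconfig-juju kubectl get nodes
```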
cmars | lazyPower, so i'm going to hack on the kubernetes-master layer. where can i get the resource it needs, if i deploy it locally? | 22:17 |
lazyPower | cmars: i wonder, can we fetch resources from the charm store? | 22:20 |
lazyPower | i dont think i've tried | 22:20 |
lazyPower | other than when juju deploy happens | 22:20 |
cmars | lazyPower, maybe, i don't know the incantations | 22:20 |
lazyPower | same | 22:21 |
lazyPower | 1 sec | 22:21 |
lazyPower | https://gist.github.com/935f03ce936bc221ca7cdcc4c974fbd7 - courtesy of mbruzek | 22:21 |
lazyPower | fetch a release tarball from github.com/kubernetes/kubernetes/releases | 22:21 |
lazyPower | we're good on deploy on latest beta10 | 22:22 |
lazyPower | since beta3 was our first attempt? | 22:22 |
lazyPower | yeah, we're good in the range of betas from 3-10 | 22:22 |
lazyPower | and 1.3.x | 22:22 |
cmars | lazyPower, ok, great! | 22:23 |
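For a locally built charm, the usual route is to supply the resource by hand; a sketch only (the resource name 'kubernetes' and the tarball filename are guesses, check the charm's metadata.yaml and the gist above):

```bash
# Build the layer, then deploy the result with the release tarball attached.
charm build
juju deploy ./builds/kubernetes-master \
    --resource kubernetes=./kubernetes-server-linux-amd64.tar.gz

# Or attach/replace the resource on an application that is already deployed:
juju attach kubernetes-master kubernetes=./kubernetes-server-linux-amd64.tar.gz
```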
cmars | lazyPower, last question. if i set different relation data keys on each side of a relation, does that erase the keys i don't specify, or is it additive? | 22:23 |
lazyPower | wat | 22:24 |
cmars | using reactive framework | 22:24 |
lazyPower | different relation data keys... meaning? | 22:24 |
mbruzek | cmars: If it is a different key, then it adds; if you set the same key, then it would replace | 22:25 |
cmars | mbruzek, ok, perfect | 22:25 |
lazyPower | that | 22:25 |
cmars | never done a two-way handshaking relation | 22:25 |
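In charms.reactive terms that behaviour looks like this (interface, key, and value names are made up for illustration):

```python
from charms.reactive import RelationBase, hook, scopes


class MyInterfaceProvides(RelationBase):
    scope = scopes.UNIT

    @hook('{provides:my-interface}-relation-joined')
    def joined(self):
        conv = self.conversation()
        # Each set_remote() merges keys into the data published to the other
        # side; re-setting an existing key overwrites only that key.
        conv.set_remote('hostname', 'db-0.internal')  # adds 'hostname'
        conv.set_remote('port', '3306')               # adds 'port'; 'hostname' stays
        conv.set_remote('port', '3307')               # replaces just 'port'
```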
mbruzek | cmars: are you messing with our interface? What are you adding? | 22:25 |
lazyPower | haha | 22:25 |
cmars | mbruzek, i'm not changing your existing interfaces, i'm writing a new one | 22:25 |
* mbruzek is worried all of a sudden | 22:25 | |
cmars | don't worry, it's an "experiment" | 22:26 |
mbruzek | Well good luck I hope it is a success | 22:26 |
cmars | thanks. fun stuff :) | 22:26 |
cmars | will def share | 22:26 |
cmars | mbruzek, lazyPower i've got master-worker-split running in lxd, with the worker in a manually added kvm machine | 22:33 |
cmars | \m/ | 22:33 |
cmars | awesome, awesome | 22:33 |
lazyPower | \m/, | 22:33 |
mbruzek | sweet | 22:33 |
mbruzek | good experiment | 22:33 |
lazyPower | cmars: word of caution. etcd will choke if you put it on an ipv6 only host | 22:33 |
lazyPower | known caveat | 22:33 |
lazyPower | s/etcd/the etcd charm/ | 22:34 |
cmars | lazyPower, interesting, ok. i disabled ipv6 on my lxd bridge, but good to know | 22:34 |
cmars | i just like lxd for developing, so much faster | 22:34 |
lazyPower | i hear ya | 22:35 |
mbruzek | cmars: Have you run any docker workloads inside the lxd yet? Can you confirm they work without adding a profile or something? | 22:35 |
cmars | mbruzek, i can't run the worker in lxd. that's why i manually added the kvm instance. i'm launching those with uvt-kvm and then using add-machine to attach them | 22:36 |
cmars | routing between the kvm and lxd subnets works | 22:36 |
mbruzek | oh yeah | 22:36 |
cmars | tried the docker profile, no luck | 22:36 |
mbruzek | cmars that is why we split the master/workers so we could use lxd | 22:36 |
mbruzek | for the control plane | 22:36 |
mbruzek | So in theory you could deploy --to lxd:<kvm machine> | 22:37 |
cmars | mbruzek, in the worker, i get some read-only filesystem errors for stuff under /proc/sys/... | 22:37 |
cmars | specifically, https://paste.ubuntu.com/23222264/ | 22:37 |
cmars | that's after adding docker to the worker container's lxd profile and restarting it | 22:38 |
cmars | oh well.. | 22:38 |
mbruzek | cmars: I meant the master could be deployed to the lxd on the worker yes? | 22:38 |
mbruzek | (where the worker is kvm) | 22:38 |
mbruzek | or whatever. | 22:38 |
cmars | ah.. sure could | 22:38 |
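Roughly what that placement looks like (machine numbers and the split-bundle charm names are assumptions):

```bash
juju add-machine ssh:ubuntu@<kvm-ip>      # enlist the uvt-kvm guest, say machine 1
juju deploy kubernetes-worker --to 1      # docker workloads on the kvm machine
juju deploy kubernetes-master --to lxd:1  # control plane in an lxd container on it
```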