[00:00] well, don't use that one. Use the promulgated one ;)
[00:00] gah
[00:00] either way, that's all you need to do
[00:00] arosales: and I added this hook https://api.jujucharms.com/charmstore/v5/mariadb/archive/hooks/update-status
[00:02] marcoceppi: thanks
[00:02] arosales: I'd be curious why there's a bigdata-dev version of a promulgated charm and if we can get those patches upstream
[00:02] I'll work with petevg to get his bigdata-dev/mariadb charm merged with the promulgated one
[00:02] marcoceppi: petevg was working on it for xenial s390 support
[00:03] cool, I'll make sure it's obvious where the source is
[00:03] I think he was trying to get in contact with the mariadb maintainer to push his updates
[00:03] arosales: I'm in the mariadb-charmers team, and can help get fixes landed
[00:03] Yeah. The maintainer wasn't getting back to me.
[00:04] well, I'm a maintainer (implicitly) and I'll listen to you
[00:04] petevg: ...for a price ;)
[00:04] charm teams ftw
[00:04] marcoceppi: the price is I don't wag my finger at you for ignoring me before :-p
[00:04] I think marcoceppi currency is measured in volume
[00:05] it's measured in ABV
[00:06] ah
[00:06] :-)
[00:06] marcoceppi: when is the update-status hook run?
[00:06] arosales: every 5 mins
[00:07] give or take the hook queue
[00:07] ok
[00:07] marcoceppi: et al.
[00:08] do you think the mariadb-ghost bundle would be a better getting-started bundle than wiki-simple?
[00:08] in regards to https://github.com/juju/docs/issues/1382
[00:10] or now that you have these easy hooks to drop in, we could add them to wiki-simple
[00:22] petevg arosales I've updated the code hosting and bugs URL for the charm, pull requests welcome
[00:22] marcoceppi: awesome. I will put one together for you :-)
[00:23] petevg: prepare to be ignored ;)
[00:24] petevg: I care about mariadb because I want to write ONE mysql-base layer that I can base the oracle-mysql and mariadb charm on
[00:24] since they are like 95% identical
[00:24] and I use mariadb in production
[00:26] Makes sense :-)
[00:27] speaking of playing with fire, I just ran juju upgrade-charm on that environment
[00:27] * marcoceppi crosses fingers
[00:28] marcoceppi: in production
[00:31] good luck
[00:31] of course it's in production first try, I forgot who I was chatting with
[00:33] welp, didn't crash
[00:55] of course it didn't :-)
[02:26] pweston: a pre-existing openstack: http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html - I am trying this
=== gaughen_ is now known as gaughen
[02:41] pweston, sorry, I misunderstood you...
=== rmcall_ is now known as rmcall
=== frankban|afk is now known as frankban
[07:27] Hello, we are trying to deploy the openstack-dashboard charm with the shared-db relation. It doesn't work, due to a network address issue (I guess).
[07:28] In Percona the horizon user has IP 10.40.0.X
[07:29] in local_settings.py there is another IP 10.40.103.X/24
[07:29] so horizon fails authentication
[07:29] and the db is not populated.
[07:31] inside the horizon hook i found
[07:31] try:
[07:31] # NOTE: try to use network spaces
[07:31] host = network_get_primary_address('shared-db')
[07:32] but I wonder if network spaces are implemented for horizon
[07:32] Any suggestions?
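For context on the [07:31] fragment: that try block is the usual network-spaces pattern in the OpenStack charms' shared-db hooks. A minimal sketch of what the full pattern typically looks like, assuming charmhelpers; the database/username values here are illustrative, not necessarily what openstack-dashboard actually sends:

```python
# Sketch of the pattern the [07:31] fragment comes from (not necessarily the
# exact openstack-dashboard hook code): advertise the address from the
# 'shared-db' network binding when the Juju version supports network spaces,
# otherwise fall back to the unit's private-address.
from charmhelpers.core.hookenv import (
    network_get_primary_address,
    relation_set,
    unit_get,
)


def db_joined(relation_id=None):
    try:
        # NOTE: try to use network spaces
        host = network_get_primary_address('shared-db')
    except NotImplementedError:
        # no network spaces support (older Juju): use the private address
        host = unit_get('private-address')

    # hypothetical database/username values, for illustration only
    relation_set(relation_id=relation_id,
                 database='horizon',
                 username='horizon',
                 hostname=host)
```

If the address sent here differs from the one horizon later uses to connect, the access grant created on the Percona side will not match, which is consistent with the symptom described above.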
=== jamespag` is now known as jamespage
[09:05] tvansteenburgh, https://code.launchpad.net/~james-page/juju-deployer/bug-1625797/+merge/306595
[09:05] I think that fixes beisner's issue - just testing now
[09:36] tvansteenburgh, needed a tickle for 1 and 2 compat afaict
[09:37] Machine vs machine
[09:39] beisner, ok I think that fixes up 1.25 compat in juju-deployer for placements
[09:40] 2.0 placement with v3 formats is still broken - the placements don't accommodate machine 0 properly
[09:41] i.e. really we should add that prior to deploying any services to ensure that it exists and we don't end up with machine 0 services being smudged with machine 1 services as it stands right now
[10:07] Hey, I have a silly question about Juju :)
[10:08] Is Juju suitable for running in a VM?
[10:09] huhaoran: Not a silly question! :)
[10:09] huhaoran: I think the answer is yes, but also a bit "it depends".
[10:10] But you can definitely run juju in a VM.
[10:10] babbageclunk, Thanks
[11:10] huhaoran, Juju is great with containers. Juju 2 has native support for lxd.
[11:11] It depends on your use case, but the juju controller can run inside a VM and deploy machines to physical hardware. You can even co-locate units inside the VM.
[11:30] jamespage: ugh, what a lame thing from the API rename
[12:16] marcoceppi, there may be other impacts but I've not seen anything functionally broken with that fix in my testing
[12:18] jamespage: it gets an initial +1 from me, I'll have tvansteenburgh look at it when he gets up
[12:19] marcoceppi, that should at least get 1.25 support working again
[13:12] jamespage, marcoceppi: looking now
[13:22] is it possible to retrieve the charm config with amulet?
[13:54] Hi. I have a question. On Juju version 2.0-rc1-xenial-amd64, the "juju config" command works but "juju set-config" does not. But on Juju version 2.0-beta15-xenial-amd64, "juju set-config" works but "juju config" does not. So is Juju version 2.0-rc1-xenial-amd64 still under development?
[13:55] rock_: sorry, the documentation there needs updating
[13:55] set-config was replaced with just juju config
[13:55] so juju config key=value
[13:57] rick_h_: juju versions rc1, rc2 - does that mean they are development versions?
[13:58] rock_: no, rc1 is stable and we're doing bug fixes, but it's not development
[13:59] rock_: that change was done in beta 18 and was carried to rc1
[13:59] rock_: rc = release candidate
[14:03] rick_h_/jrwren: Oh. I have one more question. Do all juju 1.x versions have only one configuration-setting command, i.e. "juju service set"? And on all juju 2.x versions, can we use the "juju config" command?
[14:07] rock_: the juju config command is only in 2.x
[14:09] rick_h_: Hi. That is what I am asking. "juju config" will work on all 2.x versions, right? And "juju service set" will work on all 1.x versions, right?
[14:12] rick_h_: "juju config" is not working on 2.0-beta15-xenial-amd64. It says the command is not recognized.
[14:15] rick_h_: Actually, we developed a "cinder storage driver" charm. We are preparing the README.md file in a detailed manner. We tested our charm in different ways with different combinations.
[14:17] rick_h_: So I want to know these things clearly.
[14:18] rock_: juju config works in 2.0-beta18 and forward
[14:19] rock_: so yes, 'juju service set' for juju1, and 'juju config' for juju2
[14:20] tvansteenburgh: Oh. Thanks.
[14:21] rick_h_/jrwren: Thank you
[14:30] rock_: sorry, was on the phone. Yes, what tvansteenburgh says. Thanks Tim for the assist
[14:37] rick_h_: No problem. Thank you.
[14:41] SimonKLB: what you'll see typically is the test/bundle will define the configuration and what we do is inspect the end-location file renderings, flags, etc. for the existence of those config values
[14:41] SimonKLB: but off the top of my head, i don't think amulet has any amenities to pull the charm config from the self.deployment object. I may be incorrect though
[14:43] hi, I posted this question to the internal juju channel without thinking about it before (so sorry about that, to whom it may concern), but I have a question about scopes that I'd really appreciate some help with. I put it here if anyone's interested in giving it a shot: http://askubuntu.com/questions/828732/can-anyone-explain-scopes-in-reactive-charms-to-me-please-juju-2-0
[14:45] lazyPower: so usually you can access the config values through the charmhelpers package via the hookenv, but when running an amulet test that's going to be different, right?
[14:46] correct, amulet is an external thing, it's poking at juju through the API and juju run
[14:47] yea, perhaps there is a better way of solving it, but currently i have the ports available as config values, and i need to know them when testing if the service is up and running during the amulet test
[14:48] perhaps i should put some default ports in a python module and override them with the charm config?
[14:48] that way i could still test stuff with the default settings and still let the user choose which ports it should run on
[14:49] but it feels a bit messy
[14:49] SimonKLB: i'd just do a little subprocess wrapper around `juju config`
[14:50] arguably that would be a handy addition to amulet
[14:50] SimonKLB: or you can just set the ports in the amulet test so you know what they are
[14:51] tvansteenburgh: yea, i'm just trying to refrain from having configuration options in multiple places
[14:53] SimonKLB: but it's a test. it's reasonable to set some config and then act on those values
[14:55] yea, that is the way i'm doing it right now, i was just curious if you could grab the default config values somehow and be rid of the extra configuration in the test
[14:57] SimonKLB: well you can deploy without setting config and it'll use the defaults. to actually retrieve the defaults though, you'd need to use `juju config`
[14:59] yea, it's the retrieving part i'm having trouble with, since i need some stuff from the config to access the charm - not sure if a subprocess wrapper or setting the config values i need "externally" in the test is cleaner
[15:01] SimonKLB: if it were me i'd do the latter
[15:01] tvansteenburgh: thanks, i'll go with that then!
[15:10] Does anyone know how I might be able to remove a github key by name? juju remove-ssh-key blah@github or blah@github/12345 or gh:blah all fail
[15:45] cory_fu, kwmonroe, kjackal: got some PRs for you: https://github.com/juju-solutions/interface-zookeeper-quorum/pull/5 and https://github.com/juju-solutions/bigtop/pull/46
[15:45] Together, they give us rolling restart automagic in the new zookeeper charm.
=== frankban is now known as frankban|afk
[19:47] good news cory_fu, bigtop zepp doesn't need hadoop-plugin. it just so happens that bigtop defaults the spark interpreter to yarn-client (which is why i thought it needed the plugin in the first place). we can override that with a proper spark master and make the plugin optional.
[19:49] petevg: do you have a build of the zookeeper charm that includes your PRs?
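To illustrate tvansteenburgh's [14:49] suggestion about retrieving charm config from an amulet test: a minimal sketch of a subprocess wrapper around `juju config`, assuming the Juju 2.x CLI. The helper name and the model argument are made up here, not part of amulet.

```python
# Rough sketch: read the effective value of one charm config option
# (defaults included) by shelling out to `juju config <application> <key>`,
# so the test does not have to repeat the values it needs.
import subprocess


def charm_config(application, key, model=None):
    """Return the current value of one config option as a string."""
    cmd = ['juju', 'config']
    if model:
        cmd += ['-m', model]
    cmd += [application, key]
    return subprocess.check_output(cmd).decode('utf-8').strip()
```

In a test this could be used as something like `port = int(charm_config('my-charm', 'port'))` after `self.deployment.setup()`, so the assertions act on whatever value the deployed application actually has rather than a copy hard-coded in the test.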
[19:50] kwmonroe: I caught an issue and I've been debugging it (think that I may have a fix), so I haven't pushed a build yet.
[19:50] kwmonroe: if this test that I'm running right now works out, though, I can push what I've got to bigdata-dev
[19:50] oh good petevg: i was fixin' to say that all the letters look right, but frankly, you and cory_fu were putting me to sleep with all that remote_ conversation babble.
[19:51] Heh. It's kind of sleep-inducing. I think that my mega comment explaining it probably makes for good bedtime reading.
[19:52] The bug is actually in the relation stuff: if you remove-unit the Juju leader, it still tries to orchestrate while it is shutting down, and then it throws errors because it doesn't have the relation data any more.
[19:55] hm, that sounds like a -departed vs -broken thing... as in, don't do conversation stuff during -broken because you don't have the relation data any more.
[19:55] Yep.
[19:55] kwmonroe: am I missing a design pattern that gets around it?
[19:56] Is it just @when_not('{relation-name}.broken')?
[19:56] no petevg, i just checked peers.py on interface-zookeeper.. it's reacting to -departed (which is good). if it were reacting to -broken, you'd be in trouble.
[19:56] Got it.
[20:02] question: why would the containers be so small that charm deploy fails? this just started happening with juju 2.0-rc1 and conjure-up 2.0 - openstack-novalxd - xenial
[20:02] kwmonroe: here's a charm for you: cs:~bigdata-dev/zookeeper-10
[20:03] thanks petevg!
[20:03] np!
[20:05] my containers appear to be using zfs this time and they didn't before
[20:09] hml: is your zpool out of space (sudo zpool list)?
[20:09] i seem to recall a juju ML post about the default size being small (like 10G)
[20:09] kwmonroe: I took all of the defaults when i did the lxd init
[20:10] kwmonroe: the biggest container is only 2G, i do remember something about 10G during the init but i don't have a log
[20:12] hml: speculating here, but i think this might be your fix: https://github.com/lxc/lxd/pull/2364/files - lxd init used to do 10g by default, and now it does something like 20% of your disk or 100G (whichever is smaller)
[20:13] tych0: any idea when that might make it into a lxd package? ^^
[20:14] kwmonroe: just looked at the pool - it's maxed. do you know if something changed recently? i've been deploying and reinstalling a lot in the past month and haven't run into this.
[20:15] kwmonroe: i think it will be released into yakkety on tuesday
[20:15] i'm not sure about backporting to xenial
[20:15] kwmonroe: is it easy to increase the zpool size? i have lots of extra disk space to give it
[20:20] hml: not sure if something changed recently, but if you weren't using zfs before and you are now, that might explain it. i think bootstrap asks if you want to use file-backed or zfs-backed containers.
[20:21] kwmonroe: hrm.. must have done something different. :-/ which is better to use? file-backed or zfs-backed?
[20:25] hml: small?
[20:25] hml: if by "better" you mean "new, shiny, fast", then zfs :)
[20:25] hml: are you using ZFS?
[20:25] hml: i don't know how to expand an existing zpool.. tych0, do you?
[20:25] hml: at this point, i just want something that works. :-)
[20:25] you can't
[20:26] you need to remove the lxc images
[20:26] and rerun lxd init
[20:26] make the zfs pool size bigger, i usually do like 100G
[20:26] stokach: i went back to a snapshot of my vm - i've been around this block too many times. :-)
[20:26] :D
[20:27] stockach: i don't care if it's new and shiny, i just need it to work. this config isn't going into production or the like
[20:28] hml: then just use the dir for the storage backend
[20:28] you still need to re-run lxd init
[20:28] stokach: i'll give it a go. thank you!
[20:28] yw
[21:13] kwmonroe: here's a version of the zookeeper charm that should avoid some redundant restarts: cs:~bigdata-dev/zookeeper-11 (I also updated the PR with the code from this charm.)
[21:19] ack petevg
[21:24] kwmonroe: the "redundant restarts" might be paranoia on my part. I refactored to prevent two similar routines from essentially executing the same code twice ... but it would only cause an extra restart if the timing was unfortunate. If you're using version 10, it's probably okay. Version 11 is just a little bit tidier.
[21:32] ... also, the issue would only happen after you'd removed a zookeeper unit, so you're unlikely to run into it in a demo.
[21:32] frankly petevg, i'm not versed enough in zookeeper to know what effect zk restarts have on connected services.. like what if spark/kafka/namenode are asking zk something and it restarts? do they ask again? melt? get the answer from /dev/random? ear-regardless, eliminating extra restarts seems good.
[21:33] kwmonroe: we do the rolling restart so that zookeeper handles that well.
[21:33] petevg: speaking of demos, you comfy enough to put zookeeper-11 in the spark-processing bundle? or shall we stick to the older zk for strata?
[21:34] kwmonroe: hmmm ... I'm not less confident of zookeeper-11 than I am of zookeeper-9
[21:34] heh, nm. it's friday afternoon. there's no way we vet zk-11 in time for strata.
[21:35] kwmonroe: also, to clarify, zookeeper will only restart when you add or remove nodes, and it has to restart. The "extra" restarts won't happen in the middle of normal use.
[21:36] kwmonroe: On the other hand, it might be nice to put zookeeper through a trial by fire ourselves, rather than wait for someone else to do it, post strata.
[21:36] mbruzek, lazyPower hi, i'm trying out the master-node-split bundle.. how do i get a client to connect to my cluster?
[21:36] awesome stuff, btw
[21:36] ... unless you've already finished a lot of testing on the bundle. (In which case, I'd stick with zookeeper-10 -- you're not going to run into the issue unless you're adding and removing a bunch of nodes on the fly.)
[22:15] cmars: mkdir -p ~/.kube && juju scp kubernetes-master/0:config ~/.kube/config && kubectl get nodes
[22:15] lazyPower, awesome, thanks
[22:16] cmars: that command is destructive if you're already connected to a kubernetes cluster
[22:16] so be mindful of blindly overwriting, it may have unintended consequences
[22:16] lazyPower, ack. all good, i'll tweak this & use the env var
[22:17] lazyPower, so i'm going to hack on the kubernetes-master layer. where can i get the resource it needs, if i deploy it locally?
[22:20] cmars: i wonder, can we fetch resources from the charm store?
[22:20] i don't think i've tried
[22:20] other than when juju deploy happens
[22:20] lazyPower, maybe, i don't know the incantations
[22:21] same
[22:21] 1 sec
[22:21] https://gist.github.com/935f03ce936bc221ca7cdcc4c974fbd7 - courtesy of mbruzek
[22:21] fetch a release tarball from github.com/kubernetes/kubernetes/releases
[22:22] we're good on deploy on latest beta10
[22:22] since beta3 was our first attempt?
[22:22] yeah, we're good in the range of betas from 3-10
[22:22] and 1.3.x
[22:23] lazyPower, ok, great!
[22:23] lazyPower, last question. if i set different relation data keys on each side of a relation, does that erase the keys i don't specify, or is it additive?
[22:24] wat
[22:24] using the reactive framework
[22:24] different relation data keys... meaning?
[22:25] cmars: If it is a different key, then it adds; if you set the same key then it would replace
[22:25] mbruzek, ok, perfect
[22:25] that
[22:25] never done a two-way handshaking relation
[22:25] cmars: are you messing with our interface? What are you adding?
[22:25] haha
[22:25] mbruzek, i'm not changing your existing interfaces, i'm writing a new one
[22:25] * mbruzek is worried all of a sudden
[22:26] don't worry, it's an "experiment"
[22:26] Well, good luck. I hope it is a success
[22:26] thanks. fun stuff :)
[22:26] will def share
[22:33] mbruzek, lazyPower i've got master-worker-split running in lxd, with the worker in a manually added kvm machine
[22:33] \m/
[22:33] awesome, awesome
[22:33] \m/,
[22:33] sweet
[22:33] good experiment
[22:33] cmars: word of caution. etcd will choke if you put it on an ipv6-only host
[22:33] known caveat
[22:34] s/etcd/the etcd charm/
[22:34] lazyPower, interesting, ok. i disable ipv6 on my lxd bridge, but good to know
[22:34] i just like lxd for developing, so much faster
[22:35] i hear ya
[22:35] cmars: Have you run any docker workloads inside the lxd yet? Can you confirm they work without adding a profile or something?
[22:36] mbruzek, i can't run the worker in lxd. that's why i manually added the kvm instance. i'm launching those with uvt-kvm and then using add-machine to attach them
[22:36] routing between the kvm and lxd subnets works
[22:36] oh yeah
[22:36] tried the docker profile, no luck
[22:36] cmars: that is why we split the master/workers, so we could use lxd
[22:36] for the control plane
[22:37] So in theory you could deploy --to lxd:
[22:37] mbruzek, in the worker, i get some read-only filesystem errors for stuff under /proc/sys/...
[22:37] specifically, https://paste.ubuntu.com/23222264/
[22:38] that's after adding docker to the worker container's lxd profile and restarting it
[22:38] oh well..
[22:38] cmars: I meant the master could be deployed to the lxd on the worker, yes?
[22:38] (where the worker is kvm)
[22:38] or whatever.
[22:38] ah.. sure could
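To illustrate the relation-data question answered at [22:25] above (a different key adds, re-setting the same key replaces): a minimal, hypothetical interface-layer sketch using charms.reactive conversations. The 'example' interface name, handler name, and keys are invented here, not from any existing interface.

```python
# provides.py of a hypothetical 'example' interface layer -- a sketch only,
# showing that set_remote() merges keys into the conversation's relation data:
# separate calls (or different keys set by each side of the relation) are
# additive, and only re-setting an existing key replaces its value.
from charms.reactive import RelationBase, hook, scopes


class ExampleProvides(RelationBase):
    scope = scopes.UNIT

    @hook('{provides:example}-relation-{joined,changed}')
    def joined_or_changed(self):
        conv = self.conversation()
        conv.set_remote('hostname', 'db.example.internal')  # publishes 'hostname'
        conv.set_remote('port', '3306')    # adds 'port'; 'hostname' is left untouched
        conv.set_remote('port', '13306')   # same key again: this call replaces the value
```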