[00:00] what's the release schedule for push to charm store?
[00:03] magicaltrout - i propose an idea
[00:03] nvm i just myth-busted myself
[00:04] ah... nvm
[00:04] o/ c0s
[00:04] makes me cry in pain
[00:04] fortunately, I am not using it ;)
[00:05] i have it on good authority you're looking to do some puppet in charm layers?
[00:06] not exactly
[00:06] was rather wondering if this is possible, as all this operational knowledge has already been codified a number of times in either puppet or chef
[00:06] at the risk of sounding like a broken mp3 - Bigtop is the case in point ;)
[00:07] i didn't realise mp3s were written to spinning plastic discs
[00:07] yeah I know - I sound like a corner drug peddler ;)
[00:07] that's more like it!
[00:07] that's a quote from Futurama ;)
[00:07] about the mp3
[00:07] dunno if there are fans of that cartoon here
[00:08] as I complained to other Apache members today.... I have 6 1/2 hours of tutorials to write for ApacheCon..... I don't have time to watch TV (or cartoons ;) )
[00:09] lazyPower is ya man though
[00:09] well, I can help
[00:09] fortunately for you, that show ended years ago ;)
[00:09] i'm not super familiar with puppet, but there have been some charms submitted in puppet already - although they are from an OpenStack SDN vendor
[00:09] so the bindings in there may be helpful for us to look at, and extract for use in reactive charming
[00:10] are your bigtop puppet scripts done in hiera?
[00:10] that'd be great, but first this community needs to make up its mind how it wants to cooperate with Bigtop
[00:10] we are using hiera for the configuration, yes
[00:10] but it isn't like "it is done in hiera"
[00:10] yeah, i think we can extract some learnings there
[00:10] i'll reach out to apuimido and see if he's got 30 minutes or so to riff with me on pain points
[00:11] and i'll fire up charms.puppet i guess :)
[00:11] ideally, the knowledge part should stay the same and just be upstreamed
[00:11] that's how OSS works, you know ;)
[00:11] c0s: explain bigtop to me, it's like the hadoop reference stack right? so how does that differ from Apache Hadoop or what these charmer guys ship?
[00:11] +1 if you can get something like hiera-eyaml integrated so that we can have a more sensible way of shunting secrets around the place
[00:11] pretty cool. I would be happy to help with that
[00:12] Bigtop is the Apache BigData stack
[00:12] with binaries, deployment, integration testing, and the whole nine yards
[00:12] blahdeblah - we need to speak louder about secret management with juju when we're at sprints. it would be nice to even see a password config type so that at the bare minimum it renders a password field in the gui
[00:12] + a framework to develop your own stacks
[00:12] actually scratch that - we don't need to speak louder, we do need to be consistent about getting a bug filed for it and then beating the drum of that bug#
[00:13] * lazyPower goes to find it right now
[00:13] hopefully it helps magicaltrout
[00:13] lazyPower: I don't know anything about the GUI side; I've never used it. I just know that we end up having lots of secrets sitting around in places that are not encrypted at rest, and that's not a great look.
[00:13] yeah
[00:13] i agree
[00:14] https://bugs.launchpad.net/juju-core/+bug/1287661
[00:14] Bug #1287661: some config options should be considered secret
[00:15] yup thanks c0s. I've swung by bigtop over the years but left none the wiser! clearly i need to do more reading
[00:15] sure, no worries
[00:15] It has developed a lot lately.
[00:15] we have this cool feature where one can deploy a docker-based cluster for development/testing purposes with a single command
[00:16] and so on ;)
[00:16] * blahdeblah edits that bug report to reference something other than SSL certs, which aren't actually secrets
[00:16] :D
[00:16] and being a reference implementation, it really becomes a focal point where the standard for the Hadoop eco-system is getting created.
[00:17] like that www.odpi.org which I am helping to get off the ground now
[00:17] anyway, switching off - see y'all later.
[00:17] cheers o/
[00:17] thanks for catching up c0s
[00:17] ah yeah i recall that bit. cool.
[00:17] Adios c0s
[00:18] cheers
[00:19] lazyPower: I think there's also the whole issue of how they're stored, not just how they're displayed. To achieve repeatable deploys, we currently need to keep secrets files hanging around on the deployment server, which is also not desirable.
[00:19] blahdeblah - right, and projects like vault exist for this reason
[00:20] Haven't played with vault myself, but yeah; we need something like that
[00:20] * lazyPower nods
[00:20] i'm in the exact same boat
[00:20] i keep thinking i'm going to deploy it and poke at it, but haven't found the time yet
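As an illustration of the idea being floated here, a hedged sketch of fetching a secret from Vault at hook time instead of keeping a plaintext secrets file on the deployment server; the hvac client library, secret path, and key names are assumptions for illustration, not code from any existing charm.

    # A hedged sketch, not code from any existing charm: fetch a database
    # password from Vault at hook time so it never has to sit in a plain
    # secrets file on the deployment server. The secret path and keys
    # here are invented for illustration.
    import hvac

    def get_db_password(vault_url, vault_token):
        client = hvac.Client(url=vault_url, token=vault_token)
        secret = client.read('secret/myapp/db')  # returns None if absent
        if secret is None:
            raise RuntimeError('secret/myapp/db not found in Vault')
        return secret['data']['password']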
[00:29] magicaltrout: re push, marcoceppi is working on getting the latest version into Ubuntu 16.04 as we speak. Post that, I think he is going to announce to the list how folks can try the beta
[00:30] arosales magicaltrout tldr: tomorrow - https://lists.ubuntu.com/archives/juju/2016-March/006910.html
[00:36] marcoceppi: thanks
[00:39] marcoceppi - considering the store breaks from DVCS then - i guess i should round up the scripts and give this one more go eh? https://youtu.be/rz5DVte0SKg
[00:40] marcoceppi: does the charm command allow for setting an upstream url for a bundle?
[00:40] arosales: probably?
[00:40] my guess is yes
[00:40] marcoceppi: I may confirm that with urulama|afk
[00:40] marcoceppi: thanks
[00:40] arosales - the output of charm set shows it as true for a charm, unclear on bundle
[00:41] but i would imagine it has the same meta properties as charms
[00:41] marcoceppi - does this mean tomorrow i need to gut the PPA from the devel flavor?
[00:41] lazyPower: devel flavor of?
[00:41] charmbox
[00:41] ya I was just wondering, if someone wanted to fork or contribute to a bundle, how would they do so
[00:41] lazyPower: does it have juju/devel in there?
[00:41] it does, but it also has ppa:marcoceppi
[00:42] lazyPower: juju/devel will be a higher version than ppa:marcoceppi
[00:42] perfect
[00:42] no rush then
[00:42] ~beta vs ~rc
[00:43] then when xenial lands I'll backport into ppa:juju/stable
[00:43] solid, i just don't want stale cruft hanging around as :devel is going to supplant :latest when we release 2.0
[00:43] and :latest is going to move to a 1.25 tag, frozen in time
[00:47] lazyPower: ack
[00:47] lazyPower: going forward, esp for charm-tools, I want to kind of build a release schedule and standardized ppa setup for it
[00:47] i'm for this :+1:
[00:49] lazyPower: yeah, we're going to fall in line with the core 6 month cycle
[00:49] part of what i'm doing now is trying to bring the docker images into alignment with our other tooling so it's on a stable build cycle too, and i want to get better docs around its uses. it's a pretty versatile porky little toolbox. Once we get matt's lxd image and our new vagrant scripts in alignment with this effort we'll have a nice pipeline for everything
[00:50] all based out of the groundwork we have in the old vagrant image, now in charmbox, moving along to the other projects :)
[00:50] lazyPower: you going to be around for a code review in like 30 mins?
[00:50] sure
[00:50] i gotta take out the garbage, let me go do that and i'll do CR for ya
[00:56] cory_fu: kwmonroe: for whatever reason, realtime-syslog-analytics works in us-west-2
[00:56] magical reasons
[00:59] * thumper likes magic
[01:00] arosales - just welcomed cloudguru to the honorary containers-contributors team :)
[01:00] \o/
[01:01] thumper o/
[01:01] fancy seeing you here
[01:01] I'm always here
[01:01] just not always talkative
[01:01] that's the punchline :D
[01:01] I'd be careful, thumper probably likes trout too
[01:02] not a big fish fan actually
[01:02] \o/
[01:02] trout is too subtle for me
[01:02] unless it's rainbow
[01:02] or magical
[01:03] lazyPower: when are we next going to be at the same sprint?
[01:03] thumper - i have no clue but i'm totes ready for another coffee adventure
[01:05] nice, congrats cloudguru, and thanks for the contributions
[01:05] thumper are you coming to the next charmer summit?
[01:05] when and where?
[01:05] lazyPower: payloads, what are the valid values?
[01:05] marcoceppi : 1 sec
[01:06] thumper: late september, looking like pasadena ca
[01:06] * marcoceppi tries to get 93 patched for 2.0
[01:06] hmm...
[01:06] thumper: pasadena, ca mid-sept
[01:06] sounds interesting
[01:06] how mid?
[01:06] kiwipycon is 9-11 sep
[01:06] like 12-14 tentatively
[01:06] \o/
[01:07] hmm...
[01:07] it's possible then
[01:07] marcoceppi: type docker and type kvm are all that i'm aware of, but i'm pretty sure it was arbitrary
[01:07] how does one get to pasadena?
[01:07] lax?
[01:08] thumper: basically
[01:08] or sfo?
[01:08] thumper: lax for sure, for international
[01:08] is it a drive or flight from LAX?
[01:08] thumper: drive, 20 mins or so
[01:09] hmm...
[01:09] lazyPower: I'll just have kvm and docker for now, I think 2.0.1 will fill the gaps
[01:09] I'd really like to come...
[01:09] ack
[01:09] perhaps we could even fix the python-django charm to suck less
[01:09] like rewrite it or something
[01:09] thumper - i hear this thing called layers is all the rage
[01:10] yeah, I've had no time to play
[01:10] thumper: I fixed the python-django charm, by making a django layer
[01:10] :)
[01:10] * thumper knows nothing
[01:11] marcoceppi: does it use virtual envs?
[01:11] thumper: if you want it to, yes
[01:11] marcoceppi: and can you upgrade the django version?
[01:11] thumper: yes
[01:11] python 3?
[01:11] thumper: ootb
[01:11] sounds like I want that magic
[01:11] thumper: sounds like you want layers
[01:11] fuck
[01:11] * marcoceppi makes rainbow magic hand wave
[01:11] * thumper needs more hours
[01:11] what about nginx?
[01:12] and gunicorn?
[01:12] thumper: duh, that's another layer, and they work together, like magic
[01:12] for django
[01:12] ootb
[01:12] thumper: here's a stupid simple example of implementing a django site as layers with nginx
[01:12] so instead of having multiple charms, we just have one?
[01:12] thumper: no, we have layers
[01:13] and you assemble them to build a charm for your django application
[01:13] but you reuse the operational components that comprise that solution
[01:13] right, but you assemble it into a charm right?
[01:13] yes
[01:13] so instead of python-django, gunicorn, and my subordinate payload charm
[01:13] you just deploy "your charm"
[01:13] I add my payload into a layer, and use the django layer
[01:14] and it's django, gunicorn, nginx, as a charm
[01:14] right
[01:14] much nicer
[01:14] very much
[01:14] the glue code is pretty straightforward too
[01:14] thumper: http://bazaar.launchpad.net/~ubucon-site-developers/ubucon-site/ubucon-layer/view/head:/reactive/ubucon.py
[01:14] that's basically all it takes to get ubucon.org deployed
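For flavor, here is a minimal sketch of the kind of reactive glue code being described; the state names 'django.available' and 'mysite.deployed' are invented for illustration and are not taken from the linked ubucon layer.

    # An illustrative sketch of charms.reactive glue code; the layer and
    # state names here are assumptions, not the actual ubucon layer.
    from charms.reactive import when, when_not, set_state
    from charmhelpers.core import hookenv

    @when('django.available')
    @when_not('mysite.deployed')
    def deploy_site():
        # the (hypothetical) django layer has already set up django,
        # gunicorn, and nginx; this layer only adds the site payload
        hookenv.status_set('maintenance', 'deploying site payload')
        # ... fetch and install the application code here ...
        set_state('mysite.deployed')
        hookenv.status_set('active', 'site ready')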
[01:14] unfortunately I have so little time, it's negative right now
[01:15] thumper: I know. I feel it man
[01:15] but soon, one day, probably - maybe ;)
[01:15] thumper - i had no idea, back when we were spending crossover hours hacking on your project, that it was literally the only time you were ever going to have to hang out
[01:15] RIP fun
[01:16] yeah...
[01:16] hey man, it was fun while it was a thing :D
[01:16] model migrations won't be fully functional for 2.0
[01:16] and I've just been pulled onto MAAS 2.0 support
[01:16] eventually you'll come back for the layers, that's how we get ya :D
[01:16] until then i'll keep a pot of coffee on for ya and the light on the porch.
[01:17] mind magicaltrout on your way in
[01:25] lazyPower: just finishing up unit tests
[01:25] solid, still g2g whenever you are
[01:30] lazyPower: https://github.com/juju/charm-tools/pull/150
[01:31] lazyPower: should clear travis in a min
[01:32] Dude, your test skills
[01:32] they have like, level ++++++
[01:34] lazyPower: yeah, my copy-and-paste skills are on the up and up (shamelessly ripped from whoever - cory I think - made the storage tests for metadata yaml)
[01:34] I thought it looked suspiciously efficient
[01:34] +1 LGTM
[01:34] can i click the button?
[01:34] Because i've waited very patiently for this one \o/ and you just made my day :D
[01:34] lazyPower: go for it
[01:35] lazyPower: 2.0 is 100% complete. Time to cut a release
[02:35] lazyPower: you still around?
[02:35] You betchya
[02:36] lazyPower: check out ppa:juju/devel and update/upgrade ;)
[02:40] on it!
=== scuttlemonkey is now known as scuttle|afk
=== thumper is now known as thumper-afk
=== scuttle|afk is now known as scuttlemonkey
=== blahdeblah is now known as Killfirefox
=== Killfirefox is now known as blahdeblah
=== beisner- is now known as beisner
=== urulama|afk is now known as urulama
[07:08] falanx, that's being worked on atm
[07:08] zfs on wily was not as accessible
[08:24] gnuoy, charm-tools 2.0.0 just broke our gate for charms...
[08:24] ah, that's why your mps failed
[08:24] I did wonder
[08:24] gnuoy, basically the fix is to switch from "charm proof" -> "charm-proof"
[08:37] gnuoy, I'll raise reviews now...
[08:37] gnuoy, as nothing can get past verification atm
[08:37] kk, ta
[08:43] jamespage gnuoy if it's easier to change the dep, 1.11.2 is still on pypi, so you can pin charm-tools==1.11.2 to avoid 2.0 atm
[08:45] marcoceppi, meh
[08:45] marcoceppi, it's the same work either way
[08:45] ack, again, sorry about that
[08:59] gnuoy, generating reviews now - https://review.openstack.org/#/q/status:open+branch:master+topic:charm-tools-2.0
[09:00] excellent
[09:01] gnuoy, same change across all charms - the only one I could not do was lxd - rockstar - you'll need to pull in the same fix to your in-flight review
[09:04] marcoceppi, oh great - theblues has a == on requests 2.6.0
[09:06] which is creating problems on OSCI verification, but not in upstream
[09:06] that's odd
[09:26] jamespage, is it right that hacluster is not part of our charm collection on github?
[09:30] gnuoy, yes
[09:30] ack, ta
[09:31] gnuoy, I'm having to pin requests to 2.6.0 in the charm test-requirements.txt to avoid pep8 lint failures in OSCI for now
[09:31] we can drop that later - just want to unblock the gate right now
[09:31] ok
[09:33] gnuoy, i think the version of pip and pkg_resources that the osci lab uses is different to upstream and to my local xenial install, which causes some issues with entry point loading without that
[09:35] gnuoy, ah crap - I managed to re-create the reviews...
[09:35] gnuoy, I was using a pre-canned commit message - it misses the IDs that git review added the first time around
=== thumper-afk is now known as thumper
[09:54] gnuoy, I'm putting a lot of load on OSCI, but generally the lint failures I saw first time round have gone with the version pin
[09:54] kk
[10:29] gnuoy, anything with a +1 post 10:00 in the following list is good to land IMHO - https://review.openstack.org/#/q/status:open+branch:master+topic:charm-tools-2.0
=== scuttlemonkey is now known as scuttle|afk
[11:38] jamespage: yeah, that's what this is about
[11:39] marcoceppi, the fixed requests version in theblues?
[11:39] jamespage: yeah, see pm
[12:14] gnuoy, time to unblock the gate?
[12:14] https://review.openstack.org/#/q/status:open+branch:master+topic:charm-tools-2.0
[12:14] two failed due to amulet problems - but I suspect they have never passed...
[12:14] beisner, ^^
[12:14] jamespage, ack, looking
[13:20] dosaboy, ack re: lp bugs going directly to fix-released via upstream. aiui, it's a behavior change and by design. ref:
[13:21] https://review.openstack.org/#/c/248922/
[13:21] http://lists.openstack.org/pipermail/openstack-dev/2015-November/080288.html
[13:21] jamespage, gnuoy - unless i've missed one, the osci tests are all calling tox -e pep8, which i think should give you the same experience anywhere, shouldn't it?
[13:22] beisner, well apparently not
[13:22] beisner, as the upstream verification passed, but OSCI bailed on the fixed requests version in theblues....
[13:22] jamespage, if the tox ini is set to also use system packages, then definitely not. some of them i believe are.
[13:22] beisner, openstack adds in new pkg_resources and stuff which i suspect we don't
[13:24] jamespage, ah right. we're just doing whatever the tox.ini and *-requirements.txt files instruct
[13:39] jamespage, ack re: cinder-backup amulet not passing. can't raise a bug, but added a card for triage.
[13:42] beisner, I suspect the tox/pip/pkg-resource environment on the upstream ci is not the same as trusty...
[13:42] jamespage, right. imho we shouldn't be leaning on site-packages at all on unit/lint tests.
[13:43] beisner, tbh it's probably not - but which tox/pip you use determines which versions of stuff you get in the venv
[13:44] jamespage, lovely. so, build a venv, install a specific version of tox into it, then call tox?
[13:44] beisner, not quite
[13:57] jamespage, ack i see you pinned requests. looking at other upstream projects, they pin requests to a range of versions as well.
[13:57] beisner, that should be droppable once theblues sorts things out
[13:57] beisner, marcoceppi is dealing with that
[13:58] jamespage: https://github.com/juju/theblues/pull/20 and http://paste.ubuntu.com/15477938/
[13:58] yah gotcha
[13:58] jamespage: is that enough?
[13:59] marcoceppi, I commented generally on your pull request...
[13:59] jamespage: I missed that, thanks
[13:59] some balked at the idea of using a range
[14:00] and this is why I welcome, snappy, our new packaging overlord
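For reference, a hedged sketch of what range-pinning a dependency in setup.py looks like, as opposed to the exact == pin being discussed for theblues; the package name and version bounds here are illustrative only.

    # A hedged sketch of range-pinning, not the actual theblues setup.py;
    # the package name and bounds are illustrative.
    from setuptools import setup

    setup(
        name='example-lib',
        version='0.1.0',
        install_requires=[
            # accept any requests from 2.6.0 up to, but not including,
            # 3.0.0 instead of hard-pinning a single release with ==
            'requests>=2.6.0,<3.0.0',
        ],
    )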
[14:01] gnuoy, ok for me to +1 workflow those two rollup config changes?
[14:01] they are passing pep8 ok now with a rebase...
[14:02] jamespage, yep, please do
[14:03] gnuoy, that feels nice...
[14:03] I like ditching lots of old code...
[14:50] tinwood, would you mind taking a look at https://code.launchpad.net/~gnuoy/charms/trusty/hacluster/pause-resume/+merge/289911 if/when you have a moment?
[14:51] gnuoy, sure. I'll do it this afternoon.
[14:51] ta
[14:52] gnuoy, is hacluster not on github then?
[14:52] tinwood, apparently not, I was surprised too
[14:53] gnuoy, how strange. Oh well.
[14:53] gnuoy, this ceph-mon charm is quite awkward. I think I've got it right now - just doing the amulet tests on my bastion to see if pause/resume actually works!
[14:56] tinwood, hacluster is an odd ball too
[14:57] beisner, I accidentally posted a charm-recheck-full on https://review.openstack.org/#/c/294554/ after already having a successful run, and just my luck, the second run failed due to a deploy timeout which I assume is unrelated to my change. Do I have to do another charm-recheck-full to get rid of the canonical ci -1, do you think?
=== JoseeAntonioR is now known as jose
=== cos1 is now known as c0s
=== scuttle|afk is now known as scuttlemonkey
[15:27] Is there a best practice for filesystems on lxd/juju? There are many articles saying zfs is where the party is, but nothing definitive stating what has the best support vs performance.
[15:29] falanx: yes, running lxd init on xenial I think will ask about zfs as an option and that will help with the best setup for speed.
[15:30] falanx: I think jcastro has done some testing/etc and has a sweet setup
[15:30] hi gnuoy, i'll have a look and holler back shortly
[15:32] beisner, thanks much appreciated
[15:38] falanx: I've been pretty happy with lxd/zfs/juju
[15:40] gnuoy, yes we can ignore the results of that recheck. it was actually a nova-compute failure on xenial due to libvirt/locale bug 1560939.
[15:40] Bug #1560939: libvirt-bin fails to install on a fresh xenial server
[15:40] gnuoy, shall i push that now?
[15:52] jcastro rick_h_ thanks! we're looking to do that combo + openstack in production asap =)
[15:53] falanx: heh, I should note that filesystem recommendations for a work laptop and for your production openstack architecture are probably different. :)
[16:33] rockstar, ok, although tests did pass a day ago, there has been some movement in charm-tools requirements and tests now fail. all of the charms except lxd have been updated to satisfy and work around that issue. your change will need one more patchset.
[16:33] rockstar, here is an example of what needs to change: https://review.openstack.org/#/c/296312/ ... and that should let us merge this puppy.
[16:34] beisner: a re-sync should do it, right?
[16:34] Oh, no, no it won't. :)
[16:34] rockstar, i don't see that lxd has received those fixes at master
[16:35] On it.
[16:35] coolio
[16:54] cholcombe, I wonder if I could ask you a question about what pause-health does in the ceph-mon charm?
=== cos1 is now known as c0s
[17:11] jcastro, is there an up to date guide on using juju2 with lxd?
[17:19] balloons: https://jujucharms.com/docs/devel/config-LXD
[17:20] balloons: oh, this reminded me I need to PR and fix a few bits there
[17:21] jcastro, I have some thoughts on that page, lol
[17:22] but if that is it, I'm not seeing success. It seems like it's missing some bits, and I'm left confused on a few points
[17:22] the lxd-images stuff is all out of date
[17:22] all you should need to do is `sudo lxd init`
[17:23] balloons: to be fair though, all of this was in flux until the other day
[17:25] alexisb: do we know if juju still wants "ubuntu-trusty" as an alias for a container, or will it just use the automagic remotes LXD provides?
[17:25] I have to use the specific alias name?
[17:25] balloons, jcastro: guys, that page was heavily refreshed yesterday. i can't remove lxd-images b/c you can't use Xenial Juju machines without it
[17:25] jcastro: she's out, I think it looks for that specific alias so that it doesn't stomp on the existing lxd use
[17:25] jcastro, I guess I'd love to see your PR before I start firing off bugs
[17:25] you used to have to, not sure if it still does
[17:26] pmatulis: oh ok, so we still need that command then?
[17:26] I think that's a bug then? The intent was to just use what LXD provides without the user having to know that command.
[17:26] balloons, jcastro: unless the user is smart enough to know they need to create an alias for the Xenial image
[17:27] beta3 is supposed to create the alias automatically
[17:27] ah ok
[17:27] so it's still in flux then
[17:27] is there anything else with that page that is wrong or can be extended? lemme know or open an issue
[17:28] rick_h_: we don't care too much if juju2 is in the xenial beta2 image right?
[17:29] all we need after that is for bootstrap to work without needing --upload-tools and I'm good to go
[17:29] balloons: which part are you stuck on?
[17:31] mgz: we're having a hard time getting in also for charm-tools, so we said nope and are going to try as soon as beta2 is out.
[17:31] all the bugs are filed, we're just waiting in line
[17:34] anyone know why i would not have juju2 sub-command completion on a fresh Xenial openstack instance?
[17:35] scratch that, seems a reboot fixed it
[17:39] pmatulis: I am wondering if keeping around the "changes from lxc local provider" section in that page just adds length for something most people won't care about
[17:40] jcastro, I can't juju bootstrap. And the instructions left me questioning whether just installing juju got me the lxd provider or not
[17:40] also, do I need an environments.yaml file? How do I create a controller for lxd?
[17:40] jcastro: yes, that's debatable
[17:40] https://jujucharms.com/docs/devel/controllers-creating
[17:40] you do not need an environments.yaml
[17:41] balloons: you're not reading the page :D
[17:41] after you have it installed, all you should need to do is `juju bootstrap blah lxd --upload-tools`
[17:42] the style of the docs is not conducive to "ok you've finished this page, go to this page next" because there's no pagination
[17:43] perhaps once we have the final changes in from the tool we should consider just doing a quick and dirty summary up top that gets a controller fired up
[17:43] and then while that is happening the user can read all the details below
[17:44] jcastro: maybe you're looking for a tutorial or yet another getting-started page?
[17:44] jcastro: Juju is complex and it's very difficult to have a running story that fits everyone's purpose
[17:45] I can't fix the get-started page until 2.0 is out though
[17:45] and it's not difficult, we're just telling people things in the wrong order
[17:46] jcastro: there are plans for a much-needed "architectural overview" page replete with diagrams and core concepts, even arrows, but there is so much to do
[17:46] yeah it's just like, the instructions have CLI commands in them
[17:46] and then at the end of the section it's like "xenial comes with lxd by default"
[17:47] what's wrong with saying that?
[17:47] actually, let me just ask luca if we can start on the 16.04 version of the get-started page
[17:47] well, all the commands above that are explaining how to install and configure lxd
[17:48] install on Wily maybe
[17:48] and LXD does not require any configuration, unless you want to use network stuff or ZFS
[17:49] maybe if you give me a specific example
[17:49] I'm just saying we can trim most of that entire section out and make firing up the controller more upfront
[17:49] like a TLDR version
[17:50] well, these are the definitive/upstream docs for Juju. everything cannot be a TLDR
[17:51] a TLDR/getting-started/quickstart should be separate
[17:51] well, I had to not use lxd-images (that fails), and then starting a container manually failed. So not really juju's fault I guess. But the bootstrap command didn't recognize lxd as a provider, for instance
[17:51] anyways, are you planning to swap the 'getting started' guide to using lxd for 2.0?
[17:52] balloons: dunno what's going on over there. sounds like LXD itself is not working
[17:53] I don't think we can swap it for LXD on get-started, the get-started is supposed to work on all OSes afaict.
[17:53] pmatulis, right, so I'm not blaming juju for that at all. But as I said, the bootstrap command didn't work, and I installed juju-local to maybe fix it -- but the page doesn't mention that
[17:54] I know with juju2 it's changing, so if nothing else I can say the page is confusing at the moment
[17:54] and I would like to play with lxd + juju2 locally, however that may be possible
[17:54] wait, are you trying to use juju1 with lxd?
[17:56] balloons: yeah, you should be using Juju 2
[17:58] and don't get me started on the getting-started page :)
[17:58] jcastro, I wasn't, but again, those pages make things confusing. Also, I can't get juju2 in juju/stable (of course not), but the page mentions it
[17:59] yeah
[17:59] you're in the middle of an awkward transition
[18:00] the page is being written for the future while the commands are still being written
[18:02] balloons: all the good information on how to use juju2 is in the release notes and not the docs
[18:02] like how to list-clouds, the new credentials stuff, etc.
[18:06] balloons: I can walk you through lxd in a hangout after I'm done with this meeting I'm on now
[18:15] balloons afaik you should be good on the latest beta from the ppa
[18:15] juju bootstrap dev-lxd lxd --debug is about all you need once you have a lxd trusty image
[18:15] balloons: yep, i concede there is confusion (no juju2 in stable) but, given the resources we have, it was decided to write for the future. this is a devel branch after all
[18:16] (the software and the docs are not released)
[18:16] pmatulis, yes I agree. Write towards what is landing (and there is the header stating as such)
[18:38] === cos1 is now known as c0s
=== redir is now known as redir-lunch
[19:17] nothing under the kilt?
=== freyes_ is now known as freyes
=== bogdanteleaga_ is now known as bogdanteleaga
=== kadams54_ is now known as kadams54
=== mup_ is now known as mup
[19:22] pmatulis: it was feeling a bit breezy there for a second, that's for sure
=== redir-lunch is now known as redir
[20:16] c0s: So, the main problem that admcleod- was talking about earlier when he mentioned multiple namenodes is that there is a bug in Juju leadership in the current stable (1.25) where, if you spin up and then tear down two NNs, then spin up two more without re-bootstrapping the entire environment, a leader isn't selected.
[20:17] I haven't encountered it personally, so I can't really give any more details
[20:17] cory_fu: I am a bit confused: how is Juju leader election related to ZKFC leader election for HDFS HA?
[20:17] But from what he was saying, the HDFS HA should be pretty functional
[20:18] c0s: He's using juju leadership to ensure that only one NN tries to do the formatting and other logic that apparently should only happen once. Once it's up and configured for HA, juju leadership is irrelevant
[20:18] cory_fu: oh, that's the juju way of handling the distributed locking. Got it.
[20:18] Right
[20:19] cory_fu: by the way, here are the Bigtop puppet recipes on the topic: https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop/manifests/init.pp#L556 and all the way to line 607
[20:19] admcleod-: ^ for reference when you sign back on
[20:20] I will add this to the review notes as well. And thanks for the explanation: now I can focus on the HDFS HA part ;)
[20:20] c0s: I haven't looked at the HA branches in much detail recently, so I can't speak to it other than what's been explained during the dailies
[20:20] Cool. Thanks
[20:30] cory_fu: speaking of solving the distributed lock issue
[20:30] what if the active and standby NNs know about their roles from the beginning?
[20:30] This way, the filesystem formatting can only be done on the primary node, by simply checking against the namenode role.
[20:31] How would they know their role?
[20:32] when you deploy the services you can arbitrarily choose one to be the active and another to be standby
[20:32] here's how we do it
[20:32] Here's the link to Bigtop's puppet recipes to set up HDFS HA: http://is.gd/d0cWel from line 556 all the way to line 607
[20:32] scratch that
[20:32] https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop/manifests/init.pp#L639
[20:33] we always pick the 1st node to carry on as the active, and to hell with that ;)
[20:33] this is our choice
[20:34] because the same recipe code is getting executed on all the namenodes, with such a check we guarantee "only one" execution
[20:34] because otherwise we'd have to expose an internal HDFS concern up a layer to the orchestrator, and that makes the implementation harder, of course
[20:36] So, that's basically what we're doing with the Juju is_leader check. The "lowest numbered unit" approach would work, and is what charms used before is_leader was available, but is_leader is generally considered better (barring the bug we're seeing)
[20:37] In the end, it basically works out to the same thing, and in theory should actually be simpler since it's a simple boolean check (and the Juju controller decides on the leader)
[20:37] Also, the bug we're hitting with is_leader really only affects iterative development use. Normal usage wouldn't be affected by it
[20:38] I feel like there was a stronger reason why we moved away from "lowest unit" for this specific charm as well, but I can't recall off the top of my head
[20:39] ok, that makes sense. Thanks cory_fu for the explanation - learning a bit more each hour ;)
[21:03] new blog post about juju posted here: http://blog.emccode.com/2016/03/23/storage-operations-with-juju-charms/
[21:04] specifically, exploring what's out there for attachable, detachable, persistent storage for charms
[21:04] (spoiler: not much.)
[21:05] cory_fu: do you know if the bug for the leader election has been logged somewhere?
[21:05] I have a feeling that it might be not a bug, but rather a limitation of ZK
[21:06] if you look here https://aphyr.com/posts/291-call-me-maybe-zookeeper all the way to the Recommendations section, it says
[21:06] "Also keep in mind that linearizable state in Zookeeper (such as leader election) does not guarantee the linearizability of a system which uses ZK. For instance, a cluster which uses ZK for leader election might allow multiple nodes to be the leader simultaneously..."
[21:06] And that guy knows what he's talking about ;)
[21:10] c0s: So, this doesn't actually have anything to do with ZK, only the Juju is-leader command. As I understand it, Juju's is-leader tool is supposed to return True for at least one unit at all times, except for a somewhat brief fail-over period when the leader unit goes away. The problem is that Andrew is seeing normal behavior for units 1 and 2, but not 3 and 4, after removing the service entirely and redeploying it
[21:10] Again, for a new service coming up, at least one unit (the first to come up) should see is-leader return True even before running any charm code
[21:11] oh, I see. Now I think I got it - I was misled a bit by the presence of ZK in HDFS HA ;)
[21:11] I assume there was a bug filed against core, but I'd have to go looking for it
[21:11] no worries
[21:11] so, juju doesn't use ZK under the hood, after all?
[21:11] Yeah, there are a couple of orthogonal ideas of "leadership" going on
[21:12] k, thanks!
[21:12] No, the controller just picks one of the units as the leader and records that in Mongo. With the current single controller, something like ZK isn't needed at all.
[21:13] c0s: Also, someone in #juju-dev could give a much more accurate explanation of how Juju's leadership code works
[21:13] my services seem to be running but workload state shows as unknown when I run juju status.
[21:13] But from a charm's perspective, we're supposed to be able to rely on is-leader telling us if we're leader or not, and (almost) always having a leader
=== skay_ is now known as skay
[21:14] cory_fu: thanks, this really helps!
=== skay is now known as Guest88295
[21:15] the fact of the single controller skipped my mind.
[21:15] And of course, we're really only relying on that to ensure that only one unit runs the HA initialization logic.
=== Guest88295 is now known as skay
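To make the leader-guarded, run-once pattern described above concrete, here is a minimal sketch; is_leader, leader_get, and leader_set are real charmhelpers calls, but format_namenode() and the 'hdfs-formatted' key are hypothetical placeholders, not the actual charm code.

    # A sketch of the run-once pattern under discussion; format_namenode()
    # and the 'hdfs-formatted' key are placeholders for illustration.
    from charmhelpers.core import hookenv

    def maybe_format_hdfs():
        # the Juju controller elects exactly one unit as leader
        if not hookenv.is_leader():
            return
        # leader settings are shared with the peers, so the expensive
        # one-time step is skipped even after a leadership change
        if hookenv.leader_get('hdfs-formatted'):
            return
        format_namenode()  # placeholder for the one-time init
        hookenv.leader_set({'hdfs-formatted': 'true'})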
[21:42] http://www.oreilly.com/programming/free/how-to-make-mistakes-in-python.csp?imm_mid=0e1f49&cmp=em-prog-free-lp-ostx16_nem4_mistakes_in_python
[21:42] it's like O'Reilly knew I was hacking some python badly......
[23:37] Mayyybe someone knows the answer to this... i need to mock/patch a private method on an object. my google fu is failing me on finding the method to do so, any helpful pythonistas in the crowd?
[23:51] lazyPower: don't do that. :]
[23:51] jrwren - just leave the private method untested?
[23:52] lazyPower: also, you can just do that
[23:52] private in python is just a convention
[23:52] well, thing is - i need to patch it as it's kind of the core dispatch method in the class
[23:52] but generally, you only want to test interface points, so not private helpers
[23:52] i guess i could move it elsewhere and just patch it there
[23:53] nebulous questions without context
[23:53] * lazyPower resumes test hacking
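Since "private" in Python is only a convention, an underscore-prefixed method can be patched like any other attribute. A minimal sketch with unittest.mock (the mock backport on Python 2); Dispatcher and _dispatch are invented names for illustration, not code from the charm in question.

    # 'private' is just a convention, so patch.object works on an
    # underscore-prefixed method like any other attribute.
    from unittest import mock  # 'import mock' on Python 2

    class Dispatcher(object):
        def _dispatch(self, event):
            raise RuntimeError('talks to the real world')

        def handle(self, event):
            return self._dispatch(event)

    with mock.patch.object(Dispatcher, '_dispatch', return_value='ok') as m:
        assert Dispatcher().handle('ping') == 'ok'
        m.assert_called_once_with('ping')

The one wrinkle is double-underscore names, which get name-mangled, so a __dispatch method would have to be patched as _Dispatcher__dispatch.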