=== natefinch-afk is now known as natefnich === natefnich is now known as natefinch === bruno is now known as Guest32213 === Guest32213 is now known as BrunoR [09:18] hi [09:18] has anyone ever tested a reactive charm on xenial ? [09:33] and I would like to remind you about my collectd subordinate: https://bugs.launchpad.net/charms/+bug/1538573 review :) [09:33] Bug #1538573: New collectd subordinate charm [09:37] I just found that Juju is a great tool. [09:38] fstyrell: you're late! ;) [09:39] haha,😃 [09:40] jacekn: http://review.juju.solutions/ you're not in the review queue [09:41] I am thinking of playing with Juju and ansible. [09:42] ah cool, well lazyPower was the guy that gave the talk about juju & ansible in Belgium this year [09:42] when he wakes up you should prod him about it :) [09:46] oh, thanks. Can i find the talk video on youtube ? [09:46] erm [09:46] should be able to [09:47] found it.😉 [09:47] https://www.youtube.com/watch?v=0eymk93lY8k [09:47] quicker than me :P [09:48] thanks, you are very nice. [09:48] I'm not, i'm just bored :P [10:00] magicaltrout: so what do I need to do to be there? I was told in the past that setting my bug to "fix committed" was good enough [10:00] jacekn: thats the $1m question, I don't know that either :) [10:00] magicaltrout: I've been trying to get that charm merged since January.... [10:30] axino: it should work, what's the problem? [11:37] marcoceppi: axino's problem is his charm is attempting to pull in charmhelpers from pypi, which isn't going to work in his environment. [11:38] marcoceppi: I think the problem is he specified to use a venv and not fall back to system packages in his layer options, but I'm not sure. [11:40] marcoceppi: sorry, not charmhelpers. setuptools. [11:41] 2016-03-16 07:52:42 INFO install Installing setuptools, pkg_resources, _markerlib, pip, wheel...
[11:41] 2016-03-16 07:52:42 INFO install Complete output from command /var/lib/juju/agents...-0/.venv/bin/python3 - setuptools pkg_resources _markerlib pip wheel: [11:41] 2016-03-16 07:52:42 INFO install Collecting setuptools [11:41] And then it explodes, because it can't collect setuptools [12:49] axino: stub: yeah, looks like you'll want to use both venv and system-packages options in the layer.yaml cory_fu ^^ [13:21] gnuoy: qq on https://code.launchpad.net/~gnuoy/charm-helpers/keystone_authtoken/+merge/289042 [13:21] lazyPower: 16gb of RAM, usage seems fine, it's IO that gets wailed on [13:21] lazyPower: I'll fire up the bundle again next daily and you can observe [13:39] "import amulet" yields an "ImportError: No module named 'path'" in python3 for amulet 1.13.0-0ubuntu1~ubuntu15.10.1~ppa1 ~ is this a missing dependency? [13:40] marcoceppi: ^ [13:41] BrunoR: it's being worked on [13:56] marcoceppi: can you give me a work-around or an estimate of how long this will take? [13:56] \_____________________o_________________/ [13:57] best answer ever? [13:58] hehe [13:58] rick_h__: marcoceppi someone who knows me and jacekn want to know how you get stuff that has been reviewed and rejected back into the review queue [13:58] BrunoR: 2 hours, pip install amulet into an activated python virtualenv for now [13:59] magicaltrout: move the bug to new or fix committed [13:59] fair enough [13:59] jacekn: dunno then, you're just in limbo somewhere :) [13:59] magicaltrout: this/next month new review queue where this will be streamlined [13:59] jacekn: ^^ [14:01] cool, *finally* got Saiku 3.8 cut [14:01] so i *am* going to get this fscking charm finished :P [14:07] magicaltrout shenanigans ;) [14:07] marcoceppi: yes I'm aware of that. Are you saying I just have to wait and there is no way around it? [14:08] jacekn: can you set the bug to new?
[14:08] marcoceppi: (I've been trying to get this charm into the charmstore since Jan) [14:08] might trigger the regrepping of the issue and pick it back up [14:08] done [14:10] marcoceppi: magicaltrout: there is also another thing that is problematic - because charmstore submissions can take months and we have to keep working I had to "freeze" that layer for submissions and I work on its fork. That fork is moving further and further away from what you are reviewing [14:10] jacekn: again, these are all growing pains as we wait for the new charm upload command and review queue [14:10] that goes away come the new charmstore [14:11] jacekn: I found the issue with the review queue, give me a min to address [14:11] marcoceppi: ok, I thought there was also a resources problem but if it's just the queue itself it's easy to solve [14:13] jacekn: the problem is the review queue is tied to the old charm store method of "everything in lp". going forward, you'll just submit a charmstore url, like cs:~jacekn/collectd-10 for review and you can continue to dev/rev, the review queue will be quicker to pick things up, quicker to test, and quicker to land fixes while also adding a lot of observability to you and charmers [14:13] to do this we're decoupling from the lp, like the charmstore is currently, but until the new charm command lands (hopefully this week) we can't really switch over or finish the new review queue [14:13] so we're in a weird transition state [14:15] axino, stub, marcoceppi: Sorry I'm late to the conversation, but does your charm layer have its own wheelhouse.txt? Also, did you do the initial charm-build on a xenial box? That error looks somewhat similar to https://github.com/juju/charm-tools/issues/117 though not exactly the same. [14:15] jacekn magicaltrout I've just kicked the reviewqueue with a large boot, in the next few hours it should be up-to-date (after it finishes crawling lp) [14:15] jacekn: in the meantime, I'll take a look and review your charm now!
[14:16] ta [14:18] thanks! [14:23] jacekn: I've just kicked off a test against AWS, pending that coming back as a pass it LGTM [14:23] marcoceppi: great! thanks! [14:28] tvansteenburgh: something just crossed my mind, bundletester isn't really packaged anywhere and now is the best time to deprecate charm test for bundletester [14:30] marcoceppi: okay. what would you like me to do? [14:30] tvansteenburgh: does that sound like a good idea? [14:30] or should we just package bundletester separate? [14:30] marcoceppi: separate from what? [14:31] charm-tools [14:31] yes [14:31] marcoceppi: package separate, then we can change the impl of `charm test` to call bundletester, or charmbox, or whatever [14:32] ah, yeah, true [14:35] hi marcoceppi, tvansteenburgh - os-charms aren't clear yet to break from juju test (need to be able to preserve enviro variables a la `juju test -p`) if i'm mistaken on that, thump me on the noggin. [14:35] +1 to that [14:37] beisner: well, adding an option to preserve env variables is easy enough, but I'll keep that in mind [14:37] marcoceppi, tyvm. for ref: https://github.com/juju-solutions/bundletester/issues/17 === rogpeppe2 is now known as rogpeppe [15:04] beisner, marcoceppi: bundletester already preserves all envvars by default [15:07] tvansteenburgh, sweet. thank you for confirming. [15:13] hey bdx, I'm going to mail the fiche author and let him know the charm has been promulgated [15:13] I figured it'd be cool to let him know [15:31] rick_h__: urulama the charm push command doesn't upload dot files? [15:31] marcoceppi: nope, due to size/etc it's not a replacement for git clone [15:32] rick_h__: wtf, .build.manifest is not a version control file [15:32] you can't blanket assume any dot file or directory is for vcs [15:35] rick_h__ urulama I can appreciate a whitelist of things like .git, .bzr, .gitignore, but there are files in charms that are dot files that are needed for operation and we're stripping them out.
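[Editor's note: the dot-file complaint above amounts to wanting a whitelist of known VCS artifacts rather than a blanket hidden-file exclusion. A minimal sketch of such a filter — hypothetical function name and pattern list, not the actual charm push code:]

```python
import fnmatch

# Hypothetical: exclude only known VCS artifacts, so operational dot
# files like .build.manifest survive the upload.
VCS_PATTERNS = ['.git', '.git/*', '.bzr', '.bzr/*', '.gitignore', '.bzrignore']

def keep_for_upload(path):
    """Return True if the archive member should be uploaded."""
    return not any(fnmatch.fnmatch(path, pat) for pat in VCS_PATTERNS)

print(keep_for_upload('.build.manifest'))  # True: kept, it's operational
print(keep_for_upload('.git/config'))      # False: VCS metadata, dropped
```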
[15:35] * marcoceppi opens bug [15:35] you need .charmignore! ;) [15:35] you gest, but i think thats a great idea ;D [15:36] s/gest/jest/ [15:44] hidden files look to have been excluded since 2011: https://github.com/juju/charm/commit/ddf98e0f66ebad7ccb9c41d409b64c1febbd4cf3#diff-59640b4e3d1c61f83bdd75d573589453R154 [15:45] jacekn: what. this is stupid, it only hides on the top level [15:45] don't abuse the wrong person :P [15:46] marcoceppi: fair enough, please open a bug. This is good feedback from testing things out [15:46] jrwren: ** [15:46] rick_h__: bug opened, I'll open one against charm as well [15:46] marcoceppi: appreciate it [15:55] i agree. it is stupid. [15:56] <3 didn't realize how long that's been a thing [15:58] I also agree, a .charmignore would be nice. [16:18] marcoceppi - sounds great, but the base layer is GPL-3'd as well :-\ [16:19] lazyPower: sure, we should change that cory_fu ^ [16:20] https://github.com/juju-solutions/layer-basic/issues/47 [16:20] i bug'd it [16:22] o_O I've been making all of my layers Apache licensed. I had no idea that basic was GPL. [16:23] thats what i considered then i saw the base layer was GPL'd, soooo [16:23] i figured i had to follow suit [16:23] bcsaller: I think you chose the original license for the basic layer. Any reason you chose GPL? [16:24] cory_fu: no, we can issue it under a new license at any time [16:26] lazyPower: I'm not seeing the comment that was the original context for you noting that the basic layer is GPL [16:26] cory_fu - ah thread origin started in the logstash layer - https://github.com/juju-solutions/layer-logstash/pull/11 [16:27] So I certainly have no objections to changing the license, and I'm sure none of the other committers or members of juju-solutions would object, but is there a formal process we need to follow? 
[16:27] technically afaik you need written consent from the copyright holders [16:29] magicaltrout: The copyright is assigned to Canonical [16:29] Would a PR from a Canonical employee count as written consent? :p [16:29] i vote ben saying we can at any time on IRC counts as the word of god [16:29] the only committers I see are canonical as well [16:29] Yeah [16:30] depends who commits I guess if its canonical, who cares [16:30] you guys have no ICLA in place for outside commits [16:30] not that I think you'll get into issues, but there's no formal joint copyright holding or migration of ownership [16:30] but IANAL, so just going from past experience [16:31] I'd rather err on the side of too permissive than make a choice that made people feel unable to use this code [16:34] cory_fu: for example I remember when PDI swapped from GPL to ASL, and they had to find every committer and get them to sign off on the license change as there was no CLA, just ad hoc contributions from the outside world :) [16:34] we did the same with Saiku and now have a CLA for committers to sign [16:35] and obviously in Apache land we all have to sign a CLA before committing anything === alexisb is now known as alexisb-afk === natefinch is now known as natefinch-lunch === alexisb-afk is now known as alexisb [18:31] Hi! I was wondering if anyone knew roughly when OpenStack charms for Mitaka on Xenial will be available (pre-releases / testing are fine)? [18:32] I see there are some mentions in the store: https://jujucharms.com/q/openstack?series=xenial, but 0 deploys on most of them still [18:34] anyone know how to mock os.lstat? I'm trying to instantiate posix.stat_result manually and it's trickier than i thought [18:39] nvm i just figured it out. posix.stat_result wants a list not a dict [18:40] matt_dupre: beisner or coreycb might be able to help.
We've been deploying them on Xenial for a bit, and I think that the last charm release in January helped carve out Mitaka support [18:43] hi matt_dupre - the "next" charms track the development version of the charms (beware, they have a lot of movement currently). https://jujucharms.com/u/openstack-charmers-next/ === natefinch-lunch is now known as natefinch [19:27] wolsen, can you have a cruise through these?: [19:28] https://review.openstack.org/#/c/293616/ [19:28] https://review.openstack.org/#/c/293608/ [19:28] https://review.openstack.org/#/c/293625/ [19:28] beisner, sure [19:28] wolsen, ta [19:28] beisner, np [20:19] Hi all, if 'juju status' just hangs how can the issue be diagnosed? [20:40] hi fritchie - i'm out shortly, but i'd do `juju status --debug` to get more info about where it's hanging. [20:41] fritchie: yup --debug is a good start. Also, out of curiosity, what is the environment you're using and juju version? [20:44] cory_fu - didn't you write a layer for templating to break that out of CH? [20:45] lazyPower: I did: https://github.com/juju-solutions/charms.templating.jinja2 [20:45] i just found it, gracias [20:46] lazyPower: The main reason I did that was because I needed to add the ability to provide custom filters & tests, and didn't want to keep CH alive [20:46] I completely respect this decision [20:46] so, i need to add this to my layers wheelhouse.txt correct? [20:46] charms.templating.jinja2>=0.0.1<=1.0.0 [20:47] Yes, though I started it at 1.0.0 [20:47] o, even better [20:48] marcoceppi, juju and maas on same node [20:48] for version, I am reinstalling now :-) [20:48] fritchie: does maas show the bootstrap node as still running? 
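[Editor's note: on the 18:34 os.lstat question — posix.stat_result (exposed as os.stat_result) is constructed from a 10-item sequence, not a dict. A sketch with made-up values, patched via unittest.mock:]

```python
import os
import stat
from unittest import mock

# The 10-item sequence is (st_mode, st_ino, st_dev, st_nlink, st_uid,
# st_gid, st_size, st_atime, st_mtime, st_ctime); a dict raises TypeError.
fake = os.stat_result((stat.S_IFLNK | 0o777, 1234, 0, 1, 1000, 1000, 11, 0, 0, 0))

# Patch os.lstat so the path is never actually touched.
with mock.patch('os.lstat', return_value=fake):
    info = os.lstat('/some/symlink')
    print(stat.S_ISLNK(info.st_mode))  # True
    print(info.st_size)                # 11
```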
[20:49] right now 5 hp dl380s, 2 nics each, 2nd nic being used for dhcp, no external router [20:52] cory_fu I just added that to my wheelhouse, re-built, and upgrade-charm'd, however i get this during baselayer updating from the wheelhouse - https://gist.github.com/chuckbutler/5b7a70a8a2e944cfa31b [20:57] I could swear that was fixed [20:58] lazyPower: That was fixed in 1.0.1, 20 days ago: https://github.com/juju-solutions/charms.templating.jinja2/commit/9df7ef886ec6e3f570f103422c0804501ca56d31 [20:58] Did your charm-build pick up 1.0.1 or 1.0.0? [20:58] beisner, my test failure on 287500 is quite odd. the unit test works locally but fails when i upload it [20:58] cory_fu 1.0.0 [20:59] let me try again, and see if it pulls in 1.0.1 [20:59] here's a question for you marcoceppi et al https://jujucharms.com/q/?text=pentaho [21:00] lazyPower: Yeah, pypi definitely has 1.0.1 [21:00] cholcombe, looks like dev_name != 'sda' re: http://logs.openstack.org/00/287500/4/check/gate-charm-ceph-osd-python27/ae7b39a/console.html#_2016-03-16_19_15_09_758 [21:00] how does the charm store come to its conclusions, for example, i write a charm, it's not recommended but there is nothing else pentaho related in the store [21:00] beisner, yeah i saw that. the odd thing is that works fine on my local box [21:00] i patched python's open function [21:01] magicaltrout: you should talk to rick_h__ about this one. I can't explain the search results stuff, but I know that it's something our web team is looking into. [21:01] how come i get 54(!) recommended charms [21:01] fair enough === natefinch is now known as natefinch-afk [21:01] beisner, maybe marcoceppi is there a better way to patch open in python?
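[Editor's note: on the "better way to patch open" question — unittest.mock.mock_open is the stock approach in Python 3. A sketch with a hypothetical function under test, not the actual ceph-osd test:]

```python
from unittest import mock

def read_dev_model(path='/sys/block/sda/device/model'):
    # Hypothetical function under test: reads a sysfs file.
    with open(path) as f:
        return f.read().strip()

# mock_open builds a file-like mock; patching builtins.open covers
# bare open() calls in Python 3 without touching the filesystem.
with mock.patch('builtins.open', mock.mock_open(read_data='HP DL380\n')):
    print(read_dev_model())  # HP DL380
```

For Python 2 (which these charms still targeted), the patch target would be `__builtin__.open` instead.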
[21:01] it just seems a bit loaded, i'm all for recommended charms clearly, but when the search must return basically 0, how it ends up so far down the page is a bit tedious [21:02] and hard for users who just want to see if anything pentaho related happens to be in the store [21:02] magicaltrout: I agree [21:02] cholcombe beisner how can I help? [21:02] rick_h__ - have we thought about talking to merlijin and team about a recommendation engine for the charm store? Might make our search story significantly better [21:02] cholcombe, destroy your venvs and re-run with `tox -e py27` ? [21:02] ok [21:02] rick_h__ - and i cant think of anyone better to ask than data scientists [21:03] magicaltrout - you're completely included in this list as well ^ [21:03] hehe [21:03] i can see it now [21:03] well sometimes i put the scientist hat on, but most of the time I hide it behind the sofa [21:03] "You just deployed wordpress. Can i interest you in VARNISH?" [21:04] it just strikes me that there should be some ranking going on, keyword relevance, recommended vs non recommended, times deployed, age, last updated etc [21:05] put it in a pot and give it a stir and come up with a search ranking algo that makes it better [21:05] lazyPower, can I pay to have my charm moved up the list? lol [21:05] magicaltrout - best SOW ever [21:05] charmwords(tm) [21:05] hehe [21:06] cholcombe - thats unethical [21:06] hehe [21:06] small merge or new charm? [21:06] i'm just kidding [21:06] hey i try to bribe with pizza all the time [21:06] hah [21:06] i guess people dont like bread and cheese as much as they used to [21:07] cory_fu: I don't know if the charm was built on xenial [21:08] axino: It may be that pip has different requirements on xenial, though I would expect it to not matter since it's a python lib. [21:08] cory_fu - pebkac in the wheelhouse.txt, i had >=1.0.0, so it stopped. >1.0.0,<=2.0.0 seems to have pulled in the right dep. [21:08] lazyPower: Odd. 
I would have thought that >=1.0.0 would still have pulled in the highest version [21:09] didn't appear to, i got 1.0.0 otherwise [21:09] cory_fu: "include_system_packages: false" in layer.yaml [21:09] cory_fu: perhaps I should change that ? [21:10] axino: It's certainly worth a shot, but shouldn't be necessary [21:10] axino: If it works, please let me know [21:11] cory_fu: it does have a wheelhouse directory with tar.gz in it, but no wheelhouse.txt afaics [21:11] axino: Is that the built charm or the source layer? I don't suppose you can provide a link? [21:12] the built charm [21:12] cory_fu: try https://launchpad.net/~canonical-sysadmins/canonical-is-charms/collectd/ ? [21:13] I doubt it though [21:16] cory_fu: https://code.launchpad.net/~axino/canonical-is-charms/collectd should work for you [21:16] axino: I can access the first one, actually :) [21:16] ho [21:16] well [21:17] axino: So, that's the built charm artifact, so I wouldn't expect it to have the wheelhouse.txt; that gets converted to the wheelhouse/ directory during build. [21:17] ok good [21:17] (Though, now that I think about it, it should probably preserve the combined wheelhouse.txt as well, just in case) [21:17] axino: That wheelhouse looks entirely reasonable, except that it has two versions of charms.reactive in it [21:18] does that include setuptools ? [21:18] looks like not [21:18] No, it doesn't. [21:19] However, if pip depends on it, it ought to. My guess is that there is a difference in how venvs are created between xenial and trusty and perhaps setuptools is included by default in the former. [21:20] cory_fu: https://pastebin.canonical.com/151985/ is the error I get, in case you didn't see it [21:20] Quite possibly, doing the charm-build from the source layer on a xenial box might fix it. [21:20] ok [21:22] axino: Ok, so that's not even getting to the wheelhouse. 
It's actually failing trying to create the venv [21:24] cory_fu: if you have any additional insight, I'll take it :) [21:26] beisner, i'm pushing up a patch set with more logging. It def works fine locally [21:27] beisner, i tried a clean docker env also and it's fine [21:27] axino: I think it's actually a problem with xenial. Creating a virtualenv apparently requires setuptools but the python3-virtualenv package isn't listing it as a dependency [21:27] axino: Normally, that wouldn't be too big of a deal, as it gets pip installed on demand when creating the venv, but in that network-restricted environment, it fails. [21:27] cory_fu: note that I have this behaviour even with python3-setuptools installed [21:27] axino: Ok, what about with python-setuptools? [21:28] cory_fu: same [21:29] axino: If you have a restricted network xenial env, can you try manually doing the following: [21:29] sudo apt-get install python3-virtualenv [21:29] python3-virtualenv testenv [21:29] (That last one might be virtualenv3) [21:29] Probably is [21:29] ok, trying now [21:30] And then also try `virtualenv --python=python3 testenv2` [21:30] cory_fu: I have no virtualenv3, and no python3-virtualenv binary [21:31] axino: Ok, I'm not surprised. Try the other one, please. [21:31] I saw the python3-virtualenv package in your error and got optimistic [21:31] :p [21:32] :) [21:32] cory_fu: yup that's failing the same way [21:32] timing out [21:32] axino: Yeah. The problem isn't reactive charms, per se. 
It's that virtualenvs don't work in network-restricted envs in xenial [21:33] axino: What *should* happen is that, if python{,3}-setuptools is installed, it should use setuptools from that and not try to pip install it [21:33] cory_fu: just tried virtualenv --python=python3 --system-site-packages testenv2, same error [21:34] hmm, Ubuntu/Debian install in dist-packages though [21:35] "virtualenv --python=python3 --no-setuptools --no-pip testenv2" now it's timing out on "installing wheel" [21:35] axino: Honestly, I don't know why virtualenv is trying to install anything from pypi. It shouldn't. [21:38] axino: What version of pip does xenial have? [21:38] cory_fu: 8.1.0-2 [21:39] cory_fu: !! virtualenv --python=python3 --never-download testenv2 <= that worked [21:39] of course this option is not in the man [21:40] http://ccnmtl.columbia.edu/compiled/released/new_features_in_virtualenv.html found it here [21:40] axino: That's terrible. If it can succeed with that option, that means it has the packages locally and so why did it even attempt to download them in the first place? [21:41] cory_fu: yeah. :( [21:42] axino: That option seems to work on trusty's virtualenv as well, so I'm fine with adding it to layer-basic [21:44] axino: I'm a little bit concerned about any repercussions that might have on trying to install the wheelhouse after that. I don't suppose you have the wheelhouse dir handy on that unit? [21:44] cory_fu: I have, this is the unit where the charm is failing its install hook [21:45] Ok, I was hopeful. Can you try doing: testenv2/bin/pip install -U --no-index -f wheelhouse wheelhouse/* [21:45] cory_fu: https://github.com/pypa/virtualenv/pull/412 looks like this may become the default ? or something, just skimmed over the comments [21:47] cory_fu: https://pastebin.canonical.com/152054/ LGTM [21:48] axino: Ok, great. 
[21:50] let me hack lib/charms/layer/basic.py on disk and juju resolved -r that install hook [21:55] cory_fu tvansteenburgh BrunoR amulet 1.14.0 fixes all issues for trusty and xenial [21:55] it's out, it's tested, it's working [21:55] yay, thanks marcoceppi! [21:56] axino: https://github.com/juju-solutions/layer-basic/pull/48 [21:56] cory_fu: cool ! thanks [21:57] marcoceppi: Awesome! Thanks [22:01] tvansteenburgh: bleh @ your pull req [22:01] ? [22:01] marcoceppi: you don't have to do another release [22:01] tvansteenburgh: Oh I won't [22:01] marcoceppi: i was just about to upload new docs and figured i'd do that first [22:01] tvansteenburgh: ack, lgtm [22:04] cory_fu: will this fire if a different function (like quorum_add on line 43) changes zoo.cfg? [22:04] https://github.com/juju-solutions/layer-apache-zookeeper/blob/master/reactive/zookeeper.py#L33 [22:04] marcoceppi: thanks, docs uploaded [22:05] cory_fu: i'm seeing stuff like "Invoking reactive handler: reactive/zookeeper.py:41:quorum_add" when a peer comes along, and i know that function updates zoo.cfg, but i don't see a subsequent firing of the method gated by @when_file_changed(zoo.cfg) [22:06] even when i go on that unit and type naughty words into zoo.cfg, i don't see that method being called on the next status_update. [22:06] kwmonroe: Yes. Any time that file changes, it should trigger that handler. However, combining @when and @when_file_changed may run you up against the open issue(s) with @when_file_changed [22:07] ah, yeah, you warned me of that [22:07] kwmonroe: Actually, even without the @when, I think you're running up against those issues. I'm pretty sure zkpeer.dismiss_joined() is removing a state, which can cause the @when_file_changed to get dropped [22:08] kwmonroe: https://github.com/juju-solutions/charms.reactive/issues/44 [22:08] yeah, i need to double check if we really need to do the dismiss_joined. i think we're doing that so we pick up subsequent peer joins.
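[Editor's note: the any_file_changed helper discussed here detects edits by comparing each file's content hash against the one recorded on the previous call. A self-contained sketch of that idea — not the charms.reactive implementation, which (roughly) persists the hashes in the unit's key/value store rather than a module-level dict:]

```python
import hashlib
import os

# Stand-in for the persistent unit data store used by charms.reactive.
_known_hashes = {}

def any_file_changed(paths):
    """Return True if any path's content differs from the previous call."""
    changed = False
    for path in paths:
        old = _known_hashes.get(path)
        if os.path.exists(path):
            with open(path, 'rb') as f:
                new = hashlib.md5(f.read()).hexdigest()
        else:
            new = None  # missing file: "changed" if it existed before
        if old != new:
            changed = True
        _known_hashes[path] = new
    return changed
```

Because it is a plain function call rather than a decorator whose trigger can be dropped when states are removed, checking `any_file_changed(['zoo.cfg'])` inside an already-invoked handler sidesteps the issue-44 behavior described above.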
[22:08] but dunno if that's really needed. [22:08] I doubt that it's necessary [22:09] ack cory_fu. is the "any_file_changed" helper safer than @when_file_changed whilst issue 44 gets worked? [22:09] Yes [22:09] roger dodger. thanks. [22:09] That will avoid the issue. [22:10] kwmonroe: I'd very much like to spend some time trying to fix @when_file_changed because it's a great decorator. But I have to EOD soon [22:30] is there a way to view the progress when importing boot images to a new MAAS install - seems to be taking longer than I would imagine [22:31] fritchie: on the images page in the webui? [22:32] fritchie: I know they were working on a new page there, not sure if progress is on the new one vs the one you're using [22:32] sorry rick, message was meant for maas channel, the wheel is just spinning [22:32] fritchie: understand, we try to help here too :) [23:50] Hey guys. Hi @arosales [23:50] has anyone seen this error? DEBUG juju.provider.common bootstrap.go:259 connection attempt for 172.16.100.101 failed: /var/lib/juju/nonce.txt does not exist [23:51] juju.network address.go:504 removing unresolvable address "dnjostack01.maas": lookup dnjostack01.maas: no such host