catbus1 | Hi, I am using juju-deployer to deploy workloads in LXC containers. The containers are all created and started in the RUNNING state, but some of them got IP addresses and some didn't. For those without IP addresses, there is no corresponding /var/log/juju/machine-#-lxc-#.log. | 00:23 |
catbus1 | How do I find out why those containers didn't get an IP? | 00:24 |
catbus1 | http://pastebin.ubuntu.com/15020637/ | 00:36 |
catbus1 | There are enough IP addresses in the DHCP pool. | 00:40 |
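A few host-side checks that may help narrow this down, assuming the standard lxc 1.x tooling on the host; the container name below is illustrative:

```sh
# Confirm container state and any assigned IPs as lxc sees them
sudo lxc-ls --fancy

# Inspect interfaces inside a container that got no address
sudo lxc-attach -n juju-machine-1-lxc-0 -- ip addr show

# Look for the DHCP exchange (or its absence) in the container's syslog
sudo lxc-attach -n juju-machine-1-lxc-0 -- grep -i dhclient /var/log/syslog
```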
apuimedo | jamespage: the three charms have been updated | 00:47 |
apuimedo | jamespage: I put https://bugs.launchpad.net/charms/+bug/1453678 back to new so that you see the new message I wrote | 00:54 |
mup | Bug #1453678: New charms: midonet-host-agent, midonet-agnet, midonet-api <Juju Charms Collection:New> <https://launchpad.net/bugs/1453678> | 00:54 |
apuimedo | I couldn't assign it to you somehow | 00:54 |
apuimedo | I'll not be reachable tomorrow, public holiday | 00:54 |
=== natefinch is now known as natefinch-afk | ||
=== med_ is now known as Guest76507 | ||
jamespage | dosaboy, your MP's need to be synced from lp:charm-helpers for the source configuration stuff (just spotted that) | 08:41 |
dosaboy | jamespage: they are already synced | 09:18 |
dosaboy | jamespage: in fact i just re-synced and there was still no diff | 09:18 |
jamespage | dosaboy: | 09:18 |
jamespage | -branch: lp:charm-helpers | 09:18 |
jamespage | 6+branch: lp:~hopem/charm-helpers/lp1518975 | 09:18 |
dosaboy | jamespage: hmm that must have crept through, lemme check | 09:19 |
dosaboy | jamespage: ah, it's just the cinder MP, i'll fix that one | 09:20 |
jamespage | dosaboy, hah - that was the first I looked at :-) | 09:20 |
jamespage | dosaboy, if they are passing please merge away - I have a few xenial fixes to follow up with once you have that landed | 09:21 |
dosaboy | jamespage: sure, actually there are a couple of amulet failures, heat and nova-compute | 09:21 |
dosaboy | jamespage: heat is a test that was previously broken but i'm gonna see if i can fix | 09:22 |
dosaboy | nova-compute not sure yet | 09:22 |
jamespage | dosaboy, I can re-run if need be | 09:22 |
dosaboy | jamespage: k i'll ping when ready | 09:22 |
gnuoy | dosaboy, jamespage, got any time for https://code.launchpad.net/~gnuoy/charm-helpers/keystone-v3-support/+merge/285689 ? | 09:34 |
dosaboy | jamespage: gonna merge all but heat until it passes since the rest are +1 now | 09:42 |
dosaboy | gnuoy: maybe soon... | 09:42 |
gnuoy | ta | 09:43 |
jamespage | gnuoy, maybe in a bit - on a half day today and wading through midonet reviews atm | 10:14 |
gnuoy | ok np | 10:14 |
wesleymason | Well here's an odd one | 11:25 |
wesleymason | it seems every time I've invoked charm build since the last time I rm -rf'd the built charm, I've ended up with an embedded build inside the last one | 11:26 |
wesleymason | so I have: trusty/errbot/trusty/errbot/trusty/errbot/trusty/errbot/trusty/errbot/ | 11:26 |
wesleymason | 5 levels deep | 11:26 |
wesleymason | bet if I call charm build again I end up with a 6th | 11:26 |
wesleymason | That can't be expected behaviour, right? | 11:27 |
wesleymason | I'm guessing it's not blacklisting the trusty and deps dirs when building in "." (as opposed to a JUJU_REPOSITORY) | 11:30 |
wesleymason | https://github.com/juju/charm-tools/issues/106 | 11:35 |
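Until issue #106 is fixed, a workaround consistent with wesleymason's diagnosis is to build into a JUJU_REPOSITORY rather than "."; the paths below are illustrative:

```sh
# Build into $JUJU_REPOSITORY so the output can't nest inside the source
export JUJU_REPOSITORY=$HOME/charms
cd ~/src/layers/errbot   # layer source checkout (path is illustrative)
rm -rf trusty deps       # remove the builds already embedded in the source
charm build              # output lands in $JUJU_REPOSITORY/trusty/errbot
```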
jamespage | thedac, thanks for the reviews - I've landed swift-storage, nova-* are updated and I've added neutron-gateway which I missed before | 12:10 |
jamespage | I've disabled mitaka tests for now; we can re-enable once all of those land | 12:10 |
icey | Ursinha: do you by chance have a bundle of what you deployed yesterday that ran into #1517940 ? | 13:33 |
mup | Bug #1517940: workload-status is wrong <landscape> <openstack> <sts> <ceph-radosgw (Juju Charms Collection):New> <https://launchpad.net/bugs/1517940> | 13:33 |
Ursinha | icey: let me check | 13:40 |
icey | Ursinha: I just had it hang for a while at blocked: mon relation missing, and then start executing | 13:42 |
Ursinha | icey: so... we don't do bundles | 13:45 |
icey | thought that may be the case, no worries | 13:45 |
icey | do you have any special config on the mon or radosgw side? | 13:45 |
icey | (feel free to pastebin them and PM me) | 13:45 |
icey | Ursinha: | 13:45 |
Ursinha | icey: done :) | 13:55 |
icey | thanks Ursinha, will keep digging | 13:56 |
Ursinha | icey: thanks for looking into that | 13:56 |
Ursinha | icey: out of curiosity, how long did you wait until the relations settled? | 13:58 |
icey | 5 minutes? on AWS | 13:59 |
icey | will try to test on OpenStack in a bit | 13:59 |
Ursinha | ah, right, | 13:59 |
sparkiegeek | icey: the unit should transition out of blocked as *soon* as it sees the relation is established - it should go into maintenance state when it's busy doing things but no longer requires action from the user | 14:01 |
icey | sparkiegeek: agreed, and that's what I saw happen | 14:02 |
sparkiegeek | 5m is way too long for the charm to spend /noticing/ that it has the relation (even if it has a bunch of stuff to do and the work for that relation is still ongoing) | 14:02 |
icey | but it did take a few minutes between adding the relation and the state changing / hook running | 14:02 |
icey | sparkiegeek: I'm curious about how long juju took to trigger the hook execution | 14:03 |
icey | I've seen hook execution take a long time to start when juju is fairly heavily loaded before | 14:03 |
icey | and not just with the radosgw charm sparkiegeek | 14:04 |
icey | either way, need to do more digging though | 14:04 |
* sparkiegeek nods | 14:04 | |
sparkiegeek | FWIW we've only seen this with ceph-mon charm, none of the others | 14:04 |
sparkiegeek | hence the belief it's a charm bug :) | 14:04 |
icey | I've seen it with other charm relations, just not for /that/ long | 14:05 |
sparkiegeek | sorry, I mean with radosgw charm relation to ceph-mon | 14:07 |
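To measure how long juju takes to dispatch the hook, one could watch the unit's log while adding the relation; the unit name below is illustrative:

```sh
# Stream the unit's log to see when relation hooks are actually dispatched
juju debug-log --include unit-ceph-radosgw-0

# Or tail the log file on the machine itself
juju ssh ceph-radosgw/0 'tail -f /var/log/juju/unit-ceph-radosgw-0.log'
```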
jcastro | http://www.jorgecastro.org/2016/02/12/super-fast-local-workloads-with-juju/ | 14:23 |
jcastro | jamespage: ^^^ | 14:23 |
jcastro | I set it all up yesterday | 14:23 |
rick_h__ | jcastro: <3 | 14:24 |
jamespage | jcastro, nice... | 14:24 |
rick_h__ | jcastro: safe to share out or still in editing? | 14:25 |
jcastro | share like the wind | 14:28 |
=== cherylj_ is now known as cherylj | ||
stub | icey, sparkiegeek: Was the hook you expected to be triggered actually triggered, or was it the update-status hook? If something was messed up, such as a hook missing executable permissions, the update-status hook kicks in every five minutes or so and can hide the problem. | 14:47 |
icey | stub: the hook is a relation joined hook | 14:49 |
icey | and the hook usually runs fine | 14:49 |
icey | occasionally, it seems like the hook doesn't run (or just takes forever to run) | 14:50 |
icey | well, it's a relation-changed and relation-joined so it should have been run | 14:50 |
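If stub's missing-executable-bit theory applies, a quick check on the unit might look like this; the unit name and charm path are illustrative:

```sh
# Hooks without the executable bit are skipped; list permissions in the
# deployed charm's hooks directory
juju ssh ceph-radosgw/0 'ls -l /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/'

# Restore the bit if anything is missing it
juju ssh ceph-radosgw/0 'sudo chmod +x /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/*'
```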
stub | Hooks running on subordinates block other hooks running on the same unit, which might apply here | 14:50 |
icey | both of the sides of the relation are primary charms | 14:51 |
icey | no subordinates deployed | 14:51 |
stub | I think juju run might block hooks too | 14:51 |
icey | no juju run | 14:51 |
icey | juju deploy x3 | 14:51 |
icey | juju add-relation(s) | 14:51 |
icey | one of the relations never seems to get related | 14:51 |
icey | Ursinha: correct me if I'm wrong | 14:52 |
Ursinha | icey: it's like the relation exists but isn't relating :) | 14:53 |
Ursinha | I have to remove it and re-add it | 14:53 |
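A rough reconstruction of the sequence being described; the charm names and unit count are assumptions based on bug #1517940:

```sh
# Deploy the two sides and relate them (names are assumptions)
juju deploy -n 3 ceph
juju deploy ceph-radosgw
juju add-relation ceph-radosgw ceph

# The workaround Ursinha describes: remove the stuck relation and re-add it
juju remove-relation ceph-radosgw ceph
juju add-relation ceph-radosgw ceph
```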
icey | jcastro: already top 10 on HN | 14:54 |
jose | freyes: ping | 15:13 |
freyes | jose, pong | 15:13 |
jose | freyes: hey! I have a quick question on a merge you proposed | 15:13 |
jose | have a couple mins to figure this out? | 15:13 |
freyes | jose, sure, which one? | 15:13 |
jose | freyes: https://code.launchpad.net/~freyes/charms/trusty/memcached/lp1525026/+merge/281254 | 15:14 |
jose | there's a test in there, test 20. it checks for the 'public-address' on the instance and makes sure it's the same on the memcached.conf file, however, could it work with the private address instead of the public one as well? | 15:14 |
freyes | jose, right, that is failing for AWS because the replication is configured over the private address; I changed the test to use private-address, but the problem is that the sentry doesn't have 'private-address' | 15:16 |
jose | I thought it did... | 15:16 |
freyes | I have to dig in it yet, not sure why that happens | 15:16 |
freyes | yup, I thought the same | 15:16 |
jose | marcoceppi: do sentries in amulet have private addresses? e.g. AWS with public-address and private-address on config-get | 15:16 |
jose | freyes: the other option would be to have it as a `juju-run` and get it from there | 15:17 |
freyes | jose, yes, I'm not happy with that approach, but it could be enough to get it passing | 15:18 |
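The juju-run fallback jose mentions could look roughly like this; the unit name is illustrative:

```sh
# Query the unit's private address out-of-band instead of via the
# amulet sentry
juju run --unit memcached/0 'unit-get private-address'
```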
sparkiegeek | icey: Ursinha: stub: there /are/ subordinates in play here | 15:21 |
thedac | jamespage: wrt, xenial MPs. great. I'll shepherd any remaining ones in today. | 15:31 |
rick_h__ | juju on hackernews ftw #4 atm make sure to vote and keep an eye out in case of questions and such https://news.ycombinator.com/news | 15:42 |
marcoceppi | rick_h__: you mean #2 :) | 15:57 |
rick_h__ | marcoceppi: it's moving up and up wheeee | 15:57 |
sparkiegeek | I need to find where jcastro keeps his bug tracker | 15:59 |
sparkiegeek | jcastro: s/ubuntu-trust/ubuntu-trusty/ in the lxc launch command | 15:59 |
marcoceppi | sparkiegeek: he's at an appointment, atm but his bug tracker is overflowing - best to just ping him directly ;) | 16:05 |
marcoceppi | rick_h__: how do i make a charm I published public? | 16:21 |
marcoceppi | what's the change-perms incantation? | 16:21 |
rick_h__ | marcoceppi: charm2 change-perm cs:xxxx --add-read=everyone ? | 16:23 |
rick_h__ | atm I think | 16:23 |
marcoceppi | that's it | 16:23 |
marcoceppi | rick_h__: all I have to do now is create an account in juju with the username "everyone" ;) ;) | 16:24 |
rick_h__ | marcoceppi: pretty sure it's not allowed | 16:24 |
rick_h__ | marcoceppi: oh you mean juju...but you have to charm login so it's not the same | 16:24 |
marcoceppi | rick_h__: everyone worked, thanks! I love the instant gratification of stuff landing in the store | 16:25 |
rick_h__ | marcoceppi: instant gratification is most gratifying | 16:25 |
* rick_h__ runs for lunchables | 16:25 | |
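Spelled out, the incantation rick_h__ suggests above; the charm URL is hypothetical:

```sh
# Make a published charm readable by everyone
charm change-perm cs:~user/trusty/mycharm --add-read=everyone
```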
cory_fu | marcoceppi: Did you by chance get the juju-deployer apt package updated for the plebs like me that occasionally still use it? :) | 16:27 |
marcoceppi | cory_fu: no, not yet | 16:27 |
marcoceppi | I've had to run errands all day | 16:27 |
bdx | icey, jamespage: I'm getting a "No block devices detected using current configuration" message for one of my ceph-osd nodes .... I'm wondering if there is some insight you can give on this before I jump in .... you can see my devices all exist -> http://paste.ubuntu.com/15025182/ | 16:27 |
cory_fu | marcoceppi: No worries. Just keeping it on your radar. :) | 16:28 |
icey | bdx what's your config look like for osd-devices? | 16:28 |
cory_fu | marcoceppi: btw, I'm going to be spending time today on these issues for the charm-tools deadline: https://github.com/juju/charm-tools/issues?q=is%3Aopen+is%3Aissue+milestone%3A2.0 | 16:28 |
icey | bdx: can look after lunch, solid meetings around lunchtime so be back soon | 16:28 |
cory_fu | Let me know if you disagree with any of those being required for 2.0 | 16:29 |
bdx | icey: http://paste.ubuntu.com/15025196/ | 16:29 |
marcoceppi | cory_fu: LGTM make sure to target the road-to-2.0 branch for these | 16:35 |
cory_fu | Will do, thanks | 16:36 |
icey | bdx: any chance that it has settled and found the devices now? | 17:03 |
bdx | icey: https://bugs.launchpad.net/charms/+source/ceph-osd/+bug/1545079 | 17:07 |
mup | Bug #1545079: "No block devices detected using current configuration" <ceph-osd (Juju Charms Collection):New> <https://launchpad.net/bugs/1545079> | 17:07 |
icey | bdx: the charm won't get around to adding storage until it can confirm that the cluster is bootstrapped, which requires the osd be able to talk to the mon quorum | 17:12 |
bdx | icey: it can talk to the mons just fine .... it just didn't have dhcp assigned to its repl/cluster interface | 17:16 |
bdx | icey: now that it has an ip on the cluster network ... it deploys. | 17:17 |
icey | yeah bdx, if it can't talk on its cluster network, it won't bootstrap :-P | 17:17 |
icey | glad it works now! | 17:18 |
bdx | icey: yeah ... totally .. didn't pay attention to the node interfaces getting wiped clean at commissioning. thanks! | 17:21 |
=== redelmann is now known as rudi|brb | ||
=== rudi|brb is now known as redelmann | ||
jcastro | sparkiegeek: I fixed that, my CDN might or might not be caught up wherever you are | 18:45 |
cory_fu | lazyPower: Do you have a copy of the stacktrace for https://github.com/juju/charm-tools/issues/102 | 19:00 |
lazyPower | cory_fu not off hand but i can make one really quick, gimme a sec to fire up a container and pip install charm-tools without git installed | 19:00 |
cory_fu | Thanks | 19:01 |
=== Yrrsinn_ is now known as Yrrsonn | ||
=== Yrrsonn is now known as Yrrsinn | ||
cory_fu | lazyPower: Nevermind, I reproduced it | 19:21 |
lazyPower | cory_fu - awesome, sorry i got distracted | 19:29 |
cory_fu | lazyPower: No worries. Your explanation on how to reproduce it clued me in | 19:30 |
cory_fu | marcoceppi: The road-to-2.0 branch is behind master by several commits. Any objection to me bringing it up to date before I start creating MPs against it? | 19:35 |
marcoceppi | cory_fu: not at all | 19:36 |
cory_fu | marcoceppi: It won't ff merge. Do you prefer a rebase or non-ff merge? | 19:37 |
marcoceppi | cory_fu: rebase, tbh | 19:37 |
marcoceppi | since it's a feature branch | 19:37 |
cory_fu | Ok, that's my preference as well, but they were your commits that will be rewritten | 19:37 |
admcleod- | is there a document which lists environment variables that should be set for juju2.0? | 19:47 |
jcastro | 10 minute warning until office hours! | 19:48 |
jcastro | jose: around? | 19:54 |
marcoceppi | jcastro: you setting up the hangout? | 19:56 |
marcoceppi | cory_fu: I seem to agree that charms.layer should be split into its own library at this point | 19:56 |
jcastro | https://plus.google.com/hangouts/_/j74qty46pdo6how2xcj4j573aea | 19:57 |
jcastro | marcoceppi: ^^^ | 19:57 |
jcastro | anyone who wants to join the office hours hangout is welcome to do so, see the above link | 19:57 |
lazyPower | urgh, i want to attend but the hangout plugin keeps crashing here :( i guess i'll just listen | 20:04 |
marcoceppi | o/ | 20:04 |
lazyPower | man! Even hacking around on maas 1.9, nicely done Gilbert! | 20:09 |
MemeTeam6 | who /tpg/ here | 20:14 |
marcoceppi | https://lists.ubuntu.com/archives/juju/2016-February/006447.html | 20:14 |
arosales | https://lists.ubuntu.com/archives/juju/2016-February/006447.html | 20:15 |
lazyPower | MemeTeam6 - thinkpad user? | 20:16 |
marcoceppi | ppa:juju/devel | 20:16 |
lazyPower | wooo multi-model-stateserver! | 20:21 |
* lazyPower throws confetti | 20:21 | |
kwmonroe | woohooo juju deploy bundle! | 20:21 |
lazyPower | argh i said it again... i mean multi-model-controller | 20:21 |
lazyPower | All these awesome features in succession; I wonder if the watchers really get how much progress we just showed in under 60 seconds | 20:22 |
arosales | https://lists.ubuntu.com/archives/juju/2016-February/006498.html -- juju release notes | 20:29 |
lazyPower | marcoceppi - charms.ansible landed during the summit, which is a supporting extension of michael nelson's work | 20:39 |
lazyPower | https://github.com/chuckbutler/charms.ansible - readme and proper documentation forthcoming in ~ a week | 20:40 |
lazyPower | it'll move to juju-solutions after its documented | 20:41 |
lazyPower | and put under CI | 20:41 |
* lazyPower fanfares @ NS950 and their doc contributions | 20:42 | |
arosales | "ERROR the name of the model must be specified" | 20:49 |
arosales | on juju bootstrap | 20:49 |
rick_h__ | arosales: have to give the controller a name on bootstrap | 20:49 |
rick_h__ | arosales: because now you can bootstrap several times, each with their own name e.g. staging and production | 20:50 |
marcoceppi | arosales: juju bootstrap -m "environment-name" | 20:50 |
rick_h__ | arosales: the error should be 'controller name' vs 'model name' | 20:50 |
rick_h__ | which is interesting, will have to talk to wallyworld about that one. | 20:50 |
arosales | -m was the key there | 20:50 |
rick_h__ | oh hmm, shouldn't be behind a flag according to the spec | 20:50 |
arosales | note the juju docs for "juju help bootstrap" do not state that -m is mandatory | 20:51 |
* rick_h__ goes and double checks | 20:51 | |
arosales | so "juju bootstrap -m aws-east1" is what worked for me | 20:51 |
arosales | given my environments.yaml file is in ~/.local/share/juju and has an aws-east1 stanza | 20:52 |
rick_h__ | arosales: ok yea, that'll turn into "juju bootstrap $controllername $credentialname" | 20:52 |
rick_h__ | arosales: but looks like it's not there yet | 20:52 |
arosales | rick_h__: ack, juju help bootstrap just told me "usage: juju bootstrap [options]" which I am sure is just a case of the alpha help commands not being updated | 20:54 |
rick_h__ | arosales: yea | 20:54 |
* arosales finally bootstrapping though after moving my environments.yaml file and appending -m <env-name> on bootstrap | 20:55 | |
arosales | thanks rick_h__ and marcoceppi | 20:55 |
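Putting the exchange together, a sketch of what worked for arosales in this alpha; the old ~/.juju source path is an assumption:

```sh
# environments.yaml moved from the old 1.x location (assumed ~/.juju)
# to the new juju 2.0 data directory
mv ~/.juju/environments.yaml ~/.local/share/juju/

# -m names the environments.yaml stanza to bootstrap
juju bootstrap -m aws-east1
```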
lazyPower | arosales - we both hit that today :D rick_h__ showed me up by sneaking out -alpha2 with a full announcement while i wasn't watching | 20:56 |
lazyPower | awesome office hours gents! | 20:56 |
arosales | +1 | 20:56 |
* arosales learned a lot | 20:56 | |
lazyPower | arosales - know what my favorite part was? | 20:56 |
arosales | lazyPower: mbruzek orange shirt? | 20:57 |
lazyPower | arosales - thats a close second - jujusolutions/charmbox:devel already had *everything* i needed :P simply update an alias and we're ready to goooooooo | 20:57 |
rick_h__ | lazyPower: is the recording up? | 20:57 |
lazyPower | rick_h__ if you hit ubuntuonair.com its there and ready for you | 20:57 |
* lazyPower is currently watching a resources demo | 20:57 | |
* lazyPower is pretty excited about this! | 20:58 | |
arosales | lazyPower: nice | 20:58 |
* rick_h__ loads it up | 20:58 | |
arosales | man jorge I should have mentioned how folks can try xenial in AWS | 20:58 |
* arosales to send that to the list | 20:58 | |
arosales | xenial with juju that is. | 20:59 |
lazyPower | arosales oh yeah!! do that! | 20:59 |
arosales | will do | 20:59 |
lazyPower | i'll send you a pizza :D | 20:59 |
jcastro | omg | 21:04 |
jcastro | on the flipside of the create-model being fast | 21:05 |
jcastro | destroy-model is instant | 21:05 |
arosales | mmmm pizzza :-) lazyPower | 21:05 |
arosales | all meat please | 21:05 |
jrwren | do pizza bribes work well here, I'll start using them often if they do. ;] | 21:06 |
marcoceppi | rick_h__: does the gui team know of the alpha2 login issues? | 21:06 |
cloudguru_ | Need to know who is working on a layer-docker charm for openVswitch as an SDN layer for MWC | 21:07 |
jrwren | marcoceppi: yes, we know. we cry about it every night. I'm filling a bucket with tears as i write this. | 21:07 |
marcoceppi | cloudguru_: you should ping lazyPower | 21:08 |
cloudguru_ | thx. already done. | 21:08 |
lazyPower | cloudguru_ o/ heyo | 21:08 |
lazyPower | i'm still here | 21:08 |
rick_h__ | jrwren: this is that the gui isn't 2.0 api ready? Or something else? | 21:09 |
jrwren | rick_h__: it is that exactly. Better not be something else. | 21:10 |
rick_h__ | jrwren: k | 21:10 |
lazyPower | cloudguru_ i'm really out/offline today, but i stuck around to do office hours. anything specific I can answer? is this coming up to crunch time and causing an issue? | 21:10 |
rick_h__ | jrwren: yea, just making sure it's not a different bug/etc. I'd not heard of anything there. | 21:10 |
cloudguru_ | @lazyPower .. all good. We can run the scripts but a docker charm for OVS is preferred | 21:12 |
lazyPower | cloudguru_ that initial stab at the layer for OVS was basically an encapsulation of that script | 21:12 |
lazyPower | should be able to juju deploy the built charm --to 0 and bypass the entirety of running the script | 21:12 |
cloudguru_ | re: lxc lBR clusters for openstack .. you guys are right. I'm pretty sure this is how the nebula appliance worked under the covers on a single (huge) appliance. | 21:13 |
lazyPower | as it's delivered on first run | 21:13 |
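lazyPower's suggestion as a sketch; the layer and charm names are hypothetical:

```sh
# Build the OVS layer and deploy the result straight onto machine 0
export JUJU_REPOSITORY=$HOME/charms
charm build ~/src/layers/openvswitch
juju deploy local:trusty/openvswitch --to 0
```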
cloudguru_ | Nice !!! | 21:14 |
lazyPower | i need to step out, i have somewhere to be in 45 minutes across pittsburgh, but i left you my # in an im cloudguru_ - feel free to call | 21:14 |
lazyPower | sorry i'm pressed for time :\ | 21:14 |
cloudguru_ | LXC OpenStack on full killer .. pretty much OpenStack Magnum | 21:14 |
rick_h__ | lol at marcoceppi wanting to make sure "no one wants to look at my face" | 21:15 |
jose | jcastro: just got your ping, was stuck at work with no internet | 21:34 |
jcastro | no worries, I was just editing ubuntuonair | 21:38 |
cory_fu | marcoceppi: Did you see I updated https://github.com/juju/charm-tools/pull/108 ? | 21:58 |
jcastro | ok with a new machine I need to test it, any hardcore bundles that can exercise my system? | 22:00 |
rick_h__ | jcastro: I'd think kwmonroe's would be best | 22:00 |
marcoceppi | cory_fu: thanks! | 22:00 |
marcoceppi | jcastro: juju deploy -n 1000 cs:~jorge/trusty/boinc | 22:01 |
jcastro | rick_h__: that didn't really break a sweat | 22:01 |
rick_h__ | jcastro: add moar units? | 22:01 |
jcastro | I suppose I could actually run something in hadoop | 22:01 |
rick_h__ | jcastro: was that the realtime-syslog-analytics? | 22:01 |
jcastro | yeah | 22:02 |
marcoceppi | hey cory_fu | 22:26 |
cory_fu | Yes? | 22:27 |
marcoceppi | https://github.com/juju/charm-tools/issues/115 should we just create a `$JUJU_REPOSITORY/build/<charm_name>` instead? or just put <charm_name> in JUJU_REPOSITORY? | 22:27 |
rick_h__ | marcoceppi: cory_fu just a heads up on the series in metadata. The UI team was working on updating the charmstore to support that and the jujucharms.com website | 22:28 |
rick_h__ | marcoceppi: cory_fu and I'm not 100% sure where that's left off (e.g. might be comitted but not yet deployed) | 22:28 |
marcoceppi | rick_h__: right, but it will make it to 2.0 ? | 22:28 |
rick_h__ | marcoceppi: cory_fu definitely | 22:28 |
marcoceppi | this is all work planned for charm-tools 2.0 | 22:28 |
marcoceppi | not the 1.11.2 release | 22:28 |
rick_h__ | marcoceppi: cory_fu ah ok np then | 22:28 |
cory_fu | marcoceppi: Crap. I would have liked to get some of these changes that I landed today into 1.11.2. I assume we can backport them? | 22:29 |
marcoceppi | cory_fu: we can totally backport | 22:29 |
marcoceppi | cory_fu: 2.0 will be released with the new charm command from uros' team | 22:29 |
marcoceppi | cory_fu: which I need to get into xenial like tomorrow | 22:29 |
marcoceppi | cory_fu: so I will upload charm/charm-tools 2.0 to the juju/devel ppa | 22:30 |
marcoceppi | but we can still do 1.X releases | 22:30 |
cory_fu | marcoceppi: Cool. And as for the directory layout, I'm not sure. I guess we should come up with a different recommendation than suggesting $LAYER_PATH etc be subdirectories of $JUJU_REPOSITORY | 22:30 |
marcoceppi | and we can worry about backports later | 22:30 |
marcoceppi | cory_fu: I think it's a good idea still, tbh | 22:30 |
cory_fu | marcoceppi: What's still a good idea? | 22:31 |
marcoceppi | cory_fu: it's kind of like the GOPATH stuff | 22:31 |
marcoceppi | cory_fu: $JUJU_REPOSITORY being an umbrella for stuff | 22:31 |
marcoceppi | cory_fu: though, it doesn't have to live in JUJU_REPOSITORY, since LAYER_PATH and INTERFACE_PATH are individual settings | 22:31 |
cory_fu | Ok, but how is the new juju going to handle "juju deploy foo" where foo is checked out locally? | 22:31 |
marcoceppi | cory_fu: gooooood point | 22:32 |
marcoceppi | cory_fu: so we could just put the charm_name in the $JUJU_REPOSITORY | 22:32 |
marcoceppi | $JUJU_REPOSITORY/charm | 22:33 |
cory_fu | There's nothing really stopping us from keeping the same pattern and having a "layers" directory under $JR. It just means no one can have a charm named "layers" (or "interfaces") | 22:33 |
marcoceppi | right | 22:33 |
marcoceppi | which seems silly | 22:33 |
marcoceppi | cory_fu: I could also see someone doing $JR/src/layers,interfaces | 22:33 |
cory_fu | I like something like $JR/{layers,interfaces,charms} | 22:33 |
cory_fu | I'd be ok with that, too | 22:33 |
marcoceppi | cory_fu: we should figure out how juju will handle $JR now in 2.0 though | 22:34 |
* marcoceppi is off to #juju-dev unless rick_h__has feedback | 22:34 | |
marcoceppi | cory_fu: as a work around we could do $JR="$HOME/stuff/charms"; $LAYER_PATH=$JR/../layers; etc | 22:35 |
cory_fu | I guess. Though that breaks my handy dev env switching aliases that change $JR and allow me to have, e.g., a pristine $JR for RQ | 22:36 |
cory_fu | Including layers, interfaces, etc | 22:36 |
cory_fu | But I can work around it | 22:36 |
marcoceppi | cory_fu: well, would you rather not do that then | 22:37 |
cory_fu | Also, this only really applies to the default values for LAYER_PATH etc. If they're set manually, they can be whatever the user wants | 22:37 |
marcoceppi | true | 22:39 |
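The manual arrangement being discussed, as a sketch; the paths are illustrative:

```sh
# LAYER_PATH and INTERFACE_PATH are independent settings, so they don't
# have to live under JUJU_REPOSITORY
export JUJU_REPOSITORY=$HOME/charms               # built charms: $JUJU_REPOSITORY/trusty/<name>
export LAYER_PATH=$HOME/charm-src/layers          # layer sources
export INTERFACE_PATH=$HOME/charm-src/interfaces  # interface layer sources
charm build $LAYER_PATH/mycharm
```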
apuimedo | jcastro: nice article about the juju lxd that jamespage told me he was using | 22:46 |
apuimedo | I'm getting some issue when doing the bootstrap | 22:46 |
apuimedo | ERROR there was an issue examining the model: invalid config: Can not change ZFS config. Images or containers are still using the ZFS pool | 22:46 |
cory_fu | marcoceppi: Updated https://github.com/juju/charm-tools/pull/113 | 22:49 |
apuimedo | any idea about that error? | 22:52 |
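The error text points at containers or images still using the ZFS pool; a possible cleanup check with stock lxd commands, container name hypothetical:

```sh
# See what is still pinning the ZFS pool
lxc list
lxc image list

# Remove leftovers before retrying the bootstrap
lxc delete some-old-container
lxc image delete <fingerprint>
```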
koaps | hi guys, I'm having an issue with juju ha, wondering if anyone could help me troubleshoot | 23:03 |
apuimedo | koaps: juju ha? hacluster? | 23:04 |
koaps | apuimedo: juju ha | 23:09 |
koaps | my two addition servers just stay in adding-vote | 23:09 |
apuimedo | koaps: servers of which charm? | 23:09 |
koaps | juju ensure-availability --to 1,2 | 23:10 |
koaps | servers 1 and 2 never get a vote | 23:10 |
koaps | this has worked | 23:10 |
koaps | but we are rebuilding the environment, now it's not | 23:10 |
aisrael | marcoceppi: I got it working \m/ | 23:10 |
koaps | seems like some software package changed and mongodb isn't working right | 23:11 |
apuimedo | sorry, I've only got experience developing charms with ha support, haven't tried the "ensure-availability" command | 23:12 |
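The HA commands from the exchange, plus a rough way to watch the vote state; the grep is only a coarse filter over the status output:

```sh
# Promote machines 1 and 2 to state servers, then check their vote status;
# members report has-vote / adding-vote in the machine status output
juju ensure-availability --to 1,2
juju status --format=yaml | grep -i vote
```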