[00:23] Hi, I am using juju-deployer to deploy workload in lxc containers. I have containers all created and started in RUNNING state, but some of them got IP addresses and some didn't. For those that don't have IP addresses, there is no corresponding /var/log/juju/machine-#-lxc-#.log.
[00:24] How do I find out why those containers didn't get an IP?
[00:36] http://pastebin.ubuntu.com/15020637/
[00:40] There are enough IP addresses in the DHCP pool.
[00:47] jamespage: the three charms have been updated
[00:54] jamespage: I put https://bugs.launchpad.net/charms/+bug/1453678 back to new so that you see the new message I wrote
[00:54] Bug #1453678: New charms: midonet-host-agent, midonet-agnet, midonet-api
[00:54] I couldn't assign it to you somehow
[00:54] I'll not be reachable tomorrow, public holiday
=== natefinch is now known as natefinch-afk
=== med_ is now known as Guest76507
[08:41] dosaboy, your MPs need to be synced from lp:charm-helpers for the source configuration stuff (just spotted that)
[09:18] jamespage: they are already synced
[09:18] jamespage: in fact i just re-synced and there was still no diff
[09:18] dosaboy:
[09:18] -branch: lp:charm-helpers
[09:18] 6 +branch: lp:~hopem/charm-helpers/lp1518975
[09:19] jamespage: hmm that must have crept through, lemme check
[09:20] jamespage: ah it's just the cinder MP, i'll fix that one
[09:20] dosaboy, hah - that was the first one I looked at :-)
[09:21] dosaboy, if they are passing please merge away - I have a few xenial fixes to follow up with once you have that landed
[09:21] jamespage: sure, actually there are a couple of amulet failures, heat and nova-compute
[09:22] jamespage: heat is a test that was previously broken but i'm gonna see if i can fix it
[09:22] nova-compute not sure yet
[09:22] dosaboy, I can re-run if need be
[09:22] jamespage: k i'll ping when ready
[09:34] dosaboy, jamespage, got any time for https://code.launchpad.net/~gnuoy/charm-helpers/keystone-v3-support/+merge/285689 ?
[09:42] jamespage: gonna merge all but heat until it passes since the rest are +1 now
[09:42] gnuoy: maybe soon...
[09:43] ta
[10:14] gnuoy, maybe in a bit - on a half day today and wading through midonet reviews atm
[10:14] ok np
[11:25] Well here's an odd one
[11:26] it seems every time I've invoked charm build since the last time I rm -rf'd the built charm, I've ended up with an embedded build inside the last one
[11:26] so I have: trusty/errbot/trusty/errbot/trusty/errbot/trusty/errbot/trusty/errbot/
[11:26] 5 levels deep
[11:26] bet if I call charm build again I end up with a 6th
[11:27] That can't be expected behaviour, right?
[11:30] I'm guessing it's not blacklisting the trusty and deps dirs when building in "." (as opposed to a JUJU_REPOSITORY)
[11:35] https://github.com/juju/charm-tools/issues/106
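A minimal sketch of the workaround implied by the nested-build discussion above: keep the layer source and the build output in separate trees so charm build never re-ingests its own previous output. The directory names (errbot source under ~/layers, output under ~/charms) are illustrative assumptions, not paths taken from the log.

    # Build outside "." so the previously built trusty/ and deps/ dirs
    # aren't swallowed back into the charm's own source.
    export JUJU_REPOSITORY="$HOME/charms"   # built charms land here
    mkdir -p "$JUJU_REPOSITORY"

    cd ~/layers/errbot                      # layer source, outside $JUJU_REPOSITORY
    rm -rf trusty deps                      # clear any build output nested in from earlier runs
    charm build                             # output should now land under $JUJU_REPOSITORY/trusty/errbot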
[12:10] thedac, thanks for the reviews - I've landed swift-storage, nova-* are updated and I've added neutron-gateway which I missed before
[12:10] I've disabled mitaka tests for now; we can re-enable once all of those land
[13:33] Ursinha: do you by chance have a bundle of what you deployed yesterday that ran into #1517940 ?
[13:33] Bug #1517940: workload-status is wrong
[13:40] icey: let me check
[13:42] Ursinha: I just had it hang for a while at blocked: mon relation missing, and then start executing
[13:45] icey: so... we don't do bundles
[13:45] thought that may be the case, no worries
[13:45] do you have any special config on the mon or radosgw side?
[13:45] (feel free to pastebin them and PM me)
[13:45] Ursinha:
[13:55] icey: done :)
[13:56] thanks Ursinha, will keep digging
[13:56] icey: thanks for looking into that
[13:58] icey: out of curiosity, how long did you wait until the relations settled?
[13:59] 5 minutes? on AWS
[13:59] will try to test on OpenStack in a bit
[13:59] ah, right,
[14:01] icey: the unit should transition out of blocked as *soon* as it sees the relation is established - it should go into maintenance state when it's busy doing things but no longer requires action from the user
[14:02] sparkiegeek: agreed, and that's what I saw happen
[14:02] 5m is way too long for the charm to spend /noticing/ that it has the relation (even if it then has a bunch of stuff to do before all the work for that relation is finished)
[14:02] but it did take a few minutes between adding the relation and the state changing / hook running
[14:03] sparkiegeek: I'm curious about how long juju took to trigger the hook execution
[14:03] I've seen hook execution take a long time to start when juju is fairly heavily loaded before
[14:04] and not just with the radosgw charm sparkiegeek
[14:04] either way, need to do more digging though
[14:04] * sparkiegeek nods
[14:04] FWIW we've only seen this with the ceph-mon charm, none of the others
[14:04] hence the belief it's a charm bug :)
[14:05] I've seen it with other charm relations, just not for /that/ long
[14:07] sorry, I mean with the radosgw charm's relation to ceph-mon
[14:23] http://www.jorgecastro.org/2016/02/12/super-fast-local-workloads-with-juju/
[14:23] jamespage: ^^^
[14:23] I set it all up yesterday
[14:24] jcastro: <3
[14:24] jcastro, nice...
[14:25] jcastro: safe to share out or still in editing?
[14:28] share like the wind
=== cherylj_ is now known as cherylj
[14:47] icey, sparkiegeek: Was the hook you expected to be triggered actually triggered, or was it the update-status hook? If something was messed up, such as a hook missing executable permissions, the update-status hook kicks in every five minutes or so and can hide the problem.
[14:49] stub: the hook is a relation-joined hook
[14:49] and the hook usually runs fine
[14:50] occasionally, it seems like the hook doesn't run (or just takes forever to run)
[14:50] well, it's a relation-changed and relation-joined so it should have been run
[14:50] Hooks running on subordinates block other hooks running on the same unit, which might apply here
[14:51] both sides of the relation are primary charms
[14:51] no subordinates deployed
[14:51] I think juju run might block hooks too
[14:51] no juju run
[14:51] juju deploy x3
[14:51] juju add-relation(s)
[14:51] one of the relations never seems to get related
[14:52] Ursinha: correct me if I'm wrong
[14:53] icey: it's like the relation exists but isn't relating :)
[14:53] I have to remove it and re-add it
[14:54] jcastro: already top 10 on HN
[15:13] freyes: ping
[15:13] jose, pong
[15:13] freyes: hey! I have a quick question on a merge you proposed
[15:13] have a couple mins to figure this out?
[15:13] jose, sure, which one?
[15:14] freyes: https://code.launchpad.net/~freyes/charms/trusty/memcached/lp1525026/+merge/281254
[15:14] there's a test in there, test 20. it checks for the 'public-address' on the instance and makes sure it's the same as in the memcached.conf file, however, could it work with the private address instead of the public one as well?
[15:16] jose, right, that is failing for AWS, because the replication is configured over the private address, and I changed the test to use private-address; the problem with that is that the sentry doesn't have 'private-address'
[15:16] I thought it did...
[15:16] I have to dig into it yet, not sure why that happens
[15:16] yup, I thought the same
[15:16] marcoceppi: do sentries in amulet have private addresses? e.g. AWS with public-address and private-address on config-get
[15:17] freyes: the other option would be to have it as a `juju-run` and get it from there
[15:18] jose, yes, not happy with that approach, but could be enough to get it passing
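A minimal sketch of the juju-run approach jose mentions above, using juju 1.x syntax; the unit name and the comparison step are assumptions, not part of the memcached test itself. The unit-get hook tool returns the unit's private address, which the test could compare against memcached.conf instead of relying on the sentry exposing 'private-address'.

    # Fetch the unit's private address via juju run + the unit-get hook tool
    # (juju 1.x syntax; the unit name memcached/0 is assumed).
    PRIVATE_ADDR=$(juju run --unit memcached/0 'unit-get private-address')

    # Then check that the address the charm wrote into memcached.conf matches.
    juju run --unit memcached/0 "grep -c $PRIVATE_ADDR /etc/memcached.conf"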
[15:21] icey: Ursinha: stub: there /are/ subordinates in play here
[15:31] jamespage: wrt xenial MPs - great. I'll shepherd any remaining ones in today.
[15:42] juju on hackernews ftw, #4 atm - make sure to vote and keep an eye out in case of questions and such https://news.ycombinator.com/news
[15:57] rick_h__: you mean #2 :)
[15:57] marcoceppi: it's moving up and up wheeee
[15:59] I need to find where jcastro keeps his bug tracker
[15:59] jcastro: s/ubuntu-trust/ubuntu-trusty/ in the lxc launch command
[16:05] sparkiegeek: he's at an appointment atm, but his bug tracker is overflowing - best to just ping him directly ;)
[16:21] rick_h__: how do i make a charm I published public?
[16:21] what's the change-perms incantation?
[16:23] marcoceppi: charm2 change-perm cs:xxxx --add-read=everyone ?
[16:23] atm I think
[16:23] that's it
[16:24] rick_h__: all I have to do now is create an account in juju with the username "everyone" ;) ;)
[16:24] marcoceppi: pretty sure it's not allowed
[16:24] marcoceppi: oh you mean juju...but you have to charm login so it's not the same
[16:25] rick_h__: everyone worked, thanks! I love the instant gratification of stuff landing in the store
[16:25] marcoceppi: instant gratification is most gratifying
[16:25] * rick_h__ runs for lunchables
[16:27] marcoceppi: Did you by chance get the juju-deployer apt package updated for the plebs like me that occasionally still use it? :)
[16:27] cory_fu: no, not yet
[16:27] I've had to run errands all day
[16:27] icey, jamespage: I'm getting a "No block devices detected using current configuration" message for one of my ceph-osd nodes .... I'm wondering if there is some insight you can give on this before I jump in .... you can see my devices all exist -> http://paste.ubuntu.com/15025182/
[16:28] marcoceppi: No worries. Just keeping it on your radar. :)
[16:28] bdx: what does your config look like for osd-devices?
[16:28] marcoceppi: btw, I'm going to be spending time today on these issues for the charm-tools deadline: https://github.com/juju/charm-tools/issues?q=is%3Aopen+is%3Aissue+milestone%3A2.0
[16:28] bdx: can look after lunch, solid meetings around lunchtime so be back soon
[16:29] Let me know if you disagree with any of those being required for 2.0
[16:29] icey: http://paste.ubuntu.com/15025196/
[16:35] cory_fu: LGTM, make sure to target the road-to-2.0 branch for these
[16:36] Will do, thanks
[17:03] bdx: any chance that it has settled and found the devices now?
[17:07] icey: https://bugs.launchpad.net/charms/+source/ceph-osd/+bug/1545079
[17:07] Bug #1545079: "No block devices detected using current configuration"
[17:12] bdx: the charm won't get around to adding storage until it can confirm that the cluster is bootstrapped, which requires that the osd be able to talk to the mon quorum
[17:16] icey: it can talk to the mons just fine .... it just didn't have dhcp assigned to its repl/cluster interface
[17:17] icey: now that it has an ip on the cluster network ... it deploys.
[17:17] yeah bdx, if it can't talk on its cluster network, it won't bootstrap :-P
[17:18] glad it works now!
[17:21] icey: yeah ... totally .. didn't pay attention to the node interfaces getting wiped clean at commissioning. thanks!
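For reference, a sketch of the kind of ceph-osd configuration and checks involved in the exchange above; the device names are placeholders, not values from bdx's pastebin, and the commands use juju 1.x syntax. As icey notes, the charm only prepares the listed devices once the OSD can reach the mon quorum over the cluster network, which is why the missing DHCP lease on that interface held everything up.

    # Placeholder osd-devices value (juju 1.x config syntax).
    juju set ceph-osd osd-devices="/dev/sdb /dev/sdc /dev/sdd"

    # The OSDs only bootstrap once they can reach the mons over the cluster
    # network, so confirm the replication interface actually has an address:
    juju run --unit ceph-osd/0 'ip -4 addr show'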
=== redelmann is now known as rudi|brb
=== rudi|brb is now known as redelmann
[18:45] sparkiegeek: I fixed that, my CDN might or might not be caught up wherever you are
[19:00] lazyPower: Do you have a copy of the stacktrace for https://github.com/juju/charm-tools/issues/102
[19:00] cory_fu not off hand but i can make one really quick, gimme a sec to fire up a container and pip install charm-tools without git installed
[19:01] Thanks
=== Yrrsinn_ is now known as Yrrsonn
=== Yrrsonn is now known as Yrrsinn
[19:21] lazyPower: Nevermind, I reproduced it
[19:29] cory_fu - awesome, sorry i got distracted
[19:30] lazyPower: No worries. Your explanation on how to reproduce it clued me in
[19:35] marcoceppi: The road-to-2.0 branch is behind master by several commits. Any objection to me bringing it up to date before I start creating MPs against it?
[19:36] cory_fu: not at all
[19:37] marcoceppi: It won't ff merge. Do you prefer a rebase or non-ff merge?
[19:37] cory_fu: rebase, tbh
[19:37] since it's a feature branch
[19:37] Ok, that's my preference as well, but they were your commits that will be rewritten
[19:47] is there a document which lists environment variables that should be set for juju 2.0?
[19:48] 10 minute warning until office hours!
[19:54] jose: around?
[19:56] jcastro: you setting up the hangout?
[19:56] cory_fu: I seem to agree that charms.layer should be split into its own library at this point
[19:57] https://plus.google.com/hangouts/_/j74qty46pdo6how2xcj4j573aea
[19:57] marcoceppi: ^^^
[19:57] anyone who wants to join the office hours hangout is welcome to do so, see the above link
[20:04] urgh, i want to attend but the hangout plugin keeps crashing here :( i guess i'll just listen
[20:04] o/
[20:09] man! Even hacking around on maas 1.9, nicely done Gilbert!
[20:14] who /tpg/ here
[20:14] https://lists.ubuntu.com/archives/juju/2016-February/006447.html
[20:15] https://lists.ubuntu.com/archives/juju/2016-February/006447.html
[20:16] MemeTeam6 - thinkpad user?
[20:16] ppa:juju/devel
[20:21] wooo multi-model-stateserver!
[20:21] * lazyPower throws confetti
[20:21] woohooo juju deploy bundle!
[20:21] argh i said it again... i mean multi-model-controller
[20:22] All these awesome features in succession - i wonder if the watchers really get how much progress we just showed in under 60 seconds
[20:29] https://lists.ubuntu.com/archives/juju/2016-February/006498.html -- juju release notes
[20:39] marcoceppi - charms.ansible landed during the summit, which is a supporting extension of michael nelson's work
[20:40] https://github.com/chuckbutler/charms.ansible - readme and proper documentation forthcoming in ~ a week
[20:41] it'll move to juju-solutions after it's documented
[20:41] and put under CI
[20:42] * lazyPower fanfares @ NS950 and their doc contributions
[20:49] "ERROR the name of the model must be specified"
[20:49] on juju bootstrap
[20:49] arosales: have to give the controller a name on bootstrap
[20:50] arosales: because now you can bootstrap several times, each with their own name e.g. staging and production
[20:50] arosales: juju bootstrap -m "environment-name"
[20:50] arosales: the error should be 'controller name' vs 'model name'
[20:50] which is interesting, will have to talk to wallyworld about that one.
[20:50] -m was the key there
[20:50] oh hmm, shouldn't be behind a flag according to the spec
[20:51] note the juju docs for "juju help bootstrap" do not state that -m is mandatory
[20:51] * rick_h__ goes and double checks
[20:51] so "juju bootstrap -m aws-east1" is what worked for me
[20:52] given my environments.yaml file in ~/.local/share/juju has an aws-east1 stanza
[20:52] arosales: ok yea, that'll turn into "juju bootstrap $controllername $credentialname"
[20:52] arosales: but looks like it's not there yet
[20:54] rick_h__: ack, juju help bootstrap just told me "usage: juju bootstrap [options]" which I am sure is just a case of the alpha help commands not being updated
[20:54] arosales: yea
[20:55] * arosales finally bootstrapping though after moving my environments.yaml file and appending -m on bootstrap
[20:55] thanks rick_h__ and marcoceppi
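A sketch of what ended up working for arosales in the exchange above, against the juju 2.0 alpha2 release; the old config path under ~/.juju is my assumption, and the syntax was still in flux at this point, so treat the flags as provisional rather than the final CLI.

    # juju 2.0 alpha2: config now lives under ~/.local/share/juju and
    # bootstrap currently wants the name via -m.
    mkdir -p ~/.local/share/juju
    mv ~/.juju/environments.yaml ~/.local/share/juju/environments.yaml   # old path assumed
    juju bootstrap -m aws-east1

    # Per the discussion, the plan is for this to become positional, roughly:
    #   juju bootstrap <controller-name> <credential-name>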
[20:56] arosales - we both hit that today :D rick_h__ showed me up by sneaking out -alpha2 with a full announcement while i wasn't watching
[20:56] awesome office hours gents!
[20:56] +1
[20:56] * arosales learned a lot
[20:56] arosales - know what my favorite part was?
[20:57] lazyPower: mbruzek orange shirt?
[20:57] arosales - that's a close second - jujusolutions/charmbox:devel already had *everything* i needed :P simply update an alias and we're ready to goooooooo
[20:57] lazyPower: is the recording up?
[20:57] rick_h__ if you hit ubuntuonair.com it's there and ready for you
[20:57] * lazyPower is currently watching a resources demo
[20:58] * lazyPower is pretty excited about this!
[20:58] lazyPower: nice
[20:58] * rick_h__ loads it up
[20:58] man jorge I should have mentioned how folks can try xenial in AWS
[20:58] * arosales to send that to the list
[20:59] xenial with juju that is.
[20:59] arosales oh yeah!! do that!
[20:59] will do
[20:59] i'll send you a pizza :D
[21:04] omg
[21:05] on the flipside of the create-model being fast
[21:05] destroy-model is instant
[21:05] mmmm pizzza :-) lazyPower
[21:05] all meat please
[21:06] do pizza bribes work well here? I'll start using them often if they do. ;]
[21:06] rick_h__: does the gui team know of the alpha2 login issues?
[21:07] Need to know who is working on a layer-docker charm for openVswitch as an SDN layer for MWC
[21:07] marcoceppi: yes, we know. we cry about it every night. I'm filling a bucket with tears as i write this.
[21:08] cloudguru_: you should ping lazyPower
[21:08] thx. already done.
[21:08] cloudguru_ o/ heyo
[21:08] i'm still here
[21:09] jrwren: this is that the gui isn't 2.0 api ready? Or something else?
[21:10] rick_h__: it is that exactly. Better not be something else.
[21:10] jrwren: k
[21:10] cloudguru_ i'm really out/offline today, but i stuck around to do office hours. anything specific I can answer? is this coming up to crunch time and causing an issue?
[21:12] jrwren: yea, just making sure it's not a different bug/etc. I'd not heard of anything there.
[21:12] @lazyPower .. all good. We can use the scripts but a docker charm for OVS is preferred
[21:12] cloudguru_ that initial stab at the layer for OVS was basically an encapsulation of that script
[21:12] should be able to juju deploy the built charm --to 0 and bypass the entirety of running the script
[21:13] re: lxc lBR clusters for openstack .. you guys are right. I'm pretty sure this is how the nebula appliance worked under the covers on a single (huge) appliance.
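A sketch of the deployment lazyPower describes above: dropping the locally built OVS layer charm straight onto machine 0 instead of running the setup script by hand. The charm name and repository path are hypothetical, and the command uses the juju 1.x local-repository syntax.

    # Deploy a locally built charm onto machine 0 (paths and name are hypothetical).
    export JUJU_REPOSITORY="$HOME/charms"
    juju deploy local:trusty/openvswitch --to 0   # bypasses the manual setup script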
[21:13] as it's delivered on first run
[21:14] Nice !!!
[21:14] i need to step out, i have somewhere to be in 45 minutes across pittsburgh, but i left you my # in an im cloudguru_ - feel free to call
[21:14] sorry i'm pressed for time :\
[21:14] LXC OpenStack on full killer .. pretty much OpenStack Magnum
[21:15] lol at marcoceppi wanting to make sure "no one wants to look at my face"
[21:34] jcastro: just got your ping, was stuck at work with no internet
[21:38] no worries, I was just editing ubuntuonair
[21:58] marcoceppi: Did you see I updated https://github.com/juju/charm-tools/pull/108 ?
[22:00] ok with a new machine I need to test it, any hardcore bundles that can exercise my system?
[22:00] jcastro: I'd think kwmonroe's would be best
[22:00] cory_fu: thanks!
[22:01] jcastro: juju deploy -n 1000 cs:~jorge/trusty/boinc
[22:01] rick_h__: that didn't really break a sweat
[22:01] jcastro: add moar units?
[22:01] I suppose I could actually run something in hadoop
[22:01] jcastro: was that the realtime-syslog-analytics?
[22:02] yeah
[22:26] hey cory_fu
[22:27] Yes?
[22:27] https://github.com/juju/charm-tools/issues/115 should we just create a `$JUJU_REPOSITORY/build/` instead? or just put it in JUJU_REPOSITORY?
[22:28] marcoceppi: cory_fu just a heads up on the series in metadata. The UI team was working on updating the charmstore to support that and the jujucharms.com website
[22:28] marcoceppi: cory_fu and I'm not 100% sure where that's left off (e.g. might be committed but not yet deployed)
[22:28] rick_h__: right, but it will make it to 2.0 ?
[22:28] marcoceppi: cory_fu definitely
[22:28] this is all work planned for charm-tools 2.0
[22:28] not the 1.11.2 release
[22:28] marcoceppi: cory_fu ah ok np then
[22:29] marcoceppi: Crap. I would have liked to get some of these changes that I landed today into 1.11.2. I assume we can backport them?
[22:29] cory_fu: we can totally backport
[22:29] cory_fu: 2.0 will be released with the new charm command from uros team
[22:29] cory_fu: which I need to get into xenial like tomorrow
[22:30] cory_fu: so I will upload charm/charm-tools 2.0 to the juju/devel ppa
[22:30] but we can still do 1.X releases
[22:30] marcoceppi: Cool. And as for the directory layout, I'm not sure. I guess we should come up with a different recommendation than suggesting $LAYER_PATH etc be subdirectories of $JUJU_REPOSITORY
[22:30] and we can worry about backports later
[22:30] cory_fu: I think it's a good idea still, tbh
[22:31] marcoceppi: What's still a good idea?
[22:31] cory_fu: it's kind of like the GOPATH stuff
[22:31] cory_fu: $JUJU_REPOSITORY being an umbrella for stuff
[22:31] cory_fu: though, it doesn't have to live in JUJU_REPOSITORY, since LAYER_PATH and INTERFACE_PATH are individual settings
[22:31] Ok, but how is the new juju going to handle "juju deploy foo" where foo is checked out locally?
[22:32] cory_fu: gooooood point
[22:32] cory_fu: so we could just put the charm_name in the $JUJU_REPOSITORY
[22:33] $JUJU_REPOSITORY/charm
[22:33] There's nothing really stopping us from keeping the same pattern and having a "layers" directory under $JR. It just means no one can have a charm named "layers" (or "interfaces")
[22:33] right
[22:33] which seems silly
[22:33] cory_fu: I could also see someone doing $JR/src/layers,interfaces
[22:33] I like something like $JR/{layers,interfaces,charms}
[22:33] I'd be ok with that, too
[22:34] cory_fu: we should figure out how juju will handle $JR now in 2.0 though
[22:34] * marcoceppi is off to #juju-dev unless rick_h__ has feedback
[22:35] cory_fu: as a workaround we could do $JR="$HOME/stuff/charms"; $LAYER_PATH=$JR/../layers; etc
[22:36] I guess. Though that breaks my handy dev env switching aliases that change $JR and allow me to have, e.g., a pristine $JR for RQ
[22:36] Including layers, interfaces, etc
[22:36] But I can work around it
[22:37] cory_fu: well would ranter not do that then
[22:37] Also, this only really applies to the default values for LAYER_PATH etc. If they're set manually, they can be whatever the user wants
[22:39] true
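To make the layout being debated above concrete, a sketch of the $JUJU_REPOSITORY/{layers,interfaces,charms}-style setup; the directory names follow the proposal in the discussion and were not a released convention at this point, and the layer name is hypothetical.

    # Umbrella layout as proposed above (GOPATH-style); names are illustrative.
    export JUJU_REPOSITORY="$HOME/juju"
    export LAYER_PATH="$JUJU_REPOSITORY/layers"
    export INTERFACE_PATH="$JUJU_REPOSITORY/interfaces"
    mkdir -p "$LAYER_PATH" "$INTERFACE_PATH" "$JUJU_REPOSITORY/charms"

    # charm build then resolves local layers/interfaces from those paths.
    cd "$LAYER_PATH/my-layer" && charm build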
[22:46] jcastro: nice article about the juju lxd that jamespage told me he was using
[22:46] I'm getting an issue when doing the bootstrap
[22:46] ERROR there was an issue examining the model: invalid config: Can not change ZFS config. Images or containers are still using the ZFS pool
[22:49] marcoceppi: Updated https://github.com/juju/charm-tools/pull/113
[22:52] any idea about that error?
[23:03] hi guys, I'm having an issue with juju ha, wondering if anyone could help me troubleshoot
[23:04] koaps: juju ha? hacluster?
[23:09] apuimedo: juju ha
[23:09] my two additional servers just stay in adding-vote
[23:09] koaps: servers of which charm?
[23:10] juju ensure-availability --to 1,2
[23:10] servers 1 and 2 never get a vote
[23:10] this has worked before
[23:10] but we are rebuilding the environment, and now it's not
[23:10] marcoceppi: I got it working \m/
[23:11] seems like some software package changed and mongodb isn't working right
[23:12] sorry, I've only got experience developing charms with ha support, haven't tried the "ensure-availability" command
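For anyone hitting the same adding-vote hang, a sketch of the steps involved, using juju 1.x syntax; the grep on the status output and the syslog path for the juju-db (mongod) messages are my assumptions, not taken from the log.

    # Request HA and watch the state servers' voting status.
    juju ensure-availability --to 1,2
    juju status --format=yaml | grep -i -B2 'member-status'

    # If machines stay in adding-vote, the mongod replica set on machine 0 is a
    # likely suspect, as koaps found; its log lines are worth a look (path assumed):
    juju ssh 0 'grep -i mongo /var/log/syslog | tail -n 50'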