catbus1Hi, I am using juju-deployer to deploy workloads in lxc containers. I have the containers all created and started in the RUNNING state, but only some of them got IP addresses. For those that don't have IP addresses, there is no corresponding /var/log/juju/machine-#-lxc-#.log.00:23
catbus1How do I find out why those containers didn't get an IP?00:24
catbus1There are enough IP addresses in the DHCP pool.00:40
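A hedged debugging sketch for the situation catbus1 describes (containers RUNNING but with no IP and no machine log). Commands assume the lxc 1.x CLI on a trusty-era host, and the container name is illustrative:

```shell
# Run on the host machine carrying the containers. The container name
# below (juju-machine-1-lxc-0) is an illustrative example.
sudo lxc-ls --fancy                                  # NAME, STATE, IPV4 per container
sudo lxc-attach -n juju-machine-1-lxc-0 -- ip addr show eth0   # did eth0 come up?
grep -i dnsmasq /var/log/syslog | tail -n 20         # DHCP offers seen by the host bridge
```

If dnsmasq never logs an offer for the container's MAC, the problem is on the host bridge side rather than inside the container.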
apuimedojamespage: the three charms have been updated00:47
apuimedojamespage: I put https://bugs.launchpad.net/charms/+bug/1453678 back to new so that you see the new message I wrote00:54
mupBug #1453678: New charms: midonet-host-agent, midonet-agnet, midonet-api <Juju Charms Collection:New> <https://launchpad.net/bugs/1453678>00:54
apuimedoI couldn't assign it to you somehow00:54
apuimedoI'll not be reachable tomorrow, public holiday00:54
=== natefinch is now known as natefinch-afk
=== med_ is now known as Guest76507
jamespagedosaboy, your MPs need to be synced from lp:charm-helpers for the source configuration stuff (just spotted that)08:41
dosaboyjamespage: they are already synced09:18
dosaboyjamespage: in fact i just re-synced and there was still no diff09:18
jamespage-branch: lp:charm-helpers09:18
jamespage6+branch: lp:~hopem/charm-helpers/lp151897509:18
dosaboyjamespage: hmm that must have crept through, lemme check09:19
dosaboyjamespage: ah its just the cinder MP, i'll fix that one09:20
jamespagedosaboy, hah - that was the first I looked at :-)09:20
jamespagedosaboy, if they are passing please merge away - I have a few xenial fixes to follow up with once you have that landed09:21
dosaboyjamespage: sure, actually there are a couple of amulet failures, heat and nova-compute09:21
dosaboyjamespage: heat is a test that was previously broken but i'm gonna see if i can fix09:22
dosaboynova-compute not sure yet09:22
jamespagedosaboy, I can re-run if need be09:22
dosaboyjamespage:  k i'll ping when ready09:22
gnuoydosaboy, jamespage, got any time for https://code.launchpad.net/~gnuoy/charm-helpers/keystone-v3-support/+merge/285689 ?09:34
dosaboyjamespage: gonna merge all but heat until it passes since the rest are +1 now09:42
dosaboygnuoy: maybe soon...09:42
jamespagegnuoy, maybe in a bit - on a half day today and wading through midonet reviews atm10:14
gnuoyok np10:14
wesleymasonWell here's an odd one11:25
wesleymasonit seems every time I've invoked charm build since the last time I rm -rf'd the built charm, I've ended up with an embedded build inside the last one11:26
wesleymasonso I have: trusty/errbot/trusty/errbot/trusty/errbot/trusty/errbot/trusty/errbot/11:26
wesleymason5 levels deep11:26
wesleymasonbet if I call charm build again I end up with a 6th11:26
wesleymasonThat can't be expected behaviour, right?11:27
wesleymasonI'm guessing it's not blacklisting the trusty and deps dirs when building in "." (as opposed to a JUJU_REPOSITORY)11:30
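The recursion wesleymason hit can be sidestepped by building into an explicit output directory rather than inside the layer source dir. A minimal sketch, assuming charm-tools' `-o` output flag; all paths are illustrative:

```shell
# Keep layer sources and build output in separate trees so repeated builds
# can't nest inside the previous build's output. Paths are illustrative.
export JUJU_REPOSITORY=$HOME/charms        # built charms land here
export LAYER_PATH=$HOME/src/layers         # layer sources live here
mkdir -p "$JUJU_REPOSITORY"
cd "$LAYER_PATH/errbot"
charm build -o "$JUJU_REPOSITORY"          # output goes to $JUJU_REPOSITORY/trusty/errbot
```

With the source tree outside `$JUJU_REPOSITORY`, a rebuild overwrites the previous output instead of embedding a new copy inside it.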
jamespagethedac, thanks for the reviews - I've landed swift-storage, nova-* are updated and I've added neutron-gateway which I missed before12:10
jamespageI've disabled mitaka tests for now; we can re-enable once all of those land12:10
iceyUrsinha: do you by chance have a bundle of what you deployed yesterday that ran into #1517940 ?13:33
mupBug #1517940: workload-status is wrong <landscape> <openstack> <sts> <ceph-radosgw (Juju Charms Collection):New> <https://launchpad.net/bugs/1517940>13:33
Ursinhaicey: let me check13:40
iceyUrsinha: I just had it hang for a while at blocked: mon relation missing, and then start executing13:42
Ursinhaicey: so... we don't do bundles13:45
iceythought that may be the case, no worries13:45
iceydo you have any special config on the mon or radosgw side?13:45
icey(feel free to pastebin them and PM me)13:45
Ursinhaicey: done :)13:55
iceythanks Ursinha, will keep digging13:56
Ursinhaicey: thanks for looking into that13:56
Ursinhaicey: out of curiosity, for how long you waited until the relations settled?13:58
icey5 minutes? on AWS13:59
iceywill try to test on OpenStack in a bit13:59
Ursinhaah, right,13:59
sparkiegeekicey: the unit should transition out of blocked as *soon* as it sees the relation is established - it should go into maintenance state when it's busy doing things but no longer requires action from the user14:01
iceysparkiegeek: agreed, and that's what I saw happen14:02
sparkiegeek5m is way too long for the charm to spend /noticing/ that it has the relation (even if it has a bunch of stuff to do and the work for that relation is still ongoing)14:02
iceybut it did take a few minutes between adding the relation and the state changing / hook running14:02
iceysparkiegeek: I'm curious about how long juju took to trigger the hook execution14:03
iceyI've seen hook execution take a long time to start when juju is fairly heavily loaded before14:03
iceyand not just with the radosgw charm sparkiegeek14:04
iceyeither way, need to do more digging though14:04
* sparkiegeek nods14:04
sparkiegeekFWIW we've only seen this with ceph-mon charm, none of the others14:04
sparkiegeekhence the belief it's a charm bug :)14:04
iceyI've seen it with other charm relations, just not for /that/ long14:05
sparkiegeeksorry, I mean with radosgw charm relation to ceph-mon14:07
jcastrojamespage: ^^^14:23
jcastroI set it all up yesterday14:23
rick_h__jcastro: <314:24
jamespagejcastro, nice...14:24
rick_h__jcastro: safe to share out or still in editing?14:25
jcastroshare like the wind14:28
=== cherylj_ is now known as cherylj
stubicey, sparkiegeek: Was the hook you expected to be triggered actually triggered, or was it the update-status hook? If something was messed up, such as a hook missing executable permissions, the update-status hook kicks in every five minutes or so and can hide the problem.14:47
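stub's point about masked hook failures is easy to check mechanically. A minimal sketch that flags hook files missing the executable bit; the charm path and hook name are illustrative:

```shell
# Create an illustrative charm tree with a hook that was written but never
# chmod +x'd -- the failure mode stub describes, where update-status keeps
# firing and hides the broken relation hook.
CHARM=./example-charm
mkdir -p "$CHARM/hooks"
printf '#!/bin/sh\ntrue\n' > "$CHARM/hooks/mon-relation-joined"   # no +x bit set

# List hook files that are not owner-executable; a healthy charm prints nothing.
find "$CHARM/hooks" -type f ! -perm -100
```

Running this against a real charm directory (instead of the fabricated one) is a quick sanity check before blaming juju for a hook that "never ran".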
iceystub: the hook is a relation joined hook14:49
iceyand the hook usually runs fine14:49
iceyoccasionally, it seems like the hook doesn't run (or just takes forever to run)14:50
iceywell, it's a relation-changed and relation-joined so it should have been run14:50
stubHooks running on subordinates block other hooks running on the same unit, which might apply here14:50
iceyboth of the sides of the relation are primary charms14:51
iceyno subordinates deployed14:51
stubI think juju run might block hooks too14:51
iceyno juju run14:51
iceyjuju deploy x314:51
iceyjuju add-relation(s)14:51
iceyone of the relations never seems to get related14:51
iceyUrsinha: correct me if I'm wrong14:52
Ursinhaicey: it's like the relation exists but isn't relating :)14:53
UrsinhaI have to remove it and readd14:53
iceyjcastro: already top 10 on HN14:54
josefreyes: ping15:13
freyesjose, pong15:13
josefreyes: hey! I have a quick question on a merge you proposed15:13
josehave a couple mins to figure this out?15:13
freyesjose, sure, which one?15:13
josefreyes: https://code.launchpad.net/~freyes/charms/trusty/memcached/lp1525026/+merge/28125415:14
josethere's a test in there, test 20. it checks for the 'public-address' on the instance and makes sure it's the same on the memcached.conf file, however, could it work with the private address instead of the public one as well?15:14
freyesjose, right, that is failing for AWS, because the replication is configured over the private address, and I changed the test to use private-address, the problem with it is that the sentry doesn't have 'private-address'15:16
joseI thought it did...15:16
freyesI have to dig in it yet, not sure why that happens15:16
freyesyup, I thought the same15:16
josemarcoceppi: do sentries in amulet have private addresses? e.g. AWS with public-address and private-address on config-get15:16
josefreyes: the other option would be to have it as a `juju-run` and get it from there15:17
freyesjose, yes, not happy with that approach, but could be enough to get it passing15:18
sparkiegeekicey: Ursinha: stub: there /are/ subordinates in play here15:21
thedacjamespage: wrt, xenial MPs. great. I'll shepherd any remaining ones in today.15:31
rick_h__juju on hackernews ftw #4 atm make sure to vote and keep an eye out in case of questions and such https://news.ycombinator.com/news15:42
marcoceppirick_h__: you mean #2 :)15:57
rick_h__marcoceppi: it's moving up and up wheeee15:57
sparkiegeekI need to find where jcastro keeps his bug tracker15:59
sparkiegeekjcastro: s/ubuntu-trust/ubuntu-trusty/ in the lxc launch command15:59
marcoceppisparkiegeek: he's at an appointment, atm but his bug tracker is overflowing - best to just ping him directly ;)16:05
marcoceppirick_h__: how do i make a charm I published public?16:21
marcoceppiwhat's the change-perms incantation?16:21
rick_h__marcoceppi: charm2 change-perm cs:xxxx --add-read=everyone ?16:23
rick_h__atm I think16:23
marcoceppithat's it16:23
marcoceppirick_h__: all I have to do now is create an account in juju with the username "everyone" ;) ;)16:24
rick_h__marcoceppi: pretty sure it's not allowed16:24
rick_h__marcoceppi: oh you mean juju...but you have to charm login so it's not the same16:24
marcoceppirick_h__: everyone worked, thanks! I love the instant gratification of stuff landing in the store16:25
rick_h__marcoceppi: instant gratification is most gratifying16:25
* rick_h__ runs for lunchables16:25
cory_fumarcoceppi: Did you by chance get the juju-deployer apt package updated for the plebs like me that occasionally still use it?  :)16:27
marcoceppicory_fu: no, not yet16:27
marcoceppiI've had to run errands all day16:27
bdxicey, jamespage: I'm getting "No block devices detected using current configuration" Message for one of my ceph-osd nodes  .... I'm wondering if there is some insight you can give on this before I jump in .... you can see my devices all exist -> http://paste.ubuntu.com/15025182/16:27
cory_fumarcoceppi: No worries.  Just keeping it on your radar.  :)16:28
iceybdx what's your config look like for osd-devices?16:28
cory_fumarcoceppi: btw, I'm going to be spending time today on these issues for the charm-tools deadline: https://github.com/juju/charm-tools/issues?q=is%3Aopen+is%3Aissue+milestone%3A2.016:28
iceybdx: can look after lunch, solid meetings around lunchtime so be back soon16:28
cory_fuLet me know if you disagree with any of those being required for 2.016:29
bdxicey: http://paste.ubuntu.com/15025196/16:29
marcoceppicory_fu: LGTM make sure to target the road-to-2.0 branch for these16:35
cory_fuWill do, thanks16:36
iceybdx: any chance that it has settled and found the devices now?17:03
bdxicey: https://bugs.launchpad.net/charms/+source/ceph-osd/+bug/154507917:07
mupBug #1545079: "No block devices detected using current configuration" <ceph-osd (Juju Charms Collection):New> <https://launchpad.net/bugs/1545079>17:07
iceybdx: the charm won't get around to adding storage until it can confirm that the cluster is bootstrapped, which requires the osd be able to talk to the mon quorum17:12
bdxicey: it can talk to the mons just fine .... it just didn't have dhcp assigned to its repl/cluster interface17:16
bdxicey: now that it has an ip on the cluster network ... it deploys.17:17
iceyyeah bdx, if it can't talk on its cluster network, it won't bootstrap :-P17:17
iceyglad it works now!17:18
bdxicey: yeah ... totally .. didn't pay attention to the node interfaces getting wiped clean at commissioning. thanks!17:21
=== redelmann is now known as rudi|brb
=== rudi|brb is now known as redelmann
jcastrosparkiegeek: I fixed that, my CDN might or might not be caught up wherever you are18:45
cory_fulazyPower: Do you have a copy of the stacktrace for https://github.com/juju/charm-tools/issues/10219:00
lazyPowercory_fu not off hand but i can make one really quick, gimme a sec to fire up a container and pip install charm-tools without git installed19:00
=== Yrrsinn_ is now known as Yrrsonn
=== Yrrsonn is now known as Yrrsinn
cory_fulazyPower: Nevermind, I reproduced it19:21
lazyPowercory_fu - awesome, sorry i got distracted19:29
cory_fulazyPower: No worries.  Your explanation on how to reproduce it clued me in19:30
cory_fumarcoceppi: The road-to-2.0 branch is behind master by several commits.  Any objection to me bringing it up to date before I start creating MPs against it?19:35
marcoceppicory_fu: not at all19:36
cory_fumarcoceppi: It won't ff merge.  Do you prefer a rebase or non-ff merge?19:37
marcoceppicory_fu: rebase, tbh19:37
marcoceppisince it's a feature branch19:37
cory_fuOk, that's my preference as well, but they were your commits that will be rewritten19:37
admcleod-is there a document which lists environment variables that should be set for juju2.0?19:47
jcastro10 minute warning until office hours!19:48
jcastrojose: around?19:54
marcoceppijcastro: you setting up the hangout?19:56
marcoceppicory_fu: I seem to agree that charms.layer should be split to it's own library at this point19:56
jcastromarcoceppi: ^^^19:57
jcastroanyone who wants to join the office hours hangout is welcome to do so, see the above link19:57
lazyPowerurgh, i want to attend but the hangout plugin keeps crashing here :( i guess i'll just listen20:04
lazyPowerman! Even hacking around on maas 1.9, nicely done Gilbert!20:09
MemeTeam6who /tpg/ here20:14
lazyPowerMemeTeam6 - thinkpad user?20:16
lazyPowerwooo multi-model-stateserver!20:21
* lazyPower throws confetti20:21
kwmonroewoohooo juju deploy bundle!20:21
lazyPowerargh i said it again... i mean multi-model-controller20:21
lazyPowerAll these awesome features in succession, i wonder if the watchers really get how much progress we just showed in under 60 seconds20:22
arosaleshttps://lists.ubuntu.com/archives/juju/2016-February/006498.html  -- juju release notes20:29
lazyPowermarcoceppi - charms.ansible landed during the summit, which is a supporting extension of michael nelsons work20:39
lazyPowerhttps://github.com/chuckbutler/charms.ansible - readme and proper documentation forthcoming in ~ a week20:40
lazyPowerit'll move to juju-solutions after its documented20:41
lazyPowerand put under CI20:41
* lazyPower fanfares @ NS950 and their doc contributions20:42
arosales"ERROR the name of the model must be specified"20:49
arosaleson juju bootstrap20:49
rick_h__arosales: have to give the controller a name on bootstrap20:49
rick_h__arosales: because now you can bootstrap several times, each with their own name e.g. staging and production20:50
marcoceppiarosales: juju bootstrap -m "environment-name"20:50
rick_h__arosales: the error should be 'controller name' vs 'model name'20:50
rick_h__which is interesting, will have to talk to wallyworld about that one.20:50
arosales-m was the key there20:50
rick_h__oh hmm, shouldn't be behind a flag according to the spec20:50
arosalesnote the juju docs for "juju help bootstrap" do not state that -m is mandatory20:51
* rick_h__ goes and double checks20:51
arosalesso "juju bootstrap -m aws-east1" is what worked for me20:51
arosalesgiven my environment.yaml file in ~/.local/share/juju and has a aws-east1 stanza20:52
rick_h__arosales: ok yea, that'll turn into "juju bootstrap $controllername $credentialname"20:52
rick_h__arosales: but looks like it's not there yet20:52
arosalesrick_h__: ack, juju help bootstrap just told me "usage: juju bootstrap [options]" which I am sure is just a case of the alpha help commands not being updated20:54
rick_h__arosales: yea20:54
* arosales finally bootstrapping though after moving my environments.yaml file and appending -m <env-name> on bootstrap20:55
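For the record, the sequence that got arosales bootstrapped, as a sketch of the alpha-era syntax (which rick_h__ notes will change: the -m flag is slated to become a positional controller name; paths and the aws-east1 name come from arosales' setup and are illustrative):

```shell
# juju 2.0 alpha bootstrap, as worked out in the discussion above.
mkdir -p ~/.local/share/juju
mv ~/.juju/environments.yaml ~/.local/share/juju/   # config moved to the new location
juju bootstrap -m aws-east1                         # -m was still required in alpha2
```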
arosalesthanks rick_h__ and marcoceppi20:55
lazyPowerarosales - we both hit that today :D rick_h__ showed me up by sneaking out -alpha2 with a full announcement while i wasn't watching20:56
lazyPowerawesome office hours gents!20:56
* arosales learned a lot20:56
lazyPowerarosales - know what my favorite part was?20:56
arosaleslazyPower: mbruzek orange shirt?20:57
lazyPowerarosales - thats a close second -  jujusolutions/charmbox:devel already had *everything* i needed :P  simply update an alias and we're ready to goooooooo20:57
rick_h__lazyPower: is the recording up?20:57
lazyPowerrick_h__ if you hit ubuntuonair.com its there and ready for you20:57
* lazyPower is currently watching a resources demo20:57
* lazyPower is pretty excited about this!20:58
arosaleslazyPower: nice20:58
* rick_h__ loads it up20:58
arosalesman jorge I should have mentioned how folks can try xenial in AWS20:58
* arosales to send that to the list20:58
arosalesxenial with juju that is.20:59
lazyPowerarosales oh yeah!! do that!20:59
arosaleswill do20:59
lazyPoweri'll send you a pizza :D20:59
jcastroon the flipside of the create-model being fast21:05
jcastrodestroy-model is instant21:05
arosalesmmmm pizzza :-)  lazyPower21:05
arosalesall meat please21:05
jrwrendo pizza bribes work well here, I'll start using them often if they do.  ;]21:06
marcoceppirick_h__: does the gui team know of the alpha2 login issues?21:06
cloudguru_Need to know who is working on a layer-docker charm for openVswitch as an SDN layer for MWC21:07
jrwrenmarcoceppi: yes, we know. we cry about it every night. I'm filling a bucket with tears as i write this.21:07
marcoceppicloudguru_: you should ping lazyPower21:08
cloudguru_thx.  already done.21:08
lazyPowercloudguru_ o/ heyo21:08
lazyPoweri'm still here21:08
rick_h__jrwren: this is that the gui isn't 2.0 api ready? Or something else?21:09
jrwrenrick_h__: it is that exactly. Better not be something else.21:10
rick_h__jrwren: k21:10
lazyPowercloudguru_ i'm really out/offline today, but i stuck around to do office hours. anything specific I can answer? is this coming up to crunch time and causing an issue?21:10
rick_h__jrwren: yea, just making sure it's not a different bug/etc. I'd not heard of anything there.21:10
cloudguru_@lazyPower .. all good.  We can run the scripts but the docker charm for OVS is preferred21:12
lazyPowercloudguru_ that initial stab at the layer for OVS was basically an encapsulation of that script21:12
lazyPowershould be able to juju deploy the built charm --to 0 and bypass the entirety of running the script21:12
cloudguru_re: lxc lBR clusters for openstack .. you guys are right.  I'm pretty sure this is how the nebula appliance worked under the covers on a single (huge) appliance.21:13
lazyPoweras it's delivered on first run21:13
cloudguru_Nice !!!21:14
lazyPoweri need to step out, i have somewhere to be in 45 minutes across pittsburgh, but i left you my # in an im cloudguru_  - feel free to call21:14
lazyPowersorry i'm pressed for time :\21:14
cloudguru_LXC OpenStack on full killer .. pretty much OpenStack Magnum21:14
rick_h__lol at marcoceppi wanting to make sure "no one wants to look at my face"21:15
josejcastro: just got your ping, was stuck at work with no internet21:34
jcastrono worries, I was just editing ubuntuonair21:38
cory_fumarcoceppi: Did you see I updated https://github.com/juju/charm-tools/pull/108 ?21:58
jcastrook with a new machine I need to test it, any hardcore bundles that can exercise my system?22:00
rick_h__jcastro: I'd think kwmonroe's would be best22:00
marcoceppicory_fu: thanks!22:00
marcoceppijcastro: juju deploy -n 1000 cs:~jorge/trusty/boinc22:01
jcastrorick_h__: that didn't really break a sweat22:01
rick_h__jcastro: add moar units?22:01
jcastroI suppose I could actually run something in hadoop22:01
rick_h__jcastro: was that the realtime-syslog-analytics?22:01
marcoceppihey cory_fu22:26
marcoceppihttps://github.com/juju/charm-tools/issues/115 should we just create a `$JUJU_REPOSITORY/build/<charm_name>` instead? or just put <charm_name> in JUJU_REPOSITORY?22:27
rick_h__marcoceppi: cory_fu just a heads up on the series in metadata. The UI team was working on updating the charmstore to support that and the jujucharms.com website22:28
rick_h__marcoceppi: cory_fu and I'm not 100% sure where that's left off (e.g. might be comitted but not yet deployed)22:28
marcoceppirick_h__: right, but it will make it to 2.0 ?22:28
rick_h__marcoceppi: cory_fu definitely22:28
marcoceppithis is all work planned for charm-tools 2.022:28
marcoceppinot the 1.11.2 release22:28
rick_h__marcoceppi: cory_fu ah ok np then22:28
cory_fumarcoceppi: Crap.  I would have liked to get some of these changes that I landed today into 1.11.2.  I assume we can backport them?22:29
marcoceppicory_fu: we can totally backport22:29
marcoceppicory_fu: 2.0 will be released with the new charm command from uros team22:29
marcoceppicory_fu: which I need to get into xenial like tomorrow22:29
marcoceppicory_fu: so I will upload charm/charm-tools 2.0 to the juju/devel ppa22:30
marcoceppibut we can still do 1.X releases22:30
cory_fumarcoceppi: Cool.  And as for the directory layout, I'm not sure.  I guess we should come up with a different recommendation than suggesting $LAYER_PATH etc be subdirectories of $JUJU_REPOSITORY22:30
marcoceppiand we can worry about backports later22:30
marcoceppicory_fu: I think it's a good idea still, tbh22:30
cory_fumarcoceppi: What's still a good idea?22:31
marcoceppicory_fu: it's kind of like the GOPATH stuff22:31
marcoceppicory_fu: $JUJU_REPOSITORY being an umbrella for stuff22:31
marcoceppicory_fu: though, it doesn't have to live in JUJU_REPOSITORY, since LAYER_PATH and INTERFACE_PATH are individual settings22:31
cory_fuOk, but how is the new juju going to handle "juju deploy foo" where foo is checked out locally?22:31
marcoceppicory_fu: gooooood point22:32
marcoceppicory_fu: so we could just put the charm_name in the $JUJU_REPOSITORY22:32
cory_fuThere's nothing really stopping us from keeping the same pattern and having a "layers" directory under $JR.  It just means no one can have a charm named "layers" (or "interfaces")22:33
marcoceppiwhich seems silly22:33
marcoceppicory_fu: I could also see someone doing $JR/src/layers,interfaces22:33
cory_fuI like something like $JR/{layers,interfaces,charms}22:33
cory_fuI'd be ok with that, too22:33
marcoceppicory_fu: we should figure out how juju will handle $JR now in 2.0 though22:34
* marcoceppi is off to #juju-dev unless rick_h__has feedback22:34
marcoceppicory_fu: as a work around we could do $JR="$HOME/stuff/charms"; $LAYER_PATH=$JR/../layers; etc22:35
cory_fuI guess.  Though that breaks my handy dev env switching aliases that change $JR and allow me to have, e.g., a pristine $JR for RQ22:36
cory_fuIncluding layers, interfaces, etc22:36
cory_fuBut I can work around it22:36
marcoceppicory_fu: well would rather not do that then22:37
cory_fuAlso, this only really applies to the default values for LAYER_PATH etc.  If they're set manually, they can be whatever the user wants22:37
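The layout being debated can be sketched as one umbrella directory with charms, layers, and interfaces as siblings — close to marcoceppi's workaround but without the `..` hop. The directory names here are illustrative, not the tools' defaults:

```shell
# One umbrella dir; built charms, layer sources, and interface sources
# each get their own sibling tree so "charm build" output and "juju deploy"
# lookups never collide with source checkouts.
BASE=$(mktemp -d)
export JUJU_REPOSITORY=$BASE/charms
export LAYER_PATH=$BASE/layers
export INTERFACE_PATH=$BASE/interfaces
mkdir -p "$JUJU_REPOSITORY" "$LAYER_PATH" "$INTERFACE_PATH"
ls "$BASE"    # charms  interfaces  layers
```

Since LAYER_PATH and INTERFACE_PATH are individual settings, nothing forces them under the same umbrella; this just keeps one dev environment switchable by changing a single BASE.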
apuimedojcastro: nice article about the juju lxd that jamespage told me he was using22:46
apuimedoI'm getting some issue when doing the bootstrap22:46
apuimedoERROR there was an issue examining the model: invalid config: Can not change ZFS config. Images or containers are still using the ZFS pool22:46
cory_fumarcoceppi: Updated https://github.com/juju/charm-tools/pull/11322:49
apuimedoany idea about that error?22:52
koapshi guys, I'm having an issue with juju ha, wondering if anyone could help me troubleshoot23:03
apuimedokoaps: juju ha? hacluster?23:04
koapsapuimedo: juju ha23:09
koapsmy two additional servers just stay in adding-vote23:09
apuimedokoaps: servers of which charm?23:09
koapsjuju ensure-availability --to 1,223:10
koapsthe server 1 and 2 never get vote23:10
koapsthis has worked23:10
koapsbut we are rebuilding the environment, now it's not23:10
aisraelmarcoceppi: I got it working \m/23:10
koapsseems like some software package changed and mongodb isn't working right23:11
apuimedosorry, I've only got experience developing charms with ha support, haven't tried the "ensure-availability" command23:12

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!