/srv/irclogs.ubuntu.com/2016/09/21/#juju.txt

=== thumper is now known as thumper-dogwalk
huhaoranhey, charmers, my juju seems to get stuck, is there some way to restart juju itself?02:58
=== thumper-dogwalk is now known as thumper
magicaltroutI'VE FINALLY FOUND A DONALD TRUMP SUPPORTER!03:43
magicaltroutmy life is complete03:43
MianHi, does anyone here know anything about the Xenial version of the mongodb charm? it's not available in the charm store right now04:09
Mianis there a schedule or calendar for when the mongodb charm will be released on Xenial? appreciate any advice/clue on it04:10
magicaltroutthere was some movement on it not too long ago04:11
magicaltroutbut clearly hasn't landed04:11
magicaltroutit needs rewriting and updating04:11
magicaltrouti think marcoceppi was hacking on it04:11
MianI'd like to deploy OpenStack entirely on the Xenial release; Ceilometer has a Xenial charm, but it would be great if the underlying mongodb charm were available too04:12
Mianmagicaltrout++04:15
magicaltroutcan't help you directly, but if you ask on the juju mailing list you'd get a response04:15
magicaltroutit appears the lazy folk aren't around, but the mailing list gives them a chance for an offline response and also registers some interest in making it happen04:15
Mianappreciate your advice :), that's what I need and it's helpful04:16
Mianwill ask on the mailing list04:17
magicaltroutno problem04:17
=== fginther is now known as fginther|away
=== fginther|away is now known as fginther
=== menn0 is now known as menn0-afk
pragsmikegreets06:12
marcoceppio/ pragsmike06:24
marcoceppiMian: I have the start of a new Xenial MongoDB charm, I'm just unable to finish it06:25
pragsmikemorning.  or is it evening?06:25
marcoceppiit's morning where I am06:25
pragsmikeback home yet?06:25
pragsmikei'm just writing up my notes from the charmer's summit06:26
pragsmikeit's taking me days to unpack all that06:26
Mianmarcoceppi:  thanks for this update, it's great to know that we're working on it06:26
Mianmarcoceppi:  A customer is trying to deploy MongoDB on the Xenial series in their lab; it would be even better if we had an estimate of when the Xenial MongoDB charm might be released :)06:28
=== redir is now known as redir-afk
marcoceppiMian: well, I have 0 time to work on it, despite my desire to build a better mongodb charm. If someone wants to push it the rest of the way, it's on GH https://github.com/marcoceppi/layer-mongodb06:29
Mianmarcoceppi: I can understand it :)06:30
MianI guess encouraging the customer to get involved in tinkering with it in their lab might be a win-win strategy if they really want it06:35
Mianmarcoceppi: thanks very much for this update06:35
junaidalihas the set-config option changed in juju 2.0rc1? I'm not able to set config using the juju set-config command06:42
junaidalialso juju get-config is not working06:42
junaidaligot it.. the command is now  $ juju config06:43
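
For reference, the juju 2.0 rename junaidali found, with a placeholder application name:

    $ juju config <application>              # replaces: juju get-config <application>
    $ juju config <application> key=value    # replaces: juju set-config <application> key=value
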
pragsmikejunaidali: reading the scrollback it looks like you were having the same problem with container networking that i am06:49
junaidaliyes pragsmike, rc1 has the fix.06:50
pragsmikehey, released 4 hours ago!  I'm going to go try it!06:51
=== frankban|afk is now known as frankban
=== menn0-afk is now known as menn0
pragsmikehot damn! The openstack bundle is deploying with containers on the correct network, for the first time on my NUC cluster!  Thanks to all!08:11
magicaltroutboom!08:19
marcoceppipragsmike: nice!08:21
pragsmikeNow I can sleep :)08:31
pragsmikeI find that I have to go kick a container or two on each deploy, as they get stuck "pending", but logging into the container host and doing lxc stop / lxc start fixes that08:34
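
pragsmike's "kick" would look something like this on the container host (the container name is a placeholder):

    $ lxc list                                        # find the stuck container's name
    $ lxc stop <container> && lxc start <container>
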
=== jamespag` is now known as jamespage
fernariHello all! Quite new charmer here and one question came to mind while writing my own charms11:16
fernariI am using reactive charms and building my custom software and its dependencies with juju11:16
fernarione key dependency is consul, and I've managed to build a base layer which configures the consul and redis cluster correctly11:17
fernarithen i'm using that layer on my actual charm and that works correctly, let's call that "control-software"11:18
fernariNow I have a problem: "client-software" needs to join the consul cluster, and I can't get the consul layer's peer information out, which I need to join the existing cluster correctly11:19
fernariI'm trying to get the related units like this: cluster_rid = hookenv.relation_ids('control')11:22
fernarithat returns nothing on the client-charm and cluster:0 on the control-charm and after that I'm able to get ip-addresses of the peers11:23
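
For context, the lookup fernari describes is roughly this sketch using charmhelpers (the 'control' relation name is from the log). On the client charm the loop finds nothing because 'control' is a peer relation, and peers only span units of a single application:

    from charmhelpers.core import hookenv

    # Collect the private address of every unit on the 'control' relation.
    for rid in hookenv.relation_ids('control'):
        for unit in hookenv.related_units(rid):
            addr = hookenv.relation_get('private-address', unit=unit, rid=rid)
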
marcoceppifernari: howdy! it might help if you pastebin (paste.ubuntu.com or gist.github.com) some of the metadata.yaml files for your control-software and client (redacted of sensitive information)11:34
fernarimarcoceppi: cheers mate! http://paste.ubuntu.com/23211107/11:50
fernariI think that the problem is that I am building the peer-relationship on the layer below the "control-plane"11:51
marcoceppifernari: so, peers, as I imagine you've found, is just for the units deployed in your charm. So if you have three units of control-plane, they'll all talk to each other11:51
marcoceppifernari: you'll also need to "provide" the raft interface, so it can be connected to worker/client11:51
fernariyep11:51
marcoceppiand in doing so, you can publish on the relation wire the data you need for it to join the cluster11:52
marcoceppifernari: http://paste.ubuntu.com/23211109/11:53
marcoceppisomething like that, you can't use the relation name "control" again, since it's already declared as a peer, but if you can think of another name for the relation replace my placeholder with it11:53
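
A minimal sketch of marcoceppi's suggestion: once the control charm also provides the interface under a new relation name (here 'cluster', a placeholder), it can publish join details on that relation for clients to read. The 'cluster-addresses' key is made up for illustration:

    from charmhelpers.core import hookenv

    def publish_cluster_info(peer_addrs):
        # Called from the cluster-relation-joined hook on the control charm;
        # peer_addrs would be gathered from the peer relation as shown above.
        hookenv.relation_set(relation_settings={
            'cluster-addresses': ','.join(peer_addrs),
        })
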
fernariRight, I'll try that!11:54
fernariThe learning curve for those relations is quite steep, but I'm slowly making progress :)11:55
marcoceppifernari: it's the most powerful aspect of juju, but also one of the hardest to wrap a head around11:58
marcoceppifernari: let us know if we can be of any additional assistance!11:58
fernarisure thing :)11:59
=== petevg_afk is now known as petevg
kjackalkwmonroe: cory_fu: The hadoop bundle deployed on trusty seems pretty stable http://imgur.com/a/amvac -- let's talk at the sync about what we're going to do with it14:06
jamespagemarcoceppi, hey - could I request a release of charm-helpers?14:37
jamespageI've pushed a few changes to support use of application-version across our openstack charms, and need to work on pulling the same feature into reactive charms14:37
rick_h_jamespage: he was on site at a customer thing this morning so might be best to go email on that14:37
jamespagerick_h_, ack14:38
jamespagemarcoceppi, alternatively I'd be happy to RM charm-helpers releases if you wanna give me perms14:38
geethaHi, the `juju attach` command is taking a long time to upload resources on an s390x machine. Can anybody please suggest what the issue might be?15:17
kwmonroehi geetha -- how large is the resource?15:22
rick_h_geetha: just the normal networking type issues one might see tbh.15:24
geethakwmonroe: Resource size is 1.5 GB, but on an x86 machine it's not taking that much time.15:25
junaidaliHey guys, how can we set an install_key in juju? I want to pass a GPG key. I've tried "juju set <charm-name> install_keys="$(cat GPG-KEY)"", and also tried using a yaml file, but it errors out with "yaml.scanner.ScannerError: mapping values are not allowed here"15:26
rick_h_geetha: can/have you tested the network between the different machines? maybe checking mb/s on a curl or rsync or something?15:27
junaidaliI'm able to manually add the GPG-key using apt-key add command15:27
junaidalion the node15:27
rick_h_junaidali: what charm is this?15:28
junaidaliHey rick_h_, its a plumgrid-charm http://bazaar.launchpad.net/~junaidali/charms/trusty/plumgrid-director/trunk/files15:30
junaidaliinstall_key config is available in charmhelpers15:31
junaidaliso instead of writing my own code, i'm using charmhelpers to parse and add the key15:31
junaidalirick_h_, fyi PLUMgrid is an SDN solution provider15:33
rick_h_junaidali: hmm, yea so it's a string, not sure what the command turns into with newlines/etc15:33
geetharick_h_: I kept the resources on the same host machine.15:33
geethaEX: juju attach ibm-was-base ibm_was_base_installer=/root/repo/WAS_BASE/was.repo.9000.base.zip15:35
rick_h_geetha: ? the client is on the same machine as the controller?15:35
junaidalithis is what 'juju get' outputs for install_keys when i pass the value using a yaml file: http://paste.ubuntu.com/23211867/15:38
kjackalkwmonroe: how did you find out about this healthcheck the resource manager is doing on the namenode? Is it in the RM's logs?15:38
junaidalirick_h_: this is when i  pass the value using the actual GPG key file ' juju set <charm> install_keys="$(cat GPG-KEY)"' http://paste.ubuntu.com/23211872/15:39
kjackalkwmonroe: I was thinking about the other thing you said, about not timing the start of the service correctly. That should affect trusty as well, but since I don't see it now, perhaps there is a timing issue there that is not always present15:40
rick_h_beisner: do you recall who worked on that charm and any idea how to package that up for the key to go through there? ^15:40
kwmonroekjackal: i learned about it here: http://johnjianfang.blogspot.com/2014/10/unhealthy-node-in-hadoop-two.html  and then i saw in the nodemanager logs stuff like "received SIGTERM, shutting down", which seems to be what happens when the RM detects that the NM is unhealthy.15:41
kjackalkwmonroe: thanks, interesting15:42
kwmonroekjackal: and "unhealthy" means it either failed to respond to a heartbeat, or it violated resource constraints (like "using 2.6GB of 2.1GB vmem", which i also saw in the NM logs)15:42
kjackalkwmonroe: Ah I see where you are going with that15:43
kwmonroekjackal: so i hypothesized that either the network connectivity was failing the heartbeat, or our yarn.nodemanager.vmem-check-enabled: false setting was not taking effect when the NM started up (hence failing the resource constraint)15:44
geetharick_h_, We are using the local lxd provider.15:44
kwmonroekjackal: both of which seem to be resolved by rebooting the slave15:44
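
To retrace kwmonroe's diagnosis, the telltale entries show up in the nodemanager logs; the log path here is an assumption and varies by install:

    $ grep -iE 'unhealthy|SIGTERM|vmem' /var/log/hadoop/yarn/*nodemanager*.log
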
kwmonroegeetha: what does 'juju version' say?15:45
geethakwmonroe: Tried with beta-15, beta-18 and now using juju 2.0-rc1.15:48
kwmonroeok geetha, but what does the actual output return?  for example, mine is:15:50
kwmonroe$ juju version15:50
kwmonroe2.0-beta18-xenial-amd6415:50
geethaIt's 2.0-rc1-xenial-s390x15:51
kwmonroecool geetha -- i just wanted to double check the series and arch were xenial-s390x15:52
beisnerhi rick_h_ /me searches backscroll for context of 'the charm'15:53
geethaok kevin :)15:54
kwmonroegeetha: i'm not sure what would be causing a significant slowdown on s390x vs x64.  1.5GB is a really large resource though.. how long has the 'juju attach' been running?15:55
beisnerrick_h_, i can't tell where the code repo is from the cs: charm pages for plumgrid.  hence, not able to view authors/contributors.  sorry i don't have more info on that.15:57
kwmonroehey dooferlad -- you had the good insights into resources being slow prior to beta-12 (https://bugs.launchpad.net/juju/+bug/1594924).  can you think of any reason why geetha's 1.5GB resource would take significantly longer to attach on s390x vs x86?  (both using 2.0-rc1 in lxd)15:58
mupBug #1594924: resource-get is painfully slow <2.0> <resources> <juju:Fix Released by dooferlad> <https://launchpad.net/bugs/1594924>15:58
geethaI have 5 resources to attach as part of the WAS deployment; 1.5GB is the largest one and it's taking more than 15 min. The other resources are around 1 GB. In total, deployment takes around 1 hr.16:02
kwmonroegeetha: and how long does the same deploy typically take on x86?16:02
geethaOn x86, I have tried with beta-18 and it's taking around 20 min for deployment.16:06
=== CyberJacob is now known as zz_CyberJacob
junaidaliHi beisner: the value from install_keys is actually handled by charmhelpers, but passing a value gives an error whether I set it using a yaml file or the key file itself. It seems like either there's another way to set the value, or charmhelpers should handle this better.16:09
junaidalifor a multi-line value, i usually use a yaml file to set a config but that isn't working in the present case.16:10
kwmonroegotcha geetha.. so 1.5GB in 15 minutes is roughly 13Mbps.  perhaps the s390x is not tuned very well to xfer that much data over the virtual network or to dasd.  can you benchmark the s390x as rick_h_ suggested?  perhaps try to do a normal scp of the local resource to the controller's /tmp directory.  you can find the controller ip with "juju show-status -m controller", and then "scp </path/to/resource.tgz> ubuntu@<controller-ip>/tmp"16:12
beisnerjunaidali, unfortunately, i'm not familiar with those specific charms or using that config option with them.16:12
junaidalithanks beisner, np. I will try again to figure out a way.16:13
kwmonroegeetha: forgot a colon in that scp.. it would be something like "scp resource.tgz ubuntu@1.2.3.4:/tmp"16:14
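
Putting kwmonroe's two messages together, the benchmark would be (controller IP is a placeholder):

    $ juju show-status -m controller     # note the controller machine's IP
    $ scp /root/repo/WAS_BASE/was.repo.9000.base.zip ubuntu@<controller-ip>:/tmp
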
beisnerjunaidali, i see install_keys is a string type option, so it should be usable in a bundle yaml by passing a string value.  what that string value should be, i'm not sure.16:14
junaidalibeisner, key usually looks like this one http://paste.ubuntu.com/23212091/    but fails at this line in charmhelpers http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/fetch/__init__.py#L12016:19
geethaok kevin, let me try that.16:20
beisnerjunaidali, so are you saying this doesn't work? http://paste.ubuntu.com/23211872/16:23
junaidaliyes, that was the juju get command output16:24
beisnerjunaidali, is there a specific error you're seeing?16:26
junaidalilet me share the output with you please16:27
junaidalihttp://paste.ubuntu.com/23212121/16:28
junaidalibeisner: http://paste.ubuntu.com/23212121/16:28
thedacThis feels like a quoting problem16:29
thedacstub: if you are around do you know the proper way to send a GPG key for charmhelpers.fetch.configure_sources?16:29
thedacthe yaml load is choking on the ':' in Version: GnuPG v1.4.11 (GNU/Linux)16:30
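
thedac's diagnosis is easy to demonstrate: without a YAML block scalar, the 'Version:' line inside the key is read as a mapping, while with a literal block scalar ('|') the key survives as one string. A minimal sketch (charm name from the log, key body truncated):

    import textwrap
    import yaml

    # The '|' block scalar keeps every line literal, so the colon in
    # 'Version: GnuPG v1.4.11 (GNU/Linux)' is not parsed as a YAML mapping.
    doc = textwrap.dedent("""\
        plumgrid-director:
          install_keys: |
            -----BEGIN PGP PUBLIC KEY BLOCK-----
            Version: GnuPG v1.4.11 (GNU/Linux)
            ...
            -----END PGP PUBLIC KEY BLOCK-----
        """)
    print(yaml.safe_load(doc)['plumgrid-director']['install_keys'])
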
thedacjunaidali: I can tell you add_source has had much more developer attention than configure_sources. It might be a big ask, but moving to add_source is probably the best option. http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/fetch/ubuntu.py#L21216:38
thedacmeaning using add_source directly rather than configure_sources.16:40
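
A rough sketch of what thedac suggests, calling add_source directly with the key contents (the repo line and key file name are placeholders):

    from charmhelpers.fetch import add_source, apt_update

    # add_source hands the ASCII-armored key to apt-key itself,
    # so there is no YAML round-trip to mangle it.
    with open('GPG-KEY') as f:
        key = f.read()
    add_source('deb http://example.com/ubuntu trusty main', key=key)
    apt_update()
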
=== frankban is now known as frankban|afk
CorvetteZR1hello.  i'm trying to do juju bootstrap --upload-tools using a xenial image but get connection refused on port 2216:54
CorvetteZR1google turned up some old posts with people having similar issues, but i haven't found any solution16:54
CorvetteZR1i can see the container is running, but bootstrap can't auth using ssh key-auth.  any suggestions?16:55
=== redir-afk is now known as redir
cory_fu@kjackal Per your comment on https://github.com/juju-solutions/layer-apache-kafka/pull/13 I realized that I did it the other way for Apache Zeppelin, and I'm leaning toward changing Apache Kafka to have a single, arch-independent resource, since we expect it to work on most platforms.  If you want to deploy to a platform that requires a custom build for some reason, then you would have to attach it at deploy time.  Seem reasonable?19:05
cory_fukwmonroe: ^19:05
cory_fuDownside is if we find a platform that we *do* want to support that always requires a custom build, but the upside is that we are more optimistic about platforms and open it up to deploy on, e.g., Power19:06
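
For reference, attaching a custom build at deploy time would look like this in juju 2.0 (charm, resource, and file names are placeholders):

    $ juju deploy kafka --resource kafka=./kafka-custom-build.tgz
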
kjackalok cory_fu20:18
kjackalcory_fu: sorry for the delay, I am in the scala world20:18
kwmonroekjackal: cory_fu and i chatted about layer-hadoop-client (https://github.com/juju-solutions/layer-hadoop-client/pull/10).  i think we've agreed that the silent flag is no longer needed, and layer-hadoop-client will *only* report status when deploying the 'hadoop-client' charm.  since you authored the 'silent' flag in the first place, care to weigh in?20:40
cory_fuha, I already weighed in by merging it.  :p  If you have a -1, we can talk about reverting it20:41
kjackalcory_fu: kwmonroe: looks good! Thank you20:42
kwmonroelol cory_fu.. i think the only place we'd be exposed is if we had a charm that included layer-hadoop-client and did not set the 'silent' flag.  that charm might rely on hadoop-client to set the 'waiting for plugin' status20:42
cory_fuExcept that it should now be optional anyway, so it's moot20:42
kwmonroeyour optimism inspires me20:44
kwmonroe(and i was talking about plugin, not java, which is not optional for hadoop-client)20:45
cory_fu:)20:45
cory_fuOh, the plugin20:45
cory_fuHuh20:45
cory_fuYeah, that's probably going to bite us.20:45
kwmonroewell, i did some scouring, and all the charms that include hadoop-client have something like this:  https://github.com/juju-solutions/layer-apache-flume-hdfs/blob/master/reactive/flume_hdfs.py#L820:45
cory_fuBut I think those charms should be fixed.  I think status messages should really only be set in the charm layer20:45
kwmonroeso they're doing their own blocking if/when hadoop isn't ready20:46
cory_fuWell, that's nice20:46
kwmonroeand when i say "i did some scouring", i mean i grepped the 5 charms on my laptop.20:46
kwmonroebut i'm sure the other 50 are fine20:46
kwmonroeoptimism.20:47
=== natefinch is now known as natefinch-afk
theoepratorstruggling to bootstrap, log here: http://paste.ubuntu.com/23213182/21:04
surtinanyone around that can explain why quickstart gives me an error that it can't find the default model, even though it's clearly there? running 16.04. just trying to install landscape-scalable21:20
surtinjust end up with juju-quickstart: error: environment land-test:admin@local/default not found21:23
theoepratorbueller?21:24
surtinand when i first ran the command i got juju-quickstart: error: error: flag provided but not defined: -e21:24
tvansteenburghsurtin: you don't need quickstart, just `juju deploy landscape-scalable`21:25
petevgtheoeprator: it looks like juju is having trouble talking to the controller -- see the error connecting to port 22 in the first few lines of your log. If you do "lxc list", what do you get in response?21:26
surtinalright21:26
theoepratortheoperator@hype:~$ lxc list21:26
theoeprator+-------+---------+---------------------+---------------------------------------------+------------+-----------+21:26
theoeprator| NAME  |  STATE  |        IPV4         |                    IPV6                     |    TYPE    | SNAPSHOTS |21:26
theoeprator+-------+---------+---------------------+---------------------------------------------+------------+-----------+21:26
theoeprator| admin | RUNNING | 192.168.0.36 (eth0) | fd00:fc:8df3:eb62:216:3eff:feae:96f2 (eth0) | PERSISTENT | 0         |21:26
theoeprator+-------+---------+---------------------+---------------------------------------------+------------+-----------+21:26
theoeprator| vpn   | RUNNING | 192.168.0.29 (eth0) | fd00:fc:8df3:eb62:216:3eff:fef0:29c9 (eth0) | PERSISTENT | 0         |21:26
theoeprator+-------+---------+---------------------+---------------------------------------------+------------+-----------+21:26
theoepratortheoperator@hype:~$21:26
theoepratorthose are both containers I set up separately21:27
surtinhmm nope, says the charm or bundle was not found. guess the instructions on the page are outdated or something21:28
petevgtheoeprator: It looks like juju wasn't able to create the container. Are you running out of disk space? Do you have any interesting messages in /var/log/lxd/lxd.log?21:29
tvansteenburghsurtin: which page?21:29
surtin https://help.landscape.canonical.com/LDS/JujuDeployment16.0621:29
tvansteenburghsurtin: dpb1 is the landscape guy, he might be able to help. i'm not sure which bundle you should use21:30
tvansteenburghsurtin: the only one i see in the store is this https://jujucharms.com/q/landscape?type=bundle21:30
surtinyeah i saw that as well21:31
dpb1surtin: http://askubuntu.com/search?q=deploy+landscape+bundle+andreas21:31
dpb1blah21:31
dpb1surtin: http://askubuntu.com/questions/549809/how-do-i-install-landscape-for-personal-use21:31
dpb1that one21:31
dpb1notice the higher rated answer.  talks about using juju as well.21:32
theoepratornothing interesting in the lxd log.  all just lvl=info stuff21:33
surtinok will try it out21:33
surtinty21:34
theoepratori don’t think I’m out of disk space, i don’t have any quotas setup and this is a clean box with ~2TB free21:34
petevgsurtin: are you using juju 1, or juju 2? (juju --version should tell you).21:34
theoepratorthe last error on the bootstrap output is 2016-09-21 21:34:20 ERROR cmd supercommand.go:458 new environ: creating LXD client: Get https://192.168.0.1:8443/1.0: Unable to connect to: 192.168.0.1:844321:35
theoepratorwhere’s it getting that 192.168.0.1 from?21:35
theoepratorthat’s not the address of the host21:35
theoepratorit appears to have the container running at one point, as it runs several apt commands inside it21:37
petevgtheoeprator: it looks like your lxd machines are set up to use your physical network, rather than a private network. Is that something you set up when you created the other lxd machines?21:39
petevgWe're moving beyond my grade level as far as juju and lxd internals go, but juju may be getting confused by that network config.21:41
theoepratorsorry, wifi failed21:45
petevgtheoeprator: no worries. Did you see my last message? (I was puzzled to see lxd machines talking directly to what looks like your physical network, rather than using virtual ips, and asked whether that was something that you had setup when you setup the other containers.)21:46
theoepratoryep, the containers are bridged to the host’s network21:47
petevgtheoeprator: that might be what's breaking things. We're unfortunately moving beyond the realm of things that I know a lot about, but juju typically expects to be able to herd its machines about on a private/virtual network, and then selectively expose them to the real world.21:48
petevgI could be very wrong, though ... you might try bootstrapping again, and watching "lxd list", to see if the machine gets created.21:48
petevgtheoeprator: another possibility is that you've setup your router to block port 22 -- do you have strict security rules setup for your local network?21:49
petevg*lxc list, I mean.21:50
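
Two quick checks for the bridged-network theory petevg raises (assuming a stock LXD setup; juju's lxd provider normally expects the lxdbr0 bridge):

    $ lxc profile show default    # shows which bridge the containers attach to
    $ ip addr show lxdbr0         # does the default lxd bridge exist at all?
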
theoepratorrunning bootstrap again, the container is up right now21:51
theoepratortheoperator@hype:~$ lxc list21:51
theoeprator+---------------+---------+---------------------+---------------------------------------------+------------+-----------+21:51
theoeprator|     NAME      |  STATE  |        IPV4         |                    IPV6                     |    TYPE    | SNAPSHOTS |21:51
theoeprator+---------------+---------+---------------------+---------------------------------------------+------------+-----------+21:51
theoeprator| admin         | RUNNING | 192.168.0.36 (eth0) | fd00:fc:8df3:eb62:216:3eff:feae:96f2 (eth0) | PERSISTENT | 0         |21:51
theoeprator+---------------+---------+---------------------+---------------------------------------------+------------+-----------+21:52
theoeprator| juju-f71b38-0 | RUNNING | 192.168.0.41 (eth0) | fd00:fc:8df3:eb62:216:3eff:fe20:8e53 (eth0) | PERSISTENT | 0         |21:52
theoeprator+---------------+---------+---------------------+---------------------------------------------+------------+-----------+21:52
theoeprator| vpn           | RUNNING | 192.168.0.29 (eth0) | fd00:fc:8df3:eb62:216:3eff:fef0:29c9 (eth0) | PERSISTENT | 0         |21:52
theoeprator+---------------+---------+---------------------+---------------------------------------------+------------+-----------+21:52
theoepratortheoperator@hype:~$21:52
theoepratornow got the same error as before and the container is gone again21:52
blahdeblahtheoeprator: Please use pastebin.ubuntu.com for more than 2 lines21:53
theoepratormy apologies21:53
blahdeblahNo problem - it just makes it a bit noisy for those monitoring a lot of channels21:53
petevgtheoeprator: hmmm ... my guess is that juju is having trouble with the network settings, but I'm not sure quite what. If I were troubleshooting, my next step would be to ssh into the container and check out the logs (check /var/log/juju).22:01
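
petevg's suggestion in command form, while the container is still up (container name taken from the lxc list output above):

    $ lxc exec juju-f71b38-0 -- bash
    # tail -f /var/log/juju/*.log
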
petevgtheoeprator: failing that, you might try posting to the mailing list -- someone who knows more than I do should get back to you.22:04
theoepratorok, thanks Pete22:05
petevgyou're welcome! Sorry that I didn't have an immediate fix for you.22:07
surtinpetevg: juju 222:24
petevgsurtin: you def. want to avoid quickstart then. It's a juju 1 tool. Sounds like those instructions are in need of an update ...22:37
surtinyeah seems that way22:52
magicaltroutdefinitely going to get to play with some Juju stuff @ JPL and DARPA after a few meetings this week22:58
magicaltroutmight also get some buy-in for JPL to extend marathon for LXC/LXD support, although that's a longer shot22:58
petevgmagicaltrout: awesome news :-)23:47
