=== thumper is now known as thumper-dogwalk
[02:59] hey, charmers, my juju seems to get stuck, is there some way to restart juju itself?
=== thumper-dogwalk is now known as thumper
[03:43] I'VE FINALLY FOUND A DONALD TRUMP SUPPORTER!
[03:43] my life is complete
[04:09] Hi, does anyone here know something about the Xenial version of the mongodb charm? it's not available in the charm store right now
[04:10] is there a schedule or calendar for when we will release the mongodb charm on Xenial? appreciate any advice/clue on it
[04:11] there was some movement on it not too long ago
[04:11] but clearly hasn't landed
[04:11] it needs rewriting and updating
[04:11] i think marcoceppi was hacking on it
[04:12] I'd like to deploy Openstack entirely on the Xenial release; Ceilometer has a Xenial charm, but it would be great if the underlying mongodb charm were available too
[04:15] magicaltrout++
[04:15] can't help you directly, but if you ask on the juju mailing list you'd get a response
[04:15] it appears the lazy folk aren't around, but on the mailing list it gives them a chance for an offline response and also registers some interest in making it happen
[04:16] appreciate your advice :), that's what I need and it's helpful
[04:17] will inquire on the mailing list
[04:17] no problem
=== fginther is now known as fginther|away
=== fginther|away is now known as fginther
=== menn0 is now known as menn0-afk
[06:12] greets
[06:24] o/ pragsmike
[06:25] Mian: I have the start of a new Xenial MongoDB charm, I'm just unable to finish it
[06:25] morning. or is it evening?
[06:25] it's morning where I am
[06:25] back home yet?
[06:26] i'm just writing up my notes from the charmer's summit
[06:26] it's taking me days to unpack all that
[06:26] marcoceppi: thanks for this update, it's great to know that we're working on it
[06:28] marcoceppi: A customer is trying to deploy with MongoDB on the Xenial series in their lab; it would be even better if we had an estimate of when the Xenial MongoDB charm might be released :)
=== redir is now known as redir-afk
[06:29] Mian: well, I have 0 time to work on it, despite my desire to build a better mongodb charm. If someone wants to push it the rest of the way, it's on GH https://github.com/marcoceppi/layer-mongodb
[06:30] marcoceppi: I can understand it :)
[06:35] I guess encouraging the customer to get involved in tinkering with it in their lab might be a win-win strategy if they want it
[06:35] marcoceppi: thanks very much for this update
[06:42] has the set-config option changed in juju 2.0rc1? I'm not able to set config using the juju set-config command
[06:42] also juju get-config is not working
[06:43] got it.. the command is now $ juju config
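For reference, the rename folds the old get/set pair into a single subcommand. A rough sketch of the before/after (the application and option names below are just placeholders):

    # juju 1.x used `juju set` / `juju get`; the 2.0 betas renamed them to set-config / get-config
    juju set-config myapp some-option=value
    juju get-config myapp
    # juju 2.0-rc1 and later: one command for both directions
    juju config myapp some-option=value   # set a value
    juju config myapp some-option         # read one value
    juju config myapp                     # dump all settings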
[06:49] junaidali: reading the scrollback it looks like you were having the same problem with container networking that i am
[06:50] yes pragsmike, rc1 has the fix.
[06:51] hey, released 4 hours ago! I'm going to go try it!
=== frankban|afk is now known as frankban
=== menn0-afk is now known as menn0
[08:11] hot damn! The openstack bundle is deploying with containers on the correct network, for the first time on my NUC cluster! Thanks to all!
[08:19] boom!
[08:21] pragsmike: nice!
[08:31] Now I can sleep :)
[08:34] I find that I have to go kick a container or two on each deploy, as they get stuck "pending", but logging into the container host and doing lxc stop / lxc start fixes that
=== jamespag` is now known as jamespage
[11:16] Hello all! Quite new charmer here and one question came to mind while writing my own charms
[11:16] I am using reactive charms and building my custom software and its dependencies with juju
[11:17] one key dependency is consul and I've managed to build a base layer which configures the consul and redis cluster correctly
[11:18] then i'm using that layer on my actual charm and that works correctly, let's call that "control-software"
[11:19] Now I have a problem as "client-software" needs to join the consul cluster and I can't get the consul layer's peer information out, which I need to correctly join the existing cluster
[11:22] I'm trying to get the related units like this: cluster_rid = hookenv.relation_ids('control')
[11:23] that returns nothing on the client-charm and cluster:0 on the control-charm, and after that I'm able to get the ip-addresses of the peers
[11:34] fernari: howdy! it might help if you pastebin (paste.ubuntu.com or gist.github.com) some of the metadata.yaml files for your control-software and client (redacted of sensitive information)
[11:50] marcoceppi: cheers mate! http://paste.ubuntu.com/23211107/
[11:51] I think that the problem is that I am building the peer-relationship on the layer below the "control-plane"
[11:51] fernari: so, peers, as I imagine you've found, is just for the units deployed in your charm. So if you have three units of control-plane, they'll all talk to each other
[11:51] fernari: you'll also need to "provide" the raft interface, so it can be connected to worker/client
[11:51] yep
[11:52] and in doing so, you can publish on the relation wire the data you need for it to join the cluster
[11:53] fernari: http://paste.ubuntu.com/23211109/
[11:53] something like that, you can't use the relation name "control" again, since it's already declared as a peer, but if you can think of another name for the relation replace my placeholder with it
[11:54] Right, I'll try that!
[11:55] The learning curve for those relations is quite steep, but slowly getting forward :)
[11:58] fernari: it's the most powerful aspect of juju, but also one of the hardest to wrap a head around
[11:58] fernari: let us know if we can be of any additional assistance!
[11:59] sure thing :)
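The shape marcoceppi is describing looks roughly like the sketch below. The relation and interface names are placeholders (his actual suggestion is in the paste.ubuntu.com link above); the idea is to keep the peer relation so the control-software units can find each other, and to add a separately named provides/requires pair so client-software can be related to it and receive the join information over that relation:

    # control-software metadata.yaml (fragment, sketch only)
    peers:
      control:
        interface: raft
    provides:
      cluster:            # any name other than the existing peer relation
        interface: raft

    # client-software metadata.yaml (fragment, sketch only)
    requires:
      cluster:
        interface: raft

In the control-software layer, the handler for the new relation can then publish the peer addresses it already collects from the peer relation, and the client reads them from its side of the relation.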
=== petevg_afk is now known as petevg
[14:06] kwmonroe: cory_fu: The hadoop bundle deployed on trusty seems pretty stable http://imgur.com/a/amvac let's talk at the sync about what we're going to do with it
[14:37] marcoceppi, hey - could I request a release of charm-helpers?
[14:37] I've pushed a few changes to support use of application-version across our openstack charms, and need to work on pulling the same feature into reactive charms
[14:37] jamespage: he was on site at a customer thing this morning so might be best to go email on that
[14:38] rick_h_, ack
[14:38] marcoceppi, alternatively I'd be happy to RM charm-helpers releases if you wanna give me perms
[15:17] Hi, the `juju attach` command is taking a long time to upload resources on an s390x machine. Can anybody please suggest what the issue here might be?
[15:22] hi geetha -- how large is the resource?
[15:24] geetha: just the normal networking type issues one might see tbh.
[15:25] kwmonroe: Resource size is 1.5 GB. But on an x86 machine it's not taking that much time.
[15:26] Hey guys, how can we set an install_key in juju? I want to pass a GPG-KEY. I've tried "juju set install_keys="$(cat GPG-KEY)"", and also tried using a yaml, but it errors out with "yaml.scanner.ScannerError: mapping values are not allowed here"
[15:27] geetha: can/have you tested the network between the different machines? maybe checking MB/s on a curl or rsync or something?
[15:27] I'm able to manually add the GPG-key using the apt-key add command
[15:27] on the node
[15:28] junaidali: what charm is this?
[15:30] Hey rick_h_, it's a plumgrid charm http://bazaar.launchpad.net/~junaidali/charms/trusty/plumgrid-director/trunk/files
[15:31] the install_key config is available in charmhelpers
[15:31] so instead of writing my own code, i'm using charmhelpers to parse and add the key
[15:33] rick_h_, fyi PLUMgrid is an SDN solution provider
[15:33] junaidali: hmm, yea so it's a string, not sure what the command turns into with newlines/etc
[15:33] rick_h_: I kept the resources on the same host machine.
[15:35] EX: juju attach ibm-was-base ibm_was_base_installer=/root/repo/WAS_BASE/was.repo.9000.base.zip
[15:35] geetha: ? the client is on the same machine as the controller?
[15:38] this is what 'juju get' outputs for install_keys when i pass the value using a yaml file: http://paste.ubuntu.com/23211867/
[15:38] kwmonroe: how did you find out about this healthcheck the resource manager is doing on the namenode? Is it in the RM's logs?
[15:39] rick_h_: this is when i pass the value using the actual GPG key file 'juju set install_keys="$(cat GPG-KEY)"' http://paste.ubuntu.com/23211872/
[15:40] kwmonroe: I was thinking about the other thing that you said about not timing the start of the service correctly. That should affect trusty as well, but since I do not see it now, perhaps there is some timing issue there that is not always present
[15:40] beisner: do you recall who worked on that charm and any idea how to package that up for the key to go through there? ^
[15:41] kjackal: i learned about it here: http://johnjianfang.blogspot.com/2014/10/unhealthy-node-in-hadoop-two.html and then i saw in the nodemanager logs stuff like "received SIGTERM, shutting down", which seems to be what happens when the RM detects that the NM is unhealthy.
[15:42] kwmonroe: thanks, interesting
[15:42] kjackal: and "unhealthy" means it either failed to respond to a heartbeat, or it violated resource constraints (like "using 2.6GB of 2.1GB vmem", which i also saw in the NM logs)
[15:43] kwmonroe: Ah I see where you are going with that
[15:44] kjackal: so i hypothesized that either the network connectivity was failing the heartbeat, or our yarn.nodemanager.vmem-check-enabled: false setting was not taking effect when the NM started up (hence failing the resource constraint)
[15:44] rick_h_, We are using the local lxd provider.
[15:44] kjackal: both of which seem to be resolved by rebooting the slave
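If the charm's yarn.nodemanager.vmem-check-enabled: false setting is actually taking effect, the yarn-site.xml rendered on each NodeManager should end up containing a property along these lines (a sketch; where that file lives depends on how the charm lays out the Hadoop config):

    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>

Checking that file on an affected slave before and after the reboot would help separate the two theories: a missing or late property points at the vmem limit, while a correct property points back at the heartbeat/network side.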
[15:45] geetha: what does 'juju version' say?
[15:48] kwmonroe: Tried with beta-15, beta-18 and now using juju 2.0-rc1.
[15:50] ok geetha, but what does the actual output return? for example, mine is:
[15:50] $ juju version
[15:50] 2.0-beta18-xenial-amd64
[15:51] It's 2.0-rc1-xenial-s390x
[15:52] cool geetha -- i just wanted to double check the series and arch were xenial-s390x
[15:53] hi rick_h_ /me searches backscroll for context of 'the charm'
[15:54] ok kevin :)
[15:55] geetha: i'm not sure what would be causing a significant slowdown on s390x vs x64. 1.5GB is a really large resource though.. how long has the 'juju attach' been running?
[15:57] rick_h_, i can't tell where the code repo is from the cs: charm pages for plumgrid, hence not able to view authors/contributors. sorry i don't have more info on that.
[15:58] hey dooferlad -- you had the good insights into resources being slow prior to beta-12 (https://bugs.launchpad.net/juju/+bug/1594924). can you think of any reason why geetha's 1.5GB resource would take significantly longer to attach on s390x vs x86? (both using 2.0-rc1 in lxd)
[15:58] Bug #1594924: resource-get is painfully slow <2.0>
[16:02] I have 5 resources to be attached as part of the WAS deployment, 1.5GB is the largest one. it's taking more than 15 min, and the other resources are around 1 GB. In total it's taking around 1 hr for deployment.
[16:02] geetha: and how long does the same deploy typically take on x86?
[16:06] On x86, I have tried with beta-18 and it's taking around 20 min for deployment.
=== CyberJacob is now known as zz_CyberJacob
[16:09] Hi beisner: the value from install_keys is actually handled by charmhelpers. But passing a value gives an error when setting it using a yaml file or using the key file itself. It seems like either there might be another way of setting the value, or charmhelpers should have a better way of handling this.
[16:10] for a multi-line value, i usually use a yaml file to set a config, but that isn't working in the present case.
[16:12] gotcha geetha.. so 1.5GB in 15 minutes is roughly 13Mbps. perhaps the s390x is not tuned very well to xfer that much data over the virtual network or to dasd. can you benchmark the s390x as rick_h_ suggested? perhaps try to do a normal scp of the local resource to the controller's /tmp directory. you can find the controller ip with "juju show-status -m controller", and then "scp ubuntu@<controller-ip>/tmp"
[16:12] junaidali, unfortunately, i'm not familiar with those specific charms or using that config option with them.
[16:13] thanks beisner, np. I will try again to figure out a way.
[16:14] geetha: forgot a colon in that scp.. it would be something like "scp resource.tgz ubuntu@1.2.3.4:/tmp"
[16:14] junaidali, i see install_keys is a string type option, so it should be usable in a bundle yaml by passing a string value. what that string value should be, i'm not sure.
[16:19] beisner, the key usually looks like this one http://paste.ubuntu.com/23212091/ but fails at this line in charmhelpers http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/fetch/__init__.py#L120
[16:20] ok kevin, let me try that.
[16:23] junaidali, so are you saying this doesn't work? http://paste.ubuntu.com/23211872/
[16:24] yes, that was the juju get command output
[16:26] junaidali, is there a specific error you're seeing?
[16:27] let me share the output with you
[16:28] http://paste.ubuntu.com/23212121/
[16:28] beisner: http://paste.ubuntu.com/23212121/
[16:29] This feels like a quoting problem
[16:29] stub: if you are around do you know the proper way to send a GPG key for charmhelpers.fetch.configure_sources?
[16:30] the yaml load is choking on the ':' in "Version: GnuPG v1.4.11 (GNU/Linux)"
[16:38] junaidali: I can tell you add_source has had much more developer attention than configure_sources. It might be a big ask but moving to add_source is probably the best option. http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/fetch/ubuntu.py#L212
[16:40] meaning using add_source directly rather than configure_sources.
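A rough sketch of what stub is suggesting: call add_source() from the charm code with the raw key text, so the key never passes through the YAML parsing that configure_sources() does. The option names and error handling here are illustrative, and this assumes the config options hold a single source/key pair rather than YAML lists:

    from charmhelpers.core import hookenv
    from charmhelpers.fetch import add_source, apt_update

    def setup_sources():
        config = hookenv.config()
        # pass the key text straight through; add_source() feeds a PGP key
        # block to apt-key itself, so no yaml.safe_load is involved
        add_source(config['install_sources'], config['install_keys'])
        apt_update(fatal=True)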
=== frankban is now known as frankban|afk
[16:54] hello. i'm trying to do juju bootstrap --upload-tools using a xenial image but get connection refused on port 22
[16:54] google turned up some old posts with people having similar issues, but i haven't found any solution
[16:55] i can see the container is running, but bootstrap can't auth using ssh key-auth. any suggestions?
=== redir-afk is now known as redir
[19:05] @kjackal Per your comment on https://github.com/juju-solutions/layer-apache-kafka/pull/13 I realized that I did it the other way for Apache Zeppelin, and I'm leaning toward changing Apache Kafka to have a single, arch-independent resource, since we expect it to work on most platforms. If you want to deploy to a platform that requires a custom build for some reason, then you would have to attach it at deploy time. Seem reasonable?
[19:05] kwmonroe: ^
[19:06] Downside is if we find a platform that we *do* want to support that always requires a custom build, but the upside is that we are more optimistic about platforms and open it up to deploy on, e.g., Power
[20:18] ok cory_fu
[20:18] cory_fu: sorry for the delay, I am in the scala world
[20:40] kjackal: cory_fu and i chatted about layer-hadoop-client (https://github.com/juju-solutions/layer-hadoop-client/pull/10). i think we've agreed that the silent flag is no longer needed, and layer-hadoop-client will *only* report status when deploying the 'hadoop-client' charm. since you authored the 'silent' flag in the first place, care to weigh in?
[20:41] ha, I already weighed in by merging it. :p If you have a -1, we can talk about reverting it
[20:42] cory_fu: kwmonroe: looks good! Thank you
[20:42] lol cory_fu.. i think the only place we'd be exposed is if we had a charm that included layer-hadoop-client and did not set the 'silent' flag. that charm might rely on hadoop-client to set the 'waiting for plugin' status
[20:42] Except that it should now be optional anyway, so it's moot
[20:44] your optimism inspires me
[20:45] (and i was talking about plugin, not java, which is not optional for hadoop-client)
[20:45] :)
[20:45] Oh, the plugin
[20:45] Huh
[20:45] Yeah, that's probably going to bite us.
[20:45] well, i did some scouring, and all the charms that include hadoop-client have something like this: https://github.com/juju-solutions/layer-apache-flume-hdfs/blob/master/reactive/flume_hdfs.py#L8
[20:45] But I think those charms should be fixed. I think status messages should really only be set in the charm layer
[20:46] so they're doing their own blocking if/when hadoop isn't ready
[20:46] Well, that's nice
[20:46] and when i say "i did some scouring", i mean i grepped the 5 charms on my laptop.
[20:46] but i'm sure the other 50 are fine
[20:47] optimism.
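The pattern kwmonroe is pointing at looks roughly like the sketch below: the charm layer sets its own blocked status until the hadoop plugin relation is ready, instead of relying on layer-hadoop-client to do it. State names and messages here are illustrative, not copied from flume_hdfs.py:

    from charms.reactive import when, when_not
    from charmhelpers.core import hookenv

    @when_not('hadoop.joined')
    def missing_hadoop():
        # the charm layer reports its own status rather than
        # depending on layer-hadoop-client's message
        hookenv.status_set('blocked', 'waiting for relation to hadoop plugin')

    @when('hadoop.ready')
    def hadoop_ready(hadoop):
        hookenv.status_set('active', 'ready')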
=== natefinch is now known as natefinch-afk
[21:04] struggling to bootstrap, log here: http://paste.ubuntu.com/23213182/
[21:20] anyone around who can explain why quickstart gives me an error that it can't find the default model, even though it's clearly there? running 16.04. just trying to install landscape-scalable
[21:23] just end up with juju-quickstart: error: environment land-test:admin@local/default not found
[21:24] bueller?
[21:24] and when i first ran the command i got juju-quickstart: error: error: flag provided but not defined: -e
[21:25] surtin: you don't need quickstart, just `juju deploy landscape-scalable`
[21:26] theoperator: it looks like juju is having trouble talking to the controller -- see the error connecting to port 22 in the first few lines of your log. If you do "lxc list", what do you get in response?
[21:26] alright
[21:26] theoperator@hype:~$ lxc list
[21:26] +-------+---------+---------------------+---------------------------------------------+------------+-----------+
[21:26] | NAME  | STATE   | IPV4                | IPV6                                        | TYPE       | SNAPSHOTS |
[21:26] +-------+---------+---------------------+---------------------------------------------+------------+-----------+
[21:26] | admin | RUNNING | 192.168.0.36 (eth0) | fd00:fc:8df3:eb62:216:3eff:feae:96f2 (eth0) | PERSISTENT | 0         |
[21:26] +-------+---------+---------------------+---------------------------------------------+------------+-----------+
[21:26] | vpn   | RUNNING | 192.168.0.29 (eth0) | fd00:fc:8df3:eb62:216:3eff:fef0:29c9 (eth0) | PERSISTENT | 0         |
[21:26] +-------+---------+---------------------+---------------------------------------------+------------+-----------+
[21:26] theoperator@hype:~$
[21:27] those are both containers I set up separately
[21:28] hmm nope, says the charm or bundle not found. guess the instructions on the page are outdated or something
[21:29] theoperator: It looks like juju wasn't able to create the container. Are you running out of disk space? Do you have any interesting messages in /var/log/lxd/lxd.log?
[21:29] surtin: which page?
[21:29] https://help.landscape.canonical.com/LDS/JujuDeployment16.06
[21:30] surtin: dpb1 is the landscape guy, he might be able to help. i'm not sure which bundle you should use
[21:30] surtin: the only one i see in the store is this https://jujucharms.com/q/landscape?type=bundle
[21:31] yeah i saw that as well
[21:31] surtin: http://askubuntu.com/search?q=deploy+landscape+bundle+andreas
[21:31] blah
[21:31] surtin: http://askubuntu.com/questions/549809/how-do-i-install-landscape-for-personal-use
[21:31] that one
[21:32] notice the higher rated answer. talks about using juju as well.
[21:33] nothing interesting in the lxd log. all just lvl=info stuff
[21:33] ok will try it out
[21:34] ty
[21:34] i don’t think I’m out of disk space, i don’t have any quotas setup and this is a clean box with ~2TB free
[21:34] surtin: are you using juju 1, or juju 2? (juju --version should tell you).
[21:35] the last error in the bootstrap output is 2016-09-21 21:34:20 ERROR cmd supercommand.go:458 new environ: creating LXD client: Get https://192.168.0.1:8443/1.0: Unable to connect to: 192.168.0.1:8443
[21:35] where’s it getting that 192.168.0.1 from?
[21:35] that’s not the address of the host
[21:37] it appears to have the container running at one point, as it runs several apt commands inside it
[21:39] theoperator: it looks like your lxd machines are set up to use your physical network, rather than using a private network. Is that something you set up when you set up the other lxd machines?
[21:41] We're moving beyond my grade level as far as juju and lxd internals go, but juju may be getting confused by that network config.
[21:45] sorry, wifi failed
[21:46] theoperator: no worries. Did you see my last message? (I was puzzled to see lxd machines talking directly to what looks like your physical network, rather than using virtual ips, and asked whether that was something that you had set up when you set up the other containers.)
[21:47] yep, the containers are bridged to the host’s network
[21:48] theoperator: that might be what's breaking things. We're unfortunately moving beyond the realm of things that I know a lot about, but juju typically expects to be able to herd its machines about on a private/virtual network, and then selectively expose them to the real world.
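A few sanity checks on the LXD side that may be worth running here, assuming the stock xenial lxd packaging and that juju's containers pick up their NIC from the default profile (which may not hold for a customised setup):

    lxc profile show default              # which bridge ("parent") new containers attach to
    ip addr show lxdbr0                   # is lxdbr0 present and addressed?
    sudo dpkg-reconfigure -p medium lxd   # re-run the lxd-bridge setup if lxdbr0 is missing

juju's local lxd provider generally expects a NATed lxdbr0-style bridge that it manages itself, so a default profile pointing at a bridge on the physical 192.168.0.x network would fit the symptoms above.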
[21:48] I could be very wrong, though ... you might try bootstrapping again, and watching "lxd list", to see if the machine gets created.
[21:49] theoperator: another possibility is that you've set up your router to block port 22 -- do you have strict security rules set up for your local network?
[21:50] *lxc list, I mean.
[21:51] running bootstrap again, the container is up right now
[21:51] theoperator@hype:~$ lxc list
[21:51] +---------------+---------+---------------------+---------------------------------------------+------------+-----------+
[21:51] | NAME          | STATE   | IPV4                | IPV6                                        | TYPE       | SNAPSHOTS |
[21:51] +---------------+---------+---------------------+---------------------------------------------+------------+-----------+
[21:51] | admin         | RUNNING | 192.168.0.36 (eth0) | fd00:fc:8df3:eb62:216:3eff:feae:96f2 (eth0) | PERSISTENT | 0         |
[21:52] +---------------+---------+---------------------+---------------------------------------------+------------+-----------+
[21:52] | juju-f71b38-0 | RUNNING | 192.168.0.41 (eth0) | fd00:fc:8df3:eb62:216:3eff:fe20:8e53 (eth0) | PERSISTENT | 0         |
[21:52] +---------------+---------+---------------------+---------------------------------------------+------------+-----------+
[21:52] | vpn           | RUNNING | 192.168.0.29 (eth0) | fd00:fc:8df3:eb62:216:3eff:fef0:29c9 (eth0) | PERSISTENT | 0         |
[21:52] +---------------+---------+---------------------+---------------------------------------------+------------+-----------+
[21:52] theoperator@hype:~$
[21:52] now got the same error as before and the container is gone again
[21:53] theoperator: Please use pastebin.ubuntu.com for more than 2 lines
[21:53] my apologies
[21:53] No problem - it just makes it a bit noisy for those monitoring a lot of channels
[22:01] theoperator: hmmm ... my guess is that juju is having trouble with the network settings, but I'm not sure quite what. If I were troubleshooting, my next steps would be to ssh into the container and check out the logs (check /var/log/juju).
[22:04] theoperator: failing that, you might try posting to the mailing list -- someone who knows more than I do should get back to you.
[22:05] ok, thanks Pete
[22:07] you're welcome! Sorry that I didn't have an immediate fix for you.
[22:24] petevg: juju 2
[22:37] surtin: you def. want to avoid quickstart then. It's a juju 1 tool. Sounds like those instructions are in need of an update ...
[22:52] yeah seems that way
[22:58] definitely going to get to play with some Juju stuff @ JPL and Darpa after a few meetings this week
[22:58] might also get some buy-in for JPL to extend marathon for LXC/LXD support, although that's a longer shot
[23:47] magicaltrout: awesome news :-)
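Following up on petevg's suggestion at 22:01 to look inside the container: a rough sketch of pulling the relevant logs while the bootstrap container is still up (the container name is whatever 'lxc list' shows at the time; juju-f71b38-0 is just the one from the listing above):

    lxc exec juju-f71b38-0 -- tail -n 100 /var/log/cloud-init-output.log
    lxc exec juju-f71b38-0 -- tail -n 100 /var/log/juju/machine-0.log

If the failed container keeps being destroyed before there is time to inspect it, adding --keep-broken to the bootstrap command should leave it in place for debugging.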