[01:06] <madhu> Hey, I am using juju 2.0-beta18. I have an lxc container on which I have installed maas, and which has two users: ubuntu and maas. Maas detected a physical machine which I had pxe booted. When I deploy that physical machine using maas, it gets deployed, but the deploy process does not copy the ssh key of the maas user; it copies the ssh key of the ubuntu user. Any reason for this behavior?
[01:13] <madhu> On the newly deployed physical machine, I want to create VMs and deploy them through maas. Commissioning of those VMs fails with the error "Failed to login to virsh console". I suspect it could be because of keys. Does anyone have an idea about this issue?
[09:56] <hoenir> Is 1 GB the lowest amount of RAM juju expects when provisioning a machine? Can the RAM be expressed in MB?
[10:09] <hoenir> anyone?
[13:30] <jamespage> marcoceppi, charmhelpers 0.9.1 add_source has broken support for UCA and proposed pockets
[13:33] <jamespage> marcoceppi, https://code.launchpad.net/~james-page/charm-helpers/fix-ca-sources/+merge/305953
[13:41] <kjackal> Hello Juju World
[14:29] <natefinch> easy review anyone? +12 -12 https://github.com/juju/version/pull/2
[14:30] <natefinch> oops, hang on, there was a problem when I merged
[14:47] <natefinch> ok, try again, easy review anyone? +12 -12 https://github.com/juju/version/pull/2
[14:48] <natefinch> oops wrong channel
[15:24] <natefinch> marcoceppi: do we have a way to grep all bundles in the store?  I'm wondering how often people actually use cpu-power
[15:25] <rick_h_> natefinch: will have to script it with the api/client.
[15:25] <rick_h_> natefinch: https://api.jujucharms.com/v4/wiki-simple/archive/bundle.yaml
[15:26] <rick_h_> basically have to search for all bundles, build the yaml file link, and then fetch it/grep against it
[15:26] <natefinch> rick_h_: are there docs for the API?
[15:26] <rick_h_> natefinch: https://api.jujucharms.com/v4/search?type=bundle&limit=300
[15:27] <rick_h_> natefinch: yes, sec. https://github.com/juju/charmstore/tree/v4/docs
[15:28] <rick_h_> uiteam, where's the link to the GH project for the blues client for making api calls to the charmstore? /me is blind today
[15:29] <rick_h_> natefinch: ^ might be of use as well if I could find it
[15:29] <rick_h_> natefinch: ah here we go https://github.com/juju/theblues
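The workflow rick_h_ describes (search for all bundles, build each bundle.yaml link, fetch and grep it) could be sketched in Python roughly as below. The endpoint paths are taken from the URLs above; the shape of the search response (a `Results` list of objects with an `Id` field) is an assumption about the v4 API, and `cpu-power` is just natefinch's example search term.

```python
import json
import urllib.request

SEARCH_URL = "https://api.jujucharms.com/v4/search?type=bundle&limit=300"


def bundle_yaml_url(bundle_id):
    """Build the archive URL for a bundle's bundle.yaml from its store id."""
    # e.g. "wiki-simple" -> "https://api.jujucharms.com/v4/wiki-simple/archive/bundle.yaml"
    return "https://api.jujucharms.com/v4/%s/archive/bundle.yaml" % bundle_id


def grep_bundles(needle, fetch=urllib.request.urlopen):
    """Yield ids of bundles whose bundle.yaml mentions `needle`.

    `fetch` is injectable so the function can be exercised without
    network access; by default it hits the real charmstore API.
    """
    results = json.load(fetch(SEARCH_URL))["Results"]
    for result in results:
        bundle_id = result["Id"].replace("cs:", "", 1)
        try:
            text = fetch(bundle_yaml_url(bundle_id)).read().decode("utf-8")
        except Exception:
            continue  # some entries may 404 or lack a bundle.yaml
        if needle in text:
            yield bundle_id


if __name__ == "__main__":
    for bid in grep_bundles("cpu-power"):
        print(bid)
```

If the result set is larger than the `limit=300` cap, the search would need to be paged; the docs linked above cover the exact parameters.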
[15:58] <plars> I'm not sure if this is a snapd problem or a juju problem, but I'm deploying a xenial instance right now and it keeps getting stuck. It appears to still be stuck in cloud-init, running 'snap booted'
[17:07] <mbruzek> Hi #juju we need some help about availability zones. What is the constraint to deploy to a specific availability zone in Juju 2.0?
[17:29] <coreycb> hello, I have a juju/maas 2.0 deployment with some services in lxd containers, but containers on one machine can't reach containers on another. should juju be using a bridge other than lxdbr0?
[17:30] <kjackal> hi mbruzek I got a question on jujuresources, can you spare a few minutes?
[17:37] <lazyPower1> coreycb: so, i dont know how automagic this needs to be, but i do know that if you configure the fan, and have your lxd containers spun up on that fan bridge, you will get the networking you seek
[17:37] <lazyPower1> dimitern would have more info on that, and i thought he said that juju does this automatically now... but I may have misunderstood
[17:38] <lazyPower1> kjackal: mbruzek is out today on site @ a customer engagement. I can lend a hand, whats up?
[17:38] <kjackal> yeap thanks lazyPower1
[17:39] <kjackal> so here's the thing: I have this PR from cory_fu that adds optional juju resources to the apache-kafka charm https://github.com/juju-solutions/layer-apache-kafka/pull/13
[17:39] <coreycb> lazyPower1, thanks
[17:40] <kjackal> lazyPower1: The idea is that we will be doing a resource_get, and if that fails we would fall back to what we had before (fetch the kafka binary from our own s3 bucket)
[17:41] <kjackal> lazyPower1: that seems to work on a juju 2.0 deployment, but on juju 1.25 I get this exception: http://pastebin.ubuntu.com/23187371/
[17:41] <admcleod_> kjackal: et al, any idea why your slaves might think the resourcemanager daemon is running on the namenode unit?
[17:42] <admcleod_> kjackal: also have you added HA capability to yarn?
[17:42] <kjackal> admcleod_: that is on the bigtop slaves?
[17:42] <admcleod_> yes
[17:43] <kjackal> admcleod_: HA is under review
[17:43] <admcleod_> kjackal: well..
[17:43] <admcleod_> kjackal: this is why my terasort isn't working: https://pastebin.canonical.com/165768/
[17:43] <admcleod_> is it under review but enabled in the xenial charms?
[17:44] <kjackal> where do you get the charms from?
[17:44] <kjackal> admcleod_: do you have a bundle you can share?
[17:44] <admcleod_> hang on
[17:45] <lazyPower1> kjackal: - feedback left
[17:45] <lazyPower1> kjackal: - and yes, when we implement resources, we're locking the charms to 2.0+ only. resources in metadata cause 1.25 to panic on deployment
[17:46] <admcleod_> kjackal: https://github.com/ubuntu-openstack/bopenstack/blob/master/juju-bundles/spark-hadoop-processing.yaml
[17:46] <kjackal> so lazyPower1, if we move forward with that PR we will break juju 1.25 charm deployments, right?
[17:47] <lazyPower1> correct, unless you catch that NotImplementedError charm-tools is surfacing for you
[17:47] <lazyPower1> updated my review comments to reflect that
[17:48] <kjackal> thank you
[17:48] <kjackal> lazyPower1: ^
[17:48] <lazyPower1> np
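The fallback pattern kjackal and lazyPower1 settle on (try `resource_get`, catch the NotImplementedError that pre-2.0 Juju surfaces, fall back to the charm's old download path) could be sketched as a small helper. The `resource_get` and `fetch_from_bucket` callables here are hypothetical stand-ins for the charm's real hookenv call and its existing bucket download code, injected so the logic is testable in isolation:

```python
def fetch_kafka(resource_get, fetch_from_bucket, name="kafka"):
    """Prefer a Juju 2.0 resource; fall back to the pre-resources
    download path when resources are unavailable or empty.

    `resource_get` and `fetch_from_bucket` are stand-ins for the
    charm's own helpers (e.g. hookenv.resource_get and the existing
    bucket fetch).
    """
    try:
        path = resource_get(name)   # Juju 2.0+: resolves via resource-get
    except NotImplementedError:
        path = None                 # Juju 1.25: no resources support at all
    if path:                        # resource_get may also return False/empty
        return path
    return fetch_from_bucket(name)  # pre-2.0 behaviour
```

Per lazyPower1's note above, simply declaring resources in metadata still breaks 1.25 deployments, so this fallback only helps charms that keep resources out of metadata on 1.25.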
[17:48] <kjackal> admcleod_: I see you get the charms from bigdata-dev where HA should be available
[17:48] <kjackal> admcleod_: checking!
[17:49] <kjackal> admcleod_: going to deploy the bundle, it will take me some time
[17:49] <admcleod_> the relations arent quite right
[17:49] <admcleod_> anyway, -1 from me for resourcemanager HA review :P
[17:50] <kjackal> admcleod_: you can make this official here: https://canonical.leankit.com/Boards/View/112674289/123088605
[17:51] <admcleod_> kjackal: which other unit is it supposed to be running on?
[17:52] <kjackal> admcleod_: what is "it"?
[17:53] <admcleod_> kjackal: resourcemanager
[17:53] <kjackal> admcleod_: the resourcemanager should only be running on the hadoop-resourcemanager
[17:54] <kjackal> it is the namenode that is supposed to be running on two of the three namenode units
[17:54] <admcleod_> kjackal: what?
[17:55] <kjackal> the resourcemanager should only be running on the hadoop-resourcemanager
[17:55] <kjackal> the namenode (primary and standby) should be running on the namenode units
[17:55] <kjackal> admcleod_: ^
[17:55] <admcleod_> kjackal: lets start again. are you making resourcemanager HA?
[17:56] <kjackal> admcleod_: no. Only the namenode
[17:56] <admcleod_> kjackal: so why did my resourcemanager go into standby mode as if it had been configured for HA?
[17:57] <magicaltrout> kjackal: did you break things?
[17:58] <magicaltrout> bloody hacker
[17:58] <kjackal> magicaltrout: again, I do not break things, I obliterate!
[17:58] <magicaltrout> lol
[17:59] <admcleod_> kjackal: something is definitely not right, because: "Cannot run -getServiceState when ResourceManager HA is not enabled"
[17:59] <kjackal> admcleod_: I do not know why this happened, it might be that bigtop decides that you cannot have an HDFS in HA without a RM in HA?
[17:59] <kjackal> admcleod_: Ok, give me some time to finish with the deployment
[18:00] <admcleod_> kjackal: theres no bigtop setting like that as far as i remember
[18:00] <admcleod_> kjackal: also no one really uses resourcemanager HA anyway
[18:01] <kjackal> but even so, without a zookeeper we should not be messing with HA anywhere
[18:01] <kjackal> strange, I am on it
[18:02] <admcleod_> kjackal: ok. im probably going to eow in about 2 hours
[18:22] <kjackal> admcleod_: ok, deployed
[18:22] <kjackal> admcleod_: lets see the tera sort now
[18:28] <magicaltrout> *kaboom*
[18:28] <rick_h_> soooooo, is that like bad?
[18:29] <lazypower> What'd i miss?
[18:29] <kjackal> magicaltrout: what was that? Coconut falling?
[18:30] <lazypower> i'm fiddling with the znc charm getting my bouncer stood back up in a locked model
[18:30] <lazypower> word to the wise, ensure you're in the correct model context when you execute bundletester, OR, ensure you've locked the models you care about
[18:31] <lazypower> lest you find yourself in my position of redeploying your apps... (from a mistake made nearly 2 weeks ago)
[18:31] <kjackal> admcleod_: teragen finished ok. Terasort takes a bit longer because it runs on a single machine with 1GB of ram
[18:31] <kjackal> admcleod_: also I am running everything on canonistack
[18:39] <magicaltrout> a mistake lazypower,  from you? surely not?! :)
[18:39] <lazypower> magicaltrout: story of my life :)
[19:12] <admcleod_> kjackal:
[19:12] <admcleod_> kjackal: but why is the resourcemanager going into standby
[19:13] <kjackal> admcleod_: No idea! And I cannot reproduce the issue
[19:13] <kjackal> do you think you could give me access to the machine where the slave is?
[19:14] <kjackal> I would be interested in the yarn-site.xml on the RM and slave
[19:14] <kjackal> admcleod_: ^
[19:16] <kjackal> admcleod_: are you colocating any services?
[20:30] <jcastro> balloons: hey so, now that client is unblocked from running on fedora
[20:30] <jcastro> will I get a new client in --edge at some point or do I need to wait for release?
[20:31] <balloons> jcastro, you should already have it. Edge gets a daily build about midday
[20:32] <balloons> jcastro, I'd be very curious to know if that unblocks arch linux ;p
[20:32] <balloons> or perhaps fedora
[20:32] <jcastro> ok going to give it a shot
[20:36] <jcastro> balloons: works so far!
[20:53] <magicaltrout> jcastro: your form is hidden for non canonical people
[20:54] <jcastro> fixing, thanks!
[20:54] <jcastro> try now pls
[20:54] <magicaltrout> much improved
[20:55] <jcastro> that picture is fresh yo
[20:55] <magicaltrout> luckily i'm not in it
[22:00]  * marcoceppi fires up photoshop to get magicaltrout into the photo
[22:43] <magicaltrout> booo
[22:46] <valeech> lazyPower: how do you lock a model to protect it?
[23:28] <lazypower> valeech: juju disable-command all "model locked down"
[23:28] <lazypower> valeech: juju disable-command --help  and juju enable-command --help respectively