[09:39] <junaidali> Hi everyone, the get_hostname function in charmhelpers (charmhelpers/contrib/network/ip.py) is returning the hostname as "eth4.juju-5a02c3-4-lxd-18.maas" instead of "juju-5a02c3-4-lxd-18.maas"
[09:40] <junaidali> where eth4 is the interface of the IP that is provided to get_hostname
[09:41] <junaidali> rabbitmq charm is failing on the config-changed hook as the above mentioned issue writes the wrong hostname in /etc/rabbitmq/rabbitmq-env.conf
[09:46] <junaidali> anyone faced this issue?
[09:57] <Spaulding> gday!
[09:59] <Spaulding> is there any easy way to copy a file to a juju charm? or do i basically need to include it into hooks/reactive?
[10:04] <Spaulding> there is render() for templates... should be enough for me.
[10:25] <magicaltrout> Spaulding: whats its purpose?
[10:25] <magicaltrout> you can just include stuff in the charm package for templates and stuff
[10:25] <magicaltrout> or you have resources for installable packages etc
[10:29] <junaidali> the interface-appended hostname that i mentioned earlier is a new feature in MAAS 2.0, but rabbitmq and many other charms rely on reverse dns lookups to determine the hostname: https://bugs.launchpad.net/maas/+bug/1584569
[10:29] <mup> Bug #1584569: maas 2 adds multiple DNS entries for nodes <canonical-bootstack> <MAAS:Won't Fix> <https://launchpad.net/bugs/1584569>
[10:30] <junaidali> this issue seems to be pretty critical for the charms, are there any workarounds available?
[10:36] <magicaltrout> probably junaidali but the openstackers in this channel appear to be sleeping
[10:36] <magicaltrout> i'm not sure if jamespage can point you to someone who can help you
[10:38] <jamespage> junaidali, magicaltrout: this is a tricky one, which we 'fixed' and then 'unfixed' as it completely made juju 1.25.x unreliable
[10:38] <magicaltrout> lol
[10:38] <jamespage> the key problem here is juju makes no guarantee to charm authors about resolvability of the local hostname to something other than 127.0.0.1
[10:39] <jamespage> so on different providers, you get different behaviour
[10:39] <jamespage> RabbitMQ kinda fails to understand IP addresses
[10:39] <jamespage> so we have to use hostnames - the ip helper attempts to resolve the local IP to a real hostname - under MAAS 1.9 this works ok (a reverse-lookup of IP == single hostname)
[10:40] <jamespage> but for 2.0 it results in two matches
[10:42] <junaidali> thanks jamespage, I completely understand now. is it reliable to get the second-to-last part of a hostname split on '.' instead of getting the first one?
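[Editor's note] One hedged way to implement junaidali's idea — dropping a leading interface label rather than trusting the first reverse-DNS match — could look like the sketch below. The helper name and the interface-name regex are illustrative, not part of charmhelpers:

```python
import re

# Hypothetical helper, NOT part of charmhelpers: strip a leading
# interface label (eth4, ens3, bond0, ...) from an FQDN returned by
# a reverse DNS lookup under MAAS 2.0.
IFACE_RE = re.compile(r'^(eth|en|bond|br|vlan)\w*$')

def strip_iface_label(fqdn):
    """Drop the first label of the FQDN if it looks like a NIC name."""
    labels = fqdn.split('.')
    if len(labels) > 2 and IFACE_RE.match(labels[0]):
        return '.'.join(labels[1:])
    return fqdn
```

With this, `strip_iface_label('eth4.juju-5a02c3-4-lxd-18.maas')` gives `'juju-5a02c3-4-lxd-18.maas'`, while names without an interface label pass through unchanged.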
[10:54] <jamespage> junaidali, hmm
[10:57] <Spaulding> magicaltrout: i just want to include some of my config files
[10:58] <magicaltrout> ah right Spaulding anything you put into the charm directory will be shipped when you run charm build
[10:59] <magicaltrout> but you have a few options as well depending how they work, you could include them as resources still, that way people can overload them
[10:59] <magicaltrout> similarly, you could keep the config options in config.yaml
[10:59] <magicaltrout> and have users set them then populate a template when the install hook runs
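[Editor's note] The populate-a-template-at-install idea magicaltrout describes can be reduced to a stdlib-only sketch; real reactive charms typically use charmhelpers' `render()` with a Jinja2 template in `templates/`, and the config keys below are made up for illustration:

```python
# Minimal stand-in for the template-render step: substitute config
# option values into a template when the install hook runs. Real
# charms would use charmhelpers.core.templating.render() instead.
from string import Template

CONF_TEMPLATE = Template(
    "listen_port = $port\n"
    "log_level = $level\n"
)

def render_config(values):
    """Return the rendered config text for the given option values."""
    return CONF_TEMPLATE.substitute(values)
```

The hook would then write the returned text to the target path, e.g. `render_config({'port': 8080, 'level': 'info'})`.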
[11:04] <anrah> I have a problem with juju 2.0 and ipv6 addresses: on 1.25, when prefer_ipv6 was true and I queried the hookenv.unit_private_ip method, it returned an ipv6 address
[11:04] <anrah> now with juju 2.0 it returns an ipv4 address, and that is a problem as some services I am deploying bind only to an ipv6 address
[12:21] <Spaulding> magicaltrout: i figured out different approach
[12:21] <Spaulding> imo the simplest one
[12:21] <Spaulding> I'll tar all of the files that I need
[12:21] <Spaulding> and I'll just fetch them and untar them
[12:22] <Spaulding> so I should have most of the files in the right place
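[Editor's note] Spaulding's fetch-and-untar plan, sketched in Python; the paths are examples and the download step (urllib/wget/curl) is elided:

```python
import tarfile

# Unpack a fetched tarball of config files under target_root,
# preserving relative paths, so e.g. etc/myapp/myapp.conf lands in
# <target_root>/etc/myapp/myapp.conf. Paths here are examples only.
def unpack_config(tarball_path, target_root):
    with tarfile.open(tarball_path, 'r:gz') as tf:
        tf.extractall(target_root)
```

Called from an install hook with `target_root='/'`, this puts the packed files in the right place, as described above.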
[13:26] <magicaltrout> Spaulding: by fetch you mean fetch over the web?
[13:27] <magicaltrout> in which case, thats basically what Resources were implemented to replace, for offline deployments behind firewalls etc
[13:27] <petevg> kwmonroe: left a comment on https://github.com/apache/bigtop/pull/148. I get an error when deploying bundle-local.yaml :-/
[13:38] <Spaulding> magicaltrout: yes
[13:47] <magicaltrout> Spaulding: i'd consider using resources then imho
[13:48] <magicaltrout> also you can version resources etc which keeps stuff working as charm versions change
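[Editor's note] For reference, the resource mechanism magicaltrout mentions is declared in the charm's metadata.yaml; this fragment uses illustrative names, not any real charm's:

```yaml
# metadata.yaml fragment (resource name and filename are examples)
resources:
  site-config:
    type: file
    filename: config.tar.gz
    description: Tarball of site-specific config files.
```

The operator attaches a version with `juju attach mycharm site-config=./config.tar.gz`, and the charm resolves the local path of the attached file inside a hook via the `resource-get` hook tool.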
[13:50] <Spaulding> magicaltrout: ok, I'll try to check that
[14:17] <cory_fu> Has anyone taken a look at the charm testing libraries that free put up on the list yesterday?  bcsaller_, petevg?
[14:19] <petevg> cory_fu: I'm behind on email right now. Looking ...
[14:27] <petevg> cory_fu: which list was it? (I may be missing a subscription ...)
[14:28] <cory_fu> petevg: juju-dev.  I'm going to request that it be cross-posted to the main juju list
[16:59] <aisrael> Anyone have ideas on how to debug Juju 2 rc3 when lxd allocation hangs forever? Some machines launched fine, and retry-provisioning the machine doesn't do anything.
[17:00] <lazyPower> aisrael only lxd images launched via juju? assuming lxd launch ubuntu works as expected?
[17:01] <aisrael> lazyPower, Correct. I deployed a bundle with four services. Two deployed fine. The other two didn't, and those machines aren't created in lxd either (so it's not cloud-init)
[17:02] <lazyPower> series were the same across the board?
[17:02] <aisrael> Ohh. You might be on to something.
[17:03] <aisrael> Two trusty, two xenial.
[17:03] <aisrael> Neither trusty machine has launched
[17:03] <lazyPower> i think there's a bug on this
[17:03]  * lazyPower digs
[17:03] <aisrael> ding ding ding
[17:03] <aisrael> error: Failed to clone the filesystem
[17:03] <aisrael> trying to launch a trusty image via lxd
[17:04] <lazyPower> happy to help add some context :) i cant find the bug
[17:04] <aisrael> Much appreciated. :) I was testing lxd via snap earlier but had to roll back, so that might be related.
[17:15] <jgriffiths> Has anybody seen the openstack-base bundle hang with four bare-metal (MAAS) servers (mysql, ceph-radosgw, and rabbitmq-server are "waiting for machine")? I know this isn't much information, but I'm guessing this is a common problem with a simple solution
[17:16] <rick_h_> jgriffiths: try a juju status --format=yaml and see if there's better info in the machine section
[17:19] <jgriffiths> Thanks. That gives much better information than a 'juju status' it seems. "Creating container: failed to ensure LXD image: image not imported!'" and "agent not communicating with server"
[17:21] <jgriffiths> It's only the one server. I'm pretty sure I've put those services on a different machine, but I will rebuild the bundle with different server constraints to see if anything changes.
[17:24] <aisrael> lazyPower, turned out to be a bad image in lxd. I purged 14.04 and re-launched to grab a fresh one.
[17:26] <lazyPower> aisrael nice. glad it was a localized issue and not something bigger. This explains why I couldn't find a bug :D
[17:27] <aisrael> lazyPower, yup. Although, if I could repro it, it'd be nice to handle it a little better.
[17:30] <hallyn> is there a best way to have a juju install charm copy a local directory into the instance?
[17:31]  * hallyn pings rockstar bc why not :)
[17:33] <lazyPower> hallyn the only thing that springs to mind is to package up that local directory, and deliver it as a resource.  Another option would be to use a devicemount if you're on lxd. but thats out of band of juju/charms, and more of a manual exercise.
[17:33] <hallyn> was hoping for an automated 'juju scp' kind of thing
[17:33] <hallyn> triggered from a charm
[17:34] <lazyPower> i'm not sure how your charm would inform your laptop to scp that over without running some kind of daemon
[17:34] <hallyn> i'm not either
[17:34] <lazyPower> i think cory_fu landed some work on dhx, and it has capabilities of attaching stuff to a charm when you enter debug-hooks
[17:34] <lazyPower> which iirc, just went 2.0 compat a couple weeks ago
[17:36] <lazyPower> https://github.com/juju/plugins/blob/master/juju-dhx -- according to the commit history it went 2.0 back in june.   its unofficial so ymmv - known docs for the plugin are here: https://jujucharms.com/docs/1.24/authors-hook-debug-dhx
[17:36] <hallyn> ok resources should work for me i think - thx
[17:37]  * hallyn looks at the plugin real quick
[17:41] <cory_fu> lazyPower: PR is still pending: https://github.com/juju/plugins/pull/70
[17:48] <lazyPower> cory_fu landed
[17:48] <cory_fu> lazyPower: :)
[18:13] <kwmonroe> cory_fu: petevg: we're still busted on lxd (i think this is what pete saw last week):  report: java.net.UnknownHostException: juju-c82a97-0.localdomain
[18:14] <kwmonroe> interestingly...
[18:14] <kwmonroe> ubuntu@juju-c82a97-1:~$ host juju-c82a97-0.localdomain
[18:14] <kwmonroe> Host juju-c82a97-0.localdomain not found: 3(NXDOMAIN)
[18:14] <kwmonroe> ubuntu@juju-c82a97-1:~$ host juju-c82a97-0
[18:14] <kwmonroe> juju-c82a97-0.lxd has address 10.44.139.23
[18:15] <kwmonroe> anyway cory_fu petevg.. i wonder if we can just tweak get_fqdn to use facter hostname instead of facter fqdn:  https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/lib/charms/layer/apache_bigtop_base.py#L479
[18:16] <kwmonroe> the short names seem to be resolvable in all our clouds.  the alternative would be to make .localdomain resolve throughout lxd containers.
[18:18] <petevg> kwmonroe: I think that I'm +1 to refactoring get_fqdn; I don't think that, in the places that we use it, we actually need the fqdn. (We should change the name, though.)
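[Editor's note] A hedged sketch of the tweak being discussed — prefer the short hostname over the FQDN — with a stdlib fallback for machines without facter. The function name is hypothetical, not the actual patch to apache_bigtop_base.py:

```python
import socket
import subprocess

def get_short_hostname():
    """Return the short hostname, preferring facter when available."""
    try:
        out = subprocess.check_output(['facter', 'hostname'])
        return out.strip().decode()
    except (OSError, subprocess.CalledProcessError):
        # facter not installed (or failed): fall back to the stdlib.
        return socket.gethostname().split('.')[0]
```

As noted above, the short name is what resolves in all the clouds discussed, so callers of the current get_fqdn would use this instead.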
[18:39] <kwmonroe> hey frobware, should lxd fqdns be .localdomain or .lxd?
[18:41] <kwmonroe> frobware: i ask because i can't ping one lxd container from another when i do "ping <other>.localdomain", but i can when i do "ping <other>.lxd".  i'm wondering if it's related to https://bugs.launchpad.net/juju/+bug/1623480
[18:41] <mup> Bug #1623480: Cannot resolve own hostname in LXD container <lxd> <network> <juju:Fix Released by dooferlad> <https://launchpad.net/bugs/1623480>
[19:12] <cory_fu> petevg: Comments made on https://github.com/juju-solutions/matrix/pull/1
[19:13] <petevg> cory_fu: thanks. (Don't forget that we have that benefits meeting thingy right now.)
[19:13] <cory_fu> petevg: I'm in it.  :)
[21:30] <bdx> hey whats up everyone?
[21:30] <lazyPowe_> yo yo bdx
[21:30] <bdx> can someone help me explain what my user is experiencing here -> https://s13.postimg.org/ffzttj0if/Screen_Shot_2016_10_12_at_2_29_13_PM.png
[21:30] <bdx> I wasn't aware there was a localhost/localhost provider ...
[21:31] <lazyPower> errr
[21:31] <bdx> lazyPowe_: perfect
[21:31] <bdx> just the guy I was looking for :-)
[21:31] <lazyPower> thats got to be a manual cloud
[21:31] <lazyPower> or something similar custom named
[21:34] <bdx> lazyPower: I thought the cloud name was distinctly defined and displayed separately though. It's the controller name that we have the capability of customizing, right? .... trying to get more info from him now ..
[21:34] <lazyPower> cory_fu - did we land the excludes: key in layer.yaml?
[21:34] <bdx> lazyPower: what have you been deploying the kub stack on?
[21:34] <lazyPower> bdx: manual provider, gce, aws, azure, openstack
[21:35] <bdx> ahh ... so his reply "don’t know if is using lxd…. yes its bootstraped to localhost"
[21:35] <bdx> I'm thinking I'll point him at aws for now
[21:35] <lazyPower> its not compatible enough with LXD to be fully containerized. We've actually just completed an initial round of that work if you're interested in the results: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/70
[21:35] <lazyPower> the yield was: "Not now, but we'll work towards making this a thing"
[21:36] <cory_fu> lazyPower: I don't think it's been released yet
[21:36] <lazyPower> cory_fu: ok, that makes sense then. I was looking through open PR's and the commit history and wasn't able to pin it down
[21:36] <lazyPower> btw, +1'd 2 of your pr's
[21:36] <lazyPower> thanks for cleaning up the debug output and fixing layer-names vs path-names
[21:36] <lazyPower> <3 'preciate you
[22:22] <hallyn> I'm using the kvm local provider - how can I set the default memory limit higher?  i assume environments.yaml
[22:22] <hallyn> but i'm failing to find documentation on the available keys
[22:24]  * hallyn bets rharper or jamespage knows