[05:15] <anrah> Hi guys! A couple of questions regarding the bootstrap phase and deploy. I'm using reactive charms and a private OpenStack cloud. The question is: is there a way to modify the cloud-init file that is run at the bootstrap phase?
[05:17] <anrah> The problem is that I'm using reactive charms and I can't be sure that python3 is installed on the image. The deploys obviously fail at the very beginning, as the first commands run in the deploy phase are python3-related when using reactive charms + charm-helpers
[05:17] <anrah> I know that I can make my own images which include the necessary packages, but is there a way to "hack" the deploy phase and install those packages before anything python3-related runs?
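One workaround for the missing-python3 problem, as a sketch rather than a stock Juju feature: since the charm's `hooks/install` entry point is the first thing Juju runs on the unit, it can be a plain shell shim that installs python3 before handing off to the Python-based hook. The `install.real` name for the original Python hook is hypothetical:

```shell
# Sketch: write a shell shim suitable for use as a charm's hooks/install
# entry point, so python3 is installed before any reactive/charm-helpers
# Python code runs. "install.real" is a hypothetical name for the charm's
# original Python install hook.
cat > /tmp/install-shim <<'EOF'
#!/bin/bash
set -e
# Ensure python3 exists before the reactive framework's Python entry point runs.
if ! command -v python3 >/dev/null 2>&1; then
    apt-get update -q
    apt-get install -y python3 python3-yaml
fi
# Hand off to the real (Python) install hook next to this shim.
exec "$(dirname "$0")/install.real"
EOF
chmod +x /tmp/install-shim
bash -n /tmp/install-shim && echo "shim syntax OK"
```

This only covers the install hook; later hooks are safe because python3 is present by then.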
[05:34] <Shashaa> Hi Everyone, I'm trying to add openstack trove charm atop openstack charm platform
[05:34] <Shashaa> I see that the charm is getting stuck at the following status: trove/2  maintenance  idle   18       10.73.96.174           Installation complete - awaiting next status
[05:35] <Shashaa> Could anyone point me at where to look? Just curious what it is waiting for here
[07:57] <viswesn> How do I back up a running lxc environment and then restore it?
[07:57] <viswesn> Is that possible in Juju?
[07:58] <viswesn> Is there any video specifically about Juju endpoint bindings?
[07:58] <viswesn> If so, please share the link
[12:23] <jrwren> anrah: there is no way to customize the generated cloud-init data. I've always wanted that feature too. Maybe file a bug as a feature request?
[19:21] <cory_fu_> kwmonroe, petevg, kjackal: https://github.com/apache/bigtop/pull/137 is updated
[19:22] <cory_fu_> There seems to be an odd issue with Juju 2.0 where if you remove the relation between a subordinate and its principal, the subordinate sticks around when it used to go away.  I know kjackal encountered this, but I wonder if anyone else has?
[19:24] <kjackal> cory_fu_: After getting the latest code and rebuilding, the subordinate was removed without any issue.
[19:25] <cory_fu_> Strange.  I ran into it on the bigtop charms just now
[19:25] <kjackal> Could be transient then?
[19:26] <cory_fu_> Perhaps
[20:27] <holocron> why is it that sometimes, when using LXD, a juju unit will sit in the "waiting for machine" state
[20:28] <holocron> the lxc container machine will sit in the "pending" state
[20:28] <rick_h_> holocron: can you juju status --format tabular
[20:28] <rick_h_> and see if it has more details?
[20:29] <holocron> rick_h_ not seeing anything new.. when i "lxc exec <container> /bin/bash" and run top or systemctl, it looks like the container didn't start up properly
[20:29] <holocron> restarting it doesn't help, and scaling the unit doesn't help
[20:31] <jrwren> holocron: is the controller the same version as the juju client?  mine had problems after upgrading beta -> rc. I had to remove the controller and bootstrap a new one so that they were the same version.
[20:32] <holocron> jrwren: checking but i suspect it is, i installed the OS, bootstrapped, and deployed today
[20:33] <holocron> jrwren: also, i have a number of other units that are working fine, so it's kinda random so far
[20:33] <holocron> i'm deploying https://jujucharms.com/u/james-page/openstack-on-lxd
[20:34] <holocron> this is my 2nd attempt to deploy, after destroying the controller and tearing everything down. the first time it was openstack-dashboard and rabbitmq-server that sat in "waiting for machine"
[20:34] <holocron> this time it's neutron-gateway
[20:34] <jrwren> holocron: did you install the OS & bootstrap without adding a PPA for the latest juju2 beta?
[20:35] <holocron> jrwren: i am running juju 2.0-rc2
[20:35] <jrwren> holocron: oh, ok. in that case, I have no idea :[
[20:38] <holocron> eh well, rather than just restarting the container from inside it with "shutdown -r now", i used lxc stop/start and now it appears to be provisioning the unit
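For reference, a triage checklist for a container stuck in "pending" before resorting to a restart (LXD provider assumed; `<container>` is a placeholder for the machine's container name):

```shell
# Read-only checks worth running on a "pending" LXD machine before restarting:
#   juju status --format=tabular        # any message on the machine row?
#   lxc list                            # is the container RUNNING or STOPPED?
#   lxc info <container> --show-log     # the container's console log
# The out-of-band restart that worked here, as opposed to "shutdown -r now"
# run inside the container:
#   lxc stop <container> && lxc start <container>
echo "triage checklist above"
```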
[20:38] <holocron> so..
[20:39]  * holocron shrugs
[21:01] <holocron> and now, it's doing it again...
[21:04] <holocron> you know what the most annoying thing about juju is? i run a command like "juju remove-unit <unit>" and nothing changes
[21:05] <holocron> like, is it going to work? when is it going to work?
[21:05] <holocron> is it just me?
[21:16] <rick_h_> holocron: so https://bugs.launchpad.net/juju/+bug/1626725 is a bug where we're looking into a potential cause around this.
[21:16] <mup> Bug #1626725: 8+ containers makes one get stuck in "pending" on joyent <joyent-provider> <jujuqa> <lxd> <scalability> <juju:Triaged by dooferlad> <https://launchpad.net/bugs/1626725>
[21:16] <rick_h_> holocron: I'm not sure if it's the one you're seeing, but something we're chasing down at the moment that looks like that.
[21:16] <rick_h_> holocron: if that doesn't seem plausible please file a bug with as much detail on the setup as possible, logs from juju, lxd, etc.
[21:17] <holocron> rick_h_: okay thanks, i'm attempting a repro now, if i see this again i'll file a bug
[21:17] <rick_h_> holocron: ty very much
[21:27] <valeech> working with juju 2.0 rc2 and the openstack-base charm, is it best to use bindings to ensure different components operate on the correct networks, or is there some other method?
[21:27] <kwmonroe> holocron: rick_h_:  at the charmer summit, bdx noticed large bundles would result in some machines stuck in 'pending'.  iirc admcleod and beisner were helping troubleshoot that and it looked like bug 1602192.  beisner, do you know how holocron can check to see if "Too many open files" is the issue here?
[21:27] <mup> Bug #1602192: when starting many LXD containers, they start failing to boot with "Too many open files" <lxd> <juju-core:Invalid> <lxd (Ubuntu):Confirmed> <https://launchpad.net/bugs/1602192>
[21:29] <kjackal> petevg: wouldn't you prefer a separate section to put tutorials in?
[21:29] <beisner> kwmonroe, the tell was that i saw 'Too many open files' in various application/service logs.  i don't think i ever saw that in the juju unit logs fwiw.
[21:29] <kjackal> petevg: like the one you wrote for the earthquake data?
[21:30] <petevg> kjackal: No. I think that should appear near the top of the landing page. One problem is that our "getting started" page gets very complex very fast. I wanted a simple project to get people going.
[21:31] <holocron> kwmonroe: rick_h_: the behaviour might've been similar to that described in the bug, but I'm not able to reproduce it. I had forgotten to take some of the steps described in https://jujucharms.com/u/james-page/openstack-on-lxd
[21:32] <beisner> hi holocron, this is the guide i'd recommend for openstack-on-lxd.  anything in personal namespaces might be old/bitrotted.  http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html
[21:32] <holocron> right now it looks like all the containers are started, all agents are executing properly
[21:32] <holocron> beisner: ah great, thanks for this link!
[21:32] <holocron> i had seen this before and totally forgot about it
[21:33] <holocron> beisner: though it does appear to be the same today
[21:34] <beisner> holocron, yes it's close, but still a few clicks behind what we have in the dev charm-guide.
[21:35] <holocron> beisner: this is excellent, thanks again
[21:35] <beisner> holocron, yw
[21:37] <beisner> holocron, even with that, i've run into the too many open files thing and have had to raise max_user_instances on the host in most cases.  maybe not if it's just a deploy & destroy, but when you go to use the deployed cloud, file handles start to go wild.
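The limits beisner mentions are inotify sysctls on the LXD host. A sketch of checking and raising them; the values 512 and 1048576 are commonly suggested numbers, not official guidance:

```shell
# LXD's "Too many open files" boot failures are usually inotify exhaustion
# on the host. Check the current limit (read-only):
val=$(cat /proc/sys/fs/inotify/max_user_instances 2>/dev/null || echo unknown)
echo "fs.inotify.max_user_instances = $val"
# Raising the limits needs root; the numbers below are common suggestions --
# tune them for your container count:
#   sudo sysctl fs.inotify.max_user_instances=512
#   sudo sysctl fs.inotify.max_user_watches=1048576
# Persist across reboots by adding the same keys to /etc/sysctl.conf.
```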
[21:38] <holocron> beisner: gotcha, i'll watch for that.. though i'm not planning on using the nova hosts as provided
[21:43] <beisner> amulet question:  am i wrong to expect sentry.info to always contain a private-address key?  i'm finding that it doesn't always:  ['machine', 'open-ports', 'public-address', 'service', 'workload-status', 'agent-status', 'unit_name', 'agent-state', 'unit', 'agent-version']
[21:52] <cory_fu_> Some fixes for dhx for 2.0rc2 if someone wants to review: https://github.com/juju/plugins/pull/70
[21:53] <holocron> beisner: tore down that previous deploy, and went with the bundle-s390x.yaml provided on github -- 3 of the 14 machines ended up in the "pending" state, though i don't see any errors in the journal about too many open files
[21:56] <beisner> holocron, so perhaps a useful check would be to bootstrap a fresh controller, add a model, then `juju deploy ubuntu -n 10` and see how that goes.  that would deploy 10 units of the ubuntu charm, which does pretty much nothing.  it's useful to check that the system, configuration and tooling are all in order.
[21:56] <holocron> beisner: trying this now
[21:56] <beisner> holocron, ps, are you on s390x?
[21:56] <holocron> aye
[22:07] <holocron> beisner: all 10 start up, going to try scaling to 20
[22:09] <kjackal> kwmonroe: last week we had this problem with "WARNING: The following packages cannot be authenticated!" when installing puppet-common. Do you know why? I am getting the same problem with cassandra 2.2
[22:10] <holocron> beisner: scales to 20 without a hitch
[22:12] <kwmonroe> yeah kjackal.. if you're using layer-puppet-agent, something is wrong with the key in that layer (or with the repo hosted by puppetlabs).  i never dug in to find a solution though.  we moved all our charms to xenial so we didn't need layer-puppet-agent anymore.
[22:13] <kjackal> hm... I am getting the same error for a ppa repo for cassandra 2.2
[22:13] <kjackal> this used to work two weeks ago
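A general apt note on that warning, not specific to the puppet or cassandra repos: "cannot be authenticated" means apt lacks (or no longer trusts) the repository's signing key. The missing key id shows up in `apt-get update` output as `NO_PUBKEY ...`; a small helper to fish it out (the key id in the demo line is made up):

```shell
# Extract the missing key id from an apt "NO_PUBKEY" error message.
apt_key_hint() {
    grep -o 'NO_PUBKEY [0-9A-F]*' | awk '{print $2}'
}
# Intended usage (not run here):
#   sudo apt-get update 2>&1 | apt_key_hint
# Demo on a fabricated apt error line:
echo 'W: GPG error: http://ppa.example/ubuntu xenial InRelease: NO_PUBKEY ABCDEF0123456789' | apt_key_hint
```

With the id in hand, `sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <id>` re-imports it; if the key has actually expired upstream, only the repo owner can fix that.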
[22:26] <holocron> is there a "juju status" switch that'll just show me the unit table?
[22:26] <x58> awk/grep? :P
[22:26] <holocron> x58 yeah, or i could parse the json output :P
[22:27] <x58> Sounds like you've solved your problem.
[22:27] <x58> jq that output to your desire.
[22:27] <holocron> x58 I didn't say I couldn't solve the problem, i asked if there was a switch for just that specific data table
[22:28] <holocron> not a fan of reinventing wheels, but i can if i need to
[22:28] <x58> indeed you didn't, I am sorry that I may have implied otherwise.
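For the record, the awk/grep route x58 suggested can be a one-line filter: Juju 2.0's tabular status prints its sections under "Unit" and "Machine" headers, so printing from one header to the other yields just the unit table. The canned status snippet below is an assumed, abridged layout:

```shell
# Print only the unit-table section of `juju status --format=tabular`:
# start printing at the "Unit" header, stop at the "Machine" header.
juju_units() {
    awk '/^Machine/{p=0} /^Unit/{p=1} p'
}
# Intended usage:  juju status --format=tabular | juju_units
# Demo on a canned snippet (abridged Juju 2.0-style layout, assumed):
printf '%s\n' \
  'Model    Controller  Cloud/Region  Version' \
  'default  lxd         localhost     2.0-rc2' \
  '' \
  'Unit       Workload  Agent  Machine  Public address' \
  'ubuntu/0*  active    idle   0        10.0.0.5' \
  '' \
  'Machine  State    DNS       Series' \
  '0        started  10.0.0.5  xenial' | juju_units
```

This prints the `Unit` header line and the unit rows, and stops before the machine table.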