=== thumper is now known as thumper-dogwalk
=== thumper-dogwalk is now known as thumper
=== fernari is now known as anrah
[05:15] Hi guys! Couple of questions regarding the bootstrap phase and deploy. I'm using reactive charms and a private OpenStack cloud. Is there a way to modify the cloud-init file that is run at the bootstrap phase?
[05:17] The problem is that I'm using reactive charms and I can't be sure that python3 is installed on the image. The deploys fail at the very beginning, obviously, as the first commands run in the deploy phase are python3-related when using reactive charms + charm helpers
[05:17] I know that I can make my own images which include the necessary packages, but is there a way to "hack" the deploy phase and install those packages before python3 is run?
[05:34] Hi everyone, I'm trying to add the openstack trove charm on top of the openstack charm platform
[05:34] I see that the charm is getting stuck at the following status: trove/2 maintenance idle 18 10.73.96.174 Installation complete - awaiting next status
[05:35] Could anyone point me to where to look? Just curious what it is waiting for here
=== frankban|afk is now known as frankban
[07:57] How do I back up the running environment of lxc and then restore it?
[07:57] is that possible in Juju?
[07:58] Is there any video specific to Juju endpoint bindings?
[07:58] If so, please share the link
[12:23] anrah: there is no way to customize the generated cloud-init data. I've always wanted that feature too. Maybe file a bug as a feature request?
=== alexisb is now known as alexisb-afk
=== alexisb-afk is now known as alexisb
=== frankban is now known as frankban|afk
[19:21] kwmonroe, petevg, kjackal: https://github.com/apache/bigtop/pull/137 is updated
[19:22] There seems to be an odd issue with Juju 2.0 where if you remove the relation between a subordinate and its principal, the subordinate sticks around when it used to go away. I know kjackal encountered this, but I wonder if anyone else has?
[19:24] cory_fu_: After getting the latest code and rebuilding, the subordinate was removed without any issue.
[19:25] Strange. I ran into it on the bigtop charms just now
[19:25] Could be transient then?
[19:26] Perhaps
[20:27] why is it that sometimes when using LXD, a juju unit will sit in the "waiting for machine" state
[20:28] the lxc container machine will sit in the "pending" state
[20:28] holocron: can you juju status --format tabular
[20:28] and see if it has more details?
[20:29] rick_h_ not seeing anything new.. when i "lxc exec <container> /bin/bash" and run top or systemctl, it looks like the container didn't start up properly
[20:29] restarting it doesn't help, and scaling the unit doesn't help
[20:31] holocron: is the controller the same version as the juju client? Mine had problems after upgrading beta -> rc. I had to remove the controller and bootstrap a new one so that they are the same version.
[20:32] jrwren: checking, but i suspect it is; i installed the OS, bootstrapped, and deployed today
[20:33] jrwren: also, i have a number of other units that are working fine, so it's kinda random so far
[20:33] i'm deploying https://jujucharms.com/u/james-page/openstack-on-lxd
[20:34] this is the 2nd attempt to deploy, after destroying the controller and tearing everything down. the first time it was openstack-dashboard and rabbitmq-server that sat in "waiting for machine"
[20:34] this time it's neutron-gateway
[20:34] holocron: installed the OS & bootstrapped without adding a PPA for the latest juju2 beta?
[20:35] jrwren: i am running juju 2.0-rc2
[20:35] holocron: oh, ok. in that case, I have no idea :[
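A quick way to test jrwren's version-mismatch theory is to compare the client version with the controller's agent version. A minimal sketch, assuming a Juju 2.x client and that `juju show-controller` reports the agent version (the grep pattern is illustrative):

    # print the version of the local juju client
    juju version
    # print controller details; look for the agent-version field
    juju show-controller | grep agent-version

If the two disagree, destroying the controller and bootstrapping again with the matching client, as jrwren describes, brings them back in sync.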
[20:38] eh well, rather than just restarting the container from inside it with "shutdown -r now", i used lxc stop/start and now it appears to be provisioning the unit
[20:38] so..
[20:39] * holocron shrugs
[21:01] and now, it's doing it again...
[21:04] you know what the most annoying thing about juju is? i run a command like "juju remove-unit <unit>" and nothing changes
[21:05] like, is it going to work? when is it going to work?
[21:05] is it just me?
[21:16] holocron: so https://bugs.launchpad.net/juju/+bug/1626725 is a bug where we're looking into a potential cause of this.
[21:16] Bug #1626725: 8+ containers makes one get stuck in "pending" on joyent
[21:16] holocron: I'm not sure if it's the one you're seeing, but it's something we're chasing down at the moment that looks like that.
[21:16] holocron: if that doesn't seem plausible, please file a bug with as much detail on the setup as possible: logs from juju, lxd, etc.
[21:17] rick_h_: okay thanks, i'm attempting a repro now; if i see this again i'll file a bug
[21:17] holocron: ty very much
=== adam_g` is now known as adam_g
[21:27] working with juju 2.0 rc2 and the openstack-base charm, is it best to use bindings to ensure different components operate on the correct networks, or some other method?
[21:27] holocron: rick_h_: at the charmer summit, bdx noticed large bundles would result in some machines stuck in 'pending'. iirc admcleod and beisner were helping troubleshoot that and it looked like bug 1602192. beisner, do you know how holocron can check to see if "Too many open files" is the issue here?
[21:27] Bug #1602192: when starting many LXD containers, they start failing to boot with "Too many open files"
[21:29] petevg: wouldn't you prefer a separate section to put tutorials in?
[21:29] kwmonroe, the tell was that i saw 'Too many open files' in various application/service logs. i don't think i ever saw that in the juju unit logs, fwiw.
[21:29] petevg: like the one you wrote for the earthquake data?
[21:30] kjackal: No. I think that should appear near the top of the landing page. One problem is that our "getting started" page gets very complex very fast. I wanted a simple project to get people going.
[21:31] kwmonroe: rick_h_: the behaviour might've been similar to what's described in that bug, but I'm not able to reproduce it. I had forgotten to take some of the steps described in https://jujucharms.com/u/james-page/openstack-on-lxd
[21:32] hi holocron, this is the guide i'd recommend for openstack-on-lxd. anything in personal namespaces might be old/bitrotted. http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html
[21:32] right now it looks like all the containers are started and all agents are executing properly
[21:32] beisner: ah great, thanks for this link!
[21:32] i had seen this before and totally forgot about it
[21:33] beisner: though it does appear to be the same today
[21:34] holocron, yes it's close, but still a few clicks behind what we have in the dev charm-guide.
[21:35] beisner: this is excellent, thanks again
[21:35] holocron, yw
[21:37] holocron, even with that, i've run into the too-many-open-files thing and have had to raise max_user_instances on the host in most cases. maybe not if it's just a deploy & destroy, but when you go to use the deployed cloud, file handles start to go wild.
[21:38] beisner: gotcha, i'll watch for that.. though i'm not planning on using the nova hosts as provided
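beisner's max_user_instances fix refers to host-level kernel limits. A hedged sketch of the tuning commonly recommended for dense LXD hosts; the specific keys and values here are assumptions, not taken from this log:

    # raise inotify limits so many containers don't exhaust file handles
    sudo sysctl -w fs.inotify.max_user_instances=1024
    sudo sysctl -w fs.inotify.max_user_watches=1048576
    # persist the settings across reboots
    echo 'fs.inotify.max_user_instances=1024' | sudo tee -a /etc/sysctl.conf
    echo 'fs.inotify.max_user_watches=1048576' | sudo tee -a /etc/sysctl.conf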
[21:43] amulet question: am i wrong to expect sentry.info to always contain a private-address key? i'm finding that it doesn't always: ['machine', 'open-ports', 'public-address', 'service', 'workload-status', 'agent-status', 'unit_name', 'agent-state', 'unit', 'agent-version']
[21:52] Some fixes for dhx for 2.0rc2 if someone wants to review: https://github.com/juju/plugins/pull/70
[21:53] beisner: tore down that previous deploy and went with the bundle-s390x.yaml provided on github -- 3 of the 14 machines ended up in the "pending" state, though i don't see any errors in the journal about too many open files
[21:56] holocron, so perhaps a useful check would be to bootstrap a fresh controller, add a model, then `juju deploy ubuntu -n 10` and see how that goes. that would deploy 10 units of the ubuntu charm, which does pretty much nothing. it's useful for checking that the system, configuration and tooling are all in order.
[21:56] beisner: trying this now
[21:56] holocron, ps, are you on s390x?
[21:56] aye
[22:07] beisner: all 10 start up, going to try scaling to 20
[22:09] kwmonroe: last week we had this problem with "WARNING: The following packages cannot be authenticated!" when installing puppet-common. Do you know why? I am getting the same problem with cassandra 2.2
[22:10] beisner: scales to 20 without a hitch
[22:12] yeah kjackal.. if you're using layer-puppet-agent, something is wrong with the key in that layer (or with the repo hosted by puppetlabs). i never dug in to find a solution though. we moved all our charms to xenial so we didn't need layer-puppet-agent anymore.
[22:13] hm... I am getting the same error for a ppa repo for cassandra 2.2
[22:13] this used to work two weeks ago
[22:26] is there a "juju status" switch that'll just show me the unit table?
[22:26] awk/grep? :P
[22:26] x58 yeah, or i could parse the json output :P
[22:27] Sounds like you've solved your problem.
[22:27] jq that output to your desire.
[22:27] x58 I didn't say I couldn't solve the problem, i asked if there was a switch for just that specific data table
[22:28] not a fan of reinventing wheels, but i can if i need to
[22:28] indeed you didn't, I am sorry that I may have implied otherwise.
=== cyberjacob is now known as zz_cyberjacob
=== zz_cyberjacob is now known as CyberJacob
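Following x58's jq suggestion, a minimal sketch of filtering the JSON output down to just the unit data, assuming the Juju 2.0 status layout where units nest under applications:

    # show each application's units and their statuses
    juju status --format=json | jq '.applications[].units'

If there is no built-in switch for only the unit table, post-processing with jq (or awk/grep on the tabular output) is the practical route.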