/srv/irclogs.ubuntu.com/2016/10/04/#juju.txt

=== thumper is now known as thumper-dogwalk
=== thumper-dogwalk is now known as thumper
=== fernari is now known as anrah
[05:15] <anrah> Hi guys! Couple of questions regarding the bootstrap phase and deploy. I'm using reactive charms and a private OpenStack cloud. Is there a way to modify the cloud-init file which is run at the bootstrap phase?
[05:17] <anrah> Problem is that I'm using reactive charms and I can't be sure if python3 is installed on the image. The deploys obviously fail at the very beginning, as the first commands run in the deploy phase are python3 related when using reactive charms + charm-helpers
[05:17] <anrah> I know that I can make my own images which include the necessary packages, but is there a way to "hack" the deploy phase and install those packages before python3 is run?
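One possible workaround, sketched here under the assumption that you control the charm's hooks: ship hooks/install as a plain shell stub that installs python3 before handing off to the python3-based reactive entry point. The entry-point name reactive-dispatch.py is a placeholder, not part of anrah's setup.

    #!/bin/bash
    # hooks/install -- hedged sketch: make sure python3 exists before any
    # reactive/charm-helpers code runs on an image that may lack it.
    set -e
    if ! command -v python3 >/dev/null 2>&1; then
        apt-get update
        apt-get install -y python3 python3-yaml
    fi
    # Hand off to the charm's real python3 install logic (placeholder name).
    exec ./hooks/reactive-dispatch.py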
[05:34] <Shashaa> Hi everyone, I'm trying to add the OpenStack Trove charm on top of an OpenStack charm platform
[05:34] <Shashaa> I see that the charm is getting stuck at the following status: trove/2  maintenance  idle   18       10.73.96.174           Installation complete - awaiting next status
[05:35] <Shashaa> Could anyone point me to where I should look? Just curious what it is waiting for here
=== frankban|afk is now known as frankban
[07:57] <viswesn> How do I back up a running lxc environment and then restore it?
[07:57] <viswesn> Is that possible in Juju?
[07:58] <viswesn> Is there any video specific to Juju endpoint bindings?
[07:58] <viswesn> If so, please share the link
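No one answers in-channel, but for reference, a hedged sketch of the Juju 2.0 controller backup commands; these save controller state rather than the data inside the workload containers, and the backup ID and archive name are placeholders.

    juju create-backup                               # snapshot controller state, prints the backup ID
    juju download-backup <backup-id>                 # fetch the archive locally (placeholder ID)
    juju restore-backup --file juju-backup.tar.gz    # restore controller state from a saved archive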
[12:23] <jrwren> anrah: there is no way to customize the generated cloud-init data. I've always wanted that feature too. Maybe file a bug as a feature request?
=== alexisb is now known as alexisb-afk
=== alexisb-afk is now known as alexisb
=== frankban is now known as frankban|afk
[19:21] <cory_fu_> kwmonroe, petevg, kjackal: https://github.com/apache/bigtop/pull/137 is updated
[19:22] <cory_fu_> There seems to be an odd issue with Juju 2.0 where if you remove the relation between a subordinate and its principal, the subordinate sticks around when it used to go away.  I know kjackal encountered this, but I wonder if anyone else has?
[19:24] <kjackal> cory_fu_: After getting the latest code and rebuilding, the subordinate was removed without any issue.
[19:25] <cory_fu_> Strange.  I ran into it on the bigtop charms just now
[19:25] <kjackal> Could be transient then?
[19:26] <cory_fu_> Perhaps
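For context, a minimal sketch of the sequence being discussed; the application names are placeholders rather than the actual bigtop charms involved.

    juju remove-relation principal-app subordinate-app
    juju status subordinate-app    # expected: the subordinate units depart along with the relation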
[20:27] <holocron> why is it that sometimes when using LXD, a juju unit will sit in the "waiting for machine" state
[20:28] <holocron> the lxc container machine will sit in the "pending" state
[20:28] <rick_h_> holocron: can you juju status --format tabular
[20:28] <rick_h_> and see if it has more details?
[20:29] <holocron> rick_h_ not seeing anything new.. when i "lxc exec <container> /bin/bash" and run top or systemctl, it looks like the container didn't start up properly
[20:29] <holocron> restarting it doesn't help, and scaling the unit doesn't help
[20:31] <jrwren> holocron: is the controller the same version as the juju client?  mine had problems after upgrading beta -> rc. I had to remove the controller and bootstrap a new one so that they are the same version.
[20:32] <holocron> jrwren: checking, but i suspect it is; i installed the OS, bootstrapped, and deployed today
[20:33] <holocron> jrwren: also, i have a number of other units that are working fine, so it's kinda random so far
[20:33] <holocron> i'm deploying https://jujucharms.com/u/james-page/openstack-on-lxd
[20:34] <holocron> this is my 2nd attempt to deploy, after destroying the controller and tearing everything down. the first time it was openstack-dashboard and rabbitmq-server that sat in "waiting for machine"
[20:34] <holocron> this time it's neutron-gateway
[20:34] <jrwren> holocron: installed the OS & bootstrapped without adding a PPA for the latest juju2 beta?
[20:35] <holocron> jrwren: i am running juju 2.0-rc2
[20:35] <jrwren> holocron: oh, ok. in that case, I have no idea :[
[20:38] <holocron> eh well, rather than just restarting the container from inside it with "shutdown -r now", i used lxc stop/start and now it appears to be provisioning the unit
[20:38] <holocron> so..
[20:39] * holocron shrugs
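The manual kick holocron describes, sketched as host-side LXD commands; the container name is a placeholder you would look up with lxc list.

    lxc list                          # find the juju machine container that is stuck
    lxc stop juju-xxxxxx-3            # placeholder container name
    lxc start juju-xxxxxx-3
    juju status --format tabular      # the unit should move out of "waiting for machine"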
[21:01] <holocron> and now, it's doing it again...
[21:04] <holocron> you know what the most annoying thing about juju is? i run a command like "juju remove-unit <unit>" and nothing changes
[21:05] <holocron> like, is it going to work? when is it going to work?
[21:05] <holocron> is it just me?
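A generic way to keep an eye on it rather than re-running status by hand (not from the log, just a common habit):

    watch -n 5 juju status --format tabular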
[21:16] <rick_h_> holocron: so https://bugs.launchpad.net/juju/+bug/1626725 is a bug where we're looking into a potential cause around this.
[21:16] <mup> Bug #1626725: 8+ containers makes one get stuck in "pending" on joyent <joyent-provider> <jujuqa> <lxd> <scalability> <juju:Triaged by dooferlad> <https://launchpad.net/bugs/1626725>
[21:16] <rick_h_> holocron: I'm not sure if it's the one you're seeing, but it's something we're chasing down at the moment that looks like that.
[21:16] <rick_h_> holocron: if that doesn't seem plausible please file a bug with as much detail on the setup as possible, logs from juju, lxd, etc.
[21:17] <holocron> rick_h_: okay thanks, i'm attempting a repro now; if i see this again i'll file a bug
[21:17] <rick_h_> holocron: ty very much
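A hedged sketch of gathering the kind of detail rick_h_ asks for before filing the bug; the container name is a placeholder.

    juju status --format yaml > juju-status.yaml
    juju debug-log --replay --no-tail > juju-debug.log
    lxc info juju-xxxxxx-3 --show-log > lxd-container.log   # placeholder container name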
=== adam_g` is now known as adam_g
[21:27] <valeech> working with juju 2.0 rc2 and the openstack-base charm, is it best to use bindings to ensure different components operate on the correct networks, or some other method?
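No direct answer appears in the log; for reference, a hedged sketch of per-endpoint bindings at deploy time. The space names and subnets are placeholders for spaces already created with juju add-space, and bundles can express the equivalent with a per-application bindings section.

    juju add-space internal 10.20.0.0/24                               # placeholder space and subnet
    juju deploy cs:xenial/neutron-gateway --bind "data=internal public=external"
    juju deploy cs:xenial/nova-compute --bind internal                 # one space for every endpoint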
[21:27] <kwmonroe> holocron: rick_h_:  at the charmer summit, bdx noticed large bundles would result in some machines stuck in 'pending'.  iirc admcleod and beisner were helping troubleshoot that and it looked like bug 1602192.  beisner, do you know how holocron can check to see if "Too many open files" is the issue here?
[21:27] <mup> Bug #1602192: when starting many LXD containers, they start failing to boot with "Too many open files" <lxd> <juju-core:Invalid> <lxd (Ubuntu):Confirmed> <https://launchpad.net/bugs/1602192>
[21:29] <kjackal> petevg: wouldn't you prefer a separate section to put tutorials in?
[21:29] <beisner> kwmonroe, the tell was that i saw 'Too many open files' in various application/service logs.  i don't think i ever saw that in the juju unit logs fwiw.
[21:29] <kjackal> petevg: like the one you wrote for the earthquake data?
[21:30] <petevg> kjackal: No. I think that should appear near the top of the landing page. One problem is that our "getting started" page gets very complex very fast. I wanted a simple project to get people going.
[21:31] <holocron> kwmonroe: rick_h_: the behaviour might've been similar to that described in the bug, but I'm not able to reproduce it. I had forgotten to take some of the steps described in https://jujucharms.com/u/james-page/openstack-on-lxd
[21:32] <beisner> hi holocron, this is the guide i'd recommend for openstack-on-lxd.  anything in personal namespaces might be old/bitrotted.  http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html
[21:32] <holocron> right now it looks like all the containers are started, all agents are executing properly
[21:32] <holocron> beisner: ah great, thanks for this link!
[21:32] <holocron> i had seen this before and totally forgot about it
[21:33] <holocron> beisner: though it does appear to be much the same today
[21:34] <beisner> holocron, yes it's close, but still a few clicks behind what we have in the dev charm-guide.
[21:35] <holocron> beisner: this is excellent, thanks again
[21:35] <beisner> holocron, yw
[21:37] <beisner> holocron, even with that, i've run into the too many open files thing and have had to raise max_user_instances on the host in most cases.  maybe not if it's just a deploy & destroy, but when you go to use the deployed cloud, file handles start to go wild.
[21:38] <holocron> beisner: gotcha, i'll watch for that.. though i'm not planning on using the nova hosts as provided
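A hedged sketch of raising those host limits; the values are illustrative, and max_user_instances is assumed here to mean the fs.inotify sysctl that LXD-dense hosts commonly exhaust.

    sudo sysctl fs.inotify.max_user_instances=1024
    sudo sysctl fs.inotify.max_user_watches=1048576
    # make it persistent across reboots
    echo "fs.inotify.max_user_instances = 1024" | sudo tee -a /etc/sysctl.conf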
[21:43] <beisner> amulet question:  am i wrong to expect sentry.info to always contain a private-address key?  i'm finding that it doesn't always:  ['machine', 'open-ports', 'public-address', 'service', 'workload-status', 'agent-status', 'unit_name', 'agent-state', 'unit', 'agent-version']
[21:52] <cory_fu_> Some fixes for dhx for 2.0rc2 if someone wants to review: https://github.com/juju/plugins/pull/70
[21:53] <holocron> beisner: tore down that previous deploy, and went with the bundle-s390x.yaml provided on github -- 3 of the 14 machines ended up in the "pending" state, though i don't see any errors in the journal about too many open files
[21:56] <beisner> holocron, so perhaps a useful check would be to bootstrap a fresh controller, add a model, then `juju deploy ubuntu -n 10` and see how that goes.  that would deploy 10 units of the ubuntu charm, which does pretty much nothing.  it's useful to check that the system, configuration and tooling are all in order.
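The smoke test beisner suggests, spelled out as a hedged sketch; the model name is a placeholder.

    juju add-model scale-test
    juju deploy ubuntu -n 10
    juju status --format tabular     # all 10 ubuntu units should settle without errors
    juju add-unit ubuntu -n 10       # later, scale to 20 as holocron does below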
[21:56] <holocron> beisner: trying this now
[21:56] <beisner> holocron, ps, are you on s390x?
[21:56] <holocron> aye
[22:07] <holocron> beisner: all 10 start up, going to try scaling to 20
[22:09] <kjackal> kwmonroe: last week we had this problem with "WARNING: The following packages cannot be authenticated!" when installing puppet-common. Do you know why? I am getting the same problem with cassandra 2.2
[22:10] <holocron> beisner: scales to 20 without a hitch
[22:12] <kwmonroe> yeah kjackal.. if you're using layer-puppet-agent, something is wrong with the key in that layer (or with the repo hosted by puppetlabs).  i never dug to find a solution though.  we moved all our charms to xenial so we didn't need layer-puppet-agent anymore.
[22:13] <kjackal> hm... I am getting the same error for a ppa repo for cassandra 2.2
[22:13] <kjackal> this used to work two weeks ago
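A hedged sketch for checking the apt signing key behind that authentication warning from a unit; the unit name and key ID are placeholders.

    juju ssh cassandra/0 'sudo apt-get update 2>&1 | grep -iE "NO_PUBKEY|expired"'
    juju ssh cassandra/0 'sudo apt-key list'
    # if the repo's key is missing or expired, re-import it (placeholder key ID)
    juju ssh cassandra/0 'sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <KEYID>'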
[22:26] <holocron> is there a "juju status" switch that'll just show me the unit table?
[22:26] <x58> awk/grep? :P
[22:26] <holocron> x58 yeah, or i could parse the json output :P
[22:27] <x58> Sounds like you've solved your problem.
[22:27] <x58> jq that output to your desire.
[22:27] <holocron> x58 I didn't say I couldn't solve the problem, i asked if there was a switch for just that specific data table
[22:28] <holocron> not a fan of reinventing wheels, but i can if i need to
[22:28] <x58> indeed you didn't, I am sorry that I may have implied otherwise.
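What x58 is suggesting, as a hedged one-liner; the .applications layout reflects Juju 2.0's JSON status as best I recall, so verify against your own output.

    juju status --format=json | jq '.applications[].units'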
=== cyberjacob is now known as zz_cyberjacob
=== zz_cyberjacob is now known as CyberJacob
