/srv/irclogs.ubuntu.com/2016/04/20/#juju.txt

firlcharmstore seems to be down00:39
firland the openstack bundle doesn’t seem to work because of a keystone charm issue00:39
=== natefinch is now known as natefinch-afk
hatchfirl: it appears to be working for me - https://jujucharms.com/q/?text=apache does this not work for you?00:44
firlnope http://picpaste.com/Screen_Shot_2016-04-19_at_7.44.25_PM-y01ZdjPE.png00:45
firlhatch00:45
hatchhuh....interesting00:45
firlyeah, thought it was weird also00:45
hatchone moment I'll try from another network00:45
hatchalright I was able to get the failure00:46
hatchodd it doesn't fail for me00:47
hatchone moment I'll look into this00:47
firlok00:47
firlI can proxy through a diff net and help diagnose if you want00:47
hatchit appears to actually be down here now too00:48
firlhah ok00:48
firlhope I didn’t break it00:48
firl;)00:48
hatchyup all you!00:48
firlhaha00:48
hatch;)00:48
firlfor keystone / OS bundle issue, should I just mail the juju list or open a bug with the system directly?00:49
hatchUp to you, I'd probably create the bug then email the list :)00:50
firlkk00:50
hatchfirl: back up00:56
hatchit's00:56
firlsweet00:56
firl“FATAL ERROR: Could not determine OpenStack codename for version 8.1.0” just creating a bug for this now00:56
beisnerhi firl01:36
beisnerthere was a keystone SRU (package update) which requires an accompanying charm upgrade for keystone01:36
beisnerfyi, updated bug 1572358 re: keystone 8.1 SRU info02:00
mupBug #1572358: keystone FATAL ERROR: Could not determine OpenStack codename for version 8.1.0 <keystone (Juju Charms Collection):Invalid> <https://launchpad.net/bugs/1572358>02:00
firlthanks beisner02:28
firlso I just need to update the bundle to 253?02:28
beisnerfirl, yw.  fwiw, we've got an openstack-base bundle update in flight.02:28
beisneryep for exactly that02:29
firlahh ok02:29
firlthat makes much more sense02:29
firlthanks, I really appreciate it02:30
beisnerfirl, happy to help.  thanks for raising that.  SRUs can have domino effects.  ideally, we'd have had that bundle revved before the package landed.02:32
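
For context on the failure firl hit: the keystone charm maps the installed package version to an OpenStack codename, and an SRU that bumps the version past what the charm knows about leaves that lookup with no match, hence the need for the accompanying charm upgrade. A rough Python sketch of that failure mode, using a hypothetical trimmed mapping rather than the real charm-helpers table:

    # Hypothetical illustration only; the real charm-helpers lookup is larger
    # and structured differently.
    OPENSTACK_CODENAMES = {
        '8.0': 'liberty',
        '9.0': 'mitaka',
    }

    def codename_for(pkg_version):
        major_minor = '.'.join(pkg_version.split('.')[:2])
        try:
            return OPENSTACK_CODENAMES[major_minor]
        except KeyError:
            raise SystemExit('FATAL ERROR: Could not determine OpenStack '
                             'codename for version {}'.format(pkg_version))

    codename_for('9.0.0')  # -> 'mitaka'
    codename_for('8.1.0')  # no 8.1 entry yet: the error above; the charm update adds it
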
stubbdx: I updated wal-e in my PPA to the newly released 0.9.1, which might help your radosgw issue. Although (assuming it isn't just ceph/radosgw configuration and admin issues) the more critical piece is probably what version of python-swiftclient is installed.02:41
=== natefinch-afk is now known as natefinch
urulamacan someone in ~openstack-charmers-next push an update on LP? I'd like to see if an issue in old charm store is resolved.05:11
urulamajamespage: ^05:11
marcoceppiurulama: jamespage was up pretty late, I wouldn't expect a response to a ping for a few hours05:12
urulamamarcoceppi: i guess any LP change in ~charmers would do as well05:12
jamespageurulama, morning07:38
urulamajamespage: morning07:39
urulamajamespage: we've had some issues with the legacy charm store, the service was down over the weekend until yesterday, and it seems it's still not ingesting charms. if you can update a charm in ~openstack-charmers-next, it'll be easier to track what's going on07:40
jamespageurulama, ack - will do shortly - tricky to update directly otherwise I'll break the whole reverse mirror from github.com07:41
urulamajamespage: no need to do it directly, just so that at least one charm gets updated also on LP07:42
urulamajamespage: github + sync to LP is just fine07:42
jamespagegnuoy, we need to decide what to do re bug 157096007:43
mupBug #1570960: ceph-osd status blocked on block device detection <uosci> <ceph-osd (Juju Charms Collection):New> <https://launchpad.net/bugs/1570960>07:43
jamespageI think revert is the correct course of action today07:44
gnuoyok07:44
jamespagegnuoy, dealing with that now07:47
gnuoyjamespage, am I going mad or has the nic device naming just changed in mitaka?07:53
jamespagegnuoy, ?07:53
jamespagemitaka or xenial?07:54
gnuoyjamespage, sorry, yes, xenial07:54
jamespagegnuoy, its done the not eth thing for a while now07:54
jamespagegnuoy, MAAS however remaps things back to eth based naming for consistency...07:54
gnuoyjamespage, so how have any of the mojo specs been working then ? The juju set ext-port port to eth107:55
gnuoys/The/They/07:55
jamespagegnuoy, well I think it is still eth1 on a cloud instance07:55
gnuoyjamespage, ah ! it isn't now!07:55
jamespagegnuoy, oh fantastic...07:55
gnuoyoh...fiddlesticks07:56
jamespagegnuoy, updated cholcombe broken revert - https://review.openstack.org/#/c/308057/07:59
gnuoyack07:59
=== zz_CyberJacob is now known as CyberJacob
tinwoodmorning gnuoy, jamespage08:07
gnuoyhi tinwood08:07
tinwoodgnuoy, do you know much about ceph-radosgw on mitaka?  Having fixed the test 201 bug, I'm now hitting a mitaka only ceph pool bug: https://pastebin.canonical.com/154738/  -- Any pointers or thoughts?08:08
gnuoytinwood, I'd start off by gathering what data has been set between the two services with relation-{ids,list,get}08:11
tinwoodgnuoy, okay, will do.08:12
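
The relation-{ids,list,get} inspection gnuoy suggests can also be done from charm code with charm-helpers; a minimal sketch, assuming the radosgw side names its ceph relation 'mon':

    from charmhelpers.core.hookenv import log, relation_ids, related_units, relation_get

    def dump_relation(reltype='mon'):
        """Log everything the remote units have set on the given relation."""
        for rid in relation_ids(reltype):
            for unit in related_units(rid):
                log('{} {}: {}'.format(rid, unit, relation_get(rid=rid, unit=unit)))
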
dosaboy_tinwood: that's odd since the .rgw pool should have been created by https://github.com/openstack/charm-ceph-radosgw/blob/master/hooks/ceph.py#L23208:24
=== dosaboy_ is now known as dosaboy
* tinwood is taking a look08:25
dosaboytinwood: unless infernalis is deleting that pool but i'd be very surprised08:26
tinwooddosaboy, and it *only* happens on mitaka, in the amulet test.08:26
dosaboythat's odd, let me kick off a mitaka run to see if I can flesh it out08:27
tinwoodokay, but there's a fix in test 201 needed to get the ceph-radosgw to pass: the test in test_201 needs to be flipped from if not any(ret) -> if any(ret). Thx!08:29
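
For reference, the flip tinwood describes looks roughly like this; the surrounding test body is a hypothetical placeholder, not the actual test_201 code:

    import amulet

    def test_201_example(self):
        ret = self.collect_relation_errors()  # hypothetical helper returning per-check errors
        # before: the test failed when *no* check returned anything
        #   if not any(ret):
        #       amulet.raise_status(amulet.FAIL, msg=ret)
        # after: fail only when a check actually reported an error
        if any(ret):
            amulet.raise_status(amulet.FAIL, msg=ret)
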
jamespagegnuoy, whats the scope of impact of this ens2 thing on our codebase?08:34
gnuoyjamespage, thinking08:38
jamespagegnuoy, for context the image on the 12th did not do this, the one on the 17th did...08:38
jamespagegnuoy, also - https://review.openstack.org/#/c/308057/08:39
jamespagenot proposing doing a full recheck for that one...08:39
gnuoyjamespage, The mojo specs add a nic to the neutron-gateway and/or the nova-compute nodes to act as the external port. The tests assume that a) The ext port will be eth1 and b) The ext port will be named consistently across the units of a service. Neither of those is now true, so there is no access to guests that are booted as part of testing. The second nic I added during my last test came up as ens6. Our HA testing assumes that ha-bindiface is eth0 which08:49
gnuoy is no longer true. It *may* be safe to assume that on xenial the first nic is ens2 but it doesn't feel like a safe assumption. I don't think amulet tests are affected as they don't set the ext port or perform ha deployments.08:49
gnuoyWe can work around the ext-port by having mojo figure out the ext ports and set it via mac addresses08:50
gnuoyI'm more worried about the HA testing tbh08:50
gnuoyI guess we'd need to add an ha-bindiface: auto option08:51
gnuoyThose are the impacts of the cloud instance change.08:51
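
A minimal sketch of the MAC-based ext-port lookup gnuoy proposes for mojo; the function name and usage are assumptions, not the actual spec code:

    import os

    def iface_for_mac(mac):
        """Return the local interface whose hardware address matches mac."""
        mac = mac.lower()
        for iface in os.listdir('/sys/class/net'):
            try:
                with open('/sys/class/net/{}/address'.format(iface)) as f:
                    if f.read().strip().lower() == mac:
                        return iface
            except IOError:
                continue
        return None

    # e.g. set ext-port per unit to iface_for_mac(port_mac) instead of assuming eth1
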
gnuoyI think users should be ok since MAAS and lxc use the old nic naming scheme I believe?08:52
jamespagegnuoy, yeah - that at least is true08:53
jamespagegood-ole maas08:53
jamespagegnuoy, confirmed - lxd also names eth0...08:57
gnuoytip top08:57
jamespagegnuoy, what have we not tested yet which will be impacted by this08:59
jamespagegnuoy, just trying to figure out whether we can realistically release tomorrow with confidence...09:00
jamespageha-bindiface: auto makes sense btw09:00
jamespagebut not something we commit for tomorrow...09:00
gnuoyjamespage, I don't know what's going on with xenial/mitaka ssl as someone has wiped the info from that cell, but other than that all xenial/mitaka scenarios have been tested and have either passed using master or have passed using in-flight fixes which have since landed. What I was hoping to do today was rerun the mitaka tests using only master.09:04
jamespagegnuoy, that's going to be tricky...09:04
gnuoyjamespage, I think I can have a ext-port fix ready in ~ 90mins for mojo. The ha tests I may just rerun taking a guess at the device name09:05
jamespagegnuoy, it does appear to be consistently ens2 in the deploy I just did09:05
gnuoyok, then that should be good enough09:05
stubpkg_resources.RequirementParseError: Invalid requirement, parse error at "'-lxc==0.'"09:09
stubAnyone seen anything like that?09:09
stubIts from tox setting up a venv so I can run unit tests in my built charm, so it is some dependency pulled in from base layer but I have no hint as to what.09:10
stubHmm.... looks like it might be https://github.com/pypa/setuptools/issues/50209:19
stubNope, same indecipherable traceback, different problem. Pinning setuptools doesn't help.09:31
jamespageurulama, ok a change has worked its way into ~openstack-charmers-next branches...10:12
urulamajamespage: ok, ty. we've identified issue. it's corrupted mongo db on legacy charm store that is preventing any ingestion. we're working on it10:13
urulamajamespage: worst case, if it can't get resolved, this means that charms will have to be pushed manually if they need to be in the store ... but we have a few more days left before ODS before such a resolution is required10:14
jamespageurulama, well...10:16
jamespageurulama, trunk charms go in the same way via ~openstack-charmers atm10:16
jamespageurulama, maybe we should make the switch now to using charm push/publish...10:16
urulamagive us a day, please, to see what can be done10:17
jamespageurulama, have a charm release to get out tomorrow :)10:18
urulamajamespage: ok, if we don't resolve it in the next 6-8h, then that'll be the best option10:19
jamespageurulama, ok working a backup plan now then...10:21
dosaboytinwood: ive got a mitaka deploy and .rgw is missing but unfortunately there is an issue with the broker logging which makes it hard to debug10:26
dosaboytinwood: https://bugs.launchpad.net/charms/+source/ceph/+bug/157249110:26
mupBug #1572491: ceph-broker printing incorrect pool info in logs <ceph (Juju Charms Collection):In Progress by hopem> <ceph-mon (Juju Charms Collection):In Progress by hopem> <https://launchpad.net/bugs/1572491>10:26
dosaboytinwood: i'm gonna redeploy with ^^ to see whats going on10:26
tinwooddosaboy, ah, I see.  That explains why I'm not really getting anywhere with it.  I'll have a look at that too.10:28
jamespageurulama, did the bad backend for charmstore get dropped yet? I'm still seeing a lot of unauthorized messages10:31
dosaboytinwood: found le problem - http://paste.ubuntu.com/15944935/10:40
tinwooddosaboy, interesting.  is that fixed on mitaka but not previously?10:42
dosaboytinwood: i suspect the issue is that in previous releases of rgw that pool was autocreated by radosgw-admin on install10:43
dosaboybut now it is not, so we finally hit the bug10:44
dosaboyits probably a pool that is not actually required by rgw but its still a bug in the charm code10:44
tinwooddosaboy, ah, I think I see.  I'll stare at the code to get an understanding of that bit.10:44
dosaboytinwood: i'll submit a patch shortly10:44
gnuoyjamespage, beisner I have an MP up for the mojo mitaka ext-port issue https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ext-port-rejig/+merge/292363 . It's marked as a wip until i've taken it for a spin10:46
jamespagegnuoy, ack10:47
dosaboytinwood: https://code.launchpad.net/~hopem/charm-helpers/lp1572506/+merge/29236510:54
gnuoydosaboy, +110:59
gnuoydosaboy, can you land the charmhelper fix ?10:59
gnuoyYou're a charmer, right?10:59
dosaboygnuoy: yeap11:01
tinwooddosaboy, that's great.  I'll wait for the merge, and then submit my fix.  Thanks for your help!11:01
urulamajamespage: if you're working on using charm push/publish, please make sure it won't get a new revision every time the cron job runs11:20
jamespageurulama, no worries we won't11:26
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
simonklbAnyone know what I'm doing wrong when using the docker layer? I'm getting this error when building the charm: OSError: Unable to load tactics.docker.DockerWheelhouseTactic12:24
simonklbI tried debugging it a bit and found that it's trying to load that tactic for both the docker layer and the basic layer12:24
simonklbSo it's like it remembers tactics from earlier layers and tries to load those in the other layers as well12:25
iceydosaboy the ceph/ceph-osd merge that you just asked for a unit test for, there is a file full of tests for that function in charmhelpers...12:49
dosaboyicey: yeah sorry i revoked my comment after i realised it is a sync12:49
dosaboyicey: but you have new -1 re your commit message format :)12:49
iceydosaboy: yeah, working on it now :-)12:50
dosaboytx12:50
iceyjamespage dosaboy commits are re-upped12:54
beisnergnuoy, jamespage - wow yah that late eth dev naming thing in xenial is a real suckerpunch lol13:17
beisnergnuoy, flipping osci to run on your branch for a few validation runs @ X13:19
beisnerthx for the mp13:19
gnuoybeisner, ta13:19
jamespagebeisner, I have bundles tested ready for charm store tomorrow for trusty-mitaka xenial-mitaka and lxd13:24
jamespagelxd will need a tweak for devices landing13:24
jamespagebut I'm blocked on charm store ingestion atm13:24
jamespagebeisner, urulama is investigating but we may need to switch to charm push/publish for tomorrow - working that as a backup plan atm13:24
beisnerjamespage, ack.  fyi we did update o-c-t alongside the gerrit review.  and that's all passing a-ok.13:24
jamespagebeisner, awesome13:25
jamespagebeisner, I also managed to get Nova-LXD running on an all-in-one under LXD on my laptop....13:25
jamespageparity with KVM ftw...13:25
beisnersweet13:25
jamespagebeisner, do you run tests directly from lescina for the bare-metal lab, or do you just access MAAS directly from a bastion?13:26
beisnerjamespage, osci-slave/10 is pinned to the metal lab.  i never do my jujuing from lescina directly fwiw13:26
jamespagebeisner, that's probably wise...13:27
beisnerosci has its own oauth, etc13:27
beisnerjamespage, re: charm upload -- that'd put us one step closer to nixing LP branches and syncs13:28
=== natefinch is now known as natefinch-afk
jamespagebeisner, yah13:29
jamespagewe really could have done with series in metadata for this but its not the end of the world...13:29
beisnerjamespage, yah the time box for doing that was between last Wednesday and now (ie. 1.25.5's existence)13:30
=== fginther` is now known as fginther
jamespagedosaboy, do you have icey's reviews covered?13:56
dosaboyjamespage: sure13:58
iceydosaboy: doing a recheck-full right now14:00
dosaboyicey: 10-414:01
simonklbwhat is the best way of accessing data provided by a relation interface?14:07
simonklbfor example when I get config-changed and I want to use the relation data again to render the configuration14:08
simonklbis it possible to call the function that get the interface data again somehow?14:10
simonklbor should I store the data?14:11
simonklbI have a charm that uses the keystone-admin interface to retrieve the credentials14:13
simonklbat the moment I use the keystone-admin interface to retrieve the credentials with the @when("identity-admin.connected") event14:14
simonklbbut I'm not sure what the best practice would be to get the credentials again - for example when some other config values are changed and I need to render the configuration file14:15
jacekndoes anybody know if "scope" for juju interface layers has to match on both sides? I'd like to have the provide side "global" (same data sent/received to all services) but on the require side "SERVICE" seems like a better fit14:22
lazyPowerjacekn - doesn't have to be the same scope on opposing sides of the relationship14:23
jaceknthanks14:24
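
A sketch of what lazyPower describes: the two sides of an interface layer declaring different conversation scopes (class and interface names here are hypothetical):

    from charms.reactive import RelationBase, scopes

    class MonitorProvides(RelationBase):
        # one shared conversation; the same data goes to every remote service
        scope = scopes.GLOBAL

    class MonitorRequires(RelationBase):
        # one conversation per related service, tracked independently
        scope = scopes.SERVICE
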
lazyPower simonklb - interfaces provide some level of caching for you already. The conversation data is stored in unitdata, so if you're referencing the conversation scope directly, you can re-use those values, directly off the interface.14:25
lazyPowersimonklb - in other words, no need to maintain your own cache unless thats really what you want to do :)14:25
simonklblazyPower: that's great!14:26
simonklbthanks14:26
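
In practice that means a handler can just ask the interface again from any hook, config-changed included; a hedged sketch for the keystone-admin case simonklb describes, where the credentials() accessor and the template names are assumptions:

    from charms.reactive import when
    from charmhelpers.core import hookenv
    from charmhelpers.core.templating import render

    @when('identity-admin.connected')
    def write_config(keystone):
        # the accessor reads the relation data back out of unitdata, so no
        # separate cache is needed to re-render on later hooks
        creds = keystone.credentials()
        context = dict(creds or {})
        context.update(hookenv.config())
        render('myapp.conf.j2', '/etc/myapp/myapp.conf', context)
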
simonklbbtw, did you see what I wrote about the docker layer before?14:26
lazyPowerI'm not certain, its been undergoing a lot of churn these days14:26
lazyPowerrefresh me?14:26
simonklbbasically when I try to build the charm using the docker layer I'm getting "OSError: Unable to load tactics.docker.DockerWheelhouseTactic"14:27
simonklband from the looks of it the cause of the error is that it tries to load the docker tactics from the basic layer path as well14:28
simonklbit's very possible that I've just configured something incorrectly though14:28
lazyPowersimonklb nope :(14:34
lazyPoweryou got bit by our latest work in master14:34
lazyPowerwe were investigating a fix for offlining docker-compose and its required deps in the wheelhouse, but it requires a custom tactic, which has some bugs in the current version of charm-tools. Let me double check those commits were reverted14:35
simonklblazyPower: ah, well that's something you have to deal with when you're living on the bleeding edge :)14:36
simonklblet me know if you know how to fix it, otherwise it would be great if I could follow an issue tracker somewhere so that I can be notified when it's fixed :)14:37
lazyPowersimonklb - yeah but I really should have vetted that more before i threw that up in tip :) or i should adopt tagging / pointing interfaces.juju.solutions @ tags when i rev14:37
simonklbno worries!14:37
lazyPowersimonklb https://github.com/juju-solutions/layer-docker/issues/4614:38
jamespagegnuoy, https://review.openstack.org/#/c/308372/14:38
jamespagetested ok locally for me14:38
jamespageon xenial...14:38
lazyPowerI'll get a patch out for that after i drop out of this hangout14:38
jamespage| wsrep_incoming_addresses     | 10.0.8.54:3306,10.0.8.105:3306,10.0.8.178:3306 |14:38
simonklblazyPower: thanks!14:39
beisnergnuoy, approved @ https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ext-port-rejig/+merge/29236314:41
beisnerthx again!14:41
gnuoynp14:42
thedacgnuoy: see my last post on https://bugs.launchpad.net/charms/+source/heat/+bug/1571830 Should we revert just the relation_set bits? It only affects upgrading from stable to next15:03
mupBug #1571830: Heat has status blocked missing database relation <heat (Juju Charms Collection):Triaged> <https://launchpad.net/bugs/1571830>15:03
thedacor trigger a rerun of shared-db-joined?15:05
gnuoythedac, can we discuss after the standup? I'm failing to get my head around it atm15:05
thedacsure, no problem15:05
simonklblazyPower: my first attempt at using the unitdata wasn't successful - however I noticed that the @when("identity-admin.connected") function was invoked every time the "config-changed" event fired15:12
simonklbany idea why that is?15:12
lazyPowerif the state is set, the decorated method will be triggered15:14
jamespagethedac, that ha change also broke IPv6 support...15:14
thedacjamespage: ok, I'll check that as well15:15
simonklblazyPower: does this happen everytime a hook is fired? or what makes it trigger the decorated functions?15:16
simonklbis there some periodical interval that executes the current state over and over?15:16
lazyPowersimonklb - so in reactive, the idea is to work with synthetic states. There's a bus that executes every decorated method at least once, and if the state changes - the bus re-evaluates and re-calculates what needs to be executed15:17
lazyPowerso yeah its basically an event loop, that knows when its states have changed, and then inserts/removes methods to be invoked appropriately.15:17
lazyPowerwhen its executed everything it had in its execution list, it ends context until the next hook is invoked15:18
lazyPowersimonklb - did that help clarify?15:19
simonklbI see, do you still think I should use the cached values or would it be ok to simply use the fact that it will be in the state where the relation data can be fetched everytime?15:19
lazyPowerthe latter. Always opt for pulling the data right off the wire vs caching. Caching means you're responsible for keeping your cache of a cache in sync15:19
lazyPoweras we dont really go over the wire anymore :) its all serialized in unitdata in the conversation objects.15:20
simonklbgreat, thanks!15:23
simonklbI'm starting to see the reasoning behind getting away from hooks and only working with states15:23
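
A tiny sketch of that model: handlers are keyed on states rather than hooks, and the bus keeps dispatching whichever handlers match until nothing new is queued (the states and names here are hypothetical):

    from charms.reactive import when, when_not, set_state, remove_state

    @when_not('myapp.installed')
    def install():
        # runs until the state is set; setting it makes the bus re-evaluate
        set_state('myapp.installed')

    @when('myapp.installed', 'identity-admin.connected')
    def configure(keystone):
        # re-invoked on any hook while both states remain set
        set_state('myapp.configured')

    @when_not('identity-admin.connected')
    def keystone_departed():
        remove_state('myapp.configured')
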
simonklbyou probably want to keep track of data changing or not though, so that you don't have to reconfigure your charms everytime?15:26
lazyPowerthere's a decorator for charm config, but relation data... i dont think we have any indicators that the relation data has changed, no15:27
simonklbbut could the unitdata be used for that?15:27
simonklbcomparing the previous relation data to the current one?15:27
lazyPowerit could. cory_fu may already be tracking if relation data has changed in the conversation bits15:27
cory_fusimonklb: I would recommend using data_changed: https://pythonhosted.org/charms.reactive/charms.reactive.helpers.html#charms.reactive.helpers.data_changed15:29
simonklbcory_fu: awesome, thanks15:29
gnuoyjamespage, I see your percona change has Canoical CI +1, I feel like thats good enough to land it.15:29
jamespagegnuoy, yes15:30
jamespageagreed15:30
gnuoykk15:30
cory_fulazyPower: It's not tracked automatically, because we expect the interface layers to provide an API on top of the raw relation data, and the data from the API is what should be passed in to data_changed15:38
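
A hedged example of cory_fu's suggestion: hash the values exposed by the interface's API with data_changed and only redo work when they differ from the previous run (the credentials() accessor and write_config helper are assumptions):

    from charms.reactive import when
    from charms.reactive.helpers import data_changed

    @when('identity-admin.connected')
    def maybe_reconfigure(keystone):
        creds = keystone.credentials()
        # data_changed stores a hash of the value under the given key in
        # unitdata and returns True only when it has changed
        if data_changed('identity-admin.credentials', creds):
            write_config(creds)
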
freak_hi15:50
freak_hi everyone15:50
freak_yesterday i raised the issue regarding ceph-osd15:50
freak_any resolution on that so far?15:50
freak_upon startup ceph-osd is showing hook failed: update-status15:51
cory_fulazyPower: Still having issues with that custom tactic, eh?16:08
lazyPoweri havent had a chance to circle back to bens patch16:08
cory_fuah16:09
marcoceppiaisrael: are you also going to patch https://github.com/juju/charm-tools/issues/190 ?16:11
aisraelmarcoceppi: yup, I'll take it16:11
jamespagegnuoy, beisner: does this - http://paste.ubuntu.com/15952999/16:44
jamespagelook like a reasonable set of changes for the stable branches post release?16:44
beisnerjamespage, ack those are the stable bits to flip wrt c-h sync and amulet tests16:45
jamespagebeisner, so creating the stable branches does not require a review16:45
jamespagebeisner, so I think our process looks like16:45
jamespage1) create branches16:45
jamespage2) resync git -> bzr16:45
jamespage3) propose stable updates as above16:46
jamespagethey can work through over time...16:46
beisnerjamespage, feel like slipping a rm tests/*junu* ?16:46
beisnerjuno of course16:46
jamespagebeisner, did we not do that already?16:46
beisnerwe've only blacklisted it from running.  we've not done a full sweep16:46
beisnerof reviews16:46
jamespagebeisner, ah16:47
jamespagebeisner, did you see my comments re not re-using the current 'stable' branch16:50
jamespagebut going for a 'stable/16.04' branch instead16:50
beisnerjamespage, yah.  we'll have some automation and helper updates to make for that i think.16:50
jamespagebeisner, yes16:50
beisnerright now we're binary on stable||master16:50
jamespagebeisner, yup16:51
jamespagebeisner, I did have some thoughts on how we might make this easier16:51
jamespagebeisner, but they rely on us moving the push to charmstore post-commit for all branches...16:51
jamespagewhich we should do anyway16:51
jamespagebeisner, for now /next and /trunk charm branches on launchpad will remain analogous to master||stable16:52
beisner++charm-pusher to our list of things we need, and we shall be wise to work in catch/retry/thresholds in the upload/publish cmds.16:52
beisnerjamespage, we can trigger that as a new job when gerrit 'foo merged' messages are heard.16:53
beisnerso rather than a full pass scheduled thing, it'll just tack onto the end of the dev process16:53
jamespagebeisner, that would be ++ for me16:55
urulamajamespage, beisner: so, we're restoring the old broken DB for legacy store, but that'll take 10h it seems, so even if (and it's not certain) that works, ingestion will only start tomorrow again16:57
jamespageurulama, ok16:57
jamespageurulama, will check in first thing and then make a decision on process for our charm release...16:57
urulamajamespage, beisner: but everything seems borked, so we redirected the legacy charm store to jujucharms.com and turned off ingestion. it just might be its end, just a bit too soon16:58
urulamajamespage: ok, will let you know first thing in the morning16:58
=== natefinch-afk is now known as natefinch
=== scuttle|afk is now known as scuttlemonkey
skayhey, I upgraded my laptop to xenial, and have aliased juju-1 to juju since I have scripts that call juju commands (and I use mojo)17:19
skaywhen I ran a spec, I got an error message about there being no current controller17:19
skayI think that would be something in juju-2, right?17:19
skayI guess having a zsh alias isn't going to work17:19
skayis there a proper workaround for this?17:20
skayinstead of my hacky one17:20
LiftedKiltwill model creation via the gui be fixed in tomorrow's release?17:21
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
cholcombehow do you force remove a service in a failed status without destroying the machine ?17:28
cholcombefreak_, is wondering this ^^17:29
freak_:)17:29
lazyPowercholcombe - thats not so easy to do. best bet is to juju destroy-service and juju resolved the failures all the way down17:32
cholcombelazyPower, yeah I know this is tricky17:32
lazyPowerthere's a reaping setting on machines too, i believe by default, it will reap the machine once the service is off it17:32
cholcombelazyPower, freak_, has a deploy with several lxc's on metal17:32
cholcombei'd love to say force destroy the machine but that will cause collateral damage17:33
=== scuttle|afk is now known as scuttlemonkey
lazyPowerwell i should amend that statement to say all services/containers are off of it.17:33
lazyPoweryeah17:33
lazyPowerso long as there's containers on that metal it'll keep the machine around if you destroy && resolve spam it17:33
cholcombewell all he did was reboot the machine and it got stuck in this error state :-/17:34
lazyPowerjinkies17:34
lazyPowerif you recycle the agent it doesn't un-stick it?17:34
cholcombei'm not sure17:35
freak_either i destroy unit or destroy service17:35
freak_it remains stuck in hook failed state17:35
freak_doesn't get removed17:36
lazyPowerfreak_ - is there a relation in error state that is keeping it around?17:36
lazyPowerthats the likely culprit17:36
freak_the error is the failed "update-status" hook,,, otherwise no error17:36
freak_not even on a single link or relation17:37
beisnerjamespage, before we can do the charm upload in the flow though, i'd like to see the full set of commands known-to-work to do the upload and publish.  i've got some bits and pieces but not the whole picture i think.17:37
=== redir is now known as redir_afk
iceyfreak_: have you tried to mark it resolved? `juju resolved ceph-osd/0`17:51
freak_yes , i tried from GUI as well as from cli, but no use,,still in hook failed state17:52
freak_icey , it says : ERROR cannot set resolved mode for unit "ceph-osd/0": already resolved17:53
iceyfreak_: what juju version are you running?17:54
freak_icey ,  1.25.3-trusty-amd6417:54
marcoceppiurulama: yeah, we're still going to need a 30-day notice period before we actually kill ingestion17:56
iceylazyPower: how would freak_ go about recycling the agent?18:05
urulamamarcoceppi: well, if the old service is dead and we can't bring it back, then, ingestion dies with it18:09
marcoceppiurulama: this is less than ideal.18:14
urulamamarcoceppi: yep. we'll know when repairing process finishes18:15
marcoceppiurulama: thanks18:15
marcoceppiurulama: we'll have a public beta of new review queue out next week, if this comes back we can then do the 30 day phase out for the month of may18:16
urulamamarcoceppi: we'll see tomorrow EU early morning, and try to restore it asap ... but in the meantime, manual publishing is the only way18:19
marcoceppiurulama: ack18:20
lazyPowericey : there's a service entry for it - /etc/init/jujud-unit-<service>-<number>18:33
lazyPowericey: so stop/start that service18:34
iceyawesome, thanks lazyPower; freak_ have you tried that, and if not, can you please?18:34
freak_icey , here is the output of that unit file  http://paste.ubuntu.com/15954684/18:38
freak_can you please guide me further,,,18:39
lazyPowerfreak_ - service jujud-unit-ceph-osd-0 restart18:39
freak_......still in hook failed state18:45
lazyPowerfreak_ - you're only interested in destroying the ceph-osd service on the machine correct?18:49
freak_for the time being ..yes...main goal is to delete it and install ceph-osd again18:49
lazyPowerok, are you attached to debug-hooks on the unit?18:49
lazyPowerif not, can you be?18:50
ReSamI'm trying to deploy ceph-radosgw to a container - but it is stuck at: "Waiting for agent initialization to finish" - how can I trigger it?18:50
freak_you mean run the command juju debug-hooks?18:50
freak_it simply takes me to the node 018:50
lazyPowerfreak_ - yes, juju debug-hooks ceph-osd/018:50
freak_that takes me to the cli of node 018:51
lazyPowerfreak_ - now, cycle the juju agent, and you should be placed in another tmux session with hook context.18:51
lazyPoweror juju resolved --retry ceph-osd/018:51
=== dpm is now known as dpm-mostly-afk
freak_tried both......18:53
freak_http://paste.ubuntu.com/15954803/18:54
freak_http://paste.ubuntu.com/15954807/18:54
lazyPowerfreak_ no change in the debug-hooks session?19:10
freak_no, the last output was of start and stop which i shared19:11
freak_after this no msg appeared19:11
lazyPowerI'm not certain what to recommend then if you cannot trap a hook, or resolve the service's hooks getting caught in error until it successfully removes.19:12
lazyPowerfreak_ - it would be helpful to collect logs on this and file a bug. icey if you have any further things to try?19:12
iceylazyPower: freak_ I don't have any more ideas, agree with lazyPower about grabbing logs and opening a bug19:13
freak_let me share log with you guys regarding this service19:13
freak_may be that can help19:13
=== matthelmke is now known as matthelmke-afk
beisnerdosaboy, i suspect cli difference for infernalis radosgw-admin cmds in the ceph-radosgw amulet test.  http://pastebin.ubuntu.com/15955129/   re: https://review.openstack.org/#/c/308339/19:16
beisnercholcombe, icey fyi ^19:16
iceybeisner: dosaboy that wouldn't surprise me, we're working on something that'll let us stop using the cli so much to send these commands to ceph19:17
beisnerdosaboy, frankly that cmd isn't used to inspect anything (Yet), the test is just checking that it returns zero.  i'd be ok with rm'ing that from the test.19:17
beisnervia #comment for later revisiting of course19:18
freak_icey, lazyPower , check these logs ubuntu@node0:/var/log/juju$ sudo cat unit-ceph-osd-0.log  2016-04-17 16:09:50 INFO juju.cmd supercommand.go:37 running jujud [1.25.5-trusty-amd64 gc] 2016-04-17 16:09:50 DEBUG juju.agent agent.go:482 read agent config, format "1.18" 2016-04-17 16:09:50 INFO juju.jujud unit.go:122 unit agent unit-ceph-osd-0 start (1.25.5-trusty-amd64 [gc]) 2016-04-17 16:09:50 INFO juju.network network.go:242 setting pr19:18
freak_https://gist.github.com/anonymous/e82d5d143f67397d2f6667e1313776b119:19
lazyPowersimonklb - Just landed a fix for #46, that should unblock you :)19:20
beisnerdosaboy, fyi imagonna ++patchset on your change and recheck full19:27
arosaleskwmonroe: did you have a bundle you wanted some extra testing on?19:29
* arosales has aws and lxd on beta5 bootstrapped atm19:29
beisnericey, http://docs.ceph.com/docs/infernalis/man/8/radosgw-admin/19:31
beisneraccording to that `radosgw-admin regions list` should still be valid19:32
beisnerare the docs stale?19:32
beisneror is our binary fuxord?19:33
beisnerseems one of the two19:33
kwmonroeyeah arosales!  if you don't mind, could you fire this off on lxd? https://jujucharms.com/u/bigdata-dev/bigtop-processing-mapreduce/bundle/119:36
kwmonroewe had to work around some networking issues and i'm curious how the slave->namenode relation works on lxd.19:37
arosalessure, waiting for realtime-syslog-analytics to come back, and then I'll fire off that bundle19:37
kwmonroethanks!19:37
arosaleskwmonroe: thanks for the bundle :-)19:37
iceythey could be, we found an issue with the erasure coded pool stuff a few days ago19:38
beisnericey, ack.  deploying, will poke at it first hand19:38
beisner-> back in a bit19:38
=== matthelmke-afk is now known as matthelmke
beisnericey, yah that ceph manpage is cruft.  http://pastebin.ubuntu.com/15955807/20:08
beisnerand i always chuckle at `radosgw-admin bucket list`20:09
=== scuttlemonkey is now known as scuttle|afk
iceybeisner: that doesn't surprise me20:17
beisnericey, bucket list appears to be broken in infernalis as well20:19
beisner2016-04-20 20:15:37.510473 7f12f1ed1900  0 RGWZoneParams::create(): error creating default zone params: (17) File exists20:19
beisnericey, dosaboy - i suspect the infernalis packages revved or landed -after- the ceph-radosgw mitaka amulet tests initially passed their fulls.20:20
iceybeisner: xenial/mitaka is not jewel20:20
beisnericey, yah.  i'm talking i here.20:21
iceynow*20:21
beisnerit was jewel when these tests initially flew though ,yah?20:21
beisneror hammer or some cephy thing ;-)20:22
beisneradded patchset @ https://review.openstack.org/#/c/308339/20:24
beisnerthat passes in a one-off run, we'll let it churn on a full rerun20:24
iceyprobably infernalis, if it was over a week ago20:41
jamespagebeisner, rockstar: can you take a look at https://review.openstack.org/#/c/30848920:44
jamespagebeisner, icey: xenial/mitaka has been in jewel rc's for several weeks now20:44
jamespageits possible an rc changed behaviour...20:44
beisnerjamespage, yah but ceph-radosgw full amulet hasn't had a reason to run in that timeframe i think.  and i wasn't forcing full reruns out of the blue yet (we are now).20:45
jamespagebeisner, yup20:45
jamespagequite...20:45
beisnerspecial.  :-)20:45
beisneryay for ++unit tests jamespage20:46
jamespagebeisner, always like to add a test...20:47
jamespageor three :-)20:47
jamespagebeisner, tested OK  for multiple config-changed executions post deployment...20:51
beisnerokie cool20:54
=== urulama is now known as urulama|____
arosalesso I am being dense here, but what's the syntax to juju deploy <bundle> in a personal namespace?21:11
* arosales looking at https://jujucharms.com/docs/devel/charms-deploying21:11
arosalesmy combinations are working atm, http://paste.ubuntu.com/15956415/21:11
arosalessorry my combinations aren't working atm, http://paste.ubuntu.com/15956415/21:13
beisnericey, onto the next fail on rgw test_402.  seems like swift client usage adjustments will be necessary21:13
arosaleshmm juju deploy https://jujucharms.com/u/bigdata-dev/bigtop-processing-mapreduce/bundle/1 seemed to do the trick21:15
jamespagerockstar, remind me to do a minor refactor on the amulet test bundle for lxd...21:17
jamespagebut later.. not today..21:17
rockstarjamespage: specifics?21:17
jamespagerockstar, well its testing using nova-network21:18
jamespagerockstar, thats crazy talk...21:18
jamespage:-)21:18
rockstarjamespage: heh21:18
jamespagethat said it does avoid the need for another two services...21:19
beisnerjamespage, yes it needs neutron foo.  that was my january here's-something instead of nothing test, which is intended to be extended with network and an ssh check to the fired up instances.21:22
beisnerjamespage, nova-compute amulet needs the same love21:23
jamespagebeisner, okies...21:25
jamespageadd them to the list of debts....21:25
=== blr_ is now known as blr
jamespagebeisner, I need to crash - can you keep an eye on https://review.openstack.org/#/c/308489/21:31
jamespageI did a recheck - the amulet test failure appears unrelated to my changes, and not proposing to fix the world tonight...21:32
beisnerjamespage ack will do & thx21:36
thedacOSCI +1'ed. Anyone for a review https://review.openstack.org/#/c/307492/22:02
thedacwolsen: dosaboy: This is in your interests ^^22:02
thedacbeisner: jamespage: ^^ Really anyone :)22:08
LiftedKiltwill model creation via juju-gui still be broken in tomorrow's release?22:09
beisnerwolsen can you give that a review plz?  tia.  & thx thedac22:25
wolsenthedac, do I missed that comment - beisner will do22:26
wolsens/do/doh22:26
thedacwolsen: thanks22:26
thedacwolsen: it is just reverting a mistaken change from earlier22:27
wolsenthedac: ok22:27
wolsenthedac, I'm slow but its +2 +W22:36
thedacwolsen: sweet. Thank you.22:37
wolsenthedac, oh no - its a thank you22:37
thedacheh22:37
thedacTrue that would have been a problem22:37
wolsenlol22:38
arosaleskwmonroe: the bigtop-processing-mapreduce bundle is looking good on AWS on juju 2.0 beta522:56
arosaleshttp://paste.ubuntu.com/15957451/22:56
arosaleskwmonroe: I can't get  lxd to give me machines in 2.0-beta5 so I am still testing there22:56
arosaleskwmonroe: juju fetch <unit-id> should return additional information correct?22:56
