[00:39] charmstore seems to be down
[00:39] and the openstack bundle doesn't seem to work because of a keystone charm issue
=== natefinch is now known as natefinch-afk
[00:44] firl: it appears to be working for me - https://jujucharms.com/q/?text=apache does this not work for you?
[00:45] nope http://picpaste.com/Screen_Shot_2016-04-19_at_7.44.25_PM-y01ZdjPE.png
[00:45] hatch
[00:45] huh....interesting
[00:45] yeah, thought it was weird also
[00:45] one moment I'll try from another network
[00:46] alright I was able to get the failure
[00:47] odd it doesn't fail for me
[00:47] one moment I'll look into this
[00:47] ok
[00:47] I can proxy through a diff net and help diagnose if you want
[00:48] it appears to actually be down here now too
[00:48] hah ok
[00:48] hope I didn't break it
[00:48] ;)
[00:48] yup all you!
[00:48] haha
[00:48] ;)
[00:49] for the keystone / OS bundle issue, should I just mail the juju list or open a bug with the system directly?
[00:50] Up to you, I'd probably create the bug then email the list :)
[00:50] kk
[00:56] firl: back up
[00:56] it's
[00:56] sweet
[00:56] "FATAL ERROR: Could not determine OpenStack codename for version 8.1.0" - just creating a bug for this now
[01:36] hi firl
[01:36] there was a keystone SRU (package update) which requires an accompanying charm upgrade for keystone
[02:00] fyi, updated bug 1572358 re: keystone 8.1 SRU info
[02:00] Bug #1572358: keystone FATAL ERROR: Could not determine OpenStack codename for version 8.1.0
[02:28] thanks beisner
[02:28] so I just need to update the bundle to 253?
[02:28] firl, yw. fwiw, we've got an openstack-base bundle update in flight.
[02:29] yep for exactly that
[02:29] ahh ok
[02:29] that makes much more sense
[02:30] thanks, I really appreciate it
[02:32] firl, happy to help. thanks for raising that. SRUs can have domino effects. ideally, we'd have had that bundle revved before the package landed.
[02:41] bdx: I updated wal-e in my PPA to the newly released 0.9.1, which might help your radosgw issue. Although (assuming it isn't just ceph/radosgw configuration and admin issues) the more critical piece is probably what version of python-swiftclient is installed.
=== natefinch-afk is now known as natefinch
[05:11] can someone in ~openstack-charmers-next push an update on LP? I'd like to see if an issue in the old charm store is resolved.
[05:11] jamespage: ^
[05:12] urulama: jamespage was up pretty late, I wouldn't expect a ping back for a few hours
[05:12] marcoceppi: i guess any LP change in ~charmers would do as well
[07:38] urulama, morning
[07:39] jamespage: morning
[07:40] jamespage: we've had some issues with the legacy charm store, the service was down over the weekend until yesterday, and it seems it's still not ingesting charms. if you can update a charm in ~openstack-charmers-next, it'll be easier to track what's going on
[07:41] urulama, ack - will do shortly - tricky to update directly, otherwise I'll break the whole reverse mirror from github.com
[07:42] jamespage: no need to do it directly, just so that at least one charm gets updated also on LP
[07:42] jamespage: github + sync to LP is just fine
[07:43] gnuoy, we need to decide what to do re bug 1570960
[07:43] Bug #1570960: ceph-osd status blocked on block device detection
[07:44] I think revert is the correct course of action today
[07:44] ok
[07:47] gnuoy, dealing with that now
[07:53] jamespage, am I going mad or has the nic device naming just changed in mitaka?
[07:53] gnuoy, ?
[07:54] mitaka or xenial?
[07:54] jamespage, sorry, yes, xenial
[07:54] gnuoy, it's done the not-eth thing for a while now
[07:54] gnuoy, MAAS however remaps things back to eth-based naming for consistency...
[07:55] jamespage, so how have any of the mojo specs been working then? They juju set ext-port to eth1
[07:55] gnuoy, well I think it is still eth1 on a cloud instance
[07:55] jamespage, ah! it isn't now!
[07:55] gnuoy, oh fantastic...
[07:56] oh...fiddlesticks
[07:59] gnuoy, updated cholcombe broken revert - https://review.openstack.org/#/c/308057/
[07:59] ack
=== zz_CyberJacob is now known as CyberJacob
[08:07] morning gnuoy, jamespage
[08:07] hi tinwood
[08:08] gnuoy, do you know much about ceph-radosgw on mitaka? Having fixed the test 201 bug, I'm now hitting a mitaka-only ceph pool bug: https://pastebin.canonical.com/154738/ -- Any pointers or thoughts?
[08:11] tinwood, I'd start off by gathering what data has been set between the two services with relation-{ids,list,get}
[08:12] gnuoy, okay, will do.
[08:24] tinwood: that's odd since the .rgw pool should have been created by https://github.com/openstack/charm-ceph-radosgw/blob/master/hooks/ceph.py#L232
=== dosaboy_ is now known as dosaboy
[08:25] * tinwood is taking a look
[08:26] tinwood: unless infernalis is deleting that pool, but i'd be very surprised
[08:26] dosaboy, and it *only* happens on mitaka, in the amulet test.
[08:27] that's odd, let me kick off a mitaka run to see if I can flesh it out
[08:29] okay, but there's a fix needed in test 201 to get ceph-radosgw to pass: the test in test_201 needs to be flipped from if not any(ret) -> if any(ret). Thx!
[08:34] gnuoy, what's the scope of impact of this ens2 thing on our codebase?
[08:38] jamespage, thinking
[08:38] gnuoy, for context the image on the 12th did not do this, the one on the 17th did...
[08:39] gnuoy, also - https://review.openstack.org/#/c/308057/
[08:39] not proposing doing a full recheck for that one...
[08:49] jamespage, The mojo specs add a nic to the neutron-gateway and/or the nova-compute nodes to act as the external port. The tests assume that a) the ext port will be eth1 and b) the ext port will be named consistently across the units of a service. Neither of those are now true, so there is no access to guests that are booted as part of testing. The second nic I added during my last test came up as ens6. Our HA testing assumes that ha-bindiface is eth0, which is no longer true. It *may* be safe to assume that on xenial the first nic is ens2, but it doesn't feel like a safe assumption. I don't think amulet tests are affected as they don't set the ext port or perform ha deployments.
[08:50] We can work around the ext-port by having mojo figure out the ext ports and set it via MAC addresses
[08:50] I'm more worried about the HA testing tbh
[08:51] I guess we'd need to add an ha-bindiface: auto option
[08:51] Those are the impacts of the cloud instance change.
[08:52] I think users should be ok since MAAS and lxc use the old nic naming scheme, I believe?
[08:53] gnuoy, yeah - that at least is true
[08:53] good-ole maas
[08:57] gnuoy, confirmed - lxd also names eth0...
[08:57] tip top
[08:59] gnuoy, what have we not tested yet which will be impacted by this?
[09:00] gnuoy, just trying to figure out whether we can realistically release tomorrow with confidence...
[09:00] ha-bindiface: auto makes sense btw
[09:00] but not something we commit to for tomorrow...
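gnuoy's workaround above - have mojo resolve the external port from its MAC address rather than assuming eth1 - needs nothing more than /sys/class/net. A minimal sketch of that idea follows; the helper name and the example MAC are illustrative and not taken from the mojo specs:

```python
#!/usr/bin/env python
# Sketch: resolve an interface name (eth1, ens2, ens6, ...) from a MAC
# address instead of hard-coding eth1 as the external port.
import os

SYS_NET = '/sys/class/net'


def iface_for_mac(mac):
    """Return the interface whose MAC address matches, or None."""
    mac = mac.lower()
    for iface in os.listdir(SYS_NET):
        try:
            with open(os.path.join(SYS_NET, iface, 'address')) as f:
                if f.read().strip().lower() == mac:
                    return iface
        except IOError:
            continue  # some virtual devices have no address file
    return None


if __name__ == '__main__':
    # Hypothetical MAC of the nic that was added to act as the ext port.
    print(iface_for_mac('52:54:00:12:34:56'))
```

The resolved name could then be passed to juju set (e.g. ext-port=ens6) instead of a hard-coded eth1, which is roughly the approach gnuoy describes for the mojo specs.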
[09:04] jamespage, I don't know what's going on with xenial/mitaka ssl as someone has wiped the info from that cell, but other than that all xenial/mitaka scenarios have been tested and have either passed using master or have passed using in-flight fixes which have since landed. What I was hoping to do today was rerun the mitaka tests using only master.
[09:04] gnuoy, that's going to be tricky...
[09:05] jamespage, I think I can have an ext-port fix ready in ~90 mins for mojo. The ha tests I may just rerun taking a guess at the device name
[09:05] gnuoy, it does appear to be consistently ens2 in the deploy I just did
[09:05] ok, then that should be good enough
[09:09] pkg_resources.RequirementParseError: Invalid requirement, parse error at "'-lxc==0.'"
[09:09] Anyone seen anything like that?
[09:10] It's from tox setting up a venv so I can run unit tests in my built charm, so it is some dependency pulled in from a base layer, but I have no hint as to which.
[09:19] Hmm.... looks like it might be https://github.com/pypa/setuptools/issues/502
[09:31] Nope, same indecipherable traceback, different problem. Pinning setuptools doesn't help.
[10:12] urulama, ok a change has worked its way into the ~openstack-charmers-next branches...
[10:13] jamespage: ok, ty. we've identified the issue. it's a corrupted mongo db on the legacy charm store that is preventing any ingestion. we're working on it
[10:14] jamespage: worst case, if it can't get resolved, this means that charms will have to be pushed manually if they need to be in the store ... but we have a few more days left before ODS before such a resolution is required
[10:16] urulama, well...
[10:16] urulama, trunk charms go in the same way via ~openstack-charmers atm
[10:16] urulama, maybe we should make the switch now to using charm push/publish...
[10:17] give us a day, please, to see what can be done
[10:18] urulama, have a charm release to get out tomorrow :)
[10:19] jamespage: ok, if we don't resolve it in the next 6-8h, then that'll be the best option
[10:21] urulama, ok working a backup plan now then...
[10:26] tinwood: i've got a mitaka deploy and .rgw is missing, but unfortunately there is an issue with the broker logging which makes it hard to debug
[10:26] tinwood: https://bugs.launchpad.net/charms/+source/ceph/+bug/1572491
[10:26] Bug #1572491: ceph-broker printing incorrect pool info in logs
[10:26] tinwood: i'm gonna redeploy with ^^ to see what's going on
[10:28] dosaboy, ah, I see. That explains why I'm not really getting anywhere with it. I'll have a look at that too.
[10:31] urulama, did the bad backend for the charmstore get dropped yet? I'm still seeing a lot of unauthorized messages
[10:40] tinwood: found le problem - http://paste.ubuntu.com/15944935/
[10:42] dosaboy, interesting. is that fixed on mitaka but not previously?
[10:43] tinwood: i suspect the issue is that in previous releases of rgw that pool was autocreated by radosgw-admin on install
[10:44] but now it is not, so we finally hit the bug
[10:44] it's probably a pool that is not actually required by rgw, but it's still a bug in the charm code
[10:44] dosaboy, ah, I think I see. I'll stare at the code to get an understanding of that bit.
[10:44] tinwood: i'll submit a patch shortly
[10:46] jamespage, beisner I have a mp up for the mojo mitaka ext-port issue https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ext-port-rejig/+merge/292363 - it's marked as a wip until i've taken it for a spin
[10:47] gnuoy, ack
[10:54] tinwood: https://code.launchpad.net/~hopem/charm-helpers/lp1572506/+merge/292365
[10:59] dosaboy, +1
[10:59] dosaboy, can you land the charmhelper fix?
[10:59] You're a charmer, right?
[11:01] gnuoy: yeap
[11:01] dosaboy, that's great. I'll wait for the merge, and then submit my fix. Thanks for your help!
[11:20] jamespage: if you're working on using charm push/publish, please make sure it won't get a new revision every time the cron job runs
[11:26] urulama, no worries, we won't
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
[12:24] Anyone know what I'm doing wrong when using the docker layer? I'm getting this error when building the charm: OSError: Unable to load tactics.docker.DockerWheelhouseTactic
[12:24] I tried debugging it a bit and found that it's trying to load that tactic for both the docker layer and the basic layer
[12:25] So it's like it remembers tactics from earlier layers and tries to load those in the other layers as well
[12:49] dosaboy the ceph/ceph-osd merge that you just asked for a unit test for - there is a file full of tests for that function in charmhelpers...
[12:49] icey: yeah sorry, i revoked my comment after i realised it is a sync
[12:49] icey: but you have a new -1 re your commit message format :)
[12:50] dosaboy: yeah, working on it now :-)
[12:50] tx
[12:54] jamespage dosaboy commits are re-upped
[13:17] gnuoy, jamespage - wow yah that late eth dev naming thing in xenial is a real suckerpunch lol
[13:19] gnuoy, flipping osci to run on your branch for a few validation runs @ X
[13:19] thx for the mp
[13:19] beisner, ta
[13:24] beisner, I have bundles tested and ready for the charm store tomorrow for trusty-mitaka, xenial-mitaka and lxd
[13:24] lxd will need a tweak for devices landing
[13:24] but I'm blocked on charm store ingestion atm
[13:24] beisner, urulama is investigating but we may need to switch to charm push/publish for tomorrow - working that as a backup plan atm
[13:24] jamespage, ack. fyi we did update o-c-t alongside the gerrit review. and that's all passing a-ok.
[13:25] beisner, awesome
[13:25] beisner, I also managed to get Nova-LXD running on an all-in-one under LXD on my laptop....
[13:25] parity with KVM ftw...
[13:25] sweet
[13:26] beisner, do you run tests directly from lescina for the bare-metal lab, or do you just access MAAS directly from a bastion?
[13:26] jamespage, osci-slave/10 is pinned to the metal lab. i never do my jujuing from lescina directly fwiw
[13:27] beisner, that's probably wise...
[13:27] osci has its own oauth, etc
[13:28] jamespage, re: charm upload -- that'd put us one step closer to nixing LP branches and syncs
=== natefinch is now known as natefinch-afk
[13:29] beisner, yah
[13:29] we really could have done with series in metadata for this, but it's not the end of the world...
[13:30] jamespage, yah, the time box for doing that was between last Wednesday and now (i.e. 1.25.5's existence)
=== fginther` is now known as fginther
[13:56] dosaboy, do you have icey's reviews covered?
[13:58] jamespage: sure
[14:00] dosaboy: doing a recheck-full right now
[14:01] icey: 10-4
[14:07] what is the best way of accessing data provided by a relation interface?
[14:08] for example when I get config-changed and I want to use the relation data again to render the configuration
[14:10] is it possible to call the function that gets the interface data again somehow?
[14:11] or should I store the data?
[14:13] I have a charm that uses the keystone-admin interface to retrieve the credentials
[14:14] at the moment I use the keystone-admin interface to retrieve the credentials with the @when("identity-admin.connected") event
[14:15] but I'm not sure what the best practice would be to get the credentials again - for example when some other config values are changed and I need to render the configuration file
[14:22] anybody know if "scope" for juju interface layers has to match on both sides? I'd like to have the provide side "global" (same data sent/received to all services) but on the require side "SERVICE" seems like a better fit
[14:23] jacekn - doesn't have to be the same scope on opposing sides of the relationship
[14:24] thanks
[14:25] simonklb - interfaces provide some level of caching for you already. The conversation data is stored in unitdata, so if you're referencing the conversation scope directly, you can re-use those values, directly off the interface.
[14:25] simonklb - in other words, no need to maintain your own cache unless that's really what you want to do :)
[14:26] lazyPower: that's great!
[14:26] thanks
[14:26] btw, did you see what I wrote about the docker layer before?
[14:26] I'm not certain, it's been undergoing a lot of churn these days
[14:26] refresh me?
[14:27] basically when I try to build the charm using the docker layer I'm getting "OSError: Unable to load tactics.docker.DockerWheelhouseTactic"
[14:28] and from the looks of it the cause of the error is that it tries to load the docker tactics from the basic layer path as well
[14:28] it's very possible that I've just configured something incorrectly though
[14:34] simonklb nope :(
[14:34] you got bit by our latest work in master
[14:35] we were investigating a fix for offlining docker-compose and its required deps in the wheelhouse, but it requires a custom tactic, which has some bugs in the current version of charm-tools. Let me double check those commits were reverted
[14:36] lazyPower: ah, well that's something you have to deal with when you're living on the bleeding edge :)
[14:37] let me know if you know how to fix it, otherwise it would be great if I could follow an issue tracker somewhere so that I can be notified when it's fixed :)
[14:37] simonklb - yeah, but I really should have vetted that more before i threw that up in tip :) or i should adopt tagging / pointing interfaces.juju.solutions @ tags when i rev
[14:37] no worries!
[14:38] simonklb https://github.com/juju-solutions/layer-docker/issues/46
[14:38] gnuoy, https://review.openstack.org/#/c/308372/
[14:38] tested ok locally for me
[14:38] on xenial...
[14:38] I'll get a patch out for that after i drop out of this hangout
[14:38] | wsrep_incoming_addresses | 10.0.8.54:3306,10.0.8.105:3306,10.0.8.178:3306 |
[14:39] lazyPower: thanks!
[14:41] gnuoy, approved @ https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ext-port-rejig/+merge/292363
[14:41] thx again!
[14:42] np
[15:03] gnuoy: see my last post on https://bugs.launchpad.net/charms/+source/heat/+bug/1571830 Should we revert just the relation_set bits? It only affects upgrading from stable to next
[15:03] Bug #1571830: Heat has status blocked missing database relation
[15:05] or trigger a rerun of shared-db-joined?
[15:05] thedac, can we discuss after the standup? I'm failing to get my head around it atm
[15:05] sure, no problem
[15:12] lazyPower: my first attempt at using the unitdata wasn't successful - however I noticed that the @when("identity-admin.connected") function was invoked every time the "config-changed" event fired
[15:12] any idea why that is?
[15:14] if the state is set, the decorated method will be triggered
[15:14] thedac, that ha change also broke IPv6 support...
[15:15] jamespage: ok, I'll check that as well
[15:16] lazyPower: does this happen every time a hook is fired? or what makes it trigger the decorated functions?
[15:16] is there some periodic interval that executes the current state over and over?
[15:17] simonklb - so in reactive, the idea is to work with synthetic states. There's a bus that executes every decorated method at least once, and if the state changes - the bus re-evaluates and re-calculates what needs to be executed
[15:17] so yeah, it's basically an event loop that knows when its states have changed, and then inserts/removes methods to be invoked appropriately.
[15:18] when it's executed everything it had in its execution list, it ends context until the next hook is invoked
[15:19] simonklb - did that help clarify?
[15:19] I see, do you still think I should use the cached values or would it be ok to simply use the fact that it will be in the state where the relation data can be fetched every time?
[15:19] the latter. Always opt for pulling the data right off the wire vs caching. Caching means you're responsible for keeping your cache of a cache in sync
[15:20] as we don't really go over the wire anymore :) it's all serialized in unitdata in the conversation objects.
[15:23] great, thanks!
[15:23] I'm starting to see the reasoning behind getting away from hooks and only working with states
[15:26] you probably want to keep track of data changing or not though, so that you don't have to reconfigure your charms every time?
[15:27] there's a decorator for charm config, but relation data... i don't think we have any indicators that the relation data has changed, no
[15:27] but could the unitdata be used for that?
[15:27] comparing the previous relation data to the current one?
[15:27] it could. cory_fu may already be tracking if relation data has changed in the conversation bits
[15:29] simonklb: I would recommend using data_changed: https://pythonhosted.org/charms.reactive/charms.reactive.helpers.html#charms.reactive.helpers.data_changed
[15:29] cory_fu: awesome, thanks
[15:29] jamespage, I see your percona change has Canonical CI +1, I feel like that's good enough to land it.
[15:30] gnuoy, yes
[15:30] agreed
[15:30] kk
[15:38] lazyPower: It's not tracked automatically, because we expect the interface layers to provide an API on top of the raw relation data, and the data from the API is what should be passed in to data_changed
[15:50] hi
[15:50] hi everyone
[15:50] yesterday i raised the issue regarding ceph-osd
[15:50] any resolution on that yet?
[15:51] upon startup ceph-osd is showing hook failed: update-status
[16:08] lazyPower: Still having issues with that custom tactic, eh?
[16:08] i haven't had a chance to circle back to ben's patch
[16:09] ah
[16:11] aisrael: are you also going to patch https://github.com/juju/charm-tools/issues/190 ?
[16:11] marcoceppi: yup, I'll take it
[16:44] gnuoy, beisner: does this - http://paste.ubuntu.com/15952999/ - look like a reasonable set of changes for the stable branches post release?
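Pulling together the reactive advice from earlier in this hour - lazyPower's "read it off the interface, don't keep your own cache" and cory_fu's data_changed pointer - a minimal sketch of simonklb's keystone-admin case might look like the following. The credentials() accessor is an assumption about what the interface layer exposes; the real attribute names may differ:

```python
# reactive/mycharm.py - sketch only; not the actual keystone-admin interface API
from charms.reactive import when
from charms.reactive.helpers import data_changed
from charmhelpers.core import hookenv


@when('identity-admin.connected')
def write_config(keystone):
    # Pull the credentials straight off the interface every time the
    # handler runs, rather than maintaining a private cache of them.
    creds = keystone.credentials()  # assumed accessor name

    # Re-render only if the relation data or the charm config actually
    # changed since the last hook invocation.
    if data_changed('identity-admin.creds', creds) or \
            data_changed('mycharm.config', dict(hookenv.config())):
        render_config(creds)


def render_config(creds):
    # Placeholder for whatever templating the charm really does.
    hookenv.log('re-rendering configuration with keys: %s' % sorted(creds))
```

Because the identity-admin.connected state stays set, this handler runs on later hooks too (including config-changed), which is the behaviour simonklb observed; the data_changed guards keep that from causing needless rewrites.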
[16:45] jamespage, ack, those are the stable bits to flip wrt c-h sync and amulet tests
[16:45] beisner, so creating the stable branches does not require a review
[16:45] beisner, so I think our process looks like
[16:45] 1) create branches
[16:45] 2) resync git -> bzr
[16:46] 3) propose stable updates as above
[16:46] they can work through over time...
[16:46] jamespage, feel like slipping in a rm tests/*junu* ?
[16:46] juno of course
[16:46] beisner, did we not do that already?
[16:46] we've only blacklisted it from running. we've not done a full sweep
[16:47] of reviews
[16:47] beisner, ah
[16:50] beisner, did you see my comments re not re-using the current 'stable' branch
[16:50] but going for a 'stable/16.04' branch instead
[16:50] jamespage, yah. we'll have some automation and helper updates to make for that i think.
[16:50] beisner, yes
[16:50] right now we're binary on stable||master
[16:51] beisner, yup
[16:51] beisner, I did have some thoughts on how we might make this easier
[16:51] beisner, but they rely on us moving the push to the charmstore post-commit for all branches...
[16:51] which we should do anyway
[16:52] beisner, for now /next and /trunk charm branches on launchpad will remain analogous to master||stable
[16:52] ++charm-pusher to our list of things we need, and we shall be wise to work in catch/retry/thresholds in the upload/publish cmds.
[16:53] jamespage, we can trigger that as a new job when gerrit 'foo merged' messages are heard.
[16:53] so rather than a full-pass scheduled thing, it'll just tack onto the end of the dev process
[16:55] beisner, that would be ++ for me
[16:57] jamespage, beisner: so, we're restoring the old broken DB for the legacy store, but that'll take 10h it seems, so even if that works (and it's not certain), ingestion will only start tomorrow again
[16:57] urulama, ok
[16:57] urulama, will check in first thing and then make a decision on process for our charm release...
[16:58] jamespage, beisner: but everything seems borked, so we redirected the legacy charm store to jujucharms.com and turned off ingestion. it just might be its end, just a bit too soon
[16:58] jamespage: ok, will let you know first thing in the morning
=== natefinch-afk is now known as natefinch
=== scuttle|afk is now known as scuttlemonkey
[17:19] hey, I upgraded my laptop to xenial, and have aliased juju-1 to juju since I have scripts that call juju commands (and I use mojo)
[17:19] when I ran a spec, I got an error message about there being no current controller
[17:19] I think that would be something in juju-2, right?
[17:19] I guess having a zsh alias isn't going to work
[17:20] is there a proper workaround for this?
[17:20] instead of my hacky one
[17:21] will model creation via the gui be fixed in tomorrow's release?
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
[17:28] how do you force remove a service in a failed status without destroying the machine?
[17:29] freak_ is wondering this ^^
[17:29] :)
[17:32] cholcombe - that's not so easy to do. best bet is to juju destroy-service and juju resolved the failures all the way down
[17:32] lazyPower, yeah I know this is tricky
[17:32] there's a reaping setting on machines too, i believe by default it will reap the machine once the service is off it
[17:32] lazyPower, freak_ has a deploy with several lxc's on metal
[17:33] i'd love to say force destroy the machine, but that will cause collateral damage
=== scuttle|afk is now known as scuttlemonkey
[17:33] well, i should amend that statement to say once all services/containers are off of it.
[17:33] yeah
[17:33] so long as there's containers on that metal it'll keep the machine around if you destroy && resolve spam it
[17:34] well, all he did was reboot the machine and it got stuck in this error state :-/
[17:34] jinkies
[17:34] if you recycle the agent it doesn't un-stick it?
[17:35] i'm not sure
[17:35] either i destroy unit or destroy service
[17:35] it remains stuck in hook failed state
[17:36] doesn't get removed
[17:36] freak_ - is there a relation in error state that is keeping it around?
[17:36] that's the likely culprit
[17:36] the error is the hook failed "update status"... otherwise no error
[17:37] not even on a single link or relation
[17:37] jamespage, before we can do the charm upload in the flow though, i'd like to see the full set of commands known-to-work to do the upload and publish. i've got some bits and pieces but not the whole picture i think.
=== redir is now known as redir_afk
[17:51] freak_: have you tried to mark it resolved? `juju resolved ceph-osd/0`
[17:52] yes, i tried from the GUI as well as from the cli, but no use... still in hook failed state
[17:53] icey, it says: ERROR cannot set resolved mode for unit "ceph-osd/0": already resolved
[17:54] freak_: what juju version are you running?
[17:54] icey, 1.25.3-trusty-amd64
[17:56] urulama: yeah, we're still going to need a 30-day period of notice before we actually kill ingestion
[18:05] lazyPower: how would freak_ go about recycling the agent?
[18:09] marcoceppi: well, if the old service is dead and we can't bring it back, then ingestion dies with it
[18:14] urulama: this is less than ideal.
[18:15] marcoceppi: yep. we'll know when the repair process finishes
[18:15] urulama: thanks
[18:16] urulama: we'll have a public beta of the new review queue out next week, if this comes back we can then do the 30 day phase out for the month of may
[18:19] marcoceppi: we'll see tomorrow EU early morning, and try to restore it asap ... but in the meantime, manual publishing is the only way
[18:20] urulama: ack
[18:33] icey: there's a service entry for it - /etc/init/jujud-unit--
[18:34] icey: so stop/start that service
[18:34] awesome, thanks lazyPower; freak_ have you tried that, and if not, can you please?
[18:38] icey, here is the output of that unit file http://paste.ubuntu.com/15954684/
[18:39] can you please guide me further...
[18:39] freak_ - service jujud-unit-ceph-osd-0 restart
[18:45] ......still in hook failed state
[18:49] freak_ - you're only interested in destroying the ceph-osd service on the machine, correct?
[18:49] for the time being ..yes... main goal is to delete it and install ceph-osd again
[18:49] ok, are you attached to debug-hooks on the unit?
[18:50] if not, can you be?
[18:50] I'm trying to deploy ceph-radosgw to a container - but it is stuck at: "Waiting for agent initialization to finish" - how can I trigger it?
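The manual recovery loop suggested to freak_ above - restart the unit's jujud service, then ask juju to retry the failed hook - can be scripted. A rough sketch, assuming a juju 1.25 machine where the agent is the upstart job jujud-unit-ceph-osd-0 (the restart step has to run on the machine hosting the unit); only commands quoted in the discussion are used, and the unit name is an example:

```python
#!/usr/bin/env python
# Rough automation of the manual recovery steps discussed above.
import subprocess
import time

UNIT = 'ceph-osd/0'
SERVICE = UNIT.split('/')[0]
AGENT_JOB = 'jujud-unit-' + UNIT.replace('/', '-')  # jujud-unit-ceph-osd-0


def run(cmd):
    print('+ ' + ' '.join(cmd))
    return subprocess.call(cmd)


if __name__ == '__main__':
    # 1. Recycle the unit agent (assumes this runs on the unit's machine).
    run(['sudo', 'service', AGENT_JOB, 'restart'])
    time.sleep(10)

    # 2. Ask juju to re-run the failed hook.
    run(['juju', 'resolved', '--retry', UNIT])

    # 3. Show the service status afterwards for inspection.
    run(['juju', 'status', SERVICE])
```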
[18:50] you mean run the command juju debug-hooks
[18:50] it simply takes me to the node 0
[18:50] freak_ - yes, juju debug-hooks ceph-osd/0
[18:51] that takes me to the cli of node 0
[18:51] freak_ - now, cycle the juju agent, and you should be placed in another tmux session with hook context.
[18:51] or juju resolved --retry ceph-osd/0
=== dpm is now known as dpm-mostly-afk
[18:53] tried both......
[18:54] http://paste.ubuntu.com/15954803/
[18:54] http://paste.ubuntu.com/15954807/
[19:10] freak_ no change in the debug-hooks session?
[19:11] no, the last output was of start and stop, which i shared
[19:11] after this no msg appeared
[19:12] I'm not certain what to recommend then if you cannot trap a hook, or resolve the service's hooks getting caught in error until it successfully removes.
[19:12] freak_ - it would be helpful to collect logs on this and file a bug. icey if you have any further things to try?
[19:13] lazyPower: freak_ I don't have any more ideas, agree with lazyPower about grabbing logs and opening a bug
[19:13] let me share the log with you guys regarding this service
[19:13] maybe that can help
=== matthelmke is now known as matthelmke-afk
[19:16] dosaboy, i suspect a cli difference for infernalis radosgw-admin cmds in the ceph-radosgw amulet test. http://pastebin.ubuntu.com/15955129/ re: https://review.openstack.org/#/c/308339/
[19:16] cholcombe, icey fyi ^
[19:17] beisner: dosaboy that wouldn't surprise me, we're working on something that'll let us stop using the cli so much to send these commands to ceph
[19:17] dosaboy, frankly that cmd isn't used to inspect anything (yet), the test is just checking that it returns zero. i'd be ok with rm'ing that from the test.
[19:18] via #comment for later revisiting of course
[19:18] icey, lazyPower, check these logs:
ubuntu@node0:/var/log/juju$ sudo cat unit-ceph-osd-0.log
2016-04-17 16:09:50 INFO juju.cmd supercommand.go:37 running jujud [1.25.5-trusty-amd64 gc]
2016-04-17 16:09:50 DEBUG juju.agent agent.go:482 read agent config, format "1.18"
2016-04-17 16:09:50 INFO juju.jujud unit.go:122 unit agent unit-ceph-osd-0 start (1.25.5-trusty-amd64 [gc])
2016-04-17 16:09:50 INFO juju.network network.go:242 setting pr
[19:19] https://gist.github.com/anonymous/e82d5d143f67397d2f6667e1313776b1
[19:20] simonklb - Just landed a fix for #46, that should unblock you :)
[19:27] dosaboy, fyi i'm gonna ++patchset on your change and recheck full
[19:29] kwmonroe: did you have a bundle you wanted some extra testing on?
[19:29] * arosales has aws and lxd on beta5 bootstrapped atm
[19:31] icey, http://docs.ceph.com/docs/infernalis/man/8/radosgw-admin/
[19:32] according to that `radosgw-admin regions list` should still be valid
[19:32] are the docs stale?
[19:33] or is our binary fuxord?
[19:33] seems one of the two
[19:36] yeah arosales! if you don't mind, could you fire this off on lxd? https://jujucharms.com/u/bigdata-dev/bigtop-processing-mapreduce/bundle/1
[19:37] we had to work around some networking issues and i'm curious how the slave->namenode relation works on lxd.
[19:37] sure, waiting for realtime-syslog-analytics to come back, and then I'll fire off that bundle
[19:37] thanks!
[19:37] kwmonroe: thanks for the bundle :-)
[19:38] they could be, we found an issue with the erasure coded pool stuff a few days ago
[19:38] icey, ack. deploying, will poke at it first hand
[19:38] -> back in a bit
=== matthelmke-afk is now known as matthelmke
[20:08] icey, yah that ceph manpage is cruft. http://pastebin.ubuntu.com/15955807/
[20:09] and i always chuckle at `radosgw-admin bucket list`
=== scuttlemonkey is now known as scuttle|afk
[20:17] beisner: that doesn't surprise me
[20:19] icey, bucket list appears to be broken in infernalis as well
[20:19] 2016-04-20 20:15:37.510473 7f12f1ed1900 0 RGWZoneParams::create(): error creating default zone params: (17) File exists
[20:20] icey, dosaboy - i suspect the infernalis packages revved or landed -after- the ceph-radosgw mitaka amulet tests initially passed their fulls.
[20:20] beisner: xenial/mitaka is not jewel
[20:21] icey, yah. i'm talking now here.
[20:21] it was jewel when these tests initially flew though, yah?
[20:22] or hammer or some cephy thing ;-)
[20:24] added patchset @ https://review.openstack.org/#/c/308339/
[20:24] that passes in a one-off run, we'll let it churn on a full rerun
[20:41] probably infernalis, if it was over a week ago
[20:44] beisner, rockstar: can you take a look at https://review.openstack.org/#/c/308489
[20:44] beisner, icey: xenial/mitaka has been in jewel rc's for several weeks now
[20:44] it's possible an rc changed behaviour...
[20:45] jamespage, yah, but ceph-radosgw full amulet hasn't had a reason to run in that timeframe i think. and i wasn't forcing full reruns out of the blue yet (we are now).
[20:45] beisner, yup
[20:45] quite...
[20:45] special. :-)
[20:46] yay for ++unit tests jamespage
[20:47] beisner, always like to add a test...
[20:47] or three :-)
[20:51] beisner, tested OK for multiple config-changed executions post deployment...
[20:54] okie cool
=== urulama is now known as urulama|____
[21:11] so I am being dense here, but what's the syntax to juju deploy in a personal namespace?
[21:11] * arosales looking at https://jujucharms.com/docs/devel/charms-deploying
[21:11] my combinations are working atm, http://paste.ubuntu.com/15956415/
[21:13] sorry, my combinations aren't working atm, http://paste.ubuntu.com/15956415/
[21:13] icey, onto the next fail on rgw test_402. seems like swift client usage adjustments will be necessary
[21:15] hmm, juju deploy https://jujucharms.com/u/bigdata-dev/bigtop-processing-mapreduce/bundle/1 seemed to do the trick
[21:17] rockstar, remind me to do a minor refactor on the amulet test bundle for lxd...
[21:17] but later.. not today..
[21:17] jamespage: specifics?
[21:18] rockstar, well it's testing using nova-network
[21:18] rockstar, that's crazy talk...
[21:18] :-)
[21:18] jamespage: heh
[21:19] that said, it does avoid the need for another two services...
[21:22] jamespage, yes it needs neutron foo. that was my january here's-something-instead-of-nothing test, which is intended to be extended with network and an ssh check to the fired up instances.
[21:23] jamespage, nova-compute amulet needs the same love
[21:25] beisner, okies...
[21:25] add them to the list of debts....
=== blr_ is now known as blr
[21:31] beisner, I need to crash - can you keep an eye on https://review.openstack.org/#/c/308489/
[21:32] I did a recheck - the amulet test failure appears unrelated to my changes, and I'm not proposing to fix the world tonight...
[21:36] jamespage ack, will do & thx
[22:02] OSCI +1'ed. Anyone for a review? https://review.openstack.org/#/c/307492/
[22:02] wolsen: dosaboy: This is in your interests ^^
[22:08] beisner: jamespage: ^^ Really anyone :)
[22:09] will model creation via juju-gui still be broken in tomorrow's release?
[22:25] wolsen can you give that a review plz? tia. & thx thedac
[22:26] thedac, doh - I missed that comment - beisner will do
[22:26] wolsen: thanks
[22:27] wolsen: it is just reverting a mistaken change from earlier
[22:27] thedac: ok
[22:36] thedac, I'm slow but it's +2 +W
[22:37] wolsen: sweet. Thank you.
[22:37] thedac, oh no - it's a thank you
[22:37] heh
[22:37] True, that would have been a problem
[22:38] lol
[22:56] kwmonroe: the bigtop-processing-mapreduce bundle is looking good on AWS on juju 2.0 beta5
[22:56] http://paste.ubuntu.com/15957451/
[22:56] kwmonroe: I can't get lxd to give me machines in 2.0-beta5 so I am still testing there
[22:56] kwmonroe: juju fetch should return additional information, correct?
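For spot checks like arosales' bundle runs at the end of the log, deployment state can also be polled programmatically instead of eyeballing juju status output. A small sketch that shells out to juju status --format=json; juju 1.x and the 2.0 betas emit differently shaped documents, so the key lookups below are deliberately defensive and illustrative only:

```python
#!/usr/bin/env python
# Illustrative check: summarise unit states from `juju status --format=json`.
import json
import subprocess


def juju_status():
    out = subprocess.check_output(['juju', 'status', '--format=json'])
    return json.loads(out.decode('utf-8'))


def unit_states(status):
    """Yield (unit, state); key names differ between juju 1.x and 2.x."""
    services = status.get('services') or status.get('applications') or {}
    for svc in services.values():
        for name, unit in (svc.get('units') or {}).items():
            workload = unit.get('workload-status') or {}
            yield name, workload.get('current') or unit.get('agent-state', 'unknown')


if __name__ == '__main__':
    for unit, state in sorted(unit_states(juju_status())):
        print('%-40s %s' % (unit, state))
```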