=== redir is now known as redir_eod
=== alexisb-afk is now known as alexisb
=== natefinch-afk is now known as natefinch
=== cos1 is now known as c0s
=== menn0 is now known as menn0-afk
=== mup_ is now known as mup
[08:43] gnuoy, now that juju 1.25.5 is released we could also add series data to the metadata.yaml
[08:43] kk
[09:24] jamespage, got time for https://review.openstack.org/#/c/304193/ and https://review.openstack.org/#/c/304513/ ?
[09:26] gnuoy, +2'ed first; second still needed a recheck-full so +1 with comment
[09:27] thanks
[09:27] gnuoy, could you look at https://review.openstack.org/#/q/topic:shared-db-spaces for the canonical ci +1'ed ones please
[09:27] sure
[09:28] gnuoy, they are pretty much copy/pastes...
[09:31] jamespage, are you still planning to review https://review.openstack.org/#/c/299669/ for thedac ?
[09:31] gnuoy, I have some drafts pending...
[09:33] gnuoy, OK so I'm happy with the way we're installing the profiles...
[09:34] gnuoy, my comments are all on the profiles themselves
=== cargonza_ is now known as cargonza
[10:05] jamespage, could you look at https://review.openstack.org/#/q/topic:pre1604chsync for the canonical ci +1'ed ones please?
[10:08] gnuoy, ok all done where passing
[10:08] thanks
[10:08] gnuoy, it might be nice to try to move to a global-requirements approach for the charms
[10:08] where we maintain the list of versions centrally; and we get auto-resyncs when that changes
[10:09] I think the openstack projects do much the same - worth digging into next cycle...
[10:09] defo
[10:16] gnuoy, getting there is odl-controller - the config overrides for the single test were not quite right
[10:16] gnuoy, https://github.com/openstack-charmers/bot-control for future reference
[10:17] ah, ok
[10:26] jamespage, fwiw https://review.openstack.org/#/q/topic:pbr-reqs is wandering toward the gate atm
[10:26] gnuoy, awesome
[10:33] gnuoy, I've kicked off a full recheck but https://review.openstack.org/#/c/304726/ would be ready for general review...
[10:42] gnuoy, pbr ones are good to go on a unit test only - but I will wait for UOSCI to confirm that as well
[10:42] so we wait for smokes...
[10:51] gnuoy, beisner: I've seen the occasional
[10:51] 10:45:44 WARNING:root:SSH command check FAIL for 172.17.114.66
[10:51] during _single tests
[10:52] jamespage, ditto
[10:52] all units are reporting OK...
[10:54] gnuoy, re https://review.openstack.org/#/c/299669/
[10:55] my comments are all about the profiles themselves rather than how they get applied
[10:55] ok
[10:55] jamespage, I'm not sure what to take from that comment tbh
[10:56] Are you saying you don't want to land it?
[10:56] gnuoy, I'm looking for a second opinion on whether we should block on v0.1 profiles
[10:56] gnuoy, not decided either way just yet
[10:56] having the feature in allows us to iterate the profiles more easily...
[10:56] and its default is 'off' so that's ok as well
[10:57] jamespage, I think land it, I'll say so on the mp
[10:58] gnuoy, OK kicked off landing...
[11:11] gnuoy, hit a mitaka problem with ovs-odl
[11:11] https://review.openstack.org/305121
[11:15] gnuoy, gah!
[11:15] 11:11:37 pkg_resources.VersionConflict: (requests 2.7.0 (/usr/lib/python2.7/dist-packages), Requirement.parse('requests==2.6.0'))
[11:15] jamespage, where are you seeing that ?
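(A note on the pkg_resources.VersionConflict above: it is raised when a distribution's pinned requirement — here requests==2.6.0 — does not match what is actually installed, requests 2.7.0. A minimal sketch of how such a conflict can be surfaced; 'theblues' is the package the discussion below settles on as the culprit, and any installed distribution name would work the same way:)

```python
# Reproducing a VersionConflict like the one above: pkg_resources checks a
# package's declared requirements against what is actually installed.
import pkg_resources


def check_requirements(dist_name):
    """Resolve dist_name's requirement chain against installed packages."""
    try:
        # require() walks the dependency graph and raises VersionConflict
        # when an installed version does not satisfy a pinned requirement
        # (e.g. requests 2.7.0 installed vs. a requests==2.6.0 pin).
        pkg_resources.require(dist_name)
        print('%s: all requirements satisfied' % dist_name)
    except pkg_resources.VersionConflict as exc:
        # exc.dist is the installed distribution, exc.req the unmet pin.
        print('%s: conflict - installed %s, required %s'
              % (dist_name, exc.dist, exc.req))


if __name__ == '__main__':
    # 'theblues' is the package named in the discussion; this is only an
    # illustrative target for the check.
    check_requirements('theblues')
```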
[11:16] gnuoy, amulet_full
[11:16] gnuoy, http://10.245.162.36:8080/view/Dashboards/view/TestCharmPipeline/job/test_charm_amulet_full/391/console
[11:16] jamespage, I hit something similar http://paste.ubuntu.com/15808377/ and was asking beisner for a second pair of eyes
[11:17] gnuoy, I think I know the problem
[11:18] gnuoy, yah the version of theblues in the stable ppa is too old and has a fixed requests version requirement
[11:18] gnuoy, the version in xenial is OK - I'll backport that now
[11:18] marcoceppi, ^^
[11:18] thanks
[11:19] gnuoy, I think the difference is that smoke does not use juju-test
[11:19] whereas recheck-full does
[11:22] gnuoy, ok taking lunch a bit early to allow the queue to clear
[11:22] * jamespage goes to rotavate the veg patch
[11:23] I can't tell whether that is a euphemism
[11:23] gnuoy, so specs/full_stack/next_deploy/mitaka runs without a hitch now. Trying icehouse now.
[11:23] tinwood, fantastic on multiple levels
[11:24] gnuoy, from your gftw branch with the proposed fix.
[11:24] gnuoy, yes, I can't quite believe it. I'm going to run it again to make sure!!
[12:43] gnuoy, looks like a bug in nova-compute charm for icehouse. nova-api and nova-network not running and they have neutron-openvswitch subs.
[12:44] jamespage, could you look at https://review.openstack.org/#/q/topic:pbr-reqs for the canonical ci +1'ed ones please?
[12:44] tinwood, seeing the same thing just now
[12:44] * gnuoy goes to look and see what's landed
[12:44] gnuoy, I'm having a quick look at the charm.
[12:45] gnuoy, maybe we don't have an amulet test for this scenario?
[12:46] tinwood, I'd be surprised if we don't
[12:46] gnuoy, although perhaps it's when the legacy neutron management was dropped??
[12:46] * tinwood is clutching at straws
[12:46] tinwood, not sure, looking now
[12:47] * tinwood goes to get tea whilst thinking ...
[12:49] gnuoy, looking
[12:49] gnuoy, https://review.openstack.org/#/c/304638/ is ready for review
[12:52] gnuoy, ok all done where possible
[12:52] gnuoy, tinwood: hmm thedac's apparmor changes introduced restarts for nova-api and nova-network...
[12:52] thanks
[12:53] I wonder whether that needs to be conditional on running nova-networking nodes...
[13:00] jamespage, yeah, I see where you mean. They are unconditionally added to the BASE_RESOURCE_MAP (L175 in nova_compute_utils.py)
[13:00] tinwood, hmm yeah
[13:00] that's bad
[13:01] actually L201-212
[13:01] jamespage, is there a config() item that they should depend on?
[13:01] tinwood, fix is almost done
[13:02] jamespage, ^
[13:02] ?
[13:02] oh ..
[13:02] I'd check the existing code in the charm - it must do something conditional based on what's passed to it from nova-cc
[13:02] jamespage, I am
[13:02] * jamespage is puzzled why nova-compute's amulet tests failed
[13:03] did not fail rather...
[13:03] do you mean didn't fail ?
[13:03] yeah
[13:03] jamespage, me too. Or it needs something extra.
[13:09] jamespage, tinwood https://review.openstack.org/#/c/305261/
[13:10] * tinwood is looking ...
[13:10] gnuoy, nit on your commit message but that is all
[13:11] gnuoy, you might wanna ref the bug at the same time...
[13:12] gnuoy, shouldn't we also have a test for this in case it regresses in the future? The existing tests didn't pick it up.
[13:14] tinwood, +Lots for adding coverage when we hit a gap. i've not dug into the actual issue, but add'l unit or amulet test to hit that would be appreciated.
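(For context on the fix being discussed: nova-api and nova-network are added to nova-compute's resource map unconditionally, so the apparmor change restarts them even on deployments that don't run nova-network on the compute nodes. Below is a hedged sketch of the conditional pattern — it is not the actual nova-compute charm code, and the relation name and 'network_manager' key checked here are assumptions about what nova-cc passes down:)

```python
# Rough sketch: only manage nova-api/nova-network when the controller says
# this deployment uses nova-network. Not the real charm code; the relation
# key and value below are illustrative assumptions.
from charmhelpers.core.hookenv import (
    related_units,
    relation_get,
    relation_ids,
)

BASE_RESOURCE_MAP = {
    '/etc/nova/nova.conf': {
        'services': ['nova-compute'],
        'contexts': [],
    },
}

# Entries that should only apply on nova-network deployments.
NOVA_NETWORK_RESOURCES = {
    '/etc/nova/nova-api.conf': {
        'services': ['nova-api', 'nova-network'],
        'contexts': [],
    },
}


def manages_nova_network():
    """Guess whether this unit should run nova-api/nova-network, based on
    data published by nova-cloud-controller (key name is an assumption)."""
    for rid in relation_ids('cloud-compute'):
        for unit in related_units(rid):
            manager = relation_get('network_manager', rid=rid, unit=unit)
            if manager and 'flatdhcp' in manager.lower():
                return True
    return False


def resource_map():
    rmap = dict(BASE_RESOURCE_MAP)
    if manages_nova_network():
        # Only register (and therefore only restart) these services when
        # the deployment actually uses them.
        rmap.update(NOVA_NETWORK_RESOURCES)
    return rmap
```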
[13:14] tinwood, we should absolutely, but I think we need to unblock things by landing this first
[13:15] beisner, gnuoy okay - let's land a fix to unblock, but make a bug or card to add tests? - although this feels wrong :(
[13:15] tinwood, tldr on what broke?
[13:15] tinwood, still landing changes 5 days after the change freeze is the big thing that's wrong
[13:16] beisner, https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1569886
[13:16] Bug #1569886: Nova-compute incorrectly reports workload status
[13:16] gnuoy, indeed.
[13:17] rick_h_: Who should I poke about charm store ingestion? This charm (https://code.launchpad.net/~treasure-data/charms/trusty/td-agent/trunk) was pushed ~10 hours ago but isn't deployable or found in the store
[13:17] gnuoy, if we create a bug for adding the tests, then you can assign it to me, and I'll pick it up post 16.04
[13:18] aisrael: urulama and the team were working on a production issue with that this morning
[13:18] bac: mhilton is there any news for aisrael? last I recall seeing IS thought it was 'still running'
[13:20] gnuoy, tinwood - see comment on https://code.launchpad.net/~ajkavanagh/openstack-mojo-specs/mojo-openstack-specs-fix-novaclient/+merge/291734 ... i've switched osci to run specs on that branch to confirm before we land. thx.
[13:21] tinwood, Bug #1569896
[13:21] Bug #1569896: Nova-compute amulet tests do not seem to check workload status
[13:21] beisner, that seems fine. I put the caveat in for you to see :)
[13:22] beisner, fwiw I'm using my git enabled version of mojo specs for my testing
[13:22] gnuoy, thanks. I'll put it on my todo list.
[13:23] gnuoy, please rebase and propose back @ our specs. we'll need to do some rapid fire merges there i suspect. we need to make sure everyone is using the same stuff.
[13:23] beisner, I have done
[13:23] I rebased after landing your branch
[13:24] aisrael, rick_h_: charmstore was upgraded overnight. the ingestion job recently finished. can you check again?
[13:24] tinwood, the recent ceilometer api change in the amulet test did break precise, and i've found a middle ground there that works with p-->x. this nova mojo change looks very similar, so i just want to confirm before we land.
[13:25] bac: rick_h_: Still not seeing it, at https://jujucharms.com/u/treasure-data/td-agent/trusty/ or deploying cs:~treasure-data/trusty/td-agent
[13:25] beisner, okay sounds good.
[13:27] gnuoy, ok so how about we revert the apparmor change and defer that all until 16.07
[13:27] I'm equally concerned that we have features landing late...
[13:31] aisrael: no, td-agent did not get published, we can check the logs to see what the issue is
[13:32] urulama: That'd be great, thanks!
[13:32] gnuoy, beisner: heat appears to have an amulet issue generally with the restart on change tests..
[13:32] gnuoy, cool. let's land your git wares then :) i think there are more specs to update similarly though
[13:33] jamespage, heat full was passing before the c-h sync @ https://github.com/openstack/charm-heat/commit/52fd7b261c8b2ce11cb55a054d0eb1856a2b6dcb
[13:33] aisrael: i'm still investigating
[13:33] jrwren: would you check in freenode #juju channel and then check the charmload logs on guimaas to see why td-agent is not picked up, please?
[13:34] jamespage, fwiw I'm pondering the aparmor thing
[13:34] I may even consider spelling it correctly soon
[13:34] jamespage, however ...
i actually suspect amulet rev causing that fail (00:21:18.043 ValueError: could not convert string to float: ssh_exchange_identification: read: Connection reset by peer)
[13:35] gnuoy, I think we're rushing it in too late
[13:35] jamespage, ok, agreed
[13:35] gnuoy, +2 on reverting that change and doing a proper job for 16.07
[13:35] gnuoy, lemme raise the revert
[13:35] jamespage, thanks
[13:35] thedac can hate me then
[13:36] haha
[13:36] jamespage, is that across all charms or just nova-compute?
[13:38] gnuoy, https://review.openstack.org/#/c/305275/
[13:38] tinwood, only compute has apparmor so far
[13:40] jamespage, oh. my bad. I thought it was like pause/resume. never mind.
[13:46] Hi, where do I set the http-proxy in juju 2.0 beta3?
[13:48] gnuoy, okay, what's next for me - I'm pending the nova-compute revert for running the next / icehouse test.
[13:49] tinwood, I'd suggest taking a look at that nova-compute amulet bug, it's still a valid bug even if the code tripping it has been reverted
=== beuno_ is now known as beuno
[13:52] gnuoy, okay, I'll look but I'll have to wait until the apparmor is reverted.
[13:52] tinwood, why?
[13:52] jamespage: ack, thanks for backporting that
[13:52] tinwood, Once apparmor is reverted we need to be back on the mojo functional tests
[13:53] jamespage: my plan was to backport all the new packages from xenial -> ppa/stable for wily and trusty; I just got caught up in other things yesterday
[13:54] gnuoy, the code's going to be reverted that caused the bug?
[13:55] tinwood, the bug is that we believe amulet doesn't check workload status. that's true irrespective of the state of the apparmor code
[13:55] gnuoy, yes re: mojo testing.
[13:56] aisrael: nothing stands out in the logs. we're gonna try to push your charm directly and see what happens/catch any errors
[13:56] urulama: Thanks for the update!
[13:59] gnuoy, kk - I'll have a look at the amulet tests for nova-compute re: workload status
[13:59] tinwood, thanks.
=== dimitern` is now known as dimitern
[14:23] aisrael: We are still investigating why that td-agent charm is not ingesting. In the meantime we have verified that it can be pushed and published using that new method, if you care to try.
[14:30] gnuoy, so the nova-compute amulet tests don't exhibit the problem because the scenario doesn't use neutron-openvswitch --- that's me hazarding a guess?
[14:30] Hi
[14:30] jamespage: wrt apparmor, bummer. But glad we caught the issue
[14:30] gnuoy, we need to get https://review.openstack.org/#/c/305275/ in
[14:31] ci failed..
[14:31] tinwood, ok, yes, that would make sense, so we are checking workload status, it's just we didn't trip the workload status bug
[14:31] jamespage, are you proposing we land it regardless because I've got a +2 in my pocket and I'm not afraid to use it
[14:32] * tinwood looks the other way
[14:32] I am facing some issue while deploying more than 1 consumer charm with my charm.
[14:32] When the first service charm is deployed and related to my charm, some states get set. When the 2nd service charm is deployed, it looks like the states are all already set and it doesn't go as per the expected flow
[14:33] My scope is SERVICE.
[14:34] How do I reset the states so that for the next service (or consumer) it goes as per the correct flow ?
[14:35] gnuoy, the only time status is actively sought is in the pause-resume check, otherwise it's just looking at the messages via the OpenStackAmuletDeployment base class.
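(Bug #1569896 above is about the amulet tests not actually asserting workload status. A minimal sketch of that kind of check, bypassing amulet/OpenStackAmuletDeployment and parsing the juju CLI directly — the 'workload-status' YAML layout assumed here matches juju 1.24/1.25 output and may differ on other versions:)

```python
# Hedged sketch of a workload-status assertion of the sort discussed above.
# It simply parses `juju status --format=yaml` rather than using the amulet
# helpers; field names are an assumption based on juju 1.25 output.
import subprocess

import yaml


def workload_statuses(service):
    """Return {unit_name: workload status} for every unit of a service."""
    out = subprocess.check_output(
        ['juju', 'status', '--format=yaml', service])
    status = yaml.safe_load(out)
    units = status.get('services', {}).get(service, {}).get('units', {})
    return {name: unit.get('workload-status', {}).get('current')
            for name, unit in units.items()}


def assert_all_active(service):
    """Fail loudly if any unit is not 'active' (e.g. a blocked nova-compute)."""
    statuses = workload_statuses(service)
    bad = {u: s for u, s in statuses.items() if s != 'active'}
    assert not bad, 'units not active: %s' % bad


if __name__ == '__main__':
    assert_all_active('nova-compute')
```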
[14:35] gnuoy, however, the p-r test would notice the blocked on resume and crash out.
[14:36] tinwood, ok
[14:40] gnuoy: tinwood: I would have expected self._auto_wait_for_status in __init__ to check for status. It's odd that it did not.
[14:40] thedac, it does.
[14:40] thedac, I think it did but the amulet tests don't use neutron
[14:40] so the services were running
[14:40] In the amulet tests everything is okay as gnuoy says it doesn't use neutron.
[14:41] got it
[14:43] jamespage, https://review.openstack.org/#/c/305275/ passed amulet smoke
[14:44] gnuoy: jamespage I'll +2 it and weep quietly to myself.
[14:45] although http://10.245.162.36:8080/job/test_charm_pipeline/954/console is still running
[14:46] thedac, sorry...
[14:46] gnuoy is too quick
[14:46] not your fault
[14:46] I appreciate the efforts you have made to get this feature in :(
[14:46] thedac, +1000 and let's make that rock for 16.07
[14:46] it looks like it needs a slightly harder poke - it's on 'needs verified'
[14:47] suchvenu: Just to confirm, you do mean an entirely second service (with a different name in `juju status`) and not another unit of the same service that you got with add-unit, right? If so, you should be aware that the states set for the first service will remain in effect for that service, but you should get -relation-joined and -changed events for the second service, which should start a separate conversation flow
[14:47] yes a separate service , not a unit
[14:48] I guess I am getting relation-joined and -changed for the 2nd service. But there are some states apart from these in my reactive layer
[14:50] You should also keep in mind that states set in the charm layer apply to the charm as a whole. Only states set in the interface layer, using conv.set_state(), will be per-service
[14:50] suchvenu: Any chance I could see the code in question?
[14:50] sure
[14:50] how do I share?
[14:51] yes, these states are set in the reactive layer
[14:51] If it's in a public repo, a link would be good. Otherwise, a pastebin or even email would be fine
[14:51] it's still not pushed into launchpad
[14:51] pastebin link ?
[14:51] That's fine.
[14:52] how do I share in Pastebin?
[14:53] suchvenu: You can just put it in http://pastebin.ubuntu.com/
[14:55] done
[14:55] And then paste the resulting URL here (or PM me)
[14:55] zul, https://review.openstack.org/#/c/304626
[14:56] rockstar, https://review.openstack.org/#/c/303175
[14:56] http://pastebin.ubuntu.com/15812802/
[14:56] ^ i.e. review/land each other's wares :)
=== cos1 is now known as c0s
[14:58] suchvenu: Is that code from the consumer charm?
[14:58] no.. my charm
[14:58] db2 charm
[14:59] I don't see in that code where the db.sshconfigured state is ever set
[15:00] oh.. i didn't paste full code i guess
[15:01] see here pls. http://pastebin.ubuntu.com/15812926/
[15:04] beisner: yeah, I can't vote on that patch at all (hilariously, I can do Code-Review vote on my own patches)
[15:04] rockstar, you should be able to do a +1
[15:05] beisner: oh, I can now. Weird. I didn't have that box until a refresh.
[15:08] cory_fu, could you see the code now ?
[15:08] suchvenu: Ok, I see. By setting that db.sshconfigured state and including @when_not, you're basically telling the charm that you only ever want to do that block once. What you actually want is to run that block *every* time, and just depend on your grepping logic to skip keys that are already added. That way, whenever a new key is provided (or if a key is changed), it will get added.
It does mean, though, that this handler will run for every hook, but as long as it handles being run again with the same data ok (which it should because of the grep), then that should be fine
[15:10] I have a few more functions after this which are based on previous states
[15:10] all that would get affected
[15:11] One other thing, though, is that get_sshkeys should return a list of keys, and you should loop over them. I'm not 100% certain how that will work in bash, though. I'll need to do some tests.
[15:13] suchvenu: Oh, you have other handlers that depend on db.sshconfigured? Well, you can still set that state once you have at least one key, as long as you remove the @when_not decorator so that the file update still happens every time
[15:13] yes
[15:13] http://pastebin.ubuntu.com/15813149/
[15:15] This pastebin has the subsequent functions. Please look once. Can we reset all states per service ? That would actually solve the problem i guess. Not sure how to do that.
[15:23] beisner: yeah i dont have +2
[15:23] zul +1 and i can land
[15:23] beisner: done
[15:24] suchvenu: Sorry, I'm caught up in a meeting. I will review your latest link shortly.
[15:25] beisner: dont forget btrfs ;)
[15:29] sure, please mail me if I am not online
[15:41] I'm deploying a charm I'm working on to private openstack clouds. In doing so, I need a few things (endpoints, username/pw, tokens), etc.
[15:41] Currently I put them in config.yaml and use config-get to access them when the charm deploys, but since they differ per install this isn't going to work
[15:41] Is there a better way to go about this?
=== CyberJacob is now known as zz_CyberJacob
=== mup_ is now known as mup
[15:51] terje: that's the only way to get credentials about the environment. What's the purpose of this charm? Is it to be deployed alongside an openstack (like another openstack service) or to do things to openstack from within it?
[15:51] I'm using this charm to deploy a heat stack.
[15:52] I have a user that uses juju and charms for everything and wants a way to deploy an application using juju, so basically I'm just doing a stack-deploy.
[15:53] but in order to do so, I need to know, for example, the network uuid, OS_USERNAME, etc.
[15:57] thedac, can you put your mojo trousers on today and try and get some green on the spreadsheet ?
[16:00] gnuoy: I can. Are your changes landed?
[16:03] gnuoy, so using your branch, the icehouse has also passed (I ran it during the meetings).
=== cmagina90 is now known as cmagina
[16:04] is there a way to make 'juju status' default to 'juju status --format=tabular' ?
[16:04] terje: you can export the environment variable. /me goes to look it up
[16:16] terje: export JUJU_CLI_VERSION=2
[16:17] terje: it seems odd that you'd use juju to deploy stuff via heat
[16:17] it's not a scenario I've come across before
[16:24] gnuoy: beisner have the required mojo spec changes landed yet?
[16:24] yea, it's very weird.
[16:25] thedac, still validating the nova client changes; haven't touched the git change proposal yet.
[16:25] beisner: ok, please let me know when to start pushing buttons
[16:25] cool, that worked for tabular marcoceppi
[16:26] so, I have another question (sorry, pretty new to juju).. The charm I'm deploying has an error in my start hook
[16:27] so, it's stuck in 'start'
[16:27] a 'juju remove-service' or 'juju remove-unit' do nothing here.
[16:27] is there a way to force a kill here?
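(Returning to suchvenu's per-service states question above: the usual pattern is to keep per-service state in the interface layer, where scope = scopes.SERVICE gives one conversation per related service and states set with conv.set_state()/conv.remove_state() are tracked per conversation. A minimal sketch follows — it is not the actual db2 interface code, and the interface, relation, and state names are illustrative:)

```python
# Minimal sketch of a charms.reactive interface layer with per-service
# (per-conversation) states, as discussed above. Names are illustrative,
# not the real db2 interface layer.
from charms.reactive import RelationBase, hook, scopes


class DB2Provides(RelationBase):
    # One conversation per related *service*, so states set below are
    # scoped to that service rather than to the charm as a whole.
    scope = scopes.SERVICE

    @hook('{provides:db2}-relation-{joined,changed}')
    def changed(self):
        conv = self.conversation()
        conv.set_state('{relation_name}.connected')
        if conv.get_remote('ssh-key'):
            # Only this service's conversation gets this state.
            conv.set_state('{relation_name}.sshkey.available')

    @hook('{provides:db2}-relation-{departed,broken}')
    def departed(self):
        conv = self.conversation()
        conv.remove_state('{relation_name}.connected')
        conv.remove_state('{relation_name}.sshkey.available')

    def ssh_keys(self):
        # Collect one key per related service, one conversation each.
        return [conv.get_remote('ssh-key')
                for conv in self.conversations()
                if conv.get_remote('ssh-key')]
```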
[16:28] terje: yeah, so when the charm is stuck in a hook error it tends to require admin intervention
[16:28] so, login to the VM and kill the process it's hanging on?
[16:28] terje: does juju say the hook errored or is it hanging?
[16:29] no, it's just in perpetual 'start' mode
[16:29] terje: interesting, would you be able to share the output of `juju status --format yaml` to paste.ubuntu.com ?
[16:30] sure - but before that, logging into the worker VM, I see this process just hanging around: /bin/bash /var/lib/juju/agents/unit-epdg-0/charm/hooks/start
[16:30] thedac, will do
[16:31] terje: yes, that's the process
[16:31] http://paste.ubuntu.com/15815245/
[16:31] terje: can you send a sigterm to that process?
[16:32] done, and it's gone.
[16:32] terje: juju will kill it off, but the default time is like 2 hours. You can also force kill the machine it resides on to avoid logging into the VM
[16:32] so, now it's in error state .. I assume I can juju resolved and make it go away
[16:33] checking..
[16:33] terje: yes
[16:33] terje: if you just issue a resolved and not a resolved --retry it'll move on to the destroy service hooks
[16:33] yea, that worked.
[16:33] it's gone.
[16:33] terje: if it's really problematic, `juju destroy-machine --force 1` is a quick way to circumvent a pesky unit if you really want to get rid of it quick
[16:34] where 1 corresponds to the machine number the unit is deployed on
[16:34] ah, great. That's probably what I want while working on this charm. thx.
[16:34] terje: yeah, during dev is a great way to reset things
[16:36] so, there's no way to pass variables during: juju deploy local:bleh/bleh ?
[16:40] zul, awaiting rockstar's +1 on https://review.openstack.org/#/c/303175/
[16:40] rockstar ^
[16:40] k
[16:45] ah-ha. juju deploy --config myconfig.yaml might be it
[16:53] terje: you found it, the format for that is documented here: https://jujucharms.com/docs/1.25/charms-config#configuring-a-service-at-deployment
=== redir is now known as redir_brb
[17:07] thedac, gnuoy - mojo spec git proposal merged; still waiting for validation results on the nova client change, expect that within an hr or so.
[17:08] thx for the mods gnuoy
=== zz_CyberJacob is now known as CyberJacob
[17:27] marcoceppi: using juju deploy --config file.yaml, are those values available to the hooks as (config-get setting) ?
[17:27] seems like they are not.. :/
=== redir_eod is now known as redir
[17:31] terje: yes, they should absolutely be
[17:31] terje: how did you format your file.yaml?
[17:32] marcoceppi: did you move the layer search? can't find it in juju-solutions
[17:33] jcastro: it was never there. it's in my personal github
[17:33] oh!
[17:34] I was going to repull it, having some git issues
[17:43] thedac, ahead of mojo spec change merges, if you have cycles to poke at this, it's a definite new failure: bug 1570032
[17:43] Bug #1570032: Precise-Icehouse: publicURL endpoint for network service not found
[17:44] beisner: mind +1ing https://review.openstack.org/#/c/303175/ please
[17:44] so to be clear, i'm trying to validate tinwood's nova api client usage change against precise, but stumbling over that, and have re-confirmed the same outside of mojo specs. precise is no-talky wrt network api atm.
[17:45] zul, done, it's otw to merge
[17:45] beisner: thanks
[17:49] beisner, sorry if I'm causing problems - do we use precise test runners?
[17:49] tinwood, no.
everything is trusty + liberty cloud archive for client tools
[17:49] tinwood, and that can work all the way back to precise, and i think your change is sound -- i just want to see it work before we merge into the thing that automation consumes.
[17:49] precise has other troubles atm
[17:51] beisner, ok.
[18:12] thedac, fyi, turning to a precise-icehouse stable charm deployment to validate nova client changes @ https://code.launchpad.net/~ajkavanagh/openstack-mojo-specs/mojo-openstack-specs-fix-novaclient/+merge/291734
[18:12] marcoceppi: back from lunch, so my charm is called 'epdg' and looks like this, currently: http://paste.ubuntu.com/15817580/
[18:13] I'm not actually sure if I want 'settings' or 'options'
[18:13] the deploy error is: ERROR no settings found for "epdg"
[18:28] terje: neither, check this page out https://jujucharms.com/docs/devel/developer-getting-started
[18:28] terje: err, wrong page https://jujucharms.com/docs/1.25/charms-config#configuring-a-service-at-deployment
[18:28] rockstar, can you abandon this one?: https://review.openstack.org/#/c/301839/
[18:29] terje: it should look more like this http://paste.ubuntu.com/15817918/
[18:30] :)
[18:31] totally works - so then, OS_USERNAME=$(config-get os_username) for example
[18:31] I'm set!
[18:33] beisner: sorry I have been in meetings non-stop. Let me look at the bug
[18:34] thedac, np at all. we can use the stable charm deploy to validate the pending mojo spec nova client api usage change, then land that, and turn folks loose on specs (except mitaka)
[18:35] ok, and is the precise icehouse issue still in play?
[18:35] yes, next precise icehouse is bust
[18:36] so that bug is still relevant, but i'm not blocked on exercising the pending mojo spec util change
[18:36] ok
=== alexisb is now known as alexisb-afk
=== JoseeAntonioR is now known as jose
[19:59] thedac, tinwood - mojo spec MP landed. thanks for the mods. all should be clear to exercise EXCEPT trusty-mitaka (pending cloud archive pkg flush tomorrow).
[20:22] beisner: great. I'll get started
=== Zetas_ is now known as Zetas
=== freeflying__ is now known as freeflying
=== terje is now known as Guest13601
=== \b is now known as benonsoftware
=== sarnold_ is now known as sarnold
=== fginther` is now known as fginther
[21:03] coreycb, swift-storage or swift-proxy?
[21:46] wth won't this merge, icey ? :-/ https://review.openstack.org/#/c/303014/
[21:53] do i still have to use LP to submit a charm for review? i'm developing mine with git and using the new 'charm' command to publish them
[21:58] beisner, swift-proxy
=== axw___ is now known as axw
=== aluria` is now known as aluria