/srv/irclogs.ubuntu.com/2016/04/13/#juju.txt

=== redir is now known as redir_eod
=== alexisb-afk is now known as alexisb
=== natefinch-afk is now known as natefinch
=== cos1 is now known as c0s
=== menn0 is now known as menn0-afk
=== mup_ is now known as mup
jamespagegnuoy, now that juju 1.25.5 is released we could also add series data to the metadata.yaml08:43
gnuoykk08:43
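For reference, the series data jamespage mentions is a top-level list in the charm's metadata.yaml; a minimal sketch, with an illustrative charm name and values:

    name: example-charm
    summary: Example charm
    description: Illustrative metadata only.
    series:
      - trusty
      - xenial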
gnuoyjamespage, got time for https://review.openstack.org/#/c/304193/ and https://review.openstack.org/#/c/304513/ ?09:24
jamespagegnuoy, +2'ed first; second still needed a recheck-full so +1 with comment09:26
gnuoythanks09:27
jamespagegnuoy, could you look at https://review.openstack.org/#/q/topic:shared-db-spaces for the canonical ci +1'ed ones please09:27
gnuoysure09:27
jamespagegnuoy, they are pretty much copy/pastes...09:28
gnuoyjamespage, are you still planning to review https://review.openstack.org/#/c/299669/ for thedac ?09:31
jamespagegnuoy, I have some drafts pending...09:31
jamespagegnuoy, OK so I'm happy with the way we're installing the profiles...09:33
jamespagegnuoy, my comments are all on the profiles themselves09:34
=== cargonza_ is now known as cargonza
gnuoyjamespage, could you look at https://review.openstack.org/#/q/topic:pre1604chsync  for the canonical ci +1'ed ones please?10:05
jamespagegnuoy, ok all done where passing10:08
gnuoythanks10:08
jamespagegnuoy, it might be nice to try to move to a global-requirements approach for the charms10:08
jamespagewhere we maintain the list of versions centrally; and we get auto-resyncs when that changes10:08
jamespageI think the openstack projects do much the same - worth digging into next cycle...10:09
gnuoydefo10:09
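The global-requirements idea mirrors OpenStack's central requirements project: one centrally maintained pins file that each charm's requirements.txt is resynced from whenever it changes. A hypothetical sketch of such a central file (package pins are illustrative):

    # global-requirements.txt -- maintained once, synced into each charm
    pbr>=1.8
    requests>=2.6.0
    netaddr>=0.7.12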
jamespagegnuoy, getting there is odl-controller - the config overrides for the single test were not quite right10:16
jamespagegnuoy, https://github.com/openstack-charmers/bot-control for future reference10:16
gnuoyah, ok10:17
gnuoyjamespage, fwiw https://review.openstack.org/#/q/topic:pbr-reqs is wandering toward the gate atm10:26
jamespagegnuoy, awesome10:26
jamespagegnuoy, I've kicked off a full recheck but https://review.openstack.org/#/c/304726/ would be ready for general review...10:33
jamespagegnuoy, pbr ones are good to go on a unit test only - but I will wait for UOSCI to confirm that as well10:42
jamespageso we wait for smokes...10:42
jamespagegnuoy, beisner: I've seen the occasional10:51
jamespage10:45:44 WARNING:root:SSH command check FAIL for 172.17.114.6610:51
jamespageduring _single tests10:51
gnuoyjamespage, ditto10:52
jamespageall units are reporting OK...10:52
jamespagegnuoy, re https://review.openstack.org/#/c/299669/10:54
jamespagemy comments are all about the profiles themselves rather than how they get applied10:55
gnuoyok10:55
gnuoyjamespage, I'm not sure what to take from that comment tbh10:55
gnuoyAre you saying you don't want to land it?10:56
jamespagegnuoy, I'm looking for a second opinion on whether we should block on v0.1 profiles10:56
jamespagegnuoy, not decided either way just yet10:56
jamespagehaving the feature in allows us to iterate the profiles more easily...10:56
jamespageand its default is 'off' so that's ok as well10:56
gnuoyjamespage, I think land it, I'll say so on the mp10:57
jamespagegnuoy, OK kicked off landing...10:58
jamespagegnuoy, hit a mitaka problem with ovs-odl11:11
jamespagehttps://review.openstack.org/30512111:11
jamespagegnuoy, gah!11:15
jamespage11:11:37 pkg_resources.VersionConflict: (requests 2.7.0 (/usr/lib/python2.7/dist-packages), Requirement.parse('requests==2.6.0'))11:15
gnuoyjamespage, where are you seeing that ?11:15
jamespagegnuoy, amulet_full11:16
jamespagegnuoy, http://10.245.162.36:8080/view/Dashboards/view/TestCharmPipeline/job/test_charm_amulet_full/391/console11:16
gnuoyjamespage, I hit something similar http://paste.ubuntu.com/15808377/ and was asking beisner for a second pair of eyes11:16
jamespagegnuoy, I think I know the problem11:17
jamespagegnuoy, yah the version of theblues in the stable ppa is too old and has a fixed requests version requirement11:18
jamespagegnuoy, the version in xenial is OK - I'll backport that now11:18
jamespagemarcoceppi, ^^11:18
gnuoythanks11:18
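The traceback above is setuptools enforcing a pinned dependency: theblues requiring requests==2.6.0 while requests 2.7.0 is installed. A minimal reproduction using the real pkg_resources API:

    import pkg_resources

    try:
        # Fails when the installed requests version differs from the pin,
        # which is exactly what the old theblues package triggers.
        pkg_resources.require('requests==2.6.0')
    except pkg_resources.VersionConflict as err:
        print(err)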
jamespagegnuoy, I think the difference is that smoke does not use juju-test11:19
jamespagewhereas recheck-full does11:19
jamespagegnuoy, ok taking lunch a bit early to allow the queue to clear11:22
* jamespage goes to rotavate the veg patch11:22
gnuoyI can tell whether that is a euphemism11:23
gnuoy* can't11:23
tinwoodgnuoy, so specs/full_stack/next_deploy/mitaka runs without a hitch now.  Trying icehouse now.11:23
gnuoytinwood, fantastic on multiple levels11:23
tinwoodgnuoy, from your gftw branch with the proposed fix.11:24
tinwoodgnuoy, yes, I can't quite believe it.  I'm going to run it again to make sure!!11:24
tinwoodgnuoy, looks like a bug in nova-compute charm for icehouse.  nova-api and nova-network not running and they have neutron-openvswitch subs.12:43
gnuoyjamespage, could you look at https://review.openstack.org/#/q/topic:pbr-reqs for the canonical ci +1'ed ones please?12:44
gnuoytinwood, seeing the same thing just now12:44
* gnuoy goes to look and see whats landed12:44
tinwoodgnuoy, I'm having a quick look at the charm.12:44
tinwoodgnuoy, maybe we don't have an amulet test for this scenario?12:45
gnuoytinwood, I'd be surprised if we don't12:46
tinwoodgnuoy, although perhaps it's when the legacy neutron management was dropped??12:46
* tinwood is clutching at straws12:46
gnuoytinwood, not sure, looking now12:46
* tinwood goes to get tea whilst thinking ...12:47
jamespagegnuoy, looking12:49
jamespagegnuoy, https://review.openstack.org/#/c/304638/ is ready for review12:49
jamespagegnuoy, ok all done where possible12:52
jamespagegnuoy, tinwood: hmm thedac's apparmor changes introduced restarts for nova-api and nova-network...12:52
gnuoythanks12:52
jamespageI wonder whether that needs to be conditional on running nova-networking nodes...12:53
tinwoodjamespage, yeah, I see where you mean.  They are unconditionally added to the BASE_RESOURCE_MAP (L175 in nova_compute_utils.py)13:00
jamespagetinwood, hmm yeah13:00
jamespagethat's bad13:00
tinwoodactually L201-21213:01
tinwoodjamespage, is there a config() item that they should depend on?13:01
gnuoytinwood, fix is almost done13:01
gnuoyjamespage, ^13:02
jamespage?13:02
jamespageoh ..13:02
jamespageI'd check the existing code in the charm - it must do something conditional based on what's passed to it from nova-cc13:02
gnuoyjamespage, I am13:02
* jamespage is puzzled why nova-compute's amulet tests failed13:02
jamespagedid not fail rather...13:03
gnuoydo you mean didn't fail ?13:03
gnuoyyeah13:03
tinwoodjamespage, me too.  Or it needs something extra.13:03
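A rough sketch of the kind of guard being discussed, not the actual fix that landed: the BASE_RESOURCE_MAP and network_manager names follow nova_compute_utils.py, but the condition and the filtering here are illustrative only.

    from copy import deepcopy

    def resource_map():
        rm = deepcopy(BASE_RESOURCE_MAP)
        if network_manager() not in ('flatmanager', 'flatdhcpmanager'):
            # Neutron manages networking, so nova-network/nova-api should
            # not be registered (and restarted) on compute nodes.
            for cfg in rm:
                rm[cfg]['services'] = [s for s in rm[cfg]['services']
                                       if s not in ('nova-network', 'nova-api')]
        return rm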
gnuoyjamespage, tinwood https://review.openstack.org/#/c/305261/13:09
* tinwood is looking ...13:10
jamespagegnuoy, nit on your commit message but that is all13:10
jamespagegnuoy, you might wanna ref the bug at the same time...13:11
tinwoodgnuoy, shouldn't we also have a test for this in case it regresses in the future?  The existing tests didn't pick it up.13:12
beisnertinwood, +Lots for adding coverage when we hit a gap.  i've not dug into the actual issue, but add'l unit or amulet test to hit that would be appreciated.13:14
gnuoytinwood, we should absolutely, but I think we need to unblock things by landing this first13:14
tinwoodbeisner, gnuoy okay - let's land a fix to unblock, but make a bug or card to add tests?   - although this feels wrong :(13:15
beisnertinwood, tldr on what broke?13:15
gnuoytinwood, still landing changes 5 days after the change freeze is the big thing that's wrong13:15
gnuoybeisner, https://bugs.launchpad.net/charms/+source/nova-compute/+bug/156988613:16
mupBug #1569886: Nova-compute incorrectly reports workload status <nova-compute (Juju Charms Collection):In Progress by gnuoy> <https://launchpad.net/bugs/1569886>13:16
tinwoodgnuoy, indeed.13:16
aisraelrick_h_: Who should I poke about charm store ingestion? This charm (https://code.launchpad.net/~treasure-data/charms/trusty/td-agent/trunk) was pushed ~10 hours ago but isn't deployable or found in the store13:17
tinwoodgnuoy, if we create a bug for adding the tests, then you can assign it to me, and I'll pick it up post 16.0413:17
rick_h_aisrael: urulama and the team were working on a production issue with that this morning13:18
rick_h_bac: mhilton is there any news for aisrael? last I recall seeing IS thought it was 'still running'13:18
beisnergnuoy, tinwood - see comment on https://code.launchpad.net/~ajkavanagh/openstack-mojo-specs/mojo-openstack-specs-fix-novaclient/+merge/291734 ... i've switched osci to run specs on that branch to confirm before we land.  thx.13:20
gnuoytinwood, Bug #156989613:21
mupBug #1569896: Nova-compute amulet tests do not seem to check workload status <nova-compute (Juju Charms Collection):New for ajkavanagh> <https://launchpad.net/bugs/1569896>13:21
tinwoodbeisner, that seems fine.  I put the caveat in for you to see :)13:21
gnuoybeisner, fwiw I'm using my git enabled version of mojo specs for my testing13:22
tinwoodgnuoy, thanks.  I'll put it on my todo list.13:22
beisnergnuoy, please rebase and propose back @ our specs.  we'll need to do some rapid fire merges there i suspect.  we need to make sure everyone is using the same stuff.13:23
gnuoybeisner, I have done13:23
gnuoyI rebased after landing your branch13:23
bacaisrael, rick_h_: charmstore was upgraded over night.  the ingestion job recently finished. can you check again?13:24
beisnertinwood, the recent ceilometer api change in the amulet test did break precise, and i've found a middle ground there that works with p-->x.  this nova mojo change looks very similar, so i just want to confirm before we land.13:24
aisraelbac: rick_h_: Still not seeing it, at https://jujucharms.com/u/treasure-data/td-agent/trusty/ or deploying cs:~treasure-data/trusty/td-agent13:25
tinwoodbeisner, okay sounds good.13:25
jamespagegnuoy, ok so how about we revert the apparmor change and defer that all until 16.0713:27
jamespageI'm equally as concerned that we have features landing late...13:27
urulamaaisrael: no, td-agent did not get published, we can check the logs to see what the issue is13:31
aisraelurulama: That'd be great, thanks!13:32
jamespagegnuoy, beisner: heat appears to have an amulet issue generally with the restart on change tests..13:32
beisnergnuoy, cool.  let's land your git wares then :)  i think there are more specs to update similarly though13:32
beisnerjamespage, heat full was passing before the c-h sync @ https://github.com/openstack/charm-heat/commit/52fd7b261c8b2ce11cb55a054d0eb1856a2b6dcb13:33
bacaisrael: i'm still investigating13:33
urulamajrwren: would you check in freenode #juju channel and then check the charmload logs on guimaas to see why td-agent is not picked up, please?13:33
gnuoyjamespage, fwiw I'm pondering the aparmor thing13:34
gnuoyI may even consider spelling it correctly soon13:34
beisnerjamespage, however ... i actually suspect amulet rev causing that fail (00:21:18.043 ValueError: could not convert string to float: ssh_exchange_identification: read: Connection reset by peer)13:34
jamespagegnuoy, I think we're rushing it in too late13:35
gnuoyjamespage, ok, agreed13:35
jamespagegnuoy, +2 on reverting that change and doing a proper job for 16.0713:35
jamespagegnuoy, lemme raise the revert13:35
gnuoyjamespage, thanks13:35
jamespagethedac can hate me then13:35
gnuoyhaha13:36
tinwoodjamespage, is that across all charms or just nova-compute?13:36
jamespagegnuoy, https://review.openstack.org/#/c/305275/13:38
jamespagetinwood, only compute has apparmor so far13:38
tinwoodjamespage, oh. my bad.  I thought it was like pause/resume.  never mind.13:40
chridoHi, Where do I set the http-proxy in juju 2.0 beta3?13:46
tinwoodgnuoy, okay, what's next for me - I'm pending the nova-compute revert for running the next / icehouse test.13:48
gnuoytinwood, I'd suggest taking a look at that nova-compute amulet bug, it's still a valid bug even if the code tripping it has been reverted13:49
=== beuno_ is now known as beuno
tinwoodgnuoy, okay, I'll look but I'll have to wait until the apparmor is reverted.13:52
gnuoytinwood, why >13:52
gnuoy?13:52
marcoceppijamespage: ack, thanks for backporting that13:52
gnuoytinwood, Once  apparmor is reverted we need to be back on the mojo functional tests13:52
marcoceppijamespage: my plan was to backport all the new packages from xenial -> ppa/stable for wily and trusty I just got caught up in other things yesterday13:53
tinwoodgnuoy, the code's going to be reverted that caused the bug?13:54
gnuoytinwood, the bug is that we believe amulet doesn't check workload status. that's true irrespective of the state of the apparmor code13:55
tinwoodgnuoy, yes re: mojo testing.13:55
urulamaaisrael: nothing stands out in the logs. we're gonna try to push your charm directly and see what happens/catch any errors13:56
aisraelurulama: Thanks for the update!13:56
tinwoodgnuoy, kk - I'll have a look at the amulet tests for nova-compute re: workload status13:59
gnuoytinwood, thanks.13:59
=== dimitern` is now known as dimitern
jrwrenaisrael: We are still investigating why that td-agent charm is not ingesting. In the mean time we have verified that it can be pushed and published using that new method, if you care to try.14:23
tinwoodgnuoy, so the nova-compute amulet tests don't exhibit the problem because the scenario doesn't use neutron-openvswitch --- that's me hazarding a guess?14:30
suchvenuHi14:30
thedacjamespage: wrt, apparmor, bummer. But glad we caught the issue14:30
jamespagegnuoy, we need to get https://review.openstack.org/#/c/305275/ in14:30
jamespageci failed..14:31
gnuoytinwood, ok, yes, that would make sense, so we are checking workload status, it's just we didn't trip the workload status bug14:31
gnuoyjamespage, are you proposing we land it regardless because I've got a +2 in my pocket and I'm not afraid to use it14:31
* tinwood looks the other way14:32
suchvenuI am facing an issue while deploying more than one consumer charm with my charm.14:32
suchvenuWhen the first service charm is deployed and related to my charm, some states get set. When the 2nd service charm is deployed, it looks like the states are all still set and it doesn't follow the expected flow14:32
suchvenuMy scope is SERVICE.14:33
suchvenuHow do I reset the states so that for the next service (or consumer) it goes as per the correct flow ?14:34
tinwoodgnuoy, the only time status is actively sought is in the pause-resume check, otherwise it's just looking at the messages via the OpenStackAmuletDeployment base class.14:35
tinwoodgnuoy, however, the p-r test would notice the blocked on resume and crash out.14:35
gnuoytinwood, ok14:36
thedacgnuoy: tinwood: I would have expected self._auto_wait_for_status in __init__ to check for status. It is odd that it did not.14:40
tinwoodthedac, it does.14:40
gnuoythedac, I think it did but the amulet tests don't use neutron14:40
gnuoyso the services were running14:40
tinwoodIn the amulet tests everything is okay as gnuoy says it doesn't use neutron.14:40
thedacgot it14:41
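For context, OpenStackAmuletDeployment's _auto_wait_for_status blocks until every unit reports a ready workload status. A hedged sketch of invoking it explicitly in a test class; the message regex, timeout, and method name here are illustrative:

    import re

    from charmhelpers.contrib.openstack.amulet.deployment import (
        OpenStackAmuletDeployment,
    )

    class NovaBasicDeployment(OpenStackAmuletDeployment):
        def test_100_workload_status(self):
            # Blocks until every unit reports a ready workload status;
            # raises if any unit stays blocked -- the check that never
            # fired here because this topology doesn't deploy neutron.
            self._auto_wait_for_status(message=re.compile('Unit is ready.*'),
                                       timeout=900)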
gnuoyjamespage, https://review.openstack.org/#/c/305275/ passed amulet smoke14:43
thedacgnuoy: jamespage I'll +2 it and weep quietly to myself.14:44
thedacalthough http://10.245.162.36:8080/job/test_charm_pipeline/954/console is still running14:45
jamespagethedac, sorry...14:46
thedacgnuoy is too quick14:46
thedacnot your fault14:46
jamespageI appreciate the efforts you have made to get this feature in :(14:46
jamespagethedac, +1000 and lets make that rock for 16.0714:46
tinwoodit looks like it needs a slightly harder poke - it's on 'needs verified'14:46
cory_fusuchvenu: Just to confirm, you do mean an entirely second service (with a different name in `juju status`) and not another unit of the same service that you got with add-unit, right?  If so, you should be aware that the states set for the first service will remain in effect for that service, but you should get -relation-joined and -changed events for the second service, which should start a separate conversation flow14:47
suchvenuyes a separate service , not a unit14:47
suchvenuI guess I am getting relation-joined and -changed for the 2nd service. But there are some states apart from these in my reactive layer14:48
cory_fuYou should also keep in mind that states set in the charm layer apply to the charm as a whole.  Only states set in the interface layer, using conv.set_state(), will be per-service14:50
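A hedged illustration of the distinction cory_fu describes, modeled on a charms.reactive interface layer; the class, relation, and state names are hypothetical:

    from charms.reactive import RelationBase, scopes, hook

    class DBRequires(RelationBase):
        scope = scopes.SERVICE  # one conversation per related service

        @hook('{requires:db}-relation-{joined,changed}')
        def joined(self):
            # Per-service: only the conversation with this particular
            # remote service carries the state.
            self.conversation().set_state('{relation_name}.connected')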
cory_fusuchvenu: Any chance I could see the code in question?14:50
suchvenusure14:50
suchvenuhow do I share?14:50
suchvenuyes, the these states are set in reactive layer14:51
cory_fuIf it's in a public repo, a link would be good.  Otherwise, a pastebin or even email would be fine14:51
suchvenuit's still not pushed to launchpad14:51
suchvenupastebin link ?14:51
cory_fuThat's fine.14:51
suchvenuhow do I share in Pastebin?14:52
cory_fusuchvenu: You can just put it in http://pastebin.ubuntu.com/14:53
suchvenudone14:55
cory_fuAnd then paste the resulting URL here (or PM me)14:55
beisnerzul, https://review.openstack.org/#/c/30462614:55
beisnerrockstar, https://review.openstack.org/#/c/30317514:56
suchvenuhttp://pastebin.ubuntu.com/15812802/14:56
beisner^ ie. review/land each other's wares :)14:56
=== cos1 is now known as c0s
cory_fusuchvenu: Is that code from the consumer charm?14:58
suchvenuno.. my charm14:58
suchvenudb2 charm14:58
cory_fuI don't see in that code where the db.sshconfigured state is ever set14:59
suchvenuoh.. i didn't paste full code i guess15:00
suchvenusee here pls. http://pastebin.ubuntu.com/15812926/15:01
rockstarbeisner: yeah, I can't vote on that patch at all (hilariously, I can do Code-Review vote on my own patches)15:04
beisnerrockstar, you should be able to do a +115:04
rockstarbeisner: oh, I can now. Weird. I didn't have that box until a refresh.15:05
suchvenucory_fu , could you see the code now ?15:08
cory_fusuchvenu: Ok, I see.  By setting that db.sshconfigured state and including @when_not, you're basically telling the charm that you only ever want to do that block once.  What you actually want is to run that block *every* time, and just depend on your grepping logic to skip keys that are already added.  That way, whenever a new key is provided (or if a key is changed), it will get added.  It does mean, though, that this handler will run for every15:08
cory_fuhook, but as long as it handles being run again with the same data ok (which it should because of the grep), then that should be fine15:08
suchvenuI have few more functions after this which are based on previous states15:10
suchvenuall that would get affected15:10
cory_fuOne other thing, though, is that get_sshkeys should return a list of keys, and you should loop over them.  I'm not 100% certain how that will work in bash, though.  I'll need to do some tests.15:11
cory_fusuchvenu: Oh, you have other handlers that depend on db.sshconfigured?  Well, you can still set that state once you have at least one key, as long as you remove the @when_not decorator so that the file update still happens every time15:13
suchvenuyes15:13
suchvenuhttp://pastebin.ubuntu.com/15813149/15:13
suchvenuThis pastebin has the subsequent functions. Please take a look. Can we reset all states per service? That would actually solve the problem, I guess. Not sure how to do that.15:15
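The shape cory_fu is suggesting, sketched in the Python reactive style for clarity since the charm itself is bash; every name below is hypothetical. The key points: no @when_not guard, so the handler runs on every dispatch and relies on idempotent key handling, and db.sshconfigured is set once at least one key is in place so downstream handlers still fire.

    from charms.reactive import when, set_state

    @when('db.related')                       # hypothetical state name
    def configure_ssh(db):
        for key in db.get_sshkeys():          # hypothetical interface method
            add_key_if_missing(key)           # grep-style skip of known keys
        if db.get_sshkeys():
            set_state('db.sshconfigured')     # downstream handlers gate on this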
zulbeisner: yeah i dont have +215:23
beisnerzul +1 and i can land15:23
zulbeisner: done15:23
cory_fusuchvenu: Sorry, I'm caught up in a meeting.  I will review your latest link shortly.15:24
zulbeisner: dont forget btrfs ;)15:25
suchvenusure, please mail me if I am not online15:29
terjeI'm deploying a charm I'm working on to private openstack clouds. In doing so, I need a few things (endpoints, username/pw, tokens, etc.)15:41
terjeCurrently I put them in config.yaml and use config-get to access them when the charm deploys, but since they differ per install this isn't going to work15:41
terjeIs there a better way to go about this?15:41
=== CyberJacob is now known as zz_CyberJacob
=== mup_ is now known as mup
marcoceppiterje: that's the only way to get credentials about the environment. What's the purpose of this charm? Is it to be deployed alongside an openstack (like another openstack service) or to do things to openstack from within it?15:51
terjeI'm using this charm to deploy a heat stack.15:51
terjeI have a user that uses juju and charms for everything and wants a way to deploy an application using juju, so basically I'm just doing a stack-deploy.15:52
terjebut in order to do so, I need to know, for example the network uuid, OS_USERNAME, etc.15:53
gnuoythedac, can you put your mojo trousers on today and try and get some green on the spreadsheet ?15:57
thedacgnuoy: I can. Are your changes landed?16:00
tinwoodgnuoy, so using your branch, the icehouse has also passed (I ran it during the meetings).16:03
=== cmagina90 is now known as cmagina
terjeis there a way to make 'juju status' default to 'juju status --format=tabular' ?16:04
rick_h_terje: you can export the environment variable. /me goes to look it up16:04
marcoceppiterje: export JUJU_CLI_VERSION=216:16
marcoceppiterje: it seems odd that you'd use juju to deploy stuff via heat16:17
marcoceppiit's not a scenario I've come across before16:17
thedacgnuoy: beisner have the required mojo spec changes landed yet?16:24
terjeyea, it's very weird.16:24
beisnerthedac, still validating the nova client changes;   haven't touched the git change proposal yet.16:25
thedacbeisner: ok, please let me know when to start pushing buttons16:25
terjecool, that worked for tabular marcoceppi16:25
terjeso, I have another question (sorry, pretty new to juju).. The charm I'm deploying has an error in my start hook16:26
terjeso, it's stuck in 'start'16:27
terjea 'juju remove-service' or 'juju remove-unit' do nothing here.16:27
terjeis there a way to force a kill here?16:27
marcoceppiterje: yeah, so when the charm is stuck in a hook error it tends to require admin intervention16:28
terjeso, login to the VM and kill the process it's haning on?16:28
marcoceppiterje: does juju say the hook errored or is it hanging?16:28
terjeno, it's just in perpetual 'start' mode16:29
marcoceppiterje: interesting, would you be able to share the output of `juju status --format yaml` to paste.ubuntu.com ?16:29
terjesure - but before that, logging into the worker VM, I see this process just hanging around: /bin/bash /var/lib/juju/agents/unit-epdg-0/charm/hooks/start16:30
beisnerthedac, will do16:30
marcoceppiterje: yes, that's the process16:31
terjehttp://paste.ubuntu.com/15815245/16:31
marcoceppiterje: can you send a sigterm to that process?16:31
terjedone, and it's gone.16:32
marcoceppiterje: juju will kill it off, but the default time is like 2 hours. You can also force kill the machine it resides on to avoid logging into the VM16:32
terjeso, now it's in error state .. I assume I can juju resolved and make it go away16:32
terjechecking..16:33
marcoceppiterje: yes16:33
marcoceppiterje: if you just issue a resolved and not a resolved --retry it'll move on to the destroy service hooks16:33
terjeyea, that worked.16:33
terjeit's gone.16:33
marcoceppiterje: if it's really problematic, `juju destroy-machine --force 1` is a quick way to circumvent a pesky unit if you really want to get rid of it quick16:33
marcoceppiwhere 1 corresponds to the machine number the unit is deployed on16:34
terjeah, great. That's probably what I want while working on this charm. thx.16:34
marcoceppiterje: yeah, during dev is a great way to reset things16:34
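To recap the recovery paths discussed, using the unit (epdg/0) and machine (1) from this example; resolved without --retry skips the failed hook, while --retry re-runs it:

    kill <hook-pid>                   # on the unit's machine; juju then marks the hook failed
    juju resolved epdg/0              # skip the failed hook and carry on
    juju resolved --retry epdg/0      # or re-run the failed hook instead
    juju destroy-machine --force 1    # last resort: remove the machine outright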
terjeso, there's no way to pass variables during: juju deploy local:bleh/bleh ?16:36
beisnerzul, awaiting rockstar's +1 on https://review.openstack.org/#/c/303175/16:40
beisnerrockstar ^16:40
zulk16:40
terjeah-ha. juju deploy --config myconfig.yaml might be it16:45
marcoceppiterje: you found it, the format for that is documented here: https://jujucharms.com/docs/1.25/charms-config#configuring-a-service-at-deployment16:53
=== redir is now known as redir_brb
beisnerthedac, gnuoy - mojo spec git proposal merged;  still waiting for validation results on the nova client change, expect that within an hr or so.17:07
beisnerthx for the mods gnuoy17:08
=== zz_CyberJacob is now known as CyberJacob
terjemarcoceppi: using juju deploy --config file.yaml, are those values available to the hooks as (config-get setting) ?17:27
terjeseems like they are not.. :/17:27
=== redir_eod is now known as redir
marcoceppiterje: yes, they should absolutely be17:31
marcoceppiterje: how did you format your file.yaml?17:31
jcastromarcoceppi: did you move the layer search? can't find it in juju-solutions17:32
marcoceppijcastro: it was never there. it's in my personal github17:33
jcastrooh!17:33
jcastroI was going to repull it, having some git issues17:34
beisnerthedac, ahead of mojo spec change merges, if you have cycles to poke at this, it's a definite new-failure:  bug 157003217:43
mupBug #1570032: Precise-Icehouse:  publicURL endpoint for network service not found <uosci> <neutron-gateway (Juju Charms Collection):New> <https://launchpad.net/bugs/1570032>17:43
zulbeisner: mind +1ing https://review.openstack.org/#/c/303175/ please17:44
beisnerso to be clear, i'm trying to validate tinwood's nova api client usage change against precise, but stumbling over that, have re-confirmed the same outside of mojo specs.  precise is no-talky wrt network api atm.17:44
beisnerzul, done, it's otw to merge17:45
zulbeisner: thanks17:45
tinwoodbeisner, sorry if I'm causing problems - do we use precise test runners?17:49
beisnertinwood, no.  everything is trusty + liberty cloud archive for client tools17:49
beisnertinwood, and that can work all the way back to precise, and i think your change is sound -- i just want to see it work before we merge into the thing that automation consumes.17:49
beisnerprecise has other troubles atm17:49
tinwoodbeisner, ok.17:51
beisnerthedac, fyi, turning to a precise-icehouse stable charm deployment to validate nova client changes @ https://code.launchpad.net/~ajkavanagh/openstack-mojo-specs/mojo-openstack-specs-fix-novaclient/+merge/29173418:12
terjemarcoceppi: back from lunch, so my charm is called 'epdg' and looks like this, currently: http://paste.ubuntu.com/15817580/18:12
terjeI'm not actually sure if I want 'settings' or 'options'18:13
terjethe deploy error is: ERROR no settings found for "epdg"18:13
marcoceppiterje: neither, check this page out https://jujucharms.com/docs/devel/developer-getting-started18:28
marcoceppiterje: err, wrong page https://jujucharms.com/docs/1.25/charms-config#configuring-a-service-at-deployment18:28
beisnerrockstar, can you abandon this one?:  https://review.openstack.org/#/c/301839/18:28
marcoceppiterje: it should look more like this http://paste.ubuntu.com/15817918/18:29
terje:)18:30
terjetotally works - so then, OS_USERNAME=$(config-get os_username) for example18:31
terjeI'm set!18:31
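The confusion above is between two different YAML files: the charm's own config.yaml declares options, while the file handed to `juju deploy --config` maps the service name to plain key/value settings. A minimal sketch using terje's epdg charm, with an illustrative option:

    # charm's config.yaml -- declares the option
    options:
      os_username:
        type: string
        default: ""
        description: OpenStack username the charm deploys with.

    # myconfig.yaml -- passed via `juju deploy --config myconfig.yaml epdg`
    epdg:
      os_username: admin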
thedacbeisner: sorry I have been in meetings non stop. Let me look at the bug18:33
beisnerthedac, np at all.  we can use the stable charm deploy to validate the pending mojo spec nova client api usage change, then land that, and turn folks loose on specs (except mitaka)18:34
thedacok, and is the precise icehouse issue still in play/18:35
thedac?18:35
beisneryes, next precise icehouse is bust18:35
beisnerso that bug is still relevant, but i'm not blocked on exercising the pending mojo spec util change18:36
thedacok18:36
=== alexisb is now known as alexisb-afk
=== JoseeAntonioR is now known as jose
beisnerthedac, tinwood - mojo spec MP landed.  thanks for the mods.  all should be clear to exercise EXCEPT trusty-mitaka (pending cloud archive pkg flush tomorrow).19:59
thedacbeisner: great. I'll get started20:22
=== Zetas_ is now known as Zetas
=== freeflying__ is now known as freeflying
=== terje is now known as Guest13601
=== \b is now known as benonsoftware
=== sarnold_ is now known as sarnold
=== fginther` is now known as fginther
beisnercoreycb, swift-storage or swift-proxy?21:03
beisnerwth won't this merge, icey ? :-/   https://review.openstack.org/#/c/303014/21:46
cmarsdo i still have to use LP to submit a charm for review? i'm developing mine with git and using the new 'charm' command to publish them21:53
coreycbbeisner, swift-proxy21:58
=== axw___ is now known as axw
=== aluria` is now known as aluria
