[00:26] <wolverineav> hi, this is related to openstack neutron-api charm.
[00:27] <wolverineav> after deploying openstack with neutron as the network manager, I see the following error in the log: http://paste.ubuntu.com/11875141/
[00:28] <wolverineav> I checked neutron.conf and the relevant section containing keystone_authtoken looked like this: http://paste.ubuntu.com/11875147/
[00:28] <wolverineav> also important to note is that I deployed neutron-api in a lxc
[00:30] <wolverineav> I've mostly followed the relations and everything as given here: https://jujucharms.com/openstack-base/34
[00:31] <wolverineav> except that I don't use ceph and rely completely on cinder. so minus those deploy and relation operations. also I try to keep everything in lxc, except rabbitmq-server and quantum-gateway
[00:32] <wolverineav> I'm not expecting you to debug the issue here, but quick pointers would help - like which services cannot be deployed in lxc (rabbitmq-server was one, which I figured out while debugging earlier :) )
[11:39] <suchvenu> Hi Team
[11:39] <suchvenu> I have pushed my code to Launchpad and trying to run amulet test. Getting the following error when i run python tests/10-bundles-test.py
[11:40] <suchvenu> charm@islrpbeixv665:~/charms/trusty/db2$ python tests/10-bundles-test.py E
[11:40] <suchvenu> This code was working before using Amulet
[11:41] <lazyPower> suchvenu: can you link me to your test?
[11:41] <suchvenu> in Launchpad ?
[11:41] <lazyPower> sure
[11:42] <suchvenu> Its in my personal branch
[11:42] <suchvenu> https://code.launchpad.net/~suchvenu/charms/trusty/db2/ibmdb2
[11:43] <lazyPower> suchvenu: can you also pastebin the full traceback?
[11:44] <suchvenu> I am getting only this much when i run the test
[11:44] <suchvenu> charm@islrpbeixv665:~/charms/trusty/db2$ python tests/10-bundles-test.py E
[11:44] <lazyPower> ok, give me a moment to wrap up my current test run. I'll branch the code and give it a run
[11:45] <suchvenu> sure
[11:45] <lazyPower> this test looks a bit funky to me at first glance, i'll clean it up and submit a MP to your branch shortly
[11:45] <suchvenu> Its working when I deploy it manually
[11:45] <lazyPower> for example, the config.get() stanzas during the standup
[11:46] <lazyPower> typically we stand up the charm, and isolate the config.get() bits under a scoped test
[11:46] <lazyPower> have you looked at any other amulet based tests?
[11:46] <suchvenu> This test was working for me yesterday... Don't know what suddenly happened
[11:48] <suchvenu> Without pushing the code to Launchpad, can't we run amulet test ?
[11:48] <lazyPower> Certainly
[11:49] <lazyPower> are you using bundletester to kick off your tests?
[11:49] <suchvenu> yes
[11:49] <lazyPower> ok, so you're familiar with the JUJU_REPOSITORY variable?
[11:49] <suchvenu> i am using these two
[11:49] <suchvenu> 00-setup  10-bundles-test.py
[11:49] <suchvenu> no
[11:50] <lazyPower> if you export JUJU_REPOSITORY to your charm repo, eg: export JUJU_REPOSITORY=$HOME/charms  - if you specify local:<series>/<service> it will deploy a local copy 100% every time instead of reaching out to the charm store API to find the charm.
[11:50] <lazyPower> by default, if you only describe the service name under test, in this instance db2, the resulting amulet code looks like: d.add('db2') - it will deploy the local charm by default.
[11:51] <lazyPower> you can also override the behavior by setting the JUJU_TEST_CHARM=db2 environment variable as well, in the off chance that bundletester is still looking in the wrong place for the charm.
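The lookup lazyPower describes can be sketched roughly like this (an illustrative sketch only, not amulet's actual code; `resolve_charm` is a hypothetical helper name):

```python
import os

def resolve_charm(service, series="trusty"):
    """Illustrative sketch (not amulet's real code) of the lookup
    described above: with JUJU_REPOSITORY exported, a local charm
    maps to $JUJU_REPOSITORY/<series>/<service>; JUJU_TEST_CHARM can
    override the service name; otherwise fall back to the charm store."""
    name = os.environ.get("JUJU_TEST_CHARM", service)
    repo = os.environ.get("JUJU_REPOSITORY")
    if repo:
        # a local copy is deployed 100% of the time
        return os.path.join(repo, series, name)
    # no repo exported: reach out to the charm store API instead
    return "cs:%s/%s" % (series, name)

os.environ["JUJU_REPOSITORY"] = os.path.expanduser("~/charms")
print(resolve_charm("db2"))  # e.g. /home/charm/charms/trusty/db2
```
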
[11:51] <suchvenu> okk
[11:53] <suchvenu> the 2nd one i tried
[11:53] <suchvenu> but it didn't work
[11:54] <suchvenu> After export JUJU_REPOSITORY=$HOME/charms also, it's failing for me
[11:54] <lazyPower> is that where the charm is located? $HOME/charms/trusty/db2 ?
[11:55] <suchvenu> yes
[11:55] <lazyPower> ok, i'm rounding the last legs of this test run
[11:55] <lazyPower> i should be able to switch context in a moment, and get hands on
[11:55] <suchvenu> sure
[12:30] <lazyPower> suchvenu: please ask your questions here so that others may benefit from the support :)
[12:31] <lazyPower> suchvenu: ok, i also see what you're doing here. The IBM charm payloads are behind a paywall, so you're attempting to validate the bundle has been configured with a payload url correct?
[12:33] <suchvenu> yes, But its not even reaching till there...
[12:33] <suchvenu> From our local directory, how can we find out which stream in Launchpad it's connecting to?
[12:34] <lazyPower> I'm not sure what you mean 'which stream in launchpad'
[12:35] <suchvenu> I mean branch
[12:37] <lazyPower> suchvenu: ah, if it's being routed through any of the juju tooling, then unless explicitly defined via the branch: key in the bundle, it will always use /trunk
[12:37] <lazyPower> as /trunk is the only branch in launchpad that will be ingested
[12:37] <lazyPower> i'm retooling this test a bit, but i'm confused. You're validating local.yaml, then testing with bundles.yaml?
[12:37] <lazyPower> it seems counterintuitive to validate a bundle we're not using.
[12:40] <lazyPower> ok, so you're setting values in one bundle, then loading a deployment bundle, and passing the values into that bundle.
[12:40] <lazyPower> May I make a suggestion?
[12:41] <suchvenu> I am using both
[12:41] <suchvenu> sure
[12:43] <lazyPower> Move this validation logic out of the class setup constructor, and make the local.yaml optional. The charm should have sane defaults and do something meaningful if config is not provided.
[12:43] <lazyPower> if its present, invoke a method that loads, validates, and returns a dict of these config options that you can then just pass to d.configure('db2', options_dict)
[12:44] <lazyPower> if that options_dict isn't present, it should deploy the charm as-is, and the charm should respond in kind: either no-op, or set a status that it's pending data from the user.
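The pattern lazyPower suggests might look roughly like this (a sketch under assumptions, not the actual merge proposal; JSON is used here as a stand-in for the YAML parsing so the example needs only the standard library):

```python
import json
import os

def load_config_options(path):
    """Sketch of the suggested pattern: the local config file is
    optional.  If it's missing, return an empty dict so the charm
    deploys as-is with its sane defaults; if present, load, validate,
    and return a dict of options.  JSON stands in for YAML here."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        options = json.load(f)
    if not isinstance(options, dict):
        raise ValueError("config file must contain a mapping of options")
    return options

# In the amulet test this would become something like:
#   options = load_config_options("tests/local.yaml")
#   if options:
#       d.configure("db2", options)
#   else:
#       # deploy as-is; assert the charm reports it is pending user data
#       pass
print(load_config_options("/nonexistent/local.json"))  # {}
```
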
[12:44] <lazyPower> i'll mock this up and submit a MP
[12:52] <suchvenu> ok. Did you try to run the test in your env? Are you getting the error which I got ?
[12:53] <marcoceppi> mbruzek: re yesterday, instead of set +e, just wrap it in an if statement and have grep exit based on match
[12:54] <marcoceppi> mbruzek: the if block will keep it from exiting, and you'll get your result
[12:54] <mbruzek> The if is what I was looking for.
[12:54] <mbruzek> marcoceppi: thanks for the follow up
[12:56] <marcoceppi> mbruzek: `if ! grep "lolo-don'texist" ~/.juju/environments.yaml; then echo "That does not exist in environment file"; fi`
[12:56] <marcoceppi> mbruzek: as an example
[12:57] <lazyPower> suchvenu: i did get the same error. I think this is a byproduct of loading a yaml for the deployment
[12:57] <lazyPower> marcoceppi: when loading a bundle in amulet, just defining the service name without JUJU_TEST_CHARM exported defaults to looking at the store w/ the default series does it not?
[12:58] <suchvenu> so what is the output you are getting ?
[12:58] <lazyPower> suchvenu: however, when explicitly passing local:trusty/db2 in the bundle as the charm, it behaves as expected.
[12:58] <lazyPower> and i've got this working, let me pastebin my work for your review
[12:58] <suchvenu> At least did it reach the install hook?
[12:59] <lazyPower> https://gist.github.com/chuckbutler/4b0391675e77ae8ea62d
[13:00] <lazyPower> its deployed the charm and is pending a machine allocation from AWS
[13:00] <lazyPower> s/allocation/enlistment
[13:00] <marcoceppi> lazyPower: any service provided without a fully qualified URL assumes it's a charmstore charm unless the charm is specifically provided during the d.add() method or if it's the JUJU_TEST_CHARM
[13:00] <lazyPower> marcoceppi: thats what i thought, thanks for confirmation.
[13:00] <marcoceppi> lazyPower: if a bundle is loaded, the charm: key is passed to d.add
[13:01] <lazyPower> suchvenu: according to marcoceppi's reply above - this is side-effecty behavior of how you're constructing this test. if you did an implicit d.add('db2') it would do the right thing by default, vs this yaml load which is causing amulet to poll the store for db2
[13:02] <marcoceppi> lazyPower: hum
[13:02] <marcoceppi> suchvenu: lazyPower hum
[13:02] <lazyPower> marcoceppi: i dont know that i buy that ;) if you change the charm: key in this yaml to 'db2' - it does the wrong thing inherently.
[13:02] <marcoceppi> I see the problem
[13:02] <marcoceppi> yeah, because it bypasses the logic in the add method
[13:02]  * lazyPower nods
[13:02] <marcoceppi> let me check the load method, we may need to add a case where it checks JUJU_TEST_CHARM there
[13:02] <marcoceppi> we have a 1.10.2 release on deck for another bug reported on friday
[13:03] <marcoceppi> so we may be able to patch this in that release as well
[13:03] <lazyPower> nice
[13:03] <lazyPower> suchvenu: another thing to be aware of - when i first pulled this i got a ton of import errors; you're working in the context of python3, and any modules you import will need to be added to 00-setup
[13:03] <mbruzek> lazyPower: do you have a link to the aufs docker issue you pointed me at yesterday?  Something about the extra libraries needed to be loaded?
[13:04] <lazyPower> suchvenu: such as apt-get install -y python3-yaml
[13:04] <lazyPower> mbruzek: i have one better
[13:04] <marcoceppi> lazyPower:  suchvenu can one of you report this as a bug real quick against lp:amulet?
[13:04] <lazyPower> mbruzek: i have a branch that implements it!
[13:04] <lazyPower> marcoceppi: on it
[13:04] <suchvenu> oh ok
[13:04] <lazyPower> suchvenu: i'll take the bug report, let me know if you have any questions about the test modifications i posted in a Gist for you
[13:05] <suchvenu> I couldn't open it
[13:05] <suchvenu> "if you did an implicit d.add('db2') it would do the right thing by default, vs this yaml load which is causing amulet to poll the store for db2" -> you mean I need to add d.add('db2') to the amulet code?
[13:05] <lazyPower> mbruzek: https://github.com/chuckbutler/docker-charm/tree/aufs-impl
[13:06] <lazyPower> suchvenu: you cannot load gist.github.com urls?
[13:06] <lazyPower> ok, i can push this to launchpad and send you a MP so you get a proper diff, 1 moment
[13:06] <marcoceppi> lazyPower suchvenu I found the logic fallacy
[13:06] <marcoceppi> if charm is supplied it never gets to check if the service/charm being deployed is in fact the cwd
[13:07] <suchvenu> yes, i could open the link now
[13:07] <marcoceppi> lazyPower suchvenu have you tried just setting charm to "db2"?
[13:08] <marcoceppi> in the bundle?
[13:08] <marcoceppi> that should work
[13:08] <marcoceppi> and avoid a charm store lookup
[13:08] <lazyPower> marcoceppi: i have
[13:08] <lazyPower> and it broke
[13:08] <lazyPower> suchvenu: https://code.launchpad.net/~lazypower/charms/trusty/db2/test-fixup/+merge/264709
[13:08] <lazyPower> same code, but in a merge proposal format for your diff viewing pleasure ^
[13:09] <lazyPower> marcoceppi: sorry broke is the wrong word here - it misbehaved. it polled the charm store.
[13:09] <marcoceppi> lazyPower: according to that diff you've changed it from db2 to local:trusty/db2
[13:09] <lazyPower> marcoceppi: because it was polling the charm store otherwise.
[13:10] <marcoceppi> what version of amulet is being used, the code says it should be doing otherwise
[13:10] <marcoceppi> unless the directory the charm is in is not named "db2"
[13:10] <marcoceppi> which is silly to begin with, and we'll make sure it uses metadata.yaml to determine the name, but that's what's up
[13:10] <lazyPower> http://paste.ubuntu.com/11877650/
[13:11] <lazyPower> dir name is db2, JUJU_TEST_CHARM is exported as db2
[13:13] <marcoceppi> lazyPower: https://github.com/juju/amulet/blob/863b3bbc0488eaadb8b7fcf4286b31026b1c8c68/amulet/charm.py#L44
[13:13] <lazyPower> shenanigans
[13:13] <lazyPower> i dont know why this misbehaved then :|
[13:14] <marcoceppi> lazyPower: me neither, is that MP the best to use to replicate?
[13:15] <lazyPower> unwind the changes to the bundles.yaml and it will be
[13:16] <marcoceppi> lazyPower: should I just use suchvenu's branch instead? or just undo the bundle?
[13:17] <lazyPower> i would undo the bundle changes
[13:19] <lazyPower> suchvenu: another suggestion for improvement in user experience: normalize your use of dash and underscore in the config options
[13:19] <lazyPower> minor nitpick, but small things like this count :)
[13:22] <suchvenu> hi.. lot of messages when i was away...
[13:22] <suchvenu> So should I try with the changes you sent
[13:23] <suchvenu> or just try charm: "local:trusty/db2" in bundles.yaml file ?
[13:24] <marcoceppi> hey stub are you around? Is the new cassandra charm able to run as a single node? I don't see the configuration option anymore
[13:25] <lazyPower> suchvenu: marco was asking with regard to testing amulets behavior
[13:25] <lazyPower> suchvenu: i highly recommend you adopt that test pattern i sent in a MP, and test cases where you have empty config, and provided config.
[13:27] <lazyPower> suchvenu: if the charm has behavior that's expected when you provide none of those configuration options, it should be clear to the user there is expected input from them :) this will help you achieve that requirement.
[13:28] <suchvenu> so in order to deploy the charm from my local dir and not look at the charm store, i need to give "local:trusty/db2" in the bundles.yaml file, right?
[13:33] <lazyPower> correct, until marcoceppi has a chance to look into the side-effecting behavior in amulet and issue a patch.
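Concretely, the workaround discussed above would be a bundle stanza along these lines (an illustrative fragment; the unit count is a placeholder, only the charm: line matters here):

```yaml
db2:
  charm: "local:trusty/db2"
  num_units: 1
```
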
[13:37] <suchvenu> I still get the same error
[13:38] <suchvenu> charm@islrpbeixv665:~/charms/trusty/db2$ python tests/10-bundles-test.py E
[13:38] <suchvenu> when I only changed the bundles.yaml file
[13:38] <lazyPower> That error, is an error in your python
[13:39] <lazyPower> the test refactoring i submitted resolves that issue.
[13:48] <stub> marcoceppi: yes. It can be run with a single node or multiple - no configuration needed.
[13:48] <stub> marcoceppi: Just don't create any keyspaces with a replication factor higher than your node count.
[14:00] <marcoceppi> stub: cool, won't do
[14:00] <marcoceppi> stub: just helping Adam port our cassandra-stress action to the new charm
[14:00] <marcoceppi> so I just need one node to make sure parsing is working and the parameters are there
[14:00] <marcoceppi> cassandra 2.0 and 2.1 ship with different stress tools
[14:01] <stub> marcoceppi: Ta. I would have done that, but back then it was 2.0 by default and we needed 2.1 because 2.0 doesn't do authentication
[14:01] <sto> Is there a good document explaining how to configure neutron when deployed with juju and quantum-gateway?
[14:01] <marcoceppi> stub: no worries, we've got most of the ground work there, and we'll look to tap you for help with a postgresql benchmark (when you have time)
[14:02] <stub> marcoceppi: Extra points for an external load generator that can be scaled out :) I'd love to be able to show the difference between 5 quad core boxes and 20 single core boxes with 25% of the RAM.
[14:02] <marcoceppi> sto: afaiu quantum-gateway is no longer supported for deployments with openstack, instead neutron-gateway should be used. coreycb and the openstack charmers have better insight into that
[14:03] <marcoceppi> stub: right now stress is part of the charm, but we're working on creating a "nosql" load generation tool that cassandra can hook up to
[14:03] <marcoceppi> stub: we could also create cassandra-stress as a standalone charm, but I'm not sure how that would work, right now it's just run on one of the nodes in the cluster
[14:04] <sto> marcoceppi: any doc pointers? what charms should I use now?
[14:04] <stub> marcoceppi: The cassandra-stress standalone charm would just need to install the packages and make use of the client relationship to the real cluster. Not sure if it is easier to start from scratch, or fork the cassandra charm and give it a lobotomy.
[14:05] <coreycb> sto, so you have a deployment and want to configure neutron?
[14:06] <marcoceppi> sto neutron-gateway with the neutron-api charm AIUI
[14:06] <sto> coreycb: I have a deployment but I can redeploy it, I'm testing
[14:06] <marcoceppi> stub: we'd probably just start from scratch to install the stress tool and then move our actions over and update them to use the relation data. Would you say that's the best way to proceed rather than having it as an action on cassandra?
[14:07] <coreycb> sto, ok just curious if you're asking about configuring the neutron charms vs configuring the neutron service (subnets, routers, etc)
[14:08] <stub> marcoceppi: It is the only way to get a genuine benchmark, even for non-clustered services. Otherwise you are measuring how efficient your load generator is.
[14:08] <sto> coreycb: I'm interested on both, if you have some documentation pointers that will be great ...
[14:08] <marcoceppi> stub: does cassandra-stress scale?
[14:08] <coreycb> sto, sure
[14:09] <stub> marcoceppi: Cool demo - run benchmark, add unit, run benchmark again, demonstrate double load
[14:09] <sto> marcoceppi: I only see a neutron-gateway-next charm at https://jujucharms.com/u/landscape/neutron-gateway-next/trusty/5 am I missing something?
[14:09] <stub> marcoceppi: I don't know about scaling the stress tool horizontally. But you *could* just run it on each unit at the same time, and divide the results by the number of units (?)
[14:10] <marcoceppi> stub: I guess, but if the point is to have an external load-gen, and cassandra-stress is that load gen, I guess we'd just scale that machine vertically - give it a giant machine to churn data through
[14:10] <marcoceppi> then scale the cluster to see the effects of throughput?
[14:10] <coreycb> sto, I think the easiest all in one thing to look at is a bundle - this is the default bundle we use to deploy openstack for testing
[14:10] <coreycb> http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/sparse/default.yaml
[14:10] <jose> jcastro: ping
[14:10] <marcoceppi> if it's the same, then the stress tool is hitting ceilings, otherwise, note results
[14:11] <jcastro> jose: yo
[14:11] <jose> jcastro: quick PM?
[14:11] <jcastro> yes
[14:11] <marcoceppi> stub: but I agree, a great demo would be to stress one cassandra node, scale to 10, stress again
[14:11] <stub> marcoceppi: yes. Anyway, that is a limitation of the tool we need to live with. And demonstrating that the stress tool flatlines before the juju deployed cluster does is cool too.
[14:11] <coreycb> sto, if you want to dig into README's and config.yaml for the individual charms you can, but the bundle should get you started
[14:11] <marcoceppi> stub: agreed, cool, we'll finish up our work on the stress action, then just move it to its own charm when we're confident it's working
[14:12] <coreycb> sto, are you familiar with juju-deployer?
[14:12] <marcoceppi> it's easier to troubleshoot when it's all on one box
[14:12] <sto> coreycb: I've been there already, now I'm deploying by hand
[14:12] <sto> coreycb: I need to put some services in known hosts and found it difficult with bundles and juju-deployer
[14:12] <coreycb> sto, ok fair enough, it's still a good reference for you to make sure you're using the right charms (e.g. quantum-gateway is deprecated)
[14:13] <sto> coreycb: not when I started... ;)
[14:13] <sto>     "neutron-gateway":
[14:13] <sto>       charm: "cs:trusty/quantum-gateway-16"
[14:13] <sto> From the bundle
[14:13] <coreycb> sto, actually, sorry quantum-gateway isn't deprecated quite yet --- it'll be deprecated end of july when the next round of stable charms are released
[14:14] <sto> coreycb: I'm deploying kilo, which charms should I use, the neutron-gateway-next?
[14:15] <coreycb> sto, no don't use the next branches unless you are testing something new
[14:15] <coreycb> sto, use the branches in that default.yaml I pasted
[14:16] <sto> lp:charms/trusty/quantum-gateway
[14:16] <sto> So it is not renamed yet
[14:17] <coreycb> sto, that's correct, sorry I misspoke.  it's only deprecated in our next charms which is what we use for development.  the next charms will be released to stable end of july.
[14:18] <sto> ok
[14:19] <sto> coreycb: and about how to configure the service? is there a document with a description about how the charm leaves things?
[14:20] <coreycb> sto, just the README's for the charms
[14:21] <coreycb> sto, the config.yaml files are also useful to understand config options and defaults
[14:21] <coreycb> sto, e.g. https://api.jujucharms.com/charmstore/v4/trusty/quantum-gateway-16/archive/config.yaml
[14:21] <stub> marcoceppi: Apple apparently has a 75,000 node cluster with 10PB, so that seems like a good target.
[14:21] <coreycb> sto, the charms default to using ovs
[14:22] <marcoceppi> stub: we're working on reproducing the Google Cassandra 1M writes benchmark they blogged about a few months ago, I don't think my wallet is big enough to shell out for that many cloud instances ;)
[14:23] <coreycb> sto, have you come across this?  https://jujucharms.com/openstack-base/34
[14:24] <sto> coreycb: I know that, but I don't see right now what I have to set up on openstack to access the public network
[14:25] <coreycb> sto, that link I just posted has some directions on setting up the ext-net, and initial tenant
[14:25] <sto> coreycb: I'll look into neutron-ext-net
[14:26] <sto> Maybe the problem is that I have a wrong network setup on the nodes
[14:45] <marcoceppi> sto: fwiw, I'm having similar issues with getting ext-net to run, if you get yours figured out let me know (and I'll try to do likewise)
[14:49] <sto> marcoceppi: I'll do, my guess is that my use of network interfaces is wrong, tomorrow I'll review the physical connections
[16:39] <jogarret6204> hi. I'm looking for information about how parameters that are NOT shown in juju get xxxx are getting down to nodes.  example is tenant_network_types in ml2.ini
[17:22] <lazyPower> jogarret6204: i'm not sure i understand what you're asking. Is this configuration you're passing to services that doesn't appear to be getting set on a deployment?
[17:26] <jogarret6204> it's not in the services, but in the config template in the charm..
[17:27] <jogarret6204> charms/trusty/neutron-api/templates/icehouse/ml2_conf.ini:type_drivers = gre,vxlan,vlan,flat,
[17:27] <jogarret6204> is on maas/juju server
[17:27] <lazyPower> ddellav: beisner ^ do you guys have a moment to eyeball this a bit? i'm out of my depth here as i'm not familiar with neutron
[17:28] <jogarret6204> I added tenant_network_types = gre,vxlan,vlan,flat,local
[17:28] <lazyPower> jogarret6204: sorry, i'll try to get some eyes on this from people that are more familiar with neutron.
[17:28] <jogarret6204>   <----local
[17:28] <jogarret6204> It's more of a juju state question.
[17:28] <lazyPower> s/neutron/the neutron charm/
[17:29] <jogarret6204> I changed the ml2.ini inside a container, and I also changed it on the maas/juju server
[17:29] <jogarret6204> I added that local option at the end..
[17:29] <jogarret6204> today, it is back to the old one
[17:31] <jogarret6204> ml2_conf.ini:tenant_network_types = gre,vxlan,vlan,flat
[17:31] <jogarret6204> so I'm assuming there must be something in the middle (perhaps on the juju VM used for deploying?) that needs to be set
[17:32] <jogarret6204> thanks for looking, btw..  I know you are all busy
[17:33] <lazyPower> jogarret6204: did you deploy from the local charm or did you juju upgrade-charm neutron --switch local:trusty/neutron ?
[17:33] <lazyPower> if you updated on the server itself, those charm changes were not stored on the state server, and any number of things could have happened that caused the reversion, such as a charm upgrade which is an atomic update of the charm code
[17:34] <lazyPower> the way juju delivers a charm is it ships a tarball and nukes/repaves the charm dir for the most part. so any localized modifications you had in prod/staging/etc will not persist when a charm upgrade is issued.
[17:36] <lazyPower> jogarret6204: the only way to ensure those charm modifications are persisted is by switching the source of that charm from cs: to local: or a namespace,  or to have deployed from one of those resources initially
[17:36] <jogarret6204> this was local deployment
[17:36] <jogarret6204> mattrae is involved.  :-)
[17:36] <lazyPower> ugh, so local deployment and something overwrote the config. there is probably something in the charm that's generating that config template which stripped the option
[17:37]  * lazyPower makes with magic hand waving
[17:37] <jogarret6204> can I trigger this "refresh" at all to see where it's happening?
[17:38] <lazyPower> i would think so, you can attach over debug-hooks/dhx, and then re-trigger the action required
[17:38] <lazyPower> either by upgrading the charm to run "normal hook contexts" or by removing/re-adding the relations to invoke those hook contexts
[17:47] <beisner> hi jogarret6204, generally speaking, charm config options are addressable via juju get / juju set.  anything not represented in juju get / juju set is either formulated by, or hard-coded by the charm and/or its included templates.
[17:47]  * beisner tries to assimilate the objective via backscroll
[17:48] <jogarret6204> ok - that's what I was seeing. no get/set option for this setting.  so I changed the central file
[17:48] <beisner> jogarret6204, are you trying to add "local" to the tenant_network_types in the deployed unit?
[17:48] <jogarret6204> and the remote file
[17:48] <jogarret6204> yes
[17:49] <beisner> jogarret6204, i see.  i've not flipped that particular switch personally, and i know we don't test that scenario.  if there is a use case for it, we are happy to take suggestions/proposals in the form of a new bug against the charm in question.
[17:50] <beisner> jogarret6204, also be aware that any time you manually modify a conf that is also parsed/managed by juju, you run the risk of losing that setting if/when things like config-changed happen, maybe even on a relation change and/or charm upgrade (?).
[17:51] <jogarret6204> I don't know how prevalent it would be, but in our case, an outside API call to OpenStack is requesting the option to do a local network.
[17:51] <jogarret6204> so we need a way to add it..
[17:52] <beisner> jogarret6204, i'd say if it is a real world use-case, it's a feature worth considering.
[17:52] <jogarret6204> now we will know to have it in the initial design tho...
[17:53] <jogarret6204> so is there something between the charms local directory, and the deployed option out in the container?
[17:53] <jogarret6204> something on the juju VM that I need to change?
[17:54] <beisner> just so i make sure i follow - we're switching gears from "how do i change this conf, and keep it changed?"  to  "how should I use neutron with a local tenant_network_type?"
[17:55] <jogarret6204> no
[17:55] <jogarret6204> change config and keep it changed is exactly what I need
[17:55] <beisner> ok i misunderstood
[17:56] <jogarret6204> central server has local configured in it (using local charms)
[17:56] <jogarret6204> remote containers is showing this:
[17:56] <jogarret6204> tenant_network_types = gre,vxlan,vlan,flat
[17:57] <beisner> jogarret6204, you could modify the charm to either add a config option to manage that item, or modify the charm to include local as a hard-coded thing in the template;   then do a juju charm upgrade.   both would be forking the charm in effect.
[17:57] <jogarret6204> I changed both the central and the remote a few days ago to have the local option:
[17:57] <jogarret6204> can I do "upgrade" with a local charm?  that sounds perfect
[17:57] <beisner> jogarret6204, i think the-right-thing-to-do (tm) is to teach the relevant charms to manage that configuration directive via a charm config option.
[17:58] <beisner> then propose that back, have it reviewed, merged, and never have to worry about it again ;-)
[17:58] <beisner> i've got to run to a meeting, bbl
[17:58] <jogarret6204> k - thanks for the help
[18:00] <beisner> yw  jogarret6204 -  as far as doing the local charm upgrade, i think the docs are pretty good around that, or lazyPower ;-)  <<
[18:02] <lazyPower> beisner: you never know
[18:02] <lazyPower> i try to be thorough
[18:22] <marcoceppi> lazyPower: beisner there aren't any docs on upgrading, I just checked :\
[18:22] <marcoceppi> in the jujudocs
[19:05] <beisner> marcoceppi, oh i must've been thinking of this ;-)  http://marcoceppi.com/2015/01/force-upgrade-best-juju-secret/
[19:06] <beisner> also, upgrade-charm doc:  https://jujucharms.com/docs/stable/commands
[19:06] <beisner> jogarret6204 fyi ^^
[19:08] <jogarret6204> beisner - mine were all local.  did not work to copy down the changed parameters
[19:09] <jogarret6204> remote:  tenant_network_types = gre,vxlan,vlan,flat
[19:10] <jogarret6204> local: tenant_network_types = gre,vxlan,vlan,flat,local
[19:11] <jogarret6204> I don't mind changing the remote files manually - but something changed them back over this past weekend. so I'm trying to stop that part
[19:11] <jogarret6204> although if it's a coworker then maybe you can't help.  :-)
[19:37] <pmatulis> when i attempt to use juju-deployer i get a msg saying the environment is already bootstrapped. i then need to forcibly destroy the environment in order to proceed. strangely, destroying before using deployer says the env does not exist. any tips?
[19:38] <jogarret6204> beisner:  upgrade-charm works.  just took some time
[19:39] <tvansteenburgh> pmatulis: are you passing -e <envname> ?
[19:40] <pmatulis> tvansteenburgh: no
[19:40] <tvansteenburgh> pmatulis: try that :)
[19:40] <ddellav> pmatulis: also may need to use the force flag for destroy-environment. I had that issue the other day
[19:41] <tvansteenburgh> pmatulis: deployer should do the right thing re bootstrapping
[19:41] <pmatulis> ddellav: that's what i meant by 'forcibly'
[19:41] <pmatulis> tvansteenburgh: ok, lemme try it
[19:41] <ddellav> gotcha
[19:57] <pmatulis> tvansteenburgh: nope, didn't work (http://paste.ubuntu.com/11879399/)
[19:59] <tvansteenburgh> pmatulis: you can't tell it to bootstrap if it already is
[20:00] <pmatulis> tvansteenburgh: what i did: on a fresh vm i copied ~/.juju/environments.yaml in (default set to petermatulis env) and issued the deployer command. that's all
[20:03] <tvansteenburgh> pmatulis: is there a ~/.juju/environments/petermatulis.jenv file?
[20:04] <pmatulis> tvansteenburgh: right now? or before i issued any commands?
[20:04] <tvansteenburgh> right now
[20:04] <pmatulis> yes, there is
[20:05] <tvansteenburgh> and you're sure it wasn't there prior to running deployer?
[20:06] <pmatulis> yes, i'm sure it wasn't there. i even created the .juju directory manually. then scp'd environments.yaml over
[20:06] <tvansteenburgh> weird, what kind of vm?
[20:07] <pmatulis> openstack instance
[20:11] <tvansteenburgh> pmatulis: okay, i thought maybe it was lxc, but if it's not, i'm fresh out of ideas. no idea why that's happening
[20:17] <lazyPower> pmatulis: aiui - the glance bucket that gets created keeps a sentinel file in it w/ environment state/data
[20:18] <lazyPower> pmatulis: check your control-bin and see if a STATE file exists in there. if the machines are completely destroyed, you can safely delete that file and retry
[20:18] <lazyPower> er, swift not glance.
[20:19] <pmatulis> lazyPower: sorry, where is my control-bin?
[20:19] <lazyPower> pmatulis: you define it in your openstack environments.yaml config
[20:19] <lazyPower> eg:         control-bucket: lazypower-8286c32025b604f3815681ac40ac8551
[20:20] <pmatulis> lazyPower: yes, there is 'control-bucket: <some UUID thing>'
[20:20] <lazyPower> thats your swift control-bucket
[20:20] <pmatulis> cool
[20:20] <lazyPower> there's a sentinel file in there
[20:20] <lazyPower> so long as it exists, juju will think it's bootstrapped and do its best to attempt to recreate the JENV by communicating with the API through what's in that control file.
[20:21] <pmatulis> lazyPower: can i just erase that line? or is it required?
[20:21] <lazyPower> natefinch: right? ^ i know this is the behavior of AWS and i'm 90% certain this is the case with the OpenStack provider.
[20:21] <lazyPower> pmatulis: you can probably just generate a new control-bucket, it really depends on what your ACLs are on your openstack instance
[20:22] <pmatulis> lazyPower: lemme alter that UUID thingy. i just recreated a fresh instance
[20:23] <natefinch> yeah pretty sure you can just comment out that line and it'll create one from scratch
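(For illustration, a fragment of a juju 1.x OpenStack-provider environments.yaml with that line commented out — the environment name and bucket value here are placeholders, not pmatulis's actual config.)

```yaml
# ~/.juju/environments.yaml (fragment, OpenStack provider)
environments:
  petermatulis:
    type: openstack
    # Leave control-bucket out (or commented) and juju will generate
    # a fresh one for you at bootstrap time.
    # control-bucket: lazypower-8286c32025b604f3815681ac40ac8551
```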
[20:23] <lazyPower> ta, i thought so.
[20:23] <natefinch> needing that line in there is a holdover from before we really knew what we were doing ;)
[20:23] <lazyPower> natefinch: we have a TODO to remove that right? make it non-required so we can support providers w/out object storage
[20:24] <natefinch> lazyPower: we already do support providers without storage.  Those changes landed like 9 months ago
[20:24] <lazyPower> i thought so, but i wasn't going to quibble with evidence that states otherwise ;P
[20:24] <lazyPower> those particular providers are highly jenv centric right? if the jenv goes away you're basically in trouble.
[20:26] <marcoceppi> beisner: that's just the output of `juju help upgrade-charm`. honestly, upgrade-charm should be documented just like deploy is
[20:26] <natefinch> well, sort of. I mean, obviously you need something local that can figure out where to talk to.  I'm not sure what the minimum amount of info is that you need to connect
[20:27] <lazyPower> ok, i may run a chaos lab over the weekend and find out for ya ;D
[20:28] <lazyPower> thanks for the clarification nate o/
[20:28] <natefinch> welcome
[20:33] <pmatulis> lazyPower: you nailed it; i incremented the value/ID by one and that did the trick. muchos!
[20:33] <lazyPower> anytime pmatulis :) happy to help
[20:34] <lazyPower> pmatulis: anytime that fails to work, a juju destroy-environment --force , should however clean up that sentinel file
[20:34] <lazyPower> if it doesn't, it's bug-worthy and should probably be filed.
[20:34] <lazyPower> as --force is taking a chainsaw to a bread and butter party.
[20:35] <pmatulis> yeah, since i expected the error i thought i could do that before deployer but evidently not
[20:37] <lazyPower> pmatulis: what were you trying to do if i can be nosey? we may have tooling already to help
[20:40] <pmatulis> lazyPower: set up openstack :)
[20:41] <lazyPower> ah ok
[21:03] <jogarret6204> experts: not sure if I need to open a bug for an issue, after I fixed it. looking for advice. openstack dashboard x 3 with horizon ha-cluster. 2 of 3 nodes had the VIP in the corosync.conf file somehow. So no quorum would come up
[21:03] <jogarret6204> chmod the file, replace the VIP address with the LXC address, quorum comes up
[21:04] <jogarret6204> second time I've seen this
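(An illustrative corosync.conf fragment of the failure mode being described. The channel never shows the real file, and whether the offending field was `bindnetaddr` or a nodelist entry isn't said — this sketch assumes the unicast nodelist form, and all addresses are made up.)

```
# /etc/corosync/corosync.conf (fragment) -- managed by the hacluster
# charm; shown only to illustrate the symptom, not as a reference config.
nodelist {
    node {
        # BAD: two of the three units ended up with the shared VIP here,
        # e.g. ring0_addr: 10.0.0.100, so quorum never formed.
        # GOOD: each node should carry its own (LXC) address:
        ring0_addr: 10.0.3.57
        nodeid: 1
    }
}
```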
[21:28] <lazyPower> jogarret6204: certainly bugworthy
[21:28] <lazyPower> jogarret6204: include steps to reproduce
[21:29] <jogarret6204> lazyPower: that's the hard part. dashboard down. scratch head. start restarting stuff.
[21:29] <jogarret6204> don't know how it gets that way is my point
[21:29] <jogarret6204> but I'll go open something to track
[21:30] <lazyPower> jogarret6204: did you use a bundle to deploy, deploy manually, etc.?
[21:31] <lazyPower> if you have that info that should be a stellar start. it may be a path we don't have in CI
[21:31] <jogarret6204> ok.  btw - where to open?  I've only used juju-core before
[21:31] <lazyPower> i'm thinking horizon charm, let me fetch a link
[21:31] <lazyPower> 1 moment
[21:32] <lazyPower> jogarret6204:  https://bugs.launchpad.net/charms/+source/openstack-dashboard
[22:06] <beisner> o/ jogarret6204, please do file a bug ... that'd be stellar indeed.  it's important that we know some details about the deployment:  juju version, ubuntu release, openstack version, the charms' charmstore version, and the configuration options used for each charm.  hopefully that is all expressed in a bundle that you can sanitize and attach.  juju status output is also really helpful.
[22:07] <beisner> ps thanks jogarret6204, lazyPower
[22:07] <jogarret6204> sure.  bug is opened, already got info request from Billy Olsen
[23:10] <rick_h_> NOTICE: having an issue with prodstack that is causing jujucharms.com to be unresponsive. Also means 1.24.X juju deploys will probably not be successful atm. Working with webops to keep an eye on it and correct it