[00:02] <skylerberg> marcoceppi: charm == 'tintri-cinder', self.test_charm == 'deployer'
[00:02] <marcoceppi> skylerberg: that's so very wrong
[00:03] <marcoceppi> skylerberg: how are you running the tests?
[00:04] <marcoceppi> skylerberg: charm_name (and test_charm) are derived from either os.getcwd() or if JUJU_TEST_CHARM is in the environment
[00:05] <skylerberg> I am running them with python, but I was in the wrong directory I think. I just realized deployer was the directory I was in. However, it doesn't work in tests either.
[00:05] <marcoceppi> to get around this you can export JUJU_TEST_CHARM as cinder-tintri, but this should work if you're either running bundletester, juju test, or the test file directly (ala tests/test-file-name)
[00:05] <marcoceppi> skylerberg: all the test runners execute from the charm directory root, not the tests directory
[00:06] <marcoceppi> python tests/name-of-test should work from the root
[00:06] <skylerberg> Okay, I will give that a try
[00:07] <skylerberg> Thanks, that looks like it solved that problem.
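For reference, the lookup marcoceppi describes can be sketched like this (a sketch only: `JUJU_TEST_CHARM` is the variable named above, but the helper name `guess_charm_name` is hypothetical, not the real library function):

```python
import os

def guess_charm_name():
    """Approximate how the test helpers derive the charm name:
    prefer the JUJU_TEST_CHARM environment variable, otherwise fall
    back to the basename of the current working directory."""
    return os.environ.get("JUJU_TEST_CHARM") or os.path.basename(os.getcwd())
```

This is why running `python tests/test-file-name` from the charm root (or exporting `JUJU_TEST_CHARM=cinder-tintri`) resolves the right name, while running from `tests/` or a `deployer` checkout does not.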
[00:22] <beisner> hi coreycb, looks like ceph/next needs the c-h sync
[00:38] <beisner> coreycb, also fyi, that bit i mentioned earlier buggified:  T-L deploys are blocked on bug 1486293
[00:38] <mup> Bug #1486293: trusty-liberty - multiple packages cannot be authenticated <amulet> <openstack> <uosci> <keystone (Juju Charms Collection):New> <https://launchpad.net/bugs/1486293>
[00:53] <coreycb> beisner, ceph's updated now and I'll take a look at the liberty issue
[07:21] <stub> bcsaller: https://bugs.launchpad.net/charms/+source/postgresql/+bug/1486257 could be a real life use case for composer and relation stubs.
[07:21] <mup> Bug #1486257: make port configurable and send port+protocol on the syslog relation <postgresql (Juju Charms Collection):New> <rsyslog (Juju Charms Collection):New> <https://launchpad.net/bugs/1486257>
[11:53] <magicaltrout> hello chaps
[11:53] <magicaltrout> quick question
[11:53] <magicaltrout> if I want to require tomcat, but specifically tomcat 7
[11:53] <magicaltrout> can I configure that in metadata.yaml?
[13:25] <plars> wget: unable to resolve host address ''ubuntu-14.04-server-cloudimg-amd64-root.tar.gz'
[13:25] <plars> getting this when trying to deploy things to lxc under maas
[13:25] <plars> here's a more complete log, any ideas? http://paste.ubuntu.com/12121232/
[13:27] <plars> marcoceppi maybe? or any idea who to ask? The only significant difference I can see in this environment vs. the one I have at home (that works) is the working one has juju 1.24.4 and the non-working one has 1.24.5, but I haven't updated the one at home to see if it breaks
[13:54] <beisner> gnuoy, coreycb - along the lines of liberty uca enablement in the charms, i've selectively synced the fetch bits into mongodb as a merge proposal for review @ https://code.launchpad.net/~1chb1n/charms/trusty/mongodb/sync-fetch-helpers-liberty/+merge/268413
[13:56] <gnuoy> beisner, why a selective sync? Does a full sync break mongo?
[13:56] <beisner> gnuoy, coreycb - i went minimal since it's not in the next cadence, and didn't want to introduce other potential issues with a full sync.  but i will if you think that'd be best.
[13:58] <beisner> gnuoy, i'll re-sync all of it now and let tests run.
[14:37] <marcoceppi> plars: is there any proxy in place in this environment?
[14:37] <plars> marcoceppi: no
[14:38] <marcoceppi> plars: it's trying to download the trusty template from the bootstrap node, https://10.101.49.149:17070/environment/4861d530-e810-405c-8f57-4b686db9581e/images/lxc/trusty/amd64/ubuntu-14.04-server-cloudimg-amd64-root.tar.gz but it doesn't appear to be working
[14:38] <marcoceppi> actually
[14:38] <marcoceppi> weird
[14:39] <plars> marcoceppi: yeah, it seems to complain about the certificate?
[14:39] <marcoceppi> it doesn't have a server name for the wget line
[14:39] <plars> marcoceppi: then later it's just... yeah, bad url
[14:39] <marcoceppi> looks like some weird logic tree, possibly an error. what base environment are you using? MAAS?
[14:40] <plars> marcoceppi: trusty+ppa, maas version is 1.8.0+bzr4001-0ubuntu2~trusty1
[14:41] <plars> marcoceppi: maas version is the same as the one I have at home, and works fine there
[14:41] <marcoceppi> plars: if possible I'd try downgrading to 1.24.4 - this might be a regression
[14:42] <plars> marcoceppi: that was going to be the next thing I tried, would I just need juju and juju-core?
[14:42] <plars> marcoceppi: and do you have a link to the old version somewhere?
[14:43] <marcoceppi> plars: just juju-core, juju is a meta package
[14:43] <marcoceppi> if you're on x86 I could upload a file, I don't think they're just floating around
[14:44] <marcoceppi> plars: http://ppa.launchpad.net/juju/stable/ubuntu/pool/main/j/juju-core/
[14:50] <plars> marcoceppi: thanks, downgrading it now, then I'll redeploy everything
[15:10] <g3naro> how do i juju deploy local and set the bridge device  to use?
[15:10] <g3naro> without changing /etc/lxc/default.conf?
[15:15] <plars> marcoceppi: it hasn't given up yet, but I ran juju status in another session and it still seems to have the same problem after downgrading: http://paste.ubuntu.com/12125553/
[15:16] <lazyPower> g3naro: to clarify, you want to use a juju local provider, on a different networking bridge, but not edit the default lxc network bridge?
[15:20] <puzzolo> strange problems with juju. My lxc machines don't get IPs, while maas offers them
[15:25] <g3naro> what is the configuration in /etc/lxc/default.conf ?
[15:26] <puzzolo> LXC_AUTO="true"
[15:26] <puzzolo> USE_LXC_BRIDGE="false"  # overridden in lxc-net
[15:26] <puzzolo> [ -f /etc/default/lxc-net ] && . /etc/default/lxc-net
[15:26] <puzzolo> LXC_SHUTDOWN_TIMEOUT=120
[15:28] <puzzolo> even configuring lxc statically wont do the trick
[15:29] <marcoceppi> plars: this seems like a misconfiguration somewhere, can you confirm that the agent-version for node 0 is 1.24.4?
[15:30] <puzzolo> lxc-net, which overrides lxc.conf uses lxcbr0
[15:30] <plars> marcoceppi: hmm, no it seems to be 1.24.5, but I downgraded... does it cache somewhere?
[15:30] <g3naro> hmm do you have the lxcbr0 made?
[15:30] <plars> marcoceppi: and I did re-bootstrap after downgrading
[15:31] <marcoceppi> plars: agent stream will always look for latest tools, you may need to bootstrap with an explicit version
[15:31] <plars> marcoceppi: of course, I just downgraded juju-core, not juju
[15:31] <plars> marcoceppi: how do I do that?
[15:32] <marcoceppi> plars: I'm not entirely certain now
[15:32] <puzzolo> g3naro: machine was provisioned by juju. lxcbr0 is there. Still in the container's config I get: lxc.network.link = juju-br0
[15:40] <puzzolo> i changed config to not using lxc bridge. lxcbr0 disappeared
[15:41] <puzzolo> problem is still there
[15:41] <g3naro> ok
[15:41] <g3naro> try this
[15:41] <g3naro> edit your ~/.juju/environments.yaml configuration
[15:42] <g3naro> you have an option to specify the bridge device there
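For the juju 1.x local provider, the option g3naro is pointing at is `network-bridge` in `~/.juju/environments.yaml` (a sketch; the bridge name `br0` is illustrative):

```yaml
environments:
  local:
    type: local
    # point containers at an existing bridge instead of the
    # lxcbr0 default, without touching /etc/lxc/default.conf
    network-bridge: br0
```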
[15:49] <puzzolo> i am no expert, as this is my first deployment with lxc containers, and I didn't quite get which layer of configuration comes first. Each container gets a config file which points to juju-br0, which would expose all containers to maas dhcp. And that should be ok, making the services reachable. lxcbr0 instead is a natted network for lxc containers. This seems like a bug to me.. still I can't get to understand why it fails.
[15:50] <puzzolo> juju-br0 is the right bridge for lxc-containers or should services be natted?
[15:55] <puzzolo> strange is that with the default deployment, containers do point to juju-br0, do ask dhcp.. and maas does offer them. ufw disabled everywhere. I must use tcpdump at least to see where the udp/tcp breaks
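One way to do the tcpdump check puzzolo mentions (a sketch; the interface name is taken from this log, and the capture needs root):

```shell
# watch the DHCP handshake on the bridge: discover/offer already
# show up in the maas logs, so the question is whether the
# request/ack leg makes it back through juju-br0
tcpdump -eni juju-br0 udp port 67 or udp port 68
```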
[16:00] <asanjar> kwmonroe: updated and tested hbase with jujubigdata4
[16:02] <kwmonroe> excellent asanjar!
[16:02] <plars> marcoceppi: yeah, sshing to node 0 I see that it had the old version, but also seems to have downloaded 1.24.5
[16:38] <puzzolo> my problem seems to be related to juju-br0 bridged to eth0, which goes to the maas internal network. juju-br0 does receive dhcp offers from maas for all containers, but still won't "redirect" them to the veth interfaces
[16:48] <lazyPower> puzzolo: is this a LXC networking reachability issue you're encountering? eg: Juju deploy --to lxc:# and then attempting to relate/route to those containers fails?
[16:53] <jogarret6204> hi all.   I have a juju 1.24.3 upgrade to 1.24.5 stuck.  anyone tell me how to "undo" or kick it along to finish?
[16:53] <plars> marcoceppi: is there some way to force it to downgrade? or to debug where things might be going wrong and why it's getting those errors in 1.24.5?
[16:54] <lazyPower> jogarret6204: any further details than the upgrade is stuck? has the bootstrap node completed the upgrade cycle and its now stuck pushing out to the agent nodes?
[16:55] <lazyPower> jogarret6204: also do you have any debug log output during the upgrade that could help point us to a root cause? juju debug-log can assist here, but you might have to specify a length and scroll back through the election spam if it's been a long while since you initiated the upgrade:   juju debug-log -n <number>
[16:56] <puzzolo> lazyPower: indeed. I tried deploying a juju charm, landscape-maas-dense. It installs apache2 on phys0, inquiring maas for a new node. On that machine it builds 5 lxc containers. Everything ok, till the containers attached on juju-br0 need an ip. They'll ask for it. Maas will provide them, but the acks all stop at juju-br0.
[16:58] <jogarret6204> lazyPower: ERROR WatchDebugLog not supported
[16:58] <jogarret6204> .
[16:58] <jogarret6204> dont think bootstrap node is upgraded either
[16:58] <puzzolo> it is truly frustrating.. as i can't get these containers to get an ip, or understand where i am wrong.
[16:59] <jogarret6204> machine 0: agent-version: 1.24.3.1
[16:59] <lazyPower> dimitern: ping
[17:00] <lazyPower> jogarret6204: which provider are you using?
[17:00] <jogarret6204> I'm in no hurry this is a lab.
[17:00] <jogarret6204> maas
[17:00] <lazyPower> hmm, it's not giving you a debug log, that's a weird bug
[17:02] <jogarret6204> I have machine 0 as a VM.  I can cat the machine-0 log there...
[17:03] <lazyPower> puzzolo: we had some networking changes land in 1.24 that should have addressed that
[17:04] <lazyPower> but it appears we've missed the TZ window to contact the developer, let me put out some feelers and see what i can turn up about this
[17:05] <lazyPower> i've run into this with other substrates however, where LXC networking isn't adding the forwarding rule to the machines and therefore the containers are unreachable outside of the host
[17:07] <dimitern> lazyPower, pong
[17:07] <lazyPower> dimitern: heyo sorry for the late ping :)
[17:07] <dimitern> lazyPower, no worries :) what's up?
[17:07] <puzzolo> lazyPower: it felt like a bug. Anyway to check this is my issue?
[17:07] <lazyPower> dimitern: i've had a couple questions over the networking in 1.24 wrt lxc containers. Correct me if i'm wrong but on certain substrates the forwarding should "just work" and cross host container communication w/ lxc should be enabled?
[17:09] <lazyPower> puzzolo: this conversation above is related to your scenario :)
[17:09] <puzzolo> i guessed that guys :*
[17:09] <lazyPower> jogarret6204: if you can grab the all-machines log that would be helpful
[17:10] <lazyPower> jogarret6204: that machine-0 log might have the details of whats happening with the upgrade as well, so thats probably a good place to start
[17:10] <jogarret6204> relevant log message seems to be this one
[17:10] <jogarret6204> https://10.20.0.36:17070/environment/b2c12e86-2a25-4d90-886b-e595c500f432/tools/1.24.5-trusty-amd64
[17:10] <jogarret6204> sorry - was trying it.. all of it
[17:10] <jogarret6204> failed to fetch tools from "https://10.20.0.36:17070/environment/b2c12e86-2a25-4d90-886b-e595c500f432/tools/1.24.5-trusty-amd64": bad HTTP response: 400 Bad Request
[17:12] <jogarret6204> I see my lab proxy changed when I try that...  let me go fix that
[17:29] <puzzolo> lazyPower: can this "forwarding" thingie be done by hand, or should i just wait for the devs to do the dev's things?
[17:29] <lazyPower> puzzolo: i'm  not certain what is actually put on the host in terms of forwarding, so i'm pending a response from dimitern
[17:30] <puzzolo> lazyPower: we'll wait together then. Strange is that those containers do reach maas, but get nothing back.
[17:31] <puzzolo> req ok. offer ok. never "acks"...
[17:31] <lazyPower> weird
[17:31] <puzzolo> but it sure is a forwarding issue, as static ips won't do either
[17:32] <lazyPower> yeah, i think its just an iptables rulechain or route thats added
[17:32] <lazyPower> i'm not sure which
[17:32] <lazyPower> its been a while since i've dealt with this by hand
[17:34] <jogarret_6204> lazyPower: fixed proxy.  logs still showing upgrade in progress in error logs of the juju state VM.  but nothing is upgraded.  can I back out and redo the upgrade?
[17:34] <jogarret_6204> On puzzolo issue - you guys check ebtables and ufw too?
[17:34] <lazyPower> marcoceppi: do you know if there is a way to stop an in progress upgrade?
[17:34] <dimitern> lazyPower, hmm in 1.24 we have this only in a few places
[17:34] <lazyPower> dimitern: AWS, and MAAS correct?
[17:34] <puzzolo> lazyPower: sure this thing affects quite a bunch of charms. I tried landscape just to get the feeling of what will be openstack deployment.
[17:35] <dimitern> lazyPower, a few more - AWS and MAAS with address-allocation feature flag
[17:35] <lazyPower> ahh so thats behind a feature flag during deployment?
[17:35] <dimitern> lazyPower, but otherwise by default only on MAAS
[17:35] <lazyPower> puzzolo: you're using maas correct?
[17:35] <dimitern> lazyPower, yeah, it's possible to lift it for 1.25, but it's not decided yet
[17:54] <firl> yay, I will be able to come to the juju charm summit
[17:55] <lazyPower> firl: awesome!
[17:56] <firl> yeah, I work remote in texas but the office I work with is 2 miles from the summit so it’s perfect
[18:06] <jcastro> marcoceppi: which would you say is a good hello world for explaining actions to people
[18:06] <lazyPower> jcastro: a great one would be to get troubleshooting information from a system
[18:07] <lazyPower> dump the versions of software, configuration data, and relevant things from the service
[18:07] <jcastro> sorry I meant, hello world example charm
[18:07] <lazyPower> o
[18:07] <jcastro> like, one I can point to people and then explain
[18:07] <lazyPower> i got meta there for a second, sorry
[18:07] <jcastro> no worries
[18:08] <lazyPower> etcd has a system health action
[18:08] <lazyPower> very simple to parse
[18:08] <lazyPower> https://github.com/whitmo/etcd-charm/blob/master/actions/health
[18:08] <jcastro> cool, any others?
[18:09] <lazyPower> i have some actions for DroneCI
[18:09] <lazyPower> https://github.com/chuckbutler/drone-ci-charm/tree/master/actions
[18:09] <jcastro> perfect, anyone else have some decent actions they want to share?
[18:11] <puzzolo> lazyPower: maas
[18:13] <puzzolo> jogarret_6204: i checked ufw only, defaulting it to accept all, or even disabling it, from maas down to phys to even the lxc containers
[18:14] <puzzolo> jogarret_6204: didnt try ebtables.. as it never touched me in these years to custom firewall  bridge prots
[18:18] <puzzolo> [    8.062781] IPv6: ADDRCONF(NETDEV_UP): vethQ3EQVL: link is not ready
[18:18] <puzzolo> [    8.062787] juju-br0: port 2(vethQ3EQVL) entered forwarding state
[18:18] <puzzolo> [    8.062790] juju-br0: port 2(vethQ3EQVL) entered forwarding state
[18:18] <puzzolo> [    8.065762] juju-br0: port 2(vethQ3EQVL) entered disabled state
[18:18] <jogarret_6204> puzzolo:  Those are just some things I have tried.  I'm not experienced with containers.
[18:21] <plars> marcoceppi: it's worth noting that if I use the --no-check-certificate flag to wget, I can get it to download from the url in the error, but I don't think I can inject that anywhere
[18:21] <plars> marcoceppi: It downloads ok, but still gives: WARNING: certificate common name ‘*’ doesn't match requested host name ‘10.101.49.149’
[18:23] <puzzolo> on maas, everything seems to be working just fine
[18:23] <puzzolo> Aug 19 20:23:01 maas-1 dhcpd: DHCPDISCOVER from 00:16:3e:43:ce:3f (juju-machine-11-lxc-0) via eth1
[18:23] <puzzolo> Aug 19 20:23:01 maas-1 dhcpd: DHCPOFFER on 10.1.1.136 to 00:16:3e:43:ce:3f (juju-machine-11-lxc-0) via eth1
[18:27] <puzzolo> tell me if i'm flooding too much guys. Bridge configuration seems to be outstanding
[18:28] <puzzolo> http://pastebin.com/JHvQRs3s
[18:39] <lazyPower> puzzolo: I'm not positive on where to go from here. I need to talk with dimiter more tomorrow in the AM to get a better view of whats implemented and how to consume it
[18:40] <lazyPower> puzzolo: once i've had that conversation i should be able to lend a better helping hand. You're the third user this week that's had container networking issues, and we've got work landed that should make any manual intervention moot
[18:40] <lazyPower> juju should be able to do what is right for that networking component in the stack, and if we do anything by hand it's not as reproducible as juju doing it for you :) so i hesitate to help in the creation of a snowflake
[18:41] <puzzolo> a small bit different in my setup, is that i am using a kvm machine with libvirt bridged networking.
[18:41] <lazyPower> puzzolo: i can circle back tomorrow if you're going to be here, i'll also be pinging the list with my findings to help the broader user audience in general.
[18:42] <plars> marcoceppi: I tried to fake it out by resetting the symlink under /var/lib/juju/machine-0, but it replaced it. Also tried just replacing the 1.24.5 binary with the 1.24.4 one, but it just gets stuck in an endless update loop if I do that, and doesn't let me deploy :(
[18:42] <puzzolo> thank you for your time. I'll check what those users had on lxc networking, to get a better picture myself
[18:44] <lazyPower> puzzolo: sorry i didn't have better info today. this is very much new stuff that we've been working on for the last cycle
[18:44] <lazyPower> so it's a known problem, but we've got some tools out there to help, i just need to re-up on that info and we should have you sorted in short order
[18:47] <puzzolo> no problem. One thing did catch my eye now, on lds-1, the first service unit phys machine, where containers are deployed for the other services in the charm bundle:
[18:47] <puzzolo> 2015-08-19 18:39:28 INFO juju.networker networker.go:163 networker is disabled - not starting on machine "machine-11"
[18:47] <plars> ah, hang on, it was way easier. I didn't see that upgrade-juju takes a version parameter, and it doesn't seem to try to circumvent what you specify there :)
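The pin plars found looks like this in juju 1.x (a sketch; the version number is the one from this log):

```shell
# after downgrading juju-core and re-bootstrapping, pin the agent
# tools explicitly so the environment doesn't chase the newest
# 1.24.x from the agent stream
juju upgrade-juju --version 1.24.4
```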
[18:57] <plars> marcoceppi: ok, I'm getting farther... once it's completely done, I'd like to file a bug on this. Is lp the best place for bug reports? github?
[18:57] <plars> marcoceppi: I've just about convinced myself it's a regression in 1.24.5
[19:42] <puzzolo> lazyPower: mumble mumble, can it actually be a problem regarding my managed switch?
[19:57] <lazyPower> i wouldn't think so, but its possible
[19:58] <beisner> gnuoy, coreycb - mongodb c-h sync for liberty uca - mp ready for review @ https://code.launchpad.net/~1chb1n/charms/trusty/mongodb/sync-fetch-helpers-liberty/+merge/268413
[19:58] <coreycb> beisner, I'll have to defer to gnuoy.  I can't land to mongodb
[19:59] <beisner> coreycb, ok np, thanks.   fyi, mbruzek is on the mysql sync review as soon as tests complete.
[20:00] <beisner> o/ mbruzek :-)   the queue was thick today, but that mysql amulet test is next up.
[20:00] <puzzolo> upgrading to 1.24.5 over 1.24.4
[20:00] <mbruzek> in meeting beisner I will get to that after that
[20:00] <beisner> mbruzek, yep np, the test will be +~1hr i imagine.  thanks a ton.
[20:18] <puzzolo> lazyPower: http://askubuntu.com/questions/615433/landscapes-openstack-installation-fails-due-to-containers-unable-to-obtain-an-i after a couple of hours of diggin' around, the guy in the link seems to have the same issue as me. I am not using esxi, but libvirt alone, with kvm as phys0 where the lxc containers get created. I will try to see if my problem is related to libvirt rather than juju.
[20:30] <puzzolo> I am starting to think that problem is related to macvlan on libvirt
[20:31] <puzzolo> which is a sort of cannibalized bridge, working splendidly for kvm networking... but it is not aware of the lxc containers' mac addresses inside kvm.
[20:32] <lazyPower> puzzolo: that is very possible
[20:32] <lazyPower> sorry i've been unresponsive, in a community hangout with some users of our k8's bundle
[20:32] <lazyPower> split brain today :)
[20:37] <skylerberg> I need to change the configuration in nova-compute (I need to pass in some nfs protocol settings). However, there doesn't seem to be an applicable interface for me to connect to from my charm. What is the way forward? Add an interface to nova-compute? Hijack an interface meant for something else?
[20:54] <lazyPower> skylerberg: is this just for adding NFS support? or is this to extend into a new region of NFS goodness?
[20:54] <lazyPower> skylerberg: meaning the NFS charm relation
[20:57] <skylerberg> lazyPower: I just need to edit a couple of settings in nova.conf (nfs_mount_options and maybe cpu_mode).
[20:57] <lazyPower> skylerberg: its a bad idea to have something else editing nova.conf outside of the nova charm
[20:57] <lazyPower> that smells of config race conditions, should nova ever receive a hook event that re-writes the template
[20:58] <lazyPower> iirc there is a config manager lib that you can send data to over the relation. we have a similar construct in the cinder charm
[20:58] <lazyPower> beisner: does nova have the same config manager that cinder does?
[20:58] <lazyPower> skylerberg: let me loop in our test master of openstack, if that's the case i should be able to get you a recommended path forward that has a high likelihood of getting accepted as a PR for the charms
[20:58] <skylerberg> lazyPower: Right, so I am using the cinder storage-backend relation and that is working great. I just don't see an equivalent in nova.
[21:00] <skylerberg> lazyPower: It might be nice to have a generic interface for editing nova's config. Right now it looks like all the interfaces are very specific (ceph, rabbit, etc.)
[21:00] <lazyPower> skylerberg: thats by design as its p2p orchestration in that instance :)
[21:00] <lazyPower> which makes it very clear to anyone that's relating into nova what data is being sent/received
[21:01] <lazyPower> and allows nova to respond in kind based on whats incoming, eg: installing any storage drivers, et-al
[21:01] <beisner> hi lazyPower - they both use the contexts approach with regard to collecting config data and rendering the conf template, which keeps things safe-ish.
[21:01] <lazyPower> beisner: so is it safe to link to the cinder-vnx charm to illustrate how that should be used?
[21:01] <lazyPower> or are they different context managers?
[21:04] <puzzolo> this problem is killing me
[21:05] <beisner> not sure i'm fully following.  i think the cinder-vnx + cinder charms are a good example of a subordinate affecting a principal's config, if that's what you mean.
[21:05] <lazyPower> beisner: well skylerberg is wanting to add NFS storage backend to nova
[21:06] <lazyPower> and i'm recommending using the context approach to add that config update, vs a racy interface approach that edits the config outside of the context-aware template approach
[21:06] <beisner> oh neat, for instance image storage?
[21:07] <beisner> if so, we already have nova-compute plumbed for ceph-backed instance storage, and it may be worth looking at that code
[21:08] <beisner> caveat there of course is:  all instance disk i/o then traverses the wire, so the network needs to be ROCKING.
[21:08] <lazyPower> skylerberg: seems like you've got some template code you can consume, and i would recommend adding an interface unless you're going to recycle the same data coming from nfs, then use the NFS provided interface :)
[21:09] <lazyPower> sorry that was a bit roundabout, i just wanted to make sure you were set up for success vs running into a racing config scenario. I've had my fair share of those and have lost sleep over it.
[21:09] <beisner> +1000 for race avoidance
[21:12] <skylerberg> lazyPower, beisner: So the way forward is to add an interface to nova-compute that uses the same type of mechanism as the storage-backend interface in cinder to update the config?
[21:22] <beisner> lazyPower, skylerberg - so re: design logic in adding new features, i'd prefer to pull in one of the folks like coreycb, gnuoy, wolsen or dosaboy who are primary authors.  i become familiar with the code paths through testing, but i don't generally pave new paths in these charms as such.
[21:23] <coreycb> skylerberg, sorry, catching up
[21:23] <beisner> that, and inspecting how we've already done the ceph-backed instance storage in nova-compute.
[21:25] <coreycb> skylerberg, there's a config-flags option for editing general nova.conf config options, but be careful
[21:25] <coreycb> skylerberg, in nova-compute that is
[21:28] <coreycb> skylerberg, nevermind me, saw you had a question about general nova.conf settings earlier
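The config-flags route coreycb mentions is plain charm config, set from the CLI (a sketch; the nfs_mount_options value is illustrative, and the injection outside the templates is why he says "be careful"):

```shell
# juju 1.x syntax; config-flags injects raw key=value pairs into
# nova.conf outside the charm's templated sections
juju set nova-compute config-flags="nfs_mount_options=vers=3"
```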
[21:28] <skylerberg> coreycb: It seems like I would either need to still use an interface so that users could connect my charm to nova and then my charm would set the config-flags or I would need to have instructions with my charm saying that they need to set this option on nova-compute, which seems less than ideal.
[21:29] <skylerberg> coreycb: What I need to do with nova-compute is pretty simple, it just needs to set a couple config options and make sure the service is restarted so that the config is loaded. It should be a lot like how cinder-vnx connects to cinder.
[21:32] <wolsen> skylerberg, this configuration setting needs to be applied after a subsequent charm is related to it?
[21:33] <skylerberg> wolsen: That is what I am imagining. Someone deploys cinder and nova-compute, then they connect my charm to both to alter their configurations to use the proper storage backend.
[21:35] <wolsen> skylerberg, yeah unfortunately there's not a generic interface that exists to relate nova and cinder backends, it's more specific (e.g. the ceph-client interface which ceph can be related with nova-compute)
[21:36] <wolsen> skylerberg, though there's the possibility of having a shared-storage relation or something similar that may be more generic
[21:37] <skylerberg> wolsen: I think my use case is even more generic than relating storage backends, it is just editing the config settings. Could we add a generic config setting interface?
[21:37] <wolsen> skylerberg, sorry my son & I are both sick today - can I get back to you?
[21:38] <skylerberg> wolsen: Yeah, no problem. Take it easy.
[21:40] <beisner> thanks skylerberg, lazyPower - i've also got to run, eod.  i think most of the openstack charmers are also end-of-day.  it may be worth starting a thread on the juju mailing list to state the end goal, gather input, ideas, etc.
[21:40] <beisner>  (thanks too, coreycb wolsen  )
[21:41] <wolsen> skylerberg, one of my concerns is that adding a generic config interface for the charms will now compete with the charm config itself
[21:42] <wolsen> skylerberg, so it becomes ambiguous if both the charm itself and a related charm specify "create this config option with this value" - which one is the right one?
[21:44] <skylerberg> wolsen: I see. Yeah, I will think about what makes sense for my use case and get back to you.
[21:56] <puzzolo> lazyPower: nothing. I tried everything i could, from kernel flags on the bridge to promisc and "allmulti" options on all layers of network interfaces. Still nothing. Hope to hear from you and dimitern tomorrow
[21:56] <lazyPower> puzzolo: i've got you at the top of my list to have that discussion before i switch feet back to k8's :)
[22:03] <puzzolo> thank you dude :*