[01:26] <axw> hallyn: hmm. I think you may need to set it in your environment too, to affect the client process. i.e. env http_proxy=... juju ...
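What axw is suggesting, sketched below (the proxy URL is hypothetical; the `env` prefix sets the variable for that one child process only):

```shell
PROXY=http://squid.internal:3128   # hypothetical proxy address

# env(1) exports the variable to a single child process:
env http_proxy="$PROXY" sh -c 'printf "%s\n" "$http_proxy"'
# prints http://squid.internal:3128

# The same prefix applied to the juju client (commented out; needs a real cloud):
#   env http_proxy="$PROXY" juju bootstrap vsphere mycontroller
```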
[01:37] <axw> hallyn: BTW I'm waiting on a license to test against ESXi 5.1. the free version is too crippled to work with Juju
[01:38] <axw> hallyn: from the looks of things so far, though, changing to hardware version 8 will be fine
[02:45] <hallyn> axw: hm.  seems like no matter what i try, i get http://paste.ubuntu.com/25711441/ .  trying with http_proxy in env too now, though it does also need to respect no_proxy
[02:46] <axw> no_proxy should be honoured
[04:12] <hallyn> nope, i just keep getting the same thing...
[04:12] <hallyn> "vm '*' not found" seems like it would stem from some confusion
[04:23] <hallyn> axw: http://paste.ubuntu.com/25711792/
[04:26] <axw> hallyn: yep, I don't know what that's about. can you please try 2.3-beta1? there have been significant changes to the vsphere provider since 2.0.2
[04:27] <hallyn> axw: waht the...
[04:27] <hallyn> i installed from snap intending to have 2.3-beta1
[04:32] <axw> hallyn: $PATH order I guess?
[04:32] <axw> oh, or maybe you just need to specify --edge
[04:33] <hallyn> i did sudo snap install juju --beta --classic
[04:34] <hallyn> ah yes /snap/bin/juju is 2.3-beta1
[04:34] <axw> beta, that's the one
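The confusion above is worth pinning down: an apt-installed /usr/bin/juju can shadow /snap/bin/juju, so the 2.3-beta1 snap is installed but an older binary still answers. A minimal demonstration of $PATH ordering with stub binaries (the snap commands themselves are commented, since they need snapd):

```shell
# sudo snap install juju --beta --classic   # installs /snap/bin/juju
# which -a juju                             # lists every juju on $PATH, in order

# Stub binaries standing in for the two installs:
mkdir -p /tmp/jujudemo/apt /tmp/jujudemo/snap
printf '#!/bin/sh\necho 2.0.2\n'     > /tmp/jujudemo/apt/juju
printf '#!/bin/sh\necho 2.3-beta1\n' > /tmp/jujudemo/snap/juju
chmod +x /tmp/jujudemo/apt/juju /tmp/jujudemo/snap/juju

# Whichever directory comes first in PATH wins:
env PATH=/tmp/jujudemo/apt:/tmp/jujudemo/snap:/bin:/usr/bin juju   # prints 2.0.2
env PATH=/tmp/jujudemo/snap:/tmp/jujudemo/apt:/bin:/usr/bin juju   # prints 2.3-beta1
```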
[04:34] <hallyn> looking better
[04:34] <hallyn> things are being done
[04:35] <hallyn> juju-vmdks are being uploaded
[04:35] <axw> hallyn: is this with ESXi 5.1?
[04:35] <hallyn> oh, no.  this is with a DC that only has my lone 6.0 :(
[04:36] <hallyn> i'll re-try with a 5.1 added in later, if this works
[04:36] <chamar> curious, is it with the free ESXi version?
[04:36] <axw> hallyn: ok cool. you'll need to modify the source (OVF) though for that
[04:37] <hallyn> chamar: i don't think so.  there are licenses at any rate.  (i inherited the lab...)
[04:37] <axw> chamar: I tried with vSphere Hypervisor 5.1 earlier, doesn't work. Juju wants to create folders and clone VMs, which apparently don't work with the free version
[04:38] <chamar> Thanks.  Got the same result with the free hypervisor.  sadly.
[04:39] <chamar> there are features that are not available / enabled.  Same goes with MAAS... oh well.
[04:40] <chamar> hum. removing a kubernetes-worker unit.. works well.  except it still appears in the k8s dashboard..humm
[04:48] <hallyn> axw: so is there any point in trying with a 5.1?
[04:48] <hallyn> sounds like no
[04:48] <axw> hallyn: not without source changes, no
[04:52] <axw> hallyn: did bootstrap complete on 6.0?
[04:59] <hallyn> still running
[05:00] <hallyn> axw: not without source changes to make it not try to upload the files, or do i really only need to change the min machine type?  (not sure based on what you were telling chamar)
[05:01] <axw> hallyn: just changing vmx-10 to vmx-8. the other stuff was to do with the free version
[05:01] <hallyn> axw: ok, i'll hopefully try that this week then.  one holdup has been trying to find the actual source package to pull-lp-source :)
[05:02] <hallyn> or am i gonna have to learn how to do the snap thing
[05:03] <axw> hallyn: git clone https://github.com/juju/juju/, then: make JUJU_MAKE_GODEPS=true install
[05:03] <axw> requires go 1.8+
[05:04] <axw> develop branch will become 2.3-beta2
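axw's recipe as a full sketch (commented out, since it needs network access and Go 1.8+; paths assume a conventional $GOPATH):

```shell
# git clone https://github.com/juju/juju/ "$GOPATH/src/github.com/juju/juju"
# cd "$GOPATH/src/github.com/juju/juju"
# git checkout develop                 # develop will become 2.3-beta2
# make JUJU_MAKE_GODEPS=true install   # installs juju and jujud into $GOPATH/bin
```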
[05:05] <hallyn> cool thanks
[05:07]  * hallyn sets up a build env
[05:09] <hallyn> say, can no_proxy be a subnet?
[05:14] <axw> hallyn: from 2.3-beta1 onwards, yes: https://github.com/juju/juju/pull/7885
[05:15] <axw> I suspect there's some gotchas when it comes to external processes though, when juju shells out to wget/curl/etc., because it's non-standard
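Putting the two proxy settings together, a sketch (the proxy URL and subnets are hypothetical; per axw, the CIDR form is non-standard, so tools juju shells out to, like wget or curl, may ignore it):

```shell
# env http_proxy=http://squid.internal:3128 \
#     no_proxy=127.0.0.1,localhost,10.0.0.0/8 \
#     juju bootstrap vsphere mycontroller
```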
[05:20] <hallyn> axw: nifty
[05:22] <hallyn> ok, juju build going.  can i just scp the built ~/go/bin/juju over, or do i need more?
[05:35] <hallyn> well the other juju bootstrap is still going.  new juju is built - i assume i'll have to rebootstrap?
[05:35] <hallyn> will deal with it in the morning
[05:35] <hallyn> thanks axw!
[05:35] <hallyn> \o
[05:36] <axw> hallyn: no worries, let me know if I can help any more. you'll need to scp the juju and jujud binaries, and yes you'll need to rebootstrap
[05:37] <axw> hallyn: (I assume you mean scp to wherever you're bootstrapping from - you can't just copy over the top of the binaries in a bootstrapped environment)
[05:38] <axw> also, seems like a long time for bootstrap - might be borked. if you can ssh to the VM, /var/log/cloud-init-output.log should tell you what's happening
[05:38] <axw> assuming it got that far
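The hand-off axw describes, as a sketch (host names, paths, and the VM address are hypothetical):

```shell
# Copy both binaries to wherever you bootstrap from; a client copied over
# an already-bootstrapped environment's binaries won't take effect.
#   scp ~/go/bin/juju ~/go/bin/jujud user@bootstrap-host:bin/
# Then re-bootstrap, and if it stalls, watch cloud-init on the controller VM:
#   ssh ubuntu@CONTROLLER_VM_IP tail -f /var/log/cloud-init-output.log
```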
[10:01] <xavpaice> is there any way for an lxd unit to know what the hostname of its parent host is?  Would be handy for exporting nagios checks.
[14:26] <hallyn> axw: /var/log/cloud-init-output.log on which host?
[15:17] <bdx> on CMR, what things do we want to relate across models, should we only be concerned with logical groupings?
[15:18] <bdx> for example
[15:18] <bdx> I have a web application deployed to web-app-model, and a monitoring stack deployed to monitoring-model
[15:19] <bdx> rick_h: for example, let's say I have the prometheus monitoring stack described in your blog deployed via the monitoring bundle
[15:19] <bdx> so I have this telegraf subordinate component
[15:20] <rick_h> bdx: so mentally (and you can see it based on the status output) we think folks will basically have SaaS-like setups
[15:20] <bdx> part of me wants to deploy telegraf to my "web-app-model", and make the CMR to prometheus in the "monitoring-model"
[15:20] <rick_h> so a model will be the bits needed to offer up a SaaS endpoint (or DBaaS) and such
[15:20] <rick_h> bdx: exactly
[15:21] <rick_h> bdx: so you want the subordinate on each thing (many) but only one prometheus gathering the data
[15:21] <bdx> the other part of me wants to deploy telegraf to the "monitoring-model", and make the CMR from the web server in web-app-model to telegraf in the "monitoring-model"
[15:21] <bdx> rick_h: missing the point
[15:21] <bdx> rick_h: see what I'm saying
[15:22] <rick_h> bdx: k, sec sorry otp with folks on your models issue and trying to do two things at once
[15:22] <bdx> ok, no rush
[15:22] <bdx> lol thx
[15:26] <rick_h> bdx: there might be a temp way to improve your models call until we can get some updates into juju-core and new releases/etc. So was just seeing how we can make that happen.
[15:26] <rick_h> bdx: but ok, phone over. /me rereads
[15:27] <rick_h> bdx: so, I think that you'd put telegraf in the webapp model
[15:27] <rick_h> bdx: you want that model to say that things are in fact being wired up to be monitored. telegraf is installed on each of those machines. It's a subordinate and does not affect the number of VMs and such in the web-app-models
[15:28] <bdx> totally
[15:28] <rick_h> bdx: and it'll be a LOT easier to see which future models have telegraf setup vs not
[15:28] <rick_h> bdx: that's how my brain thinks anyway.
[15:28] <bdx> right
[15:29] <bdx> rick_h: so, the way I was thinking about it was, if a user needs something monitored, I just grant access to the telegraf:juju-info endpoints
[15:29] <bdx> to that user
[15:29] <bdx> telegraf is already related to prometheus in the monitoring-model
[15:29] <rick_h> bdx: thinking...for some reason I really don't like subordinate relations over CMR...but I'm not 100% sure why
[15:30] <bdx> so then I could essentially gate which users could monitor things by granting access to the telegraf:juju-info endpoint
[15:30] <bdx> yeah, feel you on that
[15:30] <bdx> rick_h: possibly because we want the charm to live on the controller of the model it's being deployed into
[15:30] <rick_h> bdx: so...but at that point you're locked into telegraf
[15:30] <rick_h> bdx: vs just "send stuff to prometheus"
[15:30] <bdx> ahh, I see
[15:31] <bdx> yeah
[15:31] <rick_h> bdx: so if you used anything else, you'd need those setup as well
[15:31] <bdx> totally
[15:31] <rick_h> bdx: I think it's because prometheus is basically a database. I'd want to control access to the DB, not which apps are already wired to the DB
[15:32] <bdx> entirely
[15:32] <bdx> that makes sense
[15:32] <rick_h> bdx: so I think what you're asking "would work" but it just feels really off in my head
[15:32] <bdx> yeah, it does now for me too
[15:32] <bdx> rick_h: thanks for being the voice of reason here
[15:32]  * rick_h writes that down on the calendar "voice of reason!" :P
[15:33] <bdx> I'm hooking up my first live CMR deploy with monitoring decoupled from the web app stuff, and the db decoupled from the web app stuff too, pretty exciting this is finally happening
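A sketch of that wiring with 2.3-era CMR commands (commented out; the model, user, and endpoint names are hypothetical, and exact syntax may differ per the juju 2.3 docs):

```shell
# In the monitoring model: offer prometheus's scrape endpoint to other models.
#   juju offer monitoring-model.prometheus:target
# In the web-app model: consume the offer and relate the telegraf subordinate.
#   juju consume admin/monitoring-model.prometheus
#   juju add-relation telegraf:prometheus-client prometheus
```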
[15:34] <bdx> I'm expecting it all to work, 1st try, using beta2
[15:34] <bdx> jp
[15:34] <bdx> :)
[15:34] <bdx> high hopes
[15:34] <bdx> ^^
[15:36] <rick_h> bdx: know that the prometheus charm needs an update to use the new networking stuff. I'm working on tests against charm-helpers to add the tooling for it
[15:36] <bdx> ohhhh niceee
[15:38] <bdx> rick_h: that's separate from CMR though right? or are they linked? like if you use CMR then you have to use the new network-get too?
[15:40] <rick_h> bdx: so if you relate telegraf to prometheus over CMR, prometheus needs to use the public IP vs the 10.x one of the vm
[15:40] <rick_h> bdx: right now the prometheus charm asks for the relation-get private address
[15:40] <rick_h> bdx: vs using network-get -r and that will be CMR aware and provide a public address
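The difference rick_h describes, sketched as hook-tool calls (commented out, since they only run inside a charm hook; flag names follow the network primitives doc, and `target` is a hypothetical endpoint name):

```shell
# Old pattern: returns the unit's model-local (e.g. 10.x) address,
# which is unreachable from a consuming model:
#   relation-get private-address "$JUJU_REMOTE_UNIT"
#
# New pattern: relation-aware; for a cross-model relation the ingress
# address is the unit's public/floating address:
#   network-get target -r "$JUJU_RELATION_ID" --ingress-address
```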
[15:45] <stub> rick_h: I'd say that subordinates, by definition, are in the same container as their primary. And splitting the container over two models seems wrong.
[15:45] <rick_h> stub: +1
[15:47] <bdx> rick_h: that makes sense
[15:48] <bdx> rick_h: what about the scenario where the models are in the same vpc
[15:48] <bdx> ooooh
[15:48] <bdx> I see, thats where the new network-get functionality comes in
[15:49] <bdx> you can now make your charm choose which network interface to get the relation info for, so if you want to set the private interface info, then you can
[15:50] <rick_h> bdx: right, so network-get is all "bindings aware" and provides more full featured network dump
[15:50] <rick_h> bdx: so if I can get this test to pass I'll have a PR for new network_get() charmhelper to use for that
[15:52] <bdx> oh nice, I think I see .... what you are working on is a wrapper in charmhelpers for the new network-get that will allow us to access the new functionality via the python api
[15:52] <bdx> but is that only for "bindings", or does it differentiate between public vs private address too?
[15:53] <bdx> e.g. an aws instance
[15:53] <bdx> deployed to a subnet in which it gets a public ip
[15:54] <bdx> will have a private and a public address, but may not use bindings, and may not be deployed to spaces via constraints
[15:55] <bdx> then deploy another charm to another model (in the same vpc/private address space, just a different model) and relate those two charms via CMR
[15:56] <bdx> what will happen? how do I control this?
[15:56] <rick_h> bdx: https://github.com/pmatulis/juju-docs/blob/00f06dfa4f62020e5598253f0b066af9610df032/src/en/developer-network-primitives.md
[15:56] <rick_h> bdx: making up some lunchables so in and out atm but give that a read
[15:57] <rick_h> bdx: so in the meantime, you have to manually edit the prometheus config with the proper addresses for prometheus to reach telegraf across models...but hopefully that's not true by the EOD today
[15:57] <bdx> nice, so your wrapper will basically give us access to all of the things that I'm concerned about I think
[15:57] <bdx> great
[15:58] <pmatulis> rick_h, https://jujucharms.com/docs/devel/developer-network-primitives :)
[15:58] <rick_h> bdx: right
[15:58] <rick_h> pmatulis: ty :)
[15:58] <rick_h> Had the tab open a while. Heh
[15:58] <pmatulis> ha ha
[16:06] <bdx> so
[16:06] <bdx> "Both ingress address and egress subnets may vary depending on the relation. This is because if the relation is cross model, the ingress address is the public / floating address of the unit to allow ingress from outside the model. And a given relation may see traffic originate from different egress subnets."
[16:07] <rick_h> bdx: exactly
[16:07] <bdx> rick_h: ok, so this better exposes what I'm concerned about
[16:07] <bdx> "if the relation is cross model, the ingress address is the public / floating address of the unit"
[16:08] <rick_h> So what we need to do is test this out and make sure in a vpc it behaves
[16:08] <bdx> rick_h: so a common network setup/use case I use is to have multiple models in the same vpc
[16:08] <rick_h> bdx: and file bugs and feedback during the betas on it
[16:09] <bdx> right
[16:09] <rick_h> If I follow your concern through
[16:09] <bdx> like, I want to monitor things from one model to the next
[16:09] <bdx> I don't want it to default to talking over the WAN
[16:09] <rick_h> Yeah, and not expose anything to the internet you don't have to
[16:09] <bdx> "if the relation is cross model, the ingress address is the public / floating address of the unit"
[16:09] <rick_h> +1 so we've got to help test and build clear rules for how juju can "do the right thing"
[16:10] <rick_h> bdx: that might involve specifying binding of endpoints to put things together clearly
[16:10] <bdx> for me this means all of my monitoring cross talk and database <-> web app cross talk that I want to stay inside the vpc will automatically be forced over the wan if its CMR
[16:11] <bdx> right
[16:12] <bdx> possibly^ is worded incorrectly
[16:12] <rick_h> bdx: so what I'm saying is that might be the default behavior.
[16:13] <bdx> oh
[16:13] <rick_h> bdx: but, if you deploy and bind the endpoints to the internal vpc networks
[16:13] <bdx> got it
[16:13] <rick_h> bdx: then perhaps that overrides the default WAN behavior?
[16:13] <bdx> I see
[16:13] <rick_h> bdx: so that's my "you might have to set things up more clearly"
[16:13] <bdx> ok, I'm tracking
[16:13] <rick_h> bdx: and if that's failing, then we file bugs and ask wallyworld to help us out :)
[16:13] <bdx> got it
[16:14] <rick_h> bdx: so mentally, I'd expect it to use the WAN, so that working > * as a default behavior.
[16:14] <bdx> right
[16:14] <rick_h> bdx: but that manually targeting the network paths through endpoint binding would do what an experienced user wants to be done
[16:14]  * rick_h starts disclaimer'ing that he's not tested that out atm though....sooo....
[17:02] <rick_h> any charmer folks know what error I'm getting here? https://pastebin.canonical.com/200266/
[17:02] <rick_h> kwmonroe: ^
[18:52] <cory_fu> rick_h: What channel of charm-tools are you using?
[18:53] <rick_h> cory_fu: I'm trying to use a custom build to test out the PR https://github.com/juju/charm-helpers/pull/20 in a charm
[18:54] <rick_h> cory_fu: so I did a make source in charmhelpers, updated the wheelhouse charmhelpers tar.gz to try to use my patched version
[18:54] <rick_h> cory_fu: maybe there's an easier path but not sure how this balance works out
[18:54] <cory_fu> rick_h: A custom build of charm-tools to test a charm-helpers change?
[18:55] <rick_h> cory_fu: sorry, for charm-tools I'm using edge channel of charm
[18:59] <rick_h> cory_fu: so I just did a charm pull prometheus and attempted to build it. I ended up doing a --no-local-layers --force to get it working enough to move forward.
[19:00] <cory_fu> rick_h: So, from your pastebin, it's picking up the current directory as the source of the prometheus interface layer for some reason
[19:00] <cory_fu> rick_h: (Note the "(from .)" at the end of the pastebin)
[19:00] <rick_h> cory_fu: yea, I wasn't sure why there. I tried to unset the interfaces path
[19:00] <rick_h> cory_fu: but it continues to think so
[19:01] <cory_fu> It might actually be because you *don't* have INTERFACES_PATH set.  I'm going to test that.
[19:01] <cory_fu> We should probably just skip any local interface layers if the path isn't set
[19:01] <cory_fu> Or have better detection about what local path is an interface path
[19:01] <rick_h> cory_fu: so originally it was set to my interfaces path, but to build this charm I didn't need any so I unset it in an effort to make it work out.
[19:02] <cory_fu> Nope, having it unset does the right thing for me, too
[19:02] <rick_h> cory_fu: k, I'm feeling my way through the best practices on working on these tools/charms and stumbling a bit. I assume I'm holding it wrong so curious what folks say when I hit stuff
[19:02] <cory_fu> What was set to your interfaces path?
[19:03] <rick_h> export INTERFACE_PATH=$JUJU_REPOSITORY/interfaces
[19:03] <rick_h> echo $JUJU_REPOSITORY
[19:03] <rick_h> /home/rharding/src/charms
[19:03] <cory_fu> Yeah, that should be fine.  You don't have the prometheus charm checked out in that interfaces sub directory, do you?
[19:03] <rick_h> and the charm is in the charms directory ".../charms/prometheus"
[19:05] <cory_fu> rick_h: Ah!  That let me reproduce it.  I have my layers in a layers subdir (e.g., ~/charms/layers/prometheus).  Putting it directly into JUJU_REPOSITORY causes it to break
[19:05] <rick_h> cory_fu: no, the only interface in the interfaces path is grafana-source. No other directories in there
[19:05] <rick_h> cory_fu: oic, so yea holding it differently than everyone else :)
[19:05]  * rick_h has to run to get the boy from school, biab
[19:05] <cory_fu> rick_h: I'll open a bug for this
[19:06] <cory_fu> It should definitely be doing the right thing here and it's not
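For reference, a sketch of the layout that avoided the bug for cory_fu (directory names are from this session; the env-var spellings may differ between charm-tools versions, so treat them as assumptions):

```shell
# $JUJU_REPOSITORY/
#   layers/prometheus/            # layers in a `layers` subdir, not top level
#   interfaces/grafana-source/
#
# export JUJU_REPOSITORY=$HOME/src/charms
# export LAYER_PATH=$JUJU_REPOSITORY/layers
# export INTERFACE_PATH=$JUJU_REPOSITORY/interfaces
```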
[19:07] <zeestrat> cory_fu: are you the right person to bother for some questions regarding charm tools?
[19:15] <zeestrat> I am having a bit of a hard time figuring out the intended/preferred way to use charm tools when developing some charms in regards to which distribution/version to use. Asked on the mailing list (https://lists.ubuntu.com/archives/juju/2017-October/009553.html) but not much luck.
[19:37] <cory_fu> zeestrat: marcoceppi would probably be better to answer that question, because I'm not really clear on the versioning there, either.  I suspect that the versioning has just fallen out of maintenance since we moved to snaps as the preferred deployment and snap revisions already handle that need to some degree, but it definitely needs to be cleaned up.
[20:00] <rick_h> cory_fu: kwmonroe is there a way to get the relation_ids just from juju run xxxx ?
[20:20] <cory_fu> rick_h: juju run --unit <unit/0> -- relation-ids <endpoint-name>
[20:21] <rick_h> cory_fu: gotcha ty
[20:21] <rick_h> cory_fu: I figured they looked numeric and started with 0 and then 1 and found it :)
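cory_fu's pattern generalizes to inspecting relation data from outside a hook (commented out, since it needs a live model; `prometheus/0` and the `target` endpoint are hypothetical):

```shell
# List relation IDs for an endpoint:
#   juju run --unit prometheus/0 -- relation-ids target
# List remote units on one of those relations, then read a unit's settings:
#   juju run --unit prometheus/0 -- relation-list -r target:0
#   juju run --unit prometheus/0 -- relation-get -r target:0 - telegraf/0
```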
[20:24] <zeestrat> cory_fu: Thanks.  I'll try to ping him then. I'm already using the snaps in my dev environment which works great, but when I try to run some tests with bundletester as recommended in https://jujucharms.com/docs/stable/developer-testing, bundletester pulls an old charm-tools from PyPI which is a bit frustrating. How are y'all testing these charms with bundletester?
[21:31] <rick_h> cory_fu: for getting a review and feedback on https://github.com/juju/charm-helpers/pull/20/files is there anything I should do?
[21:31] <rick_h> cory_fu: that's going to hold up changes to the prometheus charm to leverage the updated networking information.
[21:47] <cory_fu> zeestrat: Sorry for the delayed response.  marcoceppi will update PyPI to fix bundletester and we'll look in to getting things updated to be more consistent.
[21:47] <cory_fu> rick_h: Merged
[21:49] <rick_h> cory_fu: ty