[01:26] hallyn: hmm. I think you may need to set it in your environment too, to affect the client process. i.e. env http_proxy=... juju ...
[01:37] hallyn: BTW I'm waiting on a license to test against ESXi 5.1. the free version is too crippled to work with Juju
[01:38] hallyn: from the looks of things so far, though, changing to hardware version 8 will be fine
[02:45] axw: hm. seems like no matter what i try, i get http://paste.ubuntu.com/25711441/ . trying with http_proxy in env too now, though it does also need to respect no_proxy
[02:46] no_proxy should be honoured
[04:12] nope, i just keep getting the same thing...
[04:12] "vm '*' not found" seems like it would stem from some confusion
[04:23] axw: http://paste.ubuntu.com/25711792/
[04:26] hallyn: yep, I don't know what that's about. can you please try 2.3-beta1? there have been significant changes to the vsphere provider since 2.0.2
[04:27] axw: what the...
[04:27] i installed from snap intending to have 2.3-beta1
[04:32] hallyn: $PATH order I guess?
[04:32] oh, or maybe you just need to specify --edge
[04:33] i did sudo snap install juju --beta --classic
[04:34] ah yes /snap/bin/juju is 2.3-beta1
[04:34] beta, that's the one
[04:34] looking better
[04:34] things are being done
[04:35] juju-vmdks are being uploaded
[04:35] hallyn: is this with ESXi 5.1?
[04:35] oh, no. this is with a DC that only has my lone 6.0 :(
[04:36] i'll re-try with a 5.1 added in later, if this works
[04:36] curious, is it with the free ESXi version?
[04:36] hallyn: ok cool. you'll need to modify the source (OVF) though for that
[04:37] chamar: i don't think so. there are licenses at any rate. (i inherited the lab...)
[04:37] chamar: I tried with vSphere Hypervisor 5.1 earlier, doesn't work. Juju wants to create folders and clone VMs, which apparently don't work with the free version
[04:38] Thanks. Got the same result with the free hypervisor, sadly.
[04:39] there are features that are not available / enabled. Same goes with MAAS... oh well.
[04:40] hmm. removing a kubernetes-worker unit works well, except it still appears in the k8s dashboard... hmm
[04:48] axw: so is there any point in trying with a 5.1?
[04:48] sounds like no
[04:48] hallyn: not without source changes, no
[04:52] hallyn: did bootstrap complete on 6.0?
[04:59] still running
[05:00] axw: not without source changes to make it not try to upload the files, or do i really only need to change the min machine type? (not sure based on what you were telling chamar)
[05:01] hallyn: just changing vmx-10 to vmx-8. the other stuff was to do with the free version
[05:01] axw: ok, i'll hopefully try that this week then. one holdup has been trying to find the actual source package to pull-lp-source :)
[05:02] or am i gonna have to learn how to do the snap thing
[05:03] hallyn: git clone https://github.com/juju/juju/, then: make JUJU_MAKE_GODEPS=true install
[05:03] requires go 1.8+
[05:04] develop branch will become 2.3-beta2
[05:05] cool thanks
[05:07] * hallyn sets up a build env
[05:09] say, can no_proxy be a subnet?
[05:14] hallyn: from 2.3-beta1 onwards, yes: https://github.com/juju/juju/pull/7885
[05:15] I suspect there are some gotchas when it comes to external processes though, when juju shells out to wget/curl/etc., because it's non-standard
[05:20] axw: nifty
[05:22] ok, juju build going. can i just scp the built ~/go/bin/juju over, or do i need more?
[05:35] well the other juju bootstrap is still going. new juju is built - i assume i'll have to rebootstrap?
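
(For reference, a rough sketch of the build-from-source steps axw outlines above. The commands are the ones from the chat; the GOPATH layout and the explicit branch checkout are assumptions, and Go 1.8+ is required.)

    # sketch: clone into a conventional GOPATH layout (assumed to be ~/go here)
    git clone https://github.com/juju/juju ~/go/src/github.com/juju/juju
    cd ~/go/src/github.com/juju/juju
    git checkout develop                  # per axw, develop will become 2.3-beta2
    make JUJU_MAKE_GODEPS=true install    # fetches deps, installs juju and jujud
    ~/go/bin/juju version                 # confirm which build you ended up with
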
[05:35] will deal with it in the morning
[05:35] thanks axw!
[05:35] \o
[05:36] hallyn: no worries, let me know if I can help any more. you'll need to scp the juju and jujud binaries, and yes you'll need to rebootstrap
[05:37] hallyn: (I assume you mean scp to wherever you're bootstrapping from - you can't just copy over the top of the binaries in a bootstrapped environment)
[05:38] also, seems like a long time for bootstrap - might be borked. if you can ssh to the VM, /var/log/cloud-init-output.log should tell you what's happening
[05:38] assuming it got that far
[10:01] is there any way for an lxd unit to know what the hostname of its parent host is? Would be handy for exporting nagios checks.
[14:26] axw: /var/log/cloud-init-output.log on which host?
[15:17] on CMR, what things do we want to relate across models? should we only be concerned with logical groupings?
[15:18] for example
[15:18] I have a web application deployed to web-app-model, and a monitoring stack deployed to monitoring-model
[15:19] rick_h: for example, let's say I have the prometheus monitoring stack described in your blog deployed to the monitoring bundle
[15:19] so I have this telegraf subordinate component
[15:20] bdx: so mentally (and you can see it based on the status output) we think folks will basically have SaaS-like setups
[15:20] part of me wants to deploy telegraf to my "web-app-model", and make the CMR to prometheus in the "monitoring-model"
[15:20] so a model will be the bits needed to offer up a SaaS endpoint (or DBaaS) and such
[15:20] bdx: exactly
[15:21] bdx: so you want the subordinate on each thing (many) but only one prometheus gathering the data
[15:21] the other part of me wants to deploy telegraf to the "monitoring-model", and make the CMR from the web server in web-app-model to telegraf in the "monitoring-model"
[15:21] rick_h: missing the point
[15:21] rick_h: see what I'm saying
[15:22] bdx: k, sec sorry otp with folks on your models issue and trying to do two things at once
[15:22] ok, no rush
[15:22] lol thx
[15:26] bdx: there might be a temp way to improve your models call until we can get some updates into juju-core and new releases/etc. So was just seeing how we can make that happen.
[15:26] bdx: but ok, phone over. /me rereads
[15:27] bdx: so, I think that you'd put telegraf in the webapp model
[15:27] bdx: you want that model to say that things are in fact being wired up to be monitored. telegraf is installed on each of those machines. It's a subordinate and does not affect the number of VMs and such in the web-app-models
[15:28] totally
[15:28] bdx: and it'll be a LOT easier to see which future models have telegraf set up vs not
[15:28] bdx: that's how my brain thinks anyway.
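
(A minimal sketch of the layout rick_h describes here: telegraf as a subordinate living in the web-app model. "my-web-app" is a hypothetical application name; telegraf:juju-info is the endpoint mentioned just below.)

    juju switch web-app-model
    juju deploy telegraf
    # subordinate: rides along inside each my-web-app unit, adds no new machines
    juju add-relation telegraf:juju-info my-web-app
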
[15:28] right
[15:29] rick_h: so, the way I was thinking about it was, if a user needs something monitored, I just grant access to the telegraf:juju-info endpoints
[15:29] to that user
[15:29] telegraf is already related to prometheus in the monitoring-model
[15:29] bdx: thinking...for some reason I really don't like subordinate relations over CMR...but I'm not 100% sure why
[15:30] so then I could essentially gate which users could monitor things by granting access to the telegraf:juju-info endpoint
[15:30] yeah, feel you on that
[15:30] rick_h: possibly because we want the charm to live on the controller for whichever model it's being deployed into
[15:30] bdx: so...but at that point you're locked into telegraf
[15:30] bdx: vs just "send stuff to prometheus"
[15:30] ahh, I see
[15:31] yeah
[15:31] bdx: so if you used anything else, you'd need those set up as well
[15:31] totally
[15:31] bdx: I think it's because prometheus is basically a database. I'd want to control access to the DB, not which apps are already wired to the DB
[15:32] entirely
[15:32] that makes sense
[15:32] bdx: so I think what you're asking "would work" but it just feels really off in my head
[15:32] yeah, it does now for me too
[15:32] rick_h: thanks for being the voice of reason here
[15:32] * rick_h writes that down on the calendar "voice of reason!" :P
[15:33] I'm hooking up my first live CMR deploy with monitoring decoupled from the web app stuff, and the db decoupled from the web app stuff too, pretty exciting this is finally happening
[15:34] I'm expecting it all to work, 1st try, using beta2
[15:34] jp
[15:34] :)
[15:34] high hopes
[15:34] ^^
[15:36] bdx: know that the prometheus charm needs an update to use the new networking stuff. I'm working on tests against charm-helpers to add the tooling for it
[15:36] ohhhh niceee
[15:38] rick_h: that's separate from CMR though right? or are they linked? like if you use CMR then you have to use the new network-get too?
[15:40] bdx: so if you relate telegraf to prometheus over CMR, prometheus needs to use the public IP vs the 10.x one of the vm
[15:40] bdx: right now the prometheus charm asks for the relation-get private address
[15:40] bdx: vs using network-get -r and that will be CMR aware and provide a public address
[15:45] rick_h: I'd say that subordinates, by definition, are in the same container as their primary. And splitting the container over two models seems wrong.
[15:45] stub: +1
[15:47] rick_h: that makes sense
[15:48] rick_h: what about the scenario where the models are in the same vpc
[15:48] ooooh
[15:48] I see, that's where the new network-get functionality comes in
[15:49] you can now make your charm choose which network interface to get the relation info for, so if you want to set the private interface info, then you can
[15:50] bdx: right, so network-get is all "bindings aware" and provides a more full-featured network dump
[15:50] bdx: so if I can get this test to pass I'll have a PR for a new network_get() charmhelper to use for that
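
(Stepping back to the cross-model wiring itself, a rough sketch of the offer/consume flow for prometheus in the monitoring model. The offer, endpoint, and user names are assumptions, and the exact CLI syntax may vary between the 2.3 betas.)

    juju switch monitoring-model
    juju offer prometheus:target prometheus-target         # endpoint/offer names assumed
    juju switch web-app-model
    juju consume admin/monitoring-model.prometheus-target  # offer URL form assumed
    juju add-relation telegraf:prometheus-client prometheus-target
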
[15:52] oh nice, I think I see .... what you are working on is a wrapper in charmhelpers for the new network-get that will allow us to access the new functionality via the python api
[15:52] but is that only for "bindings", or does it differentiate between public vs private address too?
[15:52] e.g. an aws instance
[15:53] deployed to a subnet in which it gets a public ip
[15:54] will have a private and a public address, but may not use bindings, and may not be deployed to spaces via constraint
[15:55] then deploy another charm to another model (in the same vpc/private address space, just a different model) and relate those two charms via CMR
[15:56] what will happen? how do I control this?
[15:56] bdx: https://github.com/pmatulis/juju-docs/blob/00f06dfa4f62020e5598253f0b066af9610df032/src/en/developer-network-primitives.md
[15:56] bdx: making up some lunchables so in and out atm but give that a read
[15:57] bdx: so in the meantime, you have to manually edit the prometheus config with the proper addresses for prometheus to reach telegraf across models...but hopefully that's not true by the EOD today
[15:57] nice, so your wrapper will basically give us access to all of the things that I'm concerned about I think
[15:57] great
[15:58] rick_h, https://jujucharms.com/docs/devel/developer-network-primitives :)
[15:58] bdx: right
[15:58] pmatulis: ty :)
[15:58] Had the tab open a while. Heh
[15:58] ha ha
[16:06] so
[16:06] "Both ingress address and egress subnets may vary depending on the relation. This is because if the relation is cross model, the ingress address is the public / floating address of the unit to allow ingress from outside the model. And a given relation may see traffic originate from different egress subnets."
[16:07] bdx: exactly
[16:07] rick_h: ok, so this better exposes what I'm concerned about
[16:07] "if the relation is cross model, the ingress address is the public / floating address of the unit"
[16:08] So what we need to do is test this out and make sure it behaves in a vpc
[16:08] rick_h: so a common network setup/use case I use is to have multiple models in the same vpc
[16:08] bdx: and file bugs and feedback during the betas on it
[16:09] right
[16:09] If I follow your concern through
[16:09] like, I want to monitor things from one model to the next
[16:09] I don't want default talk over the WAN
[16:09] Yea, and not expose anything to the internet you don't have to
[16:09] "if the relation is cross model, the ingress address is the public / floating address of the unit"
[16:09] +1 so we've got to help test and build clear rules for how juju can "do the right thing"
[16:10] bdx: that might involve specifying binding of endpoints to put things together clearly
[16:10] for me this means all of my monitoring cross talk and database <-> web app cross talk that I want to stay inside the vpc will automatically be forced over the wan if it's CMR
[16:11] right
[16:12] possibly ^ is worded incorrectly
[16:12] bdx: so what I'm saying is that might be the default behavior.
[16:13] oh
[16:13] bdx: but, if you deploy and bind the endpoints to the internal vpc networks
[16:13] got it
[16:13] bdx: then perhaps that overrides the default WAN behavior?
[16:13] I see
[16:13] bdx: so that's my "you might have to set things up more clearly"
[16:13] ok, I'm tracking
[16:13] bdx: and if that's failing, then we file bugs and ask wallyworld to help us out :)
[16:13] got it
[16:14] bdx: so mentally, I'd expect it to WAN so that working > * as a default behavior.
[16:14] right
[16:14] bdx: but manually targeting the network paths through endpoint binding would do what an experienced user wants to be done
[16:14] * rick_h starts disclaimer'ing that he's not tested that out atm though....sooo....
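
(A sketch of what a relation hook could run to inspect the addresses described in the network primitives doc linked above. The binding name is hypothetical, and the flags and output follow that doc rather than anything verified here.)

    # inside a relation hook, scoped to the current relation
    network-get target -r "$JUJU_RELATION_ID" --format yaml
    # per the quote above: for a cross-model relation the ingress address should be
    # the unit's public/floating address; same-model relations stay on local subnets
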
[17:02] any charmer folks know what error I'm getting here? https://pastebin.canonical.com/200266/
[17:02] kwmonroe: ^
[18:52] rick_h: What channel of charm-tools are you using?
[18:53] cory_fu: I'm trying to use a custom build to test out the PR https://github.com/juju/charm-helpers/pull/20 in a charm
[18:54] cory_fu: so I did a make source in charmhelpers, updated the wheelhouse charmhelpers tar.gz to try to use my patched version
[18:54] cory_fu: maybe there's an easier path but not sure how this balance works out
[18:54] rick_h: A custom build of charm-tools to test a charm-helpers change?
[18:55] cory_fu: sorry, for charm-tools I'm using the edge channel of charm
[18:59] cory_fu: so I just did a charm pull prometheus and attempted to build it. I ended up doing a --no-local-layers --force to get it working enough to move forward.
[19:00] rick_h: So, from your pastebin, it's picking up the current directory as the source of the prometheus interface layer for some reason
[19:00] rick_h: (Note the "(from .)" at the end of the pastebin)
[19:00] cory_fu: yea, I wasn't sure why that was there. I tried to unset the interfaces path
[19:00] cory_fu: but it continues to think so
[19:01] It might actually be because you *don't* have INTERFACES_PATH set. I'm going to test that.
[19:01] We should probably just skip any local interface layers if the path isn't set
[19:01] Or have better detection about what local path is an interface path
[19:01] cory_fu: so originally it was set to my interfaces path, but to build this charm I didn't need any so I unset it in an effort to make it work out.
[19:02] Nope, having it unset does the right thing for me, too
[19:02] cory_fu: k, I'm feeling my way through the best practices on working on these tools/charms and stumbling a bit. I assume I'm holding it wrong so curious what folks say when I hit stuff
[19:02] What was set to your interfaces path?
[19:03] export INTERFACE_PATH=$JUJU_REPOSITORY/interfaces
[19:03] echo $JUJU_REPOSITORY
[19:03] /home/rharding/src/charms
[19:03] Yeah, that should be fine. You don't have the prometheus charm checked out in that interfaces subdirectory, do you?
[19:03] and the charm is in the charms directory ".../charms/prometheus"
[19:05] rick_h: Ah! That let me reproduce it. I have my layers in a layers subdir (e.g., ~/charms/layers/prometheus). Putting it directly into JUJU_REPOSITORY causes it to break
[19:05] cory_fu: no, the only interface in the interfaces path is grafana-source. No other directories in there
[19:05] cory_fu: oic, so yea holding it different than everyone else :)
[19:05] * rick_h has to run to get the boy from school, biab
[19:05] rick_h: I'll open a bug for this
[19:06] It should definitely be doing the right thing here and it's not
[19:07] cory_fu: are you the right person to bother for some questions regarding charm tools?
[19:15] I am having a bit of a hard time figuring out the intended/preferred way to use charm tools when developing some charms in regards to which distribution/version to use. Asked on the mailing list (https://lists.ubuntu.com/archives/juju/2017-October/009553.html) but not much luck.
[19:37] zeestrat: marcoceppi would probably be better to answer that question, because I'm not really clear on the versioning there, either. I suspect that the versioning has just fallen out of maintenance since we moved to snaps as the preferred deployment and snap revisions already handle that need to some degree, but it definitely needs to be cleaned up.
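
(Back on the "(from .)" build error: a sketch of the layout and flags that came out of that exchange. Paths are examples; the key point cory_fu reproduced is keeping layer sources in a layers/ subdir rather than directly in JUJU_REPOSITORY.)

    export JUJU_REPOSITORY=$HOME/src/charms
    export LAYER_PATH=$JUJU_REPOSITORY/layers          # layer sources live here
    export INTERFACE_PATH=$JUJU_REPOSITORY/interfaces  # interface layers live here
    cd $LAYER_PATH/prometheus
    charm build --no-local-layers --force              # the workaround rick_h used
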
[20:00] cory_fu: kwmonroe is there a way to get the relation_ids just from juju run xxxx ?
[20:20] rick_h: juju run --unit -- relation-ids
[20:21] cory_fu: gotcha ty
[20:21] cory_fu: I figured they looked numeric and started with 0 and then 1 and found it :)
[20:24] cory_fu: Thanks. I'll try to ping him then. I'm already using the snaps in my dev environment which works great, but when I try to run some tests with bundletester as recommended in https://jujucharms.com/docs/stable/developer-testing, bundletester pulls an old charm-tools from PyPI which is a bit frustrating. How are y'all testing these charms with bundletester?
[21:31] cory_fu: for getting a review and feedback on https://github.com/juju/charm-helpers/pull/20/files is there anything I should do?
[21:31] cory_fu: that's going to hold up changes to the prometheus charm to leverage the updated networking information.
[21:47] zeestrat: Sorry for the delayed response. marcoceppi will update PyPI to fix bundletester and we'll look into getting things updated to be more consistent.
[21:47] rick_h: Merged
[21:49] cory_fu: ty
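
(For the record, the juju run trick above spelled out; the unit and endpoint names are hypothetical.)

    juju run --unit prometheus/0 -- relation-ids target        # e.g. target:0, target:1
    juju run --unit prometheus/0 -- relation-list -r target:0  # remote units on that relation
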