=== anthonyf is now known as Guest37830
[07:45] Hi, my charm is on revision 4 here (search for therealmarv) https://code.launchpad.net/charms and here https://code.launchpad.net/~therealmarv/charms/trusty/pybossa/trunk but not on https://jujucharms.com/u/therealmarv/pybossa/trusty/1 I’m doing a major update before pushing it again to the review queue. Can someone explain to me why the revision number does not update on jujucharms.com (it is still on 2)?
[08:43] Is there a problem with ingestion?
=== scuttlemonkey is now known as scuttle|afk
[12:44] * Mmike grabs food
=== anthonyf is now known as Guest35437
=== scuttle|afk is now known as scuttlemonkey
=== lazyPower|eow is now known as lazyPower
[14:08] hi, is anyone having trouble with local deploy (KVM/LXC) and dhcp changes when using a machine on different dhcp servers? (home/work)
=== kadams54 is now known as kadams54-away
[14:09] juju set apiaddresses on all agents to the wlan ip
=== redelmann is now known as redelmann|afk
[14:19] redelmann|afk: i'm not sure what you're asking - what's happening?
=== redelmann|afk is now known as redelmann
[14:24] lazyPower, after adding a machine with "juju add-machine"
[14:25] lazyPower, for some reason, inside the added machine
[14:26] lazyPower, in "/var/lib/agents/machine-n/agent.conf"
[14:27] lazyPower, apiaddresses: - my.wlan.ip:17070
[14:27] lazyPower, so agents are using my wlan0 ip, which always changes if i connect to another network
[14:28] lazyPower, after a reboot, if the wlan0 ip changes, all agents stop working.
[14:33] redelmann: did you alter the default lxc networking config?
[14:34] redelmann, no, everything is at defaults. fresh installation
[14:34] which version of Juju?
[14:35] lazyPower, i'm using container: kvm in environment.yaml
[14:35] 1.23
[14:35] dimitern: ping ^
[14:36] lazyPower, i could see STORAGE_ADDR: 192.168.122.1:8040, which is my virbr0 ip
[14:36] redelmann: i'm not sure why that's happening, but i've pinged a dev who's been working on networking support in juju, which may or may not be related
[14:37] and he might be able to help get you squared away. What I suggest is to file a bug w/ this behavior and ping the list with it to get some eyes on it
[14:37] ack
[14:38] lazyPower, pong
[14:41] redelmann, lazyPower, ack, will look a bit later
[14:43] thanks dimitern
=== kadams54-away is now known as kadams54
=== redelmann is now known as redelmann|lunch
=== redelmann__ is now known as redelmann
[17:45] in the about-juju docs [1], what is meant by "reuse whatever they want from other teams"?
[17:45] [1]: https://jujucharms.com/docs/stable/about-juju
=== kadams54 is now known as kadams54-away
[17:50] pmatulis: so let's say you're part of two teams and the application stack is mostly the same (apache, django, postgresql) but my team uses redis and your team uses memcache.
[17:51] pmatulis: we can work together, using charm interfaces to reuse the rest of the stack, yet the two teams can easily disagree on the caching part of the stack
[17:51] e.g. if I find better apache config settings and update the charm, you can reuse that and gain the same benefits
[17:52] rick_h_: gotcha, so share some charms but not necessarily all
[17:52] pmatulis: exactly, and you can even have two different charms that implement the same interface and swap them out
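
(A minimal sketch of the charm-swap idea described above, using Juju 1.x CLI syntax; the charm and service names here, django, memcached, redis, myapp, mycache, are illustrative placeholders rather than names taken from the log.)

    # Team A's stack: an app charm plus a memcached-backed cache service.
    juju deploy django myapp
    juju deploy memcached mycache
    juju add-relation myapp mycache

    # Team B reuses the same app charm but swaps the cache implementation;
    # this only works if both cache charms provide the same interface.
    #   juju deploy redis mycache
    #   juju add-relation myapp mycache
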
[17:53] pmatulis: so let's say I want haproxy up front but you want to proxy through nginx. As long as we use the same proxy relation, we can take the same app but proxy it differently just by connecting it to different services
[17:53] rick_h_: makes sense, thank you
[17:54] pmatulis: np
=== redelmann__ is now known as redelmann
[19:02] hey
[19:35] hazmat: ping
[19:35] lazyPower: pong
[19:35] ahoy, I just noticed a comment on the DO api v1 sunset issue over on our juju provider
[19:36] are you still working through that? if not i'll advise to use manual for the time being, v2 contributions appreciated.
[19:36] lazyPower: if we allow for growing the dependencies, it's easy enough to knock out
[19:37] i don't think there's any reason to be stingy with the dependency tree at this time
[19:37] considering projects like this exist: https://github.com/koalalorenzo/python-digitalocean
[19:38] we could pretty much bind on that API lib, and do the rewrite in a weekend (that's engineering optimism at its finest, right?)
[19:41] hazmat: replied. Thanks mate
[19:41] lazyPower: i'd guess a few hrs ;-)
[19:42] Yeah, but you're a wizard harry
[19:43] lazyPower: i like this lib the most re the DO v2 api, but it's got some unfortunate ssh helpers that widen the dependency tree to include paramiko https://github.com/changhiskhan/poseidon
[19:44] ah yeah
[19:45] well, let's cross our fingers that someone in the community steps up to do the v2 migration in the next couple of weeks. It would be nice to see the consumers give back :)
=== redelmann__ is now known as redelmann
=== alexisb is now known as alexisb_lunch
=== kadams54 is now known as kadams54-away
=== natefinch is now known as natefinch-afk
=== alexisb_lunch is now known as alexisb
[22:33] lazyPower: do you know how maas zones are supposed to work?
[22:34] i'm wondering what gets relayed back to my charm and what i'm supposed to do with it
[22:44] can amulet use an apt-proxy?
[22:45] cholcombe: sorry, i was in my audio studio and didn't get your pings until just now
[22:45] no worries
[22:45] cholcombe: AIUI, MAAS zones are the same as regions; your charm won't see it, as that's abstracted away by the "provider" layer of juju
[22:46] oh..
[22:46] what if i want to see it? haha
[22:46] my charm needs to lay out bricks across fault zones
[22:46] also, re: amulet using an apt-proxy - it will default to route through the config of whichever machine you're using. If your charms demand an apt mirror it would be good to set up a squid-apt-proxy in your test env so they can zero-conf configure and utilize it if it's available.
[22:47] that would be something you leverage in the bundle
[22:47] that's a layer above the charm's concerns
[22:47] hmm
[22:47] interesting, alright
[22:47] bcsaller: am I correct in this statement? i'm starting to second-guess myself ^
[22:47] i'm wondering how to do this now
[22:49] cholcombe: i'm 90% certain that's the case
[22:49] you use constraints to set the zone
[22:50] so with constraints in place, is the idea that juju will hand me units in the right order across zones?
[22:50] i'm missing something
[22:51] the case i'm thinking of is i have 3 racks of machines. each rack is defined as a failure zone. how does my charm get a list of units from those zones?
[22:54] i can't be the first one to have thought of this lol
[23:01] lazyPower: iirc you used to be able to juju set-env http(s)-proxy xxx or set-env apt-http(s)-proxy and apply those changes across the whole env. Is that what you mean?
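
(A hedged sketch of the environment-wide proxy settings mentioned just above, using the Juju 1.x "juju set-env" command; the proxy URL is a placeholder.)

    # Point the environment, and apt on its machines, at a local caching proxy.
    juju set-env http-proxy=http://squid.example.com:3128
    juju set-env https-proxy=http://squid.example.com:3128
    juju set-env apt-http-proxy=http://squid.example.com:3128
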
[23:01] bcsaller: ah no, i was referring to zones
[23:01] but that's good to know about the proxies too, it's been a good while since i've tinkered with the proxy settings.
[23:02] cholcombe: you define 3 service groups that deploy to 3 separate zones as constraints
[23:02] eg: "gluster-group1" --constraints="zone=rack1", "gluster-group2" --constraints="zone=rack2"
[23:02] so effectively my charm has no idea what is going on
[23:02] so forth and so on
[23:02] all i need to do is lay out across servers and i'm good
[23:03] alright, that's good enough i think
[23:03] basically, so long as they can communicate with one another, that should work as intended
[23:03] right
[23:03] you get a full service group, comprised of 3 clusters in different zones.
[23:03] and i just need to make sure i lay out bricks across whatever i see
[23:03] and you don't even need to name them differently afaict, you can just juju deploy and/or add-unit with the constraint in place and it should "just work"
[23:04] but ymmv with "just work"
[23:04] right
[23:04] mostly because you're listening to me :P
[23:04] so slightly annoying from an admin perspective but easy from my charm's perspective
[23:04] dangerous situation to be in
[23:04] heh
[23:04] well, if you think about it like this
[23:04] the admin has to set up these service pools/zones in the first place
[23:04] yup
[23:04] how would the charm know what to do that is proper?
[23:04] definitely
[23:05] it wouldn't
[23:05] it doesn't have enough info to act
[23:05] i mean, just because you have 8 zones available to you, it doesn't make sense to propagate across all 8 zones
[23:05] right
[23:05] cholcombe: what kind of music do you listen to?
[23:06] electronic while i'm working. heavy metal when i'm working out :)
[23:06] oh really?
[23:06] Snap
[23:06] why?
[23:06] I'll have something for you on dropbox in a couple minutes then
[23:06] haha
[23:06] Not going to publish this until tomorrow
[23:06] ok
[23:06] but i digress, would be awesome to get feedback
[23:06] sure thing
[23:07] so these constraints. will i have separate clusters for each zone?
[23:07] cause that could get messy
[23:07] that awkward moment when you realize you're in the public channel
[23:07] * lazyPower facepalms
=== JoshStrobl is now known as JoshStrobl|AFK
[23:07] :D
[23:07] Nah, you can use the same service name
[23:07] just ensure, when you deploy and set the constraints, that you use different zone names for the unit(s)
[23:08] i'm going to have to test this lol.
[23:08] might be a good idea to model it on the CLI and export from the gui so you get a sense of what it should look like as the final bundle form
[23:08] right
[23:08] but yeah, juju add-unit gluster --constraints="zone=foobar" should do the trick when you go to add unit(s)
[23:08] i know what it should look like from gluster's perspective. i'm just trying to figure out what my charm ends up seeing
[23:09] lemme read the constraint page again
=== anthonyf is now known as Guest93759
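
(A minimal sketch of the zone-constraint pattern as lazyPower describes it above, pulled together from the commands quoted in the conversation; the gluster service name and the rack1/rack2/rack3 zone names are illustrative, and whether a "zone" constraint is honored depends on the provider, as the "ymmv" caveat in the log notes.)

    # One service spread across failure zones by deploying and adding units
    # with a zone constraint per rack.
    juju deploy gluster --constraints="zone=rack1"
    juju add-unit gluster --constraints="zone=rack2"
    juju add-unit gluster --constraints="zone=rack3"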