=== thumper-afk is now known as thumper
=== CyberJacob is now known as zz_CyberJacob
=== urulama|__ is now known as urulama
=== frankban|afk is now known as frankban
=== stub` is now known as stub
[09:53] can i tell a reactive charm to not execute a block until all the units are in the same state?
[09:54] kjackal: you'll know the answer to that
[09:55] Hi magicaltrout
[09:55] let me think...
[09:56] magicaltrout: ok so, here is how you can do that. You can have a relation of peer type among the service units
[09:57] you should then make sure the units set the correct state within the interface implementing the peer relation
[09:59] ah right yeah, so just plonk in a "we're all ready for the next step" state in my interface
[09:59] for example, when you enter the correct state you do something like this inside the interface: "remove_state('wrong.state') set_state('correct.state')"
[10:00] then inside the reactive part you should have something like @when_not('peers.in.wrong.state') def do_whatever: ....
[10:01] Let me think....
[10:01] So there might be another way to do that
[10:01] You could use the leader layer
[10:02] the leader should be gathering info on the state of the units and the units should ask the leader if all of them are in the right state
[10:03] I am sure this solution with the leader is doable, but I do not have it in my head right now, I will need to do some research on this
[10:03] magicaltrout: ^
[10:03] magicaltrout: any comment on the referendum?
[10:03] yeah i'm just looking at stub 's github page thanks kjackal
[10:04] yeah, fucking shit
[10:04] anyway
[10:04] not great, the uk has had better days
[10:06] in the bigdata world the need of coordination among units is a bad sign, and is usually outsourced to services like zookeeper. I am sure you already know that
[10:06] DC/OS has an immutable configuration for whatever reason
[10:06] so they expect all masters to be up and running before installing the config on them
[10:07] i'm sure there is a way to do it on the fly, but they don't document it. So, spin up the masters, wait for all the ip's to be addressable and then run the config setup
[10:17] magicaltrout: The problem you might trip over is that you don't know when all the units have actually joined the peer relation. You can detect that all units that have joined so far are in a particular state, but you don't know how many more nodes are yet to be provisioned and join.
[10:17] yeah i know stub I was just mulling that over
[10:19] magicaltrout: The approach I use is have the leader set itself up as a stand alone unit, and as other units join have them grab a lock (using the coordinator layer) and have them join one at a time.
[10:20] yes stub is right. Question for you stub, how do you know if you will have 4 or 5 masters for example?
[10:20] Assuming your service can dynamically scale out like that
[10:20] does the leader know upfront the exact number of masters that should join?
[10:21] the 2nd option i have is just to figure out how to make the DC/OS masters scaleable but they lock down the setup reconfig so I need to reverse engineer their installer :)
[10:21] the other idea I had for simplicity was just to dump it all in an action
[10:21] that would at least get it going quicker ;)
[10:21] kjackal: Unless it is specified by the user in configuration ('I want 3 masters' or 'there will be at least 10 nodes'), then the leader has no way of knowing how many units are expected to join.
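A minimal sketch of the pattern described above, assuming a hypothetical peer interface and an 'expected-masters' config option (all names here are illustrative, not from an existing charm): each unit advertises readiness over the peer relation, and the reactive layer only proceeds once enough peers have reported in.

    # peers.py of a hypothetical interface layer for the peer relation
    from charms.reactive import RelationBase, scopes, hook

    class MastersPeer(RelationBase):
        scope = scopes.UNIT  # one conversation per remote peer unit

        @hook('{peers:masters-peer}-relation-{joined,changed}')
        def changed(self):
            self.set_state('{relation_name}.joined')

        def announce_ready(self):
            # publish this unit's readiness to every peer
            for conv in self.conversations():
                conv.set_remote('ready', 'true')

        def ready_peers(self):
            # peer units that have announced readiness so far
            return [conv.scope for conv in self.conversations()
                    if conv.get_remote('ready') == 'true']

    # reactive/dcos_master.py in the charm layer
    from charmhelpers.core import hookenv
    from charms.reactive import when, set_state

    @when('masters.joined')
    def check_cluster(peers):
        peers.announce_ready()
        expected = hookenv.config('expected-masters')  # assumed config option
        # +1 because this unit does not appear in its own peer conversations
        if expected and len(peers.ready_peers()) + 1 >= expected:
            set_state('cluster.ready')

As stub notes, without something like 'expected-masters' the charm can only reason about the units that have joined so far.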
[10:22] kjackal: So the leader will need to make a decision, starting with 1 master since it is standalone, and revise this decision as more units join.
[10:23] stub: understood, yes the leader needs to know or guess the number of masters
[10:33] on the bright side... my new 4k monitor just turned up
[10:33] might need to buy a 2nd one now
[11:04] magicaltrout: I think next step should be to get 3 more 4k monitors so that you would make an 8k!
[11:04] kjackal: I have a thunderbolt 3 port
[11:05] which drives dual hdmi
[11:05] plus a hdmi port on the laptop
[11:05] so.......
[11:05] do you turn your back on a challenge?
[11:05] hehe
[11:05] only britain staying in the EU
=== zz_CyberJacob is now known as CyberJacob
[11:49] one thing I do know
[11:49] today is a really bad day to get paid in USD.....
=== spammy is now known as Guest86283
=== Guest86283 is now known as spammy
=== zul_ is now known as zul
[13:03] lots of aws service failures this morning
=== barry` is now known as barry
=== redelmann is now known as rudi|wfh
=== alexisb-afk is now known as alexisb
=== alexisb is now known as alexisb-afk
[15:09] arosales, beisner : ICYMI - there's a mongodb build for z now - https://bugs.launchpad.net/ubuntu/+source/mongodb/+bug/1595242/comments/1
[15:09] Bug #1595242: mongodb xenial s390x packages are needed (blocks ceilometer)
[15:18] dannf: very cool. OpenStack is still blocked until we can put this into a charm
[15:18] dannf: I am working with marcoceppi to work on this which we should hit by end of June, but OpenStack won't be able to consume till post that
[15:20] arosales: i pasted a link to my MP for mongodb that used a ppa for arm64 into that bug
[15:36] dannf, arosales - awesome. look forward to revisiting :)
[15:39] dannf: https://launchpad.net/~ubuntu-s390x-community/+archive/ubuntu/mongodb is the latest correct?
[15:45] dannf: also thanks for the pointer to the MP for the ARM enablement in mongo
[15:48] kjackal: sorry for being dense here but what steps am I missing @ http://paste.ubuntu.com/17806346/ to reproduce https://bugs.launchpad.net/juju-core/+bug/1593185
[15:48] Bug #1593185: In lxd the containers own fqdn is not inclused in /etc/hosts
[16:00] arosales: yep, that's the latest (and only) :)
[16:02] arosales: Ok, you deployed the ubuntu. The bug is that you cannot "ping juju-1706c4-0"
[16:02] cory_fu - got a sec to chat about https://github.com/juju-solutions/charmbox/pull/40 ?
[16:02] let me try to find my repro of the bug
[16:02] lazyPower: Sure
[16:03] cory_fu - just to be clear on intent, we want this to target devel, and only land in devel.
[16:03] and this is to reduce the divergence of those two branches?
[16:03] dannf: thanks
[16:04] i think this only affects mbruzek's PR that was made slightly before this pr, but this looks pretty sane.
[16:04] kjackal: that does fail for me, but does that break models?
[16:04] lazyPower: Yes, that branch (new-devel) should *replace* devel and then devel should remain only that single commit different from master and be constantly rebased up to master
[16:04] * arosales doesn't ping hostnames if I can ssh to them
[16:05] arosales: http://pastebin.ubuntu.com/17395368/
[16:05] but if inter charm communication is broke, then that is a huge issue
[16:05] cory_fu - ok, makes sense. I'll start getting this landed and ping if any additional questions crop up.
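Going back to the leader-based idea kjackal floated this morning, a rough sketch using charmhelpers' leader settings; the 'master-addrs' key, the state names, and the peer_addresses() helper on the peer interface are all assumptions for illustration.

    from charmhelpers.core import hookenv
    from charms.reactive import when, set_state
    from charms.reactive.helpers import data_changed

    @when('masters.joined')
    def publish_master_list(peers):
        # only the leader decides; it starts with just itself and revises
        # the list as more units join the peer relation
        if not hookenv.is_leader():
            return
        addrs = sorted(peers.peer_addresses() + [hookenv.unit_private_ip()])
        hookenv.leader_set({'master-addrs': ','.join(addrs)})

    @when('masters.joined')
    def follow_leader(peers):
        # every unit (leader included) reads the leader's current decision
        addrs = hookenv.leader_get('master-addrs')
        if addrs and data_changed('cluster.master-addrs', addrs):
            set_state('cluster.masters.published')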
I'm more interested in making sure I was reading the directions clearly :)
[16:06] arosales: yes, for a few minutes the connection using the fqdn is broken
[16:07] arosales: from what dimitern told me there is a dhcpclient that is firing for no good reason and it breaks the resolution
[16:08] AFAIU adding the hostname to the /etc/hosts file will cause the dhcpclient process not to fire
[16:10] arosales: our discussion is here: https://irclogs.ubuntu.com/2016/06/16/%23juju-dev.txt at around 10:00
[16:11] kjackal: ok let me see if hadoop-processing works here in lxd
[16:12] if inter charm communication works, agreed it is a bug, but perhaps not as huge a blocker as I initially thought
[16:14] arosales: I know for certain kafka start-up is failing because of this bug, petevg can confirm this because he hit this bug when reviewing the kafka charm
[16:15] Yep. Can confirm.
[16:15] I was able to reproduce consistently, too.
[16:16] Should I leave a comment on a ticket?
=== degville- is now known as degville
[16:18] kjackal: is your latest charm https://jujucharms.com/u/bigdata-dev/kafka/trusty or https://jujucharms.com/u/bigdata-dev/apache-kafka/trusty or should I build from layers?
[16:18] https://jujucharms.com/u/bigdata-dev/kafka/trusty is the latest BT kafka
[16:19] you should relate that kafka charm to openjdk and apache-zookeeper
[16:20] kjackal: ok
[16:20] kjackal: thanks
[16:20] I'll post my info to the bug, and I'll work to follow up next week with the juju team
[16:20] note that you have to be fast enough (or the machine slow enough) so that the dhcp client wont finish
[16:20] at least its on the target milestones though just need to be sure it gets to a release soon
[16:29] anyone know where I can find the juju log if debug-log doesn't work?
[16:31] jcastro: if you got to a point where a unit was deployed, you can do "juju ssh <service>/<unit number>", and then look in /var/log/juju/
[16:31] If you don't have a working unit, you can try doing "juju deploy --debug <charm>"
[16:31] nope, the instance launches and the container immediately stops and is removed, so I think it has something to do with provisioning the container
[16:32] Hmmm ... If you do a deploy with the --debug flag, do you get any output that hints at what might be going wrong?
[16:34] http://paste.ubuntu.com/17808709/
[16:34] I can launch instances just fine from the lxc command line
[16:37] jcastro: hmmm. That looks normal to me. I'm afraid that you're bumping up against the limits of my knowledge. Does anyone else have any ideas as far as troubleshooting goes?
=== frankban is now known as frankban|afk
[16:41] jcastro: what does `juju status --format yaml` say?
[16:45] aha!
[16:45] more information
[16:45] http://paste.ubuntu.com/17809286/
=== CyberJacob is now known as Guest66797
=== alexisb-afk is now known as alexisb
[16:48] jcastro: but lxc launch starts an instance ok? i'm not familiar with lxd forkstart and how juju starts lxd instances.
[16:49] yeah, manual launching works, so it's this forkstart that must be the issue
[16:49] https://bugs.launchpad.net/juju-core/+bug/1577524
[16:49] Bug #1577524: Error calling ''lxd forkstart juju-machine-2-lxd-0 /var/lib/lxd/containers /var/log/lxd/juju-machine-2-lxd-0/lxc.conf'': err=''exit
[16:49] status 1''
[16:50] jcastro: and the container is already gone? can you `lxc info --show-log juju-5c3262-0` ?
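(As an aside on the fqdn bug discussed above: the /etc/hosts workaround kjackal mentions could be scripted from a charm roughly like this; it is only an illustration, not the actual fix that landed in juju.)

    import socket

    def ensure_own_hostname(hosts_path='/etc/hosts'):
        # append "127.0.1.1 <hostname>" if the unit's own hostname is missing
        hostname = socket.gethostname()
        with open(hosts_path) as f:
            if any(hostname in line.split() for line in f):
                return
        with open(hosts_path, 'a') as f:
            f.write('127.0.1.1 {}\n'.format(hostname))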
[16:50] nope, the container comes up and goes away long before I can do that
[16:51] aha, the units log to /var/log/lxd and those are still there, going to go dig
[16:54] jrwren: can you pastebin me the results of your: lxc profile show juju-default
[16:55] the error in the logs was a networking one related to the modified template I was using to get openstack running on lxd
[16:56] jcastro: its pretty empty. http://paste.ubuntu.com/17809808/
[17:05] that appears to have fixed me up!
[17:28] WOOT!
[17:33] so I have asked about this a few different ways, but I am still unclear and the documentation is.. lacking. With juju 2.0 how do I tell a container what device(s) it can access on the hardware it is running on? I can't find any way to tell it to use '/dev/sdb1' or allow access to '/dev/ttyUSB0' or such. I can do it with lxc but I know that is replaced with lxd.
[17:37] jcastro, tvansteenburgh: I'm hitting the same problem testing wiki-simple with the new mysql that I was seeing before: It's not honoring the setup: directive in the mysql charm's tests.yaml file. That means it had nothing to do with me testing it with local: in the bundle before, because I'm using cs: now
[17:37] The "it" in this case is bundletester. I'm wondering if it's only honoring tests.yaml from the bundle itself?
[17:38] cory_fu: please file a bug on bt with steps to repro
[17:56] tvansteenburgh: Actually, I jumped the gun. The problem isn't BT.
[17:57] marcoceppi: You're going to love this.
[17:57] marcoceppi: Going to need another fix to mysql. >_<
[17:58] My bad
[18:11] I'm also running up against the '==' length error from `pip3 list` that's due to an older lib version in trusty which is causing 00-setup to fail. Does anyone recall what lib caused that issue and if there's a good way to fix it?
[18:11] For reference: http://pastebin.ubuntu.com/17813844/
[18:14] marcoceppi: Should I just remove this check altogether and always try to install PyMySQL? https://github.com/marcoceppi/charm-mysql/blob/master/tests/setup/00-setup#L33
[18:15] 00-setup now gets called before each case (e.g., charm proof, make lint, etc) so I wanted to minimize network traffic
[18:17] Actually, it looks like it doesn't do network traffic if it's already present, so yes, I should just remove it
[18:21] cory_fu: do what you must, I support any change
[18:22] marcoceppi: https://github.com/marcoceppi/charm-mysql/pull/1
[18:23] cory_fu: ta
[18:24] man, I really want a post commit push process for github now
[18:24] like, commit, build/test, push to norevision/development channel
[18:25] marcoceppi: Yeah, +1
[18:25] cory_fu: published
[18:26] cory_fu: I wonder if travis could do this. Like with a travis plugin
[18:26] * marcoceppi investigates
[18:28] marcoceppi: Huh. Why'd it go from 52 to 54?
[18:28] cory_fu: it actually was 53 at one point
[18:29] Not when I looked at it. I think you might have forgotten to publish 53. I saw you mention that it had gone up again but still saw it at 52 and just didn't say anything
[18:29] Anyway, it's sorted now
[18:32] cory_fu: I also want to find a way to notify bundle owners
[18:32] like "mysql has been incremented to 54, please check yo bundles bro"
[18:32] Indeed
[18:33] Shouldn't be very difficult.
The store has a list of bundles using a given charm rev
[18:34] yeah, and we know owners and bug urls
[18:34] I should be able to do if gh || lp - open a bug from bundlebot
[18:35] first, figure out travis
[18:40] marcoceppi: Ok, I'm super pissed at the mysql charm right now
[18:40] cory_fu: y u do dis
[18:41] marcoceppi: http://pastebin.ubuntu.com/17815449/
[18:42] So, the test has the max connections that it is testing hard-coded, but the bundle deploys with a different number. So the charm test and the bundle are incompatible
[18:42] Actually, the bundle gives -1 so there is no way the test will work
[18:43] cory_fu: which bundle, yours?
[18:43] wiki-simple
[18:44] weird, shouldn't the test run the configure?
[18:44] OIC
[18:44] it's set prior to setup, so it's not executed live
[18:45] So I guess I could fix it by removing reset: false
[18:45] cory_fu: this should fix it in MySQL
[18:45] http://paste.ubuntu.com/17815662/
[18:45] * marcoceppi tests
[18:49] cory_fu: this, I think, is the fix
[18:49] https://github.com/marcoceppi/charm-mysql/pull/2
[18:49] cory_fu: going to test the bundle with my version
[18:56] cory_fu: what flags are you using on bundletester?
[18:56] -vlDEBUG -b bundle.yaml
[18:57] marcoceppi: I also have these changes pending: http://pastebin.ubuntu.com/17816358/
[18:58] (The seemingly no-ops are eol-whitespace deletes)
[18:58] cory_fu: I have a few as well
[18:58] marcoceppi: Still getting the max connections error
[18:59] tvansteenburgh1: Is there a way to make BT not delete the tmp directory for a charm being tested as part of a bundle so I can verify it's using the right change?
[18:59] I guess I can breakpoint
[19:01] Odd. Second time through (with already deployed charms, to add the breakpoint), it passed the config check. grr
[19:04] cory_fu - https://hub.docker.com/r/jujusolutions/charmbox/builds/byulnqrh6s44sovbabborg/ new charmbox:devel is being built from your branch work. Thanks for submitting that
[19:04] lazyPower: Sweet. I'm looking forward to not having to manually install bundletester and make every time now. ;)
[19:05] (Not that they work with the current betas >_<)
[19:05] cory_fu - make sure it got everything you wanted... i had to manually merge due to conflicts
[19:05] o_O
[19:05] well, you targeted master not devel
[19:06] then left explicit instructions this was to be applied against devel
[19:06] soooo *throws confetti*
[19:06] Oh noooooooooo!
[19:06] do i need to back this out?
[19:06] lazyPower: I said it should *replace* devel, not merge into it!
[19:06] ah ok
[19:06] welp easy enough
[19:06] 1 sec incoming fix, disregard that build as its garbage
[19:06] :)
[19:07] lazyPower: The whole point of that commit is to keep the devel branch as *exactly* 1 commit different from master. Then, any changes that need to happen should go in master and the devel branch is then rebased against master
[19:08] no i got the principle
[19:08] i just botched the instructions
[19:08] Ok, sure
[19:08] But yeah, my intention was just to delete the whole of the current devel (or move it to old-devel if you want to preserve it for some reason) and replace it with the new-devel
[19:10] https://hub.docker.com/r/jujusolutions/charmbox/builds/bnoyhnumjzrrrfp2gn3emiy/
[19:10] that should make a bit more sense then :)
[19:11] lazyPower: Build failed. :(
[19:11] Very strange
[19:12] marcoceppi: I'm getting intermittent failures on that stupid max connections test. :(
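The kind of rewrite marcoceppi settles on a few lines below (check that the rendered file contains the configured value rather than exercising the connection limit live) might look roughly like this with amulet; the option name and the config path are assumptions about the mysql charm.

    #!/usr/bin/env python3
    import amulet

    d = amulet.Deployment(series='trusty')
    d.add('mysql')
    d.configure('mysql', {'max-connections': 200})
    d.setup(timeout=900)
    d.sentry.wait()

    unit = d.sentry['mysql'][0]
    # assumed location of the rendered config; adjust to wherever the charm writes it
    output, code = unit.run("grep -E '^max_connections' /etc/mysql/my.cnf")
    assert code == 0 and '200' in output, 'max_connections not rendered: %r' % output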
[19:12] of all the stupid configuration options to test, it has to be /that one/
[19:12] cory_fu pip install directive failed
[19:12] i'll pull it local and get it building before we submit to the builders again
[19:13] cory_fu: I'm waiting for my tests to finish, then I'm going to make better tests
[19:14] cory_fu: I don't need to test that MySQL knows how to handle max connections, I just need to test the damn file got written
[19:14] Yeah, +1
[19:14] make tests great again
[19:15] marcoceppi: Mind if I just hand wiki-simple off to you, then, so I can get back to some Bigtop stuff I've been neglecting?
[19:15] cory_fu: uh, sure
[19:15] So confident. :p
[19:17] marcoceppi: Actually, I'll go ahead and update wiki-simple with mysql-54 without reset: false so that you don't have to worry about the bundle
[19:19] cory_fu: don't we want reset: false?
[19:19] I know I do
[19:20] Yeah, we do, because it'll make the bundle tests faster
[19:21] Also, I'm pretty sure that max-connections test is racey anyway, so nevermind, I'll leave the bundle to you after all
[19:21] Or you can just let me know when you've fixed up mysql and I'll update the bundle then
[19:21] Whatever
[19:25] cory_fu: cool
[19:32] cory_fu - it appears upgrading pip before installing the deps resolves the build failure
=== scuttlemonkey is now known as scuttle|afk
[20:49] blimey
[20:49] its taken all day to figure out how to make the DC/OS masters scale dynamically
[20:49] but i think I've finally solved the problem
[20:49] \o/
[20:49] magicaltrout nice :)
[20:50] they make their configs immutable
[20:50] not an easy task from what i've gleaned
[20:50] which is a *right* pain in the backside
[20:50] their official advice if you want to add more masters is to tear down what you have and rebuild
[20:50] seriously?
[20:50] yeah
[20:51] welp, i'm happy we wont have that in the README
[20:51] but your masters should be pretty static, so its not a huge deal
[20:51] but
[20:51] three cheers for magicaltrout
[20:51] if your nodes fail and stuff
[20:51] then eventually you'd run out of masters
[20:51] are they not running any kind of consensus on the leaders?
[20:52] k8s has the same limitations, but its trivial to add a replica of the apiserver/scheduler-manager
[20:52] its probably an *enterprise* feature ;)
[20:52] in our current model the only downside is the PKI
[20:52] although i've not found it there either
[20:53] but its only a bunch of zookeeper backed stuff
[20:53] so i'm not sure why its so static
[20:53] surely thats part of the point of ZK?
[20:54] That sounds right, but ive only interfaced with ZK in terms of big data deployments, and didn't fully understand what it brought to the table
[20:59] its just distributed configuration management, nothing special but makes sure all your nodes stay in sync
[20:59] which these days is pretty important
[21:00] also though with DC/OS they have something called Exhibitor
[21:00] which appears to come from netflix
[21:01] which seems to ensure ZK is running and stuff
[21:01] which seems a bit weird
[21:01] jcastro: So, I've updated wiki-simple with the latest mysql and verified the tests all pass. I see that cs:~jorge/bundle/wiki-simple (stable) has Write: charmers (because it was promulgated) but only you have read or write perms to the unpublished channel, so you'll have to push (or grant to me or charmers)
[21:01] its like the old who's monitoring the monitoring scenario
[21:01] jcastro: where is the upstream source for wiki-simple? did you make a gh repo for it?
[21:01] marcoceppi: Yes
[21:02] marcoceppi: https://github.com/juju-solutions/wiki-simple
[21:02] nice
[21:04] 20GB of Virtual Box VMs to debug dcos locally
[21:04] good job I bought a new laptop
[21:07] cory_fu / lazyPower help me out its been a few weeks. I need a list of all IP's for a service
[21:07] I also need to update all units in my service if a new unit is added
[21:07] in python/reactive
[21:07] can you point me at some code
[21:13] or anyone else....
[21:14] magicaltrout: If I understand you correctly, you want something like https://github.com/juju-solutions/interface-zookeeper-quorum/blob/master/peers.py#L39 (ignore dismiss_joined, that's a bad pattern and should be removed) or possibly something like https://github.com/juju-solutions/interface-namenode-cluster/blob/master/peers.py#L49
[21:14] resolve_private_address is defined here: https://github.com/juju-solutions/jujubigdata/blob/master/jujubigdata/utils.py#L427
[21:15] hookenv.unit_private_ip() is poorly named because it usually won't actually return an IP, but sometimes will
[21:15] ah yeah the freaky conversations stuff i remember now
[21:16] magicaltrout: Usage would be something like https://github.com/juju-solutions/bigtop/blob/zookeeper/bigtop-packages/src/charm/zookeeper/layer-zookeeper/lib/charms/layer/zookeeper.py#L70
[21:16] okay and to keep it all in sync i don't need to wire it up, i just need to check for changes i guess
[21:16] Though the RelationBase stuff would be better served by getting your instance from a @when decorator
[21:17] magicaltrout: Checking for changes like https://github.com/juju-solutions/bigtop/blob/zookeeper/bigtop-packages/src/charm/zookeeper/layer-zookeeper/reactive/zookeeper.py#L50
[21:18] yeah thanks a lot cory_fu
[21:18] looks spot on
[21:18] magicaltrout: Sorting of lists is important with data_changed, FYI
[21:18] hmm k
[21:18] Glad I could help. I'm about to head out, tho, so further questions will have to be directed at kwmonroe ;)
[21:18] i can sort ip addressed like usual ?
[21:19] addresses
[21:19] Sure
[21:19] If you're curious how data_changed works, it's pretty simple: https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/helpers.py#L168
[21:19] cool, kwmonroe is about as useful as a chocolate fireguard....
[21:19] lol
[21:19] mmmm, chocolate
[21:19] hehe
[21:20] alright, i should be good, i've used that pattern somewhere else so i should be able to figure it out
[21:20] thanks cory_fu
[21:20] finally get this stuff pushed to the charm store
[21:21] before my country goes bankrupt and our internet turns into something akin to north korea
[21:23] #brexitproblems
[21:23] hehe
[21:24] i'm moving to scotland
[21:24] I think i'm going to aim for Nova Scotia, and wherever I land in between will be fine with me.
[21:25] or sealand.... lazyPower i could take over sealand and you could be my only minion whilst i'm king of the fort
[21:25] i mean, sure, but i'm a terrible minion
[21:25] hehe
[21:25] i'm a minion with megalomania
[21:25] i do like Nova Scotia
[21:25] i'm reigniting the discussion of moving with the mrs whilst she's in a state of wild depression over whats happened ;)
[21:26] Canada is great plus their PM is cool and knows his shit
[21:26] It certainly is putting uncertainty in my moving plans as well
[21:26] that said, I did like the Obama / Fallon slow jam the other day
[21:26] I was going to head over to the UK and AirBNB it for a month or two closer to fall...
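Pulling cory_fu's pointers above together into one rough sketch (relation, interface, and state names are assumptions): a method on the peer interface returns every peer's address, and the reactive handler only acts when the sorted list actually changes.

    # peers.py of a hypothetical peer interface
    from charms.reactive import RelationBase, scopes, hook

    class ClusterPeers(RelationBase):
        scope = scopes.UNIT

        @hook('{peers:dcos-cluster}-relation-{joined,changed,departed}')
        def changed(self):
            self.set_state('{relation_name}.members.changed')

        def peer_addresses(self):
            # 'private-address' is published on the relation for every unit
            return [conv.get_remote('private-address')
                    for conv in self.conversations()]

    # reactive layer of the charm
    from charms.reactive import when
    from charms.reactive.helpers import data_changed

    @when('cluster.members.changed')
    def update_members(peers):
        addrs = sorted(peers.peer_addresses())  # sort, per the data_changed tip
        if data_changed('cluster.peer-addrs', addrs):
            pass  # re-render config and restart the service here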
[21:26] that would be so bad in the UK
[21:27] lazyPower: in all honesty unless the bottom falls out of the economy nothing will happen for ages
[21:27] so i wouldn't worry about changing plans, just worry about the exchange rate ;)
[21:28] although technically today I got a 10% pay rise on NASA stuff without doing anything :)
[21:29] nice ^5
[21:29] and there is another devops day in Ghent I'm sure jcastro would like you to attend in October ;)
[21:31] magicaltrout - if you dug arctic life, i'm jammin out a playlist i made in the seattle airport lounge now. http://24.3.228.120:8000/listen.m3u
[21:34] hold on, i shall stab some buttons and have a listen
[21:34] buttonstabbing++
[21:36] you're either giving me an incorrect ip or you're behind a firewall
[21:36] i lie
[21:36] i can't type
[21:36] nah my girl is connected on that link. should be g2g if you're plugging it into a shoutcast compliant player
[21:37] i'll have to make it a point to setup the pirate radio again so its got an html5 player
[21:38] cool i'm in
[21:40] such a pain in the balls that Sonos doesn't let you add new radio stations on a phone/tablet else I'd have you streaming over my hifi ;)
[21:52] I hear ya. I just tore down all my audio gear in the house. I was kind of weepy when all that started coming down, because it got really real at that point.
[21:53] awww
[21:53] kwmonroe: i might have been a bit harsh earlier... i need a tip ;)
[21:58] cory pointed me to that peer relation side of the interface, and my understanding of relations is master <-> slave type stuff, but I'm after all the IP addresses of the units in the same service not related to each other via add-relations
[21:58] just units that coexist via add-unit
[22:00] magicaltrout - so does the charm have a peer relationship? if so, you can scrape it from that. Thats the only way you'll get all service IP's of deployed charms units. I do something very similar in the etcd 'cluster' interface.
[22:00] well thats what cory pointed me to, but as I don't relate anything I'm not sure how that would work
[22:00] for example, juju deploy dcos-master
[22:00] peering is implicit when you add-unit. they automagically get that relationship added.
[22:00] juju add-unit dcos-master
[22:01] ah
[22:01] hmm
[22:01] also running support between cuts :P
[22:01] hehe
[22:01] * lazyPower flex's
[22:01] told you kwmonroe was as much use as a chocolate fireguard
[22:01] between you and cory_fu he gets that all the time
[22:01] hehe
[22:03] first christmas related email of the year \o/
[22:04] in june?
[22:04] lolwut
[22:05] yup
[22:05] it has santa on and everything
[22:06] hello magicaltrout. i trust i've left you waiting long enough. watch the harshies next time. anyway, the zk quorum relation that cory pointed you to should be what you need.. https://github.com/juju-solutions/interface-zookeeper-quorum/blob/master/peers.py#L39
[22:07] yeah i didn't realise there was an auto peer relationship in the background
[22:07] mystery solved
[22:07] and like lazyPower said, it's implicit, so you'd create a method on that relation that returned a list of all peer ip addrs
[22:07] ah
[22:07] well, i shall stop re-explaining :)
[22:08] also, it's important to note that "auto peering" may not always work.. the charm that you want to peer needs to specify a peer relation in its metadata.yaml.
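The metadata.yaml stanza kwmonroe is referring to looks like this; the relation name 'cluster' and the interface 'dcos-cluster' are just placeholders for whatever the charm actually uses.

    peers:
      cluster:
        interface: dcos-cluster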
[22:09] i love the way everyone at canonical always caveats anything i ever want to do with "may not always work" :P [22:09] why can't you just be happy with mediawiki? you're always throwing new wrenches and acronyms in our shiny. [22:10] hehe [22:10] well i'm on a mission to finish off a bunch of the ones i've started [22:10] saiku will be done as soon as we ship 3.9, DCOS just needs master <-> master support, PDI just needs Hadoop pluggability [22:11] and, to be clear, when i said it "may not always work", what i should have said is "it will never work unless your charms has a peer relation in the metadata.yaml; it will always work easy peasy if you do it right." [22:11] time to clean up all the charms i've started [22:11] nice! [22:12] lmk if/when you're hooking pdi up to hadoop. we can walk through the hadoop-plugin relation, which will probably just work right out of the box. ;) [22:12] hehe. Yeah I need to investigate the PDI side of it. Pentaho devs say "oh you just add the libs", personally i'm more pessimistic [22:12] we shall see [22:12] cool [22:13] after this list i'm going to drop into some new stuff i picked up from Pooldor (a belgium artist who'm i've become quite infatuated with recently) [22:13] not.... artist infactuation! [22:13] *Poldoore [22:13] lazyPower: i've hacked into your m3u. thanks for the tunes this fine friday! [22:14] hehe [22:14] kwmonroe aww yeee bruddah [22:14] glad you could make it [22:15] i'd also like to port scispark in the not too distant future kwmonroe to try and entice the JPL guys a bit mor [22:15] e [22:16] sweet magicaltrout! maybe they'll give you another raise ;) [22:16] it's what 'merica does when we feel bad for other countries. [22:16] hehe [22:16] i'll speak to donald [22:16] he's up in scotland making a tit of himself [22:17] lol. i expect nothing less. [22:17] told the scottish that its a great day for the uk when scotland backed staying in the EU massively [22:17] good work Donald! [22:27] liking this track lazyPower [22:27] good work [22:27] All Poldoore my friend. no lazy intervention on this perfection [22:28] hehe [22:32] by far an away my favorite jam he's done [22:32] this rocks my car on every road trip in 2016 so far :) [22:33] yeah this is some cool stuff, i'll be looking it up next week [22:33] thats part of the problem working at home [22:33] Thanks for letting me share :) [22:33] i have music on all day [22:33] finding new stuff is always hard [22:33] i really do dig getting to alienate people with my taste in music [22:35] urgh monday sucks, trip into london and a 9pm SFO meeting [22:35] *bork* [22:36] oh well tunes for the train at least! [22:37] http://poldoore.bandcamp.com/ [22:37] ;) [22:37] i should get an affiliate link, get me on the insider track to getting pre-release jams [22:52] right i'm offski, got some cricket to umpire in the morning... I know you americans don't understand that concept.... [22:52] thanks for the tunes lazyPower! [22:53] Thanks for tuning in magicaltrout o/ have a good weekend