magicaltrout | can i tell a reactive charm to not execute a block until all the units are in the same state? | 09:53 |
magicaltrout | kjackal: you'll know the answer to that | 09:54 |
kjackal | Hi magicaltrout | 09:55 |
kjackal | let me think... | 09:55 |
kjackal | magicaltrout: ok so, here is how you can do that. You can have a relation of peer type among the service units | 09:56 |
kjackal | you should then make sure the units set the correct state within the interface implementing the peer relation | 09:57 |
magicaltrout | ah right yeah, so just plonk in a "we're all ready for the next step" state in my interface | 09:59 |
kjackal | for example, when you enter the correct state you do something like this inside the interface: "remove_state('wrong.state') set_state('correct.state')" | 09:59 |
kjackal | then inside the reactive part you should have something like @when_not('peers.in.wrong.state') def do_whatever: .... | 10:00 |
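A minimal sketch of the pattern kjackal outlines, with both halves in one place. All names here (the `cluster` peer relation, the `waiting`/`allready` states, `myservice.configured`) are illustrative assumptions, not an existing interface:

```python
# peers.py — interface side: flip states as peers report in,
# paraphrasing kjackal's remove_state/set_state snippet.
# (Each unit is assumed to publish its own 'state' key via set_remote.)
from charms.reactive import RelationBase, hook, scopes

class ClusterPeers(RelationBase):
    scope = scopes.UNIT

    @hook('{peers:cluster}-relation-changed')
    def changed(self):
        conv = self.conversation()
        if all(c.get_remote('state') == 'ready'
               for c in self.conversations()):
            conv.remove_state('{relation_name}.waiting')
            conv.set_state('{relation_name}.allready')


# reactive/myservice.py — gate the next step on the peers' state
from charms.reactive import when, when_not, set_state
from charmhelpers.core import hookenv

@when('cluster.allready')           # set by the interface above
@when_not('myservice.configured')
def proceed(cluster):
    hookenv.status_set('active', 'all peers ready, running next step')
    set_state('myservice.configured')
```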
kjackal | Let me think.... | 10:01 |
kjackal | So there might be another way to do that | 10:01 |
kjackal | You could use the leader layer | 10:01 |
kjackal | the leader should be gathering info on the state of the units and the units should ask the leader if all of them are in the right state | 10:02 |
kjackal | I am sure this solution with the leader is doable, but I do not have it in my head right now, I will need to do some research on this | 10:03 |
kjackal | magicaltrout: ^ | 10:03 |
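A rough sketch of the leader-based alternative kjackal describes, using the leadership layer's reactive states plus charmhelpers' leader-data calls; the `cluster` relation and its `state` key are assumptions carried over from the sketch above:

```python
from charms.reactive import when, set_state
from charmhelpers.core import hookenv

@when('leadership.is_leader', 'cluster.joined')
def tally_peer_states(cluster):
    # the leader gathers each unit's reported state from the peer
    # relation and publishes the verdict through leader data
    states = [c.get_remote('state') for c in cluster.conversations()]
    hookenv.leader_set(all_ready=str(all(s == 'ready' for s in states)))

@when('leadership.changed.all_ready')
def follow_leader():
    # every unit (leader included) reacts when the verdict changes
    if hookenv.leader_get('all_ready') == 'True':
        set_state('cluster.allready')
```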
kjackal | magicaltrout: any comment on the referendum? | 10:03 |
magicaltrout | yeah i'm just looking at stub 's github page thanks kjackal | 10:03 |
magicaltrout | yeah, fucking shit | 10:04 |
magicaltrout | anyway | 10:04 |
magicaltrout | not great, the uk has had better days | 10:04 |
kjackal | in the bigdata world the need for coordination among units is a bad sign, and is usually outsourced to services like zookeeper. I am sure you already know that | 10:06 |
magicaltrout | DC/OS has an immutable configuration for whatever reason | 10:06 |
magicaltrout | so they expect all masters to be up and running before installing the config on them | 10:06 |
magicaltrout | i'm sure there is a way to do it on the fly, but they don't document it. So, spin up the masters, wait for all the ip's to be addressable and then run the config setup | 10:07 |
stub | magicaltrout: The problem you might trip over is that you don't know when all the units have actually joined the peer relation. You can detect all units that have joined so far are in a particular state, but you don't know how many more nodes are yet to be provisioned and join. | 10:17 |
magicaltrout | yeah i know stub I was just mulling that over | 10:17 |
stub | magicaltrout: The approach I use is have the leader set itself up as a stand alone unit, and as other units join have them grab a lock (using the coordinator layer) and have them join one at a time. | 10:19 |
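A sketch of stub's approach as described, assuming the coordinator layer's acquire()/granted-state pattern: the leader bootstraps alone, and every other unit serialises its join by waiting for a lock. The `join` lock name and the `myservice.*` states are illustrative:

```python
from charms import coordinator
from charms.reactive import when, when_not, set_state

@when('leadership.is_leader')
@when_not('myservice.bootstrapped')
def bootstrap_standalone():
    # the leader sets itself up as a stand-alone unit first
    set_state('myservice.bootstrapped')

@when_not('leadership.is_leader', 'myservice.joined',
          'coordinator.requested.join')
def want_to_join():
    coordinator.acquire('join')   # queue for the cluster-join lock

@when('coordinator.granted.join')
@when_not('myservice.joined')
def join_cluster():
    # only one unit holds the lock at a time, so units join serially
    set_state('myservice.joined')
```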
kjackal | yes stub is right. Question for you stub, how do you know if you will have 4 or 5 masters for example? | 10:20 |
stub | Assuming your service can dynamically scale out like that | 10:20 |
kjackal | does the leader know upfront the exact number of masters that should join? | 10:20 |
magicaltrout | the 2nd option i have is just to figure out how to make the DC/OS masters scalable but they lock down the setup reconfig so I need to reverse engineer their installer :) | 10:21 |
magicaltrout | the other idea I had for simplicity was just to dump it all in an action | 10:21 |
magicaltrout | that would at least get it going quicker ;) | 10:21 |
stub | kjackal: Unless it is specified by the user in configuration ('I want 3 masters' or 'there will be at least 10 nodes'), then the leader has no way of knowing how many units are expected to join. | 10:21 |
stub | kjackal: So the leader will need to make a decision, starting with 1 master since it is standalone, and revise this decision as more units join. | 10:22 |
kjackal | stub: understood, yes the leader needs to know or guess the number of masters | 10:23 |
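One way the count decision stub describes could look, preferring an operator-supplied config value and otherwise revising an estimate as peers join; the `master-count` config option and `cluster` relation are assumptions:

```python
from charms.reactive import when
from charmhelpers.core import hookenv

@when('leadership.is_leader', 'cluster.joined')
def decide_master_count(cluster):
    configured = hookenv.config().get('master-count')  # may be unset
    observed = len(cluster.conversations()) + 1        # peers seen + self
    hookenv.leader_set(master_count=str(configured or observed))
```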
magicaltrout | on the bright side... my new 4k monitor just turned up | 10:33 |
magicaltrout | might need to buy a 2nd one now | 10:33 |
kjackal | magicaltrout: I think the next step should be to get 3 more 4k monitors so that you can make an 8k! | 11:04 |
magicaltrout | kjackal: I have a thunderbolt 3 port | 11:04 |
magicaltrout | which drives dual hdmi | 11:05 |
magicaltrout | plus a hdmi port on the laptop | 11:05 |
magicaltrout | so....... | 11:05 |
kjackal | do you turn your back on a challenge? | 11:05 |
magicaltrout | hehe | 11:05 |
magicaltrout | only britain staying in the EU | 11:05 |
magicaltrout | one thing I do know | 11:49 |
magicaltrout | today is a really bad day to get paid in USD..... | 11:49 |
icey | lots of aws service failures this morning | 13:03 |
dannf | arosales, beisner : ICYMI - there's a mongodb build for z now - https://bugs.launchpad.net/ubuntu/+source/mongodb/+bug/1595242/comments/1 | 15:09 |
mup | Bug #1595242: mongodb xenial s390x packages are needed (blocks ceilometer) <s390x> <uosci> <mongodb (Ubuntu):New> <ceilometer (Juju Charms Collection):New> <ceilometer-agent (Juju Charms Collection):New> <mongodb (Juju Charms Collection):New> <https://launchpad.net/bugs/1595242> | 15:09 |
arosales | dannf: very cool. OpenStack is still blocked until we can put this into a charm | 15:18 |
arosales | dannf: I am working with marcoceppi on this, which we should hit by the end of June, but OpenStack won't be able to consume it till after that | 15:18 |
dannf | arosales: i pasted a link to my MP for mongodb that used a ppa for arm64 into that bug | 15:20 |
beisner | dannf, arosales - awesome. look forward to revisiting :) | 15:36 |
arosales | dannf: https://launchpad.net/~ubuntu-s390x-community/+archive/ubuntu/mongodb is the latest correct? | 15:39 |
arosales | dannf: also thanks for the pointer to the MP for the ARM enablement in mongo | 15:45 |
arosales | kjackal: sorry for being dense here but what steps am I missing @ http://paste.ubuntu.com/17806346/ to reproduce https://bugs.launchpad.net/juju-core/+bug/1593185 | 15:48 |
mup | Bug #1593185: In lxd the containers own fqdn is not inclused in /etc/hosts <addressability> <hours> <lxd> <lxd-provider> <network> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1593185> | 15:48 |
dannf | arosales: yep, that's the latest (and only) :) | 16:00 |
kjackal | arosales: Ok, you deployed the ubuntu charm. The bug is that you cannot "ping juju-1706c4-0" | 16:02 |
lazyPower | cory_fu - got a sec to chat about https://github.com/juju-solutions/charmbox/pull/40 ? | 16:02 |
kjackal | let me try to find my repro of the bug | 16:02 |
cory_fu | lazyPower: Sure | 16:02 |
lazyPower | cory_fu - just to be clear on intent, we want this to target devel, and only land in devel. | 16:03 |
lazyPower | and this is to reduce the divergence of those two branches? | 16:03 |
arosales | dannf: thanks | 16:03 |
lazyPower | i think this only affects mbruzek's PR that was made slightly before this PR, but this looks pretty sane. | 16:04 |
arosales | kjackal: that does fail for me, but does that break models? | 16:04 |
cory_fu | lazyPower: Yes, that branch (new-devel) should *replace* devel and then devel should remain only that single commit different from master and be constantly rebased up to master | 16:04 |
* arosales doesn't ping hostnames if I can ssh to them | 16:04 | |
kjackal | arosales: http://pastebin.ubuntu.com/17395368/ | 16:05 |
arosales | but if inter-charm communication is broken, then that is a huge issue | 16:05 |
lazyPower | cory_fu - ok, makes sense. I'll start getting this landed and ping if any additional questions crop up. I'm more interested in making sure I was reading the directions clearly :) | 16:05 |
kjackal | arosales: yes, for a few minutes the connection using the fqdn is broken | 16:06 |
kjackal | arosales: from what dimitern told me there is a dhcpclient that is firing for no good reason and it breaks the resolution | 16:07 |
kjackal | AFAIU adding the hostname to the /etc/hosts file will cause the dhcpclient process not to fire | 16:08 |
kjackal | arosales: our discussion is here: https://irclogs.ubuntu.com/2016/06/16/%23juju-dev.txt at around 10:00 | 16:10 |
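A minimal sketch of the workaround being discussed: pinning the unit's own FQDN in /etc/hosts so local name resolution survives the dhclient behaviour. The 127.0.1.1 convention and the idempotency check are assumptions, not taken from the bug report:

```python
import socket

def ensure_fqdn_in_hosts(hosts_path='/etc/hosts'):
    fqdn = socket.getfqdn()
    hostname = socket.gethostname()
    with open(hosts_path) as f:
        if any(fqdn in line.split() for line in f):
            return  # already resolvable locally
    with open(hosts_path, 'a') as f:
        f.write('127.0.1.1 {} {}\n'.format(fqdn, hostname))
```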
arosales | kjackal: ok let me see if hadoop-processing works here in lxd | 16:11 |
arosales | if inter-charm communication works, agreed it is a bug, but perhaps not as huge a blocker as I initially thought | 16:12 |
kjackal | arosales: I know for certain kafka start-up is failing because of this bug, petevg can confirm this because he hit this bug when reviewing the kafka charm | 16:14 |
petevg | Yep. Can confirm. | 16:15 |
petevg | I was able to reproduce consistently, too. | 16:15 |
petevg | Should I leave a comment on a ticket? | 16:16 |
arosales | kjackal: is your latest charm https://jujucharms.com/u/bigdata-dev/kafka/trusty or https://jujucharms.com/u/bigdata-dev/apache-kafka/trusty or should I build from layers? | 16:18 |
kjackal | https://jujucharms.com/u/bigdata-dev/kafka/trusty is the latest BT kafka | 16:18 |
kjackal | you should relate that kafka charm to openjdk and apache-zookeeper | 16:19 |
arosales | kjackal: ok | 16:20 |
arosales | kjackal: thanks | 16:20 |
arosales | I'll post my info to the bug, and I'll work to follow up next week with the juju team | 16:20 |
kjackal | note that you have to be fast enough (or the machine slow enough) so that the dhcp client won't finish | 16:20 |
arosales | at least it's on the target milestones, though; just need to be sure it gets into a release soon | 16:20 |
jcastro | anyone know where I can find the juju log if debug-log doesn't work? | 16:29 |
petevg | jcastro: if you got to a point where a unit was deployed, you can do "juju ssh <service>/<unit>", and then look in /var/log/juju/ | 16:31 |
petevg | If you don't have a working unit, you can try doing "juju deploy --debug <charm>" | 16:31 |
jcastro | nope, the instance launches and the container immediately stops and is removed, so I think it has something to do with provisioning the container | 16:31 |
petevg | Hmmm ... If you do a deploy with the --debug flag, do you get any output that hints at what might be going wrong? | 16:32 |
jcastro | http://paste.ubuntu.com/17808709/ | 16:34 |
jcastro | I can launch instances just fine from the lxc command line | 16:34 |
petevg | jcastro: hmmm. That looks normal to me. I'm afraid that you're bumping up against the limits of my knowledge. Does anyone else have any ideas as far as troubleshooting goes? | 16:37 |
jrwren | jcastro: what does `juju status --format yaml` say? | 16:41 |
jcastro | aha! | 16:45 |
jcastro | more information | 16:45 |
jcastro | http://paste.ubuntu.com/17809286/ | 16:45 |
jrwren | jcastro: but lxc launch starts an instance ok? i'm not familiar with lxd forkstart and how juju starts lxd instances. | 16:48 |
jcastro | yeah, manual launching works, so it's this forkstart that must be the issue | 16:49 |
jcastro | https://bugs.launchpad.net/juju-core/+bug/1577524 | 16:49 |
mup | Bug #1577524: Error calling ''lxd forkstart juju-machine-2-lxd-0 /var/lib/lxd/containers /var/log/lxd/juju-machine-2-lxd-0/lxc.conf'': err=''exit status 1'' <ci> <deploy> <intermittent-failure> <juju-release-support> <lxd> <juju-core:Triaged by cherylj> <https://launchpad.net/bugs/1577524> | 16:49 |
jrwren | jcastro: and the container is already gone? can you `lxc info --show-log juju-5c3262-0` ? | 16:50 |
jcastro | nope, the container comes up and goes away long before I can do that | 16:50 |
jcastro | aha, the units log to /var/log/lxd and those are still there, going to go dig | 16:51 |
jcastro | jrwren: can you pastebin me the results of your: lxc profile show juju-default | 16:54 |
jcastro | the error in the logs was a networking one related to the modified template I was using to get openstack running on lxd | 16:55 |
jrwren | jcastro: its pretty empty. http://paste.ubuntu.com/17809808/ | 16:56 |
jcastro | that appears to have fixed me up! | 17:05 |
jrwren | WOOT! | 17:28 |
xilet | so I have asked about this a few different ways, but I am still unclear and the documentation is.. lacking. With juju 2.0 how do I tell a container what device(s) it can access on the hardware it is running on? I can't find any way to tell it to use '/dev/sdb1' or allow access to '/dev/ttyUSB0' or such. I can do it with lxc but I know that is replaced with lxd. | 17:33 |
cory_fu | jcastro, tvansteenburgh: I'm hitting the same problem testing wiki-simple with the new mysql that I was seeing before: It's not honoring the setup: directive in the mysql charm's tests.yaml file. That means it had nothing to do with me testing it with local: in the bundle before, because I'm using cs: now | 17:37 |
cory_fu | The "it" in this case is bundletester. I'm wondering if it's only honoring tests.yaml from the bundle itself? | 17:37 |
tvansteenburgh | cory_fu: please file a bug on bt with steps to repro | 17:38 |
cory_fu | tvansteenburgh: Actually, I jumped the gun. The problem isn't BT. | 17:56 |
cory_fu | marcoceppi: You're going to love this. | 17:57 |
cory_fu | marcoceppi: Going to need another fix to mysql. >_< | 17:57 |
cory_fu | My bad | 17:58 |
cory_fu | I'm also running up against the '==' length error from `pip3 list` that's due to an older lib version in trusty which is causing 00-setup to fail. Does anyone recall what lib caused that issue and if there's a good way to fix it? | 18:11 |
cory_fu | For reference: http://pastebin.ubuntu.com/17813844/ | 18:11 |
cory_fu | marcoceppi: Should I just remove this check altogether and always try to install PyMySQL? https://github.com/marcoceppi/charm-mysql/blob/master/tests/setup/00-setup#L33 | 18:14 |
cory_fu | 00-setup now gets called before each case (e.g., charm proof, make lint, etc) so I wanted to minimize network traffic | 18:15 |
cory_fu | Actually, it looks like it doesn't do network traffic if it's already present, so yes, I should just remove it | 18:17 |
marcoceppi | cory_fu: do what you must, I support any change | 18:21 |
cory_fu | marcoceppi: https://github.com/marcoceppi/charm-mysql/pull/1 | 18:22 |
marcoceppi | cory_fu: ta | 18:23 |
marcoceppi | man, I really want a post commit push process for github now | 18:24 |
marcoceppi | like, commit, build/test, push to norevision/development channel | 18:24 |
cory_fu | marcoceppi: Yeah, +1 | 18:25 |
marcoceppi | cory_fu: published | 18:25 |
marcoceppi | cory_fu: I wonder if travis could do this. Like with a travis plugin | 18:26 |
* marcoceppi investigates | 18:26 | |
cory_fu | marcoceppi: Huh. Why'd it go from 52 to 54? | 18:28 |
marcoceppi | cory_fu: it actually was 53 at one point | 18:28 |
cory_fu | Not when I looked at it. I think you might have forgotten to publish 53. I saw you mention that it had gone up again but still saw it at 52 and just didn't say anything | 18:29 |
cory_fu | Anyway, it's sorted now | 18:29 |
marcoceppi | cory_fu: I also want to find a way to notify bundle owners | 18:32 |
marcoceppi | like "mysql has been incremented to 54, please check yo bundles bro" | 18:32 |
cory_fu | Indeed | 18:32 |
cory_fu | Shouldn't be very difficult. The store has a list of bundles using a given charm rev | 18:33 |
marcoceppi | yeah, and we know owners and bug urls | 18:34 |
marcoceppi | I should be able to do if gh || lp - open a bug from bundlebot | 18:34 |
marcoceppi | first, figure out travis | 18:35 |
cory_fu | marcoceppi: Ok, I'm super pissed at the mysql charm right now | 18:40 |
marcoceppi | cory_fu: y u do dis | 18:40 |
cory_fu | marcoceppi: http://pastebin.ubuntu.com/17815449/ | 18:41 |
cory_fu | So, the test has the max connections that it is testing hard-coded, but the bundle deploys with a different number. So the charm test and the bundle are incompatible | 18:42 |
cory_fu | Actually, the bundle gives -1 so there is no way the test will work | 18:42 |
marcoceppi | cory_fu: which bundle, yours? | 18:43 |
cory_fu | wiki-simple | 18:43 |
marcoceppi | weird, shouldn't the test run the configure? | 18:44 |
marcoceppi | OIC | 18:44 |
marcoceppi | it's set prior to setup, so it's not executed live | 18:44 |
cory_fu | So I guess I could fix it by removing reset: false | 18:45 |
marcoceppi | cory_fu: this should fix it in MySQL | 18:45 |
marcoceppi | http://paste.ubuntu.com/17815662/ | 18:45 |
* marcoceppi tests | 18:45 | |
marcoceppi | cory_fu: this, I think, is the fix | 18:49 |
marcoceppi | https://github.com/marcoceppi/charm-mysql/pull/2 | 18:49 |
marcoceppi | cory_fu: going to test the bundle with my version | 18:49 |
marcoceppi | cory_fu: what flags are you using on bundletester? | 18:56 |
cory_fu | -vlDEBUG -b bundle.yaml | 18:56 |
cory_fu | marcoceppi: I also have these changes pending: http://pastebin.ubuntu.com/17816358/ | 18:57 |
cory_fu | (The seemingly no-ops are eol-whitespace deletes) | 18:58 |
marcoceppi | cory_fu: I have a few as well | 18:58 |
cory_fu | marcoceppi: Still getting the max connections error | 18:58 |
cory_fu | tvansteenburgh1: Is there a way to make BT not delete the tmp directory for a charm being tested as part of a bundle so I can verify it's using the right change? | 18:59 |
cory_fu | I guess I can breakpoint | 18:59 |
cory_fu | Odd. Second time through (with already deployed charms, to add the breakpoint), it passed the config check. grr | 19:01 |
lazyPower | cory_fu - https://hub.docker.com/r/jujusolutions/charmbox/builds/byulnqrh6s44sovbabborg/ new charmbox:devel is being built from your branch work. Thanks for submitting that | 19:04 |
cory_fu | lazyPower: Sweet. I'm looking forward to not having to manually install bundletester and make every time now. ;) | 19:04 |
cory_fu | (Not that they work with the current betas >_<) | 19:05 |
lazyPower | cory_fu - make sure it got everything you wanted... i had to manually merge due to conflicts | 19:05 |
cory_fu | o_O | 19:05 |
lazyPower | well, you targeted master not devel | 19:05 |
lazyPower | then left explicit instructions this was to be applied against devel | 19:06 |
lazyPower | soooo *throws confetti* | 19:06 |
cory_fu | Oh noooooooooo! | 19:06 |
lazyPower | do i need to back this out? | 19:06 |
cory_fu | lazyPower: I said it should *replace* devel, not merge into it! | 19:06 |
lazyPower | ah ok | 19:06 |
lazyPower | welp easy enough | 19:06 |
lazyPower | 1 sec incoming fix, disregard that build as its garbage | 19:06 |
cory_fu | :) | 19:06 |
cory_fu | lazyPower: The whole point of that commit is to keep the devel branch as *exactly* 1 commit different from master. Then, any changes that need to happen should go in master and the devel branch is then rebased against master | 19:07 |
lazyPower | no i got the principle | 19:08 |
lazyPower | i just botched the instructions | 19:08 |
cory_fu | Ok, sure | 19:08 |
cory_fu | But yeah, my intention was just to delete the whole of the current devel (or move it to old-devel if you want to preserve it for some reason) and replace it with the new-devel | 19:08 |
lazyPower | https://hub.docker.com/r/jujusolutions/charmbox/builds/bnoyhnumjzrrrfp2gn3emiy/ | 19:10 |
lazyPower | that should make a bit more sense then :) | 19:10 |
cory_fu | lazyPower: Build failed. :( | 19:11 |
cory_fu | Very strange | 19:11 |
cory_fu | marcoceppi: I'm getting intermittent failures on that stupid max connections test. :( | 19:12 |
marcoceppi | of all the stupid configuration options to test, it has to be /that one/ | 19:12 |
lazyPower | cory_fu pip install directive failed | 19:12 |
lazyPower | i'll pull it local and get it building before we submit to the builders again | 19:12 |
marcoceppi | cory_fu: I'm waiting for my tests to finish, then I'm going to make better tests | 19:13 |
marcoceppi | cory_fu: I don't need to test that MySQL knows how to handle max connections, I just need to test the damn file got written | 19:14 |
cory_fu | Yeah, +1 | 19:14 |
marcoceppi | make tests great again | 19:14 |
cory_fu | marcoceppi: Mind if I just hand wiki-simple off to you, then, so I can get back to some Bigtop stuff I've been neglecting? | 19:15 |
marcoceppi | cory_fu: uh, sure | 19:15 |
cory_fu | So confident. :p | 19:15 |
cory_fu | marcoceppi: Actually, I'll go ahead and update wiki-simple with mysql-54 without reset: false so that you don't have to worry about the bundle | 19:17 |
marcoceppi | cory_fu: don't we want reset: false? | 19:19 |
marcoceppi | I know I do | 19:19 |
cory_fu | Yeah, we do, because it'll make the bundle tests faster | 19:20 |
cory_fu | Also, I'm pretty sure that max-connections test is racy anyway, so never mind, I'll leave the bundle to you after all | 19:21 |
cory_fu | Or you can just let me know when you've fixed up mysql and I'll update the bundle then | 19:21 |
cory_fu | Whatever | 19:21 |
marcoceppi | cory_fu: cool | 19:25 |
lazyPower | cory_fu - it appears upgrading pip before installing the deps resolves the build failure | 19:32 |
magicaltrout | blimey | 20:49 |
magicaltrout | its taken all day to figure out how to make the DC/OS masters scale dynamically | 20:49 |
magicaltrout | but i think I've finally solved the problem | 20:49 |
magicaltrout | \o/ | 20:49 |
lazyPower | magicaltrout nice :) | 20:49 |
magicaltrout | they make their configs immutable | 20:50 |
lazyPower | not an easy task from what i've gleaned | 20:50 |
magicaltrout | which is a *right* pain in the backside | 20:50 |
magicaltrout | their official advice if you want to add more masters is to tear down what you have and rebuild | 20:50 |
lazyPower | seriously? | 20:50 |
magicaltrout | yeah | 20:50 |
lazyPower | welp, i'm happy we wont have that in the README | 20:51 |
magicaltrout | but your masters should be pretty static, so its not a huge deal | 20:51 |
magicaltrout | but | 20:51 |
lazyPower | three cheers for magicaltrout | 20:51 |
magicaltrout | if your nodes fail and stuff | 20:51 |
magicaltrout | then eventually you'd run out of masters | 20:51 |
lazyPower | are they not running any kind of consensus on the leaders? | 20:51 |
lazyPower | k8s has the same limitations, but its trivial to add a replica of the apiserver/scheduler-manager | 20:52 |
magicaltrout | its probably an *enterprise* feature ;) | 20:52 |
lazyPower | in our current model the only downside is the PKI | 20:52 |
magicaltrout | although i've not found it there either | 20:52 |
magicaltrout | but its only a bunch of zookeeper backed stuff | 20:53 |
magicaltrout | so i'm not sure why its so static | 20:53 |
magicaltrout | surely thats part of the point of ZK? | 20:53 |
lazyPower | That sounds right, but ive only interfaced with ZK in terms of big data deployments, and didn't fully understand what it brought to the table | 20:54 |
magicaltrout | its just distributed configuration management, nothing special but makes sure all your nodes stay in sync | 20:59 |
magicaltrout | which these days is pretty important | 20:59 |
magicaltrout | also though with DC/OS they have something called Exhibitor | 21:00 |
magicaltrout | which appears to come from netflix | 21:00 |
magicaltrout | which seems to ensure ZK is running and stuff | 21:01 |
magicaltrout | which seems a bit weird | 21:01 |
cory_fu | jcastro: So, I've updated wiki-simple with the latest mysql and verified the tests all pass. I see that cs:~jorge/bundle/wiki-simple (stable) has Write: charmers (because it was promulgated) but only you have read or write perms to the unpublished channel, so you'll have to push (or grant to me or charmers) | 21:01 |
magicaltrout | its like the old who's monitoring the monitoring scenario | 21:01 |
marcoceppi | jcastro: where is the upstream source for wiki-simple? did you make a gh repo for it? | 21:01 |
cory_fu | marcoceppi: Yes | 21:01 |
cory_fu | marcoceppi: https://github.com/juju-solutions/wiki-simple | 21:02 |
marcoceppi | nice | 21:02 |
magicaltrout | 20GB of Virtual Box VMs to debug dcos locally | 21:04 |
magicaltrout | good job I bought a new laptop | 21:04 |
magicaltrout | cory_fu / lazyPower help me out its been a few weeks. I need a list of all IP's for a service | 21:07 |
magicaltrout | I also need to update all units in my service if a new unit is added | 21:07 |
magicaltrout | in python/reactive | 21:07 |
magicaltrout | can you point me at some code | 21:07 |
magicaltrout | or anyone else.... | 21:13 |
cory_fu | magicaltrout: If I understand you correctly, you want something like https://github.com/juju-solutions/interface-zookeeper-quorum/blob/master/peers.py#L39 (ignore dismiss_joined, that's a bad pattern and should be removed) or possibly something like https://github.com/juju-solutions/interface-namenode-cluster/blob/master/peers.py#L49 | 21:14 |
cory_fu | resolve_private_address is defined here: https://github.com/juju-solutions/jujubigdata/blob/master/jujubigdata/utils.py#L427 | 21:14 |
cory_fu | hookenv.unit_private_ip() is poorly named because it usually won't actually return an IP, but sometimes will | 21:15 |
magicaltrout | ah yeah the freaky conversations stuff i remember now | 21:15 |
cory_fu | magicaltrout: Usage would be something like https://github.com/juju-solutions/bigtop/blob/zookeeper/bigtop-packages/src/charm/zookeeper/layer-zookeeper/lib/charms/layer/zookeeper.py#L70 | 21:16 |
magicaltrout | okay and to keep it all in sync i don't need to wire it up, i just need to check for changes i guess | 21:16 |
cory_fu | Though the RelationBase stuff would be better served by getting your instance from a @when decorator | 21:16 |
cory_fu | magicaltrout: Checking for changes like https://github.com/juju-solutions/bigtop/blob/zookeeper/bigtop-packages/src/charm/zookeeper/layer-zookeeper/reactive/zookeeper.py#L50 | 21:17 |
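A condensed sketch of what those links show, with illustrative names (`cluster`, `peer_addrs`): the peer interface exposes the peers' addresses, and the reactive layer re-configures only when the sorted list actually changes. See the linked peers.py files for the real implementations:

```python
from charms.reactive import RelationBase, hook, scopes, when
from charms.reactive.helpers import data_changed
from charmhelpers.core import hookenv

# interface side (peers.py)
class ClusterPeers(RelationBase):
    scope = scopes.UNIT

    @hook('{peers:cluster}-relation-{joined,changed,departed}')
    def changed(self):
        self.conversation().set_state('{relation_name}.joined')

    def peer_addrs(self):
        # 'private-address' may be a hostname rather than an IP,
        # which is why the linked code resolves it first
        return [c.get_remote('private-address')
                for c in self.conversations()]

# reactive side
@when('cluster.joined')
def update_peers(cluster):
    addrs = sorted(cluster.peer_addrs())  # sort before data_changed
    if data_changed('cluster.addrs', addrs):
        hookenv.status_set('maintenance', 'reconfiguring for new peer set')
        # ...rewrite the service config with addrs and restart here...
```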
magicaltrout | yeah thanks a lot cory_fu | 21:18 |
magicaltrout | looks spot on | 21:18 |
cory_fu | magicaltrout: Sorting of lists is important with data_changed, FYI | 21:18 |
magicaltrout | hmm k | 21:18 |
cory_fu | Glad I could help. I'm about to head out, tho, so further questions will have to be directed at kwmonroe ;) | 21:18 |
magicaltrout | i can sort ip addressed like usual ? | 21:18 |
magicaltrout | addresses | 21:19 |
cory_fu | Sure | 21:19 |
cory_fu | If you're curious how data_changed works, it's pretty simple: https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/helpers.py#L168 | 21:19 |
magicaltrout | cool, kwmonroe is about as useful as a chocolate fireguard.... | 21:19 |
cory_fu | lol | 21:19 |
kwmonroe | mmmm, chocolate | 21:19 |
magicaltrout | hehe | 21:19 |
magicaltrout | alright, i should be good, i've used that pattern somewhere else so i should be able to figure it out | 21:20 |
magicaltrout | thanks cory_fu | 21:20 |
magicaltrout | finally get this stuff pushed to the charm store | 21:20 |
magicaltrout | before my country goes bankrupt and our internet turns into something akin to north korea | 21:21 |
lazyPower | #brexitproblems | 21:23 |
magicaltrout | hehe | 21:23 |
magicaltrout | i'm moving to scotland | 21:24 |
lazyPower | I think i'm going to aim for Nova Scotia, and wherever I land in between will be fine with me. | 21:24 |
magicaltrout | or sealand.... lazyPower i could take over sealand and you could be my only minion whilst i'm king of the fort | 21:25 |
lazyPower | i mean, sure, but i'm a terrible minion | 21:25 |
magicaltrout | hehe | 21:25 |
lazyPower | i'm a minion with megalomania | 21:25 |
magicaltrout | i do like Nova Scotia | 21:25 |
magicaltrout | i'm reigniting the discussion of moving with the mrs whilst she's in a state of wild depression over whats happened ;) | 21:25 |
magicaltrout | Canada is great plus their PM is cool and knows his shit | 21:26 |
lazyPower | It certainly is putting uncertainty in my moving plans as well | 21:26 |
magicaltrout | that said, I did like the Obama / Fallon slow jam the other day | 21:26 |
lazyPower | I was going to head over to the UK and AirBNB it for a month or two closer to fall... | 21:26 |
magicaltrout | that would be so bad in the UK | 21:26 |
magicaltrout | lazyPower: in all honesty unless the bottom falls out of the economy nothing will happen for ages | 21:27 |
magicaltrout | so i wouldn't worry about changing plans, just worry about the exchange rate ;) | 21:27 |
magicaltrout | although technically today I got a 10% pay rise on NASA stuff without doing anything :) | 21:28 |
lazyPower | nice ^5 | 21:29 |
magicaltrout | and there is another devops day in Ghent I'm sure jcastro would like you to attend in October ;) | 21:29 |
lazyPower | magicaltrout - if you dug arctic life, i'm jammin out a playlist i made in the seattle airport lounge now. http://24.3.228.120:8000/listen.m3u | 21:31 |
magicaltrout | hold on , i shall stab some buttons and have a listen | 21:34 |
lazyPower | buttonstabbing++ | 21:34 |
magicaltrout | you're either giving me an incorrect ip or you're behind a firewall | 21:36 |
magicaltrout | i lie | 21:36 |
magicaltrout | i can't type | 21:36 |
lazyPower | nah my girl is connected on that link. should be g2g if you're plugging it into a shoutcast compliant player | 21:36 |
lazyPower | i'll have to make it a point to setup the pirate radio again so its got an html5 player | 21:37 |
magicaltrout | cool i'm in | 21:38 |
magicaltrout | such a pain in the balls that Sonos doesn't let you add new radio stations on a phone/tablet else I'd have you streaming over my hifi ;) | 21:40 |
lazyPower | I hear ya. I just tore down all my audio gear in the house. I was kind of weepy when all that started coming down, because it got really real at that point. | 21:52 |
magicaltrout | awww | 21:53 |
magicaltrout | kwmonroe: i might have been a bit harsh earlier... i need a tip ;) | 21:53 |
magicaltrout | cory pointed me to that peer relation side of the interface, and my understanding of relations is master <-> slave type stuff, but I'm after all the IP addresses of the units in the same service not related to each other via add-relation | 21:58 |
magicaltrout | just units that coexist via add-unit | 21:58 |
lazyPower | magicaltrout - so does the charm have a peer relationship? if so, you can scrape it from that. Thats the only way you'll get all service IP's of deployed charms units. I do something very similar in the etcd 'cluster' interface. | 22:00 |
magicaltrout | well thats what cory pointed me to, but as I don't relate anything I'm not sure how that would work | 22:00 |
magicaltrout | for example, juju deploy dcos-master | 22:00 |
lazyPower | peering is implicit when you add-unit. they automagically get that relationship added. | 22:00 |
magicaltrout | juju add-unit dcos-master | 22:00 |
magicaltrout | ah | 22:01 |
magicaltrout | hmm | 22:01 |
lazyPower | also running support between cuts :P | 22:01 |
magicaltrout | hehe | 22:01 |
* lazyPower flex's | 22:01 | |
magicaltrout | told you kwmonroe was as much use as a chocolate fireguard | 22:01 |
lazyPower | between you and cory_fu he gets that all the time | 22:01 |
magicaltrout | hehe | 22:01 |
magicaltrout | first christmas related email of the year \o/ | 22:03 |
lazyPower | in june? | 22:04 |
lazyPower | lolwut | 22:04 |
magicaltrout | yup | 22:05 |
magicaltrout | it has santa on and everything | 22:05 |
kwmonroe | hello magicaltrout. i trust i've left you waiting long enough. watch the harshies next time. anyway, the zk quorum relation that cory pointed you to should be what you need.. https://github.com/juju-solutions/interface-zookeeper-quorum/blob/master/peers.py#L39 | 22:06 |
magicaltrout | yeah i didn't realise there was an auto peer relationship in the background | 22:07 |
magicaltrout | mystery solved | 22:07 |
kwmonroe | and like lazyPower said, it's implicit, so you'd create a method on that relation that returned a list of all peer ip addrs | 22:07 |
kwmonroe | ah | 22:07 |
kwmonroe | well, i shall stop re-explaining :) | 22:07 |
kwmonroe | also, it's important to note that "auto peering" may not always work.. the charm that you want to peer needs to specify a peer relation in its metadata.yaml. | 22:08 |
kwmonroe | ... which corresponds to an interface with a peers.py, etc, etc, blah, blah. | 22:09 |
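For reference, a metadata.yaml peer stanza along the lines kwmonroe means might look like this; the charm, relation, and interface names are illustrative and must match what the interface layer's peers.py expects:

```yaml
name: dcos-master
peers:
  cluster:
    interface: dcos-cluster   # matches the peers.py interface layer
```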
magicaltrout | i love the way everyone at canonical always caveats anything i ever want to do with "may not always work" :P | 22:09 |
kwmonroe | why can't you just be happy with mediawiki? you're always throwing new wrenches and acronyms in our shiny. | 22:09 |
magicaltrout | hehe | 22:10 |
magicaltrout | well i'm on a mission to finish off a bunch of the ones i've started | 22:10 |
magicaltrout | saiku will be done as soon as we ship 3.9, DCOS just needs master <-> master support, PDI just needs Hadoop pluggability | 22:10 |
kwmonroe | and, to be clear, when i said it "may not always work", what i should have said is "it will never work unless your charms has a peer relation in the metadata.yaml; it will always work easy peasy if you do it right." | 22:11 |
magicaltrout | time to clean up all the charms i've started | 22:11 |
kwmonroe | nice! | 22:11 |
kwmonroe | lmk if/when you're hooking pdi up to hadoop. we can walk through the hadoop-plugin relation, which will probably just work right out of the box. ;) | 22:12 |
magicaltrout | hehe. Yeah I need to investigate the PDI side of it. Pentaho devs say "oh you just add the libs", personally i'm more pessimistic | 22:12 |
magicaltrout | we shall see | 22:12 |
kwmonroe | cool | 22:12 |
lazyPower | after this list i'm going to drop into some new stuff i picked up from Pooldor (a Belgian artist whom i've become quite infatuated with recently) | 22:13 |
magicaltrout | not.... artist infatuation! | 22:13 |
lazyPower | *Poldoore | 22:13 |
kwmonroe | lazyPower: i've hacked into your m3u. thanks for the tunes this fine friday! | 22:13 |
magicaltrout | hehe | 22:14 |
lazyPower | kwmonroe aww yeee bruddah | 22:14 |
lazyPower | glad you could make it | 22:14 |
magicaltrout | i'd also like to port scispark in the not too distant future kwmonroe to try and entice the JPL guys a bit more | 22:15 |
kwmonroe | sweet magicaltrout! maybe they'll give you another raise ;) | 22:16 |
kwmonroe | it's what 'merica does when we feel bad for other countries. | 22:16 |
magicaltrout | hehe | 22:16 |
magicaltrout | i'll speak to donald | 22:16 |
magicaltrout | he's up in scotland making a tit of himself | 22:16 |
kwmonroe | lol. i expect nothing less. | 22:17 |
magicaltrout | told the scottish that its a great day for the uk when scotland backed staying in the EU massively | 22:17 |
magicaltrout | good work Donald! | 22:17 |
magicaltrout | liking this track lazyPower | 22:27 |
magicaltrout | good work | 22:27 |
lazyPower | All Poldoore my friend. no lazy intervention on this perfection | 22:27 |
magicaltrout | hehe | 22:28 |
lazyPower | by far and away my favorite jam he's done | 22:32 |
lazyPower | this rocks my car on every road trip in 2016 so far :) | 22:32 |
magicaltrout | yeah this is some cool stuff, i'll be looking it up next week | 22:33 |
magicaltrout | thats part of the problem working at home | 22:33 |
lazyPower | Thanks for letting me share :) | 22:33 |
magicaltrout | i have music on all day | 22:33 |
magicaltrout | finding new stuff is always hard | 22:33 |
lazyPower | i really do dig getting to alienate people with my taste in music | 22:33 |
magicaltrout | urgh monday sucks, trip into london and a 9pm SFO meeting | 22:35 |
magicaltrout | *bork* | 22:35 |
magicaltrout | oh well tunes for the train at least! | 22:36 |
lazyPower | http://poldoore.bandcamp.com/ | 22:37 |
lazyPower | ;) | 22:37 |
lazyPower | i should get an affiliate link, get me on the insider track to getting pre-release jams | 22:37 |
magicaltrout | right i'm offski, got some cricket to umpire in the morning... I know you americans don't understand that concept.... | 22:52 |
magicaltrout | thanks for the tunes lazyPower! | 22:52 |
lazyPower | Thanks for tuning in magicaltrout o/ have a good weekend | 22:53 |