lazyPower | arosales iirc we still had that PR to the upstream repo pending | 01:29 |
---|---|---|
=== redir is now known as redir_afk | ||
=== frankban|afk is now known as frankban | ||
=== mmcc_ is now known as mmcc | ||
kjackal | admcleod-: So yes | 09:19 |
kjackal | almost working HBase | 09:19 |
kjackal | HBase is very cool | 09:20 |
admcleod- | kjackal: why is that! | 09:20 |
kjackal | I knew the internal indexing structure that they implement but i've never seen how it is setup | 09:20 |
magicaltrout | kjackal: you'll know this. If I want to use a 3rd party python lib in my reactive charm, how do I package it? | 09:20 |
admcleod- | ooh ooh i know | 09:21 |
magicaltrout | or you | 09:21 |
kjackal | what do you mean by third party? It cannot go to wheelhouse | 09:21 |
admcleod- | but .. oh, apparently i dont know | 09:21 |
admcleod- | never mind! ;) | 09:21 |
admcleod- | kjackal: how is it to actually use? | 09:22 |
magicaltrout | see, now I don't know what you're saying | 09:22 |
kjackal | lol | 09:22 |
admcleod- | magicaltrout: i was going to say, if you can pip install it, then you can just put it in wheelhouse | 09:22 |
magicaltrout | charm -> random python lib -> build -> deploy | 09:22 |
magicaltrout | okay, remember I stupid. How do you "just put it in wheelhouse" | 09:23 |
magicaltrout | clearly when you compile your charm, wheelhouse appears | 09:23 |
kjackal | let me find | 09:23 |
admcleod- | magicaltrout: well, again, i thought you could put it in wheelhouse.txt, e.g. https://github.com/juju-solutions/layer-apache-bigtop-resourcemanager/blob/master/wheelhouse.txt | 09:23 |
kjackal | https://github.com/juju-solutions/layer-apache-spark/blob/master/wheelhouse.txt | 09:23 |
magicaltrout | men of many links | 09:24 |
magicaltrout | thanks | 09:24 |
kjackal | :) | 09:24 |
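For anyone following along: a charm layer's wheelhouse.txt is just a pip requirements-style file; at `charm build` time the listed packages are fetched into the charm's wheelhouse/ directory so they can be installed on the unit without network access. A minimal sketch (package names here are illustrative, not taken from the linked files):

```
# wheelhouse.txt -- one pip requirement per line;
# `charm build` vendors these into the charm's wheelhouse/
requests>=2.0.0
jujubigdata
```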
admcleod- | kjackal: why did you say 'it _cannot_ go to wheelhouse'? | 09:24 |
kjackal | admcleod-: Sorry I was asking | 09:24 |
kjackal | forgot the "?" | 09:24 |
magicaltrout | the vital missing questionmark | 09:24 |
admcleod- | kjackal: ah, i see, so you used all your ?s up in the shortest conversation ever | 09:25 |
magicaltrout | english is a fickle language | 09:25 |
kjackal | yeap! | 09:25 |
admcleod- | yes. dont i know it. | 09:26 |
magicaltrout | indeed semi scot | 09:26 |
admcleod- | i got nothin | 09:27 |
magicaltrout | i had a blue cheese chocolate in my speakers gift from apachecon | 09:27 |
magicaltrout | it was awful | 09:27 |
admcleod- | lol | 09:27 |
magicaltrout | not speaking in vancouver again | 09:28 |
admcleod- | maybe they just assumed, from your sophisticated Queen's English, that you would appreciate the finer things, and then gave you something else instead | 09:28 |
admcleod- | anyway i swear, it works if its done right. | 09:29 |
magicaltrout | lies | 09:30 |
admcleod- | i would never | 09:31 |
magicaltrout | kjackal: when you have working hbase | 09:31 |
magicaltrout | https://phoenix.apache.org/Phoenix-in-15-minutes-or-less.html build a phoenix subordinate | 09:32 |
magicaltrout | oh there is one in bigdata-dev | 09:32 |
kjackal | magicaltrout: That is cool! SQL! | 09:32 |
magicaltrout | what do i know | 09:32 |
magicaltrout | okay when you've finished hbase | 09:33 |
magicaltrout | make sure phoenix in bigdata-dev works ;) | 09:33 |
magicaltrout | ah that one is pretty old | 09:34 |
magicaltrout | okay we'll revamp that | 09:34 |
admcleod- | magicaltrout: i bet you a pint you could do it quicker | 09:34 |
magicaltrout | admcleod-: doubt it, I've got to see SaMnCo tomorrow, and fly to san diego on Friday followed by a week of hacking broken stuff | 09:34 |
magicaltrout | actually... you're probably right, but i should really do stuff that earns me money, not endlessly hack charms ;) | 09:35 |
admcleod- | yeah but a pint | 09:35 |
magicaltrout | i know, its tempting! ;) | 09:35 |
kjackal | magicaltrout: Phoenix seems to be in bigtop, but it does not have puppet scripts. That's cool! | 09:35 |
kjackal | magicaltrout: cool in the sense that we can contribute the puppet scripts as well! | 09:36 |
magicaltrout | kjackal: you basically just need to chuck the right libs in the libs dir and it sorts it out | 09:36 |
admcleod- | it shouldnt take us long but we'd have to backlog it for a bit until we get some other higher priority bits done | 09:36 |
magicaltrout | so it should be pretty straight forward to bootstrap in bigtop | 09:36 |
admcleod- | kjackal: its probably in a feature branch | 09:36 |
magicaltrout | https://drill.apache.org/docs/hbase-storage-plugin/ I also need to figure out how to crowbar juju hbase into drill | 09:37 |
=== Anita is now known as Guest77367 | ||
kjackal | Oh come on magicaltrout this is a piece of cake for you now! | 09:38 |
magicaltrout | hehe its certainly getting easier, doesn't make me a living though does it? :P | 09:38 |
magicaltrout | not yet at least. But this is why I'm working on the Drill/SQL side of things because it means we do get that all in one end to end solution, that makes it easy to demo to clients | 09:39 |
admcleod- | eating blue cheese chocolate doesnt make ANYONE a living, but we still do it. | 09:40 |
magicaltrout | hehe, indeed | 09:40 |
admcleod- | magicaltrout: so this talks about inserting this storage plugin info via the web ui - i guess from a charm perspective its an insert statement (or maybe even just appending to a config file) | 09:41 |
magicaltrout | admcleod-: yeah, it has a rest API, so for my mongo test I just prod it in python with some connection details | 09:41 |
admcleod- | magicaltrout: ill make a note to discuss both of these things @ our next sync | 09:41 |
admcleod- | magicaltrout: in a few hours | 09:41 |
magicaltrout | you could write a file, but you'd need to cycle your drill cluster, so I use the rest api instead which just writes to ZK and updates the cluster | 09:42 |
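A rough sketch of the "prod it in python" approach, using Drill's REST endpoint for storage plugins; the drillbit address and connection details below are hypothetical, and the endpoint path is from the Drill REST docs:

```python
import json
import requests  # assumes requests is available, e.g. via wheelhouse

DRILL_URL = "http://localhost:8047"  # hypothetical drillbit address

def register_storage_plugin(name, config):
    """Create/update a Drill storage plugin via the REST API.

    Drill persists the plugin config to ZooKeeper, so the whole
    cluster picks it up without a restart.
    """
    resp = requests.post(
        "{}/storage/{}.json".format(DRILL_URL, name),
        data=json.dumps({"name": name, "config": config}),
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()

# e.g. a MongoDB source, as mentioned above (connection details made up)
register_storage_plugin("mongo", {
    "type": "mongo",
    "connection": "mongodb://10.0.0.5:27017/",
    "enabled": True,
})
```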
magicaltrout | in reality, I'm more interested in HDFS and Filesystem based connectivity initially, so I'll be working on those later | 09:43 |
magicaltrout | so I can dump GB's of Parquet files into HDFS and have distributed Drill query them | 09:44 |
admcleod- | right | 09:44 |
magicaltrout | if you install Drill onto your slave nodes, it will keep the data for that node on that node, whilst doing some distributed SQL to keep the data local, which gives a good performance boost | 09:44 |
magicaltrout | data locality and all that | 09:44 |
admcleod- | who'da thought | 09:45 |
magicaltrout | well its logical, but its hard for SQL engines to do | 09:45 |
magicaltrout | distributing a query and running it on different nodes then collating the result | 09:45 |
admcleod- | yep | 09:45 |
magicaltrout | whats the deal with juju storage | 09:49 |
magicaltrout | is it worth my while adding it to the charm? | 09:49 |
magicaltrout | like is it enough of a *thing*? | 09:49 |
admcleod- | babbageclunk: so, tell us, whats the deal with juju storage, is it worth _someones_ while adding it to their charms? like.. is it enough of a __**thing**__? | 09:50 |
admcleod- | magicaltrout: ;} | 09:50 |
magicaltrout | pfft | 09:51 |
magicaltrout | so drill also supports filesystem based access of CSV's/JSON stuff, so I was thinking of adding Storage support | 09:52 |
admcleod- | magicaltrout: well, i know very little about it so i was asking babbageclunk | 09:54 |
babbageclunk | admcleod-, magicaltrout: I'm pretty new, so I don't know *that* much about it either. What kinds of things are you wanting to know? | 09:55 |
magicaltrout | not much, i'm just writing a charm for apache drill. One thing it supports is SQL analysis over local CSV/JSON/Parquet files. Now, this is entirely optional and probably not used *that* much, but if someone did use it, I was thinking about juju storage support for it | 09:56 |
magicaltrout | but was wondering a) if it was worth it (this is just idle gossip, I realise) b) if its enough of a *thing* that it wont go missing in a few releases time | 09:57 |
babbageclunk | magicaltrout: I'm fairly sure it's not going anywhere - it's really a requirement for more careful deployments. | 09:58 |
admcleod- | magicaltrout: at a lower (infrastructure) level, how would you envisage using it/ | 09:59 |
admcleod- | ? | 09:59 |
=== admcleod- is now known as admcleod | ||
magicaltrout | admcleod: something like, person deploys charm in EC2 XLarge or something, but then tacks on a couple of TB of SSD/High performance disks or something | 09:59 |
magicaltrout | for example, Drill supports querying of multiple CSVs as a single table. So your app writes to a high performance filestore in various directories and files. At which point Drill has a plugin configured to use that folder. So you can then run | 10:01 |
magicaltrout | select * from my_files | 10:01 |
magicaltrout | and it will traverse your entire file storage pool | 10:01 |
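Concretely, with a dfs-type storage plugin whose workspace points at that directory, a single query can address the whole tree (the plugin and path names here are made up):

```sql
-- queries every file under the directory as one logical table
SELECT * FROM dfs.`/srv/data/my_files`;
```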
magicaltrout | not everyone in this world wants to run a HDFS cluster ;) | 10:02 |
babbageclunk | magicaltrout: I might be wrong, but I think the juju storage features are less about machines that get storage added later, and more for restricting where a charm gets deployed to. | 10:08 |
magicaltrout | fair enough | 10:09 |
magicaltrout | i'll skip it then | 10:09 |
admcleod | magicaltrout: are you saying, kind of 'preferential' storage utilisation, i.e., you add 10TB SSD, and then it 'prefers' to use that because its tagged as high-iops, for example | 10:09 |
magicaltrout | yup | 10:09 |
admcleod | magicaltrout: or, just that that new disk would be part of a larger... | 10:09 |
admcleod | ..storage array | 10:09 |
magicaltrout | well it makes no odds to me, I was just making sure I wasn't missing a trick there :) | 10:13 |
babbageclunk | magicaltrout: Or if you're deploying to AWS or OpenStack then the charm can grab storage from EBS or Cinder when it's being deployed. | 10:14 |
admcleod | magicaltrout: actually i think it sounds like a good idea for io intensive stuff | 10:15 |
magicaltrout | because its hook based | 10:16 |
magicaltrout | I can, it appears, add the pool to Drill via a hook just like I do with any other data source | 10:17 |
admcleod | magicaltrout: yeah so you could ideally have your charm deploy and go 'ok ill use whatever disk is available but prefer something tagged SSD' for example - then on hook 'oh ssd has been added' *charm stuff* | 10:18 |
magicaltrout | yup | 10:19 |
magicaltrout | if someone adds storage to a drill node, you'd assume its for data processing, so create a datasource storage entry within drill to use it | 10:19 |
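A sketch of how that could hang together in the charm, assuming a filesystem store declared in metadata.yaml and a reactive handler on the storage hook; all names here are illustrative:

```yaml
# metadata.yaml (excerpt)
storage:
  data:
    type: filesystem
    location: /srv/drill-data
    multiple:
      range: 0-
```

```python
from charmhelpers.core import hookenv
from charms.reactive import hook

@hook('data-storage-attached')
def storage_attached():
    # where juju mounted the new store
    mount = hookenv.storage_get('location')
    # hypothetical helper from earlier: register the mount
    # as a file-based datasource via Drill's REST API
    register_storage_plugin('local-data', {
        'type': 'file',
        'connection': 'file://{}'.format(mount),
        'enabled': True,
    })
```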
admcleod | magicaltrout: sounds like a very nice idea. (with lots of further potential too) | 10:22 |
babbageclunk | magicaltrout: I think I was wrong about that - I'm only familiar with the constraint/deploying parts, but adding extra storage to the unit later sounds really handy. | 10:23 |
magicaltrout | the other case as well, is, the amount of times I've deployed a MySQL staging DB with what I thought was enough storage....... | 10:24 |
magicaltrout | and 6 months later when my client has gone mental and used the DB for stuff they shouldn't | 10:25 |
magicaltrout | you blow your disk space and end up with random unmanaged mount points all over the place, tacking on extra storage | 10:25 |
magicaltrout | which you know full well if the server dies you're screwed because you couldn't replicate it in a month of sundays :) | 10:25 |
admcleod | magicaltrout: yep for sure, seems like there may be a few ways to achieve that too | 10:30 |
magicaltrout | okay back to more mundane matters. I want to implement an HDFS relation admcleod | 10:32 |
magicaltrout | is there an interface that will provide me with the namenode ip and port? | 10:32 |
magicaltrout | https://github.com/juju-solutions/interface-namenode-cluster ? | 10:32 |
admcleod | magicaltrout: thats the peer relation.. you want... | 10:33 |
admcleod | magicaltrout: https://github.com/juju-solutions/interface-dfs | 10:34 |
magicaltrout | cool thanks | 10:34 |
admcleod | magicaltrout: or actually https://github.com/juju-solutions/interface-dfs-slave | 10:34 |
admcleod | magicaltrout: or.. use the plugin | 10:35 |
admcleod | magicaltrout: cos if you use the plugin itll install hadoop for you, configure hadoop correctly.. and the interface-plugin will give you the namenodes and port | 10:37 |
magicaltrout | yeah so I see | 10:37 |
magicaltrout | I don't need hadoop installed but lets run with that | 10:37 |
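For reference, consuming the dfs interface from the reactive side looks roughly like this; the flag and accessor names are assumptions based on the bigdata interfaces, so check the interface README before copying:

```python
from charms.reactive import when

@when('namenode.ready')  # assumed flag set by interface-dfs
def configure_hdfs_source(namenode):
    hosts = namenode.namenodes()   # assumed accessor: namenode host list
    port = namenode.port()         # assumed accessor: rpc port
    hdfs_url = 'hdfs://{}:{}'.format(hosts[0], port)
    # hand hdfs_url to drill, e.g. as a dfs storage plugin connection
```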
magicaltrout | where's charms.templating gone in juju2? | 11:40 |
magicaltrout | gaa | 12:16 |
magicaltrout | wtf | 12:16 |
magicaltrout | admcleod: https://github.com/juju-solutions/interface-hadoop-plugin | 12:16 |
magicaltrout | in my charm to interface with the plugin I want something like: | 12:17 |
magicaltrout | https://gist.github.com/buggtb/ff2d2e26b45657bfcc7566a393bb9b7d ? | 12:17 |
=== scuttle|afk is now known as scuttlemonkey | ||
magicaltrout | i think you sold me a lie admcleod | 12:36 |
magicaltrout | because I don't provide hadoop stuff, but similarly I don't connect to hadoop client, pig etc | 12:37 |
lazyPower | kjackal congrats on the +2 on your ~charmer app | 13:36 |
kjackal | lazyPower: Thank you! I can now take some of the promulgation tasks off cory and kevin | 13:38 |
lazyPower | if anyone missed it and has worked with kjackal and would like to endorse his application - https://lists.ubuntu.com/archives/juju/2016-May/007273.html :) the more the merrier i always say! | 13:38 |
kjackal | lazyPower: Thank you for your vote of confidence | 13:38 |
lazyPower | its firmly seated though :D you're comin with us to the front lines | 13:38 |
=== natefinch-afk is now known as natefinch | ||
admcleod | magicaltrout: sorry, i got caught up in diagnosing some io problems + lunch, did you get anywhere? | 14:42 |
magicaltrout | yeah no worries admcleod | 14:42 |
magicaltrout | I ended up using the dfs interface | 14:43 |
magicaltrout | got it mostly strung together I think | 14:43 |
admcleod | magicaltrout: right... just as a note, in this case drill is the client, attaching to hadoop, so if it did need hadoop installed it would be appropriate. | 14:58 |
magicaltrout | yeah, I have used the plugin layer elsewhere | 14:58 |
admcleod | cool | 14:58 |
magicaltrout | but in this instance its just prodding a hdfs endpoint somehow | 14:59 |
admcleod | yup | 14:59 |
=== redir_afk is now known as redir | ||
magicaltrout | okay well the relation seems to work | 15:18 |
magicaltrout | time to find out if it can actually query any data | 15:18 |
LiftedKilt | has anyone deployed cephfs with juju? | 15:51 |
LiftedKilt | I don't see a metadata server charm - unless I'm just completely blind | 15:51 |
admcleod | cory_fu: are there any other bigtop layers which are including apache-bigtop-plugin as a layer? | 15:54 |
magicaltrout | https://ibin.co/2jB6YH1bwE6G.png | 16:00 |
magicaltrout | woop | 16:01 |
magicaltrout | very boring sample data out of hdfs | 16:01 |
admcleod | magicaltrout: nice one. | 16:01 |
magicaltrout | aye, needs some tidying up and yet more config options, but not bad | 16:02 |
cory_fu | admcleod: Yeah, should be. One second | 16:08 |
cory_fu | admcleod: Spark is using it via hadoop-client: https://github.com/juju-solutions/bigtop/blob/spark/bigtop-packages/src/charm/spark/layer-spark/layer.yaml | 16:09 |
cory_fu | admcleod: Note that you should not be adding 'layer:hadoop-plugin' to layers.yaml; instead you should add 'layer:hadoop-client'. Or, if you want to do it manually, 'interface:hadoop-plugin' and then add the relation to metadata.yaml | 16:11 |
admcleod | cory_fu: ah, i see - thanks :) | 16:13 |
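To make cory_fu's two options concrete, the layer.yaml side would look something like this (a sketch, not copied from the spark layer):

```yaml
# option 1: include the client layer, which wires up the plugin for you
includes:
  - 'layer:basic'
  - 'layer:hadoop-client'

# option 2: include 'interface:hadoop-plugin' here instead, and add the
# matching relation to metadata.yaml:
#   requires:
#     hadoop:
#       interface: hadoop-plugin
```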
kwmonroe | petevg: really nice job on the bigtop layer readme! https://github.com/juju-solutions/layer-apache-bigtop-base/pull/11 is merged. | 16:26 |
=== frankban is now known as frankban|afk | ||
LiftedKilt | rick_h_ marcoceppi: have you seen anyone working on a metadata server charm for cephfs? | 16:50 |
marcoceppi | LiftedKilt: not sure, but icey or cholcombe would be the ones who knew if someone was | 16:50 |
LiftedKilt | marcoceppi: thanks for the heads up - wasn't sure who to direct the question to | 16:51 |
cholcombe | LiftedKilt, yes that's on the roadmap | 16:57 |
cholcombe | LiftedKilt, https://github.com/openstack-charmers/charm-specs/pull/2 | 16:58 |
cholcombe | LiftedKilt, do you have a few cycles to help or want to be on the review once it's posted? | 17:00 |
LiftedKilt | cholcombe: I'm always happy to help | 17:04 |
LiftedKilt | cholcombe: so the current plan is to release the cephfs charm concurrent with the newton cycle? | 17:05 |
lazyPower | magicaltrout - you still kicking around over there? | 17:06 |
cholcombe | LiftedKilt, Hopefully much sooner | 17:06 |
LiftedKilt | cholcombe: awesome - I need to start deploying ceph immediately, but I can wait on cephfs for a little bit during the transition | 17:10 |
LiftedKilt | have a 250 node lizardfs cluster that had some...ahem...data loss last week | 17:10 |
cholcombe | LiftedKilt, ok. We just started a review on the ceph base layer. https://review.openstack.org/#/c/323574/ Once that is settled we can leverage that for the ceph-fs layer which should be very small | 17:11 |
cholcombe | LiftedKilt, lizardfs sounds familiar. I seem to remember that being a fork of something | 17:12 |
LiftedKilt | it's a fork of moosefs | 17:12 |
cholcombe | right! | 17:12 |
cholcombe | how did it sustain data loss? | 17:13 |
cholcombe | i'm just curious. i know almost nothing about it | 17:13 |
petevg | kwmonroe: Cool. Thx for the review/merge :-) | 17:13 |
LiftedKilt | the primary metadata server went down because of some unrelated network problems. It failed over to the backup metadata server, which it turns out had pretty stale metadata | 17:14 |
LiftedKilt | the backup then said "I have 250,000 more chunks than I should - better start deleting them!" | 17:14 |
LiftedKilt | oops haha | 17:15 |
cholcombe | wow | 17:20 |
cholcombe | i really despise metadata servers in distributed filesystems | 17:20 |
* cholcombe remembers much pain with hadoop | 17:20 | |
cholcombe | LiftedKilt, so the plan is to start breaking off chunks of lizard and converting them to ceph? | 17:21 |
cholcombe | LiftedKilt, if you need NFS you're prob better served using glusterfs | 17:22 |
aroundabout | What constraint on juju bootstrap can be set to select a specific AWS EC2 region? | 17:51 |
aroundabout | I'm able to select instance-type=t2.micro | 17:51 |
aroundabout | But region= throws an error unknown constraint "region" | 17:51 |
aroundabout | Using juju version 2.0-beta7-xenial-amd64 | 17:52 |
rick_h_ | aroundabout: what version of Juju? | 17:59 |
aroundabout | Using juju version 2.0-beta7-xenial-amd64 | 17:59 |
rick_h_ | aroundabout: http://askubuntu.com/questions/172643/can-juju-be-used-in-other-region-than-the-us shows for 1.25 | 17:59 |
rick_h_ | aroundabout: and for juju 2.0 you should be able to bootstrap with --config region=eu-west-1 | 18:00 |
rick_h_ | to follow the examples from there | 18:00 |
aroundabout | Thanks I'll run it. I was using --bootstrap-constraints | 18:00 |
rick_h_ | aroundabout: ah, yes that's the old way. It's simpler with the --config key=value syntax | 18:00 |
aroundabout | The command is not working. | 18:06 |
aroundabout | It still punts the juju agent to us-east-1 | 18:06 |
aroundabout | `sudo juju bootstrap --config region=eu-west-1 aws-devenv-controller aws` | 18:07 |
rick_h_ | aroundabout: sorry, looking | 18:07 |
rick_h_ | aroundabout: oh so sorry, I lead you wrong there | 18:07 |
aroundabout | I may have just noticed something vital | 18:12 |
aroundabout | # juju bootstrap [options] <controller_name> <cloud_name>[/region] | 18:12 |
aroundabout | That usage has an optional [/region] part I've never used. | 18:13 |
aroundabout | I also think that's new since 1.0 | 18:13 |
aroundabout | Yup, worked | 18:14 |
aroundabout | I'll update that SO question. | 18:15 |
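So the fix: in juju 2.0 the region is neither a constraint nor a config key but part of the cloud argument. Per the usage line quoted above, the command that worked was presumably along these lines:

```
juju bootstrap aws-devenv-controller aws/eu-west-1
```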
magicaltrout | you rang lazyPower | 18:21 |
lazyPower | magicaltrout - http://54.175.167.243/app/kibana#/dashboard/Packetbeat-Dashboard | 18:22 |
lazyPower | you had made mention of packetbeat? here's where i'm at with the layer | 18:22 |
lazyPower | might want to expand the data-range to ~ 1 hr or greater | 18:22 |
* magicaltrout stabs some buttons | 18:25 | |
magicaltrout | looking sweet lazyPower | 18:26 |
magicaltrout | thats very useful | 18:26 |
bdx | admcleod: hey, just diving into your puppet refactor/merge now | 18:34 |
bdx | admcleod: I've a few things to run by you when you have a moment | 18:35 |
lazyPower | o/ bdx so i hear you're interested in packetbeat. https://github.com/juju-solutions/layer-packetbeat is current if you'd like to get in early on the comment threads/issue tracker :) | 18:45 |
bdx | you know it | 18:45 |
lazyPower | no readme, its for the intrepid explorer | 18:46 |
lazyPower | #noregrets | 18:46 |
bdx | ha, perfect | 18:46 |
mbruzek | arosales: ping. I addressed the issues you created with the Kubernetes readme in https://github.com/juju-solutions/bundle-observable-kubernetes/pull/15 | 18:50 |
mbruzek | arosales: please review when you get a chance, so we can update and get the bundle promulgated | 18:51 |
mbruzek | Assuming it passes your review | 18:51 |
arosales | mbruzek: hello, thanks for working on that. I'll take a look this afternoon | 18:51 |
lazyPower | mbruzek - i took the liberty of leaving some prelim review nits. I wont merge until arosales has taken a look though | 18:55 |
bdx | lazyPower: what to do about geo_points? | 19:02 |
lazyPower | bdx - https://www.digitalocean.com/community/tutorials/how-to-map-user-location-with-geoip-and-elk-elasticsearch-logstash-and-kibana | 19:03 |
bdx | yeah, I know ... | 19:03 |
lazyPower | bdx - if you have time, plugging that in would be choice | 19:03 |
lazyPower | even if you rough cut it, i can throw finishing touches on it | 19:03 |
bdx | lazyPower: what I'm thinking is a modification to the logstash http-relation-joined? | 19:03 |
bdx | such that it would write out 11-nginx-filter.conf | 19:04 |
bdx | if config['log_geo_point']: | 19:05 |
bdx | render_nginx_geo_point_filter_conf | 19:05 |
bdx | mmmm | 19:07 |
lazyPower | currently logstash isn't shipping with any configuration for transforms. I see this being a config-addition to layer-logstash. packetbeat already supports shipping through logstash by virtue of beats-base | 19:07 |
lazyPower | so i dont think you can get away with the nginx-filter.conf | 19:08 |
bdx | lazyPower: but logstash needs to extract the client ip, and convert it using the geoip database | 19:08 |
lazyPower | this will need to be leveraging whatever the beats protocol mandates for geoip data. none of the graphs in the default setup uses anything other than the beats shippers, so we should look closer at the docs. I think that DO article i got from jcastro is a red herring | 19:08 |
bdx | looks like that is done with the nginx-filter.conf, no? | 19:09 |
lazyPower | bdx - https://github.com/elastic/beats/issues/60 | 19:09 |
lazyPower | this looks applicable to our topic. | 19:10 |
lazyPower | they're using the geoip-database package, and just configured packetbeat with the path to the db | 19:10 |
lazyPower | i think the shipper does this all already, we just need to tell it to do so, and make sure we've got the package installed | 19:10 |
bdx | yea, same as the link you linked me | 19:10 |
bdx | lazyPower: what I'm getting at is the other part of the implementation ... how to tell logstash to provision that file | 19:11 |
lazyPower | bdx - you dont need to tell logstash anything :) | 19:11 |
lazyPower | if you configure it on the beat shipper, it ships w/ all the data needed. the dash is already setup to read and translate that lat/long data | 19:12 |
bdx | yeah, but logstash has to get that data | 19:12 |
bdx | from what I can tell, that is done with nginx-filter.conf -> http://paste.ubuntu.com/16902035/ | 19:13 |
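For reference, the kind of logstash filter being discussed, one that looks up a client IP and attaches location fields, is typically along these lines (field names illustrative):

```
filter {
  geoip {
    source => "clientip"   # event field holding the client address
    target => "geoip"      # where the looked-up location fields land
  }
}
```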
lazyPower | bdx - hrm, i think there's a disconnect here in what i'm thinking and you're thinking. | 19:13 |
lazyPower | That's only if you need to transform the data coming from the beat... | 19:14 |
lazyPower | if the beat is already encoding that geo-ip data, why do you need to grok it? | 19:14 |
bdx | lazyPower: I see that, I'm unclear on where the geoip data is being generated, how does it exists? | 19:16 |
lazyPower | its a lookup between the ip and whatever the geo database has for where that ip originates | 19:16 |
lazyPower | ipv4 is a very final database at this point, the last ipv4 address having been sold a few months ago | 19:16 |
lazyPower | not sure how this will stand up to ipv6 geo-ip data, but for ipv4 this database they're listing for curl has all that for us. its basically a lookup table for the golang shipper to compare client_ip with that db and pull out the lat/long of its origin | 19:18 |
lazyPower | curl http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz -o /usr/share/GeoIP/GeoIP.dat.gz <- this thing | 19:18 |
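Per that beats issue, the shipper-side change is just pointing packetbeat at the unpacked database; in the beats 1.x config it looks something like this (hedged, verify the exact key against the packetbeat docs for your version):

```yaml
# packetbeat.yml (excerpt)
shipper:
  geoip:
    paths:
      - "/usr/share/GeoIP/GeoLiteCity.dat"
```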
bdx | I see .... if nginx is logging to nginx-access.log, and you have nginx-access.log shipping to elastic/kibana via filebeat | 19:18 |
lazyPower | i would hook up packetbeat for that, its reading the http headers | 19:18 |
lazyPower | 1 sec let me get you a link of what the packetbeat monitor is actually sending | 19:19 |
bdx | i see http://54.175.167.243/app/kibana#/settings/indices/packetbeat-*?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-15m,mode:quick,to:now))&_a=(tab:indexedFields) | 19:20 |
lazyPower | http://54.175.167.243/app/kibana#/discover | 19:21 |
lazyPower | search for "mediawiki" | 19:21 |
bdx | lazyPower: hmmm, no results show | 19:23 |
jrwren | you should be aware that the maxmind geoip db is not very accurate. If you need accurate geoip databases, they are out there, often for purchase. neustar has a pretty good one. | 19:23 |
lazyPower | jrwren - that was my initial response to the problem domain. Free by default is fine, i think what we'll do is resource it, and provide the db as an interchangeable thing | 19:24 |
magicaltrout | Juju GUI should support charm configuration pagination.... that is all. | 19:24 |
lazyPower | you start with free, and can provide your own if you've got a fancy geoip db | 19:24 |
jrwren | sounds great. | 19:24 |
lazyPower | unless we're in good with a geoip provider and i dont know this? :D | 19:25 |
jrwren | I've seen a lot of folks expect a bit too much from the free maxmind data and be disappointed when things are incorrect. | 19:25 |
lazyPower | yeah, i wonder what those vendors will do now that the database is basically done and the ipv6 rollout has to happen | 19:25 |
lazyPower | bdx - hang on, i'm modifying the packetbeat install by hand on the unit | 19:26 |
lazyPower | we'll see if this works and then i'll land some code if it does | 19:26 |
jrwren | it's far from done. a really good database tracks ip moves down to the customer DHCP lease. e.g. I get a new IP from my ISP, and the geoip database vendor detects this and updates that data. | 19:26 |
lazyPower | yeah? | 19:27 |
jrwren | yup | 19:27 |
lazyPower | i didnt know you were so tapped into the wire | 19:27 |
lazyPower | hook me up jrwren | 19:27 |
jrwren | previous job. | 19:27 |
jrwren | i can send you 2+yr old data ;] | 19:27 |
lazyPower | ;_; | 19:27 |
jrwren | see the pretty animated red dots on the map: http://atlas.arbor.net ? ;] | 19:27 |
jrwren | also, keep in mind: http://fusion.net/story/287592/internet-mapping-glitch-kansas-farm/ | 19:28 |
arosales | mbruzek: https://github.com/juju-solutions/bundle-observable-kubernetes/pull/15 lgtm | 19:54 |
arosales | mbruzek: before promulgating the bundle I think we just need to address issues 9 and 10 | 19:55 |
mbruzek | arosales: thanks | 19:55 |
arosales | mbruzek: cory_fu has an update to add a config option to kibana to load the dashboard. So you would need to make a charm update in the bundle as well as set the config option. We will need to add instructions in the bundle readme on how to access the kibana dashboard | 19:57 |
arosales | mbruzek: also suggest to expose the services, per issue 10, which is a bundle.yaml update | 19:57 |
petevg | cory_fu, kwmonroe: do we have a shortcut in the charm tools for getting the ip address of the unit that you're running a command from? | 19:57 |
arosales | mbruzek: lazyPower: most of my comments are UX polish, thanks for your work on those bundles. | 19:58 |
mbruzek | arosales: the bundle already exposes those charms. | 19:58 |
mbruzek | I am adding words to unexpose them now | 19:58 |
lazyPower | arosales in that vein, i saw the repository had non-obvious naming. i've renamed it and its here: https://github.com/juju-solutions/bundle-swarm-core | 19:58 |
arosales | mbruzek: ah excellent, was that a recent update? | 19:58 |
arosales | lazyPower: it was obvious before. I just missed it | 19:59 |
lazyPower | bundle-swarm-consul doesn't tell me its swarm-core in the store :/ | 19:59 |
lazyPower | i shouldn't be allowed to name things | 19:59 |
kwmonroe | petevg: charmhelpers.core.hookenv.unit_get('private-address') | 19:59 |
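i.e., a minimal use from inside a hook context:

```python
from charmhelpers.core import hookenv

# the unit's own address; 'public-address' also works
addr = hookenv.unit_get('private-address')
```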
mbruzek | arosales: I don't know, there were some merges to the bundle that were queued up, but I don't think any of them were specifically for exposing the charms. | 19:59 |
arosales | well I was looking for observable-swarm | 19:59 |
petevg | kwmonroe: thank you. | 20:00 |
arosales | lazyPower: which I found. I was just mult-tasking and logged issues under the wrong repo | 20:00 |
arosales | lazyPower: I think https://github.com/juju-solutions/bundle-observable-swarm/issues is the correct location for the issues against the observable-swarm bundle | 20:01 |
lazyPower | ah | 20:01 |
lazyPower | well that one was obvious, clearly i didnt name it | 20:01 |
lazyPower | :) | 20:01 |
arosales | mbruzek: ack, i see those now @ https://github.com/juju-solutions/bundle-observable-kubernetes/blob/master/bundle.yaml | 20:02 |
arosales | mbruzek: thanks, so some instructions on accessing the kibana console and ensuring your are using the latest kibana with cory_fu config option to load the dashboard and that should resolve my issues | 20:03 |
arosales | mbruzek: lazyPower, cory_fu: we should use the same verbiage on how to access the kibana dashboard, as the swarm, k8s, and spark bundles all use beats for logging | 20:04 |
lazyPower | i can make some pretty gifs to do it | 20:04 |
lazyPower | http://imgur.com/tgYFSjM | 20:04 |
arosales | lazyPower: do animated gifs render in the charmstore readmes? | 20:06 |
lazyPower | i'm not sure. There's an open bug against jujucharms.com to support GFMD | 20:07 |
lazyPower | i know the docker icon showed up when i had that embedded in the docker charm, no reason to think thats changed unless they are now explicitly blocking image tags | 20:07 |
arosales | and most admins should have access to a web browser to read the instructions if they are viewing them locally | 20:10 |
arosales | or query the charm store if they are new to kibana and want to see instructions | 20:10 |
arosales | lazyPower: that would be great if the charm store renders it. I guess we could test it in /u/containers | 20:11 |
lazyPower | or /u/arosales :D | 20:11 |
lazyPower | \o/ charm push liberates us to experiment | 20:11 |
arosales | mmmm, no ingestion wait times :-) | 20:12 |
lazyPower | i realize thats an optimistic goal, to have you push the test :) you're a busy dude | 20:12 |
magicaltrout | sat, writing charms when I should be packing | 20:13 |
magicaltrout | what have I forgotten? | 20:13 |
magicaltrout | i'll ask here instead of just looking in my suitcase | 20:13 |
arosales | lazyPower: no worries at all, you guys are busy too :-) | 20:15 |
lazyPower | so long as you have your gutchies you'll do fine magicaltrout | 20:15 |
arosales | lazyPower: I am looking at https://jujucharms.com/u/containers/observable-swarm/bundle for a refresh | 20:15 |
magicaltrout | you know i had to google that lazyPower :P | 20:15 |
arosales | magicaltrout: as long as you have your charms, power adapter, and computer you're set | 20:15 |
magicaltrout | arosales's answer is more accurate | 20:16 |
lazyPower | I dunno man, i could do a lot more with gutchies than i can a power adapter when things are going sideways | 20:16 |
magicaltrout | supposedly my plane has at seat power, so i might get the ultimate amount of work done | 20:17 |
magicaltrout | i love flights to just get backlog stuff cleared | 20:17 |
arosales | lazyPower: really? "gutchies": SW Pennsylvanian slang for underwear. | 20:17 |
arosales | magicaltrout: seat power and wireless. I love it when that happens | 20:17 |
arosales | lazyPower: lolz | 20:17 |
magicaltrout | ah no you see. I'm happy i'm on flights w/o internet | 20:17 |
magicaltrout | because it makes you do the stuff you keep putting off | 20:18 |
lazyPower | i use flights as the opportunity to unplug | 20:18 |
magicaltrout | yup | 20:18 |
lazyPower | and if i can load up some beats and just chill on the flight, thats ok by me | 20:18 |
magicaltrout | i'm not usually on flights w/ power | 20:18 |
arosales | magicaltrout: ah interesting, no network distraction, but power still | 20:18 |
lazyPower | magicaltrout - want 54 minutes of triphop to keep you company on the flight? | 20:18 |
magicaltrout | arosales: it could be a lie, i could get on and find no power and 5 hrs battery life | 20:18 |
mbruzek | arosales: https://github.com/juju-solutions/bundle-observable-kubernetes/pull/16 <-- Please review | 20:19 |
magicaltrout | hook me up lazyPower | 20:19 |
lazyPower | https://slack-files.com/T04G0GYCP-F1D9BUE57-0209e54ffd | 20:19 |
arosales | magicaltrout: note lazyPower's beats are addictive | 20:19 |
magicaltrout | hehe | 20:19 |
lazyPower | you flatter me :) | 20:20 |
arosales | lazyPower: where is your gif readme pushed to? I was looking at https://jujucharms.com/u/containers/observable-swarm/bundle | 20:20 |
lazyPower | arosales - we're testing it out now. but all signs point to awesome when the next k8s readme goes up | 20:21 |
lazyPower | sorry, observable-k8s-bundle | 20:21 |
arosales | lazyPower: probably also need to apply the updated kibana dashboard config to https://jujucharms.com/beats-core/bundle with same readme instructions | 20:21 |
arosales | mbruzek: hammering on it huh :-) | 20:21 |
* arosales will take a look @ https://github.com/juju-solutions/bundle-observable-kubernetes/pull/16 | 20:21 | |
magicaltrout | here's a random question thats more useful than discussing underwear | 20:22 |
lazyPower | debateable | 20:23 |
magicaltrout | with lazyPower et al working on beats | 20:23 |
magicaltrout | will it one day become like juju gui, and just something that happens? | 20:23 |
lazyPower | i think with cross environment relations, sure | 20:24 |
lazyPower | sorry | 20:24 |
lazyPower | cross model | 20:24 |
lazyPower | old habits die hard | 20:24 |
lazyPower | in that you can relate it to something that's not physically in your model, but is related to it | 20:25 |
lazyPower | you basically deploy the beats you want, and relate it to whatever your central collector is running elsewhere | 20:25 |
lazyPower | i think there will still be a lot of room for this to be hot pluggable. Beats isn't the only metrics game in town. I've been asked about monitoring using prometheus | 20:26 |
lazyPower | but now i'm rambling | 20:26 |
magicaltrout | cool. Seeing Beats within or alongside the GUI, along with extended stuff like status, would be cool. Not that I'm a big Juju GUI user, but adding something like beats just adds to the usability | 20:26 |
lazyPower | oh certainly. I'm still a fan of tools like weave-scope to visualize the container topology running in the cluster | 20:27 |
lazyPower | its not as powerful as juju, but just being able to visualize the network map is a strangely handy tool | 20:27 |
magicaltrout | yeah thats for certain | 20:28 |
lazyPower | and sure, you win. this was more useful than talking about gutchies | 20:28 |
arosales | mbruzek: https://github.com/juju-solutions/bundle-observable-kubernetes/pull/16 lgtm | 20:28 |
magicaltrout | hehe | 20:28 |
magicaltrout | I'm waiting slowly for Drill & Hadoop on EC2 to check it works so I can blog it tomorrow | 20:29 |
magicaltrout | more stuff I can get done whilst sat on a train | 20:29 |
arosales | mbruzek: cory_fu, lazyPower we just need to coordinate the kibana updates to the observable-swarm bundle and the spark-processing bundle | 20:29 |
mbruzek | arosales: let mine be the template and all is good | 20:30 |
arosales | cory_fu: and lazyPower note ^ | 20:30 |
marcoceppi | or, progressively | 20:35 |
skay | bloodearnest: hey, thanks for https://gist.github.com/bloodearnest/ebf044476e70c4baee59c5000a10f4c8 | 20:36 |
skay | bloodearnest: I'm still in the process of learning to set up a dev environment in lxd for my work, and that helped | 20:36 |
bloodearnest | skay, no worries, it's working well for me | 21:09 |
skay | bloodearnest: <picture a mug of beer or some favorite beverage here> | 21:09 |
bloodearnest | skay, are you using it with juju? | 21:18 |
skay | bloodearnest: not yet. | 21:18 |
bloodearnest | it does work, fwiw | 21:19 |
bloodearnest | I have a setup that works, using manual provider, and manually creating the lxd | 21:19 |
bloodearnest | then deploying the charm onto that | 21:20 |
magicaltrout | https://github.com/apache/incubator-joshua/tree/master/distribution/joshua-full there you go arosales, we're still working on the implementation details as the incubator project works through the paces, but a charm in another ASF project | 21:20 |
arosales | cory_fu: is zookeeper not promulgated ? | 21:21 |
arosales | cory_fu: https://jujucharms.com/u/bigdata-dev/apache-zookeeper/trusty/20 | 21:21 |
magicaltrout | https://jujucharms.com/apache-zookeeper/trusty/ | 21:21 |
magicaltrout | i've been using that for the last 2 days | 21:22 |
arosales | magicaltrout: ah thanks :-) | 21:22 |
arosales | magicaltrout: and great to see joshua-full and I hear your working on drill | 21:26 |
arosales | magicaltrout: ramping up the apache projects :-) | 21:26 |
magicaltrout | yeah drill is up and running sorta. enough for people to use at least | 21:26 |
lazyPower | jcastro http://i.imgur.com/Yk4GHhs.png | 21:26 |
magicaltrout | some patchy relations to mongodb and HDFS | 21:27 |
magicaltrout | need to clean that up a bit, but the actual setup is pretty straightforward, of course I could spend months adding config options | 21:27 |
lazyPower | jcastro mbruzek - sorry i fixed it http://i.imgur.com/1RDywtV.png | 21:28 |
mbruzek | we rock | 21:35 |
mbruzek | assault cube is a punk | 21:35 |
aisrael | I had a juju 1.25 env running on trusty, and upgraded to xenial. Now, when I boot up, I can't get juju to run (no api server). Specifically, `sudo service juju-db-stone-local start` throws the error "Failed to start juju-db-stone-local.service: Unit juju-db-stone-local.service not found." Any ideas where to start debugging that? | 21:36 |
cmars | aisrael, sounds like the upgrade didn't successfully navigate upstart -> systemd | 21:38 |
cmars | i'm not even sure if that's supported tbh | 21:38 |
cmars | aisrael, hmm, maybe try `sudo service jujud-machine-0 start` ? | 21:40 |
cmars | aisrael, that's what the systemd unit is named on my xenial controller | 21:40 |
cmars | oh, but that's 2.0, nvrmind | 21:40 |
cmars | aisrael, does `systemctl list-units` show anything juju-related? | 21:41 |
aisrael | There's only two scripts in /etc/init, juju-agent and juju-db | 21:41 |
aisrael | cmars: no juju | 21:41 |
cmars | aisrael, scripts in /etc/init or upstart confs? | 21:46 |
aisrael | cmars: /etc/init. Where else should I be looking? | 21:47 |
cmars | aisrael, if there's no juju showing up in `systemctl list-units`... i think you'll need to create these units on your system to start the agent & db as the upstart confs used to | 21:48 |
cmars | aisrael, i'm not sure to what extent 1.25 supports post-upstart distros. i thought it did, but even so, it might only set up the init on bootstrap | 21:49 |
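A sketch of recreating the db unit by hand; everything here, the unit name, binary path, and flags, is an assumption, so copy the real command from the `exec` line of the old upstart conf under /etc/init/:

```ini
# /etc/systemd/system/juju-db-stone-local.service (hypothetical)
[Unit]
Description=juju state database
After=network.target

[Service]
# take ExecStart verbatim from the old /etc/init/juju-db-*.conf
ExecStart=/usr/lib/juju/bin/mongod --config /etc/juju/mongod.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl daemon-reload && sudo systemctl start juju-db-stone-local`, and the same pattern again for the agent unit.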
aisrael | cmars: ack, thanks. So on Xenial, juju should be using systemd, and those script(s) are missing for me | 21:50 |
cmars | aisrael, yeah. i think this might be a bug worth opening | 21:51 |
aisrael | cmars: Will do, thanks for the debug help! | 21:52 |
rloloc | I want to drop a controller, but I get stuck with a repeating "Waiting on 1 model, 2 machines" notification. | 22:12 |
rloloc | How can I flush everything? These machines seem to be misbehaving or not configured. | 22:13 |
rloloc | I opted to just drop the lxd container manually and re-bootstrap. Was there another way? | 22:30 |
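No one answered at the time, but assuming the command was already present in this beta, the forced teardown would have been the other route:

```
# skips the graceful wait and releases resources via the provider
juju kill-controller <controller-name>
```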
lazyPower | and on the topic of beats, they just keep multiplying | 23:10 |
lazyPower | https://www.elastic.co/guide/en/beats/metricbeat/master/metricbeat-overview.html | 23:10 |
aisrael | nice | 23:37 |