lazyPowerarosales iirc we were still pending making that PR to the upstream repo01:29
=== redir is now known as redir_afk
=== frankban|afk is now known as frankban
=== mmcc_ is now known as mmcc
kjackaladmcleod-: So yes09:19
kjackalalmost working HBase09:19
kjackalHBase is very cool09:20
admcleod-kjackal: why is that!09:20
kjackalI knew the internal indexing structure that they implement but i've never seen how it is setup09:20
magicaltroutkjackal: you'll know this. If I want to use a 3rd party python lib in my reactive charm, how do I package it?09:20
admcleod-ooh ooh i know09:21
magicaltroutor you09:21
kjackalwhat do you mean by third party? It cannot go to wheelhouse09:21
admcleod-but .. oh, apparently i dont know09:21
admcleod-never mind! ;)09:21
admcleod-kjackal: how is it to actually use?09:22
magicaltroutsee, now I don't know what you're saying09:22
admcleod-magicaltrout: i was going to say, if you can pip install it, then you can just put it in wheelhouse09:22
magicaltroutcharm -> random python lib -> build -> deploy09:22
magicaltroutokay, remember I stupid. How do you "just put it in wheelhouse"09:23
magicaltroutclearly when you compile your charm, wheelhouse appears09:23
kjackallet me find09:23
admcleod-magicaltrout: well, again, i thought you could put it in wheelhouse.txt, e.g. https://github.com/juju-solutions/layer-apache-bigtop-resourcemanager/blob/master/wheelhouse.txt09:23
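[editor's note] As a sketch of what admcleod describes: a reactive charm's wheelhouse.txt is just a pip-style requirements file, and `charm build` downloads each entry into the built charm's wheelhouse/ directory so it can be installed on the unit without network access. The package names below are illustrative, not from the linked repo.

```text
# wheelhouse.txt -- one pip requirement per line;
# 'charm build' fetches these into the charm's wheelhouse/
requests>=2.9
jujubigdata>=7.0.0,<8.0.0
```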
magicaltroutmen of many links09:24
admcleod-kjackal: why did you say 'it _cannot_ go to wheelhouse'?09:24
kjackaladmcleod-: Sorry I was asking09:24
kjackalforgot the "?"09:24
magicaltroutthe vital missing questionmark09:24
admcleod-kjackal: ah, i see, so you used all your ?s up in the shortest conversation ever09:25
magicaltroutenglish is a fickle language09:25
admcleod-yes. dont i know it.09:26
magicaltroutindeed semi scot09:26
admcleod-i got nothin09:27
magicaltrouti had a blue cheese chocolate in my speakers gift from apachecon09:27
magicaltroutit was awful09:27
magicaltroutnot speaking in vancouver again09:28
admcleod-maybe they just assumed, from your sophisticated Queen's English, that you would appreciate the finer things, and then gave you something else instead09:28
admcleod-anyway i swear, it works if its done right.09:29
admcleod-i would never09:31
magicaltroutkjackal: when you have working hbase09:31
magicaltrouthttps://phoenix.apache.org/Phoenix-in-15-minutes-or-less.html build a phoenix subordinate09:32
magicaltroutoh there is one in bigdata-dev09:32
kjackalmagicaltrout: That is cool! SQL!09:32
magicaltroutwhat do i know09:32
magicaltroutokay when you've finished hbase09:33
magicaltroutmake sure phoenix in bigdata-dev works ;)09:33
magicaltroutah that one is pretty old09:34
magicaltroutokay we'll revamp that09:34
admcleod-magicaltrout: i bet you a pint you could do it quicker09:34
magicaltroutadmcleod-: doubt it, I've got to see SaMnCo tomorrow, and fly to san diego on Friday followed by a week of hacking broken stuff09:34
magicaltroutactually... you're probably right, but i should really do stuff that earns me money, not endlessly hack charms ;)09:35
admcleod-yeah but a pint09:35
magicaltrouti know, its tempting! ;)09:35
kjackalmagicaltrout: Phoenix seems to be in bigtop, but it does not have puppet scripts. That's cool!09:35
kjackalmagicaltrout: cool in the sense that we can contribute the puppet scripts as well!09:36
magicaltroutkjackal: you basically just need to chuck the right libs in the libs dir and it sorts it out09:36
admcleod-it shouldnt take us long but we'd have to backlog it for a bit until we get some other higher priority bits done09:36
magicaltroutso it should be pretty straight forward to bootstrap in bigtop09:36
admcleod-kjackal: its probably in a feature branch09:36
magicaltrouthttps://drill.apache.org/docs/hbase-storage-plugin/ I also need to figure out how to crowbar juju hbase into phoenix09:37
=== Anita is now known as Guest77367
kjackalOh come on magicaltrout this is a piece of cake for you now!09:38
magicaltrouthehe its certainly getting easier, doesn't make me a living though does it? :P09:38
magicaltroutnot yet at least. But this is why I'm working on the Drill/SQL side of things because it means we do get that all in one end to end solution, that makes it easy to demo to clients09:39
admcleod-eating blue cheese chocolate doesnt make ANYONE a living, but we still do it.09:40
magicaltrouthehe, indeed09:40
admcleod-magicaltrout: so this talks about inserting this storage plugin info via the web ui - i guess from a charm perspective its an insert statement (or maybe even just appending to a config file)09:41
magicaltroutadmcleod-: yeah, it has a rest API, so for my mongo test I just prod it in python with some connection details09:41
admcleod-magicaltrout: ill make a note to discuss both of these things @ our next sync09:41
admcleod-magicaltrout: in a few hours09:41
magicaltroutyou could write a file, but you'd need to cycle your drill cluster, so I use the rest api instead which just writes to ZK and updates the cluster09:42
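[editor's note] A minimal sketch of the REST approach magicaltrout describes: POSTing a storage plugin definition to Drill's web API so the config lands in ZooKeeper and the live cluster picks it up without cycling Drillbits. The endpoint path and payload shape follow the Drill REST docs; the host, plugin name, and connection string are illustrative assumptions.

```python
import json
import urllib.request

# Hypothetical Drill web API endpoint on the local Drillbit.
DRILL_URL = "http://localhost:8047"

def storage_plugin_payload(name, connection):
    """Build the JSON body Drill expects at /storage/<name>.json."""
    return {
        "name": name,
        "config": {
            "type": "mongo",          # e.g. a mongo plugin, as in the test above
            "connection": connection,
            "enabled": True,
        },
    }

def register_plugin(payload):
    """POST the plugin definition; Drill writes it to ZK and updates the cluster."""
    req = urllib.request.Request(
        "%s/storage/%s.json" % (DRILL_URL, payload["name"]),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)  # network call; not invoked here

# Build (but don't send) an example payload from hook-provided details.
payload = storage_plugin_payload("mongo", "mongodb://10.0.0.5:27017/")
```

From a charm hook, `register_plugin` would be called with connection details taken from relation data.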
magicaltroutin reality, I'm more interested in HDFS and Filesystem based connectivity initially, so I'll be working on those later09:43
magicaltroutso I can dump GB's of Parquet files into HDFS and have distributed Drill query them09:44
magicaltroutif you install Drill onto your slave nodes, it will keep the data for that node on that node, whilst doing some distributed SQL to keep the data local, which gives a good performance boost09:44
magicaltroutdata locality and all that09:44
admcleod-who'da thought09:45
magicaltroutwell its logical, but its hard for SQL engines to do09:45
magicaltroutdistributing a query and running it on different nodes then collating the result09:45
magicaltroutwhats the deal with juju storage09:49
magicaltroutis it worth my while adding it to the charm?09:49
magicaltroutlike is it enough of a *thing*?09:49
admcleod-babbageclunk: so, tell us, whats the deal with juju storage, is it worth _someones_ while adding it to their charms? like.. is it enough of a __**thing**__?09:50
admcleod-magicaltrout: ;}09:50
magicaltroutso drill also supports filesystem based access of CSV's/JSON stuff, so I was thinking of adding Storage support09:52
admcleod-magicaltrout: well, i know very little about it so i was asking babbageclunk09:54
babbageclunkadmcleod-, magicaltrout: I'm pretty new, so I don't know *that* much about it either. What kinds of things are you wanting to know?09:55
magicaltroutnot much, i'm just writing a charm for apache drill. One thing it supports is SQL analysis over local CSV/JSON/Parquet files. Now, this is entirely optional and probably not used *that* much, but if someone did use it, I was thinking about juju storage support for it09:56
magicaltroutbut was wondering a) if it was worth it (this is just idle gossip, I realise) b) if its enough of a *thing* that it wont go missing in a few releases time09:57
babbageclunkmagicaltrout: I'm fairly sure it's not going anywhere - it's really a requirement for more careful deployments.09:58
admcleod-magicaltrout: at a lower (infrastructure) level, how would you envisage using it?09:59
=== admcleod- is now known as admcleod
magicaltroutadmcleod: something like, person deploys charm in EC2 XLarge or something, but then tacks on a couple of TB of SSD/High performance disks or something09:59
magicaltroutfor example, Drill supports querying of multiple CSVs as a single table. So your app writes to a high performance filestore in various directories and files. At which point Drill has a plugin configured to use that folder. So you can then run10:01
magicaltroutselect * from my_files10:01
magicaltroutand it will traverse your entire file storage pool10:01
magicaltroutnot everyone in this world wants to run a HDFS cluster ;)10:02
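[editor's note] The "directory of CSVs as one table" idea above looks roughly like this in Drill SQL. `dfs` is Drill's default filesystem plugin and `columns[n]` is how Drill exposes headerless CSV fields; the path and column meanings here are illustrative.

```sql
-- query every CSV under /data/events as a single logical table
SELECT columns[0] AS user_id, columns[1] AS ts
FROM dfs.`/data/events`;
```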
babbageclunkmagicaltrout: I might be wrong, but I think the juju storage features are less about machines that get storage added later, and more for restricting where a charm gets deployed to.10:08
magicaltroutfair enough10:09
magicaltrouti'll skip it then10:09
admcleodmagicaltrout: are you saying, kind of 'preferential' storage utilisation, i.e., you add 10TB SSD, and then it 'prefers' to use that because its tagged as high-iops, for example10:09
admcleodmagicaltrout: or, just that that new disk would be part of a larger...10:09
admcleod..storage array10:09
magicaltroutwell it makes no odds to me, I was just making sure I wasn't missing a trick there :)10:13
babbageclunkmagicaltrout: Or if you're deploying to AWS or OpenStack then the charm can grab storage from EBS or Cinder when it's being deployed.10:14
admcleodmagicaltrout: actually i think it sounds like a good idea for io intensive stuff10:15
magicaltroutbecause its hook based10:16
magicaltroutI can, it appears, add the pool to Drill via a hook just like I do with any other data source10:17
admcleodmagicaltrout: yeah so you could ideally have your charm deploy and go 'ok ill use whatever disk is available but prefer something tagged SSD' for example - then on hook 'oh ssd has been added' *charm stuff*10:18
magicaltroutif someone adds storage to a drill node, you'd assume its for data processing, so create a datasource storage entry within drill to use it10:19
admcleodmagicaltrout: sounds like a very nice idea. (with lots of further potential too)10:22
babbageclunkmagicaltrout: I think I was wrong about that - I'm only familiar with the constraint/deploying parts, but adding extra storage to the unit later sounds really handy.10:23
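[editor's note] For reference, the charm-side wiring being discussed is roughly: declare a store in metadata.yaml, then react in the `<store>-storage-attached` hook when juju provisions or attaches a volume. Store name, mount point, and sizes below are illustrative assumptions in juju 2.0-era syntax.

```yaml
# metadata.yaml (excerpt) -- hypothetical 'data' store for Drill
storage:
  data:
    type: filesystem
    location: /srv/drill-data
    multiple:
      range: 0-
```

At deploy time an operator could then run something like `juju deploy drill --storage data=ebs,100G`; when the volume attaches, the `data-storage-attached` hook fires and the charm could register the mount as a Drill datasource, per the discussion above.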
magicaltroutthe other case as well, is, the amount of times I've deployed a MySQL staging DB with what I thought was enough storage.......10:24
magicaltroutand 6 months later when my client has gone mental and used the DB for stuff they shouldn't10:25
magicaltroutyou blow your disk space and end up with random unmanaged mount points all over the place, tacking on extra storage10:25
magicaltroutwhich you know full well if the server dies you're screwed because you couldn't replicate it in a month of sundays :)10:25
admcleodmagicaltrout: yep for sure, seems like there may be a few ways to achieve that too10:30
magicaltroutokay back to more mundane matters. I want to implement an HDFS relation admcleod10:32
magicaltroutis there an interface that will provide me with the namenode ip and port?10:32
magicaltrouthttps://github.com/juju-solutions/interface-namenode-cluster ?10:32
admcleodmagicaltrout: thats the peer relation.. you want...10:33
admcleodmagicaltrout: https://github.com/juju-solutions/interface-dfs10:34
magicaltroutcool thanks10:34
admcleodmagicaltrout: or actually https://github.com/juju-solutions/interface-dfs-slave10:34
admcleodmagicaltrout: or.. use the plugin10:35
admcleodmagicaltrout: cos if you use the plugin itll install hadoop for you, configure hadoop correctly.. and the interface-plugin will give you the namenodes and port10:37
magicaltroutyeah so I see10:37
magicaltroutI don't need hadoop installed but lets run with that10:37
magicaltroutwhere's charms.templating gone in juju2?11:40
magicaltroutadmcleod: https://github.com/juju-solutions/interface-hadoop-plugin12:16
magicaltroutin my charm to interface with the plugin I want something like:12:17
magicaltrouthttps://gist.github.com/buggtb/ff2d2e26b45657bfcc7566a393bb9b7d ?12:17
=== scuttle|afk is now known as scuttlemonkey
magicaltrouti think you sold me a lie admcleod12:36
magicaltroutbecause I don't provide hadoop stuff, but similarly I don't connect to hadoop client, pig etc12:37
lazyPowerkjackal congrats on the +2 on your ~charmer app13:36
kjackallazyPower: Thank you! I can now offload cory and kevin from some of the promulgation tasks13:38
lazyPowerif anyone missed it and has worked with kjackal and would like to endorse his application - https://lists.ubuntu.com/archives/juju/2016-May/007273.html   :) the more the merrier i always say!13:38
kjackallazyPower: Thank you for your vote of confidence13:38
lazyPowerits firmly seated though :D you're comin with us to the front lines13:38
=== natefinch-afk is now known as natefinch
admcleodmagicaltrout: sorry, i got caught up in diagnosing some io problems + lunch, did you get anywhere?14:42
magicaltroutyeah no worries admcleod14:42
magicaltroutI ended up using the dfs interface14:43
magicaltroutgot it mostly strung together I think14:43
admcleodmagicaltrout: right... just as a note, in this case drill is the client, attaching to hadoop, so if it did need hadoop installed it would be appropriate.14:58
magicaltroutyeah, I have used the plugin layer elsewhere14:58
magicaltroutbut in this instance its just prodding a hdfs endpoint somehow14:59
=== redir_afk is now known as redir
magicaltroutokay well the relation seems to work15:18
magicaltrouttime to find out if it can actually query any data15:18
LiftedKilthas anyone deployed cephfs with juju?15:51
LiftedKiltI don't see a metadata server charm - unless I'm just completely blind15:51
admcleodcory_fu: are there any other bigtop layers which are including apache-bigtop-plugin as a layer?15:54
magicaltroutvery boring sample data out of hdfs16:01
admcleodmagicaltrout: nice one.16:01
magicaltroutaye, needs some tidying up and yet more config options, but not bad16:02
cory_fuadmcleod: Yeah, should be.  One second16:08
cory_fuadmcleod: Spark is using it via hadoop-client: https://github.com/juju-solutions/bigtop/blob/spark/bigtop-packages/src/charm/spark/layer-spark/layer.yaml16:09
cory_fuadmcleod: Note that you should not be adding 'layer:hadoop-plugin' to layers.yaml; instead you should add 'layer:hadoop-client'.  Or, if you want to do it manually, 'interface:hadoop-plugin' and then add the relation to metadata.yaml16:11
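[editor's note] cory_fu's two options, written out as config excerpts (the `requires` endpoint name is an illustrative assumption):

```yaml
# Option 1: layer.yaml -- include hadoop-client, which wraps the plugin
includes:
  - 'layer:hadoop-client'

# Option 2: layer.yaml -- use the interface directly...
includes:
  - 'interface:hadoop-plugin'
# ...and then declare the relation yourself in metadata.yaml:
# requires:
#   hadoop:
#     interface: hadoop-plugin
```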
admcleodcory_fu: ah, i see - thanks :)16:13
kwmonroepetevg: really nice job on the bigtop layer readme!  https://github.com/juju-solutions/layer-apache-bigtop-base/pull/11 is merged.16:26
=== frankban is now known as frankban|afk
LiftedKiltrick_h_ marcoceppi: have you seen anyone working on a metadata server charm for cephfs?16:50
marcoceppiLiftedKilt: not sure, but icey or cholcombe would be the ones who knew if someone was16:50
LiftedKiltmarcoceppi: thanks for the heads up - wasn't sure who to direct the question to16:51
cholcombeLiftedKilt, yes that's on the roadmap16:57
cholcombeLiftedKilt, https://github.com/openstack-charmers/charm-specs/pull/216:58
cholcombeLiftedKilt, do you have a few cycles to help or want to be on the review once it's posted?17:00
LiftedKiltcholcombe: I'm always happy to help17:04
LiftedKiltcholcombe: so the current plan is to release the cephfs charm concurrent with the newton cycle?17:05
lazyPowermagicaltrout - you still kicking around over there?17:06
cholcombeLiftedKilt, Hopefully much sooner17:06
LiftedKiltcholcombe: awesome - I need to start deploying ceph immediately, but I can wait on cephfs for a little bit during the transition17:10
LiftedKilthave a 250 node lizardfs cluster that had some...ahem...data loss last week17:10
cholcombeLiftedKilt, ok.  We just started a review on the ceph base layer.  https://review.openstack.org/#/c/323574/  Once that is settled we can leverage that for the ceph-fs layer which should be very small17:11
cholcombeLiftedKilt, lizardfs sounds familiar.  I seem to remember that being a fork of something17:12
LiftedKiltit's a fork of moosefs17:12
cholcombehow did it sustain data loss?17:13
cholcombei'm just curious.  i know almost nothing about it17:13
petevgkwmonroe: Cool. Thx for the review/merge :-)17:13
LiftedKiltthe primary metadata server went down because of some unrelated network problems. It failed over to the backup metadata server, which it turns out had pretty stale metadata17:14
LiftedKiltthe backup then said "I have 250,000 more chunks than I should - better start deleting them!"17:14
LiftedKiltoops haha17:15
cholcombei really despise metadata servers in distributed filesystems17:20
* cholcombe remembers much pain with hadoop17:20
cholcombeLiftedKilt, so the plan is to start breaking off chunks of lizard and converting them to ceph?17:21
cholcombeLiftedKilt, if you need NFS you're prob better served using glusterfs17:22
aroundaboutWhat constraint on juju bootstrap can be set to select a specific AWS EC2 region?17:51
aroundaboutI'm able to select instance-type=t2.micro17:51
aroundaboutBut region= throws an error unknown constraint "region"17:51
aroundaboutUsing juju version 2.0-beta7-xenial-amd6417:52
rick_h_aroundabout: what version of Juju?17:59
aroundaboutUsing juju version 2.0-beta7-xenial-amd6417:59
rick_h_aroundabout: http://askubuntu.com/questions/172643/can-juju-be-used-in-other-region-than-the-us shows for 1.2517:59
rick_h_aroundabout: and for juju 2.0 you should be able to bootstrap with --config region=eu-west-118:00
rick_h_to follow the examples from there18:00
aroundaboutThanks I'll run it. I was using --bootstrap-constraints18:00
rick_h_aroundabout: ah, yes that's the old way. It's simpler with the --config key=value syntax18:00
aroundaboutThe command is not working.18:06
aroundaboutIt still punts the juju agent to us-east-118:06
aroundabout`sudo juju bootstrap --config region=eu-west-1 aws-devenv-controller aws`18:07
rick_h_aroundabout: sorry, looking18:07
rick_h_aroundabout: oh so sorry, I lead you wrong there18:07
aroundaboutI may have just noticed something vital18:12
aroundabout# juju bootstrap [options] <controller_name> <cloud_name>[/region]18:12
aroundaboutThat usage has an optional [/region] part I've never used.18:13
aroundaboutI also think that's new since 1.018:13
aroundaboutYup, worked18:14
aroundaboutI'll update that SO question.18:15
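[editor's note] For the archive, the syntax that worked in this exchange: in juju 2.0-beta the region is part of the cloud argument, `<cloud>[/region]`, not a constraint or config key. Controller and region names as used above; the constraints line mirrors aroundabout's earlier instance-type usage.

```shell
# region is selected via <cloud>[/region]
juju bootstrap aws-devenv-controller aws/eu-west-1

# instance type still goes through --bootstrap-constraints
juju bootstrap --bootstrap-constraints instance-type=t2.micro \
    aws-devenv-controller aws/eu-west-1
```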
magicaltroutyou rang lazyPower18:21
lazyPowermagicaltrout -
lazyPoweryou had made mention of packetbeat? here's where i'm at with the layer18:22
lazyPowermight want to expand the data-range to ~ 1 hr or greater18:22
* magicaltrout stabs some buttons18:25
magicaltroutlooking sweet lazyPower18:26
magicaltroutthats very useful18:26
bdxadmcleod: hey, just diving into your puppet refactor/merge now18:34
bdxadmcleod: I've a few things to run by you when you have a moment18:35
lazyPowero/ bdx so i hear you'r einterested in packetbeat. https://github.com/juju-solutions/layer-packetbeat is current if you'd like to get in early on the comment threads/issue tracker :)18:45
bdxyou know it18:45
lazyPowerno readme, its for the intrepid explorer18:46
bdxha, perfect18:46
mbruzekarosales: ping. I addressed the issues you created with the Kubernetes readme in https://github.com/juju-solutions/bundle-observable-kubernetes/pull/1518:50
mbruzekarosales: please review when you get a chance, so we can update and get the bundle promulgated18:51
mbruzekAssuming it passes your review18:51
arosalesmbruzek: hello, thanks for working on that. I'll take a look this afternoon18:51
lazyPowermbruzek - i took liberty of some prelim review nits. I wont merge until arosales has taken a look though18:55
bdxlazyPower: what to do about geo_points?19:02
lazyPowerbdx - https://www.digitalocean.com/community/tutorials/how-to-map-user-location-with-geoip-and-elk-elasticsearch-logstash-and-kibana19:03
bdxyeah, I know ...19:03
lazyPowerbdx - if you have time, plugging that in would be choice19:03
lazyPowereven if you rough cut it, i can throw finishing touches on it19:03
bdxlazyPower: what I'm thinking is a modification to the logstash http-relation-joined?19:03
bdxsuch that it would write out 11-nginx-filter.conf19:04
bdxif config['log_geo_point']:19:05
lazyPowercurrently logstash isn't shipping with any configuration for transforms. I see this being a config-addition to layer-logstash. packetbeat already supports shipping through logstash by virtue of beats-base19:07
lazyPowerso i dont think you can get away with the nginx-filter.conf19:08
bdxlazyPower: but logstash needs to extract the client ip, and convert it using the geoipdatabase19:08
lazyPowerthis will need to be leveraging whatever the beats protocol mandates for geoip data. none of the graphs in the default setup uses anything other than the beats shippers, so we should look closer at the docs. I think that DO article i got from jcastro is a red herring09:08
bdxlooks like that is done with the nginx-filter.conf, no?19:09
lazyPowerbdx - https://github.com/elastic/beats/issues/6019:09
lazyPowerthis looks applicable to our topic.19:10
lazyPowerthey're using the geoip-database package, and just configured packetbeat with the path to the db09:10
lazyPoweri think the shipper does this all already, we just need to tell it to do so, and make sure we've got the package installed19:10
bdxyea, same as the link you linked me19:10
bdxlazyPower: what I'm getting at is the other part of the implementation ... how to tell logstash to provision that file19:11
lazyPowerbdx - you dont need to tell logstash anything :)19:11
lazyPowerif you configure it on the beat shipper, it ships w/ all the required data needed. the dash is already setup to read and translate that lat/long data19:12
bdxyeah, but logstash has to get that data19:12
bdxfrom what I can tell, that is done with nginx-filter.conf -> http://paste.ubuntu.com/16902035/19:13
lazyPowerbdx - hrm, i think there's a disconnect here in what i'm thinking and you're thinking.19:13
lazyPowerThats only if you need to transform teh data coming from the beat...19:14
lazyPowerif the beat is already encoding that geo-ip data, why do you need to grok it?19:14
bdxlazyPower: I see that, I'm unclear on where the geoip data is being generated, how does it exists?19:16
lazyPowerits a lookup between the ip and whatever the geo database has for where that ip originates19:16
lazyPowerthe ipv4 database is essentially final at this point, the last ipv4 address blocks having been sold a few months ago19:16
lazyPowernot sure how this will stand up to ipv6 geo-ip data, but for ipv4 this database they're listing for curl has all that for us. its basically a lookup table for the golang shipper to compare client_ip with that db and pull out the lat/long of its origin19:18
lazyPowercurl http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz -o /usr/share/GeoIP/GeoIP.dat.gz <- this thing19:18
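[editor's note] Per the elastic/beats issue linked above, the shipper-side change is roughly this packetbeat.yml fragment, pointing the beat at the downloaded database so it emits lat/long alongside client_ip. Exact key names varied across early beats releases, so treat this as an assumption rather than the definitive config.

```yaml
# packetbeat.yml (excerpt) -- hypothetical geoip lookup config
shipper:
  geoip:
    paths:
      - "/usr/share/GeoIP/GeoLiteCity.dat"
```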
bdxI see .... if nginx is logging to nginx-access.log, and you have nginx-access.log shipping to elastic/kibana via filebeat19:18
lazyPoweri would hook up packetbeat for that, its reading the http headers19:18
lazyPower1 sec let me get you a link of what the packetbeat monitor is actually sending19:19
bdxi see*?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-15m,mode:quick,to:now))&_a=(tab:indexedFields)19:20
lazyPowersearch for "mediawiki"19:21
bdxlazyPower: hmmm, no results show19:23
jrwrenyou should be aware that the maxmind geoip db is not very accurate. If you need accurate geoip databases, they are out there, often for purchase. neustar has a pretty good one.19:23
lazyPowerjrwren - that was my initial response to the problem domain. Free by default is fine, i think what we'll do is resource it, and provide the db as an interchangeable thing19:24
magicaltroutJuju GUI should support charm configuration pagination.... that is all.19:24
lazyPoweryou start with free, and can provide your own if you've got a fancy geoip db19:24
jrwrensounds great.19:24
lazyPowerunless we're in good with a geoip provider and i dont know this? :D19:25
jrwrenI've seen a lot of folks expect a bit too much from the free maxmind data and be disappointed when things are incorrect.19:25
lazyPoweryeah, i wonder what those vendors will do now that the database is basically done and the ipv6 rollout has to happen19:25
lazyPowerbdx - hang on, i'm modifying the packetbeat install by hand on the unit19:26
lazyPowerwe'll see if this works and then i'll land some code if it does19:26
jrwrenits far from done. really good database track ip moves down to the customer DHCP lease. e.g. I get a new IP from my ISP, the geo ip database vendor detects this and updates that data.19:26
lazyPoweri didnt know you were so tapped into the wire19:27
lazyPowerhook me up jrwren19:27
jrwrenprevious job.19:27
jrwreni can send you 2+yr old data ;]19:27
jrwrensee the pretty animated red dots on the map: http://atlas.arbor.net ? ;]19:27
jrwrenalso, keep in mind: http://fusion.net/story/287592/internet-mapping-glitch-kansas-farm/19:28
arosalesmbruzek: https://github.com/juju-solutions/bundle-observable-kubernetes/pull/15 lgtm19:54
arosalesmbruzek:  before promulgating the bundle I think we just need to address issue 9 and 1019:55
mbruzekarosales: thanks19:55
arosalesmbruzek: cory_fu has an update to add a config option to kibana to load the dashboard. So you would need to make a charm update in the bundle as well as set the config option. We will need to add instructions in the bundle readme on how to access the kibana dashboard19:57
arosalesmbruzek: also suggest to expose the services, per issue 10, which is a bundle.yaml update19:57
petevgcory_fu, kwmonroe: do we have a shortcut in the charm tools for getting the ip address of the unit that you're running a command from?19:57
arosalesmbruzek: lazyPower: most of my comments are UX polish, thanks for your work on those bundles.19:58
mbruzekarosales: the bundle already exposes those charms.19:58
mbruzekI am adding words to unexpose them now19:58
lazyPowerarosales in that vein, i saw the repository had a non-obvious naming. i've renamed it and its here: https://github.com/juju-solutions/bundle-swarm-core19:58
arosalesmbruzek: ah excellent, was that a recent update?19:58
arosaleslazyPower: it was obvious before. I just missed it19:59
lazyPowerbundle-swarm-consul doesn't tell me its swarm-core in the store :/19:59
lazyPoweri shouldn't be allowed to name things19:59
kwmonroepetevg: charmhelpers.core.hookenv.unit_get('private-address')19:59
mbruzekarosales: I don't know there were some merges to the bundle that were queued up, butI don't think any of them were specifically for exposing the charms.19:59
arosaleswell I was looking for observable-swarm19:59
petevgkwmonroe: thank you.20:00
arosaleslazyPower: which I found. I was just mult-tasking and logged issues under the wrong repo20:00
arosaleslazyPower: I think https://github.com/juju-solutions/bundle-observable-swarm/issues is the correct location for the issues against the observable-swarm bundle20:01
lazyPowerwell that one was obvious, clearly i didnt name it20:01
arosalesmbruzek: ack, i see those now @ https://github.com/juju-solutions/bundle-observable-kubernetes/blob/master/bundle.yaml20:02
arosalesmbruzek:  thanks, so some instructions on accessing the kibana console and ensuring your are using the latest kibana with cory_fu config option to load the dashboard and that should resolve my issues20:03
arosalesmbruzek: lazyPower, cory_fu: we should use the same verbiage on how to access the kibana dashboard as swarm, k8, and spark bundles all use beats for logging20:04
lazyPoweri can make some pretty gifs to do it20:04
arosaleslazyPower: do animated gifs render in the charmstore readmes?20:06
lazyPoweri'm not sure. There's an open bug against jujucharms.com to support GFMD20:07
lazyPoweri know the docker icon showed up when i had that embedded in the docker charm, no reason to think thats changed unless they are now explicitly blocking image tags20:07
arosalesand most admins should have access to a web browser to read the instructions if they are viewing them locally20:10
arosalesor query the charm store if they are new to kibana and want to see instructions20:10
arosaleslazyPower: that would be great if the charm store renders it. I guess we could test it in /u/containers20:11
lazyPoweror /u/arosales :D20:11
lazyPower\o/ charm push liberates us to experiment20:11
arosalesmmmm, no ingestion wait times :-)20:12
lazyPoweri realize thats an optimistic goal, to have you push the test :) you're a busy dude20:12
magicaltroutsat, writing charms when I should be packing20:13
magicaltroutwhat have I forgotten?20:13
magicaltrouti'll ask here instead of just looking in my suitcase20:13
arosaleslazyPower: no worries at all, you guys are busy too :-)20:15
lazyPowerso long as you have your gutchies you'll do fine magicaltrout20:15
arosaleslazyPower: I am looking at https://jujucharms.com/u/containers/observable-swarm/bundle for a refresh20:15
magicaltroutyou know i had to google that lazyPower :P20:15
arosalesmagicaltrout: as long as you have your charms, power adapter, and computer you're set20:15
magicaltroutarosales's answer is more accurate20:16
lazyPowerI dunno man, i could do a lot more with gutchies  than i can a power adapter when things are going sideways20:16
magicaltroutsupposedly my plane has at seat power, so i might get the ultimate amount of work done20:17
magicaltrouti love flights to just get backlog stuff cleared20:17
arosaleslazyPower: really, gutchies: SW Pennsylvanian slang for underwear.20:17
arosalesmagicaltrout: seat power and wireless. I love it when that happens20:17
arosaleslazyPower: lolz20:17
magicaltroutah no you see. I'm happy i'm on flights w/o internet20:17
magicaltroutbecause it makes you do the stuff you keep putting off20:18
lazyPoweri use flights as the opportunity to unplug20:18
lazyPowerand if i can load up some beats and just chill on the flight, thats ok by me20:18
magicaltrouti'm not usually on flights w/ power20:18
arosalesmagicaltrout: ah interesting, no network distraction, but power still20:18
lazyPowermagicaltrout - want 54 minutes of triphop to keep you company on the flight?20:18
magicaltroutarosales: it could be a lie, i could get on and find no power and 5 hrs battery life20:18
mbruzekarosales: https://github.com/juju-solutions/bundle-observable-kubernetes/pull/16 <-- Please review20:19
magicaltrouthook me up lazyPower20:19
arosalesmagicaltrout: note lazyPower's beats are addictive20:19
lazyPoweryou flatter me :)20:20
arosaleslazyPower: where is your gif readme pushed to? I was looking at https://jujucharms.com/u/containers/observable-swarm/bundle20:20
lazyPowerarosales - we're testing it out now. but all signs point go awesome when the next k8s readme goes up20:21
lazyPowersorry, observable-k8s-bundle20:21
arosaleslazyPower: probably also need to apply the updated kibana dashboard config to https://jujucharms.com/beats-core/bundle with same readme instructions20:21
arosalesmbruzek: hammering on it huh :-)20:21
* arosales will take a look @ https://github.com/juju-solutions/bundle-observable-kubernetes/pull/1620:21
magicaltrouthere's a random question thats more useful than discussing underwear20:22
magicaltroutwith lazyPower et al  working on beats20:23
magicaltroutwill it one day become like juju gui, and just something that happens?20:23
lazyPoweri think with cross environment relations, sure20:24
lazyPowercross model20:24
lazyPowerold habits die hard20:24
lazyPowerin the fact you can relate it to something thats not physically in your model map, but is related to20:25
lazyPoweryou basically deploy the beats you want, and relate it to whatever your central collector is running elsewhere20:25
lazyPoweri think there will still be a lot of room for this to be hot pluggable. Beats isn't the only metrics game in town. I've been questioned about monitoring using Prometheus20:26
lazyPowerbut now i'm rambling20:26
magicaltroutcool. Seeing Beats within or alongside GUI along with extended stuff like Status within GUI would be cool. Not that I'm a  big Juju  GUI user but adding something like beats just adds to the usability20:26
lazyPoweroh certainly. I'm still a fan of tools like weave-scope to visualize the container topology running in the cluster20:27
lazyPowerits not as powerful as juju, but just being able to visualize the network map is a strangely handy tool20:27
magicaltroutyeah thats for certain20:28
lazyPowerand sure, you win. this was more useful than talking about gutchis20:28
arosalesmbruzek: https://github.com/juju-solutions/bundle-observable-kubernetes/pull/16 lgtm20:28
magicaltroutI'm waiting slowly for Drill & Hadoop on EC2 to check it works so I can blog it tomorrow20:29
magicaltroutmore stuff I can get done whilst sat on a train20:29
arosalesmbruzek: cory_fu, lazyPower we just need to coordinate the kibana updates to the observable-swarm bundle and the spark-processing bundle20:29
mbruzekarosales: let mine be the template and all is good20:30
arosalescory_fu: and lazyPower note ^20:30
marcoceppi or, progressively20:35
skaybloodearnest: hey, thanks for https://gist.github.com/bloodearnest/ebf044476e70c4baee59c5000a10f4c820:36
skaybloodearnest: I'm still in the process of learning to set up a dev environment in lxd for my work, and that helped20:36
bloodearnestskay, no worries, it's working well for me21:09
skaybloodearnest: <picture a mug of beer or some favorite beverage here>21:09
bloodearnestskay, are you using it with juju?21:18
skaybloodearnest: not yet.21:18
bloodearnestit does work, fwiw21:19
bloodearnestI have a setup that works, using manual provider, and manually creating the lxd21:19
bloodearnestthen deploying the charm onto that21:20
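The manual-provider workflow bloodearnest describes can be sketched roughly as below. This is an illustrative assumption, not a transcript of his setup: the container name `dev1`, the `ubuntu` charm, and the machine number are placeholders.

```shell
# Sketch: create an LXD container by hand, register it with Juju via the
# manual provider, then deploy a charm onto it.

# 1. Manually create the container (name "dev1" is a placeholder)
lxc launch ubuntu:16.04 dev1

# 2. Grab the container's IPv4 address from `lxc list` CSV output,
#    which looks like "10.0.3.5 (eth0)" in the address column
IP=$(lxc list dev1 -c 4 --format csv | awk '{print $1}')

# 3. Add the container to the model as a manually provisioned machine
juju add-machine ssh:ubuntu@"$IP"

# 4. Deploy the charm onto that machine (machine 0 is an assumption;
#    check `juju status` for the actual machine number)
juju deploy ubuntu --to 0
```

The point of the manual provider here is that Juju never talks to LXD itself; it only needs SSH access to a machine you created by whatever means you like.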
magicaltrouthttps://github.com/apache/incubator-joshua/tree/master/distribution/joshua-full there you go arosales, we're still working on the implementation details as the incubator project works through the paces, but a charm in another ASF project21:20
arosalescory_fu: is zookeeper not promulgated ?21:21
arosalescory_fu: https://jujucharms.com/u/bigdata-dev/apache-zookeeper/trusty/2021:21
magicaltrouti've been using that for the last 2 days21:22
arosalesmagicaltrout: ah thanks :-)21:22
arosalesmagicaltrout: and great to see joshua-full and I hear you're working on Drill21:26
arosalesmagicaltrout: ramping up the apache projects :-)21:26
magicaltroutyeah drill is up and running sorta. enough for people to use at least21:26
lazyPowerjcastro http://i.imgur.com/Yk4GHhs.png21:26
magicaltroutsome patchy relations to mongodb and HDFS21:27
magicaltroutneed to clean that up a bit, but the actual setup is pretty straightforward, of course I could spend months adding config options21:27
lazyPowerjcastro mbruzek - sorry i fixed it http://i.imgur.com/1RDywtV.png21:28
mbruzekwe rock21:35
mbruzekassault cube is a punk21:35
aisraelI had a juju 1.25 env running on trusty, and upgraded to xenial. Now, when I boot up, I can't get juju to run (no api server). Specifically, `sudo service juju-db-stone-local start` throws the error "Failed to start juju-db-stone-local.service: Unit juju-db-stone-local.service not found." Any ideas where to start debugging that?21:36
cmarsaisrael, sounds like the upgrade didn't successfully navigate upstart -> systemd21:38
cmarsi'm not even sure if that's supported tbh21:38
cmarsaisrael, hmm, maybe try `sudo service jujud-machine-0 start` ?21:40
cmarsaisrael, that's what the systemd unit is named on my xenial controller21:40
cmarsoh, but that's 2.0, nvrmind21:40
cmarsaisrael, does `systemctl list-units` show anything juju-related?21:41
aisraelThere's only two scripts in /etc/init, juju-agent and juju-db21:41
aisraelcmars: no juju21:41
cmarsaisrael, scripts in /etc/init or upstart confs?21:46
aisraelcmars: /etc/init. Where else should I be looking?21:47
cmarsaisrael, if there's no juju showing up in `systemctl list-units`... i think you'll need to create these units on your system to start the agent & db as the upstart confs used to21:48
cmarsaisrael, i'm not sure to what extent 1.25 supports post-upstart distros. i thought it did, but even so, it might only set up the init on bootstrap21:49
aisraelcmars: ack, thanks. So on Xenial, juju should be using systemd, and those script(s) are missing for me21:50
cmarsaisrael, yeah. i think this might be a bug worth opening21:51
aisraelcmars: Will do, thanks for the debug help!21:52
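For reference, cmars's suggestion of hand-writing the missing units might look something like the fragment below. This is a hedged sketch: juju 1.25 shipped upstart jobs in /etc/init (here `juju-db.conf`), and the unit name, mongod path, and flags must be copied from the `exec` line of the old upstart conf rather than taken from this example.

```ini
# /etc/systemd/system/juju-db-stone-local.service
# Hand-written approximation of the 1.25 upstart job; paths and flags
# below are guesses and should match the old /etc/init/juju-db.conf.
[Unit]
Description=juju state database
After=network.target

[Service]
ExecStart=/usr/lib/juju/bin/mongod --config /var/lib/juju/db/mongod.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After writing the unit, `sudo systemctl daemon-reload && sudo systemctl start juju-db-stone-local` should bring the database up, and a matching unit would be needed for the agent job in /etc/init as well.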
rlolocI want to drop a controller, but I get stuck with a repeating "Waiting on 1 model, 2 machines" notification.22:12
rlolocHow can I flush everything? These machines seem to be misbehaving or not configured.22:13
rlolocI opted to just drop the lxd container manually and re-bootstrap. Was there another way?22:30
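There was another way, at least on Juju 2.x: force the teardown from the client before resorting to deleting containers by hand. The commands below are a sketch; the controller name `mycontroller` and the container name are placeholders, and exact flags may vary by Juju version.

```shell
# When `juju destroy-controller` hangs on "Waiting on 1 model, 2 machines":

# 1. Try the forceful teardown, which releases machines without waiting
#    for them to respond (controller name is a placeholder)
juju kill-controller mycontroller

# 2. If that still hangs, delete the backing LXD containers directly
#    (actual names vary; check `lxc list`)
lxc delete --force juju-xxxx-0

# 3. Then drop the now-stale controller from the local client cache
juju unregister mycontroller
```

`kill-controller` first attempts a clean destroy via the controller API and falls back to tearing down resources through the cloud provider, so it usually avoids the need for step 2.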
lazyPowerand on the topic of beats, they just keep multiplying23:10

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!