[02:47] <ryebot> I keep getting "ERROR invalid entity name or password" when I try to run `juju debug-log`, not sure how to get rid of it - any tips?
[02:47] <ryebot> I've tried a juju logout/back in and rebooting, but no luck.
[02:48] <ryebot> this is on juju2
[02:54] <ryebot> also looked in the /var/log/juju logs in machine-0; nothing there either
[03:49] <ryebot> making a new controller fixed it
[05:51] <tx> Hey guys, I'm trying to bootstrap openstack from a fresh ubuntu 16.04 LTS install; it has been stuck at the "initializing model" stage for about an hour now
[08:23] <Dirler> Hi All. If I have kvm machines listed in “virsh list” but not listed in “juju status” for a kvm environment, can I add such machines to the juju kvm environment?
[09:10] <jamespage> gnuoy, morning
[09:10] <jamespage> can you take a look at https://review.openstack.org/#/c/318641/
[09:10] <jamespage> and
[09:10] <jamespage> https://review.openstack.org/#/c/318612/
[09:14] <gnuoy> sure
[09:38] <jamespage> gnuoy, and https://review.openstack.org/#/c/319138/ :-)
[10:31] <jamespage> gnuoy, https://review.openstack.org/#/c/319172/ please (stable counterpart to the one before)
[10:32] <jamespage> gnuoy, ta
[11:14] <gahan> how do I force-cancel an activity in landscape? 'Add hardware' has been hanging on 'Add juju machine...' for 24 hours now.
[11:58] <magicaltrout> is it acceptable to crackout the beers when you are at work with the cricket on the TV and everyone in the stadium is drinking?
[12:55] <ryebot> juju2 - how do I remove a service that has a unit in an "agent is lost, sorry!" state?
[12:57] <tvansteenburgh> ryebot: you could try remove-machine
[12:58] <ryebot> tvansteenburgh: excellent, thanks, I'll give that a shot
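For reference, a sketch of that cleanup in juju 2.x; the unit/machine numbers and service name here are hypothetical, and `remove-service` was the 2.0-beta name for what later became `remove-application`:

```shell
# Machine 3 is assumed to host the lost unit; --force removes it
# even though the agent is unresponsive.
juju remove-machine 3 --force
# Then remove the now-unitless service definition.
juju remove-service myservice
```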
[13:37] <jamespage> gnuoy, https://review.openstack.org/#/c/318611/ and then I'll push master and stable branches to the charm store...
[13:43] <lazyPower> kjackal - if you have the spare cycles, this is a pretty small review https://github.com/juju-solutions/layer-filebeat/pull/8
[13:49] <kjackal> I will do it in a moment lazyPower
[13:49] <lazyPower> ta
[13:49] <kjackal> jcastro ?
[13:50] <kjackal> jcastro, I hear we have an issue with the plugin?
[13:59] <jcastro> kjackal: yeah!
[13:59] <jcastro> let me refire up the bundle, give me a moment
[14:00] <jcastro> https://jujucharms.com/u/bigdata-dev/apache-processing-spark/
[14:00] <jcastro> this is the bundle I am trying
[14:00] <jcastro> hmm, issues with the store today?
[14:00] <kjackal> yes, it seems so...
[14:01] <kjackal> in any case, which bundle are you deploying, jcastro? Can you show me the .yaml you are deploying?
[14:01] <jcastro> hah no, I was deploying right from the store
[14:01] <jcastro> yikes
[14:02] <kjackal> Ah, that might be an issue, we haven't published the charms yet, so the referenced ones might not be working
[14:08] <kjackal> jcastro, I guess you are deploying this: https://jujucharms.com/u/bigdata-dev/apache-processing-spark/bundle/1
[14:08] <jcastro> yep
[14:09] <jcastro> I needed a bundle that used beats so I could write about it, and cory recommended this one
[14:09] <jcastro> but I'm not wedded to any specific bundle, I just need to be able to deploy a workload, the post itself is about how to use beats to get metrics from a workload
[14:09] <kjackal> so the bundle.yaml there is supposed to use the production charm: cs:trusty/apache-spark
[14:10] <kjackal> This yaml is the one we want https://api.jujucharms.com/charmstore/v5/~bigdata-dev/bundle/apache-processing-spark-1/archive/bundle-dev.yaml
[14:10] <jcastro> yep, I see that
[14:10] <jcastro> oh ok, so deploy the -dev bundle instead is what you're saying?
[14:10] <kjackal> Yes, this should work
[14:11] <kjackal> I wonder if there is a way to specify the yaml within the bundle when you reference the bundle from the store?
[14:11] <jcastro> not yet
[14:11] <jcastro> but that's fine
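One workaround (a sketch, not a store feature): fetch the alternate yaml via the charm store API URL mentioned above and deploy it as a local bundle file, assuming a juju 2.x client that accepts local bundle paths:

```shell
# Download the dev variant of the bundle...
wget -O bundle-dev.yaml \
  https://api.jujucharms.com/charmstore/v5/~bigdata-dev/bundle/apache-processing-spark-1/archive/bundle-dev.yaml
# ...and deploy it as a local file instead of the store bundle.
juju deploy ./bundle-dev.yaml
```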
[14:12] <jcastro> though next time you guys publish please fill out the metadata url's so I can click through to the code, etc.
[14:12] <jcastro> right now "view code" is broken, etc. on the charm store page
[14:12] <jcastro> oh, nevermind, you did fill it out
[14:12] <jcastro> it's just "Home" and not "view code" on the page.
[14:22] <kjackal> lazyPower, the patch looks good. Merged!
[14:22] <lazyPower> ta kjackal!
[14:56] <suchvenu> Hello
[14:59] <suchvenu> I have a query regarding naming the relation in the ibm-db2 charm. The relation name is "db", defined in the metadata file as
[14:59] <suchvenu> provides:  db:   interface: db2
[15:00] <jcastro> lazyPower: I think I found a bug in ~containers/kibana
[15:00] <jcastro> unit-kibana-0: 2016-05-20 14:55:26 INFO unit.kibana/0.install logger.go:40 groupadd: group 'kibana' already exists
[15:00] <jcastro> that causes the install hook to fail
[15:00] <suchvenu> Is this name ok? Or should it be renamed to something else, like db2-db?
[15:00] <lazyPower> ah yeah
[15:01] <lazyPower> there's a patch to fix this, its known
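The failure above is a classic non-idempotent `groupadd` in an install hook. A minimal sketch of the kind of guard the pending patch presumably adds (the helper name is made up):

```shell
# Create a system group only when it does not already exist,
# so re-running the install hook is safe.
ensure_group() {
  getent group "$1" >/dev/null || groupadd "$1"
}
# In the install hook: ensure_group kibana
```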
[15:01] <jcastro> ack, thanks
[15:01] <jcastro> is this the promulgated charm or is that somewhere else?
[15:02] <lazyPower> negative, the promulgated charm is missing the dashboard action
[15:02] <jcastro> hmm, it sucks that the page doesn't make it obvious which is the promulgated charm
[15:02] <jcastro> other than looking at the URL I mean
[15:02] <lazyPower> since we were riffing a couple weeks ago, i've delayed pushing the updates until i can get the config option to deploy *with* a dashboard
[15:02] <lazyPower> and still need to trim the fat on the demonstration boards
[15:02] <suchvenu> I see the same relation name used by mysql charm. So will it cause confusion, when both are used in any bundle ?
[15:03] <lazyPower> suchvenu - the relation-name is arbitrary. The interface is the important part of that equation.
[15:03] <suchvenu> when the states of the interface are used in the reactive layer
[15:03] <suchvenu> the states from the db2 interface would be like db.available, db.ready etc
[15:04] <lazyPower> well, interestingly enough - do you use both mysql as well as db2 in the same charm? and is this a common deployment formation?
[15:04] <lazyPower> sorry, model formation
[15:04] <suchvenu> i am not using both in any of the charm as of now
[15:05] <suchvenu> I just told an example
[15:05] <lazyPower> jcastro - i'm in a lull with other things i have in flight, i can spend some time on that today. Do you mind being my primary stakeholder? I think i can get you an updated charm pushed @ containers just before standup
[15:05] <jcastro> yeah no worries
[15:05] <lazyPower> suchvenu - ok. yeah. you should be fine. The onus of using those states, and how they're named, is on the charm author consuming DB2
[15:05] <jcastro> I can't blog about beats without a dashboard, heh
[15:05] <lazyPower> ok, i'll fold in that patch and get you a revised dashboard
[15:06] <suchvenu> does the relation name need to be unique so that the states also remain unique ?
[15:06] <jcastro> what's the tldr difference between this kibana and the promulgated one?
[15:06] <lazyPower> the dashboard loader action and test updates so it actually passes and flexes the deployment
[15:08] <suchvenu> http://pastebin.ubuntu.com/16522846/
[15:08] <suchvenu> is a decorator like this fine?
[15:08] <jcastro> https://github.com/CanonicalLtd/jujucharms.com/issues/274
[15:10] <lazyPower> suchvenu - That looks fine, but without context its hard to say
[15:11] <lazyPower> suchvenu - you can mitigate any name collisions by setting the mysql database relation to something other than db when you add it ot metadata, like "mysql-db" or "mdb" or "notibmdb2"
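As a concrete example of that advice (the charm and relation names here are hypothetical), a consumer charm's metadata.yaml could name the two database relations distinctly while keeping the real interface values:

```shell
# Write a hypothetical consumer charm's metadata.yaml; the relation
# names (db2-db, mysql-db) are arbitrary, the interface values are not.
mkdir -p my-charm
cat > my-charm/metadata.yaml <<'EOF'
requires:
  db2-db:
    interface: db2
  mysql-db:
    interface: mysql
EOF
```

The reactive states would then be namespaced per relation name (e.g. `db2-db.available` vs `mysql-db.available`), avoiding the collision discussed above.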
[15:12] <suchvenu> I am thinking of the naming convention for the relation name. I have used db as the relation name; if some other database (other than db2) also uses the same relation name, will it cause any issues?
[15:13] <suchvenu> you mean to say the consumer charm of db2 or mysql can create a specific relation name like db2-db or mysql-db, and so it will not have issues?
[15:13] <lazyPower> correct
[15:13] <suchvenu> ok
[15:13] <suchvenu> Thanks lazyPower
[15:14] <lazyPower> relation-names are arbitrary. As the charm author consuming the interface, you define what that nomenclature is, and are responsible for the associated states.
[15:14] <lazyPower> np happy to help suchvenu
[15:14] <suchvenu> :)
[15:29] <cory_fu> kwmonroe, kjackal: I submit https://github.com/juju-solutions/layer-apache-bigtop-base/pull/7 and https://github.com/juju-solutions/layer-hadoop-datanode/pull/1 for your consideration
[15:31] <cory_fu> kwmonroe: Also, looks like we don't have to mess around with the bzr-owner hack any more: https://github.com/CanonicalLtd/jujucharms.com/issues/245#event-666466300
[15:32] <kwmonroe> woohoo!  nice cory_fu
[15:34] <kwmonroe> cory_fu: for the dn PR, do you feel ok returning from start_datanode with the status still set to "starting datanode"?
[15:34] <kwmonroe> i'm talking about if/when start_datanode hits the timeout and returns, of course
[15:34] <cory_fu> Oh, no, I suppose not
[15:34] <cory_fu> Good catch
[15:37] <magicaltrout> random question big data folk: https://github.com/juju-solutions/layer-hadoop-client doesn't actually provide me with a hadoop executable
[15:38] <magicaltrout> I need my charm to do something like: hadoop jar $THRAX/bin/thrax.jar
[15:39] <cory_fu> kwmonroe: Updated.  Though, I should hope that the status handling in the slave charm would actually override it
[15:41] <cory_fu> magicaltrout: Right.  Maybe we need to update the README.  The way that is intended to work is that you use layer:hadoop-client as a base layer, and then you connect it to a hadoop-plugin charm (e.g., https://jujucharms.com/hadoop-plugin/ or the older https://jujucharms.com/apache-hadoop-plugin/ for the non-Bigtop charms)
[15:41] <bdx> openstack-charmers: will there be a charm rev'ing again before 16.07 ?
[15:41] <cory_fu> magicaltrout: The plugin is what provides the Hadoop libraries, and it ensures you get the correct libraries for the particular deployment of Hadoop to which you are connecting
[15:42] <magicaltrout> yeah cory_fu when I connect the plugin to my layer:hadoop-client enabled charm it all works
[15:42] <magicaltrout> but "which hadoop" is empty and I can't figure out what its doing :)
[15:42] <magicaltrout> I thought its main role was to provide config information over the interface
[15:42] <cory_fu> magicaltrout: Can you give me a pastebin of your `juju status --format=tabular`?
[15:43] <cory_fu> It provides config info over the interface but also installs the  client libs (the plugin; the client just manages the plugin relation for you to make it a bit easier)
[15:43] <magicaltrout> hmm
[15:43] <magicaltrout> i tried that yesterday and got zip
[15:43] <magicaltrout> give me 5 mins to spin up my charm and I'll get back to you
[15:44] <cory_fu> magicaltrout: Are you deploying the older apache-hadoop-X charms, or the newer Bigtop charms?
[15:44] <magicaltrout> the older stuff
[15:44] <cory_fu> Ok
[15:44] <magicaltrout>  juju deploy cs:bundle/apache-processing-mapreduce-0
[15:44] <cory_fu> The status messages ought to tell you what, if anything, is missing, and if they all say ready then you should have the hadoop bin
[15:44] <magicaltrout> that one to be exact
[15:45] <magicaltrout> all my status flags were green
[15:45] <cory_fu> magicaltrout: One question.  Are you trying to run the Hadoop binary from inside charm code?  There is environment data that doesn't seem to get populated in the hook context for whatever reason, so it requires a bit of additional work.
[15:46] <kwmonroe> cory_fu: both PRs lgtm.  merged.
[15:46] <magicaltrout> nope cory_fu juju ssh'd in
[15:46] <cory_fu> Odd
[15:46] <cory_fu> That should definitely work
[15:47] <cory_fu> kwmonroe: Shall we do a JBD release, charm builds and publishes, and update the Bigtop PR?
[15:48] <cory_fu> kwmonroe: Also, would you consider this a bugfix release for JBD or a minor feature release?
[15:48] <cory_fu> I'm inclined to go with bugfix
[15:54] <kwmonroe> yeah cory_fu, i've got jbd 7.1.2 building now
[16:32] <Brochacho> cholcombe: Is there any way to ignore ceph config parsing errors?
[16:33] <cholcombe> Brochacho, yeah put a newline at the end of the file.  We have a patch going in to fix that
[16:36] <Brochacho> cholcombe: Thanks! Was driving me nuts, doesn't seem there's an option to silence that?
[16:36] <cholcombe> Brochacho, no and ceph is really bitchy about it
[16:36] <cholcombe> i thought about submitting a patch to ceph because it's so annoying haha
[16:39] <Brochacho> cholcombe: dang, also seems to be no way to avoid that 'dumped all in format json'?
[16:39] <cholcombe> Brochacho, when you say --format=json is it appending something to stdout?
[16:43] <Brochacho> cholcombe: One sec
[16:44] <Brochacho> cholcombe: No, wasn't grabbing stdout correctly -_-
[16:44] <cholcombe> ah ok haha
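For anyone hitting the same thing: the chatty "dumped all in format json" line typically goes to stderr, so capturing stdout alone is enough. A self-contained sketch, with `fake_ceph` standing in for the real `ceph ... --format=json` call:

```shell
# fake_ceph stands in for a ceph --format=json invocation: JSON on
# stdout, the chatty status line on stderr.
fake_ceph() {
  echo 'dumped all in format json' >&2
  echo '{"health":"HEALTH_OK"}'
}
# Capture only stdout; the stderr noise never reaches $json.
json=$(fake_ceph 2>/dev/null)
echo "$json"
```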
[18:04] <magicaltrout> i'm talking $hit cory_fu
[18:04] <magicaltrout> I suspect i probably sshd in yesterday before the path was set
[18:04] <magicaltrout> and never noticed
[18:04] <cory_fu> :)  Glad it's working for you
[18:05] <magicaltrout> well you'll have a bunch of happy linguists now who can build translation models on proper infrastructure and not standalone hadoop on their laptops
[18:19] <cory_fu> Awesome.  :)
[18:19] <cory_fu> kwmonroe: Do you use Chrome?
[18:27] <cory_fu> kwmonroe: Well, anyway.  You chastising me about approving the PR with trailing whitespace made me feel really bad, so I made this: https://github.com/johnsca/github-trailing-whitespace
[18:30] <lazyPower> cory_fu https://twitter.com/lazypower/status/733726628828852224
[18:31] <cory_fu> lazyPower: ha.
[18:31] <cory_fu> If I'd known you were going to tweet it, I would have ponied up the $5 to publish it to the Chrome Web Store
[18:47] <kwmonroe> cory_fu: sometimes i wonder what greatness you could achieve if i didn't bug you about petty stuff.
[18:47] <kwmonroe> too bad we'll never know
[18:47] <cory_fu> heh
[19:20] <kwmonroe> cory_fu: i literally think you're great.  i just spent the afternoon loading all kinds of unpacked extensions and now chrome works like firefox... in that it's mostly broken.
[19:20] <cory_fu> lol
[19:20] <cory_fu> What other unpacked extensions did you load?
[19:21] <kwmonroe> mostly stuff i wrote
[19:22] <kwmonroe> but let's not dwell on the past.  i have a bundle update for bigtop.  can you teach me git real quick so i can update pr 108?
[19:22] <lazyPower> hahaha
[19:22] <lazyPower> i can tell its Friday
[19:22] <kwmonroe> https://appear.in/kevin-does-git
[19:23] <magicaltrout> git commit -a -m "mega commit"
[19:24] <magicaltrout> git push -f
[19:24] <magicaltrout> nuke it all!
[19:24] <kwmonroe> magicaltrout: join the appear.in!
[19:25] <lazyPower> magicaltrout - thats fun when it pipelines into prod on friday at 4:59pm
[19:25] <kwmonroe> being that it's 2:25pm, i'm not that concerned
[19:28] <magicaltrout> i'm drinking beer and writing code for NASA at 8:30pm
[19:28] <magicaltrout> what could possibly go wrong?
[19:28] <magicaltrout> git push -f
[19:28] <magicaltrout> oooh sugar
[20:03] <lazyPower> jcastro cory_fu  - i have a branch ready which should squash the last issues with kibana and bring this up to spec for CWR efforts - https://code.launchpad.net/~lazypower/charms/trusty/kibana/add-dashboard-loader-action/+merge/295359
[20:03] <lazyPower> not sure if you want to poke this now or leave it, but i thought I would ping with it regardless
[20:57] <gennadiy> hello everyone, can we deploy local charms to juju2 ? i try to use "juju deploy local:trusty/sipp" but got error "unknown schema for charm URL "local:trusty/sipp""
[20:59] <tvansteenburgh> gennadiy: just give it a path
[20:59] <magicaltrout> gennadiy:  juju deploy --repository=/home/bugg/charms local:trusty/joshua-full joshua-full
[21:00] <gennadiy> a lot of changes in version 2 :) thanks
[21:00] <magicaltrout> bit of that
[21:00] <tvansteenburgh> magicaltrout: that's juju1
[21:01] <tvansteenburgh> gennadiy: all the details at `juju help deploy`
[21:02] <magicaltrout> aww tvansteenburgh whatever
[21:02] <marcoceppi> gennadiy magicaltrout `juju deploy /home/bugg/charms/trusty/joshua-full` is all you need
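Side by side, the two syntaxes being mixed up here (both commands are the ones quoted above, with magicaltrout's example paths):

```shell
# juju 1.x: repository root plus a local: URL
juju deploy --repository=/home/bugg/charms local:trusty/joshua-full joshua-full
# juju 2.x: just the path to the charm directory
juju deploy /home/bugg/charms/trusty/joshua-full
```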
[21:02] <magicaltrout> bugg@tomsdevbox:~$ juju --version
[21:02] <magicaltrout> 2.0-beta4-xenial-amd64
[21:03] <marcoceppi> magicaltrout: beta7 is the latest ;)
[21:03] <magicaltrout> i'm just a bit slow to update :P
[21:03] <magicaltrout> marcoceppi: i get scared of updating these days :P
[21:05] <marcoceppi> magicaltrout: I understand, we're getting close to RC though! when we get there it'd be nice to have you and others update to help shake the tree
[21:05] <magicaltrout> i'm only messing marcoceppi, it's just a dev environment; i'm not sure why it's stuck on beta4, must come from a random ppa or something
[21:06] <marcoceppi> magicaltrout: probably, there were package renames and such for 2.0
[21:06] <arosales> kwmonroe: cory_fu are https://jujucharms.com/u/bigdata-dev/apache-processing-mapreduce/bundle/2 and https://jujucharms.com/u/bigdata-dev/apache-processing-spark/bundle/1 the latest, or have these been promulgated?
[21:23] <cory_fu> arosales: Sorry, those are the most recent, yes
[21:30] <gennadiy> i use a local env with lxd; how do I run machines in privileged mode? i need to change "ulimit -n"
[21:32] <gennadiy> maybe i can provide lxd profile for juju
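A hedged sketch of that approach: the profile and container names below are guesses at what juju creates for a model, and `limits.kernel.*` requires a new enough LXD, so treat this as a direction rather than a recipe:

```shell
# Make an existing container privileged (container name is hypothetical).
lxc config set juju-default-machine-1 security.privileged true
# Or raise the open-file limit on the profile juju applies to its
# containers, if your LXD version supports kernel limits.
lxc profile set juju-default limits.kernel.nofile 65536
lxc restart juju-default-machine-1
```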
[21:34] <arosales> cory_fu: oh need to apologize, just making sure I found  / using the current ones
[22:42] <cory_fu> kwmonroe: You still around?
[22:44] <arosales> kwmonroe: if you're still around
[22:44] <arosales> kwmonroe: http://paste.ubuntu.com/16538524/
[22:44] <arosales> waiting on plugin :-(
[22:45] <arosales> cory_fu: admcleod  ^
[22:45] <cory_fu> arosales: Hrm.  I think we had a fix for that, and I thought kjackal published it to bd-dev earlier today
[22:46] <cory_fu> arosales: Can you give me the [Services] section to see what charm revs you're  using?
[22:46] <arosales> ya, seems it cropped back up or didn't make to dev branch
[22:46] <cory_fu> arosales: I think the charm may have been updated but the bundle missed
[22:46] <arosales> http://paste.ubuntu.com/16538576/
[22:47] <cory_fu> arosales: You're not even using the bd-dev versions of the charms
[22:48] <cory_fu> The main bundle.yaml must be pointing to the prod charms, which makes sense, but doesn't help for demo / testing
[22:49] <cory_fu> arosales: Do you need a fix asap?
[22:50] <arosales> cory_fu: just doing a lightning talk, but I can gloss over that as I am almost up
[22:50] <arosales> cory_fu: should I have been using a different bundle?
[22:51] <cory_fu> arosales: If you had time, I'd say `juju upgrade-charm --switch spark cs:~bigdata-dev/trusty/apache-spark` but it sounds like that would be risky and too close to the wire
[22:51]  * arosales will try it :-)
[22:51] <magicaltrout> go on arosales live life on the edge!
[22:52] <arosales> error: invalid service name "cs:~bigdata-dev/trusty/apache-spark"
[22:52] <cory_fu> arosales: the bundle-dev.yaml in that bundle should have worked, but we need to get those charms promulgated and sorted.  We switched focus all on to bigtop and they got left behind
[22:52] <cory_fu> arosales: My args were backwards
[22:53] <arosales> magicaltrout: keeps things exciting
[22:53] <magicaltrout> indeed
[22:53]  * arosales waves to magicaltrout
[22:53] <cory_fu> juju upgrade-charm cs:~bigdata-dev/trusty/apache-spark spark
[22:53] <cory_fu> maybe
[22:53] <cory_fu> I forgot the --switch
[22:53] <arosales> error: unrecognized args: ["spark"]
[22:53] <cory_fu> -_-
[22:53] <cory_fu> Does anyone here remember the syntax for upgrade-charm --switch?  :p
[22:54] <cory_fu> Also, does that even still work in 2.0?
[22:54] <magicaltrout> the way things change i barely get through deploying with out doing juju help commands  :P
[22:55] <cory_fu> juju upgrade-charm spark --switch cs:~bigdata-dev/trusty/apache-spark
[22:55] <cory_fu> arosales: ^
[22:55]  * arosales reading help
[22:55] <arosales> ya that looks better
[22:55] <arosales> cory_fu: thanks
[22:56]  * cory_fu sits back and watches arosales's lightning talk burn.
[22:56] <arosales> lol
[22:56] <cory_fu> Er, I mean, good luck!
[22:56]  * kwmonroe has much faith
[22:56] <arosales> I only have 39 machines running
[22:56] <arosales> what could go wrong
[22:56] <cory_fu> lol
[22:56] <arosales> kube, swarm, spark, and 2 hadoop clusters
[22:57]  * arosales had to make another ec2 support request to up limits
[22:58] <kwmonroe> arosales: up from 50?!?!
[22:58] <arosales> kwmonroe: well for us-west-2 as us-east-1 errored out on provisioning
[22:59] <kwmonroe> azure will go until your wallet empties without all this "limit" nonsense
[23:00]  * magicaltrout makes a note not to use azure
[23:00] <kwmonroe> :)
[23:01] <arosales> the upgrade worked, but I lost ha on spark
[23:01] <cory_fu> kwmonroe: The reason I pinged you, btw, is I wanted to know where to file bugs against layer:ibm-base
[23:02] <cory_fu> >_<
[23:02] <kwmonroe> cory_fu: file them against cory.johns@canonical.com
[23:02] <cory_fu> arosales: Did you lose HA, or did the status message just change?  I think it reports it slightly differently
[23:02] <cory_fu> arosales: Can you give me the new status?
[23:02] <cory_fu> kwmonroe: ha
[23:03] <cory_fu> kwmonroe: Also, did you know that Monday is a holiday?  I didn't.
[23:03] <kwmonroe> cory_fu: a holiday for who(m)?
[23:03] <arosales> http://paste.ubuntu.com/16538972/
[23:03] <cory_fu> kwmonroe: All of the US
[23:03] <cory_fu> kwmonroe: It's Memorial Day
[23:04] <kwmonroe> holy crap, is it May already?!?!
[23:04] <cory_fu> Indeed.  It's almost my birthday, even.  :p
[23:04] <cory_fu> arosales: "Fetching resources" isn't the most helpful status message
[23:05] <cory_fu> arosales: I think the status message may no longer include "HA" but it should mention which is master
[23:05] <arosales> sorry wrong buffer
[23:05] <arosales> http://paste.ubuntu.com/16539022/
[23:06] <arosales> workload status = standalone
[23:06] <kwmonroe> arosales: that looks legit
[23:06] <cory_fu> arosales: Yep.  The one that says (standalone - master) means that it is in HA
[23:06] <kwmonroe> you're in 'standalone' mode, which means any of those can take over at any moment
[23:06] <arosales> ah yes, I was looking at unit 1
[23:06] <arosales> unit 0 = Ready (standalone - master)
[23:07] <arosales> good stuff
[23:07] <magicaltrout> midnight rolled around, it is my birthday, i beat you cory_fu :P
[23:07] <arosales> Happy Birthday magicaltrout !
[23:07] <cory_fu> kwmonroe: But for reals, clicking "Bugs" on https://code.launchpad.net/~ibmcharmers/layer-ibm-base/trunk goes to a 404
[23:07] <magicaltrout> meh
[23:07] <kwmonroe> happy bday magicaltrout!!!
[23:08] <cory_fu> magicaltrout: Really?!?  Happy birthday!!!!!
[23:08] <magicaltrout> i got my age wrong earlier
[23:08] <magicaltrout> i'm over it
[23:08] <arosales> happy early birthday cory_fu
[23:08] <kwmonroe> cory_fu: please file a bug re: the bug url.  i'll get to it monday.
[23:09] <cory_fu> magicaltrout: I do that all the time.  Actually, my health insurance had my birth year as 1995, and I was super excited to have de-aged until they fixed the glitch
[23:09] <cory_fu> kwmonroe: ha
[23:09] <cory_fu> arosales: :)  Thanks
[23:09] <arosales> 1995, wow
[23:09] <magicaltrout> hehe
[23:09] <magicaltrout> people born in the 90's and 2000's depress me
[23:10] <arosales> cory_fu: should have quickly filled for life insurance
[23:10] <arosales> pretty amazing how well upgrade charm worked
[23:10] <arosales> and the operational knowledge distilled in those charms
[23:11] <cory_fu> arosales: In case you didn't notice (not at all because I forgot to put it in until just now), I asked for next Friday off for my (belated) birthday.  ;)
[23:11] <arosales> pretty amazing. On one controller I have 5 models with 40 machines with completly functioning kubernetes, swarm, spark cluster, hadoop cluster (ha), and hadoop cluster (bigtop)
[23:12] <arosales> cory_fu: most def you should take Friday off
[23:12] <arosales> magicaltrout: you too :-)
[23:12] <magicaltrout> aww thanks arosales
[23:12] <cory_fu> arosales: Well, upgrade-charm --switch is a little duplicitous, considering it changes the entire charm out from under the service.  But Spark is pretty resilient to being switched out, as long as jobs aren't actively running in standalone mode.
[23:12] <kwmonroe> hehe, magicaltrout, i will give you next thursday off too.  keep being great.
[23:13] <arosales> kwmonroe: is soo generous
[23:13] <magicaltrout> hehe
[23:13] <arosales> cory_fu: ya upgrade implies you are already upgrading
[23:13] <magicaltrout> i get tomorrow off.....
[23:13] <cory_fu> magicaltrout: I was sad to realize I didn't quite make it under the wire to get hugs if I were born in the 80's
[23:13] <arosales> ha, but only tomorrow
[23:14] <magicaltrout> yeah sunday the mrs gets her mum day... so i'm lumped with the kids
[23:14] <kwmonroe> magicaltrout: are you still coding for nasa?  i feel like i don't want my rovers coded by you at this time of night.
[23:14] <magicaltrout> thats work... right?
[23:15] <arosales> #daddyduty
[23:15] <magicaltrout> i am kwmonroe..... currently wondering why requirejs hates me so much :)
[23:15] <arosales> magicaltrout: indeed it is
[23:15] <magicaltrout> yeah thought so
[23:15] <magicaltrout> so i get 1 day off
[23:15] <kwmonroe> not because you're not a great coder, mind you, but because i fear your maliciousness when you don't get cake for your birthday.
[23:15] <magicaltrout> oh well, better than none
[23:15] <magicaltrout> die rover! die!!!!
[23:16] <arosales> magicaltrout: you need another jubilee year
[23:16] <cory_fu> magicaltrout: Or worse: https://www.youtube.com/watch?v=Y6ljFaKRTrI
[23:16] <kwmonroe> exactly
[23:17] <magicaltrout> lol
[23:17] <kwmonroe> aight arosales, i have to bid you farewell and good luck.  i'm sure the LT will go great because juju is frickin amazing.  lmk how you fare.  be well all!
[23:17] <arosales> kwmonroe: have a good long weekend
[23:17] <magicaltrout> cya
[23:18] <arosales> thanks for the last minute help here
[23:18] <cory_fu> arosales: np.  Are you up already for your lightning talk? Are you IRCing from the podium?
[23:18] <arosales> not up yet
[23:18] <arosales> like 15 min out
[23:19] <magicaltrout> makes me feel as prepared as i was for my talk
[23:20] <cory_fu> I had the impression you were upgrading charms as you walked up to the stage.  :p
[23:22] <cory_fu> Alright.  Really must go, or my wife will murder me.  Have a good weekend, all!
[23:22]  * cory_fu disconnects from the Matrix.
[23:22] <cory_fu> Oh, and happy birthday again, magicaltrout!
[23:22] <arosales> cory_fu: have a good weekend
[23:22] <magicaltrout> thanks....
[23:25] <arosales> juju spoils one. ya I can spin up 5 clusters across 40 machines 1 hour before my talk :-)
[23:26] <magicaltrout> depends if you write the charms yourself or not.... if you use the lovely bigdata dev charms thats probably true ;)
[23:28]  * arosales would do the same with saiku :-)