[07:07] <kjackal> Good morning Juju world!
[12:39] <cnf> wow, juju is doing weird stuff
[12:39] <cnf> or maybe i am
[12:45] <cnf> hmm
[12:45] <cnf> why won't it spin up machines?
[12:47] <cnf> it seems to be ignoring the space constraints and configuration o,O
[12:51] <kjackal> Hello lazyPwr, are you around?
[12:51] <cnf> k, meeting, i'll debug this later
[13:32] <cnf> ok
[13:33] <cnf> ugh, i can't get juju to deploy anything sane o,O
[13:33] <cnf> how do you debug what juju is doing?
[13:34] <SimonKLB> cnf: you can watch the machine and/or unit logs using `juju debug-log`
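A minimal sketch of the debug-log workflow SimonKLB describes, assuming a juju 2.x client; the machine and unit names are illustrative:

    # replay the stored log history instead of waiting for new lines
    juju debug-log --replay
    # narrow the output to a single machine or unit
    juju debug-log --include machine-0
    juju debug-log --include unit-mysql-0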
[13:35] <cnf> debug-log isn't showing anything
[13:35] <SimonKLB> it's empty?
[13:36] <cnf> i do a juju debug-log, and then i do a deploy, and it produces no output
[13:36] <SimonKLB> what does juju status tell you?
[13:36] <SimonKLB> is a machine even provisioned?
[13:36] <cnf> it says a lot of things are up
[13:36] <cnf> it failed to boot 1 machine, and it is using the wrong ip's for things
[13:36] <SimonKLB> that's odd, the debug-log should definitely say something if that's the case
[13:37] <cnf> but it has things booted and assigned
[13:37] <SimonKLB> it might be some disconnect between your client and the machine then
[13:37] <cnf> i don't get why it didn't boot the one machine
[13:37] <cnf> and i don't understand why it is using the wrong network on the other ones
[13:37] <SimonKLB> can you ssh into the machine using juju ssh #?
[13:38] <cnf> which one?
[13:38] <cnf> i can access all the ones that are up, i just used ssh without juju though
[13:39] <SimonKLB> are you logged in as the default "admin" user in juju?
[13:39] <SimonKLB> or are you logged in as a newly created user with custom ssh keys?
[13:40] <cnf> uhm, idno?
[13:40] <cnf> i can do juju ssh <machineid>
[13:40] <SimonKLB> check juju whoami
[13:40] <SimonKLB> oh you can?
[13:40] <cnf> yes
[13:40] <SimonKLB> then check the logs in /var/log/juju
[13:40] <cnf> hmz, ffs, it seems it totally ignored my constraints
[13:40] <cnf> and why isn't it booting the 4th machine?
[13:41] <cnf> SimonKLB: how will that tell me why it is putting things on the wrong machines?
[13:41] <SimonKLB> i would make sure that one machine is deployed and installed correctly first
[13:41] <cnf> well, it can't install correctly
[13:41] <SimonKLB> probably just create a fresh model and deploy the super-small ubuntu charm
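Roughly what SimonKLB is suggesting, as a sketch; the model name is arbitrary:

    juju add-model scratch
    juju deploy ubuntu
    juju status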
[13:41] <cnf> it doesn't have the right networks
[13:42] <SimonKLB> which provider are you deploying juju on?
[13:42] <cnf> maas
[13:42] <SimonKLB> ah, so youre in charge of setting up the "cloud" as well then
[13:43] <SimonKLB> the issue might be with maas and not juju
[13:43] <SimonKLB> if you want to get used to juju id suggest setting it up on LXD or some cloud like aws first
[13:43] <cnf> lxd is local only
[13:43] <cnf> and i don't have AWS credit
[13:44] <cnf> and debugging why it's putting stuff wrong should be the same
[13:44] <SimonKLB> yea, well it might be good to do it locally with LXD just to get to know it all before you try to create a production ready environment
[13:44] <cnf> LXD is LINUX local only
[13:45] <cnf> and this is a PoC, not a production env
[13:46] <SimonKLB> just saying, setting up maas and juju when both are totally new to you might be overwhelming, it was for me
[13:46] <cnf> well, totally new, been at this for over 2 weeks
[13:46] <cnf> neither are very confidence inspiring so far
[13:47] <SimonKLB> im no maas expert, so you might want to check with #maas for more network debugging
[13:47] <cnf> either way, LXD doesn't really let me test constraints etc
[13:47] <cnf> the network works fine
[13:47] <cnf> juju is putting things on the WRONG machines
[13:47] <SimonKLB> oh i thought you said it did not, my bad
[13:47] <cnf> and assigning the WRONG network to services
[13:48] <cnf> if it is in subnet A on one machine and subnet B on another, then they can't talk to each other
[13:48] <cnf> which will obviously make things break
[13:48] <SimonKLB> yea, that's what the bindings are for, right?
[13:48] <cnf> and juju is ignoring them
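For context, a sketch of how endpoint bindings are declared at deploy time in juju 2.x; the space names here are invented:

    # default every endpoint to internal-space, but put the "db" endpoint on db-space
    juju deploy mysql --bind "internal-space db=db-space"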
[13:49] <SimonKLB> i'd create a bug for that then
[13:49] <SimonKLB> however, you should still be able to watch the logs even though the services are getting the wrong subnet
[13:49] <SimonKLB> it might give you some clues
[13:50] <SimonKLB> check the machine log inside the machine if youre unable to get it via juju debug-log
[13:50] <SimonKLB> that is /var/log/juju/machine-#.log
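If debug-log stays silent, a sketch of reading that file directly; the machine number is illustrative:

    juju ssh 0 -- sudo tail -f /var/log/juju/machine-0.log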
[13:50] <cnf> i find it annoying i can't get why it's doing this
[13:52] <cnf> ugh, ffs >,<
[13:53] <SimonKLB> are the spaces added to juju? juju spaces
[13:54] <SimonKLB> and do the bindings correctly correlate to the spaces in juju?
[13:56] <cnf> yes
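The two checks SimonKLB is asking about, for reference:

    # list the spaces juju knows about, and the subnets behind them
    juju spaces
    juju subnets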
[13:56] <SimonKLB> which version of juju and which version of maas are you running?
[13:57] <cnf> 2.1.1-sierra-amd64 and 2.1.3+bzr5573-0ubuntu1
[13:58] <cnf> why can't i ask juju why it isn't booting a machine o,O
[13:59] <cnf> hmz
[14:04] <SimonKLB> cnf: there might be information in the controller node logs
[14:05] <cnf> there should be a lot more debug info on this
[14:06] <cnf> hmm, now it's booting all 4
[14:06] <cnf> o,O
[14:06] <cnf> i didn't change anything
[14:09] <SimonKLB> great :)
[14:09] <cnf> well, no, not when i don't know why
[14:10] <SimonKLB> what you could do is check the cloud-init log as well, that has some initial setup stuff before juju is installed
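A sketch of that check; these are the conventional cloud-init log paths on an ubuntu machine:

    juju ssh 3 -- less /var/log/cloud-init.log
    juju ssh 3 -- less /var/log/cloud-init-output.log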
[14:50] <cnf> hmz
[14:50] <cnf> now all my machines came up
[14:50] <cnf> but everything is red anyway
[14:50] <cnf> http://termbin.com/n81q
[14:52] <SimonKLB> and what does the machine log say?
[14:53] <ybaumy> is there a way to display juju status without the relations section
[14:53] <cnf> ybaumy: if you find out, please let me know :P
[14:54] <cnf> hmm, juju is doing silly things with networking again! >,<
[14:55] <cnf> trying to use vlan tags that are not present
[14:57] <ybaumy> juju status | sed -e '/^Relation/,$d'
[14:57] <ybaumy> just to let you know
[14:57] <ybaumy> much nicer
[14:57] <ybaumy> i will create alias for that
[14:58] <SimonKLB> ybaumy: juju status --color | sed -e '/^Relation/,$d'
[14:58] <SimonKLB> even better ;)
[14:59] <ybaumy> ahh i can see now
[14:59] <ybaumy> :D
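The alias ybaumy mentions could also be written as a small shell function, which sidesteps the quoting headache around sed's $d; the name jst is arbitrary:

    jst() { juju status --color "$@" | sed -e '/^Relation/,$d'; }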
[15:00] <cnf> :P
[15:01] <cnf> i'm still not sure how to deal with machine loss in juju
[15:02] <cnf> atm my action for a failure is "delete model, add model, deploy again"
[15:05] <ybaumy> cnf: what do you mean by machine loss
[15:05] <ybaumy> one machine?
[15:05] <cnf> yes
[15:05] <ybaumy> a ceph osd ?
[15:05] <cnf> a machine crashes
[15:05] <cnf> it's lost
[15:06] <cnf> you have plenty in your maas, so how do you replace it?
[15:06] <SimonKLB> cnf: just saw your link, your initial poc is openstack? :D
[15:06] <cnf> yes
[15:06] <SimonKLB> courageous!
[15:06] <cnf> openstack is the _entire_ and only reason for looking at juju
[15:07] <SimonKLB> i would probably start with something smaller if i were you though
[15:07] <ybaumy> cnf: cant you just setup a new ceph-osd and remove the other one from the system
[15:07] <ybaumy> SimonKLB: for me too openstack is the reason im here
[15:07] <cnf> SimonKLB: start with that?
[15:07] <cnf> what*
[15:08] <cnf> ybaumy: if it's just a single unit, i guess
[15:08] <cnf> ybaumy: if it's something that has more functionality, i have to do all that manually?
[15:08] <SimonKLB> ybaumy: yea juju is great at deploying complex things, but deploying openstack the first time you use juju and maas might be a bit much
[15:08] <ybaumy> cnf: i guess so http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/
[15:09] <cnf> SimonKLB: as i said, i have been at this for 2 weeks now
[15:09] <SimonKLB> cnf: just try getting a smaller deployment to work well to make sure that maas and juju are running correctly
[15:09] <cnf> something "simple" works
[15:09] <cnf> as soon as I step off the yellow brick road, shit breaks spectacularly
[15:10] <cnf> and no real debugging
[15:10] <cnf> SimonKLB: what smaller deployment?
[15:10] <cnf> wordpress?
[15:10] <SimonKLB> you could try adding complex networking to a simpler bundle
[15:10] <cnf> that works, and has no relevance to me
[15:10] <ybaumy> SimonKLB: i tried it manually the first time. and i really had problems. and it took me a very long time to get to the point where it was working. but the thing is .. we want our customers to setup their own cloud via a website so juju is a very good thing
[15:10] <SimonKLB> try setting up wordpress + mysql with multiple subnets?
[15:11] <cnf> SimonKLB: and then what? i know nothing of wordpress
[15:12] <SimonKLB> cnf: what i would do is deploy it with the bindings you want later on with openstack, ssh into the machines and make sure that the network is getting configured correctly
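A sketch of that test, assuming two spaces named public-space and internal-space already exist in the model:

    juju deploy wordpress --bind public-space
    juju deploy mysql --bind internal-space
    juju add-relation wordpress mysql
    # then verify which addresses each unit actually got
    juju ssh wordpress/0 -- ip addr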
[15:12] <cnf> SimonKLB: but wordpress doesn't need 4 networks
[15:12] <cnf> i think?
[15:12] <cnf> idno, i never use wordpress
[15:13] <SimonKLB> haha sure no, but just to have something a little bit more manageable to start off with
[15:13] <cnf> i don't find something i don't know and don't need "simpler" personally
[15:14] <cnf> "waiting for machine"
[15:14] <cnf> waiting for _what_ machine? o,O
[15:14] <SimonKLB> i'd think it would be easier to debug just two machines with one charm each rather than the complete openstack bundle
[15:14] <SimonKLB> especially if youre having network problems
[15:15] <cnf> i'm not having network problems
[15:15] <cnf> the NETWORK works fine
[15:16] <SimonKLB> cnf: "waiting for machine" means the machine isnt provisioned yet
[15:16] <SimonKLB> in your case it looks like the lxd containers?
[15:17] <cnf> idno, i find juju very lacking in debugging tools, and ways to ask it what is going on
[15:17] <SimonKLB> ssh to machine 0, check the juju logs, make sure lxd works correctly
[15:17] <SimonKLB> ive never had any issue working out what is going on during provisioning from reading the logs
[15:18] <SimonKLB> but if you cant reach them using juju debug-log you have to enter the machine
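A sketch combining those two suggestions; machine 0 is taken from the status output:

    # confirm lxd is answering on the host
    juju ssh 0 -- sudo lxc list
    # then read the agent logs in place
    juju ssh 0 -- ls /var/log/juju/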
[15:24] <cnf> neutron-gateway/0*        error     idle        0        195.130.158.10           hook failed: "config-changed"
[15:24] <cnf> great
[15:25] <SimonKLB> so that is super easy to debug if you check the logs
[15:25] <cnf> FFS!
[15:26] <cnf> it is once again trying to use the WRONG vlan id!
[15:26] <cnf> hmz
[15:26] <SimonKLB> ive never used vlans with juju, sorry!
[15:26] <cnf> and it's frustrating it takes 25 minutes between tries
[15:26] <cnf> because HP hardware is very slow in booting :P
[15:27] <SimonKLB> yea we have a maas setup here as well, but i never use it until im very sure everything works as it should on other providers first
[15:27] <SimonKLB> i dont want to do test-iterations on physical machines
[15:27] <SimonKLB> it takes forever
[15:29] <cnf> it is what i have
[15:29] <cnf> hmm, how do i make it try again if i changed config?
[15:29] <magicaltrout> resolved
[15:29] <SimonKLB> cnf: juju resolved [unit]
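Concretely, for the failed unit in the status paste above:

    # tell juju the error is handled so the unit re-runs the failed hook
    juju resolved neutron-gateway/0
    # then watch the retry
    juju debug-log --include unit-neutron-gateway-0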
[15:31] <cnf> k
[15:31] <cnf> hmz
[15:31] <cnf> this stuff is starting to get on my nerves
[15:38] <ybaumy> haha i tried to add an alias jj for that status command. and i always got an error. hmm now i found out that jj is another command
[16:00] <marcoceppi> tvansteenburgh: can I "add-user" with libjuju?
[16:02] <tvansteenburgh> marcoceppi: when this lands https://github.com/juju/python-libjuju/pull/89
[16:03] <marcoceppi> tvansteenburgh: niceee
[16:04] <tvansteenburgh> marcoceppi: community contrib even
[16:07] <marcoceppi> tvansteenburgh: on line 149, acl was removed, but I don't see it implemented anywhere else. How would you add-user then set grants?
[16:07] <marcoceppi> tvansteenburgh: nvm, model.grant
[16:07] <marcoceppi> that makes more sense
[16:08] <marcoceppi> time to make a pull request
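For reference, the CLI analogue of the add-user-then-grant flow being discussed; the user and model names are made up:

    juju add-user simon
    juju grant simon write some-model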
[17:21] <Zic> Cynerva: hey, do you remember my bug and the certificate error from kubectl exec/logs? I just solved it!
[17:21] <Zic> I'm going to post how on the GitHub's issue
[17:22] <Zic> (TL;DR: something overrode KUBELET_ARGS in /etc/default/kubelet on my nodes, and so the --client-ca-file flag was not there after the upgrade)
[17:22] <Zic> I think it's a mix of juju upgrade-charm vs. apt upgrade
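A quick probe for the symptom Zic describes, run against an affected worker; the unit name is illustrative:

    # the bare "--" stops grep parsing the pattern as an option
    juju ssh kubernetes-worker/0 -- sudo grep -- --client-ca-file /etc/default/kubelet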
[17:36] <lazyPwr> Zic: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/238
[17:41] <Zic> lazyPower: oh, welcome back :p I didn't know you were here :)
[17:43] <Zic> lazyPower: yeah, it's exactly this bug
[17:43] <lazyPower> Zic: yeah, illness can only defer me for so long ;)
[17:44] <lazyPower> sorry you encountered this, but we are aware of the issue. And it's got a lot of weird manifestations
[17:44] <lazyPower> namely those x509 errors littering the logs
[17:44] <lazyPower> and some failure scenarios when doing operations
[17:44] <Zic> was non-blocking as we did our urgent-debug with `docker exec` directly on the node
[17:45] <Zic> (while waiting for a resolution)
[17:45] <Zic> it's not comfortable but it works
[17:45] <lazyPower> Zic: I'm glad it was low enough impact you were able to work around it,  I'm still sorry you hit this though :(  I ack'd this change and didn't do a full upgrade test on it
[17:45] <lazyPower> you can point a finger at me for this one
[17:47] <Zic> lazyPower: do you want me to mark my bug as a duplicate of yours then?
[17:48] <lazyPower> Zic: Where did you file your bug?
[17:48] <Zic> https://github.com/kubernetes/kubernetes/issues/43209
[17:49] <lazyPower> Zic: nah, your bug predates mine and this is upstream so it gets a bit higher visibility
[17:49] <lazyPower> i xreffed them, thanks for filing the bug with detailed info
[17:49] <Zic> :)
[17:50] <Zic> I just posted ~20min ago how I worked around this
[17:50] <Zic> it seems that Juju does not actively override /etc/default/kubelet
[17:50] <lazyPower> yeah its a couple missing flags during upgrade
[17:50] <lazyPower> i'm not sure why it didn't trigger that defaults file to get re-written...
[17:51] <Zic> (was a bit sceptical since /etc/default/kubelet has read-only permissions)
[17:51] <lazyPower> i think we have a stale state guard somewhere that prevented the flags from getting added on an upgrade run
[17:51] <lazyPower> Zic: well we set that read-only permission so it discourages users from putting manually edited changes in there
[17:52] <lazyPower> as they wont be persisted... the next time that file gets an update it'll nuke whatever customization you put in there that isn't in unitdata for the charm
[17:52] <Zic> yeah, and I feared that my KUBELET_ARGS would be overwritten in the next few minutes
[17:52] <Zic> but no, it holds for now :p
[17:52] <lazyPower> each k8s service has a set of flags under control by this little python module i put together to manage flags.
[17:52] <lazyPower> it was easier to just probe an object than to grep files... that was my reasoning anyway. So without the flags existing in there, its likely things are going to go MIA
[17:54] <lazyPower> but yeah, whatever blocked it from getting updated is apparently blocking that whole scenario from coming true
[17:54] <lazyPower> i suspect there's a stale state or data_changed guard to blame for why it didn't get the update.
[17:55] <lazyPower> Zic: I should be done writing the etcd3 upgrade tests today and can move to trying to get a patch release for this tomorrow. Thanks again for following up
[17:56] <Zic> other than that, the upgrade goes pretty well, and kube-dns is much more stable in this release
[17:56] <Zic> don't know if it's a new kube-dns image or something else but it hasn't crashed once in three days
[18:00] <lazyPower> :) I noticed the same
[18:00] <lazyPower> its the image
[18:00] <ybaumy> is the guy here who wrote that charmscaler ?
[18:01] <tvansteenburgh> ybaumy: SimonKLB
[18:02] <ybaumy> thanks
[18:02] <ybaumy> SimonKLB: does that already work with ceph and nova-compute?
[18:02] <stormmore> o/ juju world... miss me?
[18:15] <lazyPower> ybaumy: it works with any charm, it depends on telegraf sending CPU metrics to influxdb
[18:15] <lazyPower> stormmore: always ;)
[18:18] <ybaumy> lazyPower: but ceph is io. how about that one then
[18:19] <ybaumy> io and space
[18:20] <ybaumy> would be cool if new nodes are added when a certain iops barrier is broken, as well as on low space issues
[18:21] <ybaumy> sorry for my typos
[18:21] <ybaumy> i need beer soon
[18:21] <lazyPower> ybaumy: it doesn't have support for metrics outside of CPU that i'm aware of today (room for improvement for future iterations)
[18:22] <ybaumy> lazyPower: ok thanks
[18:22] <lazyPower> i talked to SimonKLB about this briefly during the review cycle, and they're talking about adding more metrics to use in the charmscaler
[18:22] <lazyPower> but i dont have an ETA or what those metrics would be
[18:22] <ybaumy> lazyPower: good to know that somebody is working on something like that.
[18:23] <lazyPower> SimonKLB: if i've grossly misrepresented anything here, i apologize :) Please correct me and cc me on the response :)
[18:23] <lazyPower> ybaumy: yeah :D They started with an openstack scaler and took a step back and wrote a generic juju scaler, which is pretty choice that it now works with more things than just openstack vm's
[18:24] <ybaumy> lazyPower: thats cool even if im just here for openstack .. currently
[18:24] <lazyPower> ybaumy: it works there too ;)
[18:24] <ybaumy> lazyPower: i understood that
[18:30] <ybaumy> beer and football. bye
[18:34] <Cynerva> Zic: cool, glad you were able to get that fixed :)
[18:37] <Cynerva> Zic lazyPower: I feel dumb for not putting two and two together, but we saw similar issues with flags not updating on kubernetes-worker in our snap branch, and ended up putting a fix in there
[18:37] <lazyPower> Cynerva: we have 4 parallel long running branches of kubernetes right this minute, i'm not surprised at all things are getting messy
[18:37] <Cynerva> heh, yeah
[18:38] <lazyPower> Cynerva: fyi, i pinged you matt rye and marco on the revised registry action i validated this morning
[18:38] <Cynerva> hopefully we can get the fix landed with our branch, but i'll update the issue in case we need to fix it separately
[18:38] <lazyPower> it didn't look like it was going to collide with anything, but i do want your input to see if any of that will cause a headache with your snap branch, or tim's gpu.
[18:39] <lazyPower> the only things that were updated are the action yaml and readme; the rest was isolated.
[18:39] <lazyPower> oh and ingress configmap bits
[18:39] <Cynerva> lazyPower: okay, looking
[18:39] <lazyPower> that might be fun with our addon compiler
[18:39] <lazyPower> we'll see though.
[18:44] <stormmore> lazyPower, probably like a hole in the head ;-)
[18:44] <lazyPower> stormmore: i have 2 of those
[18:45] <lazyPower> actually i dont know where i'm going with this... so nevermind.
[18:52] <lazyPower> tvansteenburgh: marcoceppi - is there a way for me to run an upgrade-charm --switch in amulet? I didn't see anything while skimming the api docs.
[18:52] <marcoceppi> lazyPower: you have access to a `juju` method which you can pass in any args you want
[18:52] <lazyPower> marcoceppi: self.d.juju?
[18:52] <marcoceppi> lazyPower: looking
[18:53] <lazyPower> is this right off the deployment object or is this a different object all together?
[18:53] <lazyPower> ah ok ty
[18:53] <stormmore> lazyPower, do we have an eta on when the fix for that bug we identified last week will land?
[18:53] <lazyPower> stormmore: x509 certs?
[18:54] <lazyPower> stormmore: related to worker upgrade path from pre 1.5.3?
[18:54] <stormmore> yeah
[18:55] <marcoceppi> lazyPower: https://github.com/juju/amulet/blob/master/amulet/helpers.py#L61
[18:55] <lazyPower> marcoceppi: ty
[18:58] <lazyPower> stormmore: i've got that on my list to tackle first thing tomorrow and propose a hotfix for it
[18:59] <lazyPower> stormmore: i was going to work with Cynerva who already has a branch fix for it, but that particular branch has not been folded back into master yet, so cherrypicking ftw
[19:00] <lazyPower> marcoceppi: i have a more advanced question thats going to be messy i think...
[19:01] <lazyPower> marcoceppi: how does amulet resolve the current charm under test? eg: d.add('etcd') is all you do... there's some derivative of that which puts things together for the author, and I wont have that by routing to this helpers.juju method.
[19:01] <marcoceppi> lazyPower: what.
[19:01] <lazyPower> well, i probably phrased this all wrong
[19:01] <lazyPower> so in amulet, you d.add('thing') and it resolves where 'thing' is and puts it in /tmp and deploys that, right?
[19:02] <lazyPower> i cant just say --switch /path/on/disk and expect this to work in CI without using the same path resolution amulet uses to determine where my local charm is.
[19:02] <lazyPower> or am i overthinking it?
[19:03] <marcoceppi> lazyPower: you're trying to validate an answer without asking a question. What are you trying to do? I might be able to give you a better path forward
[19:04] <lazyPower> marcoceppi: i need to deploy etcd from the store at revision 24, it will be the last deb based release. To ensure i dont break upgrades moving forward, i'm adding a test to deploy the local charm as an upgrade from revision 24
[19:04] <marcoceppi> lazyPower: ah, so that's probably problematic
[19:04] <lazyPower> yeahhhhh
[19:05] <lazyPower> i was hoping that wouldn't be the case. I can probably shell script this enough to be executed by bundletester as a class of tests
[19:05] <marcoceppi> because d.add will see you're adding etcd, regardless of version and just laugh at you and say "this isn't what you want"
[19:05] <lazyPower> eg bundletester -Y upgrade-test where it does env setup in bash and then hands over to amulet
[19:05] <marcoceppi> and deploy the local copy instead
[19:05] <marcoceppi> give me 2 mins to get you some sample script
[19:06] <stormmore> lazyPower, ah the fun! no worries, I am trying to get caught up with documentation right now anyway
[19:07] <marcoceppi> lazyPower: this is what will mess you up: https://github.com/juju/amulet/blob/master/amulet/charm.py#L54
[19:08] <lazyPower> marcoceppi: yeah thats exactly what i was thinking
[19:08] <lazyPower> the path resolution amulet uses by default is what i should be using, but as you're calling out, its going to punch me in the face
[19:08] <marcoceppi> lazyPower: so you couldn't ever really deploy a prev version
[19:09] <marcoceppi> you could put d.add('etcd-24') but that will simply boil down to "etcd is what I'm testing, so lets use the repo on disk not the store"
[19:09] <marcoceppi> the code was too clever
[19:09] <lazyPower> marcoceppi: so how about the proposed work-around where it becomes a separate test configuration, that is initially set up with a bundle file, and then it manually upgrades the charm and drops into the amulet test suite associated with that scenario? would that be good enough if amulet can just source whats in the model and say "ok, i can proceed now?"
[19:10] <marcoceppi> lazyPower: that should work
[19:10] <lazyPower> ok let me strawman this out and see if i can get anywhere in 40 minutes.
[19:10] <lazyPower> if not i'll bug you again about possible redirection
[19:11] <marcoceppi> lazyPower: yeah, we can patch d.add() to handle strongly typed charmstore urls differently than lazy typed ones
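A sketch of the workaround lazyPower outlines, expressed as plain CLI steps outside amulet; the store revision and local path are illustrative:

    # deploy the last deb-based release pinned by store revision
    juju deploy cs:etcd-24
    # once it settles, upgrade to the local charm under test
    juju upgrade-charm etcd --path ./builds/etcd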
[19:12] <marcoceppi> stokachu: how is the OSX support for conjure-up coming along?
[19:13] <stokachu> marcoceppi: still on the todo list, getting jaas support added now
[19:14] <kwmonroe> stokachu: i'll be your huckleberry if/when you need a conjure-up tester ^^
[19:14] <kwmonroe> i happen to have a shiny macintosh
[19:15] <stokachu> kwmonroe: thanks, yea i am trying to finish this other stuff up to get to that next
[19:16] <kwmonroe> no problem stokachu, there's still plenty of time before EOD ;)
[19:52] <lazyPower> marcoceppi: i think this workaround will work. its a bit meat-fisted because its executing between lint/proof steps but i'll eat the time differential to have a functional automated test.
[19:52] <lazyPower> will bug and try to flesh this out more later. thanks for getting me unblocked
[20:30] <stormmore>  I blame lazyPower for the split :P
[20:43] <lazyPower> i would too
[20:44] <lazyPower> that guy
[20:44]  * lazyPower shakes a tiny fist
[20:50]  * stormmore bites his lip trying not to make a joke or 2 that would get him in trouble
[21:04] <SimonKLB> lazyPower: just read your response to ybaumy and I'm not disagreeing with anything
[21:04] <lazyPower> glad i'm on message :)
[21:04] <SimonKLB> adding more machine metrics would be super-easy, it's more about not adding too much and making the charmscaler too complex
[21:04] <lazyPower> SimonKLB: perhaps flavors of charm scaler?
[21:04] <SimonKLB> but custom application-specific metrics would also be possible, that would need some extra work though
[21:05] <lazyPower> IO Scaler, CPU Scaler, Mem scaler - COMPLICATED_BUT_ALL_INCLUSIVE scaler.
[21:05] <SimonKLB> yea, that is a possibility, wouldnt want like 10 different charms to keep track of though :D
[21:05] <lazyPower> SimonKLB: layers baybeh
[21:05] <SimonKLB> true that ;)
[21:07] <SimonKLB> that could actually be really neat, i'll definitely add it to the backlog and see if it gets traction with the rest of the guys :)
[21:55] <magicaltrout> folks started using the CDK today lazyPower so you can officially say your code is in use by DARPA and NASA
[21:55] <magicaltrout> i might have to buy you a badge or something
[21:55] <lazyPower> O_O
[21:55] <lazyPower> Life Achievement unlocked!
[21:55] <magicaltrout> heh
[21:56] <lazyPower> mbruzek1: Cynerva ryebot ^
[21:59] <stormmore> I may have to "play" with the autoscaled kubernetes bundle
[22:01] <lazyPower> magicaltrout: glad you got everything sorted though :) I was nervous based on last weeks direction of conversation
[22:01] <SimonKLB> stormmore: please do! and let me know how it went :)
[22:01] <lazyPower> namely missing layers from your DOCKER_REPOSITORY
[22:02] <magicaltrout> yeah, well I don't know what happened there, but the trusty nuke option seemed to fix it
[22:02] <magicaltrout> we've done that often enough outside of CDK anyway, i think its pretty standard ;)
[22:02] <lazyPower> because yay docker
[22:02] <lazyPower> \o/
[22:11] <stormmore> SimonKLB, it might be a bit but from what I am reading it should solve a problem I haven't figured out yet
[22:25] <ryebot> lazyPower: haha awesome :D
[23:01] <kwmonroe> if <solar-flare>; then ./hooks/update-status; fi
[23:02] <magicaltrout> hehe
[23:02] <kwmonroe> status-set "oh noooooooo"
[23:04] <magicaltrout> is that the one where they miss mars again due to doing the sums wrong?
[23:10] <kwmonroe> sums{ locale: en-US }; # calculate burn based on kilometers-to-target; burn( sums{ locale: $LOCALE } ).
[23:10] <kwmonroe> fool-proof
[23:14] <kwmonroe> https://jujucharms.com/login/u/spicule/saiku-hadoop-spark
[23:15] <kwmonroe> heh, magicaltrout ^^ disregard that paste.. i was formulating a question for you
[23:16] <magicaltrout> uh oh
[23:17] <kwmonroe> magicaltrout: what would it take to rebase saiku-h-s to the most current hadoop-spark?  iow, can i swap out the hadoop-spark that you have (https://jujucharms.com/u/spicule/saiku-hadoop-spark) with this one?  https://jujucharms.com/hadoop-spark/
[23:19] <magicaltrout> i doubt that saiku bundle will spin up at the moment. I have another one getting ready here i'm hoping to drop next week
[23:21] <magicaltrout> oh the download does work
[23:21] <magicaltrout> it probably will start
[23:22] <magicaltrout> well there wasn't anything overly special in that bundle so it should just get an update
[23:24] <kwmonroe> weeeeellllll magicaltrout.. i think you were basing that bundle off apache-* charms, which have been replaced by bigtop-* stuffs.  if the saiku bits don't care (which i think they don't), then the apache->bigtop hadoop swap shouldn't make a difference.
[23:26] <magicaltrout> yeah, i have a bunch of stuff to get done here, I've got a new guy starting in May who will hopefully pick up a bunch of this slack, and i'm finally getting my charms into CI slowly
[23:29] <kwmonroe> no worries magicaltrout -- i just noticed that bundle because i'm fixin to release a refresh of the big data charms to align with the bigtop 1.2 release. anyone reliant on the spark interface may want to consider including the new bits.
[23:30] <magicaltrout> don't break shit
[23:30] <kwmonroe> 2 late
[23:30] <magicaltrout> boo
[23:30] <magicaltrout> I have some big data stuff coming in the next month or so
[23:31] <kwmonroe> phew!  then it'll take a month or so before you realize i broke shit.
[23:31] <magicaltrout> excellent
[23:31] <kwmonroe> see ya next spring, future magicaltrout
[23:31] <magicaltrout> check your msgs kwmonroe