[00:28] <anastasiamac> babbageclunk: PTAL https://github.com/juju/juju/pull/9069 - create-backup options check revisited..
[00:30] <babbageclunk> kwmonroe: yay, glad you got all the glitches sorted out!
[00:30] <babbageclunk> anastasiamac: looking
[00:30] <anastasiamac> \o/
[00:37] <babbageclunk> anastasiamac: approved!
[00:40] <anastasiamac> \o/\o/\o/
[00:57] <anastasiamac> review anyone - https://github.com/juju/juju/pull/9071? one line change to remove an extra newline..
[00:57] <veebers> anastasiamac: I can hit it
[00:58] <anastasiamac> veebers: tyvm!
[00:58] <veebers> anastasiamac: hah always keen to grab the easy ones. LGTM
[00:59] <anastasiamac> veebers: :D
[01:24] <thumper> vinodhini: ping
[01:31] <vinodhini> thumper
[01:31] <vinodhini> yes.
[01:37] <veebers> thumper: to set logging to debug, it's a model-config command, do I need to do something so the controller is set to that logging level too, or does that hit everything?
[02:14] <vinodhini> thumper:
[02:24] <veebers> kelvinliu_: have you deployed a k8s cluster in aws recently? I'm trying but failing (following Ians post in discourse)
[02:26] <kelvinliu_> veebers, last time was ~ 1 month ago, i think
[02:26] <kelvinliu_> veebers, any errors u got?
[02:26] <veebers> kelvinliu_: this time around it's different, I see a complaint about too many tags
[02:28] <veebers> https://pastebin.canonical.com/p/ZkFg7HY3k3/
[02:29] <anastasiamac> thumper: is another 2.3 release on the cards?
[02:29] <anastasiamac> i have a bug I am fixing in 2.4 and + and wondering if it should go further back too...
[02:29] <thumper> anastasiamac: at some stage, and it depends on the ease of back porting
[02:30] <anastasiamac> thumper: real easy backport... was just wondering if it was worth the effort... but since the answer is 'yes, there is such a thing happening', I'll backport ;)
[02:32] <thumper> it is happening at some stage
[02:32] <anastasiamac> ack. ta!
[02:32] <veebers> kelvinliu_: you used eks with caas before?
[02:34] <kelvinliu_> veebers, looks like too many tags on subnet? weird. No, I didn't use eks yet.
[02:35] <veebers> Can someone educate me on tags in aws and how juju uses them? It seems we have 298 of them in this region. Is this a cleanup failure issue? Is there an account limit on how many are allowed?
[02:36] <veebers> hmm, no because a different region has more tags, is it per region? *shrug* I'll try using a different region
[02:37] <kelvinliu_> veebers, i think the error is for that subnet resource has too many tags
[02:38] <anastasiamac> veebers: do u mean that 'tags' are a separate entity in aws?
[02:38] <anastasiamac> i thought tags were just sort-of labels on the instances
[02:38] <veebers> kelvinliu_: hmm, ok. Might be a cleanup issue? as there is nothing running in that region
[02:38] <anastasiamac> we use them to identify the special purpose of an instance, like a 'controller' machine or a 'juju' machine, etc.
[02:38] <veebers> anastasiamac: It might be a charm layer error too. I'm just trying to make sense of it
[02:39] <veebers> well, actually right now I'm trying to ignore it for the immediate future :-)
[02:39] <kelvinliu_> veebers, can u describe this subnet subnet-60510a26 to see the tags of it?
[02:40] <veebers> kelvinliu_: I'm not sure how, via the aws console?
[02:41] <kelvinliu_> veebers, i guess we don't always create a new vpc and subnets, but reuse existing subnets. But we always add new tags to that subnet, which exceeded the max ~50 tags per resource limit.
[02:41] <kelvinliu_> veebers, either awscli or console. console would be easier
[02:43] <kelvinliu_> veebers, if my assumption was correct, we should consider removing the tags added to existing resources when we tear down the cluster.
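kelvinliu_'s theory can be checked offline: count the tags on the subnet and compare against the per-resource limit. A minimal Python sketch, assuming JSON shaped like `aws ec2 describe-tags` output; the ~50-tag limit and the subnet ID come from this discussion, everything else (`count_tags`, the sample data) is illustrative:

```python
import json

AWS_TAG_LIMIT = 50  # AWS allows ~50 tags per resource (the limit hit above)

def count_tags(describe_tags_json):
    """Count tags in `aws ec2 describe-tags`-shaped JSON (hypothetical helper)."""
    return len(json.loads(describe_tags_json).get("Tags", []))

# Illustrative sample shaped like the output of:
#   aws ec2 describe-tags --filters Name=resource-id,Values=subnet-60510a26
sample = json.dumps({"Tags": [
    {"ResourceId": "subnet-60510a26",
     "Key": "kubernetes.io/cluster/kubernetes-%d" % i,
     "Value": "owned"}
    for i in range(48)
]})

n = count_tags(sample)
print("subnet has %d tags; %d short of the %d-tag limit"
      % (n, AWS_TAG_LIMIT - n, AWS_TAG_LIMIT))
```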
[02:46] <veebers> kelvinliu_: by console I mean the web ui, we're talking about the same thing?
[02:47] <kelvinliu_> from the subnet tag issue, I remembered another thing - I think we should have only 1 cluster in the same vpc. if not, the k8s traffic would have trouble. im not sure how we manage this in CDK now.
[02:48] <veebers> kelvinliu_: I was spinning up in the same region wallyworld has in the past, not sure if things got left lying around etc.
[02:48] <kelvinliu_> veebers, yes, console -> web ui, awscli -> boto cli
[02:48] <veebers> kelvinliu_: ack, sweet. Sorry a bit slow right now :-P
[02:48] <kelvinliu_> veebers, take a look how many tags the  subnet-60510a26 has, 50?
[02:49] <veebers> kelvinliu_: just figuring out how to view that now
[02:50] <kelvinliu_> veebers, i don't have web ui access setup yet. u should be able to find it from vpc -> subnets -> search  subnet-60510a26
[02:50] <veebers> kelvinliu_: ack, just found the tags tab now
[02:51] <veebers> kelvinliu_: it has a bunch, all starting with 'kubernetes.io/cluster/kubernetes-<blah>' no count though so would have to do manually ^_^
[02:52] <kelvinliu_> veebers, my assumption is correct. :-)
[02:53] <veebers> kelvinliu_: ah, I missed the 'the max ~50 tags limit'. Ok so that makes sense
[02:54] <veebers> kelvinliu_: so right now I should be able to deploy a cluster to a 'fresh' region as we wouldn't have filled up the subnet with tags.
[02:54] <kelvinliu_> veebers, https://docs.aws.amazon.com/cli/latest/reference/ec2/delete-tags.html would be  helpful
[02:54] <veebers> I presume it'll be safe enough to manually remove those tags as there are no instances running in that region
[02:55]  * veebers looks
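For the manual cleanup veebers is about to attempt, the tag keys to hand to `aws ec2 delete-tags` can be filtered out programmatically. A hedged sketch, assuming tag dicts shaped like `describe-tags` output and the `kubernetes.io/cluster/` prefix veebers saw in the console; `stale_cluster_tag_keys` is a hypothetical helper, not juju or aws code:

```python
def stale_cluster_tag_keys(tags, prefix="kubernetes.io/cluster/"):
    """Pick out leftover per-cluster tag keys (hypothetical helper).

    `tags` is a list of {"Key": ..., "Value": ...} dicts as returned by
    `aws ec2 describe-tags`; the prefix is the one seen in the console above.
    """
    return [t["Key"] for t in tags if t["Key"].startswith(prefix)]

tags = [
    {"Key": "kubernetes.io/cluster/kubernetes-abc", "Value": "owned"},
    {"Key": "Name", "Value": "my-subnet"},  # unrelated tag we want to keep
]
for key in stale_cluster_tag_keys(tags):
    # each key would be fed to:
    #   aws ec2 delete-tags --resources subnet-60510a26 --tags Key=<key>
    print(key)
```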
[03:05] <anastasiamac> thumper: this is the PR I want to backport to 2.3 - https://github.com/juju/juju/pull/9073
[03:05] <anastasiamac> as per jam's suggestion, we will not format single value 'config' output to allow for machine consumption...
[03:08] <thumper> I think the stderr output by default will be confusing
[03:12] <anastasiamac> i thought i did stdout...
[03:12] <anastasiamac> thumper: only warning is on stderr
[03:13] <thumper> 	c.Assert(cmdtesting.Stderr(ctx), gc.Equals, "format yaml is ignored\n")
[03:13] <thumper> for every time
[03:13] <anastasiamac> thumper: oh i c, u mean for warning msg...
[03:13] <anastasiamac> yes, sure, can make it a debug
[03:13] <anastasiamac> like log..
[03:13] <thumper> even logged at info level
[03:14] <thumper> but not ctx.Infof
[03:14] <anastasiamac> yep...
[03:14] <anastasiamac> apart from that u r +1?
[03:18] <anastasiamac> thumper: ?
[03:18] <thumper> yep
[03:18] <thumper> approved
[03:19] <anastasiamac> \o/
[03:41] <thumper> vinodhini: where are we with the goose PR?
[03:43] <vinodhini> thumper: i am fixing the mock.
[03:43] <thumper> ok
[03:50] <jam> anastasiamac: thanks for the patch. did you test at all with non-strings? Like an Int or Bool config type? I think Println does the right thing, I was hoping to be sure.
[03:52] <vinodhini> thumper: i have pushed a commit now.
[03:52] <vinodhini> just did.
[03:53] <vinodhini> thumper: can u have a look please
[03:53] <thumper> vinodhini: yep
[03:58] <anastasiamac> jam: i have
[03:58] <jam> great
[03:58] <anastasiamac> what concerns did u have over println? that it would not print the line?
[03:59] <anastasiamac> i find that it does and is actually safer on platforms where newline is not \n..
[04:00] <vinodhini> thumper: thanks
[04:46] <veebers> thumper: on bionic we use a different mongo package? i.e. I need to update my juju-mongo script to use /usr/bin/mongo not /usr/lib/juju/mongo*/...
[05:16] <thumper> veebers: yes
[06:10] <vinodhini> anastasiamac:
[06:11] <vinodhini> thumper asked me to revert the export-bundle command to dev-mode only (the CLI part)
[06:11] <vinodhini> i have also removed the CI because the command is not exposed and bootstrap will fail.
[06:11] <vinodhini> could u please take a look and approve
[06:11] <vinodhini> this is for the 2.4.2 release
[12:31] <elox> What charms/bundles exist that can deploy an "ovirt" cluster for us to use as a MAAS pod?
[12:41] <jam> guild review requested for https://github.com/juju/txn/pull/43
[12:41] <stickupkid> jam: i'll have a look
[13:04] <stickupkid> jam: done
[21:59] <wpk> thumper: o/
[22:01] <magicaltrout> kwmonroe: you understand the bigtop packaging more than i do
[22:01] <magicaltrout> if we wanted to expose the spark thriftserver
[22:01] <magicaltrout> what do we need to do?
[22:04]  * magicaltrout has a feeling it needs sticking in the deb
[22:04] <magicaltrout> but worth checking
[22:06] <magicaltrout> hmm maybe not, it seems to be kicking around in the deb spark rules
[22:06] <magicaltrout> hrmmm
[22:51] <kwmonroe> magicaltrout: i think we do not enable the thriftserver by default because of a port conflict. remember when you asked about this in 2015? https://lists.ubuntu.com/archives/juju/2015-September/005688.html
[22:51] <kwmonroe> magicaltrout: that said, i don't see how port 10000 conflicts with sparksql, but whatevs.  gimme 4 minutes to deploy spark and make sure thriftserver is even a thing we can start.
[22:52] <kwmonroe> magicaltrout: if it is, it's just a matter of "juju run --application spark '/start/thrift/server'" and "juju run --application spark 'open-port xxxx'".
[22:55] <kwmonroe> magicaltrout: so it seems we don't have an action for the thriftserver :/
[22:56] <magicaltrout> yeah kwmonroe I was poking around and saw the stuff in the deb but it doesn't seem to materialise on the server
[22:56] <kwmonroe> magicaltrout: hive does, can i interest you in some hive? https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hive/layer-hive/README.md#thrift-interface
[22:56] <magicaltrout> unless i'm missing something
[22:57] <magicaltrout> well
[22:57] <magicaltrout> the service
[22:57] <kwmonroe> here's the thing magicaltrout, and you must know i hate saying this in a publicly logged channel, you're not missing anything.  we have no charming method of activating a thriftserver on spark.
[22:58] <magicaltrout> well the charm bit i can cope with
[22:59] <magicaltrout> root@ip-172-31-24-148:~# less /etc/init.d/spark-
[22:59] <magicaltrout> spark-history-server  spark-master          spark-worker
[22:59] <magicaltrout> i'm surprised it doesn't have an init script
[22:59] <magicaltrout> /home/ubuntu/bigtop.release/bigtop-1.2.1/bigtop-packages/src/common/spark/spark-thriftserver.svc
[23:01] <kwmonroe> magicaltrout: don't be so surprised.  the .svc bits haven't made it through bigtop yet because nobody's sure if systemd is a thing yet.
[23:01] <kwmonroe> magicaltrout: do this:
[23:02] <kwmonroe> juju run --unit spark/0 'sudo -H -u spark /usr/lib/spark/sbin/start-thriftserver.sh'
[23:02] <magicaltrout> yeah i shall :P
[23:02] <magicaltrout> fair enough, i thought the svc bits were in use
[23:02] <kwmonroe> yeah, you thought
[23:02] <kwmonroe> (j/k, maybe they are, but when in doubt, i just like to run java by hand)
[23:02] <magicaltrout> i have an oozie question coming in 5 minutes when i've remembered what it was
[23:03] <kwmonroe> i'm EOD in 4 minutes
[23:03] <kwmonroe> maybe less if you ask your question too early
[23:03] <magicaltrout> fair enough i'll leave it to tomorrow
[23:04] <magicaltrout> oozie is missing a linux level user somewhere
[23:04] <kwmonroe> magicaltrout: for reals though, the deb doesn't need to be altered -- start-thriftserver.sh is there on the unit, you'll just have to juju run it vs having a service (or install the svc as a proper service)
[23:04] <kwmonroe> magicaltrout: the thing that bigtop could use is a jira to say that the spark deb is providing a service that is never activated -- that feels like an oversight to me.
[23:04] <magicaltrout> k
[23:04] <magicaltrout> i'll upstream it
[23:05] <kwmonroe> +1
[23:06] <kwmonroe> magicaltrout: i missed your comment earlier, but i'm having real trouble parsing "oozie is missing a linux level user somewhere".  like, the user isn't created, or it's expecting to use some pre-defined user?
[23:08] <magicaltrout> na, i'm trying to put the bits back together cause it's been a few weeks, but last time i ran oozie against the bigtop charms, i had to useradd <user> on the namenode/resmgr to get it to run jobs
[23:08] <magicaltrout> so there's a misconfiguration on one side or the other, my fix the other week was a manual useradd, but that clearly doesn't scale :P
[23:11] <kwmonroe> aaaaah yeah magicaltrout
[23:11] <kwmonroe> one sec, i know your struggle
[23:11] <kwmonroe> magicaltrout: before i forget, don't do that thriftserver run.  i'll tell you why in a minute.
[23:13] <kwmonroe> magicaltrout: read this: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py#L81
[23:14] <magicaltrout> aah that seems very similar
[23:15] <magicaltrout> cool okay i'll try and remember what was broken tomorrow and then possibly send over a PR for that bit cause it certainly fixed it up last time
[23:15] <magicaltrout> and it was something not on the oozie unit itself
[23:15] <kwmonroe> magicaltrout: that's the same issue you're describing with oozie.  we had to do it so the NN could recognize files owned by 'mapred', 'spark', and 'yarn': https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/layer.yaml#L13
[23:16] <magicaltrout> i've not gone completely mad then, thats good to know
[23:16] <kwmonroe> nah, not completely magicaltrout, i think you can make a case for adding 'oozie' (or any other service that has a user that needs to drop bits in hdfs that the NN needs to know about) into that layer.yaml "users" dict.
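The namenode handler kwmonroe links to boils down to making sure each service's OS user exists on the NN host so HDFS file ownership resolves. A hedged Python sketch of that idea (not the charm's actual code; `ensure_users` is a hypothetical name):

```python
import pwd
import subprocess

def ensure_users(users):
    """Create any missing OS users so the NameNode can resolve file ownership.

    A sketch of the idea in the linked layer-hadoop-namenode handler: HDFS
    files owned by 'mapred', 'spark', 'yarn' (and, per this discussion,
    'oozie') need matching local users on the NN host.
    """
    created = []
    for user in users:
        try:
            pwd.getpwnam(user)  # already exists, nothing to do
        except KeyError:
            subprocess.check_call(["useradd", "--system", user])
            created.append(user)
    return created

# 'root' always exists, so this makes no changes on a typical Linux host
print(ensure_users(["root"]))  # -> []
```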
[23:17] <magicaltrout> cool
[23:17] <kwmonroe> of course, cory_fu_ and i will vote whether or not we care about your case.  you know how that likely goes.
[23:18] <magicaltrout> i also meant to say i can't believe you've missed out on cold wet London in November for a trip to spain or some shit
[23:18] <kwmonroe> magicaltrout: before i forget, don't run that thriftserver
[23:18] <magicaltrout> yeah why not?
[23:18] <kwmonroe> shit, i've forgotten
[23:19] <kwmonroe> lemme check debug logs
[23:19] <magicaltrout> it starts and stops
[23:19] <magicaltrout> living the dream
[23:19] <kwmonroe> oof, looks like it fails to start an apache derby instance and then removes $HOME
[23:20] <magicaltrout> lol
[23:21] <magicaltrout> hmm did it fail or did it just return
[23:22] <kwmonroe> magicaltrout: the process failed.  the machine is fine.  j/k on removing $STUFF.
[23:22] <kwmonroe> magicaltrout: i think it might be because i told you to run it as sudo -u spark and spark can't write to $CHARM_DIR
[23:22] <kwmonroe> java.sql.SQLException: Directory /var/lib/juju/agents/unit-spark-0/charm/metastore_db cannot be created.
[23:22] <kwmonroe>         at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
[23:22] <kwmonroe> lemme try again running that mama jama as root
[23:23] <kwmonroe> lemme export $HOME=/ first
[23:24] <magicaltrout> yeah runs as root
[23:24] <kwmonroe> yeah, much better magicaltrout
[23:24] <kwmonroe> $ juju run --unit spark/0 '/usr/lib/spark/sbin/start-thriftserver.sh'
[23:24] <magicaltrout> just weirdly stops logging like its crashed
[23:25] <magicaltrout> thats cool. Okay great, I might tack on a subordinate or something to fire that up. We've got most of the hue bits and pieces bolted on and working
[23:25] <magicaltrout> just trying to tidy up sparksql and oozie properly
[23:25] <magicaltrout> then it should be about ready along with the druid cluster in a couple of weeks
[23:26] <kwmonroe> magicaltrout: i will outright reject any thrift subordinate, so don't bother.  you can propose an action to start the server, nothing more, nothing less.
[23:27] <magicaltrout> i didn't say i'd bother giving it to you :P
[23:27] <magicaltrout> what i would like is for it to work without a user having to flip a switch
[23:28] <kwmonroe> magicaltrout: i'll take a config thriftserver=true or an action 'start-thriftserver'.  i'm not even kidding, don't you dare make a spark subordinate that is just 'spark' with only the thriftserver running.  i'll find you.  i'll be angry.
[23:29] <magicaltrout> okay i might be happy with a config option
[23:29] <magicaltrout> i dislike actions
[23:30] <kwmonroe> then it seems like we've found a middle ground.  let's stop here before we start cursing one another.
[23:32] <magicaltrout> alright fair enough, you're still in my bad books for passing up a trip to london. Thanks for the tips, expect some PR's to land sometime soon.
[23:35] <kwmonroe> +1 magicaltrout, gotta see a guy about a horse now.
[23:55] <rick_h_> kwmonroe: hah. If you're serious we need to catch up sometime