[00:37] <babbageclunk> thumper, wallyworld: why do we run a global clock updater on every controller machine?
[00:40] <wallyworld> babbageclunk: not sure, i would have thought it's only needed on mongo primary?
[00:46] <babbageclunk> That's what I'd have thought too, but it gets run in each one
[01:23] <thumper> NFI
[01:25] <thumper> wallyworld: got a few minutes to chat?
[02:14] <wallyworld> thumper: sure
[02:14] <thumper> wallyworld: 1:1
[02:14] <thumper> ?
[02:47] <veebers> babbageclunk: will git complain saying "my tip is behind the remote" if I've done some rebasing locally? Or have I somehow backtracked my local and built on top of that
[02:49] <babbageclunk> veebers: more context?
[02:50] <veebers> babbageclunk: hah sorry, so I'm pushing updates to a branch in github, git rejects it and says "Updates were rejected because the tip of your current branch is behind its remote counterpart . . ." normally I would think "Oh, push rejected because I squashed commits, I'll --force" but it's saying I'm behind the remote, I should 'git pull'.
[02:50] <babbageclunk> If you've rebased (or otherwise messed with history) you won't be able to push to a branch (to which you've already pushed) without --force-with-lease. Is that what you mean?
[02:50] <veebers> babbageclunk: so perhaps I should just do what it says and actually just git pull
[02:51] <babbageclunk> Oh, if you squashed then you should just do a --force-with-lease.
[02:51] <babbageclunk> (which is safer than --force as I understand it, although really I just do what magit does.)
[02:52] <babbageclunk> https://developer.atlassian.com/blog/2015/04/force-with-lease/
[02:54] <babbageclunk> veebers: ^
[02:54] <veebers> babbageclunk: ack, cheers :-)
[02:57] <veebers> babbageclunk: that worked, thanks again
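The squash-then-push situation above can be reproduced end to end in a throwaway repo. This is a minimal sketch (all paths and commit messages are made up): after rewriting history with a soft reset, a plain push is rejected as non-fast-forward, while `--force-with-lease` succeeds because the remote still matches our remote-tracking ref.

```shell
set -eu
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare origin.git            # stand-in for the GitHub remote
git clone -q "$tmp/origin.git" work; cd work
git config user.email you@example.com; git config user.name you
echo a > f; git add f; git commit -qm base
git push -qu origin HEAD                 # establish upstream tracking
echo b >> f; git commit -qam wip1
echo c >> f; git commit -qam wip2
git push -q origin HEAD                  # remote now has base + 2 WIP commits
# Squash the two WIP commits into one, rewriting local history:
git reset -q --soft HEAD~2
git commit -qm squashed
# Plain push is rejected (tip is "behind" its remote counterpart):
git push -q origin HEAD 2>/dev/null && echo "plain push ok" || echo "plain push rejected"
# --force-with-lease succeeds: remote still matches origin/<branch>:
git push -q --force-with-lease origin HEAD && echo "lease push ok"
```

The lease only fails if someone else pushed in the meantime (i.e. the remote ref no longer matches your remote-tracking ref), which is exactly the safety `--force` lacks.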
[02:57] <babbageclunk> :)
[03:30] <veebers> wallyworld: FYI pushed up those changes, waiting for the unit test run to finish (already fixed the one issue that's popped up)
[03:30] <wallyworld> ok, will look
[03:53] <wallyworld> veebers: a couple of questions around validation/naming. lgtm to land once you have looked at the comments; the validation one probably needs at least a todo
[04:04] <veebers> wallyworld: ack, renaming Resource -> Metadata atm. I'll add a card for the validation todo
[04:07] <kelvin_> wallyworld, would u mind taking a look at this PR when u get time? thanks https://github.com/juju/juju/pull/8876
[04:09] <babbageclunk> thumper: have a moment? want to talk something through.
[04:10] <thumper> babbageclunk: in a call with wallyworld and jam just now, but maybe after
[04:10] <babbageclunk> ok cool
[04:40] <thumper> babbageclunk: wanna chat now?
[04:41] <wallyworld> kelvin_: sorry, was in call, reviewed
[04:42] <kelvin_> thanks, wallyworld
[04:45] <babbageclunk> thumper: oops, yup! In 1:1?
[04:45] <thumper> babbageclunk: ack, btw, moved to a meet
[04:45] <babbageclunk> sweet, I always go via the calendar to be safe anyway
[04:55] <veebers> wallyworld: I've pushed the changes, I haven't squashed the latest commit as I wanted you to eyeball it quickly as there where a handful of changes since your approval comments
[04:56] <wallyworld> sure
[04:57] <wallyworld> veebers: looks like a couple of unneeded aliases in deploy.go?
[04:58] <wallyworld> import aliases
[04:58] <veebers> wallyworld: ah, I needed them but might not now, let me dbl check
[04:59] <wallyworld> veebers: lgtm though, with the aliases removed if they are no longer needed
[05:00] <veebers> wallyworld: ah, because validateResourceDetails(resources map...), I could rename that map in the function ^_^
[05:01] <wallyworld> you can have a param with the same name as an import
[05:01] <wallyworld> but renaming the param is better here
[05:01] <wallyworld> "res" or something
[05:03] <veebers> wallyworld: done. In that last push I also removed a commentary comment and just simplified the err return a bit
[05:05] <wallyworld> veebers: great, land it, no need for me to look again
[05:06] <veebers> wallyworld: sweet, will do
[05:13] <kelvin_> wallyworld, changed the name to bitcoin miner. :->
[05:14] <wallyworld> good :-)
[05:14] <kelvin_> lol land it now
[06:39] <vino> wallyworld: sorry i forgot to ping u.. i pushed a PR for review a few hrs ago.
[06:39] <vino> when u get time plz take a look.
[06:42] <vino> https://github.com/juju/juju/pull/8881
[06:44] <vino> i am going to start working on CLI part.
[07:00] <wallyworld> vino: my internet dropped for a bit there. i've left some comments, see if they make sense
[07:13] <wallyworld> vino: having internet problems here at the moment, not sure if you saw my reply - review done, let me know if you have questions
[07:15] <vino> wallyworld: just checking.
[09:16] <w0jtas> anyone happy to help with a fresh new juju openstack setup? i cannot launch the first instance, "No valid host available"
[10:57] <stickupkid> manadart: CR for this one https://github.com/juju/juju/pull/8879
[11:07] <manadart> stickupkid: Looked at that one this morning, but I'll review properly now.
[11:08] <stickupkid> manadart: yeah, I was manual testing it, as I want to make sure that it worked correctly
[11:08] <stickupkid> manadart: so this won't get a cert from the API, it assumes you have access to everything...
[11:10] <stickupkid> manadart: there is a flaky test in the worker suite - `ProvisionerSuite.TestStopInstancesIgnoresMachinesWithKeep` - i'll try and rebuild it and see if that goes away
[11:12] <w0jtas> anyone could help? the default localhost openstack setup is not working, neutron.log has the error "The resource could not be found.", and in keystone i have an openssl error
[12:01] <manadart> stickupkid: Got time for a HO?
[12:05] <manadart> stickupkid: Successfully bootstrapped to a remote, but had some questions about the interactive add.
[12:12] <rick_h_> morning party people
[12:20] <manadart> rick_h_: Morning.
[12:28] <w0jtas> anyone ?? really need help https://pastebin.com/0Dz7gUY3
[12:30] <rick_h_> w0jtas: sorry, you'll have to check with the openstack folks. I'm not sure how that is set up. If you can get into the python file and work out what command it's trying to run, maybe you can run it from the cli on the host and see why the openssl command is returning non-0
[12:31] <rick_h_> w0jtas: check out https://github.com/openstack-charmers/openstack-community
[12:31] <w0jtas> rick_h_: ok will try on openstack chan ;) thanks for answer anyway
[12:33] <w0jtas> tinwood: any chance to help?
[12:54] <magicaltrout> kwmonroe: we tested a manual Hue deployment over the Bigtop stuff the other day and it worked pretty well, so we're going to continue on that path and figure out whether to shove it up to bigdata-charmers at a later date
[12:55] <magicaltrout> the lovely rmcd is also starting work on the Druid charms
[12:55] <magicaltrout> which will eventually back on to HDFS
[12:56] <magicaltrout> I was messing around with Apache Ignite over the Yarn stack over the weekend
[12:56] <magicaltrout> that worked pretty well
[13:39] <rathore_> all: how to find out why juju is rejecting my bundle.yaml ?
[13:39] <rathore_> ERROR invalid charm or bundle provided at "./bundle.yaml"
[13:40] <rick_h_> rathore_: try using charm proof against it
[13:40] <rick_h_> rathore_: oic, is this from juju deploy ./bundle.yaml ?
[13:40] <rathore_> yes it is
[13:41] <rick_h_> rathore_: there's a charm tool for charm and bundle authors and it has a lint tool "charm proof" to help find any issues in them
[13:41] <rathore_> FATAL: No bundle.yaml (Bundle) or metadata.yaml (Charm) found, cannot proof
[13:41] <rick_h_> rathore_: and that bundle in the cwd?
[13:41] <rathore_> charm proof is complaining it doesn't find it
[13:42] <rathore_> yes
[13:42] <rathore_> i just modified some bits of openstack-lxd-xenial-queen and juju has started complaining
[13:44] <rathore_> got it to work, just had to run charm proof instead of charm proof ./bundle.yaml
[13:45] <rick_h_> rathore_: gotcha
[13:45] <rick_h_> rathore_: so maybe it's just juju deploy bundle.yaml vs the ./
[13:45] <rick_h_> ?
[13:55] <rathore_> naah juju deploy bundle.yaml is not giving out any errors
[14:22] <rfowler> :~$ juju run-action ceph-osd/0 zap-disk /dev/sdb i-really-mean-it
[14:22] <rfowler> ERROR argument "/dev/sdb" must be of the form key...=value
[14:23] <rfowler> how am I supposed to type that
[14:24] <rick_h_> rfowler: juju run-action ceph-osd/0 zap-disk="/dev/sdb"?
[14:27] <rfowler> ~$ juju run-action ceph-osd/0 zap-disk="/dev/sdb"
[14:27] <rfowler> ERROR invalid unit or action name "zap-disk=/dev/sdb"
[14:27] <rfowler> rick_h_: same
[14:28] <rick_h_> oh sorry
[14:28] <rick_h_> the action name is the param, it's not an arg to it
[14:28] <rick_h_> sec, have to look at the action in the charm.
[14:29] <rick_h_> rfowler: ok, so looks like you need the argument flag first
[14:29] <rick_h_> rfowler: so: run-action ceph-osd/0 zap-disk device="/dev/sdb" i-really-mean-it=true
[14:29] <rick_h_> rfowler: or something like that
[14:29] <rick_h_> rfowler: https://api.jujucharms.com/charmstore/v5/ceph-osd/archive/actions.yaml for the action definition
[14:30] <rick_h_> sorry, the arg is "devices" with an S
[14:34] <rfowler> rick_h_: works thanks
[14:37] <rfowler> rick_h_: except it fails and says the disk is mounted when I know it isn't
[14:51] <rick_h_> rfowler: doh, well not sure about that. That's going to fall into the work the charm does itself.
[14:51] <rick_h_> rfowler: but glad we could get it executing
[14:57] <stickupkid> manadart: this PR brings in new error messages, removes the old tools/lxdclient from the provider (i've kept the code around for now!).
[14:58] <stickupkid> manadart: any chance you can have a look?
[14:58] <stickupkid> manadart: I've also removed the ProviderLXDServer interface, in preference to the Server interface. It made testing a lot easier in the long run
[15:23] <manadart> stickupkid: Added some comments. I have to attend to kids now; might check back later or failing that, first thing in the morning.
[15:28] <stickupkid> manadart: sure thing
[15:56] <rathore_> all: whats the correct way of upgrading a bundle
[15:57] <rathore_> i have one deployed and i need to make some changes
[16:05] <rick_h_> rathore_: so there's no method of upgrading a bundle. You just make the changes you need.
[16:06] <rick_h_> rathore_: since bundles aren't really entities that can be tracked to tell what's changed from one install to a later one
[16:27] <kwmonroe> magicaltrout: i'm curious when you say you tested a manual Hue over bigtop... did you add puppet/packaging stuff back to bigtop for hue, or did you do a standalone hue that interacted with other bigtop components?
[16:28] <kwmonroe> either way, good to hear that hue worked pretty well!
[16:32] <stickupkid> manadart jam - seems autoload-credentials is throwing an error concerning oci: `ERROR could not detect credentials for provider "oci": stat /home/simon/.oraclebmc/config: no such file or directory`
[17:08] <rathore_> Hey all, anyone knows ab example of neutron gateway ha with juju?
[20:18] <thumper> morning
[20:18] <thumper> rick_h_: seen any official go ahead?
[20:18] <thumper> rick_h_: did you poke solutions qa?
[20:21] <thumper> rick_h_: also, bug 1776995 is very important for upgrade series work
[20:22] <mup> Bug #1776995: subordinate can't relate to applications with different series <upgradeseries> <juju:Triaged> <https://launchpad.net/bugs/1776995>
[20:22] <thumper> rick_h_: we should look to get either hml or externalreality to weave it in with current work
[20:24] <cory_fu> wallyworld: Ping me when you get in?
[20:24] <rick_h_> thumper no word from qa I saw today. Ty for heads up on the bug. I'm out ATM with an appointment.
[20:25] <thumper> rick_h_: ack
[20:27] <magicaltrout> kwmonroe: sorry missed you earlier, we just grabbed the latest and manually stuck a build on there, i don't plan on trying to backport it into bigtop since its been removed
[20:28] <rick_h_> jhobbs: any idea on ok to release? Haven't seen Chris reply to emails yet.
[20:29] <kwmonroe> roger that magicaltrout -- i figured re-importing a puppet manifest atop a bigtop repo clone would be more hassle than it was worth (considering they removed it on purpose).  still good to know integration worked.
[20:30] <kwmonroe> and you don't have to worry about those pesky debs.  tar to production is the way to go ;)
[20:32] <magicaltrout> i was considering snapping it up
[20:34] <magicaltrout> i dunno, you can't always appease the ASF, I think its a good UI for demos etc at the very least because then business managers etc can grok whats going on
[20:34] <magicaltrout> rather than doing some hdfs dfs -ls command and showing them a terminal prompt :P
[20:46] <kwmonroe> magicaltrout: +1 on hue being a great Hadoop User Experience for demos (see what i did there?).
[20:51] <kwmonroe> magicaltrout: that said... and i say this with much fear at your retort, won't the business manager be equally impressed with the namenode UI + whatever dashboard you want (like zeppelin)?
[21:09] <magicaltrout> surely that depends on whether you're interested in getting data in or out
[21:10] <kwmonroe> i think all the data has already gone in.  we just care about the output now magicaltrout.  and it doesn't look good (for humanity).
[21:11] <kwmonroe> still, going for hue against http://mail-archives.apache.org/mod_mbox/bigtop-dev/201804.mbox/%3C5BA7B1B4-B514-4196-ADCB-2D8ECBCCC97F%40oflebbe.de%3E makes me think you secretly want to take over bigtop maint.  you have my +1. not sure how much that pays tho.
[21:12] <magicaltrout> me and my interns against the world!
[21:12] <kwmonroe> my hopes are with you
[21:31] <wallyworld> thumper: release call?
[21:31] <thumper> coming
[21:34] <cory_fu> wallyworld: When you're done with that call, can I have a few minutes of your time?
[21:45] <veebers> wallyworld: thoughts on migration strat for the docker resource collection? I don't imagine critical as it'll just download the resource if not found. Oh, but what about CLI provided resources, will they get lost?
[21:46] <wallyworld> cory_fu: yeah, saw your ping :-) had just crawled out of bed with a coffee in time to make the meeting. free now
[21:47] <wallyworld> veebers: migration will just need an update to the model description format to add the new collection
[21:47] <wallyworld> that can come a bit later
[21:48] <veebers> wallyworld: I'll add the new resource collection to the 'ignore' in the migration_internal_test for now (and add a card)
[21:48] <cory_fu> wallyworld: np.  PMed you a Hangout link,
[21:57] <cory_fu> Cynerva: Are you still around?
[21:57] <Cynerva> cory_fu: yeah, what's up?
[21:57] <cory_fu> PM'd you a Hangout link if you have a minute
[22:01] <wallyworld> veebers: sgtm, that's what we normally do for that
[22:30] <cory_fu> wallyworld: https://github.com/juju-solutions/layer-caas-base/pull/5 and https://github.com/juju-solutions/charm-kubeflow-jupyterhub
[22:30] <wallyworld> cory_fu: great ty, will try them out today
[22:33] <cory_fu> Heading out.  o/
[23:01] <rick_h_> release the hounds! errr...I mean 2.4.0
[23:01] <rick_h_> wheeeee