[00:37] thumper, wallyworld: why do we run a global clock updater on every controller machine?
[00:40] babbageclunk: not sure, i would have thought it's only needed on the mongo primary?
[00:46] That's what I'd have thought too, but it gets run on each one
[01:23] NFI
[01:25] wallyworld: got a few minutes to chat?
[02:14] thumper: sure
[02:14] wallyworld: 1:1
[02:14] ?
[02:47] babbageclunk: will git complain saying "my tip is behind the remote" if I've done some rebasing locally? Or have I somehow backtracked my local and built on top of that?
[02:49] veebers: more context?
[02:50] babbageclunk: hah sorry, so I'm pushing updates to a branch in github, git rejects it and says "Updates were rejected because the tip of your current branch is behind its remote counterpart...". Normally I would think "Oh, push rejected because I squashed commits, I'll --force", but it's saying I'm behind the remote and should 'git pull'.
[02:50] If you've rebased (or otherwise messed with history) you won't be able to push to a branch (to which you've already pushed) without --force-with-lease. Is that what you mean?
[02:50] babbageclunk: so perhaps I should just do what it says and actually just git pull
[02:51] Oh, if you squashed then you should just do a --force-with-lease.
[02:51] (which is safer than --force as I understand it, although really I just do what magit does.)
[02:52] https://developer.atlassian.com/blog/2015/04/force-with-lease/
[02:54] veebers: ^
[02:54] babbageclunk: ack, cheers :-)
[02:57] babbageclunk: that worked, thanks again
[02:57] :)
[03:30] wallyworld: FYI pushed up those changes, waiting for the unit test run to finish (already fixed the one issue that popped up)
[03:30] ok, will look
[03:53] veebers: a couple of questions around validation/naming. lgtm to land once you have looked at the comments; the validation one probably needs at least a todo
[04:04] wallyworld: ack, renaming Resource -> Metadata atm. I'll add a card for the validation todo
[04:07] wallyworld, would u mind taking a look at this PR when u get time? thanks https://github.com/juju/juju/pull/8876
[04:09] thumper: have a moment? want to talk something through.
[04:10] babbageclunk: in a call with wallyworld and jam just now, but maybe after
[04:10] ok cool
[04:40] babbageclunk: wanna chat now?
[04:41] kelvin_: sorry, was in a call, reviewed
[04:42] thanks, wallyworld
[04:45] thumper: oops, yup! In 1:1?
[04:45] babbageclunk: ack, btw, moved to a meet
[04:45] sweet, I always go via the calendar to be safe anyway
[04:55] wallyworld: I've pushed the changes. I haven't squashed the latest commit as I wanted you to eyeball it quickly, since there were a handful of changes after your approval comments
[04:56] sure
[04:57] veebers: looks like a couple of unneeded aliases in deploy.go?
[04:58] import aliases
[04:58] wallyworld: ah, I needed them but might not now, let me dbl check
[04:59] veebers: lgtm though, with the aliases removed if they are no longer needed
[05:00] wallyworld: ah, because of validateResourceDetails(resources map...), I could rename that map in the function ^_^
[05:01] you can have a param with the same name as an import
[05:01] but renaming the param is better here
[05:01] "res" or something
[05:03] wallyworld: done. In that last push I also removed a commentary comment and simplified the err return a bit
[05:05] veebers: great, land it, no need for me to look again
[05:06] wallyworld: sweet, will do
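A minimal sketch of the push-after-rewrite flow discussed at 02:47–02:57 above; the branch name "my-feature" and the upstream "origin/develop" are illustrative assumptions, not from the log:

    # History was rewritten locally (squash/rebase), so the previously
    # pushed tip no longer matches the remote and a plain push is rejected.
    git rebase -i origin/develop
    git push origin my-feature              # rejected: "tip ... is behind its remote counterpart"

    # --force-with-lease overwrites the remote branch only if it still
    # points where we last fetched it, so unlike a bare --force it cannot
    # silently clobber someone else's intervening push.
    git push --force-with-lease origin my-feature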
[05:13] wallyworld, changed the name to bitcoin miner. :->
[05:14] good :-)
[05:14] lol land it now
[06:39] wallyworld: sorry i forgot to ping u.. i pushed a PR for review a few hrs ago.
[06:39] when u get time plz take a look.
[06:42] https://github.com/juju/juju/pull/8881
[06:44] i am going to start working on the CLI part.
[07:00] vino: my internet dropped for a bit there. i've left some comments, see if they make sense
[07:13] vino: having internet problems here at the moment, not sure if you saw my reply - review done, let me know if you have questions
[07:15] wallyworld: just checking.
[09:16] anyone happy to help with a fresh juju openstack setup? i cannot launch the first instance, "No valid host available"
[10:57] manadart: CR for this one https://github.com/juju/juju/pull/8879
[11:07] stickupkid: Looked at that one this morning, but I'll review properly now.
[11:08] manadart: yeah, I was manually testing it, as I wanted to make sure that it worked correctly
[11:08] manadart: so this won't get a cert from the API, it assumes you have access to everything...
[11:10] manadart: there is a flaky test in the worker suite - `ProvisionerSuite.TestStopInstancesIgnoresMachinesWithKeep` - i'll try and rebuild it and see if that goes away
[11:12] could anyone help? the default localhost openstack setup is not working, neutron.log has the error "The resource could not be found.", and in keystone i have an openssl error
[12:01] stickupkid: Got time for a HO?
[12:05] stickupkid: Successfully bootstrapped to a remote, but had some questions about the interactive add.
[12:12] morning party people
[12:20] rick_h_: Morning.
[12:28] anyone?? really need help https://pastebin.com/0Dz7gUY3
[12:30] w0jtas: sorry, you'll have to check with the openstack folks. I'm not sure how that is set up. If you can get into the python file and debug what command it's trying to run, maybe you can run it from the cli on the host and see why the openssl command is returning non-0
[12:31] w0jtas: check out https://github.com/openstack-charmers/openstack-community
[12:31] rick_h_: ok will try the openstack chan ;) thanks for the answer anyway
[12:33] tinwood: any chance you could help?
[12:54] kwmonroe: we tested a manual Hue deployment over the Bigtop stuff the other day and it worked pretty well, so we're going to continue on that path and figure out whether to shove it up to bigdata-charmers at a later date
[12:55] the lovely rmcd is also starting work on the Druid charms
[12:55] which will eventually back on to HDFS
[12:56] I was messing around with Apache Ignite over the Yarn stack over the weekend
[12:56] that worked pretty well
[13:39] all: how do i find out why juju is rejecting my bundle.yaml?
[13:39] ERROR invalid charm or bundle provided at "./bundle.yaml"
[13:40] rathore_: try using charm proof against it
[13:40] rathore_: oic, is this from juju deploy ./bundle.yaml ?
[13:40] yes it is
[13:41] rathore_: there's a charm tool for charm and bundle authors, and it has a lint tool, "charm proof", to help find any issues in them
[13:41] FATAL: No bundle.yaml (Bundle) or metadata.yaml (Charm) found, cannot proof
[13:41] rathore_: and is that bundle in the cwd?
[13:41] charm proof is complaining it doesn't find it
[13:42] yes
[13:42] i just modified some bits of openstack-lxd-xenial-queen and juju has started complaining
[13:44] got it to work, just had to run charm proof instead of charm proof ./bundle.yaml
[13:45] rathore_: gotcha
[13:45] rathore_: so maybe it's just juju deploy bundle.yaml vs the ./
[13:45] ?
[13:55] naah, juju deploy bundle.yaml is not giving out any errors
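A sketch condensing the bundle-validation exchange at 13:39–13:55 above. The FATAL error suggests charm proof expects a charm/bundle directory rather than the path to the yaml file itself, which is likely why running it from the bundle's own directory worked for rathore_; the directory name below is illustrative:

    cd my-openstack-bundle/           # the directory containing the edited bundle.yaml
    charm proof                       # lint the bundle found in the cwd
    juju deploy ./bundle.yaml         # deploy once proof reports no problems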
[14:22] :~$ juju run-action ceph-osd/0 zap-disk /dev/sdb i-really-mean-it
[14:22] ERROR argument "/dev/sdb" must be of the form key...=value
[14:23] how am I supposed to type that
[14:24] rfowler: juju run-action ceph-osd/0 zap-disk="/dev/sdb"?
[14:27] ~$ juju run-action ceph-osd/0 zap-disk="/dev/sdb"
[14:27] ERROR invalid unit or action name "zap-disk=/dev/sdb"
[14:27] rick_h_: same
[14:28] oh sorry
[14:28] the device is a param to the action, it's not part of the action name
[14:28] sec, have to look at the action in the charm.
[14:29] rfowler: ok, so looks like you need the argument name first
[14:29] rfowler: so: run-action ceph-osd/0 zap-disk device="/dev/sdb" i-really-mean-it=true
[14:29] rfowler: or something like that
[14:29] rfowler: https://api.jujucharms.com/charmstore/v5/ceph-osd/archive/actions.yaml for the action definition
[14:30] sorry, the arg is "devices" with an S
[14:34] rick_h_: works thanks
[14:37] rick_h_: except it fails and says the disk is mounted when i know it isn't
[14:51] rfowler: doh, well not sure about that. That's going to fall into the work the charm does itself.
[14:51] rfowler: but glad we could get it executing
[14:57] manadart: this PR brings in new error messages, removes the old tools/lxdclient from the provider (i've kept the code around for now!).
[14:58] manadart: any chance you can have a look?
[14:58] manadart: I've also removed the ProviderLXDServer interface in preference to the Server interface. It made testing a lot easier in the long run
[15:23] stickupkid: Added some comments. I have to attend to the kids now; might check back later or, failing that, first thing in the morning.
[15:28] manadart: sure thing
[15:56] all: what's the correct way of upgrading a bundle?
[15:57] i have one deployed and i need to make some changes
[16:05] rathore_: so there's no method of upgrading a bundle. You just make the changes you need.
[16:06] rathore_: since bundles aren't really entities that can be tracked to tell what's changed from one install to a later one
[16:27] magicaltrout: i'm curious when you say you tested a manual Hue over bigtop... did you add puppet/packaging stuff back to bigtop for hue, or did you do a standalone hue that interacted with other bigtop components?
[16:28] either way, good to hear that hue worked pretty well!
[16:32] manadart, jam - seems autoload-credentials is throwing an error concerning oci: ERROR could not detect credentials for provider "oci": `stat /home/simon/.oraclebmc/config: no such file or directory`
[17:08] Hey all, does anyone know of an example of neutron gateway HA with juju?
[20:18] morning
[20:18] rick_h_: seen any official go-ahead?
[20:18] rick_h_: did you poke solutions qa?
[20:21] rick_h_: also, bug 1776995 is very important for the upgrade-series work
[20:22] Bug #1776995: subordinate can't relate to applications with different series
[20:22] rick_h_: we should look to get either hml or externalreality to weave it in with current work
[20:24] wallyworld: Ping me when you get in?
[20:24] thumper: no word from qa I saw today. Ty for the heads-up on the bug. I'm out ATM with an appointment.
[20:25] rick_h_: ack
[20:27] kwmonroe: sorry missed you earlier, we just grabbed the latest and manually stuck a build on there. i don't plan on trying to backport it into bigtop since it's been removed
[20:28] jhobbs: any idea on ok to release? Haven't seen Chris reply to emails yet.
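Putting the zap-disk exchange at 14:22–14:34 above together: action parameters go after the action name as key=value pairs, and per the ceph-osd actions.yaml the parameter is "devices" with an s. The action-id placeholder below is illustrative:

    # Action name first, then named key=value parameters.
    juju run-action ceph-osd/0 zap-disk devices="/dev/sdb" i-really-mean-it=true
    # run-action prints an action id; inspect the outcome with it.
    juju show-action-output <action-id>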
[20:29] roger that magicaltrout -- i figured re-importing a puppet manifest atop a bigtop repo clone would be more hassle than it was worth (considering they removed it on purpose). still good to know the integration worked.
[20:30] and you don't have to worry about those pesky debs. tar to production is the way to go ;)
[20:32] i was considering snapping it up
[20:34] i dunno, you can't always appease the ASF. I think it's a good UI for demos etc at the very least, because then business managers etc can grok what's going on
[20:34] rather than doing some hdfs dfs -ls command and showing them a terminal prompt :P
[20:46] magicaltrout: +1 on hue being a great Hadoop User Experience for demos (see what i did there?).
[20:51] magicaltrout: that said... and i say this with much fear at your retort, won't the business manager be equally impressed with the namenode UI + whatever dashboard you want (like zeppelin)?
[21:09] surely that depends on whether you're interested in getting data in or out
[21:10] i think all the data has already gone in. we just care about the output now magicaltrout. and it doesn't look good (for humanity).
[21:11] still, going for hue against http://mail-archives.apache.org/mod_mbox/bigtop-dev/201804.mbox/%3C5BA7B1B4-B514-4196-ADCB-2D8ECBCCC97F%40oflebbe.de%3E makes me think you secretly want to take over bigtop maint. you have my +1. not sure how much that pays tho.
[21:12] me and my interns against the world!
[21:12] my hopes are with you
[21:31] thumper: release call?
[21:31] coming
[21:34] wallyworld: When you're done with that call, can I have a few minutes of your time?
[21:45] wallyworld: thoughts on migration strat for the docker resource collection? I don't imagine it's critical as it'll just download the resource if not found. Oh, but what about CLI-provided resources, will they get lost?
[21:46] cory_fu: yeah, saw your ping :-) had just crawled out of bed with a coffee in time to make the meeting. free now
[21:47] veebers: migration will just need an update to the model description format to add the new collection
[21:47] that can come a bit later
[21:48] wallyworld: I'll add the new resource collection to the 'ignore' list in migration_internal_test for now (and add a card)
[21:48] wallyworld: np. PMed you a Hangout link.
[21:57] Cynerva: Are you still around?
[21:57] cory_fu: yeah, what's up?
[21:57] PM'd you a Hangout link if you have a minute
[22:01] veebers: sgtm, that's what we normally do for that
[22:30] wallyworld: https://github.com/juju-solutions/layer-caas-base/pull/5 and https://github.com/juju-solutions/charm-kubeflow-jupyterhub
[22:30] cory_fu: great ty, will try them out today
[22:33] Heading out. o/
[23:01] release the hounds! errr... I mean 2.4.0
[23:01] wheeeee