[00:40] <vino> wallyworld: addressed your comments.
[00:41] <vino> could you please take a look?
[00:41] <wallyworld> sure
[00:48] <wallyworld> vino: don't forget to change the PR description
[00:48] <vino> sure. i am doing that.
[00:51] <wallyworld> veebers: how far away are you from landing your PR?
[00:57] <veebers> wallyworld: I shouldn't be too far, was looking at the doc first. I can pivot now (just got back from lunch) and get that sorted.
[00:58] <wallyworld> veebers: that would be great as i can't deploy k8s charms at the moment
[00:58] <veebers> ah I see, that o' chestnut. Ok I'll get it sorted
[00:58] <wallyworld> Docker resource with ID: mariadb/mysql_image not found
[00:58] <wallyworld> i think that's the error that is fixed?
[00:58] <veebers> wallyworld: yep
[01:01] <wallyworld> anastasiamac: done
[01:01] <anastasiamac> wallyworld: ta
[01:03] <anastasiamac> wallyworld: model-constraints are indeed not inherited when new models are added... so --constraints on bootstrap only sets the default and controller model constraints... i guess the reason is - how would you know which model to inherit from... i suspect that this is desired behaviour
[01:03] <anastasiamac> wallyworld: i will still add --model-constraints as an option to bootstrap
[01:04] <anastasiamac> wallyworld: we should probably consider --model-constraints as an option to add-model too...
[01:04] <wallyworld> the pattern for model config is to store defaults specified at bootstrap time into a separate settings bucket
[01:04] <wallyworld> yes, it needs holistic thought
[01:04] <anastasiamac> wallyworld: yeah... maybe this is what we need to do with constraints too ...
[01:05] <wallyworld> should do a "one pager" to propose a solution
[01:05] <anastasiamac> wallyworld: yes, i'll add it to my really-want-to-address-yesterday bucket :)
[01:05] <anastasiamac> wallyworld: +1 to one pager...
[01:05] <wallyworld> i have lots of those buckets
[01:05] <anastasiamac> \o/
[01:09]  * thumper goes to get food
[01:30] <babbageclunk> thumper: sonofy.co
[01:46] <wallyworld> veebers: won't you need to import the sha packages as well?
[01:47] <thumper> babbageclunk: awesome!!!
[01:48] <veebers> wallyworld: it seems importing it just once for the 'binary' is fine. I did a manual test deploy using --resource and a straight deploy and all is happy. I can add the import now too for completeness
[01:48] <veebers> I'm just sorting the deps
[01:48] <veebers> I realised at push that I hadn't done that yet
[01:48] <wallyworld> veebers: you can't rely on that
[01:48] <veebers> wallyworld: ack fair enough, I'll add the import at usage
[01:49] <wallyworld> especially for say tests, they won't necessarily cause the import side effect
[01:49] <wallyworld> since they only operate on that package they are in plus any transitive deps
[01:50] <veebers> ack, makes sense
[01:54] <veebers> wallyworld: FYI I've pushed the updated branch w/ deps and imports
[01:54] <wallyworld> yay ty
[02:00] <thumper> babbageclunk: installed it, doesn't seem to work properly
[02:01] <babbageclunk> thumper: ah well, sorry!
[02:26] <veebers> wallyworld: out of interest, which operator image would the edge snap use? I recall you mentioned that we need to do an operator image push for the edge channel, right?
[02:26] <wallyworld> veebers: it uses whatever the last person to push a copy has uploaded
[02:26] <veebers> ah ok
[02:47] <veebers> Is there a way to force 'juju update-clouds' to update? Just removing $JUJU_DATA/clouds.yaml won't do the job
[02:48] <wallyworld> veebers: public-clouds.yaml
[02:48] <wallyworld> clouds.yaml is your personal one
[02:50] <veebers> wallyworld: where is it looking for public-clouds.yaml? Can't see it in $JUJU_DATA, nor any /snap/ dir
[02:50] <wallyworld> it's in $JUJU_DATA
[02:50] <wallyworld> same as clouds.yaml
[02:52] <veebers> my $JUJU_DATA doesn't have that file :-\ (I've set it to another dir for testing something). the only public-clouds.yaml I can find is ~/.local/share/juju/public-clouds.yaml, but moving that doesn't work (so juju isn't checking multiple places it seems)
[02:56] <wallyworld> veebers: sorry, i thought $JUJU_DATA was ~/.local/share/juju
[02:57] <wallyworld> update-clouds operates in that directory
[02:58] <wallyworld> kelvin_: your PR looks fairly complete? main issue I can see is we've lost the make target to update the deps file(s)
[03:29] <veebers> kelvin_: you may need to update the pr merge/check jobs with the deps changes. IIRC they explicitly call godeps etc.
[03:35] <wallyworld> kelvin_: there's no need to commit the lock file, just the toml
[03:35] <wallyworld> dep ensure will do what it needs to do
[03:35] <wallyworld> or whatever the cmd is to generate the lock from toml
[03:43] <vino> veebers: i have a PR for you to review
[03:44] <vino> adding ci test for export bundle feature.
[03:44] <vino> could you please take a look when you are free?
[03:44] <kelvin_> wallyworld, it should be no problem; we used the `godeps` target to ensure deps, but now we use `dep`
[03:45] <veebers> vino: sure can, link?
[03:46] <kelvin_> wallyworld, we should commit the lock file together with the .toml file. even if we use a sha for a revision, we can also just specify a branch for a dep; in that case the lock file carries the more detailed version pinning for the deps.
[03:47] <kelvin_> wallyworld, just like  package-lock.json/yarn.lock for node,
[03:47] <wallyworld> kelvin_: so the lock file is not simply generated from the toml file?
[03:47] <babbageclunk> wallyworld: The lock file should definitely be committed.
[03:48] <kelvin_> wallyworld, it's generated from the toml file, but the same toml file could produce a different lock file later
[03:48] <vino> https://github.com/CanonicalLtd/juju-qa-jenkins/pull/64
[03:48] <wallyworld> kelvin_: babbageclunk: ok, nps. whatever best practice is
[03:48] <kelvin_> wallyworld, for example, if we changed which juju packages import a dep
[03:49] <wallyworld> kelvin_: i think it would be useful to expand these use cases in the google doc and we can see what the workflow would be
[03:50] <vino> veebers: went for a tea. link: https://github.com/CanonicalLtd/juju-qa-jenkins/pull/64
[03:50] <kelvin_> wallyworld, agreed, i will prepare a doc after i get all the things working correctly
[03:50] <wallyworld> yup, sgtm
[03:51] <veebers> vino: sweet, I'll hit that in a little bit
[03:56] <babbageclunk> are other people getting go vet errors on a format string in k8s.go?
[04:03] <veebers> babbageclunk: I just ran the verify script manually now and get the error (2x errors lines 149x-ish)
[04:05] <veebers> babbageclunk: that's really odd, the verify script is called in the check Make target, which should be what the PR/Build/Anything runs
[04:09] <kelvin_> after make add-patches, i got two more errors
[04:09] <kelvin_> https://pastebin.ubuntu.com/p/n7xwpbszdn/
[04:12] <babbageclunk> veebers: weird
[04:12] <veebers> babbageclunk: yeah I'm a bit confused, I can see in the script where it calls "make check", which will call the verify script, and I'm pretty sure IGNORE_VET_WARNINGS isn't set anywhere
[04:12] <babbageclunk> veebers: maybe different versions of go vet?
[04:14] <veebers> babbageclunk: jenkins machines have 1.10.3, I just tried with snap go1.10.3 and my other go1.10 and they both show the error
[04:14] <babbageclunk> hmm
[04:18] <veebers> vino: LGTM
[04:18] <babbageclunk> kelvin_: those errors seem weird - if err is undefined, how is it building after make add-patches is run?
[04:23] <babbageclunk> kelvin_: Is the setprogress one because a patch has been accepted upstream or something?
[04:31] <kelvin_> babbageclunk, yeah, it's weird. i need to take a look further.
[04:32] <babbageclunk> kelvin_: have you kept the same shas as in dependencies.tsv?
[04:33] <kelvin_> babbageclunk, yes, all sha are kept same
[04:33] <babbageclunk> huh, then I don't get it.
[04:33] <kelvin_> babbageclunk, i translated the deps from our tsv file directly
[04:34] <babbageclunk> right, that sounds very sensible.
[05:11] <vino> veebers: thank you. I have a question about this node label. 'feature' is for any node having lxd capabilities.
[05:12] <vino> then what is goodra for?
[05:13] <veebers> vino: goodra is included in the 'features' tag, I imagine you may have seen an example that needed to use goodra explicitly
[05:13] <vino> veebers: yes, i am seeing it in bootstrap-caas.
[05:14] <veebers> vino: ah right, that would be due to the lxd version on goodra, we haven't yet been able to update the version on all machines
[05:15] <vino> veebers: ok. ty.
[08:29] <jacekn> hello. Is it possible to add model with a specific, older version? I need to test bugfix for older juju versions
[12:31] <rick_h_> jacekn: https://docs.jujucharms.com/2.4/en/models-config you can use the --agent-version flag there I believe
[12:48] <jacekn> rick_h_: ERROR "agent-version" must be set via "upgrade-model"
[12:48] <jacekn> rick_h_: ERROR cannot change version from 2.4.1 to lower version 2.3.1
[12:49] <jacekn> I just bootstrapped from scratch
[12:49] <rick_h_> jacekn: right, but can you use that on an add-model command?
[12:49] <rick_h_> jacekn: I know you can't change it on the fly like other config
[12:49] <rick_h_> jacekn: yea, if add-model with the agent version specified doesn't work then yea a bootstrap will have to be the way to go
[12:51] <jacekn> ah add-model might work (but now I'm getting no agent binaries found for version 2.3.1). I solved the problem by re-bootstrapping anyway
[12:51] <jacekn> thanks for help though
[12:54] <rick_h_> jacekn: k, when all else fails go with the hammer heh
[13:07] <magicaltrout> hello folks anyone in this channel have half a clue about the lxd snap?
[13:12] <pmatulis> magicaltrout, question?
[13:14] <magicaltrout> i stuck it in #lxd but the migrate thing gets really confused for some reason: if you say no to removing the old lxc stack it still doesn't let you run the snap lxc commands, but then something (juju or something else) is reinstalling the .deb lxc packages
[13:14] <magicaltrout> cause after a reboot they seem to magically reappear
[13:16] <magicaltrout> also, how do you reconfigure the lxdbr in snapworld? cause whilst they have internal ip addresses, nothing seems to be able to talk to them now
[13:18] <pmatulis> magicaltrout, that's weird (first part)
[13:20] <pmatulis> magicaltrout, no idea (second part)
[13:20] <pmatulis> i'll test the first part though
[13:20] <magicaltrout> thanks pmatulis just for reference this is a manual cloud box
[13:21] <magicaltrout> with a few different containers on, but they've been running in the lxd snap for weeks, the lxd deb stack has been empty for a long while
[13:27] <pmatulis> magicaltrout, so you're not using the 'lxd' cloud type. you're just creating containerised machines within a manual cloud node
[13:27] <magicaltrout> yeah
[13:34] <magicaltrout> ooooh fml
[13:34] <magicaltrout> when all your containers start
[13:34] <magicaltrout> then it insists on a migration
[13:34] <magicaltrout> *booooom*
[13:35] <magicaltrout> so they're up cause some services are responding
[13:35] <magicaltrout> but i can't login to any of them
[13:50] <magicaltrout> also
[13:50] <magicaltrout> lxd.migrate completely screws up my lxd bridge for some reason
[13:51] <magicaltrout> urgh
[13:58] <magicaltrout> https://discuss.linuxcontainers.org/t/snap-lxd-has-blocked-me-up/2382
[13:58] <magicaltrout> stuck it in there as well
[14:05] <magicaltrout> i would have thought, considering the removing of the lxc debs is optional
[14:05] <magicaltrout> that whilst it wants you to migrate to the new snap
[14:05] <magicaltrout> lxd should still function without removing it
[14:05] <magicaltrout> so whatever does the detection stuff
[14:05] <magicaltrout> seems a bit screwed
[14:33] <pmatulis> indeed
[14:48] <kwmonroe> magicaltrout: is it possible in the deb env that you symlinked /var/lib/lxd/containers to somewhere else?  i did this once in a similar aws/manual machine to put my containers on the ephemeral /dev/sdb (ln -s /var/lib/lxd /mnt, or something like that).  anyway, i recall lxd.migrate booming on trying to stat the symlinked containers.
[14:49] <magicaltrout> kwmonroe: as if you'd do something as hackish as that! ;)
[14:50] <kwmonroe> i was trying a new gin&tonic recipe at the time.  hackies ensued.
[14:51] <magicaltrout> i think i'm getting stuff back together. The "force this deb not to install" solution from the forums seems to be taking me in the right direction
[15:00] <kwmonroe> magicaltrout: one other thing to check.. if you still have an /etc/default/lxd-bridge, that comes from the .deb and may be trying to start a bridge that should be handled by the lxd snap.  so if you have that file and you seem to have conflicting bridges, try moving that like the migrate script would have done:
[15:00] <kwmonroe> # ll /etc/default/lxd-bridge.migrated
[15:00] <kwmonroe> -rw-r--r-- 1 root root 1206 Jun 28 16:18 /etc/default/lxd-bridge.migrated
[15:41] <manadart> externalreality: A small one: https://github.com/juju/juju/pull/9004. Preparatory patch for upgrade-series worker implementation.
[16:05] <magicaltrout> kwmonroe: we're wiring up Druid to the HDFS storage engine and rmcd says it says "Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) on the classpath of your Druid nodes. You can do this by copying them into conf/druid/_common/"
[16:05] <magicaltrout> working on that logic could we build off of layer:hadoop-client or something and get the configs that way?
[16:15] <magicaltrout> or apache-bigtop-base?
[16:15] <magicaltrout> what part of the stack installs the configs?
[16:30] <magicaltrout> rick_h_: how do we get a big data category on discourse?
[16:44] <rick_h_> magicaltrout: you ask nicely and when I get back from lunch I add it in there for ya
[16:44] <rick_h_> Like magic!
[16:45] <magicaltrout> Dearest Rick
[16:45] <magicaltrout> Could we possibly have
[16:45] <magicaltrout> a Big Data category in the discourse forum
[16:46] <magicaltrout> so that I may ask pertinent questions, like the one above, and their responses be stored for all time, so that others may also benefit from Kevin's infinite wisdom
[16:46] <magicaltrout> Thanks
[16:46] <magicaltrout> Tom
[16:49] <rick_h_> LoL for you magicaltrout , anything
[17:17] <kwmonroe> magicaltrout: you want to build on layer:hadoop-client.  that will include the hadoop-plugin relation, so once your charm relates to hadoop-plugin, the system will automatically install all the hadoopy things (including ./conf files) from bigtop.
[17:18] <kwmonroe> then you make a druid reactive handler that says @when(hadoop.hdfs.ready), DO_THE_STUFF
[17:18] <magicaltrout>  thanks kwmonroe
[17:20] <kwmonroe> np magicaltrout.  you can also include layer:apache-bigtop-base, but i would only do that if druid is a bigtop project.  the only thing that layer helps you do is set up puppet for bigtopy stuff to happen.
[17:27] <rick_h_> magicaltrout: https://discourse.jujucharms.com/c/charms/big-data
[17:27] <rick_h_> kwmonroe: ^
[17:31] <magicaltrout> why thanks rick_h_
[17:31] <rick_h_> magicaltrout: :)
[20:44] <veebers> Morning all o/
[20:46] <rick_h_> morning veebers
[20:46] <rick_h_> happy friday to you
[20:54] <veebers> why thank you rick_h_ :-) It's a foggy wet Friday but I won't complain :-)
[20:56] <rick_h_> hey, better than it being thurday! :P
[20:56] <veebers> how are things today, cory_fu? I haven't broken anything else for you yet? :-)
[20:57] <veebers> Indeed! Those poor suckers who are still stuck on Thursday
[21:03] <cory_fu> veebers: :)  All good, thanks
[22:53] <veebers> what's the level of repetition that makes sense for a table-based test? i.e. if I'm doing two comparisons, is it worth it?
[23:09] <thumper> veebers: I've moved away from table-based tests mostly
[23:10] <thumper> Instead consider a helper function
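A sketch in Go of the two styles being contrasted, using a hypothetical multiply function (not from the juju codebase). For just two comparisons, the helper reads more directly than the table machinery:

```go
package main

import "fmt"

// multiply is a stand-in for the function under test.
func multiply(a, b int) int { return a * b }

// Table style: a slice of cases plus a loop. For only two cases the
// struct boilerplate can outweigh the benefit.
func tableStyle() {
	for _, tc := range []struct{ a, b, want int }{
		{2, 3, 6},
		{4, 5, 20},
	} {
		if got := multiply(tc.a, tc.b); got != tc.want {
			panic(fmt.Sprintf("multiply(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want))
		}
	}
}

// Helper style: a small assertion helper keeps each case on one
// readable line, and each failure points at a distinct call site.
func checkMultiply(a, b, want int) {
	if got := multiply(a, b); got != want {
		panic(fmt.Sprintf("multiply(%d, %d) = %d, want %d", a, b, got, want))
	}
}

func main() {
	tableStyle()
	checkMultiply(2, 3, 6)
	checkMultiply(4, 5, 20)
	fmt.Println("ok")
}
```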