[00:40] wallyworld: addressed ur comments.
[00:41] cud u plz take a look.
[00:41] sure
[00:48] vino: don't forget to change the PR description
[00:48] sure. i am doing that.
[00:51] veebers: how far away are you from landing your PR?
[00:57] wallyworld: I shouldn't be too far, was looking at the doc first. I can pivot now (just got back from lunch) and get that sorted.
[00:58] veebers: that would be gr8 as i can't deploy k8s charms at the moment
[00:58] ah I see, that ol' chestnut. Ok I'll get it sorted
[00:58] Docker resource with ID: mariadb/mysql_image not found
[00:58] i think that's the error that is fixed?
[00:58] wallyworld: yep
[01:01] anastasiamac: done
[01:01] wallyworld: ta
[01:03] wallyworld: model-constraints are indeed not inherited when new models are added... so --constraints on bootstrap only sets the default and controller model constraints... i guess the reason is - how would u know from which model to inherit... i suspect that this is desired behaviour
[01:03] wallyworld: i will still add --model-constraints as an option to bootstrap
[01:04] wallyworld: we should probably consider --model-constraints as an option to add-model too...
[01:04] the pattern for model config is to store defaults specified at bootstrap time in a separate settings bucket
[01:04] yes, it needs holistic thought
[01:04] wallyworld: yeah... maybe this is what we need to do with constraints too ...
[01:05] should do a "one pager" to propose a solution
[01:05] wallyworld: yes, i'll add it to my really-want-to-address-yesterday bucket :)
[01:05] wallyworld: +1 to one pager...
[01:05] i have lots of those buckets
[01:05] \o/
[01:09] * thumper goes to get food
[01:30] thumper: sonofy.co
[01:46] veebers: won't you need to import the sha packages as well?
[01:47] babbageclunk: awesome!!!
[01:48] wallyworld: it seems just once in for the 'binary' is fine. I did a manual test deploy using --resource and straight deploy and all is happy. I can add the import not too for completeness
[01:48] I'm just sorting the deps
[01:48] I realised at push that I hadn't done that yet
[01:48] veebers: you can't rely on that
[01:48] wallyworld: ack fair enough, I'll add the import at usage
[01:49] especially for say tests, they won't necessarily cause the import side effect
[01:49] since they only operate on that package they are in plus any transitive deps
[01:50] ack, makes sense
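An aside on the import exchange above ([01:46]-[01:50]): it concerns Go's blank-import side-effect pattern, where a hash implementation registers itself in its package init() and is therefore only usable if something in the final binary actually imports it. The sketch below is illustrative only; crypto/sha256 is an assumed stand-in for whichever sha packages the branch in question needs.

    package main

    import (
        "crypto"
        _ "crypto/sha256" // blank import: its init() registers SHA-256 with the crypto package
        "fmt"
    )

    func main() {
        // Without the blank import above, crypto.SHA256.Available() reports false
        // and crypto.SHA256.New() panics with "requested hash function is unavailable".
        fmt.Println("sha256 registered:", crypto.SHA256.Available())

        h := crypto.SHA256.New()
        h.Write([]byte("example payload"))
        fmt.Printf("digest: %x\n", h.Sum(nil))
    }

This is also why "it happens to work in the full binary" is fragile, as noted at [01:49]: a package's test binary compiles only that package plus its transitive imports, so the registration side effect may never run there unless the import sits next to the usage.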
[01:54] wallyworld: FYI I've pushed the updated branch w/ deps and imports
[01:54] yay ty
[02:00] babbageclunk: installed it, doesn't seem to work properly
[02:01] thumper: ah well, sorry!
[02:26] wallyworld: which operator image would the edge snap use out of interest? I recall you mentioned that we need to do an operator image push for the edge channel right?
[02:26] veebers: it uses whatever the last person to push a copy has uploaded
[02:26] ah ok
[02:47] Is there a way to force 'juju update-clouds' to update? Just removing $JUJU_DATA/clouds.yaml won't do the job
[02:48] veebers: public-clouds.yaml
[02:48] clouds.yaml is your personal one
[02:50] wallyworld: where is it looking for public-clouds.yaml? Can't see it in $JUJU_DATA, nor any /snap/ dir
[02:50] it's in $JUJU_DATA
[02:50] same as clouds.yaml
[02:52] my $JUJU_DATA doesn't have that file :-\ (I've set it to another dir for testing something). the only public-clouds.yaml I can find is ~/.local/share/juju/public-clouds.yaml, but moving that doesn't work (so juju isn't checking multiple places it seems)
[02:56] veebers: sorry, i thought $JUJU_DATA was ~/.local/share/juju
[02:57] update-clouds operates in that directory
[02:58] kelvin_: your PR looks fairly complete? main issue I can see is we've lost the make target to update the deps file(s)
[03:29] kelvin_: you may need to update the pr merge/check jobs with the deps changes. IIRC they explicitly call godeps etc.
[03:35] kelvin_: there's no need to commit the lock file, just the toml
[03:35] dep ensure will do what it needs to do
[03:35] or whatever the cmd is to generate the lock from toml
[03:43] veebers: i have a PR for u to review
[03:44] adding a CI test for the export-bundle feature.
[03:44] cud u plz take a look when u r free?
[03:44] wallyworld, it should be no problem, we used the `godeps` target to ensure deps but now we use `dep`
[03:45] vino: sure can, link?
[03:46] wallyworld, we should commit the lock file together with the .toml file. even if we use a sha for the revision, we can also just specify a branch for a dep; in that case, the lock file gives more detailed version control for the deps.
[03:47] wallyworld, just like package-lock.json/yarn.lock for node,
[03:47] kelvin_: so the lock file is not simply generated from the toml file?
[03:47] wallyworld: The lock file should definitely be committed.
[03:48] wallyworld, it's generated from the toml file, but it could come out different later even when using the same toml file
[03:48] https://github.com/CanonicalLtd/juju-qa-jenkins/pull/64
[03:48] kelvin_: babbageclunk: ok, nps. whatever best practice is
[03:48] wallyworld, for example, if we change the packages in juju that import a dep
[03:49] kelvin_: i think it would be useful to expand these use cases in the google doc and we can see what the workflow would be
[03:50] veebers: went for a tea. link: https://github.com/CanonicalLtd/juju-qa-jenkins/pull/64
[03:50] wallyworld, agreed, i will prepare a doc after i get all the things working correctly
[03:50] yup, sgtm
[03:51] vino: sweet, I'll hit that in a little bit
[03:56] are other people getting go vet errors on a format string in k8s.go?
[04:03] babbageclunk: I just ran the verify script manually now and get the error (2x errors lines 149x-ish)
[04:05] babbageclunk: that's really odd, the verify script is called in the check Make target, which should be what the PR/Build/Anything runs
[04:09] after make add-patches, i got two more errors
[04:09] https://pastebin.ubuntu.com/p/n7xwpbszdn/
[04:12] veebers: weird
[04:12] babbageclunk: yeah I'm a bit confused, I can see in the script where it calls "make check", which will call the verify script, and I'm pretty sure IGNORE_VET_WARNINGS isn't set anywhere
[04:12] veebers: maybe different versions of go vet?
[04:14] babbageclunk: jenkins machines have 1.10.3, I just tried with snap go1.10.3 and my other go1.10 and they both show the error
[04:14] hmm
[04:18] vino: LGTM
[04:18] kelvin_: those errors seem weird - if err is undefined how is it building after make add-patches is run.
[04:18] ?
[04:23] kelvin_: Is the setprogress one because a patch has been accepted upstream or something?
[04:31] babbageclunk, yeah, it's weird. i need to take a look further.
[04:32] kelvin_: have you kept the same shas as in dependencies.tsv?
[04:33] babbageclunk, yes, all shas are kept the same
[04:33] huh, then I don't get it.
[04:33] babbageclunk, i translated the deps from our tsv file directly
[04:34] right, that sounds very sensible.
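On the go vet format-string errors mentioned at [03:56]: the log doesn't show the k8s.go error itself, but the hypothetical snippet below illustrates the class of mistake that go vet's printf check reports (and, since those checks have tightened across releases, why two Go versions can disagree about the same file).

    package main

    import "fmt"

    // unitsMessage deliberately misuses a format verb: %d expects an integer,
    // but count has already been rendered as a string, so `go vet` reports
    // something like "Sprintf format %d has arg count of wrong type string".
    // The program still compiles; vet is a separate static check.
    func unitsMessage(n int) string {
        count := fmt.Sprintf("%v", n)
        return fmt.Sprintf("found %d units", count)
    }

    func main() {
        fmt.Println(unitsMessage(3))
    }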
[05:11] veebers: thank u. Have a question abt this node label. the feature is for any node having lxd capabilities.
[05:12] then wat is goodra for?
[05:13] vino: goodra is included in the 'features' tag, I imagine you may have seen an example that needed to use goodra explicitly
[05:13] veebers: i am seeing in bootstrap caas yes.
[05:14] vino: ah right, that would be due to the lxd version on goodra, we haven't yet been able to update the version on all machines
[05:15] veebers: ok. ty.
[08:29] hello. Is it possible to add a model with a specific, older version? I need to test a bugfix for older juju versions
[12:31] jacekn: https://docs.jujucharms.com/2.4/en/models-config you can use the --agent-version flag there I believe
[12:48] rick_h_: ERROR "agent-version" must be set via "upgrade-model"
[12:48] rick_h_: ERROR cannot change version from 2.4.1 to lower version 2.3.1
[12:49] I just bootstrapped from scratch
[12:49] jacekn: right, but can you use that on an add-model command?
[12:49] jacekn: I know you can't change it on the fly like other config
[12:49] jacekn: yea, if add-model with the agent version specified doesn't work then yea a bootstrap will have to be the way to go
[12:51] ah add-model might work (but now I'm getting no agent binaries found for version 2.3.1). I solved the problem by re-bootstrapping anyway
[12:51] thanks for the help though
[12:54] jacekn: k, when all else fails go with the hammer heh
[13:07] hello folks, anyone in this channel have half a clue about the lxd snap?
[13:12] magicaltrout, question?
[13:14] i stuck it in #lxd but the migrate thing gets really confused for some reason and if you say no to removing the old lxc stack it still doesn't let you run the snap lxc commands, but then something (juju or something else) is reinstalling the .deb lxc packages
[13:14] cause after a reboot they seem to magically reappear
[13:16] also, how do you reconfigure the lxdbr in snapworld? cause whilst they have internal ip addresses nothing seems to be able to talk to them now
[13:18] magicaltrout, that's weird (first part)
[13:20] magicaltrout, no idea (second part)
[13:20] i'll test the first part though
[13:20] thanks pmatulis, just for reference this is a manual cloud box
[13:21] with a few different containers on, but they've been running in the lxd snap for weeks, the lxd deb stack has been empty for a long while
[13:27] magicaltrout, so you're not using the 'lxd' cloud type. you're just creating containerised machines within a manual cloud node
[13:27] yeah
[13:34] ooooh fml
[13:34] when all your containers start
[13:34] then it insists on a migration
[13:34] *booooom*
[13:35] so they're up cause some services are responding
[13:35] but i can't log in to any of them
[13:50] also
[13:50] lxd.migrate completely screws up my lxd bridge for some reason
[13:51] urgh
[13:58] https://discuss.linuxcontainers.org/t/snap-lxd-has-blocked-me-up/2382
[13:58] stuck it in there as well
[14:05] i would have thought, considering that removing the lxc debs is optional
[14:05] that whilst it wants you to migrate to the new snap
[14:05] lxd should still function without removing it
[14:05] so whatever does the detection stuff
[14:05] seems a bit screwed
[14:33] indeed
[14:48] magicaltrout: is it possible in the deb env that you symlinked /var/lib/lxd/containers to somewhere else? i did this once in a similar aws/manual machine to put my containers on the ephemeral /dev/sdb (ln -s /var/lib/lxd /mnt, or something like that). anyway. i recall lxd.migrate booming on trying to stat the symlink'd containers.
[14:49] kwmonroe: as if you'd do something as hackish as that! ;)
[14:50] i was trying a new gin&tonic recipe at the time. hackies ensued.
[14:51] i think i'm getting stuff back together. The "force this deb not to install" solution from the forums seems to be taking me in the right direction
[15:00] magicaltrout: one other thing to check.. if you still have an /etc/default/lxd-bridge, that comes from the .deb and may be trying to start a bridge that should be handled by the lxd snap. so if you have that file and you seem to have conflicting bridges, try moving it like the migrate script would have done:
[15:00] # ll /etc/default/lxd-bridge.migrated
[15:00] -rw-r--r-- 1 root root 1206 Jun 28 16:18 /etc/default/lxd-bridge.migrated
[15:41] externalreality: A small one: https://github.com/juju/juju/pull/9004. Preparatory patch for the upgrade-series worker implementation.
[16:05] kwmonroe: we're wiring up Druid to the HDFS storage engine and rmcd says it says "Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) on the classpath of your Druid nodes. You can do this by copying them into conf/druid/_common/"
[16:05] working on that logic, could we build off of layer:hadoop-client or something and get the configs that way?
[16:15] or apache-bigtop-base?
[16:15] what part of the stack installs the configs?
[16:30] rick_h_: how do we get a big data category on discourse?
[16:44] magicaltrout: you ask nicely and when I get back from lunch I add it in there for ya
[16:44] Like magic!
[16:45] Dearest Rick
[16:45] Could we possibly have
[16:45] a Big Data category in the discourse forum
[16:46] so that I may ask pertinent questions, like the one above, and their response be stored for all time, so that others may also benefit from Kevin's infinite wisdom
[16:46] Thanks
[16:46] Tom
[16:49] LoL for you magicaltrout, anything
[17:17] magicaltrout: you want to build on layer:hadoop-client. that will include the hadoop-plugin relation, so once your charm relates to hadoop-plugin, the system will automatically install all the hadoopy things (including ./conf files) from bigtop.
[17:18] then you make a druid reactive handler that says @when(hadoop.hdfs.ready), DO_THE_STUFF
[17:18] thanks kwmonroe
[17:20] np magicaltrout. you can also include layer:apache-bigtop-base, but i would only do that if druid is a bigtop project. the only thing that layer helps you do is set up puppet for bigtopy stuff to happen.
[17:27] magicaltrout: https://discourse.jujucharms.com/c/charms/big-data
[17:27] kwmonroe: ^
[17:31] why thanks rick_h_
[17:31] magicaltrout: :)
[20:44] Morning all o/
[20:46] morning veebers
[20:46] happy friday to you
[20:54] why thank you rick_h_ :-) It's a foggy wet Friday but I won't complain :-)
[20:56] hey, better than it being Thursday! :P
[20:56] how are things today cory_fu, I haven't broken anything else for you yet? :-)
[20:57] Indeed! Those poor suckers who are still stuck on Thursday
[21:03] veebers: :) All good, thanks
[22:53] what's the level of repetition that makes sense for a table based test? i.e. if I'm doing two comparisons is it worth doing?
[23:09] veebers: I've moved away from table based tests mostly
[23:10] Instead consider a helper function
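To make the closing exchange about table-based tests concrete, here is a hedged sketch of the two styles being compared. The names size, TestSizeTable, assertSize and TestSizeHelper are all hypothetical and do not correspond to any real juju code; everything below would live together in an ordinary _test.go file.

    package size

    import "testing"

    // size is a stand-in function under test.
    func size(n int) string {
        if n < 10 {
            return "small"
        }
        return "large"
    }

    // Table form: pays off when many cases share exactly the same shape.
    func TestSizeTable(t *testing.T) {
        cases := []struct {
            name string
            in   int
            want string
        }{
            {"small value", 1, "small"},
            {"large value", 100, "large"},
        }
        for _, c := range cases {
            if got := size(c.in); got != c.want {
                t.Errorf("%s: size(%d) = %q, want %q", c.name, c.in, got, c.want)
            }
        }
    }

    // Helper form: with only a couple of comparisons this is often clearer,
    // which matches the suggestion at [23:10].
    func assertSize(t *testing.T, in int, want string) {
        t.Helper()
        if got := size(in); got != want {
            t.Errorf("size(%d) = %q, want %q", in, got, want)
        }
    }

    func TestSizeHelper(t *testing.T) {
        assertSize(t, 1, "small")
        assertSize(t, 100, "large")
    }

With only two comparisons, the table machinery is mostly overhead; a t.Helper()-marked assertion keeps the failure location pointing at the calling line while staying just as short.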