[00:19] <mup> Bug # changed: 1274460, 1274922, 1276943, 1276976
[00:21]  * redir eods a little early for M's birthday
[00:43] <mup> Bug # changed: 1277307, 1277359, 1279093, 1281376, 1281377, 1282731, 1283814
[00:49] <mup> Bug #1284183 changed: jujuclient.EnvError: <Env Error - Details:  {   u'Error': u'watcher was stopped', u'RequestId': 9, u'Response': {   }} <api> <status> <juju-core:Fix Released> <https://launchpad.net/bugs/1284183>
[00:49] <mup> Bug #1285115 changed: very slow upgrade-charm run <upgrade-charm> <juju-core:Fix Released> <https://launchpad.net/bugs/1285115>
[02:16] <mup> Bug # changed: 1286517, 1286570, 1287661, 1288034
[02:25] <mup> Bug # changed: 1288745, 1293324, 1294458, 1294462
[02:40] <mup> Bug # opened: 1288745, 1293324, 1294458, 1294462
[02:41] <anastasiamac> wallyworld: menn0: thumper: m having one of those funnily interesting days \o/ where can I find a list of all hooks that we support and conditions under which they fire? :D
[02:41] <wallyworld> source code :-)
[02:42] <wallyworld> not sure there's doc?
[02:42] <anastasiamac> :D where in source code?
[02:42] <wallyworld> there might be
[02:42] <anastasiamac> or docs?
[02:42] <anastasiamac> we do not have a list?
[02:42] <wallyworld> nfi about docs
[02:42] <wallyworld> may do but i've never used them
[02:42] <katco> anastasiamac: https://jujucharms.com/docs/stable/reference-charm-hooks
[02:42] <menn0> anastasiamac: does this help: https://jujucharms.com/docs/stable/developer-event-cycle
[02:43] <anastasiamac> katco:  menno: this is awesome :D tyvm
[02:44] <wallyworld> in case docs are out of date, hooks are defined in charm/hooks/hooks.go
[02:46] <mup> Bug # changed: 1288745, 1293324, 1294458, 1294462, 1294843, 1297940, 1298662, 1298755, 1299032
[02:47] <anastasiamac> wallyworld: thank you :)
[03:01] <mup> Bug #1299040 changed: juju should fire depart/join hooks when units are dead <hooks> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1299040>
[03:01] <mup> Bug #1299579 changed: support unconfined and nested container app armor profile in local provider <improvement> <local-provider> <ubuntu-engineering> <juju-core:Won't Fix> <https://launchpad.net/bugs/1299579>
[03:01] <mup> Bug #1299584 changed: dead machines stuck in status <destroy-machine> <landscape> <juju-core:Fix Released> <https://launchpad.net/bugs/1299584>
[03:28] <mup> Bug # changed: 1300033, 1300755, 1300823, 1301565, 1301999, 1302015, 1303204
[03:37] <mup> Bug #1303205 changed: an option for add-machine to return back the registered machine id <add-machine> <improvement> <juju-core:Fix Released> <https://launchpad.net/bugs/1303205>
[03:37] <mup> Bug #1307643 changed: juju upgrade-juju --upload-tools does not honor the arch <simplestreams> <upgrade-juju> <upload-tools> <juju-core:Fix Released> <https://launchpad.net/bugs/1307643>
[03:37] <mup> Bug #1308101 changed: juju/testing: suite-level Patch never gets restored <tech-debt> <testing> <juju-core:Fix Released> <https://launchpad.net/bugs/1308101>
[04:04] <wallyworld> menn0: at some point, tomorrow or whenever, i'd love eyeballs on this https://github.com/juju/juju/pull/6473 . There's one commit that's important; the rest can be eyeballed as it's already been reviewed when landing into the feature branch initially. And it's hidden by feature flag and will need rework anyway as spec has changed :-(
[04:04] <mup> Bug # changed: 1308966, 1309441, 1311781, 1312786, 1312951, 1314682, 1317596
[04:34] <mup> Bug # changed: 1318601, 1318923, 1319441, 1319608, 1320218, 1321407, 1321408, 1321793, 1322829, 1323441, 1323623
[04:46] <menn0> wallyworld: ok looking now
[04:46] <wallyworld> it can wait
[04:46] <wallyworld> late for you
[04:47] <menn0> wallyworld: yeah, actually it looks big :)
[04:47] <menn0> wallyworld: which commit is the important one?
[04:47] <wallyworld> the one commit that needs review is small
[04:47] <wallyworld> it's in the PR
[04:47] <menn0> wallyworld: duh
[04:47] <menn0> :)
[04:51] <frobware> hackedbellini: https://bugs.launchpad.net/juju/+bug/1634744
[04:51] <mup> Bug #1634744:  bootstrap fails with LXD provider when not using lxdbr0 <bootstrap> <lxd-provider> <juju:New> <https://launchpad.net/bugs/1634744>
[04:53] <menn0> wallyworld: reviewed that commit and found a few little problems
[04:53]  * menn0 is EOD
[04:53] <wallyworld> menn0: ty
[05:54] <bradm> is there a reason there's a juju add-subnet, and lists-subnet, but no apparent way to edit said subnet?
[05:59] <anastasiamac> bradm: no reason :D
[06:01] <bradm> although there's not a lot in the way of editing you can do, I guess
[06:01] <bradm> just remove would do
[06:19] <mup> Bug # changed: 1313862, 1324097, 1324666, 1324949, 1367863
[06:24] <anastasiamac> frobware: do u know if this is still applicable for juju 2?
[06:24] <anastasiamac> https://bugs.launchpad.net/juju-core/+bug/928624
[06:24] <mup> Bug #928624: cached unit public addresses are problematic when public ip address changes <addressability> <canonical-is> <canonical-webops-juju> <network> <juju-core:Triaged> <pyjuju:Won't Fix> <https://launchpad.net/bugs/928624>
[06:26] <frobware> anastasiamac: I would say yes, or it still warrants investigation or a repro.
[06:26] <anastasiamac> frobware: tyvm!
[06:40] <mup> Bug # changed: 766721, 928624, 1371558, 1372016
[06:49] <mup> Bug #1049340 changed: cross-model relations <canonical-is> <canonistack> <feature> <ubuntu-engineering> <juju:Triaged> <juju-core:Won't Fix> <pyjuju:Confirmed> <https://launchpad.net/bugs/1049340>
[06:49] <mup> Bug #1634761 opened: juju 2 subnet refresh from cli <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1634761>
[07:05] <mup> Bug #1634761 changed: juju 2 subnet refresh from cli <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1634761>
[07:05] <mup> Bug #1049340 opened: cross-model relations <canonical-is> <canonistack> <feature> <ubuntu-engineering> <juju:Triaged> <juju-core:Won't Fix> <pyjuju:Confirmed> <https://launchpad.net/bugs/1049340>
[07:11] <mup> Bug # changed: 1049340, 1173089, 1183309, 1188126, 1194483, 1208430, 1208787, 1209112
[07:11] <mup> Bug #1634761 opened: juju 2 subnet refresh from cli <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1634761>
[07:17] <mup> Bug # opened: 1173089, 1183309, 1188126, 1194483, 1208430, 1208787, 1209112
[07:20] <mup> Bug # changed: 1173089, 1183309, 1188126, 1194483, 1208430, 1208787, 1209112, 1209452, 1213186
[07:32] <mup> Bug #1209452 opened: juju status should report hardware characteristics in a more structured format <hours> <status> <ui> <juju-core:Fix Released> <https://launchpad.net/bugs/1209452>
[07:32] <mup> Bug #1213186 opened: Show user progress of deployed service <papercut> <performance> <ui> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1213186>
[07:41] <mup> Bug #1209452 changed: juju status should report hardware characteristics in a more structured format <hours> <status> <ui> <juju-core:Fix Released> <https://launchpad.net/bugs/1209452>
[07:41] <mup> Bug #1213186 changed: Show user progress of deployed service <papercut> <performance> <ui> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1213186>
[07:50] <mup> Bug #1217726 changed: gui: viewing a service or a service unit causes the api server to enumerate all availalble tools <api> <juju-gui> <juju-core:Fix Released> <https://launchpad.net/bugs/1217726>
[07:50] <mup> Bug #1218651 changed: Requests to meta-data service do not timeout <canonistack> <performance> <security> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1218651>
[07:50] <mup> Bug #1634761 changed: juju 2 subnet refresh from cli <canonical-bootstack> <juju:Triaged> <https://launchpad.net/bugs/1634761>
[08:26] <mup> Bug # changed: 1221501, 1221685, 1223339, 1223752, 1227450, 1229755, 1234287, 1234677, 1236662
[08:38] <mup> Bug # changed: 1238677, 1241763, 1241840, 1242725, 1243768, 1244382, 1244841
[08:47] <mup> Bug # opened: 1238677, 1241763, 1241840, 1242725, 1243768, 1244382, 1244841
[08:56] <mup> Bug # changed: 1238677, 1241763, 1241840, 1242725, 1243768, 1244382, 1244841
[09:05] <mup> Bug # changed: 1248674, 1248800, 1252322, 1253651, 1254790, 1255786, 1257729, 1259496
[09:25] <frobware> on my nuc: load average: 2251.20, 1112.39, 431.37
[09:25] <frobware> lol - no wonder it is slow to login...
[09:26]  * frobware grabs some coffee and waits for the load avg to dip below 100...
[09:39] <voidspace> perrito666: you've touched the vsphere provider - do you know who wrote it?
[10:23] <frobware> mgz: ping
[10:52] <frobware> mgz: any chance you could jump on a HO w.r.t. simplestreams
[10:57] <SimonKLB> hey, could anyone here give me some best practice for dealing with exceptions during a config update in charmed software - to give some context, right now I'm using data_changed() to see if the configuration has changed, but if the configuration task does not complete, the config might be unchanged in the software
[10:58] <SimonKLB> and since data_changed() is updating the unitdata, it would not think the config has changed on the next iteration
[11:00] <SimonKLB> it would be nice to have some kind of two-step update rather than writing the new data when calling data_changed() - for example, first being able to check if the data has changed, and then committing it to the unitdata.kv when the update has actually completed successfully
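The two-step pattern SimonKLB is asking for can be sketched in plain Python. This is a minimal illustration, not the charmhelpers/charms.reactive API: the `_kv` dict and the helper names are hypothetical stand-ins for the persistent unitdata.kv store.

```python
import hashlib
import json

# Hypothetical stand-in for the charm's persistent key/value store
# (unitdata.kv in a real charm).
_kv = {}

def _hash(value):
    # Stable hash of a JSON-serializable value.
    return hashlib.sha1(json.dumps(value, sort_keys=True).encode()).hexdigest()

def data_has_changed(key, value):
    """Step 1: check only -- does NOT record the new value."""
    return _kv.get(key) != _hash(value)

def commit_data(key, value):
    """Step 2: record the value only after the config task succeeded."""
    _kv[key] = _hash(value)

# Usage: apply the config, and commit the hash only on success, so a
# failed run is seen as "changed" again on the next hook invocation.
cfg = {"port": 8080}
if data_has_changed("config", cfg):
    try:
        pass  # the real apply_config(cfg) work would go here
    except Exception:
        raise  # stored hash untouched; retried next hook
    else:
        commit_data("config", cfg)
```

The key difference from a single `data_changed()` call is that the stored hash is only advanced in step 2, after the update is known to have succeeded.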
[11:03] <aluria> hi o/ -- I'm trying to get behavior similar to when I deployed ubuntu --to lxc:N (using juju1) --> now, using --to lxd:N (using juju2)   --- on juju1+LXCs, I had to edit /usr/share/lxc/templates/lxc-ubuntu-cloud to map ifaces into bare metal bridges
[11:04] <aluria> now, I see juju2+LXDs already creates all ifaces listed in "juju list-subnets" -- is that the expected behavior? Could I limit somehow the devices shown on "lxc config show <lxd-name>" ?
[11:04] <aluria> ie. have an LXD only mapping a couple of ifaces into bare metal bridges
[11:06] <aluria> frobware: if I may ask you ^^
[11:07] <frobware> aluria: not at the moment, though that is something we are aware of. In a future version this may well be dynamic (i.e., we will only add what is needed in the container).
[11:08] <frobware> aluria: just to be clear, you're seeing this on MAAS, correct?
[11:09] <aluria> frobware: thx -- and how is the order chosen? ie. eth0 maps to br0-net2, eth1 to br0-net0, ...
[11:09] <aluria> frobware: on MAAS 2, indeed
[11:09] <perrito666> voidspace: answered you in priv
[11:10] <frobware> aluria: they should be paired so that eth0 in the container will be br-eth0 on the host
[11:10] <frobware> aluria: you can see the mapping if you run: lxc config show <container-name>
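The pairing frobware describes appears in the container's device list; a `lxc config show <container-name>` fragment might look roughly like this (an illustrative sketch -- the actual interface and bridge names depend on the host's configuration):

```yaml
devices:
  eth0:
    name: eth0       # interface name inside the container
    nictype: bridged
    parent: br-eth0  # host bridge it is attached to
    type: nic
  eth1:
    name: eth1
    nictype: bridged
    parent: br-eth1
    type: nic
```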
[11:11] <aluria> frobware: hmm, we're using disable-network-management=True and copying our own /e/n/interfaces file, so names don't follow br-eth0, etc.
[11:11] <dimitern> aluria: if you want not to bridge certain maas node interfaces, keep them unconfigured in maas - juju won't bridge unconfigured interfaces
[11:12] <aluria> frobware: and yep, "lxc config show <lxd-name>" helps to troubleshoot, but I was wondering if that order will be followed on every redeploy (hope so) + on every lxd (I imagine that will happen)
[11:12] <dimitern> aluria: can I ask why are you using disable-net-mgmt: true ?
[11:13] <aluria> dimitern: I'm using "disable-net-mgmt: true" b/c we used to (in bootstack)... to be able to configure bonds, vlan tagging, bridges and make LXCs bridge via the bridge we want and not lxcbr0
[11:14] <mgz> frobware: yo, how can I help?
[11:14] <frobware> mgz: I'm stuck, blocked and confused. Can you jump on a HO?
[11:14] <aluria> dimitern: with juju2, things will change (iirc, on maas 2.2?, bridges will be supported)
[11:14] <dimitern> aluria: ah, I see - well, with juju2 you should have most of that config done by juju on maas
[11:14] <dimitern> aluria: maas 2.1 has initial bridges support (already released?)
[11:15] <mgz> frobware: same one as for standup?
[11:15] <frobware> mgz: yep.
[11:17] <aluria> dimitern: not on ppa:maas/stable but we'll have to check devel series in the near future -- thx for the info on "juju2 not bridging unconfigured interfaces"
[11:17] <dimitern> aluria: it's on ppa:maas/next but on Y, IIRC
[11:18] <aluria> dimitern: ah, it looks like it's also on X, ta
[11:20] <dimitern> aluria: yeah, for L2 support (e.g. neutron) that's how it'll work; in future releases we'll have better way of communicating "I just want L2 device(s) thank you"
[11:28] <dimitern> I'd appreciate a second look at https://github.com/juju/juju/pull/6468 please
[11:55] <frobware> voidspace, dooferlad: ^^ any volunteers for dimitern's PR. I took a look but somebody else should too.
[11:55] <dimitern> it's easier to review now, as the prereq has landed and I rebased onto develop
[11:57] <voidspace> frobware: dimitern: looking
[11:57] <dimitern> voidspace: ta!
[12:18] <dimitern> frobware: I figured out how to trigger juju ssh hanging on demand :) not with a blackhole route to the node's public address (that gets EINVAL), but with iptables -t filter -I OUTPUT 1 -s <IP>/32 -j DROP
[12:24] <dimitern> even better (somewhat..) I managed to trigger the bug in parallel.Try
[12:24] <dimitern> when a dual-NIC juju controller node has a -j DROP rule on its first NIC for traffic arriving from the client
[12:49] <voidspace> dimitern: so you've added AllNetworkAddresses to the sshclient  shim
[12:49] <dimitern> voidspace: yeah
[12:49] <voidspace> dimitern: but this PR doesn't implement AllNetworkAddresses, was it pre-existing, just not exposed?
[12:50] <voidspace> dimitern: and what's the difference between Addresses and AllNetworkAddresses (except the error return value)?
[12:50] <dimitern> voidspace: it was in the prerequisite PR - 6467
[12:50] <voidspace> dimitern: ah
[12:51] <dimitern> voidspace: Addresses() returns the merged machine and provider addresses stored on the machine doc (considered obsolete source)
[12:51] <dimitern> voidspace: and AllNetworkAddresses() returns the machine's addresses assigned to link layer devices
[12:51] <voidspace> dimitern: the machine doc is now considered obsolete? what is authoritative?
[12:51] <voidspace> ah
[12:51] <voidspace> cool
[12:52] <dimitern> (but as []network.Address, more usable across package boundaries - i.e easier to test, than []*state.Address)
[12:52] <voidspace> dimitern: so has PreferredPublic|PrivateAddress been updated to use link layer device addresses?
[12:52] <dimitern> voidspace: nope, not yet
[12:52] <voidspace> dimitern: I thought the preferred addresses were still using machine/provider
[12:52] <voidspace> right
[12:52] <voidspace> cool
[12:52] <dimitern> voidspace: but I'd like to, eventually
[12:52] <voidspace> dimitern: that PR looks fine to me, pretty straightforward really
[12:53] <voidspace> dimitern: just going to read frobware's comments
[12:53] <dimitern> voidspace: they are, but on maas the LLD addresses are a better source (and have all the "metadata" along with them - i.e. subnets, dns settings, etc.)
[12:54] <dimitern> voidspace: tyvm! :)
[12:54] <voidspace> dimitern: yep, I like the change to consider LLD authoritative (i.e. to have a proper network model)
[12:54] <voidspace> dimitern: but we need to be consistent - having two sources of truth is bad :-(
[12:54] <dimitern> voidspace: yeah :/ but we'll sort that along the way in the next releases I hope
[12:55] <voidspace> dimitern: cool
[12:59] <macgreagoir> aluria: Feel welcome to ping me if you want to go over any of those differences. You'll love it! ;-)
[13:19] <hackedbellini> frobware: thanks for pointing me the bug
[13:33] <rick_h_> voidspace: ping for chat
[13:43] <rick_h_> voidspace: bailing out, will see you in standup
[13:54] <dooferlad> dimitern: https://github.com/juju/juju/pull/6476 fixes one of your annoyances if you want to take a look
[13:56] <voidspace> rick_h_: oh crap
[13:56] <voidspace> rick_h_: forgot again dammit
[13:56] <voidspace> rick_h_: was afk - see you in a few moments
[13:57] <dimitern> dooferlad: cheers - looking
[14:00] <rick_h_> voidspace: katco natefinch dimitern ping for standup
[14:10] <lazyPower> has anyone seen an issue where juju deploying a bundle yields a response message like the following:  ERROR cannot deploy bundle: cannot create machine for holding easyrsa, etcd and kubernetes-master units: cannot add a new machine: An internal error has occurred (InternalError)
[14:11] <lazyPower> this was filed here: https://github.com/juju-solutions/bundle-kubernetes-core/issues/30  -- and we've not seen anything like this before. I think it's bug-worthy on the core side, but I'm not sure what else I can add aside from this bug's output, as we didn't encounter it.
[14:24] <katco> alexisb_: rick_h_: https://medium.com/feature-request-management-in-saas/roadmaps-a-product-teams-friend-or-foe-43b1d5cdc1a6#.kqna52meq
[14:25] <rick_h_> qkaty
[14:25] <rick_h_> bah
[14:25] <rick_h_> katco: ty
[15:01]  * macgreagoir is still here. Now no need for the early EOD.
[15:08] <redir> does 1.25 series require the --upload-tools flag to upgrade jujud to a locally built version?
[15:12] <dimitern> redir: still does, just 2.0 doesn't need it
[15:13] <redir> tx dimitern
[15:18] <redir> was there a way to see the logs like --show-logs in 1.24? or did it require logging in?
[15:22] <dimitern> redir: you could try `juju debug-log` ?
[17:00] <redir> I'm a little late coming back from the dr. Trains delayed -- should be back in about 15 minutes
[17:26] <aluria> hi again -- in juju-deployer, we used to use "overrides" for config variables that existed in multiple services (now applications) -- I haven't found such an option on juju2 bundles -- would I need to repeat the same key-value on all charms that have it as config param?
[17:37] <hoenir> can anyone review my latest patches ? https://github.com/juju/juju/pull/6414
[17:38] <hoenir> https://github.com/juju/juju/pull/6464
[17:40] <rick_h_> katco: can you help look at that first one please? ^
[17:40] <rick_h_> perrito666: do you have time to look at the second one today? ^
[17:40] <natefinch> aluria: try on #juju .... they generally know bundles better than we do
[17:40] <perrito666> rick_h_: I do, I am actually waiting on something to break
[17:41] <rick_h_> aluria: sorry, the built in bundles don't support the variable substitution like the deployer does
[17:55] <perrito666> hoenir: reviewed the second, targeted to the wrong branch but otherwise seems fine
[17:56] <hoenir> perrito666, and what branch is the "dev branch" ?
[17:57] <perrito666> develop
[17:57] <hoenir> "develop" branch?
[17:57] <perrito666> I mistyped; I meant to type develop
[17:58] <hoenir> but why develop? Why did the dev workflow change? We now make PRs only on develop? Why not on master "as usual"?
[17:58] <hoenir> perrito666, could you add more details?
[17:58] <rick_h_> hoenir: sorry, we've got a new development workflow that started this week
[17:58] <natefinch> rick_h_: is the new development workflow on the wiki?
[17:59] <rick_h_> natefinch: no, only in the email and docs
[17:59] <perrito666> I was about to ask, do we have a pub... oh
[17:59] <perrito666> ok
[18:00] <hoenir> why not send an email to all contribs when the dev workflow is changing? It's really hard and somewhat annoying to "change overnight"...
[18:01] <hoenir> I will make new pr's on develop, thanks guys.
[18:01] <natefinch> it's pretty easy to move PRs, btw
[18:02] <perrito666> hoenir: no need to, you can re-target your existing one
[18:05] <hoenir> natefinch, perrito666: thanks. For the first one I changed the branch in the end, but for the second I already created a new PR and closed the older one. So I think it's ok for now.
[18:06] <perrito666> hoenir: pass me the second one so I can review/approve
[18:06] <hoenir> perrito666, right away
[18:07] <hoenir> https://github.com/juju/juju/pull/6477
[18:08] <katco> rick_h_: hoenir: i will tal at 6414 later today; trying to wrap up some changes
[18:08] <hoenir> katco, thanks !
[18:08] <hoenir> perrito666, hmm smth failed - should I rebase on master and after that rebase on develop?
[18:10] <perrito666> hoenir: no, just let me figure out how to fix what failed :)
[18:12] <perrito666> rick_h_: sinzui: do any of you remember the invocation to re-run checks?
[18:12] <rick_h_> perrito666: !!build!!
[18:12] <perrito666> and perhaps we need to make godeps run a couple of times before failing that one
[18:12] <perrito666> rick_h_: mm, I might not be in the right group of people then, it did nothing
[18:14] <perrito666> hoenir: interesting approach
[18:19] <hoenir> perrito666, thanks. The changes imply that the userdata script can be run multiple times without producing errors or changing the way jujud is installed and the environment, Windows registry entries, etc. are set up.
[18:22] <perrito666> hoenir: now we need those tests to run again
[18:22] <perrito666> rick_h_: can you try the !!build!! incantation?
[18:26] <rick_h_> perrito666: link me please, ln the phone
[18:27] <perrito666> rick_h_: https://github.com/juju/juju/pull/6477
[18:27] <hoenir> https://github.com/juju/juju/pull/6477
[18:30] <hoenir> why did the first one fail?
[18:33] <perrito666> hoenir: godeps failed due to network/server issues
[18:33] <perrito666> it is a check that runs a basic set of tests
[18:33] <perrito666> once that passes you can merge
[18:35] <hoenir> could anyone provide a link to all of the "incantations"?
[18:36] <perrito666> hoenir: there are only 2 as far as I know: !!build!! will cause these to re-run, and $$merge$$ will cause a merge (I am not sure they work for everyone though)
[18:36] <perrito666> brb
[18:55] <hoenir> perrito666: and the log? 2016-10-19 18:50:55 ERROR trusty failed with 2
[18:57] <hoenir> perrito666, never mind, I found it.
[18:58] <hoenir> http://juju-ci.vapour.ws/job/github-check-merge-juju/79/artifact/artifacts/trusty-out.log
[19:02] <hoenir> Yeah, so the scripts are valid but they don't 100% match in the unit test (added one space by mistake or smth).
[19:03] <hoenir> perrito666, https://www.diffchecker.com/k9QPGjvA fixing it now
[19:23] <hoenir> perrito666, fixed !
[19:29]  * redir lunches
[21:11]  * perrito666 does some serious open heart surgery to vsphere provider
[21:31] <katco> do we only support linux w/ the manual provider?
[21:34] <perrito666> katco: afaik
[21:34] <katco> perrito666: k just checking, ta
[21:38] <perrito666> meh I am not sure if I should refactor this code or publish it as a bad practices manual
[21:39] <katco> perrito666: refactor. here's why: http://ronjeffries.com/xprog/articles/refactoring-not-on-the-backlog/
[21:39] <katco> perrito666: just try and keep it a narrow refactoring (difficult in our codebase bc there are sticky bad things that have propagated)
[21:40] <perrito666> katco: well the code doesn't actually work, so not that narrow
[21:40] <perrito666> :p
[21:40] <katco> lol
[21:40] <katco> well then you don't have much of a choice do you lol
[21:40] <perrito666> it does the wrong thing and is written in the wrong way :p
[21:40] <perrito666> so you see my issue  :p
[21:40] <katco> perrito666: refactor. this shouldn't even be a question lol
[21:41] <perrito666> katco: still could sell very well as an oreilly reference book :p
[22:01] <katco> frobware: does this really work? https://github.com/juju/juju/blob/staging/environs/manual/addresses.go#L17
[22:02] <katco> frobware: i still don't know much about our networking stuff, but i imagined a more robust resolution framework
[22:12] <anastasiamac> I can see both $$merge$$ and !!build!! being used... is there a considerable difference between the two? is there a preference?
[22:16] <perrito666> anastasiamac: afaik build only triggers the automated tests
[22:17] <anastasiamac> perrito666: k so with !!build!!, there is no landing... we are just requesting a check that landing is possible?
[22:17] <perrito666> anastasiamac: exactly it is for cases where tests fail I presume
[22:19] <anastasiamac> perrito666: thnx \o/
[23:06] <wallyworld> menn0: i replied to your comments - care to take a look?
[23:16] <alexisb> menn0, thumper ping
[23:16] <thumper> yes?
[23:37] <thumper> wallyworld: the folks behind exploding kittens have a new game coming
[23:37] <thumper> which I have backed already
[23:37] <wallyworld> thumper: me too!
[23:37] <thumper> bearsvsbabies.com
[23:37] <thumper> :)
[23:37] <wallyworld> looks awesome
[23:37] <thumper> yeah
[23:41] <thumper> veebers: I was thinking of spending some time up the road at the cafe
[23:41] <thumper> veebers: interested in joining me?
[23:42] <wallyworld> thumper: still no coffee machine?
[23:42] <thumper> wallyworld: no :-(
[23:42] <wallyworld> ffs
[23:42] <thumper> word
[23:42] <menn0> thumper: hangout again? I've dug a bit further
[23:43] <thumper> yep
[23:44]  * perrito666 reaches new levels of not enough bandwidth
[23:44] <perrito666> 39k upload...
[23:45] <veebers> thumper: ugh, sorry, I just organised to grab some lunch with the girls (we have Kerri's sister down for Kerri's b-day) :-\ Maybe later today or tomorrow?
[23:46] <thumper> hmm...
[23:46] <thumper> I'll probably be there 1-3pm
[23:46] <thumper> ish
[23:47] <veebers> cool, I'll be in touch :-)
[23:58] <redir> what if there's no sdb for new regions? empty string?
[23:59] <perrito666> yay vsphere provider working correctly
[23:59]  * perrito666 jumps like doc brown