[00:22] <redir> seems like a lot of failures on develop currently
[01:00] <redir> I branched from staging and am proposing a merge to develop https://github.com/juju/juju/pull/6485
[01:00] <redir> real easy review
[01:00] <redir> except I don't know why I've got voidspace commits in there with mine.
[01:00] <redir> apparently they aren't in develop but are in staging
[01:02] <redir> yup they were merged directly to staging
[01:02] <redir> anyhow PR ready for review: https://github.com/juju/juju/pull/6485 PTAL
[01:04] <redir> wallyworld, anastasiamac, axw ^
[01:04] <wallyworld> ok
[01:10] <wallyworld> redir: not sure why you needed to branch from staging if you wanted to merge back into develop, anyway lgtm
[01:13] <anastasiamac> wallyworld: isn't it our current workflow? branch from staging but PR against develop?
[01:14] <wallyworld> not sure, i always just branch from develop. i've always thought it better to branch from the target to which you want to merge
[01:14] <wallyworld> otherwise you end up with unrelated commits
[01:15] <wallyworld> s/end up/can end up
[01:17] <anastasiamac> that's not the workflow tho.. this is why remote is staging
[01:18] <wallyworld> my remote is develop :-)
[01:18] <anastasiamac> special \o/
[01:18] <wallyworld> why have a remote that is different to the target of your PRs?
[01:19] <wallyworld> it just introduces skew like reed saw
[01:19] <anastasiamac> for one, there is no guarantee that what's in develop will be promoted to staging....
[01:19] <anastasiamac> staging is meant to be a stable branch
[01:19] <anastasiamac> but we have difficulties promoting to staging atm
[01:19] <anastasiamac> the idea is that a failed promotion will not land everything in develop to staging
[01:20] <redir> tx wallyworld
[01:20] <anastasiamac> so if u branch from develop, u'll have some stuff that is not stable yet
[01:20] <anastasiamac> i agree that until the wrinkles are ironed out, maybe it's worthwhile to consider branching from develop
[01:20] <anastasiamac> altho if we do that, we'll never iron out our wrinkles :D
[01:21]  * redir goes back to branching from develop
[01:21] <redir> I had started there but was seeing unexpected failures and then switched to staging
[01:22] <redir> some of the failures were intermittent and others were because LXD setup on 16.10 defaults IPv6 to on.
[01:25] <anastasiamac> redir: wallyworld: the promotion from develop to staging was meant to take only about 3hrs... however, atm, it's not happening
[01:26]  * redir eod
[01:31] <wallyworld> if you branch from develop you'll get stuff not in staging for sure, but most of the time that's what i want, especially if i'm collaborating and need to pick up someone's work as soon as it lands
[01:31] <wallyworld> otherwise i'm blocked until their work hits staging
[01:32] <wallyworld> even if it's 3 hours, that's still half a day lost
[02:03] <anastasiamac> in the situation where there are several PRs being promoted to staging but fail, u will not be able to re-submit ur PR branched from develop easily if the failure is with other PRs
[02:05] <anastasiamac> wallyworld: axw: welcome message fix as per standup: https://github.com/juju/juju/pull/6487
[02:06] <anastasiamac> PTAL at ur leisure :D
[02:06] <wallyworld> sure, otp will look soon
[02:08] <anastasiamac> hmm m not sure why i have 2 commits on it... ?? "Merge commit '7c21f4f4a09f727601fdce45cbd0230063f7f3a3' into HEAD "
[02:40] <axw> anastasiamac: did you branch off master instead of staging perhaps?
[02:40] <axw> anastasiamac: that's what I got when I did that
[02:41] <anastasiamac> axw: i branched off staging.. but it's so far behind... m wondering if i should re-branch and re-propose against develop?
[02:41]  * axw shrugs
[02:42] <anastasiamac> axw: well, i guess my question is if i'll $$merge$$ on this PR (once lgtm'ed), will this commit hurt?
[02:43] <mup> Bug #1493118 changed: Subordinates stuck in error state <juju-core:Won't Fix> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1493118>
[02:43] <mup> Bug #1566450 changed: Juju claims not authorized for LXD <bootstrap> <ci> <intermittent-failure> <lxd> <juju:Triaged> <https://launchpad.net/bugs/1566450>
[02:43] <mup> Bug #1629919 changed: destroy-controller fails and a kill-controller is required. <juju-core:Invalid> <https://launchpad.net/bugs/1629919>
[02:43] <axw> anastasiamac: I don't think it matters. it's just the merge commit. the child commits are already in develop AFAICT
[02:44] <axw> might look a little ugly in history that's all
[02:45] <anastasiamac> axw: i can't imagine anyone interested in history on this one... but for future PRs, i'll b kinder to posterity and future us :D
[02:53] <anastasiamac> wallyworld: tyvm for review \o/ I've addressed/replied to all... happy for it to merge?
[02:53] <wallyworld> let me look
[02:54] <wallyworld> anastasiamac: there's no use of template
[02:55] <anastasiamac> wallyworld: m expecting in the long run to have a manual page instead
[02:55] <wallyworld> using templates is much better than printf with the same value repeated many times
[02:56] <anastasiamac> wallyworld: it's a temporary, bandage solution...
[02:56] <wallyworld> ok
[02:57] <wallyworld> lgtm then
[02:58] <anastasiamac> \o/ i'll consider template after lunch unless something else'll come up :)
[03:19] <anastasiamac> wallyworld: to clarify, coz i've seen u talking about versions on openstack endpoints this week, do openstack endpoints have to have a version?
[03:20] <anastasiamac> axw: ^^ if u know...
[03:20] <wallyworld> yep
[03:20] <anastasiamac> m looking to pick up bug 1634770 and wondering what the right thing to do would b..
[03:20] <mup> Bug #1634770: panic when bootstrapping with openstack provider if endpoint omits api version <bootstrap> <openstack-provider> <juju:Triaged> <https://launchpad.net/bugs/1634770>
[03:20] <wallyworld> they all have a version in the url
[03:20] <wallyworld> v1 or v2.1 etc
[03:20] <anastasiamac> k.. so instead of panic, i'll just error :D
[03:21] <axw> anastasiamac: it would be better if we did neither, and queried supported versions
[03:21] <wallyworld> the panic would be because our code expects a version in the supplied url
[03:21] <wallyworld> right now, we expect the user to know
[03:21] <wallyworld> what version to specify
[03:21] <wallyworld> but andrew is right, querying would be better
[03:22] <anastasiamac> axw: wallyworld: i could probably return supported/available versions as part of error
[03:22] <anastasiamac> when none is supplied as per bug..
[03:22] <wallyworld> we do the query for identity for example to know if domain is supported
[03:22] <wallyworld> we never used to do that originally
[03:23] <wallyworld> the new goose code will query for versions
[03:23] <wallyworld> it may be that for cloud endpoints, version becomes optional
[03:23] <wallyworld> if none supplied, use the latest perhaps
[03:26] <anastasiamac> use the latest and log Info msg that none was supplied and latest selected, for e.g.?
[03:28] <axw> anastasiamac: yes, that would be ideal I think
[03:28] <anastasiamac> \o/ awesome!
[03:28] <axw> anastasiamac: maybe just log as debug, I'm not sure that it's that interesting
[03:29] <anastasiamac> axw: u don't think that users might need to know? i'd imagine not many would run with debug on
[03:29] <axw> anastasiamac: juju should just do the right thing. I don't think there's a reason for the user to care which version of the identity API we use
[03:30] <anastasiamac> axw: k. i'll do debug.. we can always change if needed :)
[03:30] <axw> yup
[03:40] <anastasiamac> wallyworld: changed to use template \o/ made vars obviosu as well! i think it looks even better now :)
[03:40] <anastasiamac> wallyworld: still k to land from u?
[03:41] <wallyworld> sure
[03:41]  * wallyworld goes to look
[03:42] <wallyworld> template much easier to understand. thanks
[03:42] <wallyworld> but is only temporary as you say
[03:54] <anastasiamac> sure but i thought since our *temporary* may last longer than usual "temporary", i'd better do the right thing :)
[04:04] <anastasiamac> axw: wallyworld: there is no way to find latest identity version from api. m going to hard-code it to 3 as we've done everywhere else..
[04:06] <wallyworld> be careful, v3 is not universally supported IIANM
[04:07] <wallyworld> i think we hard code to v3 when domain is specified in credentials
[04:08] <wallyworld> we do have a way to get identity version
[04:08] <wallyworld> FetchAuthOptions
[04:08] <anastasiamac> wallyworld: yep looking at it now ;)
[04:11] <anastasiamac> it's not actually ideal as it does not tell you what is latest... it just tells u what's available...
[04:12] <anastasiamac> i guess i could just error out and force user to supply version for now
[04:13] <anastasiamac> and once we do have legitimate identity endpoints without versions, we'll update openstack... u know, like on a per-need basis :)
[04:19] <mup> Bug #1386284 changed: no warning when tools-metadata-url is misconfigured <bootstrap> <ci> <logging> <simplestreams> <juju-core:Expired> <https://launchpad.net/bugs/1386284>
[04:19] <mup> Bug #1538462 changed: simplestreams debug content is useless (juju bootstrap --debug) <logging> <simplestreams> <juju-core:Expired> <https://launchpad.net/bugs/1538462>
[04:28] <wallyworld> isn't the latest the one with the highest version number?
[04:40] <axw> it's not every day that a house goes through the air over your head
[04:41]  * axw stops hyperventilating
[05:20] <wallyworld> axw: wtf happened?
[05:21] <axw> wallyworld: heh :)  neighbour's second storey is going on
[05:21] <wallyworld> via a huge crane?
[05:21] <axw> I'll take a photo when the next module goes on
[05:21] <axw> wallyworld: yup
[05:21] <axw> wallyworld: and said crane took the first module pretty much over my head
[05:21] <wallyworld> cool
[05:21] <wallyworld> i'd be outside looking
[05:22] <axw> i went and had a gander. they're just getting ready atm
[05:22] <axw> for the next one
[05:22] <wallyworld> i hope their slings and ropes etc are strong :-)
[05:22] <axw> :)
[05:52] <anastasiamac> wallyworld: from what I am seeing you do not get version numbers from FetchAuthOptions but auth mode. the translation from version id to auth mode is happening within goose
[05:52] <anastasiamac> so by the time we get it we cannot figure out the latest available
[05:52] <anastasiamac> unless we hardcode something
[05:52] <anastasiamac> so m going to just error for now...
[05:53] <anastasiamac> the problem is that our logic in version deduction was a bit error-prone
[05:53] <anastasiamac> m dealing with test failures atm
[05:53] <anastasiamac> then will propose
[05:54] <wallyworld> goose will be gaining some logic to get endpoint versions without the post processing, we can use that when available
[05:55] <anastasiamac> exactly \o/ for now i'll just error out
[05:55] <anastasiamac> i believe that this was the intent of the code m changing... but it's buggy
[05:55] <anastasiamac> u'll c what i mean when i propose
[06:03] <axw> wallyworld: ok, going back to work now... https://goo.gl/photos/ZPmdqKnRv5aJtqp76
[06:04] <axw> (the one with the shabby lawn is my place)
[06:05] <wallyworld> axw: jeez, whatever happened to the brickie and chippie turning up on site with their utes and cattle dogs and a Chiko roll
[06:05] <axw> wallyworld: heh :)  plenty of guys standing around doing nothing at least
[06:05] <wallyworld> ah so they work for the council then
[10:42] <mgz> my internet has just been terrible this morning...
[11:23] <dimitern> mgz: can you +1 https://github.com/juju/juju/pull/6481 if you think it's good to land please?
[11:28] <mgz> dimitern: sure, I'll take a look
[11:28] <dimitern> mgz: cheers!
[11:35] <mgz> hm, I'm not sure about exposing the force of v1 all the way up to an environment variable
[11:35] <dimitern> mgz: it's mostly harmless anyway
[11:35] <dimitern> mgz: but it does make testing easier, without introducing unnecessary patching / global vars
[11:35] <mgz> yeah, I agree it's unlikely to hurt us, but really we just want that for the unit tests
[11:36] <mgz> as the right way to functional test is with different juju versions
[11:40] <dimitern> mgz: yeah, you're not wrong :) I should try adding one
[11:40] <mgz> okay, this all makes sense to me
[11:40] <mgz> to check I'm getting the important bits right
[11:40] <dimitern> sweet!
[11:41] <mgz> basically this is some of the way towards what we do on bootstrap ssh, for all juju ssh calls
[11:41] <mgz> previously we'd just ask the api for an address (public or private depending on context) and try to ssh to that
[11:41] <mgz> now the code gets every address the machine claims to have
[11:41] <dimitern> and *only* that
[11:42] <mgz> and does some inspecting of them, then hands one off that it reckons will actually work
[11:42] <dimitern> yeah, but this approach is even better (faster) than the one used during bootstrap
[11:42] <mgz> which isn't quite what bootstrap does
[11:42] <mgz> (which is actually try to ssh to every address in parallel and see what happens)
[11:42] <dimitern> yeah, hands off the one it did connect to successfully
[11:43] <mgz> dimitern: okay, +1ed
[11:43] <dimitern> it does, but the timeout used for the parallel.Try during bootstrap is appalling (10m IIRC)
[11:43] <mgz> well, for good reason in that context
[11:43] <mgz> as we're waiting for the remote machine to actually get stuff done as well
[11:44] <mgz> so, it's not 10m of network timeout, it's 10m of please start your ssh server
[11:44] <dimitern> which means if you hit a blackhole route *first* you'll sit there waiting for ssh to come back.. after 10m
[11:44] <mgz> but yeah, it's all rather messy
[11:44] <dimitern> mgz: awesome! thanks :)
[11:45] <dimitern> I've seen it break with some unfortunate iptables rules set
[11:46] <dimitern> but also parallel.Try in general assumes the func you pass to run in parallel won't block forever
[11:47] <dimitern> and I've seen ssh doing that if given an address matching an OUTPUT -j DROP iptables rule
[11:47] <dimitern> let's fix one thing at a time I suppose.. :)
[11:48] <dimitern> mgz: does the $$fixes-BUG_ID$$ thing still work as before or I got to use $$merge$$ ?
[11:50] <mgz> you might have to use $$merge$$ now, I'm not sure my little hack made it across
[11:50] <dimitern> mgz: it does work it seems
[11:51] <dimitern> mgz: however ... http://paste.ubuntu.com/23358843/
[11:51] <dimitern> mgz: you might want to fix that :)
[11:51] <dimitern> [develop: command not found
[11:51] <mgz> heh, that's new
[11:53] <mgz> or perhaps not, as it's not actually causing the run to fail
[11:53] <dimitern> well now :) for a change I actually feel I accomplished something useful this week \o/
[11:54] <dimitern> mgz: nope - it's not causing it to fail but might not stop people trying to land stuff via github-merge-juju directly in staging
[11:54] <dimitern> (AIUI)
[11:54] <mgz> hm, that's the code block that's aimed at stopping people landing directly on master
[11:54] <mgz> and it's borked
[11:54] <mgz> so...
[11:54] <dimitern> :D
[11:55]  * dimitern steps out for ~1h
[11:56] <mgz> okay, fixed, people can no longer land directly on staging :)
[12:49] <mup> Bug #1635622 opened: 'juju ssh <unit> ...' fails with Permission denied (publickey), for only one or two machines in a deployment <juju-core:New> <https://launchpad.net/bugs/1635622>
[13:35] <anastasiamac> frobware: wallyworld: axw: PTAL https://github.com/juju/juju/pull/6488
[13:38] <wallyworld> anastasiamac: lgtm
[13:46] <dimitern> mgz: the windows vm is out of space again I presume
[13:46] <mgz> ah, the gating job chucked you out on windows tests?
[13:46] <dimitern> mgz: my fix failed twice on windows so far, with increasingly weird errors :)
[13:47] <mgz> I'll have a look
[13:51] <mgz> hm, first one windows tests pass, failed trying to get a trusty instance
[13:51] <mgz> second windows tests failed in a pretty typical intermittent failure manner
[13:52] <mgz> third we're into bad local connection weirdness, but nothing obviously saying oom vs other unhappiness
[13:56] <dimitern> weird..
[13:57] <mgz> we can try restarting that machine and running again
[13:59] <dimitern> mgz: +1
[14:00] <rick_h_> dimitern: ping for standup
[14:00] <dimitern> omw
[14:03] <mgz> oh, and there goes google dropping me
[14:04] <mgz> so, if I'm vanishing from hangout, my internet is just terrible
[14:06] <mgz> well, I can hear you all on the hangout, but can't get my audio through?
[14:07] <mgz> and sometimes not text as well it seems
[14:07] <mgz> anyway, my update on cards, have code reviewed by curtis and good to land, some small fixes to make and some unit test coverage to re-add
[14:07] <mgz> then I have some more maas setup work to do
[14:08] <dimitern> mgz: can't hear you :/
[14:08] <mgz> yeah, net today has been just about good enough for irc and ssh session, but dropping packets everywhere
[14:13] <dimitern> mgz: now the lxd and the windows vms for the merge job seem to be getting worse
[14:16] <mgz> sinzui: ^do you have a particular procedure on trying to keep these things working? I remember your mail to nicholas about the issues here.
[14:18] <sinzui> mgz: I restart the machine when it is irrational. But ssh drops from an AMI instance is another matter. we are not running race tests on the gating job because ssh consistently drops from the *new* juju-core-slave, but not the old one.
[14:46] <dimitern> chrome0: ping
[14:54] <chrome0> dimitern : Hola
[14:55] <dimitern> chrome0: hey! I'm still trying to reproduce bug 1589680.. no luck so far though, it seems lxc-templates has been part of Recommends for lxc since 0.8.0
[14:55] <mup> Bug #1589680: Upgrading to cloud-archive:mitaka breaks lxc creation <canonical-bootstack> <juju-core:Triaged> <juju-core 1.25:In Progress by dimitern> <https://launchpad.net/bugs/1589680>
[14:56] <dimitern> chrome0: which happens to be the lxc version in trusty/main still
[14:57] <dimitern> chrome0: juju doesn't specify package source when installing lxc, so it will get the most preferred, which is 1.0.8 from trusty/updates
[14:59] <dimitern> chrome0: I bootstrapped trusty with 1.25.6 and added 1 lxc, then installed lxc from trusty-backports (2.0.4) and again did juju add-machine lxc:0 -- no issues or errors I can see
[14:59] <chrome0> dimitern : Hm, we have trusty-updates on the machines this happened on too, and likely had it then as well
[15:00] <dimitern> chrome0: is it possible lxc-1.0.3 was originally installed?
[15:00] <natefinch> gah, we need better documentation on oauth1 vs oath2 and when you use one or the other
[15:01] <dimitern> chrome0: before the mitaka upgrade? (can't see how though.. except manually)
[15:01] <natefinch> a lot of our docs just say "oauth" which is Not A Thing™ in juju
[15:02] <chrome0> dimitern : I can't say for sure anymore, but am reasonably certain that we didn't manually upgrade lxc
[16:03]  * rick_h_ grabs lunchables
[17:08] <rick_h_> katco: pinkg
[17:08] <rick_h_> ping tha tis
[17:08] <rick_h_> bah, /me blames tools for typing issues *bad keyboard, bad!*
[17:11] <katco> rick_h_: lol hey
[17:11] <rick_h_> katco: got a sec to chat real quick on dev workflowy bits?
[17:11] <katco> rick_h_: sure
[17:11] <rick_h_> meet you in ?core please
[17:13] <natefinch> man I hate that all our configs use maps of name : object, rather than a list of object with a property that is the name.  it makes parsing like 10 times more difficult.
[17:19] <katco> natefinch: i.e. map[string]interface{} vs. struct?
[17:20] <natefinch> katco: yeah
[17:20] <natefinch> well, vs []struct
[17:21] <katco> natefinch: hm, i don't understand that part. not having the config as one large struct?
[17:23] <rick_h_> katco: hmm, actually...in this case we don't need master any more.
[17:23] <natefinch> it makes the name not part of the value, it's the key...  it's also then an extra layer of indirection, an extra layer of indenting in the config
[17:23] <rick_h_> katco: because any hotfix would be against the support branches, never master
[17:23] <katco> rick_h_: release-branches are cut from staging?
[17:24] <rick_h_> katco: yea, so rather than making a new release by merging staging->master and then creating a new support branch from master
[17:24] <rick_h_> katco: it would just be to create a new release by creating the new support branch
[17:25] <rick_h_> katco: and skip that master middle man
[17:25] <katco> rick_h_: i thought the purpose of master was to always be releasable? e.g. staging -> master is our opportunity to run the full CI suite?
[17:25]  * redir wonders if we'll wind up with git-flow -- with different names
[17:25] <rick_h_> katco: no, master's job was to be a place to perform hotfixes that only had one PR from the last release
[17:25] <rick_h_> redir: lol, almost, except git-flow doesn't have the idea of dual test runs in it
[17:26] <katco> rick_h_: where do we run our full CI suite w/ no master?
[17:26] <rick_h_> katco: develop->staging
[17:26] <rick_h_> katco: right now to get from develop->staging you need a bless
[17:26] <katco> rick_h_: oh, it's PR->develop that is the partial isn't it
[17:26] <rick_h_> katco: right
[17:27] <katco> rick_h_: then yes, we don't need master; but i would get rid of staging instead since master is such a common thing with git
[17:27] <rick_h_> katco: +1, just speaking in current terms so we follow what we're saying
[17:28] <redir> ok master and develop check
[17:28] <redir> hotfixes, check
[17:28] <rick_h_> redir: where do you think I got those from :P
[17:28] <redir> :)
[17:28] <redir> captain we need more power
[17:28]  * redir is missing the dual test runs
[17:28] <rick_h_> redir: so the one plus is that develop->master is automated based on bless
[17:28] <rick_h_> redir: on 1.25? or some other way?
[17:28] <redir> like picture in picture on tv
[17:28] <redir> never saw a use
[17:29] <rick_h_> redir: ? /me isn't following
[17:29] <redir> missing as in doesn't know -- is ignorant
[17:29] <redir> ignore me you were making progress
[17:29] <rick_h_> redir: oic, 30min test run vs 3hr test run
[17:30] <katco> rick_h_: redir: in my head, someday there will be no 30min test run, only a sub-minute one and then CI tests
[17:30] <redir> one to merge to develop and the other for merging to master
[17:31] <redir> I thought the 30 minute one was the CI tests
[17:31] <katco> redir: it's just a subset of our test suite, unfortunately
[17:31]  * redir is curious where the sub minute test run lives:)
[17:32] <katco> redir: in the future! when we've converted our suites to actual unit tests :)
[17:32] <redir> ahhh I wan't in the time machine to the future
[17:32] <rick_h_> :) in a dream land where you don't actually talk to a db in a unit test
[17:32] <redir> s/wasn't
[17:32] <katco> rick_h_: that's not a dream!
[17:32] <rick_h_> katco: :)
[17:33] <katco> rick_h_: e.g. the deploy command is now fully ready to be converted to unit tests. completely in-memory
[17:33] <redir> +1 I think of that as reality and not that as a bad dream
[17:33] <rick_h_> katco: <3
[17:33] <katco> rick_h_: and some of the tests have already been converted as examples
[17:34] <mgz> rick_h_: oh, misc comment, it seems the flag to limit pre-testing to certain users is not actually turned on?
[17:34] <mgz> so, anyone proposing a branch against develop gets their code run on our setup
[17:35] <mgz> I can turn it on and see if it works?
[17:35] <redir> So I came over here to exit because DNS keeps failing. Rebooting everything from the modem in :(
[17:35] <rick_h_> redir: there's a known big ddos attack going on
[17:35] <redir> oh
[17:35] <redir> the iot one?
[17:36] <mgz> yeah, it's not you redir
[17:36] <katco> redir: possibly
[17:36] <redir> OK.
[17:36] <katco> http://money.cnn.com/2016/10/21/technology/ddos-attack-popular-sites/index.html
[17:36] <mgz> though confusingly my internet is just terrible as well today
[17:36] <katco> yeah it's turning out to be a weird day
[17:36] <redir> I guess irclogs.ubuntu.com is a popular site:|
[17:36] <rick_h_> https://news.ycombinator.com/item?id=12759697
[17:36] <rick_h_> redir: heh
[17:36] <natefinch> it's everybody
[17:37] <redir> OK not rebooting everything.
[17:37] <redir> just the router and I'll use some non major DNS
[17:37] <redir> because it is DNS for me
[17:37] <redir> names not resolving
[17:38] <natefinch> yeah, there's a DNS service down and it's screwing up a TON of sites
[17:38] <redir> oh dyndns
[17:38] <natefinch> yep
[17:38] <redir> Saw they were down
[17:39] <redir> thanks for the intervention
[17:39] <redir> friends don't let friends reboot
[17:39] <redir> unnecessarily
[17:46] <katco> this is rather unprecedented isn't it?
[17:46] <natefinch> I don't remember anything like this, no.
[17:47] <katco> i mean i've seen plenty of ddos against a site or two
[17:47] <natefinch> I think @FiloSottile said it best: Take this intuition. Now ask, "why doesn't my DNS resolver just remember the IPs that worked 1h ago?" NO GOOD REASON
[17:47] <katco> but this seems like it's affecting a lot of critical sites/infrastructure
[17:48] <natefinch> And: Any website would take stale IPs over downtime. Nobody relies on fast DNS changes anyway, everything is cacheable (➡DNS is bestcase) </rant>
[17:49] <natefinch> well, hopefully after this, the DNS companies will get their act together and start planning for this sort of thing
[17:52] <katco> the timing with the kernel privilege escalation is interesting too
[17:55] <natefinch> lol, this is not the Russ Cox I was looking for: https://aussiecriminals.com.au/high-profile-criminals/russell-mad-dog-cox/
[17:55] <katco> haha
[17:56] <natefinch> just realized goimports doesn't support -s and now I have a sad
[18:04] <redir> I once made a dns mistake (forgot a leading dot) and so the reply was with the wrong IP. That IP led to an http server's default site. I used long TTLs then because it shouldn't change often, so 86400. Google happened to crawl it before the correction propagated. So google returned the wrong results for another 7 days.
[18:05] <redir> I started using short DNS TTLS after that
[18:05] <natefinch> haha yeah
[18:06] <redir> I'll give you one guess whose "business card" site was the default site.
[18:06] <redir> There was a lot of telephone ringing that week
[18:06] <redir> not the good kind
[18:07] <katco> ohhhh shit... the fan in my main computer sounds like its bearing just went out
[18:08] <katco> god wtf friday
[18:08] <katco> sigh brb
[18:08] <katco> (hopefully)
[18:08] <redir> luck
[19:38] <natefinch> gah, turning jsonschema into a generic UX is really really hard
[19:38] <natefinch> er generic interactive UX
[19:40] <rick_h_> natefinch: let me know if there are any particular pain points you want to brainstorm on
[19:41] <natefinch> I think it's ok, there's just a huge matrix of interactions that exist.... for now I'm treading a narrow path of supporting only what I know we need to support... but it means there's a lot of edge cases that won't work if we want to use this as a generic "throw anything at it and it'll work" library (which was basically the whole point of writing it in the first place)
[19:42] <rick_h_> natefinch: that's ok though. one stone at a time.
[19:43] <rick_h_> as long as we stick to the types we need, but focus on supporting the type vs our need, we'll be in good shape imo
[19:44] <natefinch> yep.  Basically, I wrote the schema for openstack, now I'm writing the UI code to handle that particular schema, while doing my best to add checks to properly error out if someone deviates from the supported schemas, instead of just doing the wrong thing.
[19:46] <natefinch> openstack just adds three new ideas - arrays, an enum of values, and an object that is a map of name to object (regions).
[19:47] <perrito666> oh, this is where we complain?
[19:47] <natefinch> haha
[19:47] <natefinch> where else? :)
[19:47] <perrito666> I have been fixing tests for 2 days that basically require me to reverse engineer what a provider is doing in terms of api calls
[19:47] <natefinch> the rest of the internet is broken ;)
[19:47] <perrito666> speak for yourself, internet works here, its just crazy slow
[21:24] <perrito666> ghaaa, kidding me? I added errors.Trace to the code and the tests break? that was nasty