[00:30] <davecheney> thumper: ping, i'm free to talk whenever you are
[00:31] <katco> wallyworld: https://github.com/juju/juju/pull/513 :)
[00:31] <katco> wallyworld: going to eat dinner, i'll check for a LGTM in a bit and hopefully land it!
[01:00] <wallyworld> katco: awesome. for the 1.20 update, you will need to ensure you use the 1.20 branches of the relevant sub repos ie you will need to update their imports and commit the changes and then use the relevant rev shas
[01:06] <axw> o/ wallyworld
[01:06] <axw> how was nuermberg?
[01:07] <axw> nuern nurem... equal edit distance
[01:18] <wallyworld> axw: hey ya. good but busy. we gots a lot to do. got time for a chat?
[01:18] <axw> wallyworld: sure
[01:19] <wallyworld> see you in standup meeting
[01:43] <thumper> davecheney: around?
[01:48] <davecheney> yeah
[01:48] <davecheney> thumper: sup
[01:48] <thumper> chat?
[01:49] <davecheney> thumper: sure, lets meet in the 1-1 hangout
[01:49] <thumper> kk
[01:51]  * thumper is there waiting for davecheney
[02:06] <wallyworld> thumper: did you want to delay our 1:1
[02:06] <thumper> wallyworld: yes
[02:06] <wallyworld> ok, just ping me
[02:36] <thumper> wallyworld: now?
[02:36] <wallyworld> sure
[02:44] <davecheney> lucky(~/src/github.com/juju/juju) % gb ./...
[02:45] <davecheney> juju/sockets/sockets.go:6:2: no buildable Go source files in /home/dfc/src/gopkg.in/natefinch/npipe.v2
[02:45] <davecheney> anyone else getting this error ?
[02:49] <davecheney> wallyworld: thumper axw https://github.com/juju/juju/pull/514
[02:49] <davecheney> review please
[02:49] <davecheney> this is blocking my build
[02:50] <wallyworld> davecheney: otp, will look soon
[02:50] <axw> eh, I thought it was updated to add a file for linux
[02:50]  * axw checks
[02:51] <davecheney> i ran godeps and it didn't complain
[02:51] <davecheney> i believe I have the right rev
[02:51] <davecheney> seriously
[02:51] <axw> davecheney: I think you just need to merge trunk again, I think there's a new rev
[02:51] <davecheney> this isn't the right way to fix the problem
[02:51] <davecheney> axw: trunk of which repo ?
[02:52] <axw> npipe
[02:52] <axw> um
[02:52] <axw> merge trunk of juju
[02:52] <axw> there's a new rev of npipe in dependencies.tsv
[02:53] <davecheney> gopkg.in/natefinch/npipe.v2     git     e562d4ae5c2f838f9e7e406f7d9890d5b02467a9
[02:53] <davecheney> that is what it says
[02:53] <davecheney> and that is the rev I have
[02:54] <davecheney> I FUCKING HATE GOPKG.IN AND THE INTERACTION WITH GODEPS
[02:54] <davecheney> I HATE IT SO MUCH
[02:54] <davecheney> IT MAKES ME MISERABLE
[02:54]  * thumper chuckles 
[02:55] <axw> davecheney: what's the problem exactly? I'm not having any problems with godeps
[02:55] <davecheney> we have two tools that are trying to do the same thing, pin a dependency of a go package
[02:56] <davecheney> i think it is a serious error to combine those two tools
[02:56] <thumper> davecheney: I don't think it is too terrible, just annoying
[02:57] <thumper> one seems to pin (vaguely) the api, the other pins to commits
[02:59] <wallyworld> davecheney: did you see all the syslog stuff being done? are you happy with it? i thought at one point you may have had an opinion on it? maybe i am misremembering?
[03:00] <davecheney> wallyworld: as in moving away from rsyslog and doing file rolling inside the application ?
[03:00] <wallyworld> davecheney: that and there was talk of using Go's syslog, not sure if that went ahead or not
[03:00] <davecheney> don't use that package
[03:00] <davecheney> it's fucked
[03:00] <davecheney> we can't remove it
[03:00] <davecheney> but it's still fucked
[03:00] <wallyworld> yup
[03:01] <davecheney> i also am not in favor of doing file rolling inside the application
[03:01] <davecheney> from a decade of sysadmin experience
[03:01] <wallyworld> yep
[03:01] <davecheney> that is a sure fire way to lose logs
[03:01] <wallyworld> davecheney: can you reply to the thread on juju-dev, or email nate directly with your concerns?
[03:01] <davecheney> wallyworld: will my comments be given serious consideration
[03:02] <davecheney> i've not seen a lot of that in the past
[03:02] <davecheney> i know that is a snotty thing to say
[03:02] <davecheney> but i'm not going to kick up a stink unless there is an actual possibility of change
[03:02] <davecheney> otherwise i'll save everyone the discomfort of hearing me rant
[03:02] <wallyworld> they will be taken seriously by me, and i'll tell whomever else i talk to :-)
[03:02] <wallyworld> don't rant as such, just provide facts
[03:03] <wallyworld> and i'll make sure nate and whoever else takes note
[03:03] <davecheney> here are the facts
[03:03] <davecheney> 1. the state/api/database/mongo whatever runs on linux
[03:03] <davecheney> it will continue to run on linux for the foreseeable future even if we have windows workloads
[03:03] <davecheney> we should continue to use rsyslog for log collection
[03:03] <davecheney> as it decouples the application from log rolling
[03:03] <davecheney> applications write to stdout
[03:04] <davecheney> something else takes care of where that stdout goes
[03:04] <davecheney> if you make the application responsible for rolling its own logs
[03:04] <davecheney> you'll generally find that you end up writing to a log file that has been unlinked
[03:04] <axw> I did comment on the PR about the approach to rolling being a bit broken, but I don't see any change in nate's new PR
[03:05] <wallyworld> i agree. can you put that in an email to the list? feel free to mention your previous sysadmin experience to add weight to your arguments
[03:05] <davecheney> wallyworld: http://12factor.net/logs
[03:05] <davecheney> ^ this is what we should be aiming for
[03:05] <davecheney> wallyworld: i honestly don't think i can be objective at this point
[03:05] <davecheney> i've given you what I can
[03:06] <davecheney> i don't want to take on the heartache of another losing battle
[03:06] <wallyworld> hmmm, ok. i would prefer you provided the raw material (based on your experience) - we can run with it from there
[03:06] <wallyworld> ie fight the battle
[03:06] <wallyworld> but i can do it also
[03:07] <davecheney> wallyworld: which thread ?
[03:07]  * wallyworld looks
[03:07] <davecheney> i can't find it
[03:07] <davecheney> if it's on juju-team
[03:07] <davecheney> i'm not subscribed to that list
[03:07] <davecheney> or i should say, my subscription is still pending
[03:08] <wallyworld> juju-dev - getting rid of all-machines.log
[03:08] <wallyworld> but we could do a new post referencing that thread
[03:09] <axw> also: https://github.com/juju/juju/pull/375, superseded by https://github.com/juju/juju/pull/512
[03:23] <axw> wallyworld: that big test was just moved. I'd rather not change existing things too much at this point
[03:23] <axw> ah you noticed :)
[03:23] <wallyworld> axw: yeah, just made a followup comment :-)
[03:25]  * axw tests it once more for good luck
[03:38] <axw> wallyworld: can you please review https://github.com/juju/testing/pull/28 too?
[03:38] <wallyworld> sure
[03:41] <axw> wallyworld: btw, to test with mongo 2.6 you can just do "JUJU_MONGOD=path/to/mongod go test ./..."
[03:41] <axw> will not work in CI, though; that would require additional changes
[03:41] <wallyworld> axw: np, thanks. there was a test CI job I started a while back
[03:42] <axw> cool
[03:42] <wallyworld> but i never finished it
[07:39] <TheMue> morning
[07:41] <dimitern> morning TheMue
[07:42] <dimitern> I thought jam is here today..
[07:52] <TheMue> dimitern: IMHO he is on holiday the whole week
[07:52] <TheMue> dimitern: oh, no, just took a look into the calendar
[07:54] <dimitern> TheMue, yeah, maybe he'll pop in at some point today
[07:56] <TheMue> dimitern: yep
[07:57] <TheMue> dimitern: so coming back to my change of SupportsNetwork and its usage
[07:57] <dimitern> TheMue, how's that capability branch for the networker going?
[07:57] <dimitern> TheMue, :)
[07:57] <TheMue> dimitern: bingo
[07:57] <waigani_> axw: https://github.com/juju/juju/pull/518
[07:58] <TheMue> dimitern: the capability is currently already implemented, imho only with the wrong naming
[07:58] <TheMue> dimitern: but is this enough
[07:59] <TheMue> dimitern: and then use it where today we test if the providerType is MAAS?
[08:04] <axw> waigani_: looking
[08:06] <axw> waigani_:  :x a much simpler solution has just dawned on me
[08:07] <axw> waigani_: we could have just got the bootstrap instance and used its Addresses method...
[08:07] <dimitern> TheMue, there's a new capability needed, let's call it AllowOnlySafeNetworker(machineId string) bool
[08:07]  * axw continues looking anyway
[08:08] <dimitern> TheMue, which will return false for all providers, except local
[08:09] <dimitern> TheMue, and for local it will check internally if the machineId == bootstrapMachineId (or however it's called) and return true (i.e. do not allow unsafe networker on the bootstrap node) or false (for all other machines)
[08:10] <TheMue> dimitern: ah, I was already wondering. this description better matches what I’ve seen in the discussion, thanks.
[08:11] <TheMue> dimitern: as we call them mostly Supports… (reads better) I would call it SupportsUnsafeNetworker(machineId)
[08:12] <dimitern> TheMue, I'm fine with that (in this case the meaning is inverted - true for most, false for local bootstrap node)
[08:13] <TheMue> dimitern: I’m also not happy with Networker and SafeNetworker, which implies the first one is „unsafe“ ;)
[08:13] <axw> waigani_: commented on the PR
[08:14] <dimitern> TheMue, well it is kinda unsafe :)
[08:14] <dimitern> waigani_, the bootstrap node can have and does have more than one address usually
[08:20] <waigani> axw: you horrible man
[08:20] <axw> waigani: :(
[08:21] <axw> waigani: on the plus side, the alternative is really quick to implement :p
[08:21] <waigani> axw: lol, no it's a good solution - I raised an eyebrow when I came across Addresses, but assumed we could not use it for all providers
[08:41] <waigani> axw: will the bootstrap instance have  Id() == "0" ?
[08:42] <axw> waigani: no
[08:42] <axw> that's machine ID
[08:42] <axw> instance ID you cannot predict
[08:42] <axw> waigani: just call AllInstances() and get the one and only element out of it
[08:43] <waigani> axw: okay, so we only bootstrap one instance? When do the HA instances get bootstrapped?
[08:43] <axw> waigani: bootstrap = start the first instance
[08:43] <axw> HA comes later, when you do "juju ensure-availability"
[08:44] <waigani> right
[08:47] <voidspac_> dimitern: TheMue: just noticed your conversation
[08:47] <voidspac_> dimitern: TheMue: shouldn't "safe networking" be true for manual provider too
[08:48] <dimitern> voidspac_, yes
[08:48] <voidspac_> dimitern: TheMue: and I agree "safe" is not a brilliant name
[08:48] <voidspac_> but I dislike burning too much energy on name bikeshedding
[08:48] <voidspac_> pick something and use it
[08:48] <dimitern> voidspac_, for all machines with the manual provider, and only for the bootstrap node with local
[08:48] <voidspac_> yep
[08:49] <axw> don't forget machines can be manually added to a non-manual provider environment
[08:49] <axw> in which case provider != "manual"
[08:49] <voidspac_> axw: oh, ouch
[08:49] <axw> probably don't want to muck with their networking either
[08:49] <voidspac_> dimitern: the unit test failures are all uniter tests
[08:49] <voidspac_> dimitern: are you / is anyone looking at them?
[08:50] <dimitern> voidspac_, which failures?
[08:51] <voidspac_> dimitern: the ones jam emailed us about yesterday
[08:51] <voidspac_> dimitern: CI failures
[08:51] <TheMue> axw, dimitern: so manual || local bootstrap || (non-manual && manually added)?
[08:51] <voidspac_> dimitern: you said you would look at them :-)
[08:51] <voidspac_> http://juju-ci.vapour.ws:8080/job/run-unit-tests-precise-amd64/1459/console
[08:51] <dimitern> axw, good point; TheMue you should check for manually provisioned machines and not allow unsafe networker there as well
[08:51] <voidspac_> http://juju-ci.vapour.ws:8080/job/run-unit-tests-precise-amd64/1458/console
[08:51] <voidspac_> http://juju-ci.vapour.ws:8080/job/run-unit-tests-precise-amd64/1457/console
[08:51] <voidspac_> http://juju-ci.vapour.ws:8080/job/run-unit-tests-precise-amd64/1456/console
[08:51] <dimitern> axw, what's the best way to tell if a machine is manually provisioned by its id?
[08:52] <dimitern> voidspac_, aw, sorry, I'm struggling with networker refactoring, will have a look before the standup
[08:52] <axw> dimitern: from what context? you can tell from the state.Machine
[08:52] <voidspac_> dimitern: I can have a look, although maybe not before standup
[08:53] <dimitern> axw, ah, so the machine id is not special
[08:53] <TheMue> axw: idea is to have an environ capability able to say if unsafe networkers are supported by passing the machine id
[08:53] <axw> dimitern: nope. the nonce is special
[08:53] <dimitern> axw, we'll need to lookup the machine in state via the api to be able to tell
[08:53] <axw> dimitern: that would probably be best
[08:53] <dimitern> axw, is the nonce constant?
[08:54] <axw> dimitern: yup
[08:54] <axw> set by the instance broker
[08:54] <axw> well, in this case it's set by the manual provisioning code
[08:54] <dimitern> axw, alright, thanks
[08:54] <axw> dimitern: TheMue: state.Machine.IsManual()
[08:54] <TheMue> axw: nice
[08:55]  * TheMue takes some notes
[09:14] <TheMue> Hmm, just found „SetHasVote()“. Looks strange, would have named it „GrantVote()“.
[09:21] <waigani> axw: I'm hitting the same problem that I hit when I connected to the api after bootstrap: something is hanging. I've set state-server to true on tests
[09:22] <waigani> axw: here's what I've done: http://pastebin.ubuntu.com/8043738/
[09:22] <axw> waigani: *shouldn't* need to do that... there should be no interaction with the state server to obtain the instances or their provider addresses
[09:23] <waigani> axw: I wonder if it is this: c.ConnectionEndpoint(false)
[09:23] <axw> waigani: I have nfi what ConnectionEndpoint is
[09:30] <mattyw> fwereade_, ping?
[09:31] <mattyw> fwereade_, ^^ I'm going to rebase the metric branch before landing - I think most of the review was us working around what should be implemented so I'm going to cleanup the history to make it easier to understand
[09:33] <perrito666> morning
[09:45] <perrito666> fwereade_: ping me when you arearound plz
[09:59] <waigani> axw: okay I got it working. Do we also only want to grab the first address of the first instance?
[10:00] <axw> waigani: I think we should just get them all, like the existing api address caching does
[10:00] <waigani> axw: cool, that is how I have it now
[10:18] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1356806
[10:18] <mup> Bug #1356806: cmd/juju: juju takes 3 seconds to do nothing <juju-core:New> <https://launchpad.net/bugs/1356806>
[10:29] <rogpeppe> davecheney: it's a pity that parsing rsa keys is so slow. ISTR that it's because it does a "probably prime" check when it parses the key.
[10:29] <rogpeppe> davecheney: although i thought that issue was fixed. hmm.
[10:34] <davecheney> rogpeppe: this is on ppc64
[10:34] <davecheney> where there is no asm math library
[10:34] <davecheney> juju is just doing something dumb
[10:34] <davecheney> it doesn't _always_ need to read the private key
[10:34] <rogpeppe> davecheney: i agree it's dumb, but it should not take 3s to parse an RSA key on any platform
[10:34] <davecheney> i don't know how optimised the ppc port is at the moment
[10:34] <davecheney> one would suggest, not at all
[10:34] <rogpeppe> davecheney: even if not optimised
[10:35] <rogpeppe> davecheney: parsing an RSA key should not require many math operations
[10:35] <rogpeppe> davecheney: it's essentially just reading a couple of big numbers
[10:35] <davecheney> rogpeppe: i think we're in agreement here
[10:36] <davecheney> yet here we are, waiting 3 seconds for nothing to happen
[10:36] <rogpeppe> davecheney: i'd like to fix both places
[10:36] <rogpeppe> davecheney: both the Go library (to make parsing RSA keys more efficient) and juju-core (to avoid doing that when it doesn't need to)
[10:36] <dimitern> alexisb, ping
[10:46] <dimitern> voidspac_, standup?
[10:47] <voidspac_> dimitern: I'm there...
[10:58] <voidspac_> dimitern: connection lost :-(
[10:58] <voidspac_> dimitern: overlapping subnets are not allowed in the new model - we have one network
[10:58] <dimitern> voidspac_, yes, but where have you seen the overlapping subnets in the model?
[10:59] <voidspac_> dimitern: but isn't it possible that subnets on different nics will look like they overlap
[10:59] <voidspac_> but don't because they're on separate networks
[10:59] <voidspac_> but we can't represent that in the new model
[10:59] <voidspac_> cidr is the primary key
[10:59] <dimitern> voidspac_, I'm not sure I understand
[11:00] <voidspac_> or should the different nics be bound to different subnets anyway
[11:00] <dimitern> voidspac_, cidr is the pk for a subnet, yes
[11:00] <dimitern> voidspac_, nics can be bound to one or more subnets
[11:00] <voidspac_> dimitern: two different networks can have the same subnets allocated to different machines though, right?
[11:00] <voidspac_> dimitern: so the same cidr can be duplicated on different networks
[11:00] <dimitern> voidspac_, the confusion I think comes from the term "network"
[11:01] <voidspac_> heh
[11:01] <dimitern> voidspac_, in juju, per the new model, all subnets are in the same (virtual/logical?) network
[11:02] <dimitern> voidspac_, in provider-space, if networks and subnets are supported, there is a possibility to have 2 networks with overlapping subnets
[11:02] <voidspac_> dimitern: right
[11:02] <voidspac_> dimitern: how do we cope with that in the juju model?
[11:02] <dimitern> voidspac_, but I don't think this is a valid case
[11:02] <voidspac_> dimitern: ok
[11:03] <voidspac_> dimitern: because they won't be routable to each other and so you can't use them for anything useful
[11:03] <dimitern> voidspac_, it's a weird setup, and we should not allow that, aiui
[11:03] <dimitern> voidspac_, yes
[11:03] <voidspac_> dimitern: but we currently don't "disallow it", we pretend it can't happen
[11:03] <dimitern> voidspac_, but good point about bringing this up
[11:03] <voidspac_> you can still have unroutable subnets, and just whichever one is added first wins
[11:04] <voidspac_> dimitern: I think in summary, networking is hard
[11:04] <dimitern> voidspac_, another possible case is having overlapping vlan subnets with different tags
[11:04] <voidspac_> right
[11:04] <dimitern> :) oh yeah
[11:04] <dimitern> voidspac_, you can't do this in maas I think
[11:05] <voidspac_> dimitern: TheMue:  also I spent a while talking to perrito666 yesterday - current restore does not *need* direct mongo access
[11:05] <dimitern> voidspac_, so juju shouldn't take it either
[11:05] <voidspac_> dimitern: TheMue: it just uses it after doing the restore to update some data, and can be fixed to not do that
[11:05] <voidspac_> dimitern: right
[11:05] <dimitern> voidspac_, great!
[11:05] <voidspac_> dimitern: fair enough
[11:05] <voidspac_> dimitern: TheMue: so that's what I've been on, I was going to swap and look at the bugs
[11:05] <voidspac_> dimitern: I have a precise VM, should I try and repro these bugs?
[11:07] <dimitern> voidspac_, yes please, I'll do the same; hopefully different setups can lead to reproduction of at least some bugs
[11:07] <voidspac_> dimitern: cool, coffee first then I'll spin up my VM
[11:17] <natefinch> fwereade_: team leads?
[12:03] <sinzui> wallyworld, ppc64el unittests are failing in many different ways.
[12:03] <wallyworld> oh joy
[12:03] <sinzui> wallyworld, I am going to increase the number of retries, and also look for cruft left on the machine
[12:04] <sinzui> I am not sure juju is really bad on ppc
[12:04] <wallyworld> sinzui: could it be new tests with ordering issues, or is it existing tests that have started failing? i'll have to look at the logs. currently in standup
[12:32] <dimitern> voidspac_, managed to reproduce some of the ci test failures on a precise vm using rev 9b2d2106476bad0ac528256db23bad073257d4bf
[12:32] <voidspac_> dimitern: cool
[12:35] <voidspac_> dimitern: any ideas on the cause?
[12:35] <voidspac_> dimitern: I took a break and am now still updating the vm
[12:35] <dimitern> voidspac_, actually I'm having issues with mongo now
[12:35] <voidspac_> juju won't build for me
[12:35] <voidspac_> think i need to update go
[12:35] <dimitern> voidspac_, I installed mongodb-server from the ppa:juju/experimental
[12:35] <voidspac_> godeps completed but still build errors
[12:35] <voidspac_> ah
[12:35] <voidspac_> ls
[12:35] <dimitern> voidspac_, you need to install mongodb-server and golang from there
[12:36] <voidspac_> dimitern: I can't just build golang from source?
[12:37] <natefinch> where Go comes from is certainly not the problem
[12:37] <dimitern> voidspac_, you can ofc, but to mimic the test environment as close as possible, I'd use the ppa
[12:37] <voidspac_> ok
[12:37] <voidspac_> just added the ppa and doing an update
[12:37] <voidspac_> go is building anyway
[12:44] <voidspac_> natefinch: actually I think it was the problem (which Go I was using)
[12:44] <voidspac_> natefinch: I think we're no longer compatible with the golang in precise
[12:44] <voidspac_> which I was using before
[12:45] <voidspac_> but building go from source works fine
[12:45] <natefinch> voidspac_: oh, yeah, sorry, I thought you were using something reasonable
[12:45] <natefinch> :D
[12:45] <voidspac_> although I now have golang from the experimental ppa as well
[12:45] <dimitern> voidspac_, interestingly, I can run (most of) state/ tests without any mongo issues, but for agent/ or worker/uniter/ tests I get the same failure: "cannot set admin password: need to login"
[12:45] <voidspac_> natefinch: I'd moved it out of the way to try something a while ago and not moved it back
[12:46] <natefinch> voidspac_: I think precise has like 1.1 or something, and we need ~1.2.1
[12:46] <voidspac_> dimitern: running worker/uniter now
[12:46] <voidspac_> natefinch: right
[12:46] <natefinch> voidspac_: newer versions of Go always work with older code.  But older versions of Go don't always work with newer code
[12:46] <voidspac_> natefinch: the actual build error was something nice and helpful like
[12:47] <voidspac_> ../../../code.google.com/p/go.crypto/openpgp/packet/encrypted_key.go:8: import /home/michael/canonical/pkg/linux_amd64/code.google.com/p/go.crypto/openpgp/elgamal.a: object is [linux amd64 go1.2.2 X:none] expected [linux amd64 go1.1.2 X:none]
[12:47] <voidspac_> which looked like a version error to me...
[12:47] <dimitern> does anybody have any clue about that error? "cannot set admin password: need to login"
[12:47] <voidspac_> natefinch: right, in theory anyway
[12:47] <voidspac_> dimitern: axw has done a lot of work in that area recently
[12:47] <natefinch> ahh yeah.... that's actually just old binaries in your pkg directory
[12:47] <dimitern> the "need to login" part happens in several unrelated tests, but not all of them fail
[12:48] <dimitern> voidspac_, this is actually on trunk HEAD
[12:48] <natefinch> voidspac_: they're actually really really really careful to make sure they maintain backwards compatibility.
[12:48] <dimitern> voidspac_, I had too many issues with rev 9b2d2106476bad0ac528256db23bad073257d4bf so decided to try trunk first
[12:48] <voidspac_> dimitern: right
[12:52] <voidspac_> *sigh*
[12:53] <voidspac_> adding upstream repo and actually pulling HEAD
[12:53] <voidspac_> now re-running
[12:53] <voidspac_> I had some failures, but I wasn't on HEAD
[12:53] <voidspac_> although I was on master, so failures still odd
[12:53] <voidspac_> but different
[12:54] <voidspac_> filter_test.go:305 c.Fatalf("unexpected config event")
[12:55] <dimitern> voidspac_, no failures with worker/uniter on HEAD; now re-running on 9b2d2106476bad0ac528256db23bad073257d4bf, after I wiped out $GOPATH/pkg/, just in case
[12:56] <voidspac_> dimitern: I get one failure on HEAD
[12:56] <voidspac_> the same as above
[12:56] <dimitern> voidspac_, lots of failures, all of them stemming from "need to login"
[12:57] <voidspac_> dimitern: so it looks to you like that issue has been fixed
[12:57] <voidspac_> if it's on 9b2d2106476bad0ac528256db23bad073257d4bf but not on head
[12:58] <dimitern> voidspac_, I think something's wrong with my mongo there, I want to fix this first, so I can have a reliable setup
[12:58] <voidspac_> ok
[12:59] <voidspac_> I'm trying worker/uniter with 9b2d2106476bad0ac528256db23bad073257d4bf
[13:04] <voidspac_> dimitern: I get no worker/uniter failures on 9b2d2106476bad0ac528256db23bad073257d4bf
[13:04] <dimitern> voidspac_, hmm how about for agent/ ?
[13:05] <voidspac_> will try
[13:11] <voidspac_> dimitern: I re-ran the worker / uniter tests and got the failure I got previously
[13:11] <voidspac_> ran agent/... tests fine
[13:11] <voidspac_> no failures
[13:12] <voidspac_> but I didn't have this failure on worker/uniter before
[13:12] <voidspac_> so looks like it's flaky rather than consistent
[13:12] <voidspac_> but it's still a different failure to the one on CI
[13:12] <voidspac_> or the ones you have
[13:12] <dimitern> right
[13:12] <voidspac_> so, fark
[13:12] <voidspac_> not helpful at all
[13:39] <hazmat> hmm.. recent yaml changes seem to cause some compat issues.. $ juju status -> error unmarshalling "/opt/juju/environments/ocean.jenv": YAML error: resolveTable item not yet handled: < (with <root>=DEBUG;unit=DEBUG)
[13:58] <dimitern> voidspac_, what mongodb are you using?
[13:58] <dimitern> voidspac_, $ `which mongod` --version ?
[14:01] <dimitern> voidspac_, sorry, I meant $ which mongod && mongod --version
[14:01] <perrito666> natefinch: wwitzel3 standup
[14:02] <dimitern> voidspac_, the version from the ppa is 2.2.4, but this is too old for what we're doing around state.Initialize(), more specifically, SetAdminMongoPassword uses admin.UpsertUser(), which is available in 2.4+
[14:02] <dimitern> sinzui, ping
[14:02] <sinzui> hi dimitern
[14:03] <dimitern> hey, I think I discovered a potential regression re precise + mongodb
[14:03] <sinzui> :(
[14:03] <dimitern> sinzui, but it's funny how it doesn't show on the CI tests - is the mongod there 2.2.4 from ppa:juju/experimental ?
[14:04] <dimitern> sinzui, because I can reproduce this issue at will with both HEAD of trunk and rev 9b2d2106476bad0ac528256db23bad073257d4bf (from http://juju-ci.vapour.ws:8080/job/run-unit-tests-precise-amd64/1456/consoleFull)
[14:04] <sinzui> dimitern, CI uses 2.4.6 to match cloud-tools
[14:05] <dimitern> sinzui, from where?
[14:05] <sinzui> http://ubuntu-cloud.archive.canonical.com/ubuntu/pool/main/m/mongodb/
[14:05] <sinzui> dimitern, ^ looks like I should switch to 2.4.9 to match
[14:06] <dimitern> sinzui, how can I install mongodb-server from there? add-apt-repo ...?
[14:07] <dimitern> sinzui, is it known that we no longer support mongodb 2.2.x, even on precise?
[14:07] <hazmat> sinzui, is manual provider in our functional tests?
[14:07] <hazmat> i'm encountering quite a few regressions.. just trying to determine sanity
[14:08] <sinzui> hazmat, yes, deploys and upgrades to instances in aws and real metal
[14:09] <dimitern> ah, found it - ppa:juju/stable
[14:10] <sinzui> sudo add-apt-repository cloud-archive:tools
[14:10] <hazmat> sinzui, hmm.. ic. okay. i'll keep filing bugs then
[14:10] <sinzui> dimitern, https://wiki.ubuntu.com/ServerTeam/CloudToolsArchive
[14:10] <dimitern> thanks sinzui
[14:10] <sinzui> dimitern, I copy packages to juju stable because many people fail to install that archive when using local precise
[14:12] <dimitern> sinzui, right, I'm trying the same tests now with 2.4.6 from the archive
[14:15] <dimitern> sinzui, I can't reproduce any of the failures now :(
[14:17] <hazmat> sinzui, looks like manual provider add-machine, but oddly not bootstrap, is failing if there isn't an ubuntu user in the base image.
[14:17] <sinzui> ah
[14:17] <dimitern> sinzui, but I have a suggestion how to add debug logging to mgo, so we can see what's going on in more detail
[14:18] <sinzui> hazmat, we have a bug for a cases where ubuntu was removed from a server image and lxc fails
[14:18] <hazmat> sinzui, also weirdly bootstrap but not add-machine uses ssh-agent
[14:20] <voidspac_> dimitern: /usr/bin/mongod
[14:20] <voidspac_> db version v2.4.6
[14:20] <voidspac_> Thu Aug 14 17:19:47.712 git version: nogitversion
[14:20] <voidspac_> dimitern: I added the ppa and installed mongodb-server
[14:20] <dimitern> voidspac_, right I suspected that much - it'll never work with mongodb-server 2.2.4 from ppa:juju/experimental
[14:20] <voidspac_> dimitern: but it must be preferring a newer version from elsewhere
[14:21] <voidspac_> dimitern: but that's not the same error that CI has, right?
[14:21] <dimitern> voidspac_, with mongodb-server 2.4.6 from cloud-archive:tools - no failures
[14:21] <voidspac_> they just have uniter test failures
[14:21] <bac> jcastro: cross-team call today?
[14:21] <voidspac_> ah
[14:21] <voidspac_> dimitern: what about CI with HEAD?
[14:21] <voidspac_> I guess I can check...
[14:21] <dimitern> voidspac_, I'll try HEAD next, just running uniter tests once more to be sure
[14:22] <sinzui> hazmat, I re added ubuntu user to the kvm/maas machine I am provisioning. And I provisioned it with manual provider
[14:22] <voidspac_> dimitern: 1470 passed!
[14:22] <voidspac_> http://juju-ci.vapour.ws:8080/job/run-unit-tests-precise-amd64/1470/
[14:22] <dimitern> voidspac_, sweet!
[14:22] <voidspac_> REVISION_ID=abfd7625309d31ecddd8fa799d64c8d4fa41977c
[14:22] <voidspac_> is that head?
[14:23] <voidspac_> I believe so
[14:24] <hazmat> sinzui, ok.. it's a regression though.. it used to work in 1.18.. glad to hear it works when the bug isn't triggered, but ideally it could be part of the regression check.
[14:26] <dimitern> voidspac_, so I think the failures were either bogus or they got fixed in subsequent branches
[14:26] <voidspac_> dimitern: I think so
[14:27] <voidspac_> dimitern: there are some canonistack-deploy failures
[14:27] <voidspac_> http://juju-ci.vapour.ws:8080/job/canonistack-deploy-precise-amd64-devel/
[14:27] <voidspac_> dimitern: but they're time-outs, so not sure how seriously to take those
[14:27] <voidspac_> I get that with canonistack fairly often
[14:29] <dimitern> voidspac_, perhaps the previous failed test didn't clean up properly?
[14:29] <voidspac_> dimitern: well, the very latest one is a bootstrap failure - the two before that are timeouts
[14:29] <voidspac_> so yes, looks like the test environment is now screwed :-)
[14:31] <dimitern> voidspac_, yes, my thoughts exactly :)
[14:31] <dimitern> sinzui, are you seeing this ^^
[14:32] <sinzui> voidspac_, dimitern . I have seen it. I have not completed investigation. Don't panic. The tests are -devel because canonistack is not production grade. when swift gets slow for example, everything fails
[14:33] <voidspac_> sinzui: but the unit test failures have been fixed
[14:33] <voidspac_> http://juju-ci.vapour.ws:8080/job/run-unit-tests-precise-amd64/1470/
[14:34] <voidspac_> current HEAD
[14:39] <sinzui> voidspac_, yes. I should admit that ppc64el is doing worse. I increased retesting to 4 to improve the chances to pass
[14:41] <voidspac_> sinzui: that's showing latest build passed too
[14:41] <voidspac_> Same revision.
[14:42] <voidspac_> Looks like the increased retesting did the trick...
[14:43] <voidspac_> There was a replicaset failure for that revision.
[14:47] <Beret> alexisb, sinzui - I forgot to mention this on the call
[14:47] <Beret> alexisb, sinzui - sparkiegeek has a branch up to fix an issue we found, would appreciate it getting through whatever process - https://github.com/juju/utils/pull/22
[14:48] <Beret> that hits us quite a lot - it would be good to get it into 1.20
[14:48] <alexisb> Beret, ack
[14:48] <Beret> thanks
[14:48] <alexisb> anyone know who is the oncall reviewer today?
[14:49] <alexisb> natefinch, ^^
[14:51] <natefinch> alexisb: tim and dave: https://docs.google.com/a/canonical.com/spreadsheets/d/1iQLLOWrjzxddm5VhYWYi0-2k3xI6wTMlpkvnVNJCYGY/edit
[14:52] <sinzui> Beret, does that address bug 1354685
[14:52] <mup> Bug #1354685: installation of packages for containers should be retried in face of lock errors <cloud-installer> <landscape> <lxc> <juju-core:In Progress by adam-collard> <https://launchpad.net/bugs/1354685>
[14:52] <sparkiegeek> sinzui: yes :)
[14:52] <natefinch> alexisb: I know that's not helpful
[14:53] <sinzui> sparkiegeek, excellent...CI experiences this issue often.
[14:55] <Beret> sinzui, yes
[15:02] <dimitern> Beret, sparkiegeek, alexisb, https://github.com/juju/utils/pull/22 reviewed
[15:02] <alexisb> dimitern, you rock, thanks!
[15:05] <sparkiegeek> alexisb: hear hear!
[15:05] <sparkiegeek> dimitern: thanks, first reaction: awesome review :)
[15:06] <hazmat> sinzui, is there any trick to building 1.20 branch.. its been broken for me for a while.. here's my latest attempt.. http://paste.ubuntu.com/8046050/
[15:06] <dimitern> sparkiegeek, cheers :)
[15:08] <natefinch> hazmat: bzr branches have broken on me a few times over the past few months.  if you delete /home/kapil/src/launchpad.net/gnuflag and go get it again, it should be fine
[15:09] <natefinch> (and then rerun godeps)
[15:09] <hazmat> natefinch, that did the trick
[15:09] <hazmat> thanks
[15:09] <sinzui> hazmat, natefinch speaks the truth. I have also changed to each branch and pulled the latest. godeps doesn't pull, so if the rev isn't in the repo, it fails
[15:10] <hazmat> sinzui, yeah.. i had been manually doing bzr pull but it kept outputting no changes.. too late to debug more now.. but good to know the workaround.
[15:14] <dimitern> sinzui, hazmat, I have a couple of nice scripts to run godeps and update automatically what's needed - I'm sending a mail to juju-dev in case you might be interested
[15:15] <hazmat> dimitern, sounds like a great merge request for a makefile ;-)
[15:15] <natefinch> sinzui, dimitern, hazmat: current godeps -u *will* fetch new revisions from the remote repo
[15:15] <hazmat> natefinch, *should*
[15:16] <natefinch> sinzui, dimitern, hazmat: sometimes you have to do run godeps more than once
[15:16] <dimitern> natefinch, oh really?
[15:16] <dimitern> natefinch, yep, my script takes care of that
[15:16] <hazmat> natefinch, didn't work for me.. and running multiple times .. with intermittent failures.. ick
[15:16] <natefinch> hazmat: one time can't possibly always work
[15:17] <hazmat> natefinch, pull before update seems sane to me
[15:17] <natefinch> hazmat: actually... strike that
[15:17] <natefinch> hazmat: yeah, the new code does that with -u
[15:17] <natefinch> hazmat: where "new" = a month or so ago
[15:18] <hazmat> hmm.. mine is pretty old (~march).. does the new version support comments
[15:18]  * hazmat updates
[15:21] <natefinch> hazmat: that should fix the problem of not pulling down updates before trying to set the current commit
[15:25] <rogpeppe> when i use juju switch, i see:
[15:25] <rogpeppe> ERROR couldn't read the environment
[15:25] <rogpeppe> this seems to have broken some time relatively recently
[15:25] <rogpeppe> ha, it should really print the actual error
[15:26] <rogpeppe> ah, that's better:
[15:26] <rogpeppe> ERROR couldn't read the environment: cannot parse "/home/rog/.juju/environments.yaml": YAML error: found character that cannot start any token
[15:26] <hazmat> rogpeppe, yup.. recent yaml changes
[15:26] <ericsnow> rogpeppe: when you have a minute could you take a look at https://github.com/juju/utils/pull/19?
[15:26] <hazmat> rogpeppe, i had the same issue earlier on trunk. needed to quote some strings
[15:27] <hazmat> in my jenv file
[15:27] <rogpeppe> hazmat: actually, it seems i accidentally had a tab at the start of my environments.yaml
[15:27] <hazmat> rogpeppe, oh. i had this one .. $ juju status -> error unmarshalling "/opt/juju/environments/ocean.jenv": YAML error: resolveTable item not yet handled: < (with <root>=DEBUG;unit=DEBUG)
[15:27] <rogpeppe> hazmat: (i think it was probably me experimenting with whether yaml is fully JSON-compatible)
[15:27] <katco> rogpeppe: sorry about that; yesterday i put us on a newer version of goyaml.
[15:28] <rogpeppe> katco: it's not your fault
[15:28] <rogpeppe> katco: it's the fault of whoever printed that error without including the actual cause...
[15:28] <katco> hazmat: i received a _lot_ of those errors on goyaml head, so i used an old commit that targeted a fix we wanted. didn't look into it too much
[15:28]  * rogpeppe does not run git blame.
[15:28] <rogpeppe> katco: oh, interesting.
[15:28] <rogpeppe> katco: we're not using gopkg.in/yaml.v1 tip?
[15:29] <hazmat> katco, fair enough.. i'm glad for the underscore stripping fix.. just worried about users in the field who are going to encounter this when they upgrade
[15:29] <katco> rogpeppe: we are not. lots of tests failed with hazmat's error
[15:29] <katco> hazmat: did you figure out what it was?
[15:29] <rogpeppe> hazmat, katco: i'm guessing it's a quoting error on output
[15:30] <rogpeppe> perhaps it was not adding the correct quotes for strings containing "<"
[15:30] <hazmat> katco, we're writing out jenv with    logging-config: <root>=DEBUG;unit=DEBUG
[15:30] <rogpeppe> hazmat: yup
[15:30] <hazmat> katco, but it couldn't be read without quoting it "<root>=DEBUG;unit=DEBUG"
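The manual workaround hazmat describes — quoting any jenv value that starts with a character the affected goyaml revision mishandles — can be sketched like this. This is an illustrative helper, not part of juju or goyaml; the character set and the name `quoteYAML` are assumptions for the example:

```go
package main

import (
	"fmt"
	"strings"
)

// quoteYAML wraps s in double quotes when it contains a character
// (notably '<') that the buggy parser choked on when re-reading its
// own unquoted output. Illustrative only, not juju's actual code.
func quoteYAML(s string) string {
	if strings.ContainsAny(s, "<>&*|") {
		return fmt.Sprintf("%q", s)
	}
	return s
}

func main() {
	// The value from the broken jenv, now safely quoted.
	fmt.Println(quoteYAML("<root>=DEBUG;unit=DEBUG"))
	// A plain value passes through unchanged.
	fmt.Println(quoteYAML("precise"))
}
```

With the value quoted, `logging-config: "<root>=DEBUG;unit=DEBUG"` round-trips through the parser that rejected the unquoted form.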
[15:31] <dimitern> hazmat, sinzui, mail sent (Running godeps -u dependencies.tsv easily)
[15:31] <hazmat> what rogpeppe said ;-)
[15:31] <katco> hazmat: rogpeppe: ah i see... so we're causing our own failure with automated output? is that going to be an issue for 1.20.4
[15:31] <katco> ?
[15:31] <hazmat> katco, very likely
[15:32] <katco> hazmat: frown. so we need to fix that before i backport?
[15:32] <natefinch> gah, this is why yaml is a PITA
[15:32] <hazmat> katco, ie. if a user has existing envs, and upgrades, they can't use juju without manually editing their jenvs..
[15:32] <rogpeppe> hazmat: interesting, pyyaml also prints that string unquoted
[15:32] <rogpeppe> hazmat: (actually i think it might be the same original C code base)
[15:32] <hazmat> rogpeppe, yeah.. but it parses it fine.
[15:32] <katco> hazmat: yeah that's not a good experience at all.
[15:34] <hazmat> hmm.. i should verify that with trunk using godeps.. its possible i accidentally just ran with the binary produced from go get.. since it actually worked and i wanted trunk of juju.
[15:35] <rogpeppe> hazmat, katco, niemeyer: yes, it's definitely a bug in gopkg.in/yaml.v1 tip
[15:35] <rogpeppe> this fails: http://paste.ubuntu.com/8046246/
[15:36] <rogpeppe> and it should not
[15:36] <rogpeppe> as it's producing output that it cannot parse
[15:37] <katco> rogpeppe: sounds like it might be a bug further back in history as well?
[15:37]  * rogpeppe bisects
[15:37] <katco> https://github.com/go-yaml/yaml/commit/1418a9bc452f9cf4efa70307cafcb10743e64a56#diff-d41d8cd98f00b204e9800998ecf8427e
[15:38] <katco> is the version trunk is using (and what we're wanting to backport to 1.20)
[15:38] <hazmat> katco, yeah.. with the pinned version in dependencies.tsv things work as expected
[15:38] <hazmat> so no issue there
[15:38] <katco> hazmat: ahh great!
[15:38] <katco> hazmat: sounds like i made the correct decision ;)
[15:38] <hazmat> indeed :-)
[15:40] <rogpeppe> katco: you definitely did
[15:41] <rogpeppe> looks like the problem was introduced with 72c33f6840f49f9ed7d1faef7562b3266640fdf4
[15:41] <rogpeppe> which is not surprising, as https://github.com/go-yaml/yaml/issues/1 shows a < being used as a special char
[15:42] <rogpeppe> once again, yaml is too complex for its own good
[15:47] <rogpeppe> hazmat, katco, niemeyer: https://github.com/go-yaml/yaml/issues/24
[15:47] <katco> rogpeppe: thank you kindly rogpeppe
[15:47] <rogpeppe> katco: np
[15:56]  * perrito666 suddenly notices he cannot ssh from state server into agents bc it's not so straightforward as he thought
[16:12] <bodie_> fwereade_, if/when you're available, could you comment on https://github.com/juju/juju/pull/415 ? I think it's good to go but it still has the overwrite behavior, which seems simpler to me and more like what is probably desired
[16:29] <sinzui> hi natefinch hazmat identified a regression between 1.20 and 1.21. This isn't a blocking bug because CI didn't find it. Can you help direct an engineer to look at bug 1356899
[16:29] <mup> Bug #1356899: manual provider add-machine fails if no ubuntu user on machine <manual-provider> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1356899>
[16:30] <natefinch> sinzui: interesting, ok
[16:30] <niemeyer> rogpeppe: Thanks, I'll have a look
[16:40] <gsamfira> perrito666: try adding: "ForwardAgent    yes" on your machine in ~/.ssh/config
[16:40] <gsamfira> helps a lot
[17:00] <alexisb> perrito666, I am running a few minutes late
[17:00] <perrito666> no worries, I am trying to kick my camera back into life
[17:04] <alexisb> perrito666, now I am waiting on my hangout
[17:04]  * alexisb tries firefox
[17:08] <mattyw> fwereade_, I wonder if the periodicworker should have a mechanism for working out how many times the workerFunc has been called - this could just built into whatever implements it I guess
[17:16] <fwereade_> mattyw, ehh, we'll build it when we need it
[17:16] <fwereade_> bodie_, tab opened -- and overwriting in general sgtm, I don't think I was arguing against that, I'll take a look
[17:18] <mattyw> fwereade_, I *might* find it useful in the cleanup worker test I'm writing so that I know that a cleanup has been run
[17:19] <mattyw> fwereade_, I'll type some things and see how it *feels*
[17:19] <fwereade_> mattyw, sgtm
[17:19] <fwereade_> mattyw, it'll need to be goroutine-safe
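A goroutine-safe call counter of the kind mattyw wants for his cleanup worker test could look like the sketch below. The type and method names are hypothetical, not juju's periodicworker API; the point is fwereade_'s note that the count must be safe across goroutines, which `sync/atomic` handles:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countingFunc wraps a worker func so a test can observe how many
// times it has run. Illustrative names, not the real juju API.
type countingFunc struct {
	n  int64
	fn func()
}

// call increments the counter atomically, then runs the wrapped func,
// so it is safe to invoke from multiple goroutines.
func (c *countingFunc) call() {
	atomic.AddInt64(&c.n, 1)
	c.fn()
}

// count reads the counter with an atomic load.
func (c *countingFunc) count() int64 { return atomic.LoadInt64(&c.n) }

func main() {
	c := &countingFunc{fn: func() { /* the periodic work */ }}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.call() }()
	}
	wg.Wait()
	fmt.Println(c.count()) // all 10 concurrent calls are counted
}
```

A test can then assert `c.count() > 0` to know a cleanup has actually run, which is the property mattyw is after.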
[17:20] <bodie_> fwereade_, cool, thanks.  action-fail should be ready in a few minutes too
[17:33] <bodie_> fwereade_, PR 520 up
[18:04] <alexisb> ericsnow, team meeting?
[18:04] <alexisb> fwereade_, team meeting?
[18:24] <mattyw> I'm calling it a night then folks, see you all tomorrow
[18:24] <alexisb> bye mattyw !
[18:31] <waigani> natefinch: found your comment - usually I get notified by email. Github must not notify for closed PRs...
[18:32] <natefinch> waigani: ahh, dang.  I'll remember that for the future
[18:45] <sinzui> :(
[18:46] <sinzui> we have a utopic regression.
[18:46] <natefinch> *sad trombone*
[18:50] <sinzui> natefinch, Can you pass this on to an engineer: https://bugs.launchpad.net/juju-core/+bug/1357033
[18:50] <mup> Bug #1357033: sourcesSuite.TestGetFilesToBackup consistently fails on utopic <ci> <regression> <test-failure> <utopic> <juju-core:Triaged> <https://launchpad.net/bugs/1357033>
[18:53] <natefinch> sinzui: yep
[18:53] <alexisb> wwitzel3, ready when you are for our 1x1, if you would like to meet early
[18:58] <wwitzel3> alexisb: sure :)
[19:01] <jrwren> how do I get relation data from outside a relation hook?
[19:01] <jrwren> relation-ids says error: no relation name specified
[19:02] <jrwren> relation-list says error: no relation id specified
[19:02] <jrwren> nevermind. consider ^^ that my rubber duck session.
[19:03] <jrwren> relation-ids <relation name> works fine. I typoed :(
[19:06] <waigani> natefinch: https://github.com/juju/juju/pull/521
[19:09] <waigani> natefinch: just reverted for now, I'll look at fixing the bug today and propose a new branch
[19:18] <waigani> natefinch: oh and notification of your comment did come through, my mail client is just slow
[20:10] <katco> https://github.com/juju/juju/pull/522 ready for review
[20:10] <katco> one of the last blockers for 1.20.4 (though i saw we found something new :(  )
[20:44] <natefinch> katco: lgtm'd
[20:44] <katco> natefinch: ty, sir
[20:45] <natefinch> katco:  welcome :)
[20:57] <ericsnow> natefinch: FYI, I'm planning on talking to Ian and Martin tonight about reviewboard.
[20:58] <natefinch> ericsnow: awesome, thanks for spending time after hours to do so
[20:58] <ericsnow> natefinch: no worries.  I really want to get this moving. :)
[21:00] <natefinch> ericsnow: awesome
[21:00] <natefinch> EOD for me
[23:36] <waigani> thumper: so instance.Instance  has a Ports function, asks for a machine ID. I could call that from the bootstrap instance, pass in "0" for id.
[23:36] <thumper> nuh
[23:36] <thumper> not what you want
[23:36]  * thumper gets back to reviewing...
[23:37] <waigani> thumper: environ.Ports() then?
[23:37] <thumper> hang on
[23:38] <waigani> thumper: another question: should I add a new address for each port returned i.e. ["localhost:1234","localhost:5678"]
[23:38]  * thumper ignores waigani for a minute
[23:39] <waigani> hehe
[23:43] <thumper> waigani: if you read the comment on the environ.Ports method, it is pretty clear that it isn't what you want...
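The address-per-port shape waigani asks about in the question above can be sketched with a small helper. `hostAddresses` is a hypothetical name for illustration, not a juju function; `net.JoinHostPort` does the "host:port" assembly:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// hostAddresses builds one "host:port" address string per open port,
// e.g. ports 1234 and 5678 on localhost become
// ["localhost:1234", "localhost:5678"]. Illustrative helper only.
func hostAddresses(host string, ports []int) []string {
	addrs := make([]string, 0, len(ports))
	for _, p := range ports {
		addrs = append(addrs, net.JoinHostPort(host, strconv.Itoa(p)))
	}
	return addrs
}

func main() {
	fmt.Println(hostAddresses("localhost", []int{1234, 5678}))
}
```

Using `net.JoinHostPort` rather than string concatenation also keeps IPv6 hosts correct, since it brackets them automatically.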