[00:02]  * arosales doesn't see wallyworld around . . .
[06:41] <jam1> hmm... CI is unhappy
[06:41] <jam1> It looks like AMZ is just out of instances
[06:41] <jam1> but "hp upgrade" is failing
[06:41] <jam1> and has been since r2644
[06:42] <jam1> I wonder if axw tested upgrade with the bootstrap patch version change
[06:46] <jam1> I see this in the logs, which looks worrying: machine-2: 2014-04-17 12:56:26 INFO juju.worker.apiaddressupdater apiaddressupdater.go:58 API addresses updated to []
[06:47] <jam1> also weird, all-machines.log only shows machine-2 getting the updated tools. Nothing about machine-0 even noticing that it wanted them.
[06:47] <jam1> I do wish we could run in --debug mode...
[06:48] <jam1> I wonder if we could log which APIs are being called in Info mode, even if we don't log all of the details we would in Debug mode.
[06:49] <jam1> anyway, upgrade is borked... :(
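What jam1 is asking for above — logging which API method was called at Info level while keeping the argument details at Debug level — could be sketched with a simple handler wrapper. This is a hypothetical illustration, not juju's actual API server code; `handler` and `logCalls` are invented names for the sketch.

```go
package main

import (
	"fmt"
	"log"
)

// handler is a hypothetical API method implementation.
type handler func(args string) (string, error)

// logCalls wraps a handler so that every call logs the method name at
// Info level; the argument details are only logged when debug is on.
func logCalls(name string, h handler, debug bool) handler {
	return func(args string) (string, error) {
		if debug {
			log.Printf("DEBUG calling %s(%q)", name, args)
		} else {
			log.Printf("INFO calling %s", name)
		}
		return h(args)
	}
}

func main() {
	// A trivial method that echoes its argument back.
	echo := func(args string) (string, error) { return args, nil }
	wrapped := logCalls("Client.Echo", echo, false)
	out, err := wrapped("hello")
	fmt.Println(out, err)
}
```

The point of the design is that the per-call method name is cheap and safe to log always, so operators get a call trace without needing `--debug`.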
[07:16] <vladk> jam1: good morning
[07:58] <dimitern> vladk, morning
[07:58] <dimitern> vladk, are you working today?
[07:59] <dimitern> vladk, jam is usually off on fridays (swapping them with sundays)
[07:59] <dimitern> mgz, you around today?
[08:00]  * dimitern will desperately need reviewers today :/
[08:00] <vladk> dimitern: morning, I'm working, and you?
[08:01] <dimitern> vladk, yes - there was some misunderstanding on my part - i thought i had public holiday on monday and decided to take it, but it turned out it's today.. meh no big deal
[08:20] <vladk> dimitern, could you take a look https://codereview.appspot.com/88380044/
[08:20] <dimitern> vladk, looking
[08:26] <dimitern> vladk, reviewed
[08:37] <vladk> dimitern: thanks
[08:37] <vladk> why do you setupNetworks only if config.HasNetworks?
[08:37] <vladk> I think they should setup always, so they setup also on bootstrap and add-machine commands.
[08:39] <dimitern> vladk, eventually we'll do that, but for now the requirement is to set them up only when specified explicitly when deploying
[08:46] <vladk> dimitern: I told about this with jam, his opinion:
[08:46] <vladk> I'm probably happier if we set up everything rather than only the ones the user supplied
[08:46] <vladk> as then if you want to deploy another service, in say a container, then we know that we do have that network
[08:46] <vladk> now, when we have the NetworkWorker that can do dynamic setup of networks
[08:46] <vladk> it matters less
[08:46] <vladk> because then we can just set up the minimum, and then add ones that we need later.
[08:47] <vladk> I thought we were starting all by default.
[08:48] <dimitern> vladk, exactly, the worker will give us that
[08:48] <dimitern> vladk, but remember we're doing MVP now, so we're taking some shortcuts
[09:12] <voidspace> morning all
[09:12] <voidspace> rogpeppe: ping
[09:13]  * rogpeppe is not really here
[09:13] <voidspace> It's a UK bank holiday today and Monday
[09:13] <voidspace> rogpeppe: that's what I was checking
[09:13] <rogpeppe> voidspace: indeed it is
[09:13]  * voidspace would like to not really be here as well
[09:14] <voidspace> rogpeppe: so I was just checking in
[09:14] <rogpeppe> voidspace: i'm just sorting out insurance and packing before going away tomorrow...
[09:14] <voidspace> rogpeppe: happy good friday
[09:14] <rogpeppe> voidspace: you too
[09:14] <voidspace> rogpeppe: have a good weekend
[09:14] <rogpeppe> voidspace: you'll be happy to head HA has now landed...
[09:14] <rogpeppe> s/head/hear/
[09:15] <voidspace> rogpeppe: I just saw some emails
[09:15] <voidspace> rogpeppe: awesome
[09:15] <voidspace> rogpeppe: ah, looks like you're going on a proper holiday
[09:15] <rogpeppe> voidspace: have a go - see if you can make it work...
[09:15] <voidspace> rogpeppe: enjoy
[09:15] <rogpeppe> voidspace: i am!
[09:15] <voidspace> rogpeppe: will do, I'll try and break it :-)
[09:15] <jam1> vladk: dimitern: I'm "off" today, but if you need something you can ping me.
[09:15] <rogpeppe> voidspace: taking advantage of colorado mountain stuff
[09:16] <voidspace> rogpeppe: ah, of course
[09:16] <voidspace> gophercon
[09:16] <jam1> Upgrade is broken, so I might give it a poke, as we can't do any sort of release with that
[09:16] <voidspace> rogpeppe: see you in vegas then
[09:16] <rogpeppe> voidspace: up
[09:16] <rogpeppe> yup
[09:16] <rogpeppe> voidspace: aye
[09:16] <jam1> hi voidspace, welcome back
[09:16] <jam1> (well, welcome back to IRC at least :)
[09:16] <voidspace> jam1: hi, and thanks
[09:17] <jam1> voidspace: are you back in the UK?
[09:17] <voidspace> jam1: yep
[09:17] <voidspace> jam1: for a week at least
[09:17] <jam1> voidspace: lucky you to get to fly trans atlantic every other week
[09:17] <voidspace> jam1: I'm waiting to see how bad the jetlag is
[09:17] <voidspace> it usually lasts me a week
[09:17] <jam1> voidspace: just don't change your TZ for this week
[09:17] <voidspace> so I should recover just in time
[09:17] <jam1> wake up 6 hours late
[09:17] <voidspace> jam1: hah, I did consider it
[09:17] <voidspace> jam1: my daughter has other ideas
[09:17] <jam1> voidspace: I thought you liked to sleep in and start late anyway
[09:18] <voidspace> hah, normally I do
[09:19] <voidspace> Brett Cannon (Python core dev) will be looking for work soon, and has Go experience (by the way)
[09:19]  * voidspace subtly changing topic away from my sleeping habits
[09:20] <voidspace> he's an excellent dev, hopefully we have a slot for him when he becomes available
[09:20] <jam1> voidspace: no such luck mr sleepy. I think I've actually met Brett at a pycon a few years ago. Is he the one who was doing importlib stuff?
[09:21] <voidspace> jam1: yep, currently a googler - great guy
[09:21] <jam1> voidspace: if he's looking, you should get his name in to Alexis, I think our slots are filling up pretty quickly.
[09:21] <voidspace> jam1: he's not looking just yet - but planning a move in the next few months
[09:22] <voidspace> he has to wait a bit longer for his options to vest, so I don't think we can tempt him into an early leave
[09:22] <voidspace> he'll get in touch with me though, so we'll see
[09:26] <jam1> voidspace: so "a few months" is certainly long enough for things to change. But at least atm the head count should all be filled by then (I think)
[09:31] <dimitern> jam1, ah, alright then
[09:32] <jam1> dimitern: since you and fwereade are hanging out, can you poke him about Manifest-charm-deployer ? I'm pretty sure I LGTM'd it, and it would be good to have in the next release
[09:32] <fwereade> jam1, I'm catching up on email at the moment, and I'll want to run a fresh live test against reality with the latest code, but I'll land that today
[09:32] <jam1> fwereade: sounds good.
[09:33] <jam1> fwereade: as for "user" all the other files were explicitly checked with ft.File("name") preserveUser is checking the same thing but doesn't *look* the same as the previous N checks.
[09:33] <jam1> so ignore me
[09:34] <jam1> but I missed it because it wasn't matching the pattern
[09:34] <fwereade> jam1, yeah, I worried vaguely that it was less obvious, but thought I'd prefer to stick with the var than dupe the definition
[09:34] <fwereade> jam1, maybe I should be putting them all in vars, but that felt inconvenient
[09:35] <jam1> fwereade: at this point, we've spent too long discussing it vs just landing it :)
[09:38] <fwereade> jam1, quite so :)
[09:38] <jam1> fwereade: do you have a take on the "juju bootstrap" should always be exactly pinned discussion?
[09:39] <jam1> I feel like the discussion has gotten into bickering, and I'm trying to keep it productive.
[09:39] <jam1> I feel like we haven't really come to a consensus
[09:39] <jam1> so I don't want to actually change our behavior without having that.
[09:40] <jam1> But I don't want to come across as just being petulant or defensive.
[09:41] <jam1> I think abentley does have some points we should consider, but I also want us to come up with a strong consensus as I'd rather have consistency in this area, rather than doing it X for 2 releases and then changing our minds again.
[09:47] <fwereade> jam1, yeah, just catching up and pondering
[09:49] <jam1> fwereade: anyway, I'd appreciate more input in the thread, as I feel like more comments from me isn't productive anymore.
[09:49] <jam1> rogpeppe: if you're still here: https://bugs.launchpad.net/juju-core/+bug/1309444
[09:49] <_mup_> Bug #1309444: peergrouper spins in local/upgraded environment <ha> <logging> <juju-core:Triaged> <https://launchpad.net/bugs/1309444>
[09:49] <jam1> local provider doesn't support --replicaset (yet?) so the peergrouper just bounces endlessly
[09:50] <fwereade> jam1, do you remember who's been working on the precise/trusty lxc issues?
[09:50] <jam1> and I *think* upgraded environments will do the same (today)
[09:50] <jam1> fwereade: do you have an issue in particular?
[09:50] <rogpeppe> jam1: oops, the peergrouper worker should be disabled for local environments
[09:50] <rogpeppe> jam1: upgraded environments might be ok if axw's branch has landed
[09:50] <jam1> rogpeppe: is it sufficient for it to see "not in replicaset mode" and just exit gracefully?
[09:51] <rogpeppe> jam1: it could check the replica set status and see that there are no members
[09:51] <rogpeppe> jam1: that would be somewhat more graceful
[09:51] <jam1> rogpeppe: well this ends up in the log 2x:
[09:51] <jam1> 2014-04-18 09:45:41 ERROR juju.worker.peergrouper worker.go:137 peergrouper loop terminated: cannot get replica set status: cannot get replica set status: not running with --replSet
[09:51] <jam1> 2014-04-18 09:45:41 ERROR juju.worker runner.go:218 exited "peergrouper": cannot get replica set status: cannot get replica set status: not running with --repl
[09:51] <jam1> that's a lot of not-getting the replica set status :)
[09:52] <rogpeppe> jam1: it can't just exit though - otherwise it'll be restarted (we should perhaps fix that so it's possible for a worker to exit without being restarted)
[09:52] <jam1> rogpeppe: I thought we had a way for workers to exit with "I'm finished now"
[09:52] <rogpeppe> jam1: i don't think so, but we may do
[09:52] <fwereade> jam1, see #juju-gui just now
[09:52] <rogpeppe> jam1: i always thought that just exiting with a nil error should be enough
[09:52] <fwereade> rogpeppe, jam1, they were meant to not be restarted if they return nil
[09:53] <fwereade> rogpeppe, jam1, not sure what happened if that never landed, I thought we rediscussed that exact issue a few weeks ago
[09:53] <rogpeppe> fwereade: yeah, we should do that
[09:53] <rogpeppe> fwereade: (if we don't already)
[09:53]  * rogpeppe is really gone now
[09:54] <jam1> rogpeppe: fwereade: "if workerInfo.start == nil { // The worker has been deliberately stopped"
[09:55] <rogpeppe> jam1: ah, that's cool then
[09:55] <fwereade> excellent
[10:03] <wwitzel3> hello
[10:50] <natefinch> mgz, perrito666, dimitern, fwereade: staup?
[10:50] <dimitern> natefinch, coming
[10:50] <natefinch> standup that is
[14:23] <dimitern> fwereade, mgz, vladk|offline, natefinch, i'd appreciate a review on this critical bug fix https://codereview.appspot.com/89260044
[15:31] <dimitern> sinzui, when you're about to release 1.19.1, please add this to the release notes https://bugs.launchpad.net/juju-core/+bug/1307513/comments/1
[15:31] <_mup_> Bug #1307513: Support multiple (physical & virtual) network interfaces with the same MAC address on the same machine <tech-debt> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1307513>
[15:33] <sinzui> dimitern, Fab!
[15:33] <sinzui> Thank you very much dimitern
[15:34] <dimitern> sinzui, :) np
[18:43] <natefinch> sinzui: I have a fix for this bug, but I don't think I actually know the area of the code well enough to be confident that it's the right fix.  It sort of looks like it should never have worked before:  https://bugs.launchpad.net/juju-core/+bug/1304407
[18:43] <_mup_> Bug #1304407: juju bootstrap defaults to i386 <amd64> <apport-bug> <ec2-images> <metadata> <trusty> <juju-core:Triaged> <juju-core 1.18:Triaged> <juju-core (Ubuntu):Triaged> <https://launchpad.net/bugs/1304407>
[18:44] <sinzui> natefinch, I think other rules that forced the local arch were in play
[18:45] <natefinch> sinzui: could be.  It looks like the code that picks the image gets a list of matching ones back (one for amd64 and one for 386) and then just takes whichever is first
[18:46] <sinzui> ouch
[18:48] <sinzui> natefinch, Isn't the real issue with that bug that we think amd64 is preferred, either because AWS prefers it or because we see our local arch as the preference?
[18:50] <sinzui> natefinch, would setting a large mem constraint also force selection of amd64? (all the i386 instances have small memory)
[18:51] <natefinch> sinzui: what I was seeing was that we were passing in the constraints the user had defined (in this case, no constraints), and then filtering the list of images down to the cheapest ones, which leaves m1.small, and there's two versions, 386 and amd64.  Since there was more than one that matched what the user wanted we just picked the first one.  I don't know how it was being restricted to local arch before.
[18:52] <natefinch> sinzui: what my change does is that if there's more than one image that matches what the user requested, it prefers to choose the one with the same arch as the local machine
[18:52] <natefinch> sinzui: but if such a thing doesn't exist, it just picks whatever is first in the list
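Natefinch's fix, as described above, amounts to a preference-with-fallback over the matching images. A hedged sketch of that selection logic (the `image` struct and `pickImage` are invented names; the real juju type has many more fields):

```go
package main

import (
	"fmt"
	"runtime"
)

// image is a simplified stand-in for an instance image record.
type image struct {
	ID   string
	Arch string
}

// pickImage prefers an image whose arch matches preferredArch (for example
// the client's runtime.GOARCH); if none matches, it falls back to the first
// image in the list, mirroring the behaviour natefinch describes.
func pickImage(images []image, preferredArch string) (image, bool) {
	if len(images) == 0 {
		return image{}, false
	}
	for _, img := range images {
		if img.Arch == preferredArch {
			return img, true
		}
	}
	return images[0], true
}

func main() {
	candidates := []image{
		{ID: "ami-i386", Arch: "i386"},
		{ID: "ami-amd64", Arch: "amd64"},
	}
	img, ok := pickImage(candidates, runtime.GOARCH)
	fmt.Println(img.ID, ok)
}
```

As the rest of the conversation notes, an alternative design would be to prefer amd64 unconditionally rather than match the client machine's arch.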
[18:53] <sinzui> natefinch, I agree with your solution. I suppose for many people, the arch is not important so long as the service works
[18:54] <natefinch> sinzui: right.  If it were up to me, I'd probably just default to always choosing amd64... it's generally the default these days anyway, and matching the dinky old laptop someone is using to run the client on is not very intuitive to me.... but I'm not sure if other people had a specific reason for matching the local machine
[18:55] <sinzui> natefinch, I agree. I suspect the surprise was we expect the more powerful /better arch to be selected
[18:56] <natefinch> sinzui: I can send a quick email to the list about it.  either way is trivial to code.  I'd think most people would presume 64 bit is better, all things being equal.
[18:58] <sinzui> yep
[19:51] <natefinch> sinzui: any idea on how to reproduce the upgrading issue?   I just went 1.18->trunk without a hitch
[19:52] <sinzui> natefinch, Your units upgraded?
[19:53] <natefinch> sinzui: yep   just standard wordpress/mysql
[19:53] <natefinch> sinzui: but 1.19 now
[19:53] <natefinch> (1.19.1.1)
[19:54] <sinzui> The tests all set tool-metadata-url to the testing streams
[19:55] <sinzui> CI is republishing tools now. in 15 minutes there will be tools that are trunk
[19:56] <natefinch> sinzui: I did use --upload-tools, that probably skews things
[19:56] <sinzui> Yes, users are not supposed to use that
[19:57] <sinzui> I cannot set tools on joyent because several of the libs used by precise are too old
[19:57] <sinzui> and the machines are not allowed to get deps from anywhere other than Lp
[19:57]  * sinzui ponders giving up for the day
[19:59] <natefinch> sinzui: I'll take another look without upload-tools
[20:20] <natefinch> sinzui: how do I get it to upgrade if 1.19 hasn't been released?
[20:21] <sinzui> set tools-metadata-url to one of the testing streams
[20:21] <sinzui> natefinch, which cloud are you using
[20:21] <natefinch> sinzui: aws
[20:22] <sinzui> natefinch, juju-dist.s3.amazonaws.com/testing/tools
[20:22] <sinzui> hmm, publication of the latest rev is stalled though
[20:22]  * sinzui looks
[20:30] <natefinch> juju status
[20:30] <natefinch> hehh
[20:32] <natefinch> man I hate that we have environments.yaml and the jenvs
[20:33] <natefinch> I always go edit the environments.yaml first and wonder why it doesn't do anything
[20:37] <natefinch> sinzui: I can't make tools-metadata-url work.  I put it in the correct jenv, but I still get no upgrades available
[20:38] <sinzui> natefinch, This is what I have for aws: http://pastebin.ubuntu.com/7278665/
[20:40] <natefinch> sinzui: maybe the problem is that I changed it after I bootstrapped
[20:40] <sinzui> I already reported that bug :)
[20:40] <sinzui> natefinch, I think it cannot be changed if it was ever set
[20:40] <sinzui> But when not set, you can set it once
[20:41] <natefinch> sinzui: is there more to setting it than just editing the jenv?
[20:41] <sinzui> natefinch, I would prefer to run the tests by bootstrapping with the released stream,  then change tools-metadata-url to use the testing stream
[20:42] <sinzui> natefinch, I used juju set-env tools-metadata-url=https://juju-dist.s3.amazonaws.com/testing/tools
[20:42] <sinzui> It works for my joyent env which didn't have that key set
[20:43] <natefinch> sinzui: ahh, that worked
[20:44] <sinzui> natefinch, oh was that key set in the env before?
[20:44] <sinzui> I want to update the bug with your experience
[20:45] <natefinch> sinzui: no, it wasn't set before
[20:45] <natefinch> sinzui: I just thought I could edit the jenv directly, but that doesn't seem to work
[20:46] <sinzui> the jenv is just the pre-state used to bootstrap the env.
[20:47] <natefinch> sinzui: I thought that was the environments.yaml :/
[20:47] <sinzui> I think there is a bug reported asking that juju warn when the jenv doesn't match the env
[20:47] <natefinch> sinzui: I guess that's the pre-pre-state
[20:47] <sinzui> :)
[20:50] <natefinch> sinzui: anyway, my upgrade worked fine
[20:51] <sinzui> natefinch, looky http://ec2-54-84-137-170.compute-1.amazonaws.com:8080/job/hp-upgrade/1090/console
[20:51] <sinzui> That just happened in CI
[20:51] <sinzui> What are the latest revs?
[20:52] <sinzui> joyent is testing upgrade now
[20:53] <natefinch> sorry, not sure what you're asking about latest revs
[20:53] <sinzui> joyent just passed
[20:55] <sinzui> natefinch, r2655 works
[20:56] <sinzui> dimitern's branch doesn't look related, but it has a positive effect
[20:57] <sinzui> natefinch, local just passed
[20:57] <sinzui> azure and aws are testing now
[20:57] <sinzui> and you effectively did the aws test this hour
[20:58] <natefinch> sinzui: yeah, that's cool
[20:58] <natefinch> man..... I really don't get how launchpad is organized.  How do I just get a list of commits to trunk? It shouldn't be that hard to find
[20:59] <natefinch> ahh, I see..  I can't click on trunk, because that's a "Series"
[20:59] <sinzui> I like the qbzr extension locally
[20:59] <sinzui> Lp lists the last 10 commits to the branch
[20:59] <natefinch> sinzui: huh, never occurred to me
[21:00] <sinzui> the branch is owned by gobot
[21:00] <sinzui> https://code.launchpad.net/~go-bot/juju-core/trunk
[21:00] <sinzui> I know that since I need to explicitly be that bot to tag the branch.
[21:00] <natefinch> sinzui: right
[21:01] <natefinch> sinzui: Andrew made a commit this morning that looks like it might have been more likely to fix things.  At least it mentioned upgrade changes.
[21:01] <natefinch> sinzui: 2654
[21:02] <sinzui> I think so too, reading the log, but the hp, joyent, and local upgrade tests failed with that specific rev.
[21:03] <natefinch> sinzui: weird
[21:04] <natefinch> sinzui: well, EOD for me regardless.  Glad it seems to be upgrading now, whatever the reason
[21:04] <sinzui> Have a nice weekend natefinch
[21:05] <natefinch> sinzui: you too