[03:15] <mattyw> morning all
[03:44] <anastasiamac> mattyw: o/
[03:46] <mattyw> anastasiamac, \o
[03:49] <anastasiamac> mattyw: ur morning is my afternoon :D but morning to u nonetheless!
[03:50] <mattyw> anastasiamac, http://www.total-knowledge.com/~ilya/mips/ugt.html
[03:51] <mattyw> anastasiamac, it's a good point - you're still active in the channel on a sunday afternoon
[03:52] <anastasiamac> mattyw: monday afternoon :D
[03:52] <anastasiamac> mattyw: m k with ugt but
[03:52] <anastasiamac> mattyw: u didn't ugt
[03:54] <mattyw> anastasiamac, I thought you were in south america?
[03:54] <anastasiamac> mattyw: ug is shorter :D for both start (*universal greeting*) and finish (*universal goodbye*)
[03:54] <mattyw> anastasiamac, I'm now starting to think that was wrong
[03:55] <anastasiamac> mattyw: m in Australia
[03:55] <mattyw> anastasiamac, monday afternoon makes much more sense to me now :)
[03:55] <mattyw> anastasiamac, I'm in Malaysia at the moment - but only for the next month - then I'm back in the UK
[03:56] <anastasiamac> mattyw: nice place to spend coldest month of the year :D
[03:58] <mattyw> anastasiamac, it's tough working from home ;)
[05:43] <axw> mattyw: jealous of all the tasty food you're likely eating :(
[05:44] <mattyw> axw, The best part is I've still managed to lose weight - it's win win!
[05:44] <axw> mattyw: hah, winner :)
[05:50] <axw> anastasiamac: would you mind if we rename the "storage" API to "storagemanager"? there will be a "storageworker" (possibly "storageprovisioner") API, and the names are likely going to get muddled
[05:50] <anastasiamac> axw: +1 to rename but r u sure about "manager"?
[05:51] <axw> anastasiamac: I'm not fussed on the name, just picked that to go with "usermanager", "environmentmanager", "imagemanager", etc.
[05:52] <axw> anastasiamac: there's also metricsmanager which doesn't fit the same pattern, and "diskmanager" which should be renamed anyway
[05:52] <axw> diskmanager's job is now just to publish info on block devices
[05:54] <anastasiamac> axw: what would "manager" be primarily responsible for? crud ops?
[05:54] <axw> anastasiamac: yep, I think so
[05:54] <axw> anastasiamac: the user's view into that subsystem
[05:55] <anastasiamac> axw: k. lets rename it to "storagemanager" to align with majority of managers :D
[05:55] <axw> okey dokey
[05:55] <anastasiamac> axw: hopefully, all the "managers" that do not do crud will be renamed :D
[05:56] <axw> anastasiamac: that's difficult, since they exist in released code
[05:56] <axw> well, the metrics one
[05:56] <axw> the disk one is easy
[05:56] <anastasiamac> axw: my "hopefully" is not prescriptive :)
[09:32] <jam> dimitern: if you see william around, poke him for me. I'll try to keep an eye out.
[09:33] <jam> fwereade: apparently invoking your name was enough to wake you up (I just tried to ping your 30s ago :)
[09:33] <dimitern> jam, sure
[09:33] <fwereade> jam, haha, jolly good
[09:33] <fwereade> jam, sorry, my sleep got a bit confused over the flights home
[09:33] <fwereade> jam, I *think* I slept more than usual in 24h but it's a bit hard to follow
[09:34] <fwereade> jam, ah yes and my calendar is still on za time, so I thought we were starting at 11
[09:35] <dimitern> voidspace, hey old chap ;)
[09:35] <jam> fwereade: I was just about to grab a coffee and a snack, but I'll be back on the hangout in a couple mins
[09:35] <fwereade> jam, cool, will be there in a mo
[09:36] <voidspace> dimitern: morning
[09:36] <dimitern> voidspace, morning!
[09:36] <dimitern> voidspace, even though I'm officially off today, I'll be joining the standup to sync up
[09:37] <voidspace> dimitern: welcome back to Europe
[09:37] <voidspace> dimitern: ah, ok
[09:37] <voidspace> dimitern: cool
[09:37] <voidspace> dimitern: I'm grabbing coffee, brb
[09:37] <dimitern> voidspace, ok
[09:39] <dimitern> how's the expert on juju metadata generate-tools?
[09:39] <dimitern> s/how's/who's/
[09:39] <fwereade> dimitern, wallyworld
[09:42] <dimitern> fwereade, yeah, so since he's off today I guess I need the next guy who knows his way around there
[09:42] <fwereade> dimitern, is this the "it generates index not index2" thing or something else?
[09:43] <fwereade> dimitern, I might suggest someone from curtis' team, because they're the ones who have to do most of the generating in anger
[09:43] <dimitern> fwereade, Muntaner here has issues bootstrapping a private openstack cloud due to not finding the juju tools - he had issues before with images, but that's resolved now and bootstrap launches an instance, but can't find the tools
[09:44] <Muntaner> logs -> http://paste.ubuntu.com/10139639/
[09:44] <dimitern> fwereade, ok, so just to summarize it for you - generate-tools did create what's needed in the dir, but the tarballs themselves are missing - how should you get them for a stable release?
[09:46] <dimitern> guys, I need a review on these PRs - http://reviews.vapour.ws/r/890/ (main one for 1.21), http://reviews.vapour.ws/r/891/ (foreport for 1.22 of the same fix), http://reviews.vapour.ws/r/892/ (foreport for 1.23)
[09:46] <dimitern> these fix bug 1418433
[09:46] <mup> Bug #1418433: unit ports not populated by API megawatcher <api> <regression> <juju-core:In Progress by dimitern> <juju-core 1.21:In Progress by dimitern> <juju-core 1.22:In Progress by dimitern> <juju-gui:Triaged> <https://launchpad.net/bugs/1418433>
[09:47] <fwereade> dimitern, hmm, I had expected that for a stable release you'd just want to mirror streams.c.c?
[09:47] <fwereade> Muntaner, ^^
[09:47] <Muntaner> hi fwereade :)
[09:47] <fwereade> Muntaner, o/
[09:47] <dimitern> fwereade, that sgtm - so basically download all of it in a local dir and run validate-tools against it?
[09:48] <Muntaner> fwereade: going a bit crazy with juju + private openstack cloud
[09:48] <fwereade> Muntaner, dimitern: it seemed like the lowest-friction thing to do was just to say "mirror streams.canonical.com somewhere your private cloud can see it"
[09:49] <fwereade> Muntaner, dimitern: there will surely be cases where that's not good enough, eg if you're testing hotfixes or proposed releases or whatever
[09:49] <dimitern> Muntaner, or just put juju-1.21.*.tgz files from http://reviews.vapour.ws/r/892/
[09:50] <dimitern> Muntaner, or just put juju-1.21.*.tgz files from http://streams.canonical.com/juju/tools/releases/ in your local metadata dir at the same path
[09:50] <dimitern> sorry, ignore the first url :)
[09:50] <Muntaner> dimitern: it's fine :)
[09:50] <Muntaner> I'll try and give feedback now
[09:50] <dimitern> Muntaner, cheers
[09:50] <fwereade> Muntaner, dimitern: but for the "I just want stable tools" case it's meant to work -- and all the signatures should still match etc so you can still have some confidence in what you're running even though it's not directly from the official source
[09:51] <fwereade> Muntaner, dimitern: it would be best of all to use a tools-metadata-url pointing to a local server, rather than keeping it in a local dir, I think
[09:52] <Muntaner> fwereade: should I do this in environments.yaml?
[09:52] <Muntaner> what line should I add?
[09:52] <dimitern> fwereade, Muntaner, that's if you are willing to do a full mirror of the /juju/tools/ dir from streams.c.c and run a local webserver for them; alternatively you can just generate the tools metadata and copy the tools tarballs in there
[09:53] <dimitern> Muntaner, or, maybe even simpler
[09:54] <fwereade> Muntaner, have you seen https://juju.ubuntu.com/docs/howto-privatecloud.html#deploying-private-clouds ?
[09:54] <fwereade> Muntaner, "For tools, it is often easiest to just mirror https://streams.canonical.com/tools."
[09:54] <dimitern> Muntaner, try first of all to specify tools-metadata-url: https://streams.canonical.com/juju/tools in envs.yaml and then bootstrap with --metadata-source <that dir you used earlier>
[09:54] <Muntaner> yes fwereade, but I was following the troubleshooting suggestions by dimitern :)
[09:55] <dimitern> Muntaner, the doc is the correct source, but it *might* be a bit out of date, if it doesn't work for you as described there we need to file a docs bug to fix it
[09:57] <fwereade> Muntaner, so just to be clear: assuming you don't have outside internet access the first thing to try is to mirror streams.c.c on some accessible webserver, and set a suitable value for tools-metadata-url in environments.yaml
[09:57] <fwereade> Muntaner, is that failing for you?
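As a config sketch of the mirror approach fwereade describes (the environment name and mirror URL below are placeholders, not taken from the log): mirror https://streams.canonical.com/juju/tools to a webserver the cloud instances can reach, then point environments.yaml at it:

```yaml
# environments.yaml (fragment) -- hypothetical environment name and mirror URL
environments:
  my-openstack:
    type: openstack
    # webserver hosting a mirror of streams.canonical.com/juju/tools
    tools-metadata-url: http://10.0.0.42:8080/juju/tools
```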
[09:58] <Muntaner> so, my environments.yaml now is: http://paste.ubuntu.com/10139920/
[09:58] <Muntaner> gonna run juju bootstrap --metadata-source /home/mike/juju-tools --debug
[09:59] <dimitern> Muntaner, ok, give it a try
[09:59] <fwereade> Muntaner, I'm not sure you want both of those?
[09:59] <jam> where oh where has my fwereade gone, where, oh where can he be... :)
[09:59] <dimitern> fwereade, he needs the metadata for the images - it's a local openstack
[09:59] <Muntaner> juju created the vm on my openstack...
[09:59] <Muntaner> it is running apt-stuff on it
[10:00] <Muntaner> (communicating via floating-ip)
[10:00] <Muntaner> installing packages...
[10:00] <Muntaner> error QQ
[10:01] <Muntaner> dimitern: fwereade -> http://paste.ubuntu.com/10139949/
[10:04] <dimitern> Muntaner, ok, try tools-metadata-url: https://streams.canonical.com/juju and re-bootstrap
[10:04] <Muntaner> as far as I can understand... he gets the tools, but fails something else later
[10:05] <dimitern> Muntaner, ah, no wait
[10:05] <Muntaner> dimitern: already got that in environments.yaml :)
[10:05] <dimitern> Muntaner, it actually seems to fail finding images
[10:06] <dimitern> gtg for now, will come back to you later Muntaner
[10:06] <Muntaner> maybe I should set some other urls in environments.yaml? I remember some tutorial suggesting to do that...
[10:09] <Muntaner> QQ
[10:15] <Muntaner> fwereade: in fact, the file http://cloud-images.ubuntu.com/releases/streams/v1/index.json exists, why can't juju read it?
[10:19] <Muntaner> fwereade: he also is searching for http://cloud-images.ubuntu.com/releases/streams/v1/mirrors.json, which does not exist
[10:46] <dimitern> voidspace, btw when you have some time, please have a look at these -  http://reviews.vapour.ws/r/890/ (main one for 1.21), http://reviews.vapour.ws/r/891/ (foreport for 1.22 of the same fix), http://reviews.vapour.ws/r/892/ (foreport for 1.23)
[10:48] <voidspace> dimitern: ok
[10:48] <fwereade> Muntaner, sorry, I'm in a meeting and failing to distribute my attention properly
[10:49] <Muntaner> fwereade: no problems :)
[11:22] <dimitern> dooferlad, hey hey :)
[11:23] <dooferlad> dimitern: hi :-)
[11:23] <dimitern> voidspace, meet the new and improved dooferlad !
[11:23] <dimitern> dooferlad, thanks for doing this! :)
[11:24] <dimitern> mgz, hey! o/
[11:24] <mgz> dimitern: hey!
[11:25] <dimitern> mgz, how do you like go-goose? :)
[11:25] <mgz> it's a cute name :)
[11:26] <mgz> dimitern: do you need help migrating the code?
[11:26] <dimitern> mgz, yeah :) it became cuter as a side-effect :)
[11:27] <dimitern> mgz, I could use some tips to do it faster I guess
[11:27] <mgz> dimitern: if you want, I can do the import and give you the git branch
[11:27] <mgz> if we just want the existing history as per juju-core migration
[11:27] <dimitern> mgz, do you like the icon I came up with ? (it took 15m during lunch, but hey - better have one than a blocky generic GH icon)
[11:28] <mgz> yeah, the icon was one of my fav bits :)
[11:28] <dimitern> mgz, :D
[11:29] <dimitern> mgz, I'd like to keep the history, but we need to migrate the code into a "v1" branch and make that the default, while leaving master alone and putting a "error.go" file that panics in case someone tries to use it (like mgo has)
[11:30] <mgz> dimitern: see lp:~gz/+junk/juju_git_import for instance
[11:30] <mgz> I can do an imports rewrite and then we fiddle with the error.go bit after?
[11:31] <dimitern> mgz, awesome, sgtm
[11:31] <mgz> okay, I'
[11:31] <dimitern> mgz, do you think we can set up gated merges with the bot relatively easily as well?
[11:31] <mgz> ll adapt that, do the import, then share the branch with you and we can work out what else we needed
[11:31] <dimitern> mgz, +1 ta!
[11:31] <mgz> dimitern: that's a little more work, but I have some of the setup for it already
[11:32] <mgz> you'll want to add the juju-bot to the team
[11:32] <dimitern> mgz, great, I'll do that then
[11:32] <dimitern> mgz, and ask all the hackers to make themselves public for the bot to see them
[11:33] <dimitern> mgz, the bot setup can be done in the upcoming days, it's fine - no need to rush it
[11:35] <voidspace> dimitern: hehe, nice
[11:35] <voidspace> dooferlad: welcome :-)
[11:35] <dooferlad> voidspace: hi!
[11:36] <dimitern> mgz, jujubot is invited in its own Bots team with write access, like for juju/juju
[11:36] <mgz> dimitern: so, our github path is going to be github.com/go-goose/goose but the imports should be gopkg.in/goose.v1 ?
[11:37] <mgz> dimitern: ta
[11:37] <dimitern> mgz, yes, exactly
[11:41] <dimitern> mgz, for an easy way to update all imports I used the awesome govers tool rogpeppe1 did
[11:43] <rogpeppe1> mgz, dimitern: govers is now at github.com/rogpeppe/govers FWIW
[11:44] <dimitern> rogpeppe1, ah, nice - thanks!
[11:45] <mgz> dimitern: github.com/bz2/goose is a first pass import, script for that at lp:~gz/+junk/goose_git_import
[11:47] <mgz> we want to see if there's anything obviously wrong with that, then make any further changes in a new rev and push to the go-goose project - or redo if needed
[11:47] <dimitern> mgz, sgtm, looking at those now
[11:48] <rogpeppe1> mgz: govers -m github.com/goose gopkg.in/goose.v1
[11:48] <mgz> hm, seems I didn't get the import renaming somehow
[11:48] <rogpeppe1> mgz: also has the advantage that it makes sure that all dependencies are using the same import too
[11:48] <mgz> I'll use rog's thing
[11:48] <rogpeppe1> oops
[11:49] <rogpeppe1> govers -m launchpad.net/goose gopkg.in/goose.v1
[11:50] <dimitern> mgz, I've created v1 branch from master for go-goose/goose
[11:50] <dimitern> rogpeppe1, ta!
[11:51] <dimitern> mgz, I think we need it in place before trying to change imports
[11:53] <mgz> well, it's a bit chicken and egg
[11:53] <mgz> just sedding the import paths like I did should make it possible to test locally first
[11:53] <mgz> I'm just confused by the history atm, my commit was wrong...
[11:54] <mgz> missed the -a on commit, rerunning
[11:54] <dimitern> mgz, ah, ok
[11:55] <mgz> okay, pushed up new revisions, try again
[11:55] <mgz> that's better
[11:55] <dimitern> mgz, incidentally, do you have permissions to add dooferlad to juju hackers team?
[11:55] <mgz> sure
[11:55] <dimitern> mgz, ta!
[11:55] <perrito667> wheee, sosie took gaspar out
[11:55] <perrito667> sorry, wrong channel
[11:56] <dimitern> perrito667, was that your password ? :D
[11:56] <mgz> dimitern: he's already a member
[11:57] <dimitern> mgz, well it doesn't show on his lp page and he can't triage bugs, so I guess something else is wrong
[11:57] <mgz> ah, not github
[11:57] <dimitern> mgz, yep, LP
[11:57] <mgz> you mean adding to the ~juju team
[11:58] <dimitern> mgz, yes, sorry I should've been more clear :)
[11:58] <dimitern> and also I should be an admin there as well hmmm
[11:58] <mgz> I'm not an admin on that one. can bug curtis later or one of the other leads.
[11:58] <dimitern> jam, ping
[11:58] <mgz> dimitern: you should now
[11:59] <dimitern> mgz, yeah, I'll ask john to add me, thanks
[11:59] <perrito667> could anybody review http://reviews.vapour.ws/r/889/ ?
[11:59] <voidspace> dimitern: couldn't we detect older clients and report port ranges as individual ports?
[11:59] <voidspace> dimitern: rather than dropping the ranges
[12:00] <dimitern> perrito667, LGTM
[12:00] <dimitern> voidspace, we do report ranges that look like single ports into the Ports slice
[12:00] <dimitern> voidspace, and that imo is backward-compatible with older clients
[12:01] <Muntaner> dimitern: still fighting with images metadata
[12:01] <Muntaner> can you give me some advice?
[12:04] <dimitern> Muntaner, have you tried following that section about generating images metadata for private clouds in that docs page?
[12:05] <dimitern> fwereade, ping
[12:05] <Muntaner> yes, totally
[12:05] <dimitern> Muntaner, so how far did you manage to go down that path?
[12:07] <Muntaner> dimitern: brb in 2 hours
[12:07] <dimitern> Muntaner, sure
[12:07] <Muntaner> bye :)
[12:09] <jam> dimitern: pong
[12:09] <dimitern> jam, hey, I've noticed I'm not an admin on LP ~juju so I can't add james to it, can you make me an admin please?
[12:10] <jam> dimitern: just did both of those things
[12:11] <jam> fwereade: I'm back around
[12:12] <dimitern> jam, thanks!
[12:12] <jam> dimitern: you may want to independently confirm it :)
[12:13] <dimitern> jam, just did
[12:20] <dimitern> fwereade, all sorted, please ignore last ping
[12:20] <dimitern> voidspace, ah, sorry - now I got what you're asking
[12:22] <dimitern> voidspace, and that seems fair, rather than dropping ports add them individually, however that potentially means blowing up the response size with large ranges for the sake of older clients, which will have a few individual ports opened anyway
[12:49] <voidspace> dimitern: ah, ok - so you don't need to "detect older clients" as that's what compatiblePorts is for anyway
[12:49] <voidspace> dimitern: but if the older client wants to know open ports then shouldn't we tell them?
[12:49] <dimitern> voidspace, yes, the older clients will expect Ports to be populated
[12:49] <voidspace> my chat window isn't scrolling dammit
[12:50] <voidspace> that's why I missed your replies
[12:50] <dimitern> voidspace, ah, sorry :)
[12:50] <voidspace> dimitern: not your fault...
[12:50] <voidspace> dimitern: but if we have a port range open then we won't tell an older client about it
[12:50] <voidspace> dimitern: is it alright to just not tell them?
[12:50] <voidspace> rest of the PR looks fine including tests
[12:51] <dimitern> voidspace, older clients won't use ranges, so if they did open 50, 51, and 52.. hmm I have to check this
[12:51] <jam> voidspace: you're thinking to map the range into each entry for compat ?
[12:51] <voidspace> jam: if 50:52 is open report 50, 51, 52
[12:51] <voidspace> jam: i.e. tell the truth...
[12:51] <jam> dimitern: I don't know that we need to collapse individual requests into a range, but we should probably return the individual items if a new client opened a range
[12:51] <jam> voidspace: right
[12:51] <dimitern> I think the current behavior is to collapse consecutive ports into a range, so that'll become 50-52
[12:52] <dimitern> jam, yeah, ranges are always there
[12:52] <jam> voidspace: dimitern: I don't particularly care if we collapse or not, it depends heavily on the close behavior
[12:52] <jam> IIRC there was a discussion as to whether you have to exact match in a  close request
[12:52] <dimitern> jam, however, if a newer client opens 5000-6000/tcp and if we list each port individually for the sake of older clients - guess how big the response will be :)
[12:52] <jam> so if a client does Open(50), Open(51), Open(52), then Close(50) must also work
[12:52] <jam> whether we represent that as 3 open ports or a range of 50-52
[12:53] <jam> dimitern: no bigger than it is today
[12:54] <dimitern> jam, but it's much easier to open a huge range with one command today
[12:54] <voidspace> how many bytes is a serialised port struct?
[12:54] <jam> dimitern: again, this is about compatibility
[12:54] <jam> we deal with it
[12:54] <voidspace> a few bytes from the look of it
[12:54] <jam> new clients won't call the old method
[12:54] <dimitern> voidspace, {{Number: 123, Protocol: "tcp"}} x number of ports
[12:54] <voidspace> so a thousand is a few kilobytes
[12:57] <jam> dimitern: voidspace: so yeah, unless there is a really strong problem, I'd just go with "expand port ranges for the compatibility code"
[12:57] <jam> *especially* if we are auto upgrading
[12:57] <jam> if you did
[12:57] <jam> Open(50), Open(51)
[12:58] <jam> then you better get 50, 51 from "list-open-ports"
[12:58] <voidspace> dimitern: I'm failing to trigger the error conditions to test the charmrevisionupdater change
[12:58] <voidspace> dimitern: I think we're using a mock api (haven't confirmed that) so setting the env variables fails to cause an error
[12:58] <dimitern> voidspace, I'd just mock the updateVersions call to trigger a failure in the test and check the worker exits
[12:59] <voidspace> dimitern: ok will do, thanks
[13:00] <jam> voidspace: dimitern: does my last point make sense? I could sort-of be convinced that if you did a request for a Open 50:52, then maybe the compatible list-ports wouldn't return it (maybe), but *definitely* if we are automatically turning a request for 2 ports to be opened into a range, then we *must* turn a range back into individual items on a list response
[13:00] <jam> fwereade: are you around yet?
[13:00] <dimitern> jam, voidspace, ok, I'll do that instead - return all individual ports in ranges
[13:00] <voidspace> cool
[13:00] <voidspace> going on lunch
[13:00] <fwereade> jam, heyhey, yes I am
[13:01] <jam> fwereade: I should really go help my son for a sec, but I realized I've been sitting in the room for 30 min and missed the earlier ping. I still want to go over the diff with you, how about in 10 min?
[13:01] <fwereade> jam, sgtm
[13:01] <dimitern> jam, if you do open(50), open(51), open(52), then close(51), you'll see 50 and 52 as ports in both slices, but if you do open(50-52) you can't close(51)
[13:02] <jam> dimitern: then why represent it as 50-52 if you can't close it? Is it just a display thing that aggregates the range?
[13:02] <dimitern> at least I think you can't - must check the code
[13:02] <jam> dimitern: I'd really rather not have list return things that you can't then use for close
[13:02] <dimitern> jam, because we're representing it as a range, not individual ports
[13:02] <jam> because I have the feeling that given what you've said
[13:03] <jam> if you do Open(50), 51, 52. and then you do list and get 50-52, but you can't close(50:52) that's bad
[13:03] <dimitern> jam, opened-ports hook tool will return ranges as specified in open-port
[13:03] <jam> dimitern: do you understand my concern?
[13:03] <jam> open + opened + close should all work nicely together
[13:03] <dimitern> jam, however, if you called open-port 3 times with consecutive numbers, opened-ports will return a collapsed 50-52 range
[13:03] <jam> changing under the hood is fine, as long as close can still interoperate
[13:04] <jam> open 50, open 51, open 52, => you must be able to close 51
[13:04] <jam> I think we agree on that
[13:04] <jam> opened => 50:52, you must be able to close 50:52
[13:04] <jam> does that also sound sane ?
[13:04] <dimitern> jam, yes, and that's how it works for both cases you mention
[13:05] <jam> dimitern: but open 50, open 51, open 52, then opened returns 50:52, but you can't close 50:52
[13:05] <jam> because it doesn't match an open() request
[13:05] <dimitern> jam, however the mix of the two - open-port 50-52/tcp then close-port 51/tcp won't work, but opened-ports I believe will return collapsed ranges for the first case (open 50,51,52 individually)
[13:06] <dimitern> jam, I have to double-check that case
[13:06] <jam> dimitern: if I do open(50), open(51), open(52) can I close(50-52) /
[13:06] <jam> ?
[13:06] <dimitern> anyway, I *really* need to go now to catch the bank before it closes
[13:06] <jam> I *must* be able to close what I open
[13:06] <dimitern> jam, I'll verify this as well
[13:06] <jam> and I *must* be able to close what opened returns
[13:06] <jam> as long as those two hold true, I don't have a huge care for collapsing vs not
[13:08] <dimitern> jam, I agree, and will double check all these cases, then get back to you (tomorrow perhaps)
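The invariants being hashed out above (open/opened/close must interoperate, and ranges may be expanded back into individual ports for older clients) can be sketched roughly like this. `collapse` and `expand` are hypothetical helpers for illustration only, using bare port numbers rather than juju's {Number, Protocol} entries, and are not juju's actual code:

```python
def collapse(ports):
    """Collapse a list of port numbers into inclusive (lo, hi) ranges."""
    ranges = []
    for p in sorted(ports):
        if ranges and p == ranges[-1][1] + 1:
            # consecutive port: extend the last range
            ranges[-1] = (ranges[-1][0], p)
        else:
            ranges.append((p, p))
    return ranges

def expand(ranges):
    """Expand (lo, hi) ranges back into individual ports for older clients."""
    return [p for lo, hi in ranges for p in range(lo, hi + 1)]

# open-port 50, 51, 52 individually -> opened-ports may report one range
assert collapse([50, 51, 52]) == [(50, 52)]
# expanding that range restores the individual ports an old client expects
assert expand([(50, 52)]) == [50, 51, 52]
```

Round-tripping through `expand(collapse(...))` is what keeps "open 50, open 51 then list" honest for old clients, at the cost jam and dimitern discuss: a large range like 5000-6000 expands to a correspondingly large response.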
[13:08] <mgz> dimitern: is there anything else we want to do on the goose repo, before moving it to go-goose? the tests pass locally, I didn't try using it as a replacement with juju-core though
[13:08] <dimitern> mgz, well, there will be a few bits we can add post-migration, like CONTRIBUTING.md, README.md, stuff like that
[13:09]  * dimitern goes out, bbl fwiw
[13:52] <jam> fwereade: so now I have to do dinner, etc. Are you going to be around later? If you're EOD, then maybe we can just pick up tomorrow?
[13:59] <fwereade> jam, I expect I'll be around later, but likely to be off and on slightly at random once laura's home (not so long from now...)
[14:41] <Muntaner> hi devs
[14:41] <Muntaner> hi dimitern, fwereade
[14:50] <fwereade> Muntaner, heyhey
[14:51] <fwereade> Muntaner, I should be able to concentrate for a little bit
[14:51] <fwereade> Muntaner, what's the latest?
[14:51] <Muntaner> fwereade: I have fresh logs for you
[14:51] <Muntaner> I "downgraded" the iso of cloud server to try a workaround
[14:51] <Muntaner> and get these logs
[14:52] <Muntaner> fwereade:
[14:52] <Muntaner> http://paste.ubuntu.com/10143346/
[14:53] <Muntaner> envs.yaml -> http://paste.ubuntu.com/10143360/
[14:54] <Muntaner> fwereade: the VM is successfully created on the openstack
[14:54] <Muntaner> and does all the apt-get stuff
[14:54] <Muntaner> as far as I can see in the logs, it's able to run MongoDB
[14:54] <Muntaner> but then... that odd error
[14:55] <Muntaner> need more background, fwereade ?
[14:59] <Muntaner> fwereade: hard to fix, right? :)
[15:06] <fwereade> Muntaner, yeah, I'm scratching my head and hunting through code
[15:06] <Muntaner> fwereade: is this a mistake in my environment or is it actually juju failing?
[15:09] <fwereade> Muntaner, I can't quite figure that out yet, still digging
[15:10] <fwereade> Muntaner, I'm a little suspicious about your image-metadata-url (and tools-), though -- I thought you were in an isolated environment?
[15:10] <Muntaner> fwereade: the thing I can't figure out is: how does openstack know which image it should launch in the VM if fetching the metadata actually fails?
[15:11] <Muntaner> well fwereade - I'm with my laptop, connected to the openstack server via Lan (10.0.0.0/24)
[15:11] <Muntaner> both the machines can naturally go on the internet (the VM created by juju can actually use apt-get stuff)
[15:11] <Muntaner> I can access VMs via floating IPs, I can ping, ssh them
[15:11] <Muntaner> and naturally, also the server is contactable
[15:15] <fwereade> Muntaner, yeah, that's what's weird -- I can't figure out what you would have started (ie how you got far enough to fail that way)
[15:16] <fwereade> Muntaner, ah, ok, I think I see some of it
[15:17] <fwereade> Muntaner, with the --metadata-source param you shouldn't need those urls in environments.yaml
[15:18] <fwereade> Muntaner, what happens if you just remove them?
[15:19] <Muntaner> you mean - them in envs.yaml?
[15:20] <fwereade> Muntaner, yes
[15:20] <Muntaner> fwereade: trying it
[15:20] <fwereade> Muntaner, I *think* that we're automatically uploading what you need to the right place, but it's not even looking there because it trusts the fields in environments.yaml
[15:22] <dimitern> sinzui, ping
[15:22] <fwereade> Muntaner, if that works, thank you for finding a bug -- we should at least fail earlier and more clearly if there's no way to sanely handle both those settings
[15:22] <sinzui> hi dimitern
[15:22] <Muntaner> fwereade: not working :(
[15:22] <Muntaner> seems quite the same log
[15:22] <dimitern> sinzui, hey, so re bug 1416928
[15:22] <mup> Bug #1416928: juju agent using lxcbr0 address as apiaddress instead of juju-br0 breaks agents <api> <lxc> <network> <juju-core:Triaged> <juju-core 1.21:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1416928>
[15:22] <Muntaner> fwereade: http://paste.ubuntu.com/10143716/
[15:23] <Muntaner> fwereade: envs.yaml -> http://paste.ubuntu.com/10143723/
[15:23] <dimitern> sinzui, thanks for providing more info
[15:24] <dimitern> sinzui, however, I think we shouldn't block 1.21.2 for this bug, I had a chat with xwwt and alexisb in cape town that we can possibly do a subsequent point release with this
[15:24] <alexisb> sinzui, that is correct
[15:25] <dimitern> sinzui, but I'm interested to hear your thoughts as well
[15:25] <xwwt> dimitern, sinzui:  We should not block 1.21.2 on this one.  We can point release later.
[15:25] <fwereade> Muntaner, hmm, I need to dig a bit further, but glad to see you can still launch an instance without those fields
[15:25] <Muntaner> yep fwereade, but I had to add upload-tools, since I got another error without it
[15:25] <Muntaner> I'm somewhat confused
[15:25] <sinzui> dimitern, I am concerned about multiple releases. But I know stakeholders will appreciate your fix for bug 1416134
[15:25] <mup> Bug #1416134: Unable to override network-bridge if container type is kvm (local provider) <cloud-installer> <config> <lxc> <network> <regression> <cloud-installer:Fix Committed by adam-stokes> <juju-core:Fix Committed by dimitern> <juju-core 1.21:Fix Committed by dimitern> <juju-core 1.22:Fix
[15:25] <mup> Released by dimitern> <https://launchpad.net/bugs/1416134>
[15:26] <sinzui> dooferlad, is your fix for bug 1417617 queued for 1.21?
[15:26] <mup> Bug #1417617: apt-proxy can be incorrectly set when the fallback from http-proxy is used <apt> <network> <proxy> <juju-core:Fix Committed by dooferlad> <juju-core 1.21:In Progress by dooferlad> <juju-core 1.22:In Progress by dooferlad> <https://launchpad.net/bugs/1417617>
[15:27] <dimitern> sinzui, yes, that fix among a few other as well - it's still worth releasing 1.21.2
[15:27] <dooferlad> sinzui: Hopefully an hour
[15:27] <dimitern> dooferlad, sweet!
[15:28] <sinzui> dimitern, I will give dooferlad an opportunity to get his fix merged and tested while I prepare for a 1.21.2 release to proposed
[15:28] <dimitern> thank you sinzui!
[15:29] <fwereade> Muntaner, ah, yes, makes sense
[15:29] <fwereade> Muntaner, can I see the directory structure for your metadata source please?
[15:29] <dimitern> which reminds me to finish the fix for bug 1418433
[15:29] <fwereade> Muntaner, I think I see what's failing
[15:29] <mup> Bug #1418433: unit ports not populated by API megawatcher <api> <regression> <juju-core:In Progress by dimitern> <juju-core 1.21:In Progress by dimitern> <juju-core 1.22:In Progress by dimitern> <juju-gui:Triaged> <https://launchpad.net/bugs/1418433>
[15:29] <Muntaner> fwereade: how do you want me to show you that?
[15:30] <Muntaner> well fwereade, I have just one folder generated
[15:30] <Muntaner> which is /home/mike/juju-tools/images/streams/v1/
[15:30] <Muntaner> into, I have two files
[15:30] <Muntaner> index.json
[15:30] <Muntaner> com.ubuntu.cloud:released:imagemetadata.json
[15:30] <fwereade> Muntaner, for future reference: `tree -d /home/mike/juju-tools/`
[15:31] <fwereade> Muntaner, can I see them both please?
[15:31] <Muntaner> fwereade: sorry for my noobness :)
[15:31] <fwereade> Muntaner, no worries :)
[15:31] <Muntaner> yes fwereade, paste incoming
[15:32] <Muntaner> fwereade: http://paste.ubuntu.com/10143826/
[15:32] <fwereade> Muntaner, looks like tree is not necessarily installed, but it's very handy
[15:32] <Muntaner> installing :)
[15:32] <Muntaner> fwereade: tree is -> http://paste.ubuntu.com/10143835/
[15:33] <fwereade> Muntaner, and because I wasn't thinking, if you just skip the -d, it'll give you files too
[15:34] <Muntaner> fwereade: -> http://paste.ubuntu.com/10143855/
[15:35] <Muntaner> commands used: juju metadata generate-image -a amd64 -i 662d6b37-2c1b-42c3-9bd7-6f1c3ab5af50 -r RegionOne -s trusty -d /home/mike/juju-tools -u http://10.0.0.230:5000/v2.0 -e OpenStack
[15:35] <Muntaner> juju metadata validate-images -d /home/mike/juju-tools
[15:38] <fwereade> Muntaner, yeah, that looks right, and it looks like we're not uploading it properly :-/
[15:39] <Muntaner> fwereade: could this be a network problem with my setup?
[15:39] <fwereade> Muntaner, can the cloud instances see your laptop? as a quick-and-dirty hack, you could serve that dir over http...
[15:39] <Muntaner> fwereade: yes, obviously they can
[15:40] <Muntaner> they can ssh, ping my laptop and - I think - all the devices in 10.0.0.0/24
[15:41] <fwereade> Muntaner, ok, you could try opening a new terminal and running `python -m SimpleHTTPServer` in your metadata source dir, and specifying that URL in your environments.yaml
[15:42] <Muntaner> socket.error: [Errno 98] Address already in use
[15:42] <bodie_> fwereade, looking for confirmation whether `action do` CLI arg values should be parsed as yaml -- I'm thinking yes
[15:43] <fwereade> bodie_, hmmmmm I don't think we should be parsing anything as yaml that's not explicitly marked as such
[15:43] <bodie_> juju action do mysql/0 sleep time=1000 "time is supposed to be a number"
[15:43] <bodie_> not the greatest ux
[15:43] <dimitern> voidspace, still around?
[15:44] <fwereade> bodie_, agreed, but if we're not careful "y" will parse to `true` and so on
[15:44] <bodie_> hmm, yeah
[15:45] <bodie_> fwereade, in that case the validation would reject it with a message; then the user could pass signal="y", right?
[15:46] <fwereade> Muntaner, `python -m SimpleHTTPServer <some-unused-port>`?
[15:46] <fwereade> bodie_, that is true
[15:46] <dimitern> frankban, ping
[15:46] <frankban> hi dimitern
[15:46] <fwereade> bodie_, ok, yes, I think you're right
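The gotcha fwereade raises above can be illustrated with a small config-style fragment. The key names here are hypothetical; the behavior depends on the parser implementing YAML 1.1 scalar resolution:

```yaml
# YAML 1.1 resolvers treat several unquoted scalars as booleans:
#   y, yes, on, true  -> true
#   n, no, off, false -> false
signal-unquoted: y      # a YAML 1.1 parser resolves this to boolean true
signal-quoted: "y"      # quoting keeps it as the literal string "y"
```

So an unquoted `signal=y` on the CLI would silently become `true`; as discussed above, validation can reject the wrong type and the user can re-run with `signal="y"`.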
[15:46] <Muntaner> Serving HTTP on 0.0.0.0 port 60000 ...
[15:46] <Muntaner> fwereade:
[15:47] <fwereade> Muntaner, ok, cool
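The quick-and-dirty hack above, serving the metadata directory over HTTP, can be sketched with the Python 3 stdlib equivalent of `SimpleHTTPServer` (the directory tree below is a throwaway stand-in for the real `/home/mike/juju-tools`; in practice running `python -m SimpleHTTPServer <port>` from the metadata source dir is all that's needed):

```python
import http.server
import os
import tempfile
import threading
import urllib.request
from functools import partial

# Stand-in for the real metadata tree (/home/mike/juju-tools in the chat).
metadata_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(metadata_dir, "images", "streams", "v1"))
with open(os.path.join(metadata_dir, "images", "streams", "v1", "index.json"), "w") as f:
    f.write('{"index": {}}')

# Bind port 0 to grab an unused port, avoiding "Address already in use".
handler = partial(http.server.SimpleHTTPRequestHandler, directory=metadata_dir)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A cloud instance would fetch this path via the configured image-metadata-url.
url = "http://127.0.0.1:%d/images/streams/v1/index.json" % port
body = urllib.request.urlopen(url).read().decode()
print(body)
server.shutdown()
```

Note the server is silent once it starts listening (the "nothing more is coming" below is normal); it only logs when a request arrives.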
[15:47] <dimitern> frankban, hey, thanks for the review on http://reviews.vapour.ws/r/891/ - can you please also review and approve http://reviews.vapour.ws/r/890/ (the original fix for 1.21) and http://reviews.vapour.ws/r/892/ (forward-port of the same fix for trunk, 1.23)
[15:47] <Muntaner> fwereade: nothing more is coming
[15:47] <Muntaner> is it hanging?
[15:47] <bodie_> fwereade, my reasoning is that with action-set it's more important for values to come back to the user exactly as they were set for the purpose of reporting accurately
[15:47] <Muntaner> never used this command
[15:47] <bodie_> thanks, I'll open a quick PR
[15:47] <fwereade> Muntaner, you should be able to specify image-metadata-url as http://your-laptops-ip:60000/images
[15:48] <Muntaner> ok fwereade
[15:48] <fwereade> Muntaner, and go back to streams.c.c for tools-metadata-url (sorry, I'd thought you had tools metadata generated locally)
[15:48] <fwereade> Muntaner, then cross your fingers and try again
[15:49] <fwereade> Muntaner, (but with tools-m-u you shouldn't need --upload-tools)
[15:49] <Muntaner> fwereade: what do you mean with "go back to streams.c.c." ?
[15:49] <fwereade> Muntaner, sorry, specify `tools-metadata-url: streams.canonical.com`
[15:49] <frankban> dimitern: sure
[15:49] <dimitern> frankban, thank you!
[15:49] <fwereade> Muntaner, you had that before and I erroneously told you to drop it
[15:50] <Muntaner> so fwereade
[15:50] <Muntaner> fwereade: http://paste.ubuntu.com/10144073/
[15:50] <Muntaner> does it look fine?
[15:52] <fwereade> Muntaner, I think so, maybe an https:// on streams.c.c :)
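Pulling the pieces of this exchange together, the working configuration would look roughly like the fragment below in environments.yaml. This is a sketch: the environment name, IP, and port are placeholders, not values confirmed in the chat.

```yaml
# Hypothetical environments.yaml fragment; adjust addresses to your setup.
environments:
  openstack:
    type: openstack
    auth-url: http://10.0.0.230:5000/v2.0
    # Serve locally generated image metadata from your workstation:
    image-metadata-url: http://<your-laptops-ip>:60000/images
    # Fall back to the public stream for tools metadata:
    tools-metadata-url: https://streams.canonical.com
```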
[15:52] <ericsnow> rogpeppe1: do you think there would be any benefit to our direct HTTP request code in juju core from the httprequest repo code?
[15:52] <Muntaner> ok did it
[15:53] <voidspace> dimitern: yes
[15:54] <Muntaner> fwereade: YOU'RE GREAT, WORKED! :D
[15:54] <Muntaner> 2015-02-09 15:53:35 INFO juju.cmd supercommand.go:329 command finished
[15:54] <fwereade> Muntaner, sweet!
[15:54] <dimitern> voidspace, ah, it's ok just fyi - I changed the fix for the ports bug as you suggested
[15:54] <fwereade> Muntaner, ok, so serving it from your laptop is not a production solution ;)
[15:55] <fwereade> Muntaner, you'll want something a bit less temporary there
[15:55] <voidspace> dimitern: I'll take a look
[15:55] <dimitern> voidspace, ta
[15:55] <Muntaner> fwereade: I'm just saying... have something to show to bosses :D
[15:56] <fwereade> Muntaner, yeah, I know that feeling :)
[15:56] <Muntaner> fwereade: now I'm trying to deploy juju gui
[15:56] <dimitern> Muntaner, hey, can I ask you a favor :) summarize what eventually worked as a comment on the bug you had originally
[15:56] <voidspace> dimitern: all the tests are for single port ranges
[15:57] <voidspace> dimitern: oh no they're no
[15:57] <dimitern> voidspace, except for one
[15:57] <voidspace> *not
[15:57] <dimitern> :)
[15:57] <voidspace> dimitern: yeah, I misread
[15:57] <voidspace> sorry
[15:57] <dimitern> np
[15:57] <Muntaner> dimitern: well, the only workaround that fixed the situation has been the Httpserver
[15:57] <fwereade> Muntaner, you'll almost certainly want to `deploy --to 0` there fwiw
[15:57] <Muntaner> dimitern: should I open a launchpad bug?
[15:58] <dimitern> Muntaner, yes, I'd really appreciate it if you do, and add comments about what you're trying to do and how you managed to resolve it
[15:59] <dimitern> Muntaner, others might find it useful, and we can do something about making this easier in the future
[15:59] <Muntaner> ok dimitern will do it
[15:59] <Muntaner> thanks 10000
[15:59] <rogpeppe1> ericsnow: depends how many http request kinds you've got
[15:59] <dimitern> Muntaner, np, I'm glad we're able to help you and also thanks for all your patience! ;)
[16:00] <ericsnow> rogpeppe1: basically 3: tools, charms, and backups (upload/download)
[16:00] <fwereade> Muntaner, I've asked ericsnow to follow up on this and figure out why it wasn't working with just metadata-source -- but I'm pretty sure you've found a real bug, for which, much thanks :)
[16:00] <rogpeppe1> ericsnow: in which case almost certainly not
[16:01] <ericsnow> rogpeppe1: k, thanks
[16:01] <voidspace> dimitern: LGTM
[16:02] <voidspace> dimitern: have you forward ported as well yet?
[16:02] <dimitern> voidspace, awesome! yes, I already asked frankban to have a look, but it's good to have another set of eyes - http://reviews.vapour.ws/r/891/ http://reviews.vapour.ws/r/892/
[16:03] <voidspace> dimitern: 892 isn't showing the inner loop yet
[16:04] <dimitern> voidspace, nope, I'm about to propose it
[16:04] <voidspace> ok
[16:06] <ericsnow> Muntaner: FYI, your troubles appear to be related to bug #1271744
[16:06] <mup> Bug #1271744: bootstrap on maas with --metadata-source fails <bootstrap> <maas> <maas-provider> <upload-tools> <juju-core:Triaged> <https://launchpad.net/bugs/1271744>
[16:06] <ericsnow> Muntaner: so I'll follow up there
[16:08] <ericsnow> Muntaner: would you mind verifying that that matches the situation you ran into?
[16:29] <natefinch> sinzui: you around?
[16:29] <sinzui> I am
[16:30] <natefinch> sinzui: alexis wanted me to work with you on blocking bugs... what's our current list?  I'm a little out of the loop.
[16:31] <sinzui> natefinch, https://launchpad.net/juju-core/+milestone/1.21.2
[16:31] <sinzui> natefinch, https://launchpad.net/juju-core/+milestone/1.22-beta3 has one extra bug that voidspace  is working on
[16:32] <natefinch> wwitzel3: you're working on this one, right? https://bugs.launchpad.net/juju-core/+bug/1417875
[16:32] <mup> Bug #1417875: ERROR juju.worker runner.go:219 exited "rsyslog": x509: certificate signed by unknown authority <canonical-bootstack> <logging> <regression> <juju-core:Triaged by wwitzel3> <juju-core 1.21:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1417875>
[16:34] <voidspace> natefinch: sinzui: so I believe that dimitern has already backported my fix to 1.22
[16:34] <voidspace> natefinch: sinzui: let me double check and mark that as fix committed if it's done
[16:35] <dooferlad> voidspace: Could you review http://reviews.vapour.ws/r/895/ please?
[16:36] <voidspace> dooferlad: will do
[16:36] <voidspace> natefinch: sinzui: hmm... looks like there's a slight error in the backport
[16:36] <wwitzel3> natefinch: yep, sorry, replied in wrong channel
[16:36] <Muntaner> ericsnow: dimitern fwereade sorry, was talking with my boss
[16:36] <Muntaner> just in time, lol
[16:37] <voidspace> dimitern: ping
[16:37] <ericsnow> Muntaner: no worries :)
[16:37] <dimitern> voidspace, hey
[16:37] <voidspace> dimitern: you backported my proxyupdater fix to 1.22, right?
[16:37] <voidspace> dimitern: because whoever did it unfortunately did it slightly wrong
[16:37] <voidspace> dimitern: I'll fix
[16:37] <natefinch> wwitzel3: cool, I just marked the 1.21 version of the bug as assigned to you too, so it'll be more obvious in the 1.21 list that it's actually being worked on.
[16:38] <dimitern> voidspace, oh, my bad then - what happened?
[16:38] <voidspace> dimitern: also, https://github.com/juju/juju/pull/1564
[16:38] <voidspace> dimitern: handleProxyValues is called with a proxy.Settings object called proxySettings
[16:38] <voidspace> dimitern: this is new settings
[16:39] <voidspace> dimitern: in trunk we call SetEnvironmentVariables on that *before* setting it to s.proxy
[16:39] <dimitern> voidspace, ah, ok - I must've overlooked that
[16:39] <dimitern> voidspace, sorry :/
[16:39] <voidspace> dimitern: the code in 1.22 calls s.proxy.SetEnvironmentVariables()
[16:39] <voidspace> dimitern: which is the *old settings*
[16:39] <Muntaner> ericsnow: it doesn't seem to resemble my situation
[16:39] <voidspace> dimitern: np
[16:39] <dimitern> voidspace, I can't recall doing it though..
[16:39] <voidspace> dimitern: someone did it
[16:39] <Muntaner> actually I don't have tools-related-issues, but more image metadata issues
[16:39] <voidspace> dimitern: maybe it was me :-)
[16:39] <Muntaner> with HTTPServer workaround, it works
[16:40] <dimitern> :)
[16:40] <dimitern> voidspace, btw that 1564 PR's diff seems very wrong to me
[16:40] <dimitern> "Showing with 6,842 additions and 2,087 deletions."
[16:40] <voidspace> dimitern: hah, it's targeting master
[16:40] <voidspace> that's why...
[16:40] <voidspace> don't merge it...
[16:41] <dimitern> :)
[16:41] <dimitern> ok, I'll have a look later, but I gtg now
[16:41] <voidspace> dimitern: I hope you've enjoyed your day off! :-o
[16:42] <dimitern> :) I promise to try resting more tomorrow
[16:43] <ericsnow> Muntaner: well, the maas part of it won't apply, but the problem they describe with a local metadata index does, right?
[16:44] <frankban> dimitern: so you changed the branch so that all the ports are enumerated in unitInfo.Ports. this could lead to lots of bytes sent over the wire, but I agree this is technically correct. I think the GUI needs to switch to using PortRanges when possible, and recalculate real single ports on the client side.
[16:45] <voidspace> frankban: a port is 7 or 8 bytes, so even for a thousand ports it's only 7-8kb.
[16:46] <voidspace> frankban: so it depends what you mean by "lots of bytes"...
[16:46] <frankban> voidspace: agreed
[16:48] <Muntaner> ericsnow: sorry if I'm slow - doing 1000 things right now
[16:48] <ericsnow> Muntaner: no worries, this isn't pressing (more of a follow-up to ensure we track the problem)
[16:49] <Muntaner> ericsnow: yes, seems pretty similar
[16:53] <Muntaner> ericsnow: actually, the workaround to make these metadata visible to the environment is to expose them via HTTP - since uploading isn't happening
[16:54] <ericsnow> Muntaner: okay, I'll go from there, thanks
[16:55] <Muntaner> ericsnow: just saying, nobody had tested an environment like mine before? :)
[17:01] <voidspace> dooferlad: you don't add the trailing slash. Have you manually tested it?
[17:01] <voidspace> dooferlad: if it works without it then fine...
[17:02] <dooferlad> voidspace: yea, it works. This is just a backport.
[17:02] <voidspace> ah, cool
[17:02] <voidspace> dooferlad: LGTM
[17:04] <voidspace> natefinch: care to take a look http://reviews.vapour.ws/r/898/
[17:04] <voidspace> natefinch: this is for https://bugs.launchpad.net/juju-core/+bug/1403225 for 1.22
[17:05] <mup> Bug #1403225: charm download behind the enterprise proxy fails <cloud-installer> <deploy> <proxy> <sync-tools> <cloud-installer:Confirmed for adam-stokes> <juju-core:Fix Committed by mfoord> <juju-core 1.21:Won't Fix> <juju-core 1.22:In Progress by mfoord> <https://launchpad.net/bugs/1403225>
[17:05] <voidspace> wwitzel3: as you're OCR, if you have time could you look at this one http://reviews.vapour.ws/r/896/
[17:06] <dooferlad> wwitzel3: Also, http://reviews.vapour.ws/r/895/ please
[17:07] <voidspace> dooferlad: I already gave that a ShipIt
[17:08] <dooferlad> voidspace: I thought I needed two reviews?
[17:08] <voidspace> dooferlad: no
[17:08] <voidspace> :-)
[17:08] <dooferlad> voidspace: OK :-)
[17:08] <dooferlad> wwitzel3: forget that then!
[17:10] <Muntaner> ericsnow: fwereade : juju is amazing, I'm deploying stuff as easily as anything :)
[17:10] <natefinch> voidspace: ship it
[17:10] <ericsnow> Muntaner: awesome! :)
[17:11] <Muntaner> ericsnow: but bosses are asking strange stuff... argh, dunno if I can do it via juju
[17:12] <natefinch> Muntaner: you can do anything with Juju, just a matter of figuring out how ;)
[17:12] <Muntaner> natefinch: well, they're talking about an existing .net application which is using Windows Azure... maybe we should do a sort of porting, :S
[17:13] <natefinch> Muntaner: we do support deploying to windows, though it takes a little setup right now
[17:14] <voidspace> natefinch: thanks
[17:14] <natefinch> (and I'm being generous with "a little")
[17:14] <Muntaner> natefinch: I don't have the whole plot clear right now
[17:15] <natefinch> sinzui: The comments on this bug seem to say it's been that way  for almost a year ... https://bugs.launchpad.net/juju-core/+bug/1417449
[17:15] <mup> Bug #1417449: Log files now owned by syslog user, 1.21.1 <canonical-is> <logging> <regression> <juju-core:Triaged> <juju-core 1.21:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1417449>
[17:15] <Muntaner> seems like I should use a VM with Windows Server + Juju
[17:15] <Muntaner> sooo... OpenStack + Windows Server + Juju
[17:15] <natefinch> jw4: saw your comment on the above bug...
[17:16] <sinzui> natefinch, it certainly has been for many many months. also...
[17:16] <voidspace> who owns juju team calendar. jamestunnicliffe needs adding to it.
[17:16] <voidspace> natefinch: ^^ any idea?
[17:16] <sinzui> natefinch, If our policy is use juju debug-log to see the logs, then wont fix is a valid close
[17:17] <natefinch> sinzui: in general I think the right answer is for local provider to behave like the rest of the providers.... which I sort of assume also have the logs owned by syslog
[17:18] <sinzui> natefinch, +1, We mark it wont fix and we are done
[17:18] <natefinch> sinzui: seems fine to me.  I'll comment and do so
[17:21] <Muntaner> natefinch: actually, what is the relationship between juju and windows?
[17:23] <natefinch> Muntaner: not sure what you're asking.  Juju can deploy to Windows machines, though right now it takes some special setup.
[17:25] <Muntaner> ok natefinch, that's an answer :) what do you mean with "special setup" ?
[17:29] <natefinch> Muntaner: when you use juju to add a machine to your environment (whether through juju add-machine or as part of juju deploy), it figures out what image to use for the machine by looking for metadata in a simple-streams location.  For Linux images, this is available on all the clouds Juju supports.  However, for Windows, we don't yet have those metadata files available in the clouds.  You also need the images themselves,
[17:29] <natefinch> which need to have a special init system installed on them (cloud-init by Cloudbase) so that we can bootstrap the images appropriately.
[17:30] <natefinch> Muntaner: setting this up in your own maas or openstack environment is not too hard, but if you want to deploy to a public cloud, it gets a lot trickier
[17:30] <natefinch> Muntaner: sorry, there's no simpler way to answer the question.  Basically... getting windows images to boot up is harder because they're not free and open source like linux
[17:31] <Muntaner> natefinch: why this difference in difficulty from private to public?
[17:31] <Muntaner> natefinch: lol, very clear :)
[17:32] <natefinch> Muntaner: you control everything in your private cloud, so it's easy to set up all the right urls, image locations, etc.  You don't control everything on Azure/AWS/etc, so it's harder.  I am actually not 100% sure if you can even do it on a public cloud right now.
[17:33] <Muntaner> natefinch: thanks for the patience :)
[17:33] <Muntaner> you guys have been very gentle with me
[17:34] <Muntaner> a lot of thanks, natefinch, ericsnow, fwereade
[17:34] <Muntaner> maybe I'll be back there for future problems :)
[17:34] <natefinch> Muntaner: if you'd like a full report on how to enable windows support for juju from an expert, I can set that up.
[17:34] <Muntaner> natefinch: this will be a task of the next days :)
[17:36] <Muntaner> bye guys
[18:12] <voidspace> juju branch
[18:12] <voidspace> no that's wrong
[18:12] <voidspace> go branch
[18:12] <voidspace> no that's wrong too
[18:12] <voidspace> git branch
[18:12] <voidspace> ah yes, that's what I was doing...
[18:12] <voidspace> one of those afternoons...
[18:12] <mgz> :D
[18:14] <natefinch> voidspace: yeah, I keep thinking I should make an uber-command that'll just delegate to the right one based on subcommand... there's only a couple overlaps
[18:16] <perrito666> I keep telling myself I need to alias got to git
[18:17] <voidspace> natefinch: do it :-)
[18:19] <perrito666> I hate when shims keep growing
[18:22] <wwitzel3> perrito666: the guy who installed the door frames in my old house didn't have that same concern
[18:23] <voidspace> :-)
[18:23] <voidspace> wwitzel3: how's the house?
[18:23] <perrito666> lol
[18:23] <voidspace> the new one I mean...
[18:23] <wwitzel3> voidspace: great :) .. we spent this weekend taking the guest bathroom down to studs and subfloor
[18:24] <wwitzel3> voidspace: is your deal all final on your new house?
[18:25] <voidspace> wwitzel3: signed and returned all the paperwork. Hoping to get a completion date tomorrow and the keys Friday or Monday.
[18:25] <voidspace> wwitzel3: so not quite all final yet...
[18:25] <voidspace> wwitzel3: you got a lot of work to do on the house?
[18:25] <perrito666> wow that is FAST
[18:26] <voidspace> perrito666: we made the offer about four weeks ago and have pushed really hard to get it done
[18:26] <voidspace> perrito666: for the UK that is really fast
[18:26] <wwitzel3> voidspace: not a lot .. just the most expensive parts :) both bathrooms and the kitchen are getting completely redone.
[18:27] <voidspace> wwitzel3: yow, sounds expensive
[18:27] <voidspace> thankfully we don't *think* we need to do anything like that
[18:27] <voidspace> mostly buy a new kitchen table and some wardrobes
[18:28] <wwitzel3> voidspace: yeah, it wasn't a surprise at least, we knew going in, which makes it easier
[18:28] <perrito666> we require a "research" to be done on the house for possible debts and all kinds of judicial problems, and that alone takes a month. then you have a... mm, I don't think these people exist elsewhere: they bear witness to a contract, record it in the national records, and sign it to make it valid. well, one of those crafts new papers for the house (usually including the previous papers), so basically a house's papers
[18:28] <perrito666> are like a git repo :p
[18:28] <voidspace> wwitzel3: you done any forward porting of bugs from 1.21/1.22 ?
[18:28] <voidspace> I forget the procedure
[18:29] <voidspace> do I merge just the specific revisions
[18:29] <perrito666> and after that your house is yours but you still need to wait for that to get updated in all the right places for like 6 months
[18:30] <wwitzel3> voidspace: it has been a while, I will have to do it with my current one, once it is fixed though. sinzui can you point us to a doc or steps for forward porting of fixes?
[18:31] <voidspace> perrito666: we have a lot of searches to do too (environmental, land registry etc) but they can be done in a couple of weeks with good solicitors
[18:31] <voidspace> perrito666: our land registry doesn't have "branches" though, just a mainline
[18:31] <voidspace> so searches are easier
[18:31] <perrito666> voidspace: I think here is mostly due to the lack of digital information
[18:31] <voidspace> and all charges (debts) are recorded with the land registry
[18:32] <voidspace> perrito666: land registry is still all paper
[18:32] <wwitzel3> perrito666, voidspace: our process is as quick as the sellers and buyers want it to be really .. our house was 30-days, my parents just did a cash deal with a 72-hour closing.
[18:32] <voidspace> nice
[18:32] <wwitzel3> I would say the average is 45-60 days
[18:32] <perrito666> wwitzel3: I assume there is not much paperwork right?
[18:32] <natefinch> wwitzel3: buyers, sellers, and banks (where it's not cash, which is most of the time)
[18:32] <sinzui> wwitzel3, I don't think core ever wrote a doc about git patch
[18:33] <perrito666> I am already in love with how you guys sell cars, I assume houses are pretty much the same
[18:33] <natefinch> perrito666: there's a ton of paperwork, but that's what lawyers are for.  I paid ~$500 for a lawyer to basically do all the annoying work for us when we bought our house.
[18:33] <sinzui> wwitzel3, and I am just assuming git patch is the technique to extract and apply changes between divergent branches
[18:33] <voidspace> sinzui: so it's just a manual process to forward port
[18:33] <sinzui> voidspace, 'fraid so
[18:34] <voidspace> np
[18:35] <perrito666> natefinch: here we pay a percentage (quite high given the numbers at play) to the notary (that was the word I was missing)
[18:35] <perrito666> and then a fee for each research to be done
[18:35] <perrito666> then a couple of taxes for all these papers to be sealed by the right authorities
[18:35] <perrito666> :p
[18:35] <perrito666> same applies for cars
[18:36] <perrito666> between the day I paid for my car at the dealership and the day I got it, there were around 2 months
[18:37] <voidspace> sinzui: I've marked but 1403225 as fix committed for 1.22
[18:37] <voidspace> https://bugs.launchpad.net/juju-core/+bug/1403225
[18:37] <mup> Bug #1403225: charm download behind the enterprise proxy fails <cloud-installer> <deploy> <proxy> <sync-tools> <cloud-installer:Confirmed for adam-stokes> <juju-core:Fix Committed by mfoord> <juju-core 1.21:Won't Fix> <juju-core 1.22:Fix Committed by mfoord> <https://launchpad.net/bugs/1403225>
[18:37] <sinzui> thank you voidspace
[18:39] <natefinch> perrito666: we pay a percentage to the real estate agents (2-3% for the buyer's agent and same for seller's agent), which is a shame, because they don't actually do all that much in this age of looking for houses on the internet.
[18:39] <hazmat> natefinch: +1 insanity..
[18:39] <voidspace> natefinch: in the UK pretty much everyone uses one website, rightmove, to find houses
[18:40] <perrito666> well at least I dont have to blow snow :p
[18:40] <voidspace> natefinch: but you have to be an estate agent to put houses up there, so they keep the cartel going
[18:40] <perrito666> real estate agents are not a mandatory thing here,  just practical
[18:40] <natefinch> voidspace: yep, basically the same here.... there's one official listing that several online sites use,  but you need to be in their cartel to put things up there.
[18:41] <hazmat> natefinch: redfin is slightly disruptive to the industry with lower fees.. also re agent commissions, seller pays costs. there are many listings, only need a real estate agent for mls listing.
[18:41]  * hazmat has been house shopping lately
[18:41] <natefinch> hazmat: yep, I bought my house through redfin.  Loved it, got a ~$5k check afterward IIRC
[18:42] <hazmat> nice
[18:43] <natefinch> hazmat: It's kinda lucky, because they don't cover everywhere, but the town I wanted to buy in is pretty expensive, so they cover it, but not many of the nearby towns.
[18:43] <natefinch> hazmat: at least as of 5 years ago when I purchased
[18:43] <perrito666> natefinch: so why exactly you get money for buying a house?
[18:43] <perrito666> usually is the other way around
[18:44] <natefinch> perrito666: the seller pays the buyer's real estate agent 2%, and the seller's real estate agent 2%.... redfin is the buyer's agent, and they turn around and give the buyer 1/2 to 2/3rds of their commission
[18:45] <natefinch> 333333333333333333
[18:45] <natefinch> (sorry..... kids)
[18:47] <perrito666> sounds a bit like commercial .. dis?loyalty? I dont know the word for it in english
[18:47] <perrito666> although I believe middle men are just blood suckers so I have nothing against it
[18:48] <natefinch> perrito666: yep.  in theory you don't need a real estate agent, but if you can't get your house listed on The_List_Everyone_Looks_At then no one will see your house is for sale.
[18:50] <perrito666> we call that list "the paper classified ads", you get in there for a whole week for about 80C
[18:50] <perrito666> and for free on the paper's internet site :p
[18:51] <perrito666> there is a bad part to it, you need to live in this country
[19:01] <voidspace> g'night all
[19:01] <voidspace> EOD
[19:09] <bodie_> godeps: cannot update "/var/lib/jenkins/workspace/github-merge-juju/tmp.hkAIMyHVpG/RELEASE/src/golang.org/x/crypto"
[19:09] <bodie_> :S
[19:10] <bodie_> fatal: remote error: Git repository not found
[19:10] <bodie_> that doesn't seem right
[19:19] <bodie_> whatever it was, seems to have worked this time
[20:00] <marcoceppi> Looking for a juju architect to explain some things to me
[20:00] <fwereade> marcoceppi, heyhey
[20:01] <natefinch> man, it's like rubbing a genie's bottle
[20:01] <marcoceppi> fwereade: my favorite architect (who happened to repsond to me first)
[20:01]  * marcoceppi goes to PM
[20:01] <bodie_> was anyone working on a Docker Actions enabled charm?
[20:01]  * perrito666 rubs his cheap water bottle and all he gets is a sales rep
[20:01] <perrito666> natefinch: seems marco has more luck
[21:21] <cherylj> Could someone review a PR I have for juju/cmd?  https://github.com/juju/cmd/pull/13
[21:21] <katco> wwitzel3: ^^
[21:21] <cherylj> thanks, katco
[21:22] <katco> cherylj: np, you know about the OCR schedule, right?
[21:22] <lazyPower> bodie_: o/
[21:23] <cherylj> yes, but I swore today was Tuesday and I didn't see natefinch online
[21:23] <cherylj> Guess I'm still on NZ time
[21:23] <lazyPower> bodie_: we've been looking into it with the docker infrastructure bundle we threw down, anything specific i can answer?
[21:23] <katco> cherylj: haha :)
[21:23] <katco> cherylj: i hope your trip went well
[21:23] <lazyPower> katco: if i want to track down someone that grokks the jujud on ppc64el who would i pester? :)
[21:23] <cherylj> katco:  it was great!  I learned a lot and had a great time getting to know everyone
[21:23] <katco> lazyPower: i think davecheney is our resident ppc expert
[21:24] <lazyPower> perfect, ta
[21:24] <katco> lazyPower: possible thumper?
[21:24] <katco> cherylj: good to hear :D
[21:25] <katco> lazyPower: they are both in AUS/NZ time, so they should be on later
[21:25] <lazyPower> already passed on the info. cheers katco :)
[21:25] <katco> lazyPower: o/
[21:27] <bodie_> lazyPower, just curious since I'm working with actions :)
[21:27] <lazyPower> awesome, we're looking at supercharging the docker charms with action support, such as image cleanup, getting running containers, status output, health checks, etc.
[21:28] <lazyPower> there's quite a bit that's been tossed around but nothing yet, and I'm a pleb that doesn't run off tip yet - so actions haven't landed as of yet
[21:31] <bodie_> lazyPower, right on.  maybe I could contribute?  I guess your charm in LP is the latest?
[21:31] <bodie_> I just want something flashy to show off
[21:31] <bodie_> but I'm also really interested in working with Docker
[21:31] <lazyPower> ah, actually our dev focus is on github. let me fish up the bundle for you
[21:32] <lazyPower> https://github.com/mbruzek/docker-bundle
[21:32] <lazyPower> its in the revq atm pending eyeballs, but dev focus is there :)
[21:46] <lazyPower> bodie_: if you have *any* issues with the docker bundle we have - feedback/prs/bugs accepted with open arms and a puppy.
[21:47] <lazyPower> i'm using it in prod with success, but its still only 1/8 of the story that we want to tell with docker.
[21:47] <wwitzel3> cherylj: taking a look at that PR now
[21:47] <cherylj> wwitzel3, thanks!
[21:50]  * perrito666 is back
[21:54] <bodie_> lazyPower, excellent :)
[22:31] <davecheney> lazyPower: ping
[22:32] <lazyPower> davecheney: pong
[22:33] <davecheney> lazyPower: you rang ?
[22:34] <lazyPower> davecheney: beisner is looking for a ppc64el expert, there's a backlog of jujud issues he's uncovered orchestrating ppc from an amd64 host.
[22:34] <lazyPower> http://paste.ubuntu.com/10145407/
[22:34] <davecheney> that's pretty messed up
[22:35] <davecheney> has /var/lib/juju/tools/machine-1-lxc-1/jujud
[22:35] <davecheney> been replaced with a shell script ?
[22:35] <lazyPower> beisner: ping
[22:35] <lazyPower> ^
[22:35] <davecheney> lazyPower: protip: this is how I debug ppc64el issues
[22:35] <beisner> yo!
[22:35] <davecheney> 1. were the tools found ?
[22:35] <davecheney> 2. did they download properly
[22:35] <beisner> o/ lazyPower davecheney
[22:35] <davecheney> 3. there is no three
[22:35] <davecheney> that's it
[22:35]  * perrito666 sees a paradox
[22:36] <beisner> lazyPower, davecheney - getting repro scenario documented for review.
[23:04] <beisner> hi davecheney, lazyPower - please see  http://paste.ubuntu.com/10149580/.  no bug filed yet.  i'm US central time, can touch base in a few hrs after dinner/family time.
[23:06] <davecheney> beisner: juju bootstrap --constraints arch=amd64 || true
[23:06] <davecheney> ^ why are you skipping this error ?
[23:07] <beisner> davecheney, no error there.  i had that because i had already just bootstrapped.
[23:07] <beisner> ie. a cheap "bootstrap if not bootstrapped" line
[23:07] <davecheney> ok
[23:08] <davecheney> beisner: can you paste the contents of the upstart control file for machine-1-lxc-3
[23:08] <beisner> davecheney, please get more verbose, where is that?
[23:08] <beisner> lol
[23:09] <davecheney> i want to rule out a typo or syntax error in the upstart file
[23:09] <davecheney> that error looks like it's coming from a shell
[23:09] <beisner> path i should look in?
[23:09] <davecheney> /var/init
[23:09] <davecheney> sorry
[23:09] <davecheney>  /etc/init
[23:09] <davecheney> but i have no idea what that looks like when lxc is applied
[23:10] <beisner> oh ok that's where i'm looking
[23:10] <davecheney> it might be called something like
[23:10] <davecheney> jujud-machine-1-lxc-1
[23:10] <davecheney> but i'm not sure
[23:10] <davecheney> 'cos lxc
[23:12] <beisner> jujud-machine-1-lxc-3.conf ->  http://paste.ubuntu.com/10149660/
[23:12] <beisner> gotta run, will return in a bit.