[00:00] <davecheney> http://paste.ubuntu.com/6840930/
[00:00] <davecheney> great, how am I supposed to shut down this environment ?
[00:18] <wallyworld_> i would kill the machines by hand and delete the control container, then delete the jenv
[00:21] <thumper> davecheney: really, WTF?
[00:22] <thumper> how do I remove the govet check?
[00:22] <thumper> wallyworld_: I upgraded to trusty last night
[00:22] <thumper> trunk starts the local provider
[00:22] <wallyworld_> hmmm
[00:22] <thumper> although my http-proxy wasn't on the right port
[00:23] <thumper> trunk mind, not release
[00:23] <thumper> release is all fubared
[00:23] <wallyworld_> when i tried on friday, the agent ran, but no lxc container would start
[00:23] <thumper> for some reason I've lost the all-machines.log
[00:23] <wallyworld_> yeah, me too
[00:23] <thumper> need to figure that out
[00:23] <thumper> I have a branch to submit for destroy
[00:23] <wallyworld_> \o/
[00:24] <thumper> if I can get lbox shite working
[00:24] <wallyworld_> thumper: with go vet, i installed from source and got it to work that way
[00:24] <thumper> wallyworld_: you installed go from source?
[00:24]  * wallyworld_ tries to remember
[00:24] <wallyworld_> go get -u ...something
[00:24] <wallyworld_> let me see if my history shows it
[00:25] <wallyworld_> i think the error message gave a hint
[00:25] <thumper> it tries to open the /usr/lib/go pkg dir
[00:25] <thumper> for some reason it isn't honouring the GOPATH
[00:26] <thumper> davecheney: can I just change an environment variable?
[00:26] <wallyworld_> not sure if this was the attempt that worked
[00:26] <wallyworld_> sudo bash -c "export GOPATH=/usr/lib/go; go get code.google.com/p/go.tools/cmd/vet"
[00:26] <wallyworld_> something like that was necessary
[00:26] <wallyworld_> i just assumed it was my fucked up set up
[00:26] <wallyworld_> didn't realise it was a general issue for trusty
[00:27] <thumper> hmm...
[00:28] <thumper> yeah, that looks weird to the extent I don't want to do that
[00:28] <wallyworld_> i may have had to do it with a GOPATH pointing to a dir off my home, not usr/lib
[00:28] <wallyworld_> but i recall sudo and messing with GOPATH was necessary
[00:28] <thumper> a fuck it...
[00:30] <thumper> nah, that doesn't work
[00:31] <wallyworld_> hmmm sorry. i can't recall exactly what worked. i *think* i had to point GOPATH to a local src dir, but use sudo. not sure though
[00:33]  * thumper stabs .lbox.check
[00:34] <thumper> FARK
[00:34] <thumper> can't propose with a dirty branch
[00:35] <thumper> so either I commit the lack of vet
[00:35] <thumper> or don't use lbox
[00:35]  * thumper decides to skip lbox
[00:36] <wallyworld_> \o/
[00:39] <thumper> wallyworld_: https://code.launchpad.net/~thumper/juju-core/fix-lxc-destroy/+merge/203861
[00:39] <wallyworld_> looking
[00:42] <wallyworld_> thumper: done. btw did you see google offloaded motorola to lenovo. those lenovo guys are really going on a shopping spree
[00:42] <thumper> no I didn't know that
[00:43] <thumper> wallyworld_: did google keep all the patents?
[00:43] <wallyworld_> yep :-)
[00:43] <wallyworld_> of course
[00:43] <thumper> heh
[00:43] <wallyworld_> they licenced them to lenovo
[00:43] <thumper> wallyworld_: I can't land it yet, it didn't fix the problem
[00:43] <wallyworld_> so they paid $12bil, sold for $3bil
[00:46] <thumper> ah...
[00:46] <thumper> hang on
[00:46] <thumper> I see what's happened now
[00:46] <thumper> I have a running juju machine agent for zero
[00:46] <thumper> rebuilding a local juju doesn't help destroy environment
[00:47] <thumper> because the environment is already running
[00:47] <thumper> and using an old jujud
[00:47] <thumper> which doesn't have my fix
[00:47] <thumper> FFS
[00:52] <thumper> manual hackery FTW
[00:53] <wallyworld_> i've done the same thing before too
[00:57] <thumper> ok, at least that is the local provider starting and stopping in trusty
[00:57] <thumper> now to poke and see why we have no all-machines.log
[00:58] <thumper> hmm... I think I may see
[01:00] <axw> thumper: I looked into that the other day, it was permissions
[01:00] <thumper> yeah, I saw that too
[01:01] <thumper> axw: have you a fix ready?
[01:01] <thumper> if not, I'll do it
[01:01] <axw> thumper: I think we may have to do something yucky, like symlinking to /var/log/juju
[01:01] <axw> no
[01:01] <axw> it passed out of my memory on the flight back
[01:07] <thumper> wallyworld_: I'm waiting in our weekly hangout...
[01:07] <wallyworld_> oh yeah
[01:12] <hazmat> thumper, how's proxy support coming?
[01:12] <thumper> hazmat: trunk is almost feature complete
[01:13] <thumper> there are a few things outstanding
[01:13] <thumper> - remove old apt proxy cruft in container setup
[01:13] <thumper> - make sure containers get the proxy info for the container providers
[01:13] <thumper> - some upgrade code
[01:16] <hazmat> hmm.. that sounds pretty good... does that include things like the cli respecting std http_proxy env vars for tool lookup?
[01:16] <thumper> hazmat: in a hangout just now
[01:16] <thumper> hazmat: how long have you got?
[01:17] <hazmat> thumper, i'll make some time.. i'm pushing through on a late night
[01:32] <thumper> waigani: hangout? standup
[01:32] <waigani> yep
[01:33] <davecheney> cap
[01:33] <davecheney> crap
[01:33] <davecheney> now destroy-environment cannot save me
[01:33] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1274355
[02:01] <thumper> school run
[02:01] <thumper> bbs
[02:04]  * axw rejoices and promptly stops thinking about windows cli
[02:05] <axw> davecheney: try destroy-environment --force
[02:05] <axw> davecheney: actually... what version are you on?
[02:05] <davecheney> trunk
[02:06] <davecheney> axw: yup
[02:06] <davecheney> that worked
[02:06] <axw> davecheney: destroy-environment now goes and talks to the API server to destroy the environment "cleanly"
[02:06] <axw> if it's borked, --force is required
[02:07] <axw> we just had a chat on our scrum, and figured we should probably tell that to the user if it fails and --force wasn't specified
[02:13] <davecheney> couldn't hurt
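[Editor's note: the UX axw proposes above — telling the user to retry with --force when a clean, API-based destroy fails — can be sketched roughly as below. The function and error text are hypothetical stand-ins, not the actual juju-core code.]

```go
package main

import (
	"errors"
	"fmt"
)

// destroyEnvironment stands in for juju's destroy logic: a clean destroy
// talks to the API server, while --force tears things down directly.
// Here the clean path always fails, to demonstrate the fallback hint.
func destroyEnvironment(name string, force bool) error {
	if force {
		return nil // forced teardown skips the API server entirely
	}
	return errors.New("cannot connect to API server")
}

func main() {
	if err := destroyEnvironment("local", false); err != nil {
		// The hint the team agreed should be shown on failure.
		fmt.Printf("destroy failed: %v\nif the environment is broken, retry with --force\n", err)
	}
}
```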
[02:51]  * thumper sighs
[02:52]  * thumper created a broken local provider
[02:52]  * thumper deletes things
[03:31] <thumper> fark!!!
[03:31] <thumper> I think it is apparmor
[03:32] <axw> thumper: doing what?
[03:32] <thumper> stopping writing to ~/.juju/log/all-machines.log
[03:32] <axw> hm ok
[03:33]  * thumper pokes and tweaks
[03:36]  * thumper does something icky
[03:46]  * axw goes out for lunch
[03:46] <axw> bbl
[03:47] <axw> if I don't return, the heat has claimed me
[03:47] <thumper> wallyworld_: ping
[03:47] <thumper> wallyworld_: who owns your /var/log/syslog file?
[03:48] <axw> thumper: mine is syslog:adm
[03:48] <thumper> hmm...
[03:48] <thumper> mine is messagebus:adm
[03:48] <thumper> no idea where that came from
[04:14] <wallyworld_> thumper: um, let me check
[04:14] <thumper> wallyworld_: that's ok
[04:15] <wallyworld_> i don't have one right now
[04:15] <thumper> all these permission problems are due to rsyslog apparmor profiles
[04:15] <wallyworld_> ah
[04:15] <thumper> I've got a work-around
[04:15] <thumper> but I need to go make dinner
[04:15] <thumper> so it'll have to wait
[04:15] <wallyworld_> f*cking apparmor
[04:15] <wallyworld_> :-)
[04:15] <thumper> waigani: is there a bug for the "no all-machines.log for the local provider"?
[04:16] <waigani> thumper: I have not created on
[04:16] <waigani> *one
[06:16] <axw> wallyworld_: when you're not busy, can you please poke the bot?
[06:16] <wallyworld_> sure
[06:16] <wallyworld_> will do it now
[06:17] <wallyworld_> axw: poked
[06:17] <axw> thanks
[06:38] <axw> wallyworld_: I thought there was a reason we needed FullPath for the manual bootstrap - the reason eludes me though
[06:42] <wallyworld_> axw: FullPath is filled in when the particular tools are read from simplestreams, but it was unnecessary to fill in that value when writing metadata files because it is ignored then
[06:43] <wallyworld_> we just store the relative path in the json
[06:43] <wallyworld_> and construct fullpath when needed as the metadata is loaded
[06:43] <axw> yeah, I thought it was used in-memory... my memory may just be faulty though
[06:44] <axw> welp, seems to work anyway
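[Editor's note: the scheme described above — storing only a relative path in the simplestreams JSON and composing the full URL from the storage base when the metadata is loaded — might look something like this. The function name and URL are illustrative, not juju-core's actual API.]

```go
package main

import (
	"fmt"
	"strings"
)

// fullToolsURL joins a storage base URL with the relative path kept in
// the metadata JSON, normalising slashes so neither side has to care.
func fullToolsURL(baseURL, relPath string) string {
	return strings.TrimRight(baseURL, "/") + "/" + strings.TrimLeft(relPath, "/")
}

func main() {
	// prints: https://example.com/tools/releases/juju-1.17.1-precise-amd64.tgz
	fmt.Println(fullToolsURL("https://example.com/tools/", "releases/juju-1.17.1-precise-amd64.tgz"))
}
```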
[06:44] <wallyworld_> yay
[06:53] <axw> wallyworld_: in provider/maas/storage.go, there's talk about 404 and IsNotFoundError
[06:53] <axw> does this not work?
[06:53] <wallyworld_> um
[06:54] <wallyworld_> that works
[06:54] <wallyworld_> but the issues is URL()
[06:54] <axw> ah sorry
[06:54] <axw> URL, not Get
[06:54] <wallyworld_> yeah, maas is the only one that can't give a url for a non existent file
[06:54] <wallyworld_> so i changed dummy storage to match that
[06:54] <axw> wallyworld_: ahh, so MAAS 404s even for URL
[06:55] <axw> got it
[06:55] <wallyworld_> yeah :-(
[06:55] <wallyworld_> sadly yes
[07:00] <axw> wallyworld_: why do we no longer set imagemetadata.DefaultBaseURL in bootstrap? and does tools.DefaultBaseURL continue to be updated?
[07:00] <axw> (sorry for dumb questions, just trying to be thorough...)
[07:00] <wallyworld_> axw: np. the image metadata is uploaded from a local dir to cloud storage
[07:00] <wallyworld_> it was a mistake to set the default url
[07:00] <wallyworld_> in the cloud storage, the uploaded metadata is in the search path
[07:01] <wallyworld_> and keeping default url unchanged means we fall back
[07:01] <wallyworld_> if no images found in user metadata
[07:01] <axw> ok
[07:01] <axw> whereas if we change it, it'll try to look in cloud storage and in the local dir
[07:02] <axw> right?
[07:02] <wallyworld_> yeah, but it will error out later in the boot process
[07:02] <wallyworld_> i didn't look into why
[07:02] <axw> mk
[07:02] <wallyworld_> i decided it was wrong to change the default url
[07:03] <wallyworld_> tools url is changed cause that's what sync tools needs
[07:03] <wallyworld_> end result - tools and image metadata is uploaded to cloud storage if local dir is used
[07:04] <axw> thanks
[07:04] <wallyworld_> and cloud storage is in simplestreams search path, behind config urls and ahead of default locations
[07:04] <axw> got it
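[Editor's note: the search-path ordering wallyworld_ describes — configured URLs first, then the environment's cloud storage, then the default public locations as a fallback — reduces to building an ordered list. A minimal sketch with illustrative names:]

```go
package main

import "fmt"

// metadataSearchPath returns metadata sources in lookup order:
// user-configured URLs, then cloud storage, then the default location.
// Earlier entries win; later ones are only consulted as fallbacks.
func metadataSearchPath(configURLs []string, cloudStorageURL, defaultURL string) []string {
	path := append([]string{}, configURLs...)
	path = append(path, cloudStorageURL)
	path = append(path, defaultURL)
	return path
}

func main() {
	// prints: [https://example.com/meta cloud-storage default]
	fmt.Println(metadataSearchPath([]string{"https://example.com/meta"}, "cloud-storage", "default"))
}
```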
[07:04] <axw> I remember why FullPath/URL was required before: we used to do a URL fetch, rather than go through storage
[07:05] <axw> so we previously had to keep the URL in memory to pass along to the merging bit
[07:32] <wallyworld_> axw: just to check - i'm pretty sure that we don't need full url now right
[07:32] <axw> wallyworld_: right
[07:33] <wallyworld_> cool, that matches my understanding too
[07:33] <axw> since we use the path & storage now
[07:33] <wallyworld_> yep
[07:33] <wallyworld_> just wanted to be 1000% sure :-)
[07:33] <axw> :)
[07:34] <wallyworld_> axw: with the abs(), i thought it better to report the whole dir to the user to avoid confusion
[07:35] <axw> okey dokey
[08:41] <jam> axw: do we actually have a solution for people trying to use juju-core with Canonistack and setting up 'sshuttle' while they are trying to bootstrap?
[08:42] <jam> or does bootstrap itself complete because it uses the SSH redirection, (I thought one of the steps was to directly connect to the API to know that it is up, so now that step will fail)
[08:42] <axw> jam: bootstrap works because it goes through SSH, sshuttle is still required as it was before post-bootstrap
[08:43] <axw> there's no API interaction during bootstrap
[08:43] <jam> axw: so "before" you could use sshuttle to the bootstrap node once it came up, and then everything else would go direct to the API, that stil works?
[08:43] <axw> we did talk about that during SFO - juju status at the end - but it hasn't been implemented
[08:43] <jam> ah, k
[08:43] <axw> jam: yep
[08:44] <axw> sshuttle -r ubuntu@bootstrap-node subnet/cidr
[08:45] <axw> I would kinda like it if API connections did some magical tunnelling as a fallback, but it's probably too messy
[08:46] <jam> axw: It would be helpful for people at Canonical, I'm not sure that it is a general solution.
[08:47] <jam> Saying "you need public access to the cloud machines you are using" doesn't seem like a particularly bad thing to me.
[08:47] <axw> yeah I guess so. I don't really know how many of our private cloud users will have the same issues
[08:48] <axw> probably not that many
[08:49] <mgz> morning
[08:49] <axw> morning mgz
[09:01] <jam> axw: I think you'll generally need some-sort-of-VPN but we actually have 2 different ones inside Canonical. (the Server Stack needs OpenVPN to access, and doesn't work via SSH at all)
[09:02] <axw> true, though our bootstrap won't even work without ssh access
[09:03] <axw> I was just thinking it'd be nice to have guaranteed API access if bootstrap succeeds
[09:03] <axw> it's not a big deal to have to sshuttle though, really
[09:06] <mgz> I may be a min or two late for meeting later
[09:33] <rogpeppe> wallyworld_: ping
[09:36] <rogpeppe> axw: you reviewed this branch, i think: https://codereview.appspot.com/58510044
[09:37] <rogpeppe> axw: i'm having difficulty understanding how it works - did you work it out?
[09:52] <axw> rogpeppe: sorry, which part?
[09:53] <axw> doh, good catch on the dummy storage boolean...
[09:54] <axw> rogpeppe: re "How does the URL get into the metadata now?" -- simplestreams metadata doesn't encode the URL
[09:55] <axw> we used to store it in the in-memory object when tools metadata was fetched by URL
[09:55] <axw> but now we use the path, and go through storage
[09:56] <axw> rogpeppe: and the fact taht we were calling the environment storage'URL
[09:56] <axw> oops
[09:56] <axw> storage's URL method, caused an error with MAAS
[09:59] <thumper> where's wallyworld_
[09:59] <wallyworld_> here
[10:04] <wallyworld_> on the juju call. yay. will look in a bit. i'm also 3/4 drunk :-D
[10:05] <bigjools> wallyworld_: ROFL
[10:41] <thumper> wallyworld_: 3/4 drunk and working on maas fixes \o/
[10:41] <wallyworld_> no, i'm ok now :-)
[10:41] <bigjools> haha
[10:41] <wallyworld_> it's been an hour or more since my last wine, i was exaggerating :-)
[10:42] <wallyworld_> a man can drink a glass of wine with dinner :-)
[10:42] <bigjools> digging more like
[10:42] <bigjools> a single glass has been known to get you blotto
[11:03] <fwereade> right, I'm not interviewing horacio right now because we need to turn the power off here to investigate scary burning smells
[11:03] <fwereade> hopefully I will be back shortly
[11:04] <natefinch> fwereade: good luck
[11:04] <fwereade> cheers
[11:08] <jam> mgz: are you available to just hang out?
[11:12] <rogpeppe> axw, wallyworld_: what's the purpose of ToolsMetadata.FullPath? are we actually setting it now?
[11:12] <rogpeppe> ah! it's ignored
[11:12] <wallyworld_> rogpeppe: it's used when wget fetches the tools
[11:13] <wallyworld_> it's ignored when writing out
[11:13] <rogpeppe> wallyworld_: ok
[11:13] <wallyworld_> it is composed from the relative path stored in the json
[11:13] <wallyworld_> composed when needed
[11:15] <wallyworld_> rogpeppe: is trunk working?
[11:15] <rogpeppe> wallyworld_: i'm going to try it in a mo
[11:15]  * wallyworld_ crosses his fingers
[11:16] <mgz> jam: sure
[11:16] <jam> mgz: I'm just going to grab a snack, mumble or g+?
[11:17] <mgz> lets try mumble first, I'll be on there
[11:26] <jam> yeah, I can't understand you at all, which means it is messed up, I'll reconnect 1 more time
[11:27] <mgz> jam: we'll have to just fall back to hangout
[11:27] <mgz> use the daily standup  one?
[11:29] <rogpeppe> wallyworld_: sorry, it appears that there are no spare boxes currently, so testing it will have to wait for a little bit
[11:29] <wallyworld_> ok, np
[11:29] <jam> mgz: now ff is freaking out again... just a sec
[11:29] <mgz> :)
[11:30] <wallyworld_> rogpeppe: i'm quietly confident it will work :-)
[11:30] <jam> mgz: I'm in the "juju core team" one from earlier today
[11:30] <rogpeppe> wallyworld_: did you see my remarks on your CL?
[11:31] <wallyworld_> no, i'll look
[11:35] <wallyworld_> rogpeppe: tomorrow i'll fix the remaining issues, thanks
[11:35] <rogpeppe> wallyworld_: thanks
[11:36] <wallyworld_> rogpeppe: it seems that we're still not quite universally consistent with converting relative paths to absolute when running commands
[11:36] <rogpeppe> wallyworld_: yeah, we should probably use Context.AbsPath throughout
[11:37] <wallyworld_> yeah, i didn't realise we had that method
[12:07] <dimitern> rogpeppe, i'm trying to debug this missing GOPATH issue that causes uniter tests to fail in an install hook
[12:07] <dimitern> rogpeppe, so i'm using your debug.Callers() thing to dump the stack when I detect GOPATH is empty in testing/charms.go:init()
[12:08] <rogpeppe> dimitern: ok
[12:08] <dimitern> rogpeppe, but what I'm getting is a bunch of files, and the line number for each one is always the last line in the file
[12:08] <dimitern> rogpeppe, what does that mean?
[12:08] <rogpeppe> dimitern: paste?
[12:08] <jam> rogpeppe: so if you *don't* supply -test.timeout, then when it times out it just says "test took too long"; if you *do* supply a value (like 10s in my testing) then you get a panic
[12:09] <dimitern> rogpeppe, http://paste.ubuntu.com/6843438/
[12:10] <dimitern> rogpeppe, and my changes to init(): http://paste.ubuntu.com/6843442/
[12:11] <dimitern> rogpeppe, for some reason during the install hook the environment gets reset or something and GOPATH is lost
[12:15] <jam> rogpeppe: it looks like '-test.timeout' is for each individual test triggering a panic, vs the global timeout being run by the meta-process when running multiple packages
[12:19] <rogpeppe> jam: interesting
[12:20] <rogpeppe> dimitern: sorry, i got waylaid by insanity. i have to go to lunch now, sorry, otherwise i won't make it to the discussion in 40 mins. will look when i can.
[12:21] <dimitern> rogpeppe, ok
[13:06] <dimitern> rogpeppe, I found a solution: http://paste.ubuntu.com/6843674/ adding GOPATH to the list of env vars a hook gets
[13:06] <rogpeppe> dimitern: yeah, i thought that might be the problem
[13:07] <dimitern> rogpeppe, what i don't get is why it wasn't a problem before?
[13:07] <rogpeppe> axw: are you around?
[13:12] <rogpeppe> dimitern: agreed - i'm not sure either
[13:12] <rogpeppe> dimitern: there's a perhaps-better solution
[13:12] <dimitern> rogpeppe, what's that?
[13:12] <rogpeppe> dimitern: just make it so init doesn't panic
[13:13] <dimitern> rogpeppe, but it won't solve the issue - it won't find the repo path still
[13:13] <rogpeppe> dimitern: it doesn't need the repo path
[13:13] <rogpeppe> dimitern: it's just re-running the test binary as a hook, right?
[13:13] <dimitern> rogpeppe, why don't we remove init() altogether then?
[13:15] <dimitern> rogpeppe, it seems better to get the absolute path of the charms.go in testing and infer the path from there
[13:15] <dimitern> rogpeppe, is that possible?
[13:16] <rogpeppe> dimitern: how would you do that?
[13:16] <rogpeppe> dimitern: i think that removing the init altogether is reasonable though
[13:16] <dimitern> rogpeppe, well, in python i'll do it with os.path.realpath(".") or something
[13:16] <rogpeppe> dimitern: the current directory isn't related to where charms.go is
[13:17] <dimitern> rogpeppe, no, sorry - i'll use the __file__ magic const that contains the full path to the current file
[13:17] <dimitern> rogpeppe, kind of like $0 in bash
[13:17] <rogpeppe> dimitern: define testing.Charms as a function, say func() *charms.Repo
[13:17] <rogpeppe> dimitern: no such thing in Go
[13:17] <jam> there is some 'build' runtime magic stuff
[13:18] <rogpeppe> dimitern: (and probably a good thing too, as we'd be tempted to do this, and it would be a bad idea :-])
[13:18] <rogpeppe> jam: we're already doing that
[13:18] <jam> we use it in testing/charms.go ? something ilke that
[13:18] <rogpeppe> jam: but GOPATH isn't set if we re-exec the binary
[13:20] <rogpeppe> dimitern: something like this perhaps? http://paste.ubuntu.com/6843741/
[13:21] <rogpeppe> dimitern: that actually makes tests that don't use testing charms slightly faster
[13:21] <rogpeppe> s/tests/suites/
[13:21] <axw> rogpeppe: sort of around, what's up?
[13:22] <rogpeppe> axw: just wondering whether it's currently possible to manually bootstrap on a node, but not use the manual/null provider
[13:23] <axw> rogpeppe: bootstrap or "provision"?
[13:23] <axw> add-machine?
[13:23] <rogpeppe> axw: bootstrap
[13:23] <axw> no
[13:23] <TheMue> rogpeppe: interested in reviewing https://codereview.appspot.com/58510045/? it still misses tests (have to think about it and how to do best), but it works live.
[13:23] <rogpeppe> axw: right, i thought so
[13:23] <rogpeppe> axw: but needed to check
[13:23] <axw> rogpeppe: nps. who wants that and why?
[13:24] <rogpeppe> axw: there are currently some people who have duplicated the entire bootstrap logic in shell (and it works... kinda)
[13:24] <rogpeppe> axw: just to get this behaviour
[13:24] <rogpeppe> axw: the specific reason is that they want to bootstrap juju onto a maas controller node
[13:24] <dimitern> rogpeppe, it doesn't work
[13:24] <axw> oh, the installer thing
[13:24] <dimitern> rogpeppe, it still fails if GOPATH is not set
[13:25] <rogpeppe> dimitern: doesn't matter
[13:25] <rogpeppe> dimitern: why would that hook code be calling testing.Repo ?
[13:25] <axw> rogpeppe: why do they not just use the manual provider...?
[13:25] <axw> no ssh?
[13:25] <rogpeppe> axw: because they want the environment to be using the maas provider
[13:25] <axw> oh yeah, right
[13:26] <dimitern> rogpeppe, it doesn't, it just imports stuff from testing
[13:26] <rogpeppe> dimitern: right, so it's ok then
[13:27] <dimitern> rogpeppe, adding GOPATH to that list of vars fixes the issue, i don't want to make it any more intrusive, and i want to land this already
[13:27] <rogpeppe> dimitern: that actually makes it *more* intrusive, no?
[13:27] <rogpeppe> dimitern: because that list of vars is in production code, no?
[13:28] <dimitern> rogpeppe, so what - in production it will be empty if you don't have go installed, and even if you do, it won't affect anything
[13:29] <rogpeppe> dimitern: we should not be changing production code for this kind of reason - it's a testing bug and it should be fixed in the testing code
[13:29] <rogpeppe> dimitern: there might even be a less intrusive way, one mo
[13:31] <dimitern> rogpeppe, what you proposed with making Repo a func doesn't work, because it's a type and it's exported
[13:31] <dimitern> rogpeppe, I have to refactor the whole thing
[13:32] <rogpeppe> dimitern: sorry, i meant Charms
[13:32] <rogpeppe> dimitern: but, i'm pretty sure there's a much less intrusive way
[13:32] <rogpeppe> dimitern: just checking
[13:33] <dimitern> rogpeppe, in every test that uses testing.Charms.xxx I have to go and change it so it reads testing.Charms().xxx
[13:34] <rogpeppe> dimitern: sure, but that's one trivial global change: gofmt -r -w 'testing.Charms -> testing.Charms()'
[13:34] <rogpeppe> dimitern: but, again, i think there's a better way
[13:34] <dimitern> rogpeppe, I'm listening
[13:34] <rogpeppe> dimitern: i'm still testing whether it's viable
[13:40] <rogpeppe> dimitern: something like this: http://paste.ubuntu.com/6843805/
[13:40] <rogpeppe> dimitern: those are the only changes necessary, i think
[13:41] <dimitern> rogpeppe, I'll try it
[13:41] <TheMue> rogpeppe: your last hint to me works fine and can be reviewed ;)
[13:41] <rogpeppe> TheMue: i'll have a look
[13:42] <TheMue> rogpeppe: thx. unit tests are still missing, have to think how to do it best
[13:43] <rogpeppe> TheMue: what's the backward compatibility issue you mention w.r.t. environtag/
[13:43] <rogpeppe> ?
[13:44] <rogpeppe> TheMue: oh, i see
[13:44] <rogpeppe> TheMue: why not just do that behaviour server side, i.e. no tags means everything?
[13:44] <TheMue> rogpeppe: ah, fine. yes, today the command doesn't need an argument. william wanted that a tag always has to be passed
[13:44] <rogpeppe> TheMue: then you won't need any special casing for the environment tag at all
[13:45] <rogpeppe> TheMue: really?
[13:45] <TheMue> rogpeppe: but i have to add an error replay from server-side w/o a tag
[13:45] <rogpeppe> TheMue: seems odd to me
[13:45] <TheMue> rogpeppe: yes, really ;)
[13:45] <rogpeppe> TheMue: i don't understand the "error replay" thing
[13:45] <TheMue> rogpeppe: I could live with a default choosing the environment too, would make it simpler
[13:45] <TheMue> rogpeppe: eh, reply
[13:46] <rogpeppe> TheMue: yeah
[13:46] <TheMue> rogpeppe: if you initially call w/o entities or later set empty entities
[13:46] <rogpeppe> TheMue: it simplifies things
[13:46] <dimitern> rogpeppe, this works yes, tyvm
[13:46] <rogpeppe> dimitern: np
[13:46] <TheMue> rogpeppe: definitely, please add a comment regarding this point
[13:47] <rogpeppe> dimitern: sorry for the pushback, but the GOPATH thing sounded like creeping dependencies, fixing things far away from where the actual problem was
[13:47] <rogpeppe> TheMue: will do
[13:47] <dimitern> rogpeppe, yeah, I agree - sorry, but I was trying to land this for a day now and getting really frustrated
[13:47] <TheMue> rogpeppe: but filtering is really nice ;) tested it with one and multiple tags
[13:47] <rogpeppe> TheMue: cool
[13:48] <rogpeppe> dimitern: i know the feeling
[13:53] <jam> rogpeppe: we had a gecko show up inside, which my wife made quite clear had to go :)
[13:53] <rogpeppe> jam: lol
[13:54] <rogpeppe> jam: from a position firmly on top of a table?
[13:54] <jam> they're a bit harder to deal with than mice, as mice don't climb 2m up the wall
[13:54] <jam> rogpeppe: not quite so bad, but certainly back off a ways and quite vocal
[13:54] <mgz> geckos sound fun
[13:54] <rogpeppe> mgz: +1
[13:56] <jam> mgz: they're actually really quite cute (IMO) but I can understand not wanting to catch unexpected motion out of the corner of your eye. It can be a bit disturbing.
[13:57] <mgz> okay, lunch for me, have proposed a much simplified https://codereview.appspot.com/56560043 along the lines you suggested jam (previous rev has the real cert stuff in)
[14:18] <mramm> so anybody got an update for me on the 1.17.2 world
[14:19] <mramm> as in, are we able to do a 1.17.2 release anytime soon (as measured in hours)
[14:19] <mramm> sinzui: ^^^?
[14:21] <marcoceppi> mgz: are you still going to be able to make it to cfgmgmt camp?
[14:22] <mgz> marcoceppi: yeah, I'll give you and rbasak my travel details
[14:22] <marcoceppi> mgz: cool, robbie will be at FOSDEM but not cfgmgmt camp, that'll just be us
[14:26] <mramm> arosales: you around?
[14:26] <sinzui> mramm, local provider is very broken in trunk. It is not releasable.
[14:26] <sinzui> mramm, but when we have a blessed revision, a release takes 4 hours
[14:27] <mramm> juju dev team, is there a way to get just 1.17.1 plus the MAAS bootstrap fix as a branch that we could release as 1.17.2?
[14:27] <mgz> sinzui: so, in the meeting this morning our antipodes were under the impression the only remaining broken things were misunderstandings/script things to fix
[14:27] <mgz> sinzui: from the mailing list thread, I'm not clear what's still borked
[14:29] <sinzui> mgz, maybe one thing needs to be fixed. local deploys of 1.17.1/2 have not worked since last Friday, so I have nothing to give me confidence that there is only one thing blocking the fix
[14:31] <sinzui> mgz, though we can see that changes to trunk are fixing things. Since yesterday, local deploy fails because wordpress fails to start: http://162.213.35.54:8080/job/local-deploy/
[14:32] <sinzui> We also see this in some of the failing azure tests
[14:34] <rogpeppe> TheMue: you have a review
[14:36] <mramm> mgz: sinzui: who do we need to get into a hangout to sort out these issues?
[14:36] <mgz> mramm: I think ideally the US/AUS overlap time
[14:37] <mramm> well, that is quite a bit from now...
[14:37] <mramm> but if that's where we need a meeting, we can do it
[14:37] <arosales> mramm, hello
[14:37] <mramm> but I'd like to have everything we can sorted out by then
[14:38] <mramm> arosales: saw your message regarding MAAS for the charmers -- I have one priority over that which is MAAS for CI
[14:38] <mramm> I think we can use the MAAS testing lab for that
[14:38] <mramm> james page and I were just talking about it
[14:38] <arosales> sinzui, are you available for the juju cross team?
[14:38] <mramm> is that now?
[14:38] <mramm> joining
[14:38] <mramm> sorry lost track of time here
[14:38] <sinzui> arosales, now? me in standup
[14:39] <arosales> sinzui, ah ok. I think the cross team meeting is an hour earlier than it was in 2013
[14:39] <arosales> mramm, ^
[14:39] <TheMue> rogpeppe: thank you
[14:40] <mramm> arosales: we can move it later
[14:40] <arosales> sinzui, and simplestreams workflow is still in progress or waiting on utlemming
[14:50] <TheMue> rogpeppe: I like the idea with the regex, it is more powerful. will change it.
[14:50] <TheMue> rogpeppe: and also to no-filter-means-show-everything
[14:51] <rogpeppe> TheMue: thanks
[14:56] <sinzui> mgz, I consistently see a mysql error in local deploy http://162.213.35.54:8080/job/local-deploy/767/console
[14:57] <sinzui> mgz, We are not getting logs though...
[14:57] <sinzui> mgz, do I dare increase the memory constraints as we did for HP
[14:58]  * sinzui accidentally deployed juju on 512m this week and was surprised to see the tests pass...juju's memory requirements probably shrank
[14:59] <mgz> yeah, juju itself is generally good
[14:59] <hatch> I am trying to determine which version of juju-core actually removes the requirement for sudo on local deployments? 1.17.1 seems to require it but I'm hearing reports that 1.17.2 does not
[14:59] <mgz> mysql still does that "allocate 80% of machine's memory" thing though
[14:59] <mgz> which is probably not good if there's anything else on the box at all
[14:59] <sinzui> hatch, trusty + 1.17.2 (which I have not released)
[15:00] <hatch> sinzui what about precise?
[15:00] <hatch> do you guys consider the 1.17.x release to be a non-stable release? Should quickstart use sudo up to 1.18 ?
[15:00] <sinzui> hatch, precise version will still ask for sudo on demand
[15:01] <hatch> even in 1.18?
[15:01] <sinzui> 1.18 will request sudo passwords on demand
[15:02] <hatch> ok so trusty with 1.17.2 or greater will not require sudo, everything else does
[15:02] <hatch> sinzui: sorry that was a question :)
[15:04] <sinzui> hatch, yes. In general, juju will ask for input as needed. That means your scripts need to provide sudo passwords after they call bootstrap, or do a trivial sudo op before bootstrapping so that the op is not paused for a password
[15:06] <hatch> sinzui right, but we also don't want to run it as sudo when it's not required to so I was hoping for something we could programatically check for to see if sudo will be required
[15:07] <mgz> hatch: basically it depends on juju version and lxc version
[15:07] <sinzui> okay, hatch, I understand. "juju version" will return 1.XX.X. if the dotted numbers are greater than 1.17.1, don't use sudo
[15:07] <hatch> mgz, well, apparently also the ubuntu series
[15:08] <hatch> sinzui even on precise?
[15:08] <mgz> that's only an indication of lxc version
[15:08] <sinzui> hatch lxc version is implicit with series
[15:08] <sinzui> hatch, yes, even on precise, which I test
[15:08] <sinzui> This was run 10 minutes ago
[15:08] <sinzui> juju --show-log bootstrap -e local --constraints mem=2G
[15:09] <sinzui> on precise local
[15:09] <hatch> ok perfect, haha wow that was confusing for a bit :)
[15:09] <hatch> version > 1.17.1 ! require sudo :D
[15:10] <sinzui> bingo
[15:10] <hatch> ok on it!
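[Editor's note: the programmatic check hatch settles on — parse `juju version` and require sudo only for versions up to and including 1.17.1 — could be sketched like this. A simplified illustration that ignores build suffixes such as "-alpha", not quickstart's actual code.]

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// needsSudo reports whether a local-provider bootstrap still requires
// sudo, per the rule above: versions after 1.17.1 do not.
func needsSudo(version string) bool {
	parts := strings.SplitN(version, ".", 3)
	nums := make([]int, 3) // missing components default to 0
	for i, p := range parts {
		n, _ := strconv.Atoi(p)
		nums[i] = n
	}
	threshold := []int{1, 17, 1}
	for i := 0; i < 3; i++ {
		if nums[i] != threshold[i] {
			return nums[i] < threshold[i] // older than 1.17.1 needs sudo
		}
	}
	return true // exactly 1.17.1 still needs sudo
}

func main() {
	fmt.Println(needsSudo("1.17.2"), needsSudo("1.16.5"), needsSudo("1.18.0")) // prints: false true false
}
```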
[15:11] <hatch> sinzui will 1.17.x be released to the stable ppa or is it just a development version?
[15:12] <sinzui> hatch, odd numbers are always unstable and not suitable for production
[15:13] <sinzui> hatch, when we think the 17 changes are complete, we will call them 1.18.0
[15:13] <hatch> ahh ok gotcha
[15:13] <hatch> doing the node.js like version releases :)
[15:22] <frankban> sinzui: can we assume odd numbers are never published on the stable ppa or on main/universe?
[15:22] <sinzui> frankban, The answer should be yes and yes...
[15:22] <frankban> sinzui: cool thanks
[15:23] <sinzui> frankban, BUT
[15:25] <frankban> sinzui: I feel like there is a BUT
[15:27] <sinzui> frankban, I had no intention of letting odd numbers into the ubuntu devel (trusty), but jamespage wanted it in trusty to test gccgo
[15:28] <bac> rogpeppe: i'm getting a mongo error from juju 1.17.1 when trying to deploy a charm.  was wondering if you've seen it before.  https://pastebin.canonical.com/103823/plain/
[15:28] <bac> rogpeppe: on trusty with local provider
[15:28] <frankban> sinzui: so 1.17.1 is in trusty?
[15:28] <rogpeppe> bac: 1.17.1 is broken
[15:29] <rogpeppe> bac: i can't see your paste, but i know what it's gonna look like
[15:29] <bac> rogpeppe: good parts: E11000 duplicate key error index: juju.charms.$_id_ dup key:
[15:31] <frankban> sinzui: yes it is, thank you
[15:34] <mramm> frankban: yes it is in trusty
[15:35] <mramm> 1.17 is the last time we will push a dev release into trusty though
[15:36] <mramm> we need to get better at releasing stable versions on a regular schedule
[15:36] <mramm> that will eliminate the felt need to push devel releases into the release
[15:36] <rogpeppe> bac: ah, that's a different issue
[15:36] <mramm> but we needed to get that process running, due to the main inclusion process that we need to go through this cycle
[15:36] <rogpeppe> bac: dimitern knows about that, i believe
[15:37] <rogpeppe> bac: it's something to do with charm uploads - were you running deploy twice, or something?
[15:37] <rogpeppe> bac: (twice concurrently, i mean)
[15:38] <bac> rogpeppe: no.  tried one deploy from juju-gui and got the error
[15:38] <rogpeppe> bac: hmm.
[15:38] <dimitern> bac, I'm trying to land a fix for bug 1067979 since yesterday - hopefully not long now
[15:38] <_mup_> Bug #1067979: race: concurrent charm deployments corrupts deployments <deploy> <race-condition> <test-needed> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1067979>
[15:38] <rogpeppe> bac: can you reproduce?
[15:38] <bac> rogpeppe: yes, could yesterday via the gui.  was going to spin up an env by hand and see if i can reproduce
[15:39] <bac> rogpeppe: was just checking first to see if it was a known issue that could be waved off.  glad to help with reproducing if needed
[15:41] <rogpeppe> bac: once dimitern has landed his fix, perhaps you could try to see if you could reproduce the problem from trunk tip
[15:41] <bac> will do, rogpeppe
[16:39] <TheMue> rogpeppe1: take a look: http://paste.ubuntu.com/6844672/
[16:39] <TheMue> rogpeppe1: ;)
[16:53] <dimitern> bac, the fix for bug 1067979 landed, btw
[16:53] <_mup_> Bug #1067979: race: concurrent charm deployments corrupts deployments <deploy> <race-condition> <test-needed> <juju-core:Fix Committed by dimitern> <https://launchpad.net/bugs/1067979>
[16:57] <natefinch-afk> rogpeppe1: When you get a chance, I'd like to talk about the machine agent stuff again.  The API code has me mystified as to where the info is that I'm supposed to be watching
[16:58] <rogpeppe1> natefinch-afk: ok, perhaps soon?
[16:59] <natefinch> rogpeppe1: sure thing
[16:59] <natefinch> rogpeppe1: just wasn't sure how late you'd be there today
[17:02] <dimitern> natefinch, a quick review? https://codereview.appspot.com/54680045
[17:04] <dimitern> rogpeppe1, fwereade, https://codereview.appspot.com/54680045 ?
[17:05]  * dimitern bbiab
[17:07] <rogpeppe1> natefinch: i'm here for another hour, but currently in a discussion
[17:07] <arosales> mgz, fwereade using the conf call today for the joyent provider sync
[17:10] <natefinch> rogpeppe1: ok, I know that takes precedence
[17:10] <natefinch> dimitern: I can review
[17:13] <mgz> I'll need to read the wiki about conf call things
[17:15] <mgz> arosales: I actually don't have a pin, if I need one, and getting one seems to involve an rt
[17:15] <dimitern> natefinch, thanks
[17:15] <arosales> mgz sent you the callin details and updated the invite
[17:40] <mgz> dstroppa: so, first thing to try with the amulet tests is adding sentries=False when constructing amulet.Deployment()
[17:41] <mgz> then just remove the d.sentry.wait() call below, it's not strictly needed
[17:42] <mgz> it gives you fewer hooks to actually inspect your charms and hooks, but can be changed back later when you need them
[17:43] <dstroppa> mgz: tried that, this is what I'm getting now http://paste.ubuntu.com/6844982/
[17:52] <mgz> dstroppa: much better
[17:54] <mgz> dstroppa: get lp:juju-deployer and put the script on your path
[17:55] <dimitern> natefinch, review poke?
[17:56] <natefinch> dimitern: yeah, sorry, had to do one thing in the meantime, but working on it now
[17:56] <dimitern> natefinch, cheers
[18:11] <dstroppa> mgz: I'm getting syntax errors when installing juju-deployer
[18:12] <rogpeppe1> natefinch: hangout?
[18:12] <natefinch> rogpeppe1: yeah, standup one?
[18:13] <rogpeppe1> natefinch: ok
[18:13] <rogpeppe1> natefinch: https://plus.google.com/hangouts/_/calendar/am9obi5tZWluZWxAY2Fub25pY2FsLmNvbQ.mf0d8r5pfb44m16v9b2n5i29ig?authuser=1
[18:14] <rogpeppe1> TheMue: cool
[18:16] <natefinch> rogpeppe1: you're frozen in the hangout?
[18:29] <dpb1> Hi, is this a bug?  I'm trying to do juju terminate-machine, and I'm in this weird state.  I think that this instance launched in an 'ERROR' state, fwiw:  http://paste.ubuntu.com/6845201/
[18:37] <fwereade> dpb1, you should be able to --force it
[18:39] <dpb1> fwereade: nice.  I suppose it will not clean up the instance in the "ERROR" state, though?
[18:40] <dpb1> --force appears to have at least removed it from juju's brain, which is what I needed.
[18:43] <fwereade> dpb1, afraid not, it's just a way to get it out of juju's hair
[18:43] <dpb1> fwereade: k, thx
[18:44] <rogpeppe1> g'night all
[18:56] <natefinch> dimitern: reviewed, sorry for the delay
[19:08] <marcoceppi> got a question about api stuff?
[19:09] <marcoceppi> that was a statement
[19:09] <natefinch> marcoceppi: I'll answer if I can
[19:11] <marcoceppi> natefinch: nvm, sorted as maas sadness
[19:12] <natefinch> marcoceppi: heh ok
[20:07]  * thumper has a few errands in town to prep for the trip
[20:07] <thumper> will be back soonish
[20:28] <bac> dimitern: you still around?
[21:21]  * thumper wants to stab something
[21:21] <thumper> ffs
[21:22] <thumper> natefinch: you around?
[21:22] <natefinch> thumper: yep
[21:22] <natefinch> thumper: please don't stab me
[21:22] <thumper> natefinch: have you tried the ec2 provider recently
[21:22] <thumper> mine won't start
[21:22] <natefinch> thumper: nope, I can try it
[21:22] <thumper> ERROR cannot make S3 control bucket: A conflicting conditional operation is currently in progress against this resource. Please try again.
[21:22] <thumper> I'm trying to try out my all-machines fix for the local provider
[21:23] <thumper> and want to confirm it doesn't break others
[21:23] <thumper> hmm... seems to be working now
[21:23] <thumper> after failing twice
[21:24] <thumper> wonder if it was an amazon glitch
[21:24] <natefinch> thumper: yeah weird
[21:31] <thumper> natefinch: lbox still hates me (no go vet), can you review on LP? https://code.launchpad.net/~thumper/juju-core/all-machines-trusty/+merge/204106
[21:31]  * thumper also wants to get waigani to test it
[21:31] <thumper> as I know he is on saucy still and had a similar problem with no all-machines.log
[21:31] <thumper> I'm pretty sure this will fix it, but always good to get confirmation.
[21:33] <natefinch> thumper: at least it's a small change, so not so bad w/o side by side diffs
[21:33] <thumper> yeah
[21:34] <thumper> uh oh
[21:34]  * thumper has a bad feeling
[21:35] <thumper> poo
[21:35] <thumper> doesn't work on precise
[21:36]  * thumper hangs head
[21:37] <thumper> FARRRRKKK!!!!!
[21:38] <natefinch> thumper: my wife knows the guy that owns fark, actually.
[21:39]  * thumper resists the urge to have series specific rsyslog rendering
[21:44] <natefinch> thumper: still want a review on that code, or...
[21:44] <thumper> not just yet
[21:47] <thumper> stabby stabby stabby
[21:57] <thumper> natefinch: care to take a look now?
[21:57] <thumper> just confirming on ec2
[21:57] <thumper> works for local
[21:57]  * thumper looks for someone not on trusty
[21:59] <natefinch> thumper: looking
[22:03] <thumper> hoo fucking ray
[22:03] <thumper> seems to work on precise
[22:03] <natefinch> thumper: maybe other people more familiar with rsyslog and bash would understand it, but personally, I'd like a comment about why you have to set and reset FileCreateMode like you do.
[22:03] <thumper> natefinch: where do you want the comment?
[22:04] <thumper> I thought I had one?
[22:04] <thumper> ah, it is in the description of the merge
[22:04] <thumper> can put it in the code somewhere if you like
[22:05] <natefinch> Yeah, just the first time you do it.
[22:06] <thumper> I'll have it in the block comment for the template itself
[22:06] <thumper> I don't want it rendered in the actual config file
[22:08] <natefinch> right
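The set/reset pattern natefinch asked about would look roughly like this in the rendered rsyslog config — a sketch of the idea only, not the actual juju template (the file path and tag match are illustrative):

```
# Widen the create mode so the aggregated log file is readable,
# then restore the stricter default so other log files are unaffected.
$FileCreateMode 0644
:syslogtag, startswith, "juju-" /var/log/juju/all-machines.log
$FileCreateMode 0640
```

`$FileCreateMode` applies to every file rsyslog creates after the directive, which is why it has to be reset immediately after the action it is meant to affect.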
[22:09] <natefinch> ok, time for me to go.  Have fun in cape town
[22:18] <hazmat> thumper, so i'm using latest trunk, and juju get-env doesn't show proxy config as an option.. how does one go about setting it?
[22:19] <thumper> probably because the default is to Omit it if not set
[22:19] <thumper> juju set-env http-proxy=http://foo
[22:19] <thumper> ya know
[22:20] <thumper> I'm tempted to change it from Omit to default to ""
[22:20] <hazmat> got it.. revno 2224
[22:20] <thumper> what do you think?
[22:20] <thumper> because if you have one set, you can't unset it
[22:20] <thumper> as it won't allow ""
[22:20] <thumper> stupid juju
[22:20] <hazmat> hmm.
[22:20] <hazmat> thumper, a doc merge proposal would do :-)
[22:20] <hazmat> its pretty hard to discover otherwise
[22:20] <thumper> oh for sure
[22:21] <thumper> I want it feature complete before we bang on about it though :)
[22:21] <hazmat> the diff on revno 2224 shows all the options i needed to find.
[22:21] <hazmat> thumper, i'll have users testing it tonight :-)
[22:21] <thumper> cool
[22:21] <thumper> let me know if there are problems or surprises
[22:21] <thumper> I probably won't see the email until I'm in cape town
[22:21] <thumper> but I will see it
[22:21] <hazmat> thumper, will do.. careful what you wish for :-)
[22:22] <thumper> you going to be there?
[22:22] <hazmat> thumper, yup.. i should be in saturday late
[22:22]  * thumper nods
[22:22] <thumper> I get in 9am ish sunday morning
[22:22]  * thumper has been futzing around with rsyslog configs
[22:23] <hazmat> there's some other fire i'm parachuting in to work on.. which will take up most of the freetime during the sprint week, but this project with proxies will probably still keep kicking and screaming.
[22:28] <sinzui> thumper, lxc is looking better today, upgrade tests work. deploys always fail because of a wordpress agent error. Do you have any ideas about r2282 that would break the agent?
[22:29]  * thumper looks at that revno
[22:29] <sinzui> trusty on aws is broken too, but I think that is aws since canonistack's trusty passed
[22:30] <thumper> sinzui: I just used aws with trusty and trunk ok
[22:30] <thumper> I hit a glitch with s3 when I first tried
[22:30] <thumper> but that resolved itself
[22:30] <sinzui> thumper, lucky you. I have many tests and 4 personal runs that cannot bootstrap
[22:30] <thumper> sinzui: no idea
[22:30] <thumper> sinzui: try now
[22:30] <thumper> it was working for me
[22:31] <thumper> sinzui: we should get a real test that looks for the all-machines.log for the local provider
[22:31] <thumper> I have just fixed the bug and the merge should be working through tarmac
[22:31] <sinzui> thumper, I added recovery of that log today for lxc
[22:31] <thumper> lxc or local?
[22:32] <thumper> we have containers in other providers remember :)
[22:32] <sinzui> thumper, lxc. since we had no logs and lots of failures, I wrote a hack to find the log in local/log before the machine is torn down.
[22:32] <thumper> I have a horrible feeling that I am missing something important that I should be doing today to prepare for the trip tomorrow
[22:32] <sinzui> http://162.213.35.54:8080/job/local-deploy/780/ is the last failure
[22:33] <thumper> sinzui: oh, you'll love my last change then
[22:33] <sinzui> thumper, CI scp's the log from the bootstrapped machine, but it has never been able to do that with lxc
[22:33] <thumper> I keep the log file around until you re-bootstrap
[22:33] <thumper> will be in /var/log/juju-<user>-<envname>/all-machines.log
[22:34] <sinzui> thumper, thank you
[22:35] <sinzui> thumper, aws trusty still dies on apt-get upgrade
[22:35] <thumper> hmm
[22:35] <thumper> are you deploying to trusty?
[22:35] <thumper> I'm not doing that
[22:35] <sinzui> thumper, I am just bootstrapping on trusty and failing.
[22:36] <sinzui> it worked for CI yesterday
[22:36] <thumper> hmm...
[22:36] <thumper> sorry, but worked for me just an hour ago
[22:36] <sinzui> thumper, I think it is aws.
[22:37] <thumper> waigani_: can I get you to pull trunk, and try a local provider?
[22:37] <thumper> waigani_: see if you get all-machines.log now
[22:37] <thumper> I expect that you should
[22:37] <waigani_> okay
[22:37] <sinzui> the unittests failed when they removed the ami we were using, so I expect to tweak the tests to keep them current with trusty in aws
[22:56] <bigjools> morning wallyworld_ how's the head? :)
[22:56] <wallyworld_> fine
[22:56] <wallyworld_> jeez
[23:12] <sinzui> wallyworld_, I am seeing azure header issues.
[23:12] <wallyworld_> oh
[23:12] <sinzui> wallyworld_, I just started a manual test of stable to reassure myself
[23:13] <wallyworld_> you are seeing issues running trunk?
[23:13] <sinzui> wallyworld_, we are seeing this with recent revs: http://162.213.35.54:8080/job/azure-deploy/758/console
[23:17] <sinzui> looking good for the manual bootstrap
[23:17] <wallyworld_> sinzui: i can't think of anything that has changed in juju wrt that error
[23:17] <sinzui> I fear that azure has changed.
[23:18] <wallyworld_> oh joy
[23:18] <sinzui> We find the machines are often left running or stopped in azure after a test.
[23:18] <wallyworld_> in azure but not ec2 or other clouds?
[23:20] <sinzui> wallyworld_, azure and canonistack are leaving machines behind. but in the case of azure, I am seeing the header errors and a lot of 307 temporary redirect errors that state the delete failed
[23:22] <wallyworld_> hmmmm. off hand, i can't offer any useful insight
[23:22] <sinzui> wallyworld_, good news. 1.16.5 on azure is good. I will try to collate the trunk errors into something meaningful
[23:23] <wallyworld_> good or worked when run but might be flakey next time?
[23:25] <wallyworld_> trunk could well be to blame. there have been no azure specific changes to my knowledge but destroy for example has changed and maybe that is indirectly causing issues on azure